Model reduction methods for control design
NASA Technical Reports Server (NTRS)
Dunipace, K. R.
1988-01-01
Several different model reduction methods are developed and detailed implementation information is provided for those methods. Command files to implement the model reduction methods in a proprietary control law analysis and design package are presented. A comparison and discussion of the various reduction techniques is included.
Development and evaluation of thermal model reduction algorithms for spacecraft
NASA Astrophysics Data System (ADS)
Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus
2015-05-01
This paper is concerned with the reduction of thermal models of spacecraft. The work presented here was conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming, manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, used to calculate external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is a major concern and restricts the useful application of these methods. Additional model reduction methods have been developed which accommodate these constraints. The Matrix Reduction method approximates the differential equation at reference values exactly, except for numerical errors. The Summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.
A decentralized linear quadratic control design method for flexible structures
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Craig, Roy R., Jr.
1990-01-01
A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the substructural controller synthesis method is that it relieves the computational burden associated with dimensionality. 
In addition, the substructural controller synthesis (SCS) scheme is a highly adaptable controller synthesis method for structures with varying configurations or varying mass and stiffness properties.
Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Youngsoo; Carlberg, Kevin Thomas
Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Wang, Jianhui; Liu, Hui
In this paper, nonlinear model reduction for power systems is performed by balancing empirical controllability and observability covariances that are calculated around the operating region. Unlike existing model reduction methods, the external system does not need to be linearized but is dealt with directly as a nonlinear system. A transformation is found to balance the controllability and observability covariances in order to determine which states have the greatest contribution to the input-output behavior. The original system model is then reduced by Galerkin projection based on this transformation. The proposed method is tested and validated on a system comprised of a 16-machine 68-bus system and an IEEE 50-machine 145-bus system. The results show that the proposed model reduction greatly improves calculation efficiency; at the same time, the obtained state trajectories are close to those obtained by directly simulating the whole system or by partitioning the system without performing reduction. Compared with the balanced truncation method based on a linearized model, the proposed nonlinear model reduction method guarantees higher accuracy and similar calculation efficiency. It is shown that the proposed method is not sensitive to the choice of the matrices for calculating the empirical covariances.
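To make the notion of an empirical covariance concrete, here is a minimal sketch (a simplified single-perturbation-size variant of the standard construction, with the input assumed to enter each state directly; the two-state system is an invented toy, not the 68-bus model):

```python
import numpy as np

def empirical_controllability_cov(f, n, m, dt=0.01, T=5.0, eps=1e-2):
    """Empirical controllability covariance around the origin (sketch).

    Apply +/-eps impulsive input perturbations in each of the m channels
    (assumed here to kick the corresponding state directly), simulate the
    unforced nonlinear response with forward Euler, and average the outer
    products of the resulting trajectories.
    """
    steps = int(T / dt)
    Wc = np.zeros((n, n))
    for j in range(m):
        for sign in (+1.0, -1.0):
            x = np.zeros(n)
            x[j] = sign * eps          # impulse in channel j -> state kick
            for _ in range(steps):
                Wc += np.outer(x, x) * dt
                x = x + dt * f(x)
    return Wc / (2 * m * eps**2)

# toy nonlinear system: two damped states with a weak quadratic coupling
def f(x):
    return np.array([-1.0 * x[0] + 0.1 * x[1] ** 2,
                     -5.0 * x[1]])

Wc = empirical_controllability_cov(f, n=2, m=2)
```

The slowly decaying first state accumulates a larger covariance entry than the fast second state, which is exactly the information a balancing transformation exploits.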
Recent advances in reduction methods for nonlinear problems. [in structural mechanics
NASA Technical Reports Server (NTRS)
Noor, A. K.
1981-01-01
Status and some recent developments in the application of reduction methods to nonlinear structural mechanics problems are summarized. The aspects of reduction methods discussed herein include: (1) selection of basis vectors in nonlinear static and dynamic problems, (2) application of reduction methods in nonlinear static analysis of structures subjected to prescribed edge displacements, and (3) use of reduction methods in conjunction with mixed finite element models. Numerical examples are presented to demonstrate the effectiveness of reduction methods in nonlinear problems. Also, a number of research areas which have high potential for application of reduction methods are identified.
Reduction of chemical reaction models
NASA Technical Reports Server (NTRS)
Frenklach, Michael
1991-01-01
An attempt is made to reconcile the different terminologies pertaining to reduction of chemical reaction models. The approaches considered include global modeling, response modeling, detailed reduction, chemical lumping, and statistical lumping. The advantages and drawbacks of each of these methods are pointed out.
NASA Technical Reports Server (NTRS)
Lee, Allan Y.; Tsuha, Walter S.
1993-01-01
A two-stage model reduction methodology, combining the classical Component Mode Synthesis (CMS) method and the newly developed Enhanced Projection and Assembly (EP&A) method, is proposed in this research. The first stage of this methodology, called the COmponent Modes Projection and Assembly model REduction (COMPARE) method, involves the generation of CMS mode sets, such as the MacNeal-Rubin mode sets. These mode sets are then used to reduce the order of each component model in the Rayleigh-Ritz sense. The resultant component models are then combined to generate reduced-order system models at various system configurations. A composite mode set which retains important system modes at all system configurations is then selected from these reduced-order system models. In the second stage, the EP&A model reduction method is employed to reduce further the order of the system model generated in the first stage. The effectiveness of the COMPARE methodology has been successfully demonstrated on a high-order, finite-element model of the cruise-configured Galileo spacecraft.
Model and controller reduction of large-scale structures based on projection methods
NASA Astrophysics Data System (ADS)
Gildin, Eduardo
The design of low-order controllers for high-order plants is a challenging problem, theoretically as well as from a computational point of view. Robust controller design techniques frequently result in high-order controllers; it is therefore desirable to obtain reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures based on models obtained by finite element techniques have large state-space dimensions, and problems related to storage, accuracy, and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance for the practical applicability of advanced controller design methods to high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent on applying control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open for real-time implementation. This dissertation addresses the development of methodologies for controller reduction, using projection methods, for real-time implementation in seismic protection of civil structures. Three classes of schemes are analyzed for model and controller reduction: modal truncation, singular value decomposition methods, and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques.
It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation, and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that is, of the reduced-order controller implemented with the full-order plant. A controller reduction approach that guarantees closed-loop stability is therefore proposed, based on the concept of dissipativity (or positivity) of linear dynamical systems. Utilizing passivity-preserving model reduction together with dissipative-LQG controllers, effective low-order optimal controllers are obtained. Results are shown through simulations.
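For reference, the balanced-truncation baseline discussed above can be sketched with the square-root method; this is an illustrative toy (the third-order system is invented, and a production implementation would guard against near-singular Gramians):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system (sketch)."""
    # Gramians: A Wc + Wc A^T + B B^T = 0 and A^T Wo + Wo A + C^T C = 0
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc = np.linalg.cholesky(Wc)
    Lo = np.linalg.cholesky(Wo)
    U, hsv, Vt = np.linalg.svd(Lo.T @ Lc)   # hsv: Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                   # right projection
    Ti = S @ U[:, :r].T @ Lo.T              # left projection, Ti @ T = I
    return Ti @ A @ T, Ti @ B, C @ T, hsv

A = np.array([[-1.0, 0.2, 0.0],
              [0.0, -2.0, 0.3],
              [0.1, 0.0, -5.0]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 0.0, 1.0]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
```

Balanced truncation preserves open-loop stability of the reduced model; the dissertation's point is that this alone does not guarantee stability of the closed loop, which motivates the dissipativity-based approach.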
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2017-07-01
Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
Assessment of methods for methyl iodide emission reduction and pest control using a simulation model
USDA-ARS?s Scientific Manuscript database
Various methods have been developed to reduce atmospheric emissions from the agricultural use of highly volatile pesticides and to mitigate their adverse environmental effects. The effectiveness of various methods for emissions reduction and pest control was assessed using a simulation model in this study...
Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B
1998-01-01
Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
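The modal-decomposition idea compared above can be sketched as follows: diagonalize the system matrix and retain the slowest-decaying eigenmodes. This is a hedged illustration on an invented 1-D conduction chain, not the paper's bioheat model (which is far larger and includes advection):

```python
import numpy as np

def modal_truncation(A, B, C, r):
    """Keep the r slowest-decaying eigenmodes of x' = A x + B u (sketch)."""
    w, V = np.linalg.eig(A)
    idx = np.argsort(w.real)[::-1][:r]     # eigenvalues closest to zero
    Vr = V[:, idx]
    Wr = np.linalg.inv(V)[idx, :]          # matching left eigenvectors
    return Wr @ A @ Vr, Wr @ B, C @ Vr

# toy thermal model: 1-D conduction chain (symmetric tridiagonal, stable)
n = 20
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
B = np.eye(n, 1)                  # heat input at the first node
C = np.eye(n)[-1:, :]             # temperature sensed at the last node
Ar, Br, Cr = modal_truncation(A, B, C, r=3)
```

Note that this transformation depends only on A, which is why, as the abstract observes, the reduced model is robust to relocating sensors and actuators (B and C), unlike a balanced realization.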
Complexity reduction of biochemical rate expressions.
Schmidt, Henning; Madsen, Mads F; Danø, Sune; Cedersund, Gunnar
2008-03-15
The current trend in dynamical modelling of biochemical systems is to construct more and more mechanistically detailed and thus complex models. The complexity is reflected in the number of dynamic state variables and parameters, as well as in the complexity of the kinetic rate expressions. However, a greater level of complexity, or level of detail, does not necessarily imply better models, or a better understanding of the underlying processes. Data often does not contain enough information to discriminate between different model hypotheses, and such overparameterization makes it hard to establish the validity of the various parts of the model. Consequently, there is an increasing demand for model reduction methods. We present a new reduction method that reduces complex rational rate expressions, such as those often used to describe enzymatic reactions. The method is a novel term-based identifiability analysis, which is easy to use and allows for user-specified reductions of individual rate expressions in complete models. The method is one of the first methods to meet the classical engineering objective of improved parameter identifiability without losing the systems biology demand of preserved biochemical interpretation. The method has been implemented in the Systems Biology Toolbox 2 for MATLAB, which is freely available from http://www.sbtoolbox2.org. The Supplementary Material contains scripts that show how to use it by applying the method to the example models, discussed in this article.
An error bound for a discrete reduced order model of a linear multivariable system
NASA Technical Reports Server (NTRS)
Al-Saggaf, Ubaid M.; Franklin, Gene F.
1987-01-01
The design of feasible controllers for high dimension multivariable systems can be greatly aided by a method of model reduction. In order for the design based on the order reduction to include a guarantee of stability, it is sufficient to have a bound on the model error. Previous work has provided such a bound for continuous-time systems for algorithms based on balancing. In this note an L-infinity bound is derived for model error for a method of order reduction of discrete linear multivariable systems based on balancing.
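A hedged numerical illustration of such a bound (the twice-the-tail-of-the-Hankel-singular-values form for discrete-time balanced truncation; the third-order system below is an invented toy, not an example from the paper):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def balanced_truncation_dt(A, B, C, r):
    """Square-root balanced truncation for a stable discrete-time system."""
    Wc = solve_discrete_lyapunov(A, B @ B.T)      # A Wc A^T - Wc + B B^T = 0
    Wo = solve_discrete_lyapunov(A.T, C.T @ C)
    Lc = np.linalg.cholesky(Wc)
    Lo = np.linalg.cholesky(Wo)
    U, hsv, Vt = np.linalg.svd(Lo.T @ Lc)
    S = np.diag(hsv[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S
    Ti = S @ U[:, :r].T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, hsv

def G(A, B, C, z):
    """Scalar transfer function C (zI - A)^{-1} B evaluated at z."""
    return (C @ np.linalg.solve(z * np.eye(A.shape[0]) - A, B))[0, 0]

A = np.diag([0.9, 0.5, 0.1])
B = np.ones((3, 1))
C = np.ones((1, 3))
Ar, Br, Cr, hsv = balanced_truncation_dt(A, B, C, r=1)
bound = 2 * hsv[1:].sum()      # L-infinity bound: twice the discarded HSVs
err = max(abs(G(A, B, C, np.exp(1j * w)) - G(Ar, Br, Cr, np.exp(1j * w)))
          for w in np.linspace(0.0, np.pi, 100))
```

The sampled frequency-response error stays below the bound, which is what makes the bound usable as a stability guarantee in the subsequent control design.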
An interprovincial cooperative game model for air pollution control in China.
Xue, Jian; Zhao, Laijun; Fan, Longzhen; Qian, Ying
2015-07-01
The noncooperative air pollution reduction model (NCRM) currently adopted in China, under which each province manages its pollution reduction individually, has inherent drawbacks. In this paper, we propose a cooperative air pollution reduction game model (CRM) that consists of two parts: (1) an optimization model that calculates the optimal pollution reduction quantity for each participating province to meet the joint pollution reduction goal; and (2) a model that distributes the economic benefit of the cooperation (i.e., the pollution reduction cost saving) among the participating provinces based on the Shapley value method. We applied the CRM to the case of SO2 reduction in the Beijing-Tianjin-Hebei region in China. The results, based on data from 2003-2009, show that cooperation lowers the overall SO2 pollution reduction cost by 4.58% to 11.29%. Distributed across the participating provinces, such a cost saving from interprovincial cooperation brings significant benefits to each local government and encourages further cooperation in pollution reduction. Finally, sensitivity analysis is performed using the 2009 data to test the parameters' effects on the pollution reduction cost savings.
The results of the model can serve as a reference for the design of the Chinese government's pollution reduction policy.
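The Shapley value allocation in part (2) can be sketched directly from its definition. The coalition costs below are invented illustrative numbers (in arbitrary units), not the paper's data:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, cost):
    """Shapley value allocation of a coalition cost function (sketch).

    cost maps a frozenset of players to that coalition's total cost;
    each player's share is its marginal contribution averaged over all
    join orders.
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += w * (cost(S | {p}) - cost(S))
    return phi

# hypothetical SO2-reduction costs per coalition: cooperation lowers the
# joint cost below the sum of the stand-alone costs (4 + 3 + 6 = 13 > 11)
costs = {
    frozenset(): 0.0,
    frozenset({"BJ"}): 4.0, frozenset({"TJ"}): 3.0, frozenset({"HE"}): 6.0,
    frozenset({"BJ", "TJ"}): 6.5, frozenset({"BJ", "HE"}): 9.0,
    frozenset({"TJ", "HE"}): 8.0, frozenset({"BJ", "TJ", "HE"}): 11.0,
}
phi = shapley_values(["BJ", "TJ", "HE"], costs.__getitem__)
```

By construction the shares sum to the grand-coalition cost, and here each province pays less than it would alone, which is what gives every player an incentive to cooperate.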
Dimension reduction method for SPH equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Alexandre M.; Scheibe, Timothy D.
2011-08-26
A Smoothed Particle Hydrodynamics (SPH) model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g., average velocity, concentration, and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation. An SPH model is used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations, while providing an accurate approximation of the solution and an accurate prediction of the average behavior of the system.
Component model reduction via the projection and assembly method
NASA Technical Reports Server (NTRS)
Bernard, Douglas E.
1989-01-01
The problem of acquiring a simple but sufficiently accurate model of a dynamic system is made more difficult when the dynamic system of interest is a multibody system comprised of several components. A low order system model may be created by reducing the order of the component models and making use of various available multibody dynamics programs to assemble them into a system model. The difficulty is in choosing the reduced order component models to meet system level requirements. The projection and assembly method, proposed originally by Eke, solves this difficulty by forming the full order system model, performing model reduction at the system level using system level requirements, and then projecting the desired modes onto the components for component level model reduction. The projection and assembly method is analyzed to show the conditions under which the desired modes are captured exactly, to the numerical precision of the algorithm.
Model reduction in a subset of the original states
NASA Technical Reports Server (NTRS)
Yae, K. H.; Inman, D. J.
1992-01-01
A model reduction method is investigated to provide a smaller structural dynamic model for subsequent structural control design. A structural dynamic model is assumed to be derived from finite element analysis. It is first converted into the state space form, and is further reduced by the internal balancing method. Through the co-ordinate transformation derived from the states that are deleted during reduction, the reduced model is finally expressed with the states that are members of the original states. Therefore, the states in the final reduced model represent the degrees of freedom of the nodes that are selected by the designer. The procedure provides a more practical implementation of model reduction for applications in which specific nodes, such as sensor and/or actuator attachment points, are to be retained in the reduced model. Thus, it ensures that the reduced model is under the same input and output condition as the original physical model. The procedure is applied to two simple examples and comparisons are made between the full and reduced order models. The method can be applied to a linear, continuous and time-invariant model of structural dynamics with nonproportional viscous damping.
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis
2016-11-01
Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration, and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
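For context, the single-level DEIM index selection that underlies such approximations can be sketched as follows (standard greedy algorithm of Chaturantabut & Sorensen; the Gaussian snapshot family is an invented toy, and the paper's nested multi-basis extension is not shown):

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation-point selection (sketch).

    U: (n, m) POD basis for snapshots of the nonlinear term. Returns m
    row indices p such that f ~ U @ solve(U[p, :], f[p]), i.e. the full
    nonlinear term is recovered from only m sampled entries.
    """
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # residual of the j-th basis vector after interpolatory projection
        c = np.linalg.solve(U[p, :j], U[p, j])
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# toy nonlinear-term snapshots: translated Gaussian bumps on a 1-D grid
x = np.linspace(0.0, 1.0, 300)
F = np.column_stack([np.exp(-((x - mu) ** 2) / 0.02)
                     for mu in np.linspace(0.2, 0.8, 25)])
U, s, _ = np.linalg.svd(F, full_matrices=False)
Um = U[:, :10]
p = deim_indices(Um)
# reconstruct one snapshot from only 10 sampled entries
f = F[:, 12]
f_deim = Um @ np.linalg.solve(Um[p, :], f[p])
```

The payoff is that the nonlinear term need only be evaluated at the selected indices, which is what makes the reduced model cheap to integrate.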
NASA Astrophysics Data System (ADS)
Cunha-Filho, A. G.; Briend, Y. P. J.; de Lima, A. M. G.; Donadon, M. V.
2018-05-01
The flutter boundary prediction of complex aeroelastic systems is not an easy task. In some cases, these analyses may become prohibitive due to the high computational cost and time associated with the large number of degrees of freedom of the aeroelastic models, particularly when the aeroelastic model incorporates a control strategy with the aim of suppressing the flutter phenomenon, such as the use of viscoelastic treatments. In this situation, the use of a model reduction method is essential. However, the construction of a modal reduction basis for aeroviscoelastic systems is still a challenge, owing to the inherent frequency- and temperature-dependent behavior of the viscoelastic materials. Thus, the main contribution intended for the present study is to propose an efficient and accurate iterative enriched Ritz basis to deal with aeroviscoelastic systems. The main features and capabilities of the proposed model reduction method are illustrated in the prediction of flutter boundary for a thin three-layer sandwich flat panel and a typical aeronautical stiffened panel, both under supersonic flow.
Reduced-order modeling for hyperthermia: an extended balanced-realization-based approach.
Mattingly, M; Bailey, E A; Dutton, A W; Roemer, R B; Devasia, S
1998-09-01
Accurate thermal models are needed in hyperthermia cancer treatments for such tasks as actuator and sensor placement design, parameter estimation, and feedback temperature control. The complexity of the human body produces full-order models which are too large for effective execution of these tasks, making use of reduced-order models necessary. However, standard balanced-realization (SBR)-based model reduction techniques require a priori knowledge of the particular placement of actuators and sensors for model reduction. Since placement design is intractable (computationally) on the full-order models, SBR techniques must use ad hoc placements. To alleviate this problem, an extended balanced-realization (EBR)-based model-order reduction approach is presented. The new technique allows model order reduction to be performed over all possible placement designs and does not require ad hoc placement designs. It is shown that models obtained using the EBR method are more robust to intratreatment changes in the placement of the applied power field than those models obtained using the SBR method.
Exact model reduction of combinatorial reaction networks
Conzelmann, Holger; Fey, Dirk; Gilles, Ernst D
2008-01-01
Background Receptors and scaffold proteins usually possess a high number of distinct binding domains inducing the formation of large multiprotein signaling complexes. Due to combinatorial reasons the number of distinguishable species grows exponentially with the number of binding domains and can easily reach several millions. Even by including only a limited number of components and binding domains the resulting models are very large and hardly manageable. A novel model reduction technique allows the significant reduction and modularization of these models. Results We introduce methods that extend and complete the already introduced approach. For instance, we provide techniques to handle the formation of multi-scaffold complexes as well as receptor dimerization. Furthermore, we discuss a new modeling approach that allows the direct generation of exactly reduced model structures. The developed methods are used to reduce a model of EGF and insulin receptor crosstalk comprising 5,182 ordinary differential equations (ODEs) to a model with 87 ODEs. Conclusion The methods, presented in this contribution, significantly enhance the available methods to exactly reduce models of combinatorial reaction networks. PMID:18755034
A LATIN-based model reduction approach for the simulation of cycling damage
NASA Astrophysics Data System (ADS)
Bhattacharyya, Mainak; Fau, Amelie; Nackenhorst, Udo; Néron, David; Ladevèze, Pierre
2017-11-01
The objective of this article is to introduce a new method, including model order reduction, for the life prediction of structures subjected to cycling damage. In contrast to classical incremental schemes for damage computation, a non-incremental technique, the LATIN method, is used herein as the solution framework. This approach allows the introduction of a PGD model reduction technique, which leads to a drastic reduction of the computational cost. The proposed framework is exemplified for structures subjected to cyclic loading, where damage is considered to be isotropic and micro-defect closure effects are taken into account. A difficulty for the use of the LATIN method comes from the state laws, which cannot be transformed into linear relations through an internal variable transformation. A specific treatment of this issue is introduced in this work.
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Poissant, Dominique; Brissette, François
2015-11-01
This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was minimal and did not carry over to the regionalization results. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined, and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.
Nakagami-based total variation method for speckle reduction in thyroid ultrasound images.
Koundal, Deepika; Gupta, Savita; Singh, Sukhwinder
2016-02-01
A good statistical model is necessary for the reduction of speckle noise. The Nakagami model is more general than the Rayleigh distribution for statistical modeling of speckle in ultrasound images. In this article, a Nakagami-based noise removal method is presented to enhance thyroid ultrasound images and improve clinical diagnosis. The statistics of the log-compressed image are derived from the Nakagami distribution within a maximum a posteriori estimation framework. The minimization problem is solved using an augmented Lagrangian approach and Chambolle's projection method. The proposed method is evaluated on both artificial speckle-simulated and real ultrasound images. The experimental findings reveal the superiority of the proposed method both quantitatively and qualitatively in comparison with other speckle reduction methods reported in the literature. The proposed method yields an average signal-to-noise ratio gain of more than 2.16 dB over the non-convex regularizer-based speckle noise removal method, 3.83 dB over the Aubert-Aujol model, 1.71 dB over the Shi-Osher model and 3.21 dB over the Rudin-Lions-Osher model on speckle-simulated synthetic images. Furthermore, visual evaluation of the despeckled images shows that the proposed method suppresses speckle noise well while preserving textures and fine details. © IMechE 2015.
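A minimal total-variation denoising loop conveys the flavor of the variational step. Note the paper's method uses a Nakagami-based data term solved with an augmented Lagrangian and Chambolle's projection; the sketch below substitutes a plain quadratic fidelity with smoothed-TV gradient descent on a synthetic speckle-like image, and all parameters are arbitrary choices.

```python
import numpy as np

def tv_denoise(f, lam=0.15, tau=0.2, n_iter=200, eps=1e-3):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum(sqrt(|grad u|^2 + eps))."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - tau * ((u - f) - lam * div)      # descend the smoothed energy
    return u

rng = np.random.default_rng(1)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                          # piecewise-constant phantom
noisy = clean * rng.gamma(4.0, 0.25, clean.shape)  # multiplicative, speckle-like noise
denoised = tv_denoise(noisy)
```

TV regularization is what lets edges (here, the block boundary) survive while the speckle inside flat regions is smoothed away.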
Influence of model order reduction methods on dynamical-optical simulations
NASA Astrophysics Data System (ADS)
Störkle, Johannes; Eberhard, Peter
2017-04-01
In this work, the influence of model order reduction (MOR) methods on optical aberrations is analyzed within a dynamical-optical simulation of a high-precision optomechanical system. To this end, an integrated modeling process and new methods are introduced for the computation and investigation of the overall dynamical-optical behavior. Such an optical system can be, for instance, a telescope optic or a lithographic objective. To derive a simplified mechanical model for transient time simulations with low computational cost, the method of elastic multibody systems can be used in combination with MOR methods. For this, software tools and interfaces are defined and created. Furthermore, mechanical and optical simulation models are derived and implemented. With these, on the one hand, the mechanical sensitivity can be investigated for arbitrary external excitations, and on the other hand, the related optical behavior can be predicted. To illustrate these methods, academic examples are chosen and the influences of the MOR methods and simulation strategies are analyzed. Finally, the systems are investigated with respect to their mechanical-optical frequency responses, and in conclusion, some recommendations for the application of reduction methods are given.
Solid-State Kinetic Investigations of Nonisothermal Reduction of Iron Species Supported on SBA-15
2017-01-01
Iron oxide catalysts supported on nanostructured silica SBA-15 were synthesized with various iron loadings using two different precursors. Structural characterization of the as-prepared FexOy/SBA-15 samples was performed by nitrogen physisorption, X-ray diffraction, DR-UV-Vis spectroscopy, and Mössbauer spectroscopy. The size of the resulting iron species increased with increasing iron loading. Significantly smaller iron species were obtained from (Fe(III), NH4)-citrate precursors than from Fe(III)-nitrate precursors. Moreover, smaller iron species resulted in a smoother surface of the support material. Temperature-programmed reduction (TPR) of the FexOy/SBA-15 samples with H2 revealed better reducibility of the samples originating from Fe(III)-nitrate precursors. Varying the iron loading led to a change in reduction mechanism. TPR traces were analyzed by the model-independent Kissinger and Ozawa-Flynn-Wall (OFW) methods and the model-dependent Coats-Redfern method. JMAK kinetic analysis indicated a one-dimensional reduction process for the FexOy/SBA-15 samples. The Kissinger method yielded the lowest apparent activation energy for the lowest loaded citrate sample (Ea ≈ 39 kJ/mol). Conversely, the lowest loaded nitrate sample possessed the highest apparent activation energy (Ea ≈ 88 kJ/mol). For samples obtained from Fe(III)-nitrate precursors, Ea decreased with increasing iron loading. Apparent activation energies from model-independent analysis methods agreed well with those from model-dependent methods. Nucleation as the rate-determining step in the reduction of the iron oxide species was consistent with the Mampel solid-state reaction model. PMID:29230346
Time Hierarchies and Model Reduction in Canonical Non-linear Models
Löwe, Hannes; Kremling, Andreas; Marin-Sanguino, Alberto
2016-01-01
The time-scale hierarchies of a very general class of models in differential equations are analyzed. Classical methods for model reduction and time-scale analysis have been adapted to this formalism and a complementary method is proposed. A unified theoretical treatment shows how the structure of the system can be much better understood by inspection of two sets of singular values: one related to the stoichiometric structure of the system and another to its kinetics. The methods are exemplified first through a toy model, then a large synthetic network, and finally with numeric simulations of three classical benchmark models of real biological systems. PMID:27708665
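The stoichiometry-related singular values mentioned above can be illustrated on a toy network: left singular vectors associated with zero singular values of the stoichiometric matrix span the conservation relations. The network below is an arbitrary example for illustration, not one of the paper's benchmarks.

```python
import numpy as np

# Toy reaction network: 4 species, 5 reactions; columns are reaction stoichiometries
N = np.array([[-1,  0,  0,  1,  0],
              [ 1, -1,  0,  0,  0],
              [ 0,  1, -1,  0, -1],
              [ 0,  0,  1, -1,  1]], dtype=float)

U, s, Vt = np.linalg.svd(N)
rank = int(np.sum(s > 1e-10))
# Every column of N sums to zero, so the left null space contains (1,1,1,1):
# total mass is conserved, the rank deficiency is 1, and one dynamical
# variable can be eliminated before any kinetic analysis.
```

Inspecting the kinetic singular values (of the Jacobian of the rate vector) then separates fast and slow directions within the remaining rank-3 dynamics.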
Model Order Reduction of Aeroservoelastic Model of Flexible Aircraft
NASA Technical Reports Server (NTRS)
Wang, Yi; Song, Hongjun; Pant, Kapil; Brenner, Martin J.; Suh, Peter
2016-01-01
This paper presents a holistic model order reduction (MOR) methodology and framework that integrates key technological elements of sequential model reduction, consistent model representation, and model interpolation for constructing high-quality linear parameter-varying (LPV) aeroservoelastic (ASE) reduced order models (ROMs) of flexible aircraft. The sequential MOR encapsulates a suite of reduction techniques, such as truncation and residualization, modal reduction, and balanced realization and truncation, to achieve optimal ROMs at grid points across the flight envelope. Consistency in state representation among local ROMs is obtained by the novel method of common subspace reprojection. Model interpolation is then exploited to stitch the ROMs at grid points together into a global LPV ASE ROM applicable at arbitrary flight conditions. The MOR method is applied to the X-56A MUTT vehicle with flexible wing being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies demonstrated that, relative to the full-order model, the X-56A ROM can accurately and reliably capture vehicle dynamics at various flight conditions in the target frequency regime while reducing the number of states by 10X (from 180 to 19), and hence holds great promise for robust ASE controller synthesis and novel vehicle design.
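Balanced realization and truncation, one technique in the suite described above, can be sketched for a generic stable LTI system via the square-root algorithm (Gramians, a balancing transformation, then truncation). The random 10-state test system below is purely illustrative and unrelated to the X-56A model.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system (A, B, C)."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc = cholesky(Wc, lower=True)
    U, s2, _ = svd(Lc.T @ Wo @ Lc)                 # s2 = squared Hankel singular values
    hsv = np.sqrt(s2)
    T = Lc @ U / np.sqrt(hsv)                      # balancing transformation
    Tinv = np.linalg.inv(T)
    Ar = (Tinv @ A @ T)[:r, :r]
    Br = (Tinv @ B)[:r]
    Cr = (C @ T)[:, :r]
    return Ar, Br, Cr, hsv

rng = np.random.default_rng(2)
n = 10
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # force stability
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=4)
# The H-infinity error is bounded by twice the sum of the discarded Hankel
# singular values, so the DC gains agree to within that bound:
g_full = (-C @ np.linalg.solve(A, B)).item()
g_red = (-Cr @ np.linalg.solve(Ar, Br)).item()
```

In an LPV workflow like the paper's, such a local ROM would be built at each flight-envelope grid point and then reprojected onto a common subspace before interpolation.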
Reduced modeling of signal transduction – a modular approach
Koschorreck, Markus; Conzelmann, Holger; Ebert, Sybille; Ederer, Michael; Gilles, Ernst Dieter
2007-01-01
Background Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and much progress has been made within the last few years. A software tool (BioNetGen) was developed that allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results We introduce a new reduction technique, which allows building modularized and highly reduced models. Compared to existing approaches, further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows the model to be dissected into smaller modules, called layers, that can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show the performance and limitations of the method. For physiologically relevant parameter domains, the transient as well as the stationary errors caused by the reduction are negligible. Conclusion The new layer-based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable.
Additionally, the method provides very good approximations especially for macroscopic variables. It can be combined with existing reduction methods without any difficulties. PMID:17854494
High precision NC lathe feeding system rigid-flexible coupling model reduction technology
NASA Astrophysics Data System (ADS)
Xuan, He; Hua, Qingsong; Cheng, Lianjun; Zhang, Hongxin; Zhao, Qinghai; Mao, Xinkai
2017-08-01
This paper proposes a dynamic substructure order-reduction method to achieve effective reduction of the rigid-flexible coupling model of a high-precision NC lathe feeding system. ADAMS is used to establish the rigid-flexible coupling simulation model of the high-precision NC lathe, and vibration simulation with the FD 3D damper shows that the approach is effective for reducing the multi-degree-of-freedom model of the bolted connections in the feed system. The resulting vibration simulation is both more accurate and faster to compute.
Buschbaum, Jan; Fremd, Rainer; Pohlemann, Tim; Kristen, Alexander
2017-08-01
Reduction is a crucial step in the surgical treatment of bone fractures. Finding an optimal path for restoring anatomical alignment is considered technically demanding because collisions as well as high forces caused by surrounding soft tissues can prevent desired reduction movements. The repetition of reduction movements leads to a trial-and-error process which prolongs the duration of surgery. By planning an appropriate reduction path, that is, an optimal sequence of target-directed movements, these problems should be overcome. For this purpose, a computer-based method has been developed. Using the example of simple femoral shaft fractures, 3D models are generated from CT images. A reposition algorithm aligns both fragments by reconstructing their broken edges. According to the criteria of a deduced planning strategy, a modified A*-algorithm searches for a collision-free route of minimal force from the dislocated into the computed target position. Muscular forces are considered using a musculoskeletal reduction model (OpenSim model), and bone collisions are detected by an appropriate method. Five femoral SYNBONE models were broken into different fracture classification types and were automatically reduced from ten randomly selected displaced positions. The highest mean translational and rotational errors for achieving target alignment are [Formula: see text] and [Formula: see text]. Mean value and standard deviation of occurring forces are [Formula: see text] for M. tensor fasciae latae and [Formula: see text] for M. semitendinosus over all trials. These pathways are precise and collision-free, the required forces are minimized, and they are thus regarded as optimal paths. A novel method for planning reduction paths under consideration of collisions and muscular forces is introduced. The results deliver additional knowledge for an appropriate tactical reduction procedure and can provide a basis for further navigated or robotic-assisted developments.
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivates this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)-1, cardiac output = 3, 5, 8 L min-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5% and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. 
This suggests that there is no particular advantage between quantitative estimation methods nor to performing dose reduction via tube current reduction compared to temporal sampling reduction. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
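As a rough sketch of quantitative kinetic-model fitting of this kind, a one-tissue compartment model can be fitted to a simulated time attenuation curve. This is not the paper's two-compartment, axially-distributed, or adiabatic tissue-homogeneity implementation; the arterial input function, rate constants, and sampling below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0.0, 60.0, 1.0)              # 1 s temporal sampling over 60 s
aif = (t / 5.0) * np.exp(1.0 - t / 5.0)    # hypothetical gamma-variate arterial input

def tissue_curve(t, K1, k2):
    """One-tissue compartment model: C_t = K1 * (AIF convolved with exp(-k2*t))."""
    irf = np.exp(-k2 * t)
    dt = t[1] - t[0]
    return K1 * np.convolve(aif, irf)[:len(t)] * dt

true_K1, true_k2 = 0.9, 0.15               # hypothetical uptake and washout rates
ct = tissue_curve(t, true_K1, true_k2)     # noiseless simulated tissue curve
(est_K1, est_k2), _ = curve_fit(tissue_curve, t, ct, p0=[0.5, 0.05])
```

In a dose-reduction study, one would refit after subsampling `t` (coarser temporal sampling) or after adding noise scaled to lower tube currents, and compare bias and variance of the recovered K1 across methods.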
A Geometric Method for Model Reduction of Biochemical Networks with Polynomial Rate Functions.
Samal, Satya Swarup; Grigoriev, Dima; Fröhlich, Holger; Weber, Andreas; Radulescu, Ovidiu
2015-12-01
Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low-dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors.
Effects of rotor model degradation on the accuracy of rotorcraft real time simulation
NASA Technical Reports Server (NTRS)
Houck, J. A.; Bowles, R. L.
1976-01-01
The effects of degrading a rotating blade element rotor mathematical model to meet various real-time simulation requirements of rotorcraft are studied. Three methods of degradation were studied: reduction of the number of blades, reduction of the number of blade segments, and increasing the integration interval, which has the corresponding effect of increasing the blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model; in general, they are, in order of minimum impact on model validity: (1) reduction of the number of blade segments, (2) reduction of the number of blades, and (3) increase of the integration interval and azimuthal advance angle. Extreme limits are specified beyond which the rotating blade element rotor mathematical model should not be used.
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.
2017-02-01
A novel order reduction method for linear time invariant systems is described. The method is based on reducing the initial problem to an optimization one, using the proposed model representation, and solving the problem with an efficient optimization algorithm. The proposed method of determining the model allows all the parameters of the model with lower order to be identified and by definition, provides the model with the required steady-state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results proved that the proposed approach outperforms other approaches and that the reduced order model achieves a high level of accuracy.
NASA Technical Reports Server (NTRS)
White, Allan L.; Palumbo, Daniel L.
1991-01-01
Semi-Markov processes have proved to be an effective and convenient tool for constructing models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a modeling and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. This method, trimming, is easy to implement, and the error bound is easy to compute. Hence, the method lends itself to inclusion in an automatic model generator.
Computational aspects of real-time simulation of rotary-wing aircraft. M.S. Thesis
NASA Technical Reports Server (NTRS)
Houck, J. A.
1976-01-01
A study was conducted to determine the effects of degrading a rotating blade element rotor mathematical model suitable for real-time simulation of rotorcraft. Three methods of degradation were studied: reduction of the number of blades, reduction of the number of blade segments, and increasing the integration interval, which has the corresponding effect of increasing the blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model; in general, they are, in order of minimum impact on model validity: (1) reduction of the number of blade segments; (2) reduction of the number of blades; and (3) increase of the integration interval and azimuthal advance angle. Extreme limits are specified beyond which a different rotor mathematical model should be used.
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential of mitigating the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with added noise and inverted them under two different situations: (1) the noisy data and the covariance matrix for the PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values and that Method 1 is computationally the most efficient. 
In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, while Methods 1 and 2 provide incorrect values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for the PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
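The idea behind Method 2 (PCA parameterization explored with MCMC) can be sketched as follows: a spatially correlated field is represented by a few principal components of a prior covariance, and a random-walk Metropolis sampler explores the reduced coefficient space. All dimensions, kernels, noise levels, and step sizes below are illustrative assumptions, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = np.arange(n)
# Prior covariance with spatial correlation; leading eigenvectors give the PCA basis
Cprior = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)
evals, evecs = np.linalg.eigh(Cprior)
basis = evecs[:, ::-1][:, :5] * np.sqrt(evals[::-1][:5])   # 5 PCs replace 50 unknowns

G = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0)**2)    # convolution-type forward model
m_true = basis @ np.array([1.0, -0.5, 0.8, 0.0, 0.3])
sigma = 0.5
d_obs = G @ m_true + sigma * rng.standard_normal(n)

def log_post(a):
    r = d_obs - G @ (basis @ a)
    return -0.5 * np.sum(r**2) / sigma**2 - 0.5 * np.sum(a**2)  # N(0,1) prior on PCs

# Random-walk Metropolis over the 5 PCA coefficients
a = np.zeros(5)
lp = log_post(a)
chain = []
for _ in range(5000):
    prop = a + 0.05 * rng.standard_normal(5)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        a, lp = prop, lp_prop
    chain.append(a.copy())
post_mean = np.mean(chain[1000:], axis=0)
m_est = basis @ post_mean
```

The bias discussed in the abstract corresponds to building `basis` from a covariance that does not match the field that generated `d_obs`; the sampler then converges confidently to the wrong field.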
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
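The Bayesian-updating ingredient can be sketched with a grid-based posterior for a single model parameter, updated sequentially over observation segments. The linear "damage rate" model and all numbers below are hypothetical stand-ins for the paper's finite element and material damage models.

```python
import numpy as np

# Hypothetical performance model: predicted damage rate is linear in load
def model(theta, load):
    return theta * load

rng = np.random.default_rng(4)
theta_true, sigma = 2.0, 0.1
loads = np.linspace(0.5, 1.5, 8)           # discretized observation "segments"
obs = model(theta_true, loads) + sigma * rng.standard_normal(loads.size)

theta_grid = np.linspace(0.0, 4.0, 2001)
dtheta = theta_grid[1] - theta_grid[0]
post = np.exp(-0.5 * (theta_grid - 1.0)**2)        # Gaussian prior centered at 1.0
for L, y in zip(loads, obs):                       # sequential Bayesian updating
    post *= np.exp(-0.5 * ((y - model(theta_grid, L)) / sigma)**2)
    post /= post.sum() * dtheta                    # renormalize after each segment
post_mean = (theta_grid * post).sum() * dtheta
```

The paper's adaptive scheme would additionally validate the model on each segment first and skip updating on segments where the prediction is unreliable; the loop above updates on every segment for simplicity.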
Veiling glare reduction methods compared for ophthalmic applications
NASA Technical Reports Server (NTRS)
Buchele, D. R.
1981-01-01
Veiling glare in ocular viewing was simulated by viewing the retina of an eye model through a sheet of light-scattering material lit from the front. Four methods of glare reduction were compared, namely, optical scanning, polarized light, viewing and illumination paths either coaxial or intersecting at the object, and closed circuit TV. Photographs show the effect of these methods on visibility. Polarized light was required to eliminate light specularly reflected from the instrument optics. The greatest glare reduction was obtained when the first three methods were utilized together. Glare reduction using TV was limited by nonuniform distribution of scattered light over the image.
Establishment and correction of an Echelle cross-prism spectrogram reduction model
NASA Astrophysics Data System (ADS)
Zhang, Rui; Bayanheshig; Li, Xiaotian; Cui, Jicheng
2017-11-01
The accuracy of an echelle cross-prism spectrometer depends on the degree to which the spectrum reduction model matches the actual state of the spectrometer. However, adjustment errors can change the actual state of the spectrometer and leave the reduction model mismatched, producing an inaccurate wavelength calibration. Therefore, calibration of the spectrogram reduction model is important for the analysis of any echelle cross-prism spectrometer. In this study, the spectrogram reduction model of an echelle cross-prism spectrometer was established. The image position laws of the spectrometer were simulated as functions of the system parameters to assess the influence of changes in prism refractive index, focal length, and other parameters on the calculation results. The model was divided into different wavebands. An iterative method, the least-squares principle, and element lamps with known characteristic wavelengths were used to calibrate the spectral model in each waveband and obtain the actual values of the system parameters. After correction, the deviations between the actual x- and y-coordinates and the coordinates calculated by the model are less than one pixel. The model corrected by this method thus reflects the system parameters in the current spectrometer state and can assist in accurate wavelength extraction. Repeated model correction can also guide instrument installation and adjustment, reducing the difficulty of alignment.
Methods of Sparse Modeling and Dimensionality Reduction to Deal with Big Data
2015-04-01
Our framework consists of two separate phases: (a) first find an initial space in an unsupervised manner; then (b) utilize label information in a supervised phase. Contributions include (1) a model that can learn thousands of topics from a large set of documents and infer the topic mixture of each document, and (2) a method of supervised dimension reduction.
NASA Astrophysics Data System (ADS)
Dandaroy, Indranil; Vondracek, Joseph; Hund, Ron; Hartley, Dayton
2005-09-01
The objective of this study was to develop a vibro-acoustic computational model of the Raytheon King Air 350 turboprop aircraft with an intent to reduce propfan noise in the cabin. To develop the baseline analysis, an acoustic cavity model of the aircraft interior and a structural dynamics model of the aircraft fuselage were created. The acoustic model was an indirect boundary element method representation using SYSNOISE, while the structural model was a finite-element method normal modes representation in NASTRAN and subsequently imported to SYSNOISE. In the acoustic model, the fan excitation sources were represented employing the Ffowcs Williams-Hawkings equation. The acoustic and the structural models were fully coupled in SYSNOISE and solved to yield the baseline response of acoustic pressure in the aircraft interior and vibration on the aircraft structure due to fan noise. Various vibration absorbers, tuned to fundamental blade passage tone (100 Hz) and its first harmonic (200 Hz), were applied to the structural model to study their effect on cabin noise reduction. Parametric studies were performed to optimize the number and location of these passive devices. Effects of synchrophasing and absorptive noise treatments applied to the aircraft interior were also investigated for noise reduction.
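The effect of a tuned vibration absorber on a primary structure's frequency response can be sketched with a two-degree-of-freedom model: at the tuning frequency (here 100 Hz, the blade passage tone), the absorber produces a sharp notch in the primary response. The masses, stiffnesses, and damping values below are hypothetical and not those of the King Air 350 model.

```python
import numpy as np

f = np.linspace(50.0, 150.0, 2001)                 # frequency axis, Hz
w = 2 * np.pi * f
m1, c1 = 100.0, 200.0                              # primary mass and damping (hypothetical)
k1 = (2 * np.pi * 80.0)**2 * m1                    # primary resonance at 80 Hz
m2, c2 = 5.0, 20.0                                 # absorber mass and damping
k2 = (2 * np.pi * 100.0)**2 * m2                   # absorber tuned to the 100 Hz tone

def primary_response(w, with_absorber):
    """|X1/F| of the primary mass, with or without the tuned absorber attached."""
    H = np.empty_like(w)
    for i, wi in enumerate(w):
        if with_absorber:
            Z = np.array([[k1 + k2 - m1 * wi**2 + 1j * wi * (c1 + c2), -k2 - 1j * wi * c2],
                          [-k2 - 1j * wi * c2, k2 - m2 * wi**2 + 1j * wi * c2]])
            H[i] = abs(np.linalg.solve(Z, np.array([1.0, 0.0]))[0])
        else:
            H[i] = abs(1.0 / (k1 - m1 * wi**2 + 1j * wi * c1))
    return H

H_with = primary_response(w, True)
H_without = primary_response(w, False)
i100 = np.argmin(np.abs(f - 100.0))
# H_with[i100] is far smaller than H_without[i100]: the absorber notches out
# the tuned tone at the cost of two new lightly damped sidebands.
```

A parametric study like the paper's amounts to sweeping absorber placement, mass, and tuning (100 Hz and 200 Hz) on the full structural model rather than on this lumped sketch.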
Model reductions using a projection formulation
NASA Technical Reports Server (NTRS)
De Villemagne, Christian; Skelton, Robert E.
1987-01-01
A new methodology for model reduction of MIMO systems exploits the notion of an oblique projection. A reduced model is uniquely defined by a projector whose range space and whose null-space orthogonal complement are chosen among the ranges of generalized controllability and observability matrices. The reduced order models match various combinations (chosen by the designer) of four types of parameters of the full order system associated with (1) low frequency response, (2) high frequency response, (3) low frequency power spectral density, and (4) high frequency power spectral density. Thus, the proposed method is a computationally simple substitute for many existing methods, has extreme flexibility to embrace combinations of existing methods, and offers some new features.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.
2014-10-01
Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties, such as energy conservation and symplectic time-evolution maps, are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity, defined as the number of Newton-like iterations performed over the course of the simulation, by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. 
This enables ROMs to be rigorously incorporated in uncertainty-quantification settings, as the error model can be treated as a source of epistemic uncertainty. This work was completed as part of a Truman Fellowship appointment. We note that much additional work was performed as part of the Fellowship. One salient project is the development of the Trilinos-based model-reduction software module Razor, which is currently bundled with the Albany PDE code and allows nonlinear reduced-order models to be constructed for any application supported in Albany. Other important projects include the following: 1. ROMES-equipped ROMs for Bayesian inference: K. Carlberg, M. Drohmann, F. Lu (Lawrence Berkeley National Laboratory), M. Morzfeld (Lawrence Berkeley National Laboratory). 2. ROM-enabled Krylov-subspace recycling: K. Carlberg, V. Forstall (University of Maryland), P. Tsuji, R. Tuminaro. 3. A pseudo balanced POD method using only dual snapshots: K. Carlberg, M. Sarovar. 4. An analysis of discrete v. continuous optimality in nonlinear model reduction: K. Carlberg, M. Barone, H. Antil (George Mason University). Journal articles for these projects are in progress at the time of this writing.
Nonisothermal Carbothermal Reduction Kinetics of Titanium-Bearing Blast Furnace Slag
NASA Astrophysics Data System (ADS)
Hu, Mengjun; Wei, Ruirui; Hu, Meilong; Wen, Liangying; Ying, Fangqing
2018-05-01
The kinetics of carbothermal reduction of titanium-bearing blast furnace (BF) slag has been studied by thermogravimetric analysis and quadrupole mass spectrometry. The kinetic parameters (activation energy, preexponential factor, and reaction model function) were determined using the Flynn-Wall-Ozawa and Šatava-Šesták methods. The results indicated that reduction of titanium-bearing BF slag can be divided into two stages, namely reduction of phases containing iron and gasification of carbon (< 1095°C), followed by reduction of phases containing titanium (> 1095°C). CO2 was the main off-gas in the temperature range of 530-700°C, whereas CO became the main off-gas when the temperature was greater than 900°C. The activation energy calculated using the Flynn-Wall-Ozawa method was 221.2 kJ/mol. D4 is the mechanism function for carbothermal reduction of titanium-bearing BF slag. Meanwhile, a nonisothermal reduction model is proposed based on the obtained kinetic parameters.
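The Flynn-Wall-Ozawa estimate reported above rests on the isoconversional relation ln β ≈ C − 1.052·Ea/(R·T): at a fixed conversion, plotting ln β against 1/T across heating rates gives Ea from the slope. A minimal sketch, with synthetic data constructed to embed the paper's 221.2 kJ/mol value (the constant 1e9 below is an arbitrary illustration, not a fitted pre-exponential factor):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def fwo_activation_energy(betas, temps):
    """Estimate Ea (J/mol) from heating rates (K/min) and the temperatures (K)
    at which a fixed conversion is reached, via ln(beta) = C - 1.052*Ea/(R*T)."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps), np.log(betas), 1)
    return -slope * R / 1.052

# Synthetic check: data generated with Ea = 221.2 kJ/mol (the value reported above)
Ea_true = 221.2e3
betas = np.array([5.0, 10.0, 20.0, 40.0])
temps = 1.052 * Ea_true / (R * (np.log(1e9) - np.log(betas)))  # invert the FWO line
print(round(fwo_activation_energy(betas, temps) / 1e3, 1))  # → 221.2
```

The Šatava-Šesták step that selects the mechanism function (D4 in the paper) is a separate model-fitting stage and is not sketched here.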
Modeling and Recovery of Iron (Fe) from Red Mud by Coal Reduction
NASA Astrophysics Data System (ADS)
Zhao, Xiancong; Li, Hongxu; Wang, Lei; Zhang, Lifeng
Recovery of Fe from red mud has been studied using statistically designed experiments. The effects of three factors, namely reduction temperature, reduction time, and proportion of additive, on the recovery of Fe have been investigated. Experiments have been carried out using orthogonal central composite design and factorial design methods. A model has been obtained through variance analysis at the 92.5% confidence level.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
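The core ANOVA idea, decomposing a high-dimensional input dependence into a sum of low-dimensional terms, can be illustrated with a first-order anchored (cut-)ANOVA expansion. This is a generic sketch of the decomposition only, not the paper's reduced-basis-collocation implementation:

```python
import numpy as np

def anova1(f, anchor):
    """First-order anchored (cut-)ANOVA: f(x) ~ f(c) + sum_i [f(c with x_i) - f(c)].
    Builds a surrogate from one-dimensional evaluations only."""
    c = np.asarray(anchor, float)
    f0 = f(c)
    def surrogate(x):
        s = f0
        for i, xi in enumerate(x):
            y = c.copy()
            y[i] = xi           # vary one input at a time, others at the anchor
            s += f(y) - f0
        return s
    return surrogate

# Exact for additively separable functions; the residual is the neglected interaction
f = lambda x: x[0] ** 2 + np.sin(x[1]) + 0.1 * x[0] * x[1]
g = anova1(f, anchor=[0.0, 0.0])
x = np.array([0.5, 1.0])
print(round(abs(g(x) - f(x)), 10))  # → 0.05, the dropped term 0.1*x0*x1
```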
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2017-04-01
Physically-based modeling is a widespread tool in understanding and management of natural systems. With the high complexity of many such models and the huge amount of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction methods. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016).
This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step necessary for non-linear models while retaining the speed of the reduced model. This makes POD-DEIM applicable for groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage for variable Dirichlet boundaries compared to the original POD method. We have developed another extension for POD that is designed to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using pod and deim. Advances in Water Resources, 97:130 - 143.
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1985-01-01
The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general purpose software system is emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.
Spatiotemporal Interpolation for Environmental Modelling
Susanto, Ferry; de Souza, Paulo; He, Jing
2016-01-01
A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently from the spatial dimensions, is proposed in this paper. We reviewed and compared three widely-used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of data from Tasmania's South Esk Hydrology model developed by CSIRO. Root mean squared error statistics were used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
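The conventional IDW interpolator that DDW is compared against weights samples by inverse powers of distance; a minimal sketch:

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse distance weighting: weighted mean with weights 1/d^power."""
    d = np.linalg.norm(points - query, axis=1)
    if d.min() < eps:                      # query coincides with a sample point
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(w @ values / w.sum())

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
print(idw(pts, vals, np.array([0.5, 0.5])))  # → 2.5 (equidistant centre point)
```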
NASA Astrophysics Data System (ADS)
Rachmawati, Vimala; Khusnul Arif, Didik; Adzkiya, Dieky
2018-03-01
Real-world systems often have a large order, so the mathematical model has many state variables, which affects the computation time. In addition, generally not all variables are measurable, so estimation is needed for quantities of the system that cannot be measured directly. In this paper, we discuss model reduction and estimation of state variables in a river system to measure the water level. Model reduction approximates a system by one of lower order that has dynamic behaviour similar to the original system, without significant errors. The Singular Perturbation Approximation method is one such model reduction method, in which all state variables of the equilibrium system are partitioned into fast and slow modes. The Kalman filter algorithm is then used to estimate state variables of stochastic dynamic systems, with estimates computed by predicting the state from the system dynamics and correcting with measurement data. Kalman filters are applied to estimate state variables in both the original and the reduced system, and we compare the estimation results and computation times of the two.
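A scalar sketch of the Kalman filter cycle described above (predict from the system dynamics, then correct with measurement data); the constant water level and noise magnitudes below are illustrative, not taken from the paper:

```python
import numpy as np

def kalman_1d(zs, a=1.0, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_k = a*x_{k-1} + w (var q), z_k = x_k + v (var r)."""
    x, p, out = x0, p0, []
    for z in zs:
        x, p = a * x, a * a * p + q          # predict from the dynamics
        k = p / (p + r)                      # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p  # correct with the measurement
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
truth = 5.0 * np.ones(200)                   # constant water level (illustrative)
zs = truth + rng.normal(0.0, 0.5, size=200)  # noisy gauge readings
est = kalman_1d(zs)                          # estimate settles near 5.0
```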
Defeaturing CAD models using a geometry-based size field and facet-based reduction operators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quadros, William Roshan; Owen, Steven James
2010-04-01
We propose a method to automatically defeature a CAD model by detecting irrelevant features using a geometry-based size field and a method to remove the irrelevant features via facet-based operations on a discrete representation. A discrete B-Rep model is first created by obtaining a faceted representation of the CAD entities. The candidate facet entities are then marked for reduction by using a geometry-based size field. This is accomplished by estimating local mesh sizes based on geometric criteria. If the field value at a facet entity goes below a user-specified threshold value then it is identified as an irrelevant feature and is marked for reduction. The reduction of marked facet entities is primarily performed using an edge collapse operator. Care is taken to retain a valid geometry and topology of the discrete model throughout the procedure. The original model is not altered as the defeaturing is performed on a separate discrete model. Associativity between the entities of the discrete model and that of original CAD model is maintained in order to decode the attributes and boundary conditions applied on the original CAD entities onto the mesh via the entities of the discrete model. Example models are presented to illustrate the effectiveness of the proposed approach.
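The edge-collapse operator at the heart of the reduction can be sketched on a bare facet list; real implementations additionally check geometric and topological validity, which is omitted here:

```python
def collapse_edge(tris, u, v):
    """Collapse edge (u, v) by merging vertex v into u; drop degenerate triangles."""
    out = []
    for t in tris:
        t = tuple(u if i == v else i for i in t)
        if len(set(t)) == 3:          # triangles incident to the edge degenerate
            out.append(t)
    return out

# A small fan of 4 triangles; collapsing edge (1, 2) removes both incident triangles
tris = [(0, 1, 2), (1, 3, 2), (2, 3, 4), (0, 2, 4)]
print(collapse_edge(tris, 1, 2))  # → [(1, 3, 4), (0, 1, 4)]
```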
Diagram reduction in problem of critical dynamics of ferromagnets: 4-loop approximation
NASA Astrophysics Data System (ADS)
Adzhemyan, L. Ts; Ivanova, E. V.; Kompaniets, M. V.; Vorobyeva, S. Ye
2018-04-01
Within the framework of the renormalization group approach to the models of critical dynamics, we propose a method for a considerable reduction of the number of integrals needed to calculate the critical exponents. With this method we perform a calculation of the critical exponent z of model A at 4-loop level, where our method allows one to reduce the number of integrals from 66 to 17. The way of constructing the integrand in a Feynman representation of such diagrams is discussed. Integrals were estimated numerically with a sector decomposition technique.
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
Numerical solution for weight reduction model due to health campaigns in Spain
NASA Astrophysics Data System (ADS)
Mohammed, Maha A.; Noor, Noor Fadiya Mohd; Siri, Zailan; Ibrahim, Adriana Irawati Nur
2015-10-01
A transition model between three subpopulations, based on the Body Mass Index of the Valencia community in Spain, is considered. No changes in population nutritional habits and public health strategies on weight reduction until 2030 are assumed. The system of ordinary differential equations is solved using a Runge-Kutta method of higher order. The numerical results obtained are compared with the predicted values of subpopulation proportion based on statistical estimation in 2013, 2015 and 2030. The relative approximate error is calculated. The consistency of the Runge-Kutta method in solving the model is discussed.
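A sketch of the classical fourth-order Runge-Kutta step applied to a small compartment model; the two-compartment system and rates below are illustrative stand-ins, not the paper's three-subpopulation BMI model:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative 2-compartment transition model (made-up rates, not the paper's):
# dN/dt = -a*N + b*O,  dO/dt = a*N - b*O
a, b = 0.3, 0.1
f = lambda t, y: np.array([-a * y[0] + b * y[1], a * y[0] - b * y[1]])

y, h = np.array([0.7, 0.3]), 0.1
for _ in range(500):            # integrate to t = 50, long enough to equilibrate
    y = rk4_step(f, 0.0, y, h)
print(np.round(y, 3))           # → [0.25 0.75], the equilibrium b/(a+b), a/(a+b)
```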
Zwawi, Mohammed A; Moslehy, Faissal A; Rose, Christopher; Huayamave, Victor; Kassab, Alain J; Divo, Eduardo; Jones, Brendan J; Price, Charles T
2017-08-01
This study utilized a computational biomechanical model and applied the least energy path principle to investigate two pathways for closed reduction of high grade infantile hip dislocation. The principle of least energy when applied to moving the femoral head from an initial to a final position considers all possible paths that connect them and identifies the path of least resistance. Clinical reports of severe hip dysplasia have concluded that reduction of the femoral head into the acetabulum may occur by a direct pathway over the posterior rim of the acetabulum when using the Pavlik harness, or by an indirect pathway with reduction through the acetabular notch when using the modified Hoffman-Daimler method. This computational study also compared the energy requirements for both pathways. The anatomical and muscular aspects of the model were derived using a combination of MRI and OpenSim data. Results of this study indicate that the path of least energy closely approximates the indirect pathway of the modified Hoffman-Daimler method. The direct pathway over the posterior rim of the acetabulum required more energy for reduction. This biomechanical analysis confirms the clinical observations of the two pathways for closed reduction of severe hip dysplasia. The path of least energy closely approximated the modified Hoffman-Daimler method. Further study of the modified Hoffman-Daimler method for reduction of severe hip dysplasia may be warranted based on this computational biomechanical analysis. © 2016 The Authors. Journal of Orthopaedic Research Published by Wiley Periodicals, Inc. on behalf of Orthopaedic Research Society. J Orthop Res 35:1799-1805, 2017.
Mixed models and reduction method for dynamic analysis of anisotropic shells
NASA Technical Reports Server (NTRS)
Noor, A. K.; Peters, J. M.
1985-01-01
A time-domain computational procedure is presented for predicting the dynamic response of laminated anisotropic shells. The two key elements of the procedure are: (1) use of mixed finite element models having independent interpolation (shape) functions for stress resultants and generalized displacements for the spatial discretization of the shell, with the stress resultants allowed to be discontinuous at interelement boundaries; and (2) use of a dynamic reduction method, with the global approximation vectors consisting of the static solution and an orthogonal set of Lanczos vectors. The dynamic reduction is accomplished by means of successive application of the finite element method and the classical Rayleigh-Ritz technique. The finite element method is first used to generate the global approximation vectors. Then the Rayleigh-Ritz technique is used to generate a reduced system of ordinary differential equations in the amplitudes of these modes. The temporal integration of the reduced differential equations is performed by using an explicit half-station central difference scheme (Leap-frog method). The effectiveness of the proposed procedure is demonstrated by means of a numerical example and its advantages over reduction methods used with the displacement formulation are discussed.
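The reduction strategy, generating global approximation vectors (the static solution plus follow-up vectors) and then applying the Rayleigh-Ritz technique to obtain a small reduced system, can be sketched on a simple spring-mass chain. Krylov-type vectors stand in for the paper's Lanczos vectors, and the unit-mass chain is an illustrative stand-in for the shell model:

```python
import numpy as np

n, r = 100, 6
# Stiffness of a fixed-fixed spring chain with unit masses (so M = I)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Global approximation vectors: the static solution under a load, then
# Krylov-type follow-ups (a simple stand-in for Lanczos vectors)
f = np.ones(n)
V = [np.linalg.solve(K, f)]
for _ in range(r - 1):
    V.append(np.linalg.solve(K, V[-1]))
Phi = np.linalg.qr(np.column_stack(V))[0]    # orthonormal Ritz basis

# Rayleigh-Ritz: project and solve the small r x r eigenproblem
lam_red = np.linalg.eigvalsh(Phi.T @ K @ Phi)[0]
lam_full = np.linalg.eigvalsh(K)[0]
print(abs(lam_red - lam_full) / lam_full < 1e-6)  # lowest mode captured
```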
Comparisons of non-Gaussian statistical models in DNA methylation analysis.
Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun
2014-06-16
As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.
Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.
2017-09-17
In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
NASA Technical Reports Server (NTRS)
Noor, A. K.
1983-01-01
Advances in continuum modeling, progress in reduction methods, and analysis and modeling needs for large space structures are covered with specific attention given to repetitive lattice trusses. As far as continuum modeling is concerned, an effective and verified analysis capability exists for linear thermoelastic stress, bifurcation buckling, and free vibration problems of repetitive lattices. However, application of continuum modeling to nonlinear analysis needs more development. Reduction methods are very effective for bifurcation buckling and static (steady-state) nonlinear analysis. However, more work is needed to realize their full potential for nonlinear dynamic and time-dependent problems. As far as analysis and modeling needs are concerned, three areas are identified: loads determination, modeling and nonclassical behavior characteristics, and computational algorithms. The impact of new advances in computer hardware, software, integrated analysis, CAD/CAM systems, and materials technology is also discussed.
The Role of Hierarchy in Response Surface Modeling of Wind Tunnel Data
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2010-01-01
This paper is intended as a tutorial introduction to certain aspects of response surface modeling, for the experimentalist who has started to explore these methods as a means of improving productivity and quality in wind tunnel testing and other aerospace applications. A brief review of the productivity advantages of response surface modeling in aerospace research is followed by a description of the advantages of a common coding scheme that scales and centers independent variables. The benefits of model term reduction are reviewed. A constraint on model term reduction with coded factors is described in some detail, which requires such models to be well-formulated, or hierarchical. Examples illustrate the consequences of ignoring this constraint. The implication for automated regression model reduction procedures is discussed, and some opinions formed from the author's experience are offered on coding, model reduction, and hierarchy.
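The hierarchy constraint can be demonstrated numerically: with coded factors, a well-formulated (hierarchical) model gives fitted values that are invariant to re-centring the coding, whereas a model that keeps an interaction but drops a parent main effect does not. An illustrative sketch with made-up data, not the paper's wind tunnel examples:

```python
import numpy as np

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)  # coded factors
y = 1.0 + 2.0 * x1 + 3.0 * x2 + 4.0 * x1 * x2 + rng.normal(0, 0.1, 30)

def fitted(cols):
    X = np.column_stack(cols)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return X @ beta

one = np.ones_like(x1)
shift = x1 - 0.5                      # recode x1 with a different centre

# Hierarchical model {1, x1, x2, x1*x2}: fitted values unchanged by recoding,
# because the column space is closed under the shift
h0 = fitted([one, x1, x2, x1 * x2])
h1 = fitted([one, shift, x2, shift * x2])
print(np.allclose(h0, h1))           # → True

# Non-hierarchical model {1, x1, x1*x2} (x2 main effect dropped): recoding
# changes the predictions, since shifting x1 generates a missing x2 term
n0 = fitted([one, x1, x1 * x2])
n1 = fitted([one, shift, shift * x2])
print(np.allclose(n0, n1))           # → False
```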
Dynamic test/analysis correlation using reduced analytical models
NASA Technical Reports Server (NTRS)
Mcgowan, Paul E.; Angelucci, A. Filippo; Javeed, Mehzad
1992-01-01
Test/analysis correlation is an important aspect of the verification of analysis models which are used to predict on-orbit response characteristics of large space structures. This paper presents results of a study using reduced analysis models for performing dynamic test/analysis correlation. The reduced test-analysis model (TAM) has the same number and orientation of DOF as the test measurements. Two reduction methods, static (Guyan) reduction and the Improved Reduced System (IRS) reduction, are applied to the test/analysis correlation of a laboratory truss structure. Simulated test results and modal test data are used to examine the performance of each method. It is shown that selection of DOF to be retained in the TAM is critical when large structural masses are involved. In addition, the use of modal test results may provide difficulties in TAM accuracy even if a large number of DOF are retained in the TAM.
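A sketch of the static (Guyan) reduction used for the TAM: the slave DOF are condensed out through the static transformation T = [I; −Kss⁻¹Ksm], and the reduced matrices are congruence transforms of the full ones. The spring-chain model and master set are illustrative:

```python
import numpy as np

def guyan(K, M, masters):
    """Static (Guyan) reduction: condense the non-retained (slave) DOF."""
    n = K.shape[0]
    slaves = [i for i in range(n) if i not in masters]
    Kss = K[np.ix_(slaves, slaves)]
    Ksm = K[np.ix_(slaves, masters)]
    T = np.zeros((n, len(masters)))
    T[masters, np.arange(len(masters))] = 1.0         # identity on master DOF
    T[np.ix_(slaves, np.arange(len(masters)))] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T

n = 10
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # fixed-fixed spring chain
M = np.eye(n)                                          # lumped unit masses
Kr, Mr = guyan(K, M, masters=[0, 2, 4, 6, 8])

# The TAM's lowest frequency should approximate (from above) the full model's
w_full = np.sqrt(np.linalg.eigvalsh(K)[0])             # M = I, ordinary eigenproblem
L = np.linalg.cholesky(Mr)                             # whiten the reduced mass
B = np.linalg.solve(L, Kr)
w_red = np.sqrt(np.linalg.eigvalsh(np.linalg.solve(L, B.T).T)[0])
print(abs(w_red - w_full) / w_full < 0.05)
```

The IRS method mentioned above adds an inertial correction to this purely static transformation; it is not sketched here.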
NASA Technical Reports Server (NTRS)
Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San
1994-01-01
This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Padé approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Padé approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
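The multilevel variance-reduction idea, shifting most samples to a cheap correlated model and using only a few high-fidelity corrections, can be sketched with a two-level estimator. The functions below are generic stand-ins, not HDG or reduced-basis solvers:

```python
import numpy as np

rng = np.random.default_rng(3)

def f_hi(x):   # "expensive" high-fidelity output (illustrative stand-in)
    return np.sin(x) + 0.05 * x ** 2

def f_lo(x):   # cheap correlated surrogate (stand-in for a reduced-basis model)
    return np.sin(x)

# Two-level estimator: many cheap samples plus a few expensive corrections;
# E[f_hi] = E[f_lo] + E[f_hi - f_lo], and the correction has small variance
x_many = rng.normal(size=100_000)
x_few = rng.normal(size=1_000)
est = f_lo(x_many).mean() + (f_hi(x_few) - f_lo(x_few)).mean()

exact = 0.05   # E[sin X] = 0 and E[0.05 X^2] = 0.05 for X ~ N(0, 1)
print(abs(est - exact) < 0.02)  # far tighter than 1,000 expensive samples alone
```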
Cost-effectiveness analysis of risk-reduction measures to reach water safety targets.
Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof; Pettersson, Thomas J R
2011-01-01
Identifying the most suitable risk-reduction measures in drinking water systems requires a thorough analysis of possible alternatives. In addition to the effects on the risk level, also the economic aspects of the risk-reduction alternatives are commonly considered important. Drinking water supplies are complex systems and to avoid sub-optimisation of risk-reduction measures, the entire system from source to tap needs to be considered. There is a lack of methods for quantification of water supply risk reduction in an economic context for entire drinking water systems. The aim of this paper is to present a novel approach for risk assessment in combination with economic analysis to evaluate risk-reduction measures based on a source-to-tap approach. The approach combines a probabilistic and dynamic fault tree method with cost-effectiveness analysis (CEA). The developed approach comprises the following main parts: (1) quantification of risk reduction of alternatives using a probabilistic fault tree model of the entire system; (2) combination of the modelling results with CEA; and (3) evaluation of the alternatives with respect to the risk reduction, the probability of not reaching water safety targets and the cost-effectiveness. The fault tree method and CEA enable comparison of risk-reduction measures in the same quantitative unit and consider costs and uncertainties. The approach provides a structured and thorough analysis of risk-reduction measures that facilitates transparency and long-term planning of drinking water systems in order to avoid sub-optimisation of available resources for risk reduction. Copyright © 2010 Elsevier Ltd. All rights reserved.
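The combination of fault-tree probability calculus with cost-effectiveness ratios can be sketched as follows; the gate structure, probabilities, and costs are invented for illustration and are far simpler than the paper's source-to-tap model:

```python
def or_gate(*p):   # probability that at least one basic event occurs
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):  # probability that all basic events occur
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Toy tree: supply fails if the source is contaminated AND treatment fails,
# OR the distribution network fails (illustrative probabilities)
def top_event(p_contam, p_treat_fail, p_distrib):
    return or_gate(and_gate(p_contam, p_treat_fail), p_distrib)

baseline = top_event(0.10, 0.05, 0.002)

# Measure A: better treatment barrier; Measure B: network rehabilitation
risk_a = top_event(0.10, 0.01, 0.002)
risk_b = top_event(0.10, 0.05, 0.001)
cost_a, cost_b = 2.0, 5.0  # illustrative costs

# Cost-effectiveness ratio: cost per unit of risk reduction (lower is better)
cea_a = cost_a / (baseline - risk_a)
cea_b = cost_b / (baseline - risk_b)
print(cea_a < cea_b)  # → True: measure A buys more risk reduction per unit cost
```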
Jiang, Dong; Hao, Mengmeng; Wang, Qiao; Huang, Yaohuan; Fu, Xinyu
2014-01-01
The main purpose for developing biofuel is to reduce GHG (greenhouse gas) emissions, but the comprehensive environmental impact of such fuels is not clear. Life cycle analysis (LCA), as a complete comprehensive analysis method, has been widely used in bioenergy assessment studies. Great efforts have been directed toward establishing an efficient method for comprehensively estimating the greenhouse gas (GHG) emission reduction potential from the large-scale cultivation of energy plants by combining LCA with ecosystem/biogeochemical process models. LCA presents a general framework for evaluating the energy consumption and GHG emission from energy crop planting, yield acquisition, production, product use, and postprocessing. Meanwhile, ecosystem/biogeochemical process models are adopted to simulate the fluxes and storage of energy, water, carbon, and nitrogen in the soil-plant (energy crops) soil continuum. Although clear progress has been made in recent years, some problems still exist in current studies and should be addressed. This paper reviews the state-of-the-art method for estimating GHG emission reduction through developing energy crops and introduces in detail a new approach for assessing GHG emission reduction by combining LCA with biogeochemical process models. The main achievements of this study along with the problems in current studies are described and discussed. PMID:25045736
Model-size reduction for the buckling and vibration analyses of anisotropic panels
NASA Technical Reports Server (NTRS)
Noor, A. K.; Whitworth, S. L.
1986-01-01
A computational procedure is presented for reducing the size of the model used in the buckling and vibration analyses of symmetric anisotropic panels to that of the corresponding orthotropic model. The key elements of the procedure are the application of an operator splitting technique through the decomposition of the material stiffness matrix of the panel into the sum of orthotropic and nonorthotropic (anisotropic) parts and the use of a reduction method through successive application of the finite element method and the classical Rayleigh-Ritz technique. The effectiveness of the procedure is demonstrated by numerical examples.
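The Rayleigh-Ritz reduction step mentioned above can be sketched generically (a toy symmetric positive definite system with an exact modal basis; this is not the paper's operator-splitting scheme):

```python
# Generic Rayleigh-Ritz reduction sketch: project K and M onto a small
# basis and solve the reduced eigenproblem.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)     # symmetric positive definite "stiffness"
M = np.eye(n)                   # unit "mass" for simplicity

# For illustration the Ritz basis is taken as the five lowest eigenvectors
# of K itself, so the reduction is exact for those modes.
w_full, V_full = np.linalg.eigh(K)
V = V_full[:, :5]

Kr = V.T @ K @ V                # 5x5 reduced stiffness
Mr = V.T @ M @ V                # reduced mass; here the identity, since the
                                # basis is orthonormal and M = I
w_red = np.linalg.eigvalsh(Kr)  # reduced eigenvalues
print(w_red, w_full[:5])
```

In practice the basis would come from the orthotropic sub-problem rather than the full operator; the projection algebra is the same.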
Structural Health Monitoring of Large Structures
NASA Technical Reports Server (NTRS)
Kim, Hyoung M.; Bartkowicz, Theodore J.; Smith, Suzanne Weaver; Zimmerman, David C.
1994-01-01
This paper describes a damage detection and health monitoring method that was developed for large space structures using on-orbit modal identification. After evaluating several existing model refinement and model reduction/expansion techniques, a new approach was developed to identify the location and extent of structural damage with a limited number of measurements. A general area of structural damage is first identified and, subsequently, a specific damaged structural component is located. This approach takes advantage of two different model refinement methods (optimal-update and design sensitivity) and two different model size matching methods (model reduction and eigenvector expansion). Performance of the proposed damage detection approach was demonstrated with test data from two different laboratory truss structures. This space technology can also be applied to structural inspection of aircraft, offshore platforms, oil tankers, bridges, and buildings. In addition, its applications to model refinement will improve the design of structural systems such as automobiles and electronic packaging.
NASA Technical Reports Server (NTRS)
Hsia, Wei Shen
1989-01-01
A validated technology data base is being developed in the areas of control/structures interaction, deployment dynamics, and system performance for Large Space Structures (LSS). A Ground Facility (GF), in which the dynamics and control systems being considered for LSS applications can be verified, was designed and built. One of the important aspects of the GF is to verify the analytical model for the control system design. The procedure is to describe the control system mathematically as well as possible, then to perform tests on the control system, and finally to factor those results into the mathematical model. The reduction of the order of a higher-order control plant was addressed. The computer program for the maximum entropy principle adopted in Hyland's MEOP method was improved; tested against the test problem, it produced a very close match. Two methods of model reduction were examined: Wilson's model reduction method and Hyland's optimal projection (OP) method. Design of a computer program for Hyland's OP method was attempted; owing to the difficulty encountered at the stage where a special matrix factorization technique is needed to obtain the required projection matrix, the program was successful up to finding the Linear Quadratic Gaussian solution, but not beyond. Numerical results are presented along with computer programs which employed ORACLS.
Zhang, Ruibin; Qian, Xin; Zhu, Wenting; Gao, Hailong; Hu, Wei; Wang, Jinhua
2014-09-09
In the beginning of the 21st century, the deterioration of water quality in Taihu Lake, China, has caused widespread concern. The primary source of pollution in Taihu Lake is river inflows, so effective pollution load reduction scenarios need to be implemented in these rivers in order to improve the water quality of the lake. It is important to select appropriate pollution load reduction scenarios for achieving particular goals, and the aim of this study was to facilitate that selection. The QUAL2K model for river water quality was used to simulate the effects of a range of pollution load reduction scenarios in the Wujin River, one of the major inflow rivers of Taihu Lake. The model was calibrated for the year 2010 and validated for the year 2011. The pollution load reduction scenarios were assessed using an analytic hierarchy process, and increasing rates of the evaluation indicators were predicted using the Delphi method. The results showed that control of pollution at the source is the optimal method for pollution prevention and control, whereas the method of "Treatment after Pollution" has adverse environmental, social and ecological effects. The method applied in this study can assist environmental managers in selecting suitable pollution load reduction scenarios for achieving various objectives.
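The analytic-hierarchy-process step can be sketched with a generic AHP weight calculation (the pairwise judgements below are made up for illustration, not the study's data): criterion weights are the normalised principal eigenvector of the comparison matrix, and a consistency index flags contradictory judgements.

```python
# Generic AHP sketch: priority weights from a pairwise-comparison matrix.
import numpy as np

# Hypothetical 3x3 comparison of criteria (e.g. environmental, social,
# ecological effects); entry (i, j) is how much criterion i is preferred
# over criterion j on the usual 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)            # principal (Perron) eigenvalue
w = np.abs(vecs[:, k].real)
w /= w.sum()                        # normalised priority weights

# Consistency index: (lambda_max - n) / (n - 1); values near zero mean
# the judgements are close to mutually consistent.
n = A.shape[0]
ci = (vals.real[k] - n) / (n - 1)
print(w, ci)
```

The scenario scores in an AHP study are then weighted sums of indicator values using `w`.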
Multibody model reduction by component mode synthesis and component cost analysis
NASA Technical Reports Server (NTRS)
Spanos, J. T.; Mingori, D. L.
1990-01-01
The classical assumed-modes method is widely used in modeling the dynamics of flexible multibody systems. According to the method, the elastic deformation of each component in the system is expanded in a series of spatial and temporal functions known as modes and modal coordinates, respectively. This paper focuses on the selection of component modes used in the assumed-modes expansion. A two-stage component modal reduction method is proposed combining Component Mode Synthesis (CMS) with Component Cost Analysis (CCA). First, each component model is truncated such that the contribution of the high frequency subsystem to the static response is preserved. Second, a new CMS procedure is employed to assemble the system model and CCA is used to further truncate component modes in accordance with their contribution to a quadratic cost function of the system output. The proposed method is demonstrated with a simple example of a flexible two-body system.
Zhang, Xue-Ying; Wen, Zong-Guo
2014-11-01
To evaluate the reduction potential of industrial water pollutant emissions and to study the application of technology simulation in pollutant control and environmental management, an Industrial Reduction Potential Analysis and Environment Management (IRPAEM) model was developed based on a coupling of "material-process-technology-product". The model integrates bottom-up modeling with scenario analysis and was applied to China's paper industry. Results showed that under the CM scenario, the reduction potentials of wastewater, COD and ammonia nitrogen would reach 7 × 10^8 t, 39 × 10^4 t and 0.3 × 10^4 t, respectively, in 2015, and 13.8 × 10^8 t, 56 × 10^4 t and 0.5 × 10^4 t, respectively, in 2020. Strengthening end-of-pipe treatment would remain the key method for reducing emissions during 2010-2020, while the reduction effect of structural adjustment would become more pronounced during 2015-2020. Pollution production could essentially reach the domestic or international advanced level of clean production in 2015 and 2020; wastewater and ammonia nitrogen would essentially meet the emission standards in 2015 and 2020, whereas COD would not.
An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems
NASA Astrophysics Data System (ADS)
Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.
2016-04-01
Model Order Reduction (MOR) methods are employed in many fields of engineering to reduce the processing time of complex computational simulations. A usual approach is the application of Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities are present in a dynamical system and this technique is employed several times along the simulation, it can be very inefficient. This work proposes a new adaptive strategy, which ensures low computational cost and small error, to deal with this problem. This work also presents a new method to select snapshots, named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while preserving the quality of the model, without the use of the Proper Orthogonal Decomposition (POD).
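For contrast, the standard POD construction that PSS aims to improve upon can be sketched as follows (synthetic snapshots, not the paper's data): the SVD of a snapshot matrix yields an energy-ranked basis that is truncated to a small reduced space.

```python
# Standard POD sketch: SVD of a snapshot matrix, truncated by energy.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
# Synthetic snapshots: two dominant spatial structures plus small noise.
snaps = np.column_stack([
    np.sin(np.pi * x) * np.cos(0.1 * t)
    + 0.5 * np.sin(3 * np.pi * x) * np.sin(0.1 * t)
    + 1e-3 * rng.standard_normal(x.size)
    for t in range(40)
])

U, s, _ = np.linalg.svd(snaps, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # modes for 99.99% energy
basis = U[:, :r]                               # reduced space

# Project a snapshot onto the reduced space; the residual measures the
# information lost by the truncation.
a = basis.T @ snaps[:, 0]
err = np.linalg.norm(snaps[:, 0] - basis @ a) / np.linalg.norm(snaps[:, 0])
print(r, err)
```

A Galerkin-projected reduced model then evolves only the `r` coefficients `a` instead of the full state.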
Tensor sufficient dimension reduction
Zhong, Wenxuan; Xing, Xin; Suslick, Kenneth
2015-01-01
A tensor is a multiway array. With the rapid development of science and technology over the past decades, large numbers of tensor observations are routinely collected, processed, and stored in scientific research and commercial activities. Colorimetric sensor array (CSA) data are one such example. Driven by the need to address the data analysis challenges that arise in CSA data, we propose a tensor dimension reduction model, a model assuming nonlinear dependence between a response and a projection of all the tensor predictors. The tensor dimension reduction models are estimated in a sequential iterative fashion. The proposed method is applied to CSA data collected for 150 pathogenic bacteria from 10 bacterial species and 14 bacteria from one control species. Empirical performance demonstrates that the proposed method can greatly improve the sensitivity and specificity of the CSA technique. PMID:26594304
A parametric model order reduction technique for poroelastic finite element models.
Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico
2017-10-01
This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced order basis generated with the proper orthogonal decomposition instead of standard modal approaches. This has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results of the literature; in the second, the reduced order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.
Supersonic Aftbody Closure Wind-Tunnel Testing, Data Analysis, and Computational Results
NASA Technical Reports Server (NTRS)
Allen, Jerry; Martin, Grant; Kubiatko, Paul
1999-01-01
This paper reports on the model, test, and results from the Langley Supersonic Aftbody Closure wind tunnel test. This project is an experimental evaluation of the 1.5% Technology Concept Aircraft (TCA) aftbody closure model (Model 23) in the Langley Unitary Plan Wind Tunnel. The baseline TCA design is the result of a multidisciplinary, multipoint optimization process and was developed using linear design and analysis methods, supplemented with Euler and Navier-Stokes numerical methods. After a thorough design review, it was decided to use an upswept blade attached to the forebody as the mounting system; structural concerns dictated that a wingtip support system would not be feasible. Only the aftbody part of the model is metric. The metric break was chosen to be at the fuselage station where prior aft-sting supported models had been truncated. Model 23 is thus a modified version of Model 20: the wing strongback, flap parts, and nacelles from Model 20 were reused, whereas new aftbodies, a common forebody, and some new tails were fabricated. In summary, significant differences in longitudinal and directional stability and control characteristics between the ABF and ABB aftbody geometries were measured. Correcting the experimental data obtained for the TCA configuration with the flared aftbody to be representative of the baseline TCA closed aftbody will result in a significant reduction in longitudinal stability, a moderate reduction in stabilizer effectiveness and directional stability, and a moderate to significant reduction in rudder effectiveness. These reductions in the stability and control effectiveness levels of the baseline TCA closed aftbody are attributed to the reduction in carry-over area.
Yan, Mingquan; Chen, Zhanghao; Li, Na; Zhou, Yuxuan; Zhang, Chenyang; Korshin, Gregory
2018-06-01
This study examined the electrochemical (EC) reduction of iodinated contrast media (ICM), exemplified by iopamidol and diatrizoate. The rotating ring-disc electrode (RRDE) method was used to elucidate the rates and mechanisms of the EC reactions of the selected ICMs. Experiments were carried out at varying hydrodynamic conditions and concentrations of iopamidol, diatrizoate, natural organic matter (NOM) and model compounds (resorcinol, catechol, guaiacol), which were used to examine interactions between products of the EC reduction of ICMs and halogenation-active species. The data showed that iopamidol and diatrizoate were EC-reduced at potentials < -0.45 V vs. s.c.e. In the range of potentials -0.65 to -0.85 V their reduction was mass transfer-controlled. The presence of NOM and model compounds did not affect the EC reduction of iopamidol and diatrizoate, but active iodine species formed as a result of the EC-induced transformations of these ICMs reacted readily with NOM and model compounds. These data provide more insight into the generation of iodine-containing by-products during the reductive degradation of ICMs. Copyright © 2018 Elsevier Ltd. All rights reserved.
Multi-Group Reductions of LTE Air Plasma Radiative Transfer in Cylindrical Geometries
NASA Technical Reports Server (NTRS)
Scoggins, James; Magin, Thierry Edouard Bertran; Wray, Alan; Mansour, Nagi N.
2013-01-01
Air plasma radiation in Local Thermodynamic Equilibrium (LTE) within cylindrical geometries is studied with an application towards modeling the radiative transfer inside arc-constrictors, a central component of constricted-arc arc jets. A detailed database of spectral absorption coefficients for LTE air is formulated using the NEQAIR code developed at NASA Ames Research Center. The database stores calculated absorption coefficients for 1,051,755 wavelengths between 0.04 µm and 200 µm over a wide temperature (500 K to 15,000 K) and pressure (0.1 atm to 10.0 atm) range. The multi-group method for spectral reduction is studied by generating a range of reductions, including pure binning and banding reductions, from the detailed absorption coefficient database. The accuracy of each reduction is compared to line-by-line calculations for cylindrical temperature profiles resembling typical profiles found in arc-constrictors. It is found that a reduction of only 1000 groups is sufficient to accurately model the LTE air radiation over a large temperature and pressure range. In addition to the reduction comparison, the cylindrical-slab formulation is compared with the finite-volume method for the numerical integration of the radiative flux inside cylinders of varying length. It is determined that cylindrical slabs can be used to accurately model most arc-constrictors due to their high length-to-radius ratios.
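A pure-binning reduction of the kind compared above can be sketched on a toy spectrum (random absorption coefficients standing in for the NEQAIR database; group count and distribution are arbitrary): wavelengths are grouped by absorption magnitude and each group is represented by its mean coefficient.

```python
# Toy pure-binning multi-group reduction of a fake absorption spectrum.
import numpy as np

rng = np.random.default_rng(2)
kappa = rng.lognormal(mean=0.0, sigma=3.0, size=10_000)  # fake spectrum

def bin_reduce(kappa, n_groups):
    """Group spectral points by absorption magnitude; one mean per group."""
    order = np.argsort(kappa)
    groups = np.array_split(order, n_groups)
    k_group = np.array([kappa[g].mean() for g in groups])
    membership = np.empty(kappa.size, dtype=int)
    for i, g in enumerate(groups):
        membership[g] = i
    return k_group, membership

k_group, member = bin_reduce(kappa, 100)

# With equal-sized groups the reduction preserves the spectrum-averaged
# absorption coefficient exactly (up to rounding).
recon = k_group[member]
print(abs(recon.mean() - kappa.mean()))
```

A banding reduction would instead group contiguous wavelength intervals; the bookkeeping is the same with `order` replaced by the wavelength index.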
Calibration of Reduced Dynamic Models of Power Systems using Phasor Measurement Unit (PMU) Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Lu, Shuai; Singh, Ruchi
2011-09-23
Accuracy of a power system dynamic model is essential to the secure and efficient operation of the system. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, identification algorithms have been developed to calibrate parameters of individual components using measurement data from staged tests. To facilitate online dynamic studies for large power system interconnections, this paper proposes a model reduction and calibration approach using phasor measurement unit (PMU) data. First, a model reduction method is used to reduce the number of dynamic components. Then, a calibration algorithm is developed to estimate parameters of the reduced model. This approach will help to maintain an accurate dynamic model suitable for online dynamic studies. The performance of the proposed method is verified through simulation studies.
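The calibration idea can be sketched generically (a coarse grid search fitted to a synthetic ringdown signal; the paper's actual identification algorithm and power-system model are not reproduced): parameters of a reduced model are chosen to minimise the misfit against PMU-like measurements.

```python
# Generic calibration sketch: fit damping and frequency of a reduced
# second-order mode to a noisy synthetic "PMU" ringdown record.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 500)
sigma_true, omega_true = -0.25, 2.0 * np.pi * 0.6    # damping, frequency
y = np.exp(sigma_true * t) * np.cos(omega_true * t) \
    + 0.01 * rng.standard_normal(t.size)             # measured signal

# Coarse grid search minimising the residual norm, standing in for a
# proper identification algorithm.
sigmas = np.linspace(-0.5, 0.0, 51)
omegas = 2.0 * np.pi * np.linspace(0.3, 0.9, 61)
best = min((np.linalg.norm(y - np.exp(s * t) * np.cos(w * t)), s, w)
           for s in sigmas for w in omegas)
_, sigma_hat, omega_hat = best
print(sigma_hat, omega_hat)
```

In a real workflow the residual would compare simulated and measured bus responses, and a gradient-based estimator would replace the grid.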
Modifiable Prostate Cancer Risk Reduction and Early Detection Behaviors in Black Men
ERIC Educational Resources Information Center
Odedina, Folakemi T.; Scrivens, John J., Jr.; Larose-Pierre, Margareth; Emanuel, Frank; Adams, Angela Denise; Dagne, Getachew A.; Pressey, Shannon Alexis; Odedina, Oladapo
2011-01-01
Objective: To explore the personal factors related to modifiable prostate cancer risk-reduction and detection behaviors among black men. Methods: Three thousand four hundred thirty (3430) black men were surveyed and structural equation modeling employed to test study hypotheses. Results: Modifiable prostate cancer risk-reduction behavior was found…
Model reduction for slow–fast stochastic systems with metastable behaviour
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruna, Maria; Chapman, S. Jonathan
2014-05-07
The quasi-steady-state approximation (or stochastic averaging principle) is a useful tool in the study of multiscale stochastic systems, giving a practical method by which to reduce the number of degrees of freedom in a model. The method is extended here to slow–fast systems in which the fast variables exhibit metastable behaviour. The key parameter that determines the form of the reduced model is the ratio of the timescale for the switching of the fast variables between metastable states to the timescale for the evolution of the slow variables. The method is illustrated with two examples: one from biochemistry (a fast-species-mediated chemical switch coupled to a slower varying species), and one from ecology (a predator–prey system). Numerical simulations of each model reduction are compared with those of the full system.
Wolfe, Marlene K; Gallandat, Karin; Daniels, Kyle; Desmarais, Anne Marie; Scheinman, Pamela; Lantagne, Daniele
2017-01-01
To prevent Ebola transmission, frequent handwashing is recommended in Ebola Treatment Units and communities. However, little is known about which handwashing protocol is most efficacious. We evaluated six handwashing protocols (soap and water, alcohol-based hand sanitizer (ABHS), and 0.05% sodium dichloroisocyanurate, high-test hypochlorite (HTH), and stabilized and non-stabilized sodium hypochlorite solutions) for 1) efficacy of handwashing on the removal and inactivation of non-pathogenic model organisms and 2) persistence of organisms in rinse water. Model organisms E. coli and bacteriophage Phi6 were used to evaluate handwashing with and without an organic load added to simulate bodily fluids. Hands were inoculated with test organisms, washed, and rinsed using a glove juice method to retrieve remaining organisms. Impact was estimated by comparing the log reduction in organisms after handwashing to the log reduction without handwashing. Rinse water was collected to test for persistence of organisms. Handwashing resulted in a 1.94-3.01 log reduction in E. coli concentration without, and 2.18-3.34 with, soil load; and a 2.44-3.06 log reduction in Phi6 without, and 2.71-3.69 with, soil load. HTH performed most consistently well, with significantly greater log reductions than other handwashing protocols in three models. However, the magnitude of the differences in handwashing efficacy was small, suggesting the protocols are similarly efficacious. Rinse water demonstrated a 0.28-4.77 log reduction in remaining E. coli without, and 0.21-4.49 with, soil load, and a 1.26-2.02 log reduction in Phi6 without, and 1.30-2.20 with, soil load. Chlorine resulted in significantly less persistence of E. coli in both conditions and of Phi6 without soil load in rinse water (p<0.001). Thus, chlorine-based methods may offer the benefit of reducing persistence in rinse water.
We recommend responders use the most practical handwashing method to ensure hand hygiene in Ebola contexts, considering the potential benefit of chlorine-based methods in rinse water persistence.
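The log-reduction metric used throughout the study is simple to state; a minimal sketch with made-up organism counts (not the study's data):

```python
# Log10 reduction relative to a no-handwashing control, as used in
# hand-hygiene efficacy studies.
import math

def log_reduction(cfu_control, cfu_after):
    """LR = log10(organisms recovered without washing)
          - log10(organisms recovered after washing)."""
    return math.log10(cfu_control) - math.log10(cfu_after)

# Hypothetical E. coli counts recovered by a glove-juice method.
lr_soap = log_reduction(1e7, 1e4)        # a 3-log (99.9%) reduction
lr_water_only = log_reduction(1e7, 1e6)  # a 1-log (90%) reduction
print(lr_soap, lr_water_only)
```

A difference of one in log reduction therefore corresponds to a tenfold difference in surviving organisms, which is why the 0.3-0.7 spread among protocols above is described as small.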
Development and validation of a building design waste reduction model.
Llatas, C; Osmani, M
2016-10-01
Reduction in construction waste is a pressing need in many countries. The design of building elements is considered a pivotal process to achieve waste reduction at source, which enables an informed prediction of their wastage reduction levels. However the lack of quantitative methods linking design strategies to waste reduction hinders designing out waste practice in building projects. Therefore, this paper addresses this knowledge gap through the design and validation of a Building Design Waste Reduction Strategies (Waste ReSt) model that aims to investigate the relationships between design variables and their impact on onsite waste reduction. The Waste ReSt model was validated in a real-world case study involving 20 residential buildings in Spain. The validation process comprises three stages. Firstly, design waste causes were analyzed. Secondly, design strategies were applied leading to several alternative low waste building elements. Finally, their potential source reduction levels were quantified and discussed within the context of the literature. The Waste ReSt model could serve as an instrumental tool to simulate designing out strategies in building projects. The knowledge provided by the model could help project stakeholders to better understand the correlation between the design process and waste sources and subsequently implement design practices for low-waste buildings. Copyright © 2016 Elsevier Ltd. All rights reserved.
Model-based reinforcement learning with dimension reduction.
Tangkaratt, Voot; Morimoto, Jun; Sugiyama, Masashi
2016-12-01
The goal of reinforcement learning is to learn an optimal policy which controls an agent to acquire the maximum cumulative reward. The model-based reinforcement learning approach learns a transition model of the environment from data, and then derives the optimal policy using the transition model. However, learning an accurate transition model in high-dimensional environments requires a large amount of data which is difficult to obtain. To overcome this difficulty, in this paper, we propose to combine model-based reinforcement learning with the recently developed least-squares conditional entropy (LSCE) method, which simultaneously performs transition model estimation and dimension reduction. We also further extend the proposed method to imitation learning scenarios. The experimental results show that policy search combined with LSCE performs well for high-dimensional control tasks including real humanoid robot control. Copyright © 2016 Elsevier Ltd. All rights reserved.
A model for interprovincial air pollution control based on futures prices.
Zhao, Laijun; Xue, Jian; Gao, Huaizhu Oliver; Li, Changmin; Huang, Rongbing
2014-05-01
Based on the current status of research on tradable emission rights futures, this paper introduces basic market-related assumptions for China's interprovincial air pollution control problem. The authors construct an interprovincial air pollution control model based on futures prices: the model calculates the spot price of emission rights using a classic futures pricing formula and determines the identities of buyers and sellers for the various provinces according to a partitioning criterion, thereby revealing five trading markets. To ensure interprovincial cooperation, a rational allocation of the benefits from this model was achieved using the Shapley value method to construct an optimal reduction program and to determine the optimal annual decisions for each province. Finally, the Beijing-Tianjin-Hebei region, which has recently experienced serious pollution, was used as a case study. It was found that the model reduced the overall cost of reducing SO2 pollution. Moreover, each province can lower its cost of air pollution reduction, resulting in a win-win solution. Adopting the model would therefore enhance regional cooperation and promote the control of China's air pollution.
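The Shapley-value allocation step can be sketched with a made-up three-province cost-savings game (the coalition values below are illustrative, not the paper's data): each province's share is its average marginal contribution over all orders in which the coalition could form.

```python
# Shapley value by enumeration of join orders (fine for few players).
from itertools import permutations

# Hypothetical cooperative savings v(S) from provinces jointly cutting SO2.
v = {frozenset(): 0, frozenset('A'): 10, frozenset('B'): 20,
     frozenset('C'): 30, frozenset('AB'): 40, frozenset('AC'): 50,
     frozenset('BC'): 60, frozenset('ABC'): 90}

def shapley(players, v):
    """Average marginal contribution of each player over all join orders."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in players}

phi = shapley('ABC', v)
print(phi)
```

By construction the shares sum to the grand-coalition value, which is the "rational allocation" property that keeps every province willing to cooperate.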
Draycott, T; van der Nelson, H; Montouchet, C; Ruff, L; Andersson, F
2016-02-10
In view of the increasing pressure on the UK's maternity units, new methods of labour induction are required to alleviate the burden on the National Health Service, while maintaining the quality of care for women during delivery. A model was developed to evaluate the resource use associated with misoprostol vaginal inserts (MVIs) and dinoprostone vaginal inserts (DVIs) for the induction of labour at term. The one-year Markov model estimated clinical outcomes in a hypothetical cohort of 1397 pregnant women (parous and nulliparous) induced with either MVI or DVI at Southmead Hospital, Bristol, UK. Efficacy and safety data were based on published and unpublished results from a phase III, double-blind, multicentre, randomised controlled trial. Resource use was modelled using data from labour induction during antenatal admission to patient discharge from Southmead Hospital. The model's sensitivity to key parameters was explored in deterministic multi-way and scenario-based analyses. Over one year, the model results indicated MVI use could lead to a reduction of 10,201 h (28.9%) in the time to vaginal delivery, and an increase of 121% and 52% in the proportion of women achieving vaginal delivery at 12 and 24 h, respectively, compared with DVI use. Inducing women with the MVI could lead to a 25.2% reduction in the number of midwife shifts spent managing labour induction and 451 fewer hospital bed days. These resource utilisation reductions may equate to a potential 27.4% increase in birthing capacity at Southmead Hospital, when using the MVI instead of the DVI. Resource use, in addition to clinical considerations, should be considered when making decisions about labour induction methods. Our model analysis suggests the MVI is an effective method for labour induction, and could lead to a considerable reduction in resource use compared with the DVI, thereby alleviating the increasing burden of labour induction in UK hospitals.
Reduction of variable-truncation artifacts from beam occlusion during in situ x-ray tomography
NASA Astrophysics Data System (ADS)
Borg, Leise; Jørgensen, Jakob S.; Frikel, Jürgen; Sporring, Jon
2017-12-01
Many in situ x-ray tomography studies require experimental rigs which may partially occlude the beam and cause parts of the projection data to be missing. In a study of fluid flow in porous chalk using a percolation cell with four metal bars, drastic streak artifacts arise in the filtered backprojection (FBP) reconstruction at certain orientations. Projections with non-trivial variable truncation caused by the metal bars are the source of these variable-truncation artifacts. To understand the artifacts, a mathematical model of variable-truncation data as a function of metal bar radius and distance to the sample is derived and verified numerically and with experimental data. The model accurately describes the arising variable-truncation artifacts across simulated variations of the experimental setup. Three variable-truncation artifact-reduction methods are proposed, all aimed at addressing the sinogram discontinuities that are shown to be the source of the streaks. The ‘reduction to limited angle’ (RLA) method simply keeps only non-truncated projections; the ‘detector-directed smoothing’ (DDS) method smooths the discontinuities; while the ‘reflexive boundary condition’ (RBC) method enforces a zero derivative at the discontinuities. Experimental results using both simulated and real data show that the proposed methods effectively reduce variable-truncation artifacts. The RBC method is found to provide the best artifact reduction and preservation of image features under both visual and quantitative assessment. The analysis and artifact-reduction methods are designed in the context of FBP reconstruction, motivated by a computational efficiency practical for large, real synchrotron data. While a specific variable-truncation case is considered, the proposed methods can be applied to general data cut-offs arising in different in situ x-ray tomography experiments.
Ly, Cheng
2013-10-01
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
NASA Astrophysics Data System (ADS)
Esmaeilzad, Armin; Khanlari, Karen
2018-07-01
As the number of degrees of freedom (DOFs) in structural dynamic problems grows, the analysis complexity and CPU time increase drastically. The condensation (or reduction) method is an efficient technique for reducing the size of the full model, i.e., the dimension of the structural matrices, by eliminating unimportant DOFs. After Guyan first presented the condensation method in 1965 for undamped structures, neglecting the dynamic effects of the mass term, various forms of dynamic condensation methods were developed to overcome this limitation. Moreover, researchers have tried to extend the dynamic condensation method to non-classically damped structures, whose dynamic reduction is far more complicated than that of undamped systems. The non-iterative method proposed in this paper, 'Maclaurin Expansion of the frequency response function in Laplace Domain' (MELD), is applied to the dynamic reduction of non-classically damped structures. The approach is demonstrated on four numerical examples of 2D bending-shear-axial frames with various numbers of stories and spans, and on a floating raft isolation system. The natural frequencies and dynamic responses of the models are compared before and after the dynamic reduction, showing acceptable accuracy and convergence in both cases. In addition, the proposed method is shown to be more accurate than several existing condensation methods.
Drag reduction in channel flow using nonlinear control
NASA Technical Reports Server (NTRS)
Keefe, Laurence R.
1993-01-01
Two nonlinear control schemes have been applied to the problem of drag reduction in channel flow. Both schemes have been tested using numerical simulations at a mass-flux Reynolds number of 4408, utilizing 2D nonlinear neutral modes for goal dynamics. The OGY method, which requires feedback, reduces drag to 60-80 percent of the turbulent value at the same Reynolds number, and employs forcing only within a thin region near the wall. The H-method, or model-based control, fails to achieve any drag reduction when starting from a fully turbulent initial condition, but shows potential for suppressing or retarding laminar-to-turbulent transition by imposing instead a transition to a low-drag, nonlinear traveling wave solution of the Navier-Stokes equation. The drag in this state corresponds to that achieved by the OGY method. Model-based control requires no feedback, but in experiments to date it has required that the forcing be imposed within a thicker layer than the OGY method. Control energy expenditures in both methods are small, representing less than 0.1 percent of the uncontrolled flow's energy.
Utterance independent bimodal emotion recognition in spontaneous communication
NASA Astrophysics Data System (ADS)
Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng
2011-12-01
In spontaneous face-to-face communication, emotional expressions are sometimes mixed with utterance-related expressions, which creates difficulties for emotion recognition. This article introduces methods for reducing the utterance influences in visual parameters for audio-visual-based emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). The utterance reduction is then accomplished by taking the residual between the real visual parameters and the outputs of the utterance-related visual parameters. To solve this problem, the article introduces a Fused Hidden Markov Model Inversion method trained on a neutral-expression audio-visual corpus. To reduce the computational complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, Boosting), the utterance reduction method gives better emotion recognition results. The experiments also show the effectiveness of our emotion recognition system when used in a live environment.
Fowler, Nicholas J.; Blanford, Christopher F.
2017-01-01
Abstract Blue copper proteins, such as azurin, show dramatic changes in Cu2+/Cu+ reduction potential upon mutation over the full physiological range. Hence, they have important functions in electron transfer and oxidation chemistry and have applications in industrial biotechnology. The details of what determines these reduction potential changes upon mutation are still unclear. Moreover, it has been difficult to model and predict the reduction potential of azurin mutants and currently no unique procedure or workflow pattern exists. Furthermore, high‐level computational methods can be accurate but are too time consuming for practical use. In this work, a novel approach for calculating reduction potentials of azurin mutants is shown, based on a combination of continuum electrostatics, density functional theory and empirical hydrophobicity factors. Our method accurately reproduces experimental reduction potential changes of 30 mutants with respect to wildtype within experimental error and highlights the factors contributing to the reduction potential change. Finally, reduction potentials are predicted for a series of 124 new mutants that have not yet been investigated experimentally. Several mutants are identified that are located well over 10 Å from the copper center that change the reduction potential by more than 85 mV. The work shows that secondary coordination sphere mutations mostly lead to long‐range electrostatic changes and hence can be modeled accurately with continuum electrostatics. PMID:28815759
NASA Astrophysics Data System (ADS)
Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia
2017-10-01
Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. In this study, reduced-order modelling (ROM) is applied to the boundary value problem on the micro-scale for the geometrically nonlinear case with hyperelastic materials. This involves Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Three hyper-reduction methods, differing in how the nonlinearity is approximated and in the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation- or Gappy-POD-based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors, GNAT) is favoured to obtain an optimal projection and a robust reduced model.
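The POD ingredient mentioned above has a compact generic form. The sketch below builds a POD basis from a synthetic snapshot matrix via the thin SVD; the snapshots and the energy criterion are illustrative stand-ins, not the authors' micro-scale data or hyper-reduction scheme:

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Compute a POD (Proper Orthogonal Decomposition) basis from a
    snapshot matrix (n_dof x n_snapshots), keeping enough left singular
    vectors to capture the requested fraction of snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :k]

# toy snapshot set: solutions living on a three-dimensional manifold
rng = np.random.default_rng(1)
modes = rng.standard_normal((200, 3))           # three "true" modes
coeffs = rng.standard_normal((3, 50))
S = modes @ coeffs                              # 200 dof, 50 snapshots
V = pod_basis(S)
# a reduced solve then seeks u ~ V @ q with q of size V.shape[1]
reconstruction_error = np.linalg.norm(S - V @ (V.T @ S)) / np.linalg.norm(S)
```

In the nested homogenisation setting, the snapshot columns would be micro-scale displacement solutions collected offline; the reduced coordinates q are what the online phase solves for.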
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data, owing to the curse of dimensionality, which negatively influences the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Unlike the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method in supervised, unsupervised, and semisupervised scenarios.
NOx Emission Reduction and its Effects on Ozone during the 2008 Olympic Games
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qing; Wang, Yuhang; Zhao, Chun
2011-07-15
We applied a daily-assimilated inversion method to estimate NOx (NO+NO2) emissions for June-September 2007 and 2008 on the basis of the Aura Ozone Monitoring Instrument (OMI) observations of nitrogen dioxide (NO2) and model simulations using the Regional chEmistry and trAnsport Model (REAM). Over urban Beijing, rural Beijing, and the Huabei Plain, OMI column NO2 reductions are approximately 45%, 33%, and 14%, respectively, while the corresponding anthropogenic NOx emission reductions are only 28%, 24%, and 6%, during the full emission control period (July 20 – Sep 20, 2008). The emission reduction began in early July and was in full force by July 20, corresponding to the scheduled implementation of emission controls over Beijing. The emissions did not appear to recover after the emission control period. Meteorological change from summer 2007 to 2008 is the main factor contributing to the column NO2 decreases not accounted for by the emission reduction. Model simulations suggest that the effect of emission reduction on ozone concentrations over Beijing is relatively minor using a standard VOC emission inventory in China. With an adjustment of the model emissions to reflect in situ observations of VOCs in Beijing, the model simulation suggests a larger effect of the emission reduction.
Koch, Ina; Nöthen, Joachim; Schleiff, Enrico
2017-01-01
Motivation: Arabidopsis thaliana is a well-established model system for the analysis of the basic physiological and metabolic pathways of plants. Nevertheless, the system is not yet fully understood, although many mechanisms are described and information for many processes exists. The combination and interpretation of the large amount of biological data remain a big challenge, not only because data sets for metabolic pathways are still incomplete, but also because they are often inconsistent, coming as they do from different experiments at various scales of accuracy and/or significance. Here, theoretical modeling is a powerful way to formulate hypotheses about pathways and the dynamics of the metabolism, even when the biological data are incomplete. To be reliable, mathematical models have to be checked for consistency, which is still a challenging task because many verification techniques already fail for middle-sized models. Consequently, new methods, such as decomposition and reduction approaches, are being developed to circumvent this problem. Methods: We present a new semi-quantitative mathematical model of the metabolism of Arabidopsis thaliana. We used the Petri net formalism to express the complex reaction system in a mathematically unique manner. To verify the model for correctness and consistency we applied concepts of network decomposition and network reduction such as transition invariants, common transition pairs, and invariant transition pairs. Results: We formulated the core metabolism of Arabidopsis thaliana based on recent knowledge from the literature, including the Calvin cycle, glycolysis and citric acid cycle, glyoxylate cycle, urea cycle, sucrose synthesis, and starch metabolism. By applying network decomposition and reduction techniques at steady-state conditions, we suggest a straightforward mathematical modeling process.
We demonstrate that potential steady-state pathways exist which provide fixed carbon to nearly all parts of the network, especially the citric acid cycle. Important metabolic pathways cooperate closely, e.g., the de novo synthesis of uridine-5-monophosphate, the γ-aminobutyric acid shunt, and the urea cycle. The presented approach extends the established methods toward a feasible interpretation of biological network models, in particular large and complex models.
Bellesi, Luca; Wyttenbach, Rolf; Gaudino, Diego; Colleoni, Paolo; Pupillo, Francesco; Carrara, Mauro; Braghetti, Antonio; Puligheddu, Carla; Presilla, Stefano
2017-01-01
The aim of this work was to evaluate the detection of low-contrast objects and image quality in computed tomography (CT) phantom images acquired at different tube loadings (i.e. mAs) and reconstructed with different algorithms, in order to find settings that reduce the dose to the patient without any image detriment. Images of supraslice low-contrast objects of a CT phantom were acquired using different mAs values. Images were reconstructed using filtered back projection (FBP), hybrid, and iterative model-based methods. Image quality parameters were evaluated in terms of modulation transfer function, noise, and uniformity using two software resources. For the definition of low-contrast detectability, studies based on both human (i.e. four-alternative forced-choice test) and model observers were performed across the various images. Compared to FBP, image quality parameters were improved by using iterative reconstruction (IR) algorithms. In particular, IR model-based methods provided a 60% noise reduction and a 70% dose reduction, preserving image quality and low-contrast detectability for human radiological evaluation. According to the model observer, the diameters of the minimum detectable detail were around 2 mm (up to 100 mAs); below 100 mAs, the model observer was unable to provide a result. IR methods improve CT protocol quality, providing a potential dose reduction while maintaining good image detectability. The model observer can in principle be useful to assist human performance in CT low-contrast detection tasks and in dose optimisation.
Automated Design Tools for Integrated Mixed-Signal Microsystems (NeoCAD)
2005-02-01
Topics include fast time-domain mixed-signal circuit simulation (HAARSPICE algorithms), Model Order Reduction (MOR) tools, system-level mixed-signal circuit synthesis and optimization tools, and parasitic extraction tools for the IC design flow. Mission Area: Command and Control.
A Fourier dimensionality reduction model for big data interferometric imaging
NASA Astrophysics Data System (ADS)
Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves
2017-06-01
Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. 
MATLAB code implementing the proposed reduction method is available on GitHub.
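The dirty-image-then-FFT structure of the reduction can be sketched in a few lines. This is purely illustrative: the measurement operator below is a random toy, and the magnitude-based coefficient selection is a placeholder for the paper's weighting, which is derived from the left singular vectors of the actual measurement operator:

```python
import numpy as np

def fourier_reduce(vis, Phi, img_shape, n_keep):
    """Toy reduction pipeline: dirty image Phi^H y, then a 2D FFT whose
    coefficients are subsampled into a short reduced data vector.
    The selection rule (largest magnitude) is a hypothetical stand-in."""
    dirty = (Phi.conj().T @ vis).reshape(img_shape)
    coeffs = np.fft.fft2(dirty).ravel()
    keep = np.argsort(np.abs(coeffs))[::-1][:n_keep]
    return coeffs[keep], keep

rng = np.random.default_rng(0)
n_pix, m_vis = 16 * 16, 1024
Phi = rng.standard_normal((m_vis, n_pix)) + 1j * rng.standard_normal((m_vis, n_pix))
vis = Phi @ rng.standard_normal(n_pix)          # synthetic visibilities
reduced, idx = fourier_reduce(vis, Phi, (16, 16), n_keep=64)
```

The point of the construction is visible even in the toy: the reduced vector (64 entries) is shorter than the image (256 pixels), whereas the raw visibilities (1024) are longer, so each iteration of an imaging algorithm touches far less data.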
Data-Driven Model Reduction and Transfer Operator Approximation
NASA Astrophysics Data System (ADS)
Klus, Stefan; Nüske, Feliks; Koltai, Péter; Wu, Hao; Kevrekidis, Ioannis; Schütte, Christof; Noé, Frank
2018-06-01
In this review paper, we will present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities such as time-lagged independent component analysis, dynamic mode decomposition, and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
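Of the methods surveyed, dynamic mode decomposition has a particularly compact formulation. The following is a minimal sketch of exact DMD on a toy two-mode linear system (illustrative data only, not from the review):

```python
import numpy as np

def dmd(X, Y, r):
    """Exact dynamic mode decomposition: for snapshot pairs satisfying
    Y ~ A X, return the eigenvalues and modes of the rank-r best-fit
    linear operator A."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s   # r x r projected operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ Vh.conj().T / s) @ W            # exact DMD modes
    return eigvals, modes

# toy linear system x_{k+1} = A x_k with eigenvalues 0.9 and 0.5
A = np.diag([0.9, 0.5])
x = np.array([1.0, 1.0])
snaps = [x.copy()]
for _ in range(20):
    x = A @ x
    snaps.append(x.copy())
S = np.array(snaps).T                            # 2 x 21 snapshot matrix
eigvals, modes = dmd(S[:, :-1], S[:, 1:], r=2)
```

For this linear toy, the recovered DMD eigenvalues coincide with the eigenvalues of A; for nonlinear systems they approximate eigenvalues of the transfer (Koopman) operator discussed in the review.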
Feasibility study for automatic reduction of phase change imagery
NASA Technical Reports Server (NTRS)
Nossaman, G. O.
1971-01-01
The feasibility of automatically reducing a form of pictorial aerodynamic heating data is discussed. The imagery, depicting the melting history of a thin coat of fusible temperature indicator painted on an aerodynamically heated model, was previously reduced by manual methods. Careful examination of various lighting theories and approaches led to an experimentally verified illumination concept capable of yielding high-quality imagery. Both digital and video image processing techniques were applied to reduction of the data, and it was demonstrated that either method can be used to develop superimposed contours. Mathematical techniques were developed to find the model-to-image and the inverse image-to-model transformation using six conjugate points, and methods were developed using these transformations to determine heating rates on the model surface. A video system was designed which is able to reduce the imagery rapidly, economically and accurately. Costs for this system were estimated. A study plan was outlined whereby the mathematical transformation techniques developed to produce model coordinate heating data could be applied to operational software, and methods were discussed and costs estimated for obtaining the digital information necessary for this software.
Modeling hexavalent chromium removal in a Bacillus sp. fixed-film bioreactor.
Nkhalambayausi-Chirwa, Evans M; Wang, Yi-Tin
2004-09-30
A one-dimensional diffusion-reaction model was developed to simulate Cr(VI) reduction in a Bacillus sp. pure culture biofilm reactor with glucose as the sole supplied carbon and energy source. Substrate utilization and Cr(VI) reduction in the biofilm were best represented by a system of (second-order) partial differential equations (PDEs). The PDE system was solved by the (fourth-order) Runge-Kutta method, adjusted for mass transport resistance using the (second-order) Crank-Nicolson and backward Euler finite difference methods. A heuristic procedure (genetic search algorithm) was used to find globally optimal values of the Cr(VI) reduction and substrate utilization rate kinetic parameters. The fixed-film bioreactor system yielded higher values of the maximum specific Cr(VI) reduction rate coefficient and Cr(VI) reduction capacity (kmc = 0.062 1/h and Rc = 0.13 mg/mg, respectively) than previously determined in batch reactors (kmc = 0.022 1/h and Rc = 0.012 mg/mg). The model predicted effluent Cr(VI) concentration well, with 98.9% confidence (sigmay2 = 2.37 mg2/L2, N = 119), and effluent glucose with 96.4% confidence (sigmay(w)2 = 5402 mg2/L2, N = 121, w = 100), over a wide range of Cr(VI) loadings (10-498 mg Cr(VI)/L/d). Copyright 2004 Wiley Periodicals, Inc.
Potential reductions in ambient NO2 concentrations from meeting diesel vehicle emissions standards
NASA Astrophysics Data System (ADS)
von Schneidemesser, Erika; Kuik, Friderike; Mar, Kathleen A.; Butler, Tim
2017-11-01
Exceedances of the concentration limit value for ambient nitrogen dioxide (NO2) at roadside sites are an issue in many cities throughout Europe. This is linked to light-duty diesel vehicles, whose on-road emissions are far greater than the regulatory standards allow, and it has substantial implications for human health and economic loss. This study explores the possible gains in ambient air quality if light-duty diesel vehicles were able to meet the regulatory standards (considering emissions standards from both Europe and the United States). We use two independent methods, a measurement-based and a model-based method, with the city of Berlin as a case study. The measurement-based method used data from 16 monitoring stations throughout the city of Berlin to estimate annual average reductions in roadside NO2 of 9.0 to 23 µg m-3 and in urban background NO2 concentrations of 1.2 to 2.7 µg m-3; these ranges account for differences in fleet composition assumptions and in the stringency of the regulatory standard. The model simulations showed reductions in urban background NO2 of 2.0 µg m-3, and at the scale of the greater Berlin area of 1.6 to 2.0 µg m-3, depending on the setup of the simulation and the resolution of the model. Similar results were found for other European cities. The agreement between the measurement- and model-based methods supports conclusions that are not dependent on the assumptions behind either methodology. The results show the significant potential for NO2 reductions if regulatory standards for light-duty diesel vehicles were met under real-world operating conditions. Such reductions could help improve air quality by reducing NO2 exceedances in urban areas, with broader implications for improvements in human health and other benefits.
Assessment of methods for methyl iodide emission reduction and pest control using a simulation model
NASA Astrophysics Data System (ADS)
Luo, Lifang; Ashworth, Daniel J.; Šimunek, Jirka; Xuan, Richeng; Yates, Scott R.
2013-02-01
The increasing registration of the fumigant methyl iodide (MeI) within the USA has raised concerns about its toxicity to workers and bystanders. Emission mitigation strategies are needed to protect public and environmental health while providing effective pest control. In this study, the effectiveness of various methods for emission reduction and pest control was assessed using a process-based mathematical model. First, comparisons between simulated and laboratory-measured emission fluxes and cumulative emissions were made for MeI under four emission reduction treatments: 1) control, 2) using soil with high organic matter content (HOM), 3) covering with virtually impermeable film (VIF), and 4) irrigating the soil surface following fumigation (Irrigation). The model was then extended to simulate a broader range of emission reduction strategies for MeI, including 5) covering with high density polyethylene (HDPE), 6) increasing the injection depth from 30 cm to 46 cm (Deep), 7) HDPE + Deep, 8) adding a reagent at the soil surface (Reagent), 9) Reagent + Irrigation, and 10) Reagent + HDPE. Furthermore, the survivability of three types of soil-borne pests (citrus nematodes [Tylenchulus semipenetrans], barnyard grass seeds [Echinochloa crus-galli], and fungi [Fusarium oxysporum]) was estimated for each scenario. Overall, the trends of the measured emission fluxes as well as the total emissions were reasonably reproduced by the model for treatments 1 through 4. Based on the numerical simulations, the ranking of effectiveness in total emission reduction was VIF (82.4%) > Reagent + HDPE (73.2%) > Reagent + Irrigation (43.0%) > Reagent (23.5%) > Deep + HDPE (19.3%) > HOM (17.6%) > Deep (13.0%) > Irrigation (11.9%) > HDPE (5.8%). For pest control, VIF had the highest efficacy, followed by Deep + HDPE, Irrigation, Reagent + Irrigation, HDPE, Deep, Reagent + HDPE, Reagent, and HOM.
Therefore, disregarding the cost of the film, VIF is the optimal method, since it maximizes pest control efficacy while minimizing volatilization losses. Otherwise, integrated methods such as Deep + HDPE and Reagent + Irrigation are recommended.
A Control Concept for Large Flexible Spacecraft Using Order Reduction Techniques
NASA Technical Reports Server (NTRS)
Thieme, G.; Roth, H.
1985-01-01
Results found during the investigation of control problems of large flexible spacecraft are given. A triple plate configuration of such a spacecraft is defined and studied. The model is described by modal data derived from finite element modeling. The order reduction method applied is briefly described. An attitude control concept with low- and high-authority control has been developed to design an attitude controller for the reduced model. The stability and response of the original system together with the reduced controller are analyzed.
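The abstract does not specify the order reduction method used, so as a generic illustration only, here is a simple modal truncation of a toy state-space model that keeps the slowest (control-relevant) modes; the diagonal system has real modes, whereas real flexible structures with complex mode pairs need more careful treatment:

```python
import numpy as np

def modal_truncation(A, B, C, k):
    """Keep the k slowest-decaying eigenmodes (largest real part of the
    eigenvalues) of a state-space model (A, B, C)."""
    eigvals, V = np.linalg.eig(A)
    order = np.argsort(eigvals.real)[::-1][:k]   # dominant modes first
    Vk = V[:, order]
    P = np.linalg.pinv(Vk)                       # projector onto kept modes
    return P @ A @ Vk, P @ B, C @ Vk

# toy system: two slow structural modes, two fast ones to discard
A = np.diag([-0.1, -0.5, -50.0, -80.0])
B = np.ones((4, 1))
C = np.ones((1, 4))
Ar, Br, Cr = modal_truncation(A, B, C, k=2)
```

Designing the controller against (Ar, Br, Cr) and then checking it against the full model is exactly the stability-and-response verification loop the abstract describes.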
ERIC Educational Resources Information Center
Pustejovsky, James E.; Runyon, Christopher
2014-01-01
Direct observation recording procedures produce reductive summary measurements of an underlying stream of behavior. Previous methodological studies of these recording procedures have employed simulation methods for generating random behavior streams, many of which amount to special cases of a statistical model known as the alternating renewal…
A Consensus Method to Reduce Conflict
ERIC Educational Resources Information Center
Main, Allen P.; Roark, Albert E.
1975-01-01
Describes a five-step method of conflict reduction suitable for use by practicing counselors. Presents the model in how-to-do-it fashion, supplementing it with illustrations. Describes reactions of eight counselors who used the model in 37 conflict cases. Presents responses of the persons involved in the conflicts. (Author)
Automatic Black-Box Model Order Reduction using Radial Basis Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stephanson, M B; Lee, J F; White, D A
Finite element methods have long made use of model order reduction (MOR), particularly in the context of fast frequency sweeps. In this paper, we discuss a black-box MOR technique, applicable to many solution methods and not restricted to spectral responses. We also discuss automated methods for generating a reduced-order model that meets a given error tolerance. Numerical examples demonstrate the effectiveness and wide applicability of the method. With the advent of improved computing hardware and numerous fast solution techniques, the field of computational electromagnetics has progressed rapidly in terms of the size and complexity of problems that can be solved. Numerous applications, however, require the solution of a problem for many different configurations, including optimization, parameter exploration, and uncertainty quantification, where the parameters that may be changed include frequency, material properties, geometric dimensions, etc. In such cases, thousands of solutions may be needed, so solve times of even a few minutes can be burdensome. Model order reduction may alleviate this difficulty by creating a small model that can be evaluated quickly. Many MOR techniques have been applied to electromagnetic problems over the past few decades, particularly in the context of fast frequency sweeps. Recent works have extended these methods to allow more than one parameter and to allow the parameters to represent material and geometric properties. There are still limitations with these methods, however. First, they almost always assume that the finite element method is used to solve the problem, so that the system matrix is a known function of the parameters. Second, although some authors have presented adaptive methods (e.g., [2]), the order of the model is often determined before the MOR process begins, with little insight into what order is actually needed to reach the desired accuracy.
Finally, it is not clear how to efficiently extend most methods to the multiparameter case. This paper addresses the above shortcomings by developing a method that uses a black-box approach to the solution method, is adaptive, and is easily extensible to many parameters.
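The black-box idea, fitting an inexpensive surrogate to sampled responses without access to the system matrices, can be sketched with a one-parameter Gaussian radial basis function interpolant. The response curve, shape parameter, and regularization below are illustrative stand-ins, not the paper's algorithm or its adaptive error control:

```python
import numpy as np

def fit_gaussian_rbf(x_train, y_train, eps=3.0):
    """Interpolate sampled response values with Gaussian radial basis
    functions phi(r) = exp(-(eps*r)^2); returns a callable surrogate.
    A tiny ridge term guards against kernel-matrix ill-conditioning."""
    r = np.abs(x_train[:, None] - x_train[None, :])
    Phi = np.exp(-(eps * r) ** 2) + 1e-10 * np.eye(len(x_train))
    w = np.linalg.solve(Phi, y_train)
    def surrogate(x):
        rx = np.abs(np.asarray(x)[:, None] - x_train[None, :])
        return np.exp(-(eps * rx) ** 2) @ w
    return surrogate

f_train = np.linspace(0.0, 2.0, 15)             # sampled "frequencies"
resp = 1.0 / (1.0 + (f_train - 1.0) ** 2)       # stand-in smooth response
model = fit_gaussian_rbf(f_train, resp)
f_test = np.linspace(0.1, 1.9, 50)
approx = model(f_test)
exact = 1.0 / (1.0 + (f_test - 1.0) ** 2)
```

Once fitted, evaluating the surrogate costs a handful of exponentials per query, which is what makes thousands of configuration sweeps affordable; an adaptive variant would add training points where the surrogate's estimated error exceeds the tolerance.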
Prediction With Dimension Reduction of Multiple Molecular Data Sources for Patient Survival.
Kaplan, Adam; Lock, Eric F
2017-01-01
Predictive modeling from high-dimensional genomic data is often preceded by a dimension reduction step, such as principal component analysis (PCA). However, the application of PCA is not straightforward for multisource data, wherein multiple sources of 'omics data measure different but related biological components. In this article, we use recent advances in the dimension reduction of multisource data for predictive modeling. In particular, we apply exploratory results from Joint and Individual Variation Explained (JIVE), an extension of PCA for multisource data, to the prediction of differing response types. We conduct simulations to illustrate the practical advantages and interpretability of our approach. As an application example, we consider predicting survival for patients with glioblastoma multiforme from 3 data sources measuring messenger RNA expression, microRNA expression, and DNA methylation. We also introduce a method to estimate JIVE scores for new samples that were not used in the initial dimension reduction and study its theoretical properties; this method is implemented in the R package R.JIVE on CRAN, in the function jive.predict.
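Scoring held-out samples against a previously fitted decomposition, which jive.predict does for JIVE, has a simple single-source analogue in plain PCA, sketched below on synthetic data (illustrative only; JIVE additionally separates joint variation shared across sources from source-specific individual variation):

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on training data: column means plus top-k loadings."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T

def pca_scores(X_new, mu, V):
    """Score held-out samples against the previously fitted loadings,
    without refitting the decomposition."""
    return (X_new - mu) @ V

rng = np.random.default_rng(2)
X_train = rng.standard_normal((100, 20))   # synthetic training source
X_new = rng.standard_normal((5, 20))       # held-out samples
mu, V = pca_fit(X_train, k=3)
scores = pca_scores(X_new, mu, V)
```

The key property, shared with jive.predict, is that the new samples are projected onto a fixed basis learned from training data, so the low-dimensional scores can feed a survival model without information leaking from the test set into the dimension reduction.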
Gallandat, Karin; Daniels, Kyle; Desmarais, Anne Marie; Scheinman, Pamela; Lantagne, Daniele
2017-01-01
To prevent Ebola transmission, frequent handwashing is recommended in Ebola Treatment Units and communities. However, little is known about which handwashing protocol is most efficacious. We evaluated six handwashing protocols (soap and water, alcohol-based hand sanitizer (ABHS), and 0.05% sodium dichloroisocyanurate, high-test hypochlorite, and stabilized and non-stabilized sodium hypochlorite solutions) for 1) efficacy of handwashing on the removal and inactivation of non-pathogenic model organisms and, 2) persistence of organisms in rinse water. Model organisms E. coli and bacteriophage Phi6 were used to evaluate handwashing with and without organic load added to simulate bodily fluids. Hands were inoculated with test organisms, washed, and rinsed using a glove juice method to retrieve remaining organisms. Impact was estimated by comparing the log reduction in organisms after handwashing to the log reduction without handwashing. Rinse water was collected to test for persistence of organisms. Handwashing resulted in a 1.94–3.01 log reduction in E. coli concentration without, and 2.18–3.34 with, soil load; and a 2.44–3.06 log reduction in Phi6 without, and 2.71–3.69 with, soil load. HTH performed most consistently well, with significantly greater log reductions than other handwashing protocols in three models. However, the magnitude of handwashing efficacy differences was small, suggesting protocols are similarly efficacious. Rinse water demonstrated a 0.28–4.77 log reduction in remaining E. coli without, and 0.21–4.49 with, soil load and a 1.26–2.02 log reduction in Phi6 without, and 1.30–2.20 with, soil load. Chlorine resulted in significantly less persistence of E. coli in both conditions and Phi6 without soil load in rinse water (p<0.001). Thus, chlorine-based methods may offer a benefit of reducing persistence in rinse water. 
We recommend responders use the most practical handwashing method to ensure hand hygiene in Ebola contexts, considering the potential benefit of chlorine-based methods in rinse water persistence. PMID:28231311
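The log reductions quoted above are base-10 ratios of organism counts before and after washing. A one-line helper makes the arithmetic explicit (the counts in the test are made up, not the study's data):

```python
import math

def log_reduction(count_before, count_after):
    """Log10 reduction value (LRV): the number of tenfold drops in
    organism concentration attributable to the intervention."""
    return math.log10(count_before) - math.log10(count_after)
```

So a wash that takes E. coli from 10^6 to 10^3 CFU corresponds to a 3-log reduction, in the same units as the 1.94-3.69 log reductions reported above.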
Applying multi-criteria decision-making to improve the waste reduction policy in Taiwan.
Su, Jun-Pin; Hung, Ming-Lung; Chao, Chia-Wei; Ma, Hwong-wen
2010-01-01
Over the past two decades, the waste reduction problem has been a major issue in environmental protection, and both recycling and waste reduction policies have become increasingly important. As the complexity of decision-making has increased, it has become evident that more factors must be considered in the development and implementation of policies aimed at resource recycling and waste reduction. Many previous studies have focused on waste management to the exclusion of waste reduction; this study pays particular attention to waste reduction. Social, economic, and management aspects of waste treatment policies were considered, and a life-cycle assessment model was applied as an evaluation system for the environmental aspect. Results of both quantitative and qualitative analyses on the social, economic, and management aspects were integrated via the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method into a comprehensive multi-criteria decision-making (MCDM) support system. A case study evaluating the waste reduction policy in Taoyuan County is presented to demonstrate the feasibility of this model. In the case study, reinforcement of MSW sorting was shown to be the best practice. The model in this study can be applied to other cities faced with waste reduction problems.
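TOPSIS ranks alternatives by closeness to an ideal solution in a weighted, normalized criterion space. A generic textbook implementation of the method named in the abstract (the paper's actual criteria, weights, and scores are not reproduced here):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix  : alternatives x criteria scores
    weights : criterion weights (summing to 1)
    benefit : True where larger is better, False for cost criteria
    Returns relative closeness to the ideal; higher is better."""
    M = np.asarray(matrix, float)
    # vector-normalize each criterion, then apply the weights
    V = M / np.linalg.norm(M, axis=0) * np.asarray(weights, float)
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)
```

An alternative that dominates on every benefit criterion scores 1; one dominated on every criterion scores 0.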
NASA Astrophysics Data System (ADS)
Ikeura, Takuro; Nozaki, Takayuki; Shiota, Yoichi; Yamamoto, Tatsuya; Imamura, Hiroshi; Kubota, Hitoshi; Fukushima, Akio; Suzuki, Yoshishige; Yuasa, Shinji
2018-04-01
Using macro-spin modeling, we studied the reduction in the write error rate (WER) of voltage-induced dynamic magnetization switching by enhancing the effective thermal stability of the free layer using a voltage-controlled magnetic anisotropy change. Marked reductions in WER can be achieved by introducing reverse bias voltage pulses both before and after the write pulse. This procedure suppresses the thermal fluctuations of magnetization in the initial and final states. The proposed reverse bias method can offer a new way of improving the writing stability of voltage-driven spintronic devices.
Widdowson, M.A.; Chapelle, F.H.; Brauner, J.S.; ,
2003-01-01
A method is developed for optimizing monitored natural attenuation (MNA) and the reduction in the aqueous source zone concentration (ΔC) required to meet a site-specific regulatory target concentration. The mathematical model consists of two one-dimensional equations of mass balance for the aqueous phase contaminant, to coincide with up to two distinct zones of transformation, and appropriate boundary and intermediate conditions. The solution is written in terms of zone-dependent Peclet and Damköhler numbers. The model is illustrated at a chlorinated solvent site where MNA was implemented following source treatment using in-situ chemical oxidation. The results demonstrate that by not taking into account a variable natural attenuation capacity (NAC), a lower target ΔC is predicted, resulting in unnecessary source concentration reduction and cost with little benefit to achieving site-specific remediation goals.
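The two-zone structure can be illustrated with a simplified steady-state sketch: advection with first-order decay in two sequential zones, dispersion neglected. This is not the paper's full model (which retains dispersion and is posed via Peclet and Damköhler numbers); the rates and lengths below are invented for illustration.

```python
import math

def two_zone_concentration(C0, v, ks, Ls, x):
    """Steady-state concentration at distance x downgradient of the
    source for advection (velocity v) with first-order decay, with
    different decay rates in two sequential zones:
        C(x) = C0 * exp(-k1*x/v)              for x <= L1
             = C(L1) * exp(-k2*(x - L1)/v)    for x >  L1
    ks = (k1, k2) are the zone decay rates, Ls = (L1, L2) the lengths."""
    k1, k2 = ks
    L1, _ = Ls
    if x <= L1:
        return C0 * math.exp(-k1 * x / v)
    C_L1 = C0 * math.exp(-k1 * L1 / v)
    return C_L1 * math.exp(-k2 * (x - L1) / v)
```

Treating the two zones as one (a single effective rate) misstates the concentration at the compliance point, which is the abstract's argument for a variable natural attenuation capacity.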
Fowler, Nicholas J; Blanford, Christopher F; Warwicker, Jim; de Visser, Sam P
2017-11-02
Blue copper proteins, such as azurin, show dramatic changes in Cu2+/Cu+ reduction potential upon mutation over the full physiological range. Hence, they have important functions in electron transfer and oxidation chemistry and have applications in industrial biotechnology. The details of what determines these reduction potential changes upon mutation are still unclear. Moreover, it has been difficult to model and predict the reduction potential of azurin mutants, and currently no unique procedure or workflow pattern exists. Furthermore, high-level computational methods can be accurate but are too time consuming for practical use. In this work, a novel approach for calculating reduction potentials of azurin mutants is shown, based on a combination of continuum electrostatics, density functional theory, and empirical hydrophobicity factors. Our method accurately reproduces experimental reduction potential changes of 30 mutants with respect to wild type within experimental error and highlights the factors contributing to the reduction potential change. Finally, reduction potentials are predicted for a series of 124 new mutants that have not yet been investigated experimentally. Several mutants are identified that are located well over 10 Å from the copper center yet change the reduction potential by more than 85 mV. The work shows that secondary coordination sphere mutations mostly lead to long-range electrostatic changes and hence can be modeled accurately with continuum electrostatics. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
Optimization of air injection parameters toward optimum fuel saving effect for ships
NASA Astrophysics Data System (ADS)
Lee, Inwon; Park, Seong Hyeon
2016-11-01
The air lubrication method is the most promising commercial strategy for the frictional drag reduction of ocean-going vessels. Air bubbles are injected through an array of holes or slots installed on the flat bottom surface of the vessel, and a sufficient supply of air is required to ensure the formation of a stable air layer by the coalescence of the bubbles. The air layer drag reduction becomes economically meaningful when the power gain through the drag reduction exceeds the pumping power consumption. In this study, a model ship of a 50k medium range tanker is employed to investigate the air lubrication method. The experiments were conducted in the 100 m long towing tank facility at Pusan National University. To create effective air lubrication with a lower air flow rate, various configurations, including the layout of injection holes, the employment of side fences, and static trim, have been tested. In the preliminary series of model tests, a maximum reduction of 18.13% (at 15 knots) in model resistance was achieved. This research was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MEST) through GCRC-SOP (Grant No. 2011-0030013).
Introduction of Energy and Climate Mitigation Policy Issues in Energy - Environment Model of Latvia
NASA Astrophysics Data System (ADS)
Klavs, G.; Rekis, J.
2016-12-01
The present research is aimed at contributing to Latvian national climate policy development by projecting total GHG emissions up to 2030, by evaluating the GHG emission reduction path in the non-ETS sector at different targets set for emissions reduction, and by evaluating the obtained results within the context of the obligations defined by the EU 2030 policy framework for climate and energy. The method used in the research was the bottom-up, linear programming optimisation model MARKAL, adapted as the MARKAL-Latvia model with improvements for perfecting the integrated assessment of climate policy. The modelling results in the baseline scenario, which reflects national economic development forecasts and comprises the existing GHG emissions reduction policies and measures, show that in 2030 emissions will increase by 19.1 % compared to 2005. GHG emissions stabilisation and reduction in 2030, compared to 2005, were researched in respective alternative scenarios. Detailed modelling and analysis of the Latvian situation according to the scenario of non-ETS sector GHG emissions stabilisation and reduction in 2030 compared to 2005 have revealed that, to implement a cost-effective strategy of GHG emissions reduction, a policy should first be developed that ensures effective absorption of the available energy efficiency potential in all consumer sectors. The next group of emissions reduction measures includes all non-ETS sectors (industry, services, agriculture, transport, and waste management).
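The cost-effective strategy search that a bottom-up model like MARKAL performs by linear programming reduces, in the simplest single-constraint case, to a merit-order rule: exhaust the cheapest abatement measures first. A toy sketch with invented measure names, costs, and potentials (not the MARKAL-Latvia data):

```python
def merit_order_dispatch(measures, target):
    """Pick the cheapest mix of reduction measures meeting a target.
    measures : iterable of (name, cost_per_tonne, max_potential)
    target   : total reduction required, same units as potential
    Returns [(name, amount_used), ...] in order of increasing cost."""
    plan, remaining = [], target
    for name, cost, potential in sorted(measures, key=lambda m: m[1]):
        take = min(potential, remaining)
        if take > 0:
            plan.append((name, take))
            remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("target exceeds total reduction potential")
    return plan
```

Full MARKAL-type models solve a much richer linear program (technology stocks, fuel balances, multiple periods), but the marginal-cost ordering above is the intuition behind "absorb the energy efficiency potential first."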
Identification of cracks in thick beams with a cracked beam element model
NASA Astrophysics Data System (ADS)
Hou, Chuanchuan; Lu, Yong
2016-12-01
The effect of a crack on the vibration of a beam is a classical problem, and various models have been proposed, ranging from the basic stiffness reduction method to more sophisticated models formulated from the additional flexibility due to a crack. However, in damage identification or finite element model updating applications, it is still common practice to employ a simple stiffness reduction factor to represent a crack in the identification process, whereas the use of a more realistic crack model is rather limited. In this paper, the issues with the simple stiffness reduction method, particularly concerning thick beams, are highlighted along with a review of several other crack models. A robust finite element model updating procedure is then presented for the detection of cracks in beams. The description of the crack parameters is based on the cracked beam flexibility formulated by means of fracture mechanics, and it takes into consideration shear deformation and coupling between translational and longitudinal vibrations, and thus is particularly suitable for thick beams. The identification procedure employs a global searching technique using Genetic Algorithms, and there is no restriction on the location, severity, or number of cracks to be identified. The procedure is verified to yield satisfactory identification for practically any configuration of cracks in a beam.
Tobacco Town: Computational Modeling of Policy Options to Reduce Tobacco Retailer Density
Luke, Douglas A.; Hammond, Ross A.; Combs, Todd; Sorg, Amy; Kasman, Matt; Mack-Crane, Austen; Ribisl, Kurt M.; Henriksen, Lisa
2017-01-01
Objectives To identify the behavioral mechanisms and effects of tobacco control policies designed to reduce tobacco retailer density. Methods We developed the Tobacco Town agent-based simulation model to examine 4 types of retailer reduction policies: (1) random retailer reduction, (2) restriction by type of retailer, (3) limiting proximity of retailers to schools, and (4) limiting proximity of retailers to each other. The model examined the effects of these policies alone and in combination across 4 different types of towns, defined by 2 levels of population density (urban vs suburban) and 2 levels of income (higher vs lower). Results Model results indicated that reduction of retailer density has the potential to decrease accessibility of tobacco products by driving up search and purchase costs. Policy effects varied by town type: proximity policies worked better in dense, urban towns whereas retailer type and random retailer reduction worked better in less-dense, suburban settings. Conclusions Comprehensive retailer density reduction policies have excellent potential to reduce the public health burden of tobacco use in communities. PMID:28398792
NASA Astrophysics Data System (ADS)
Nasri, Mohamed Aziz; Robert, Camille; Ammar, Amine; El Arem, Saber; Morel, Franck
2018-02-01
The numerical modelling of the behaviour of materials at the microstructural scale has been greatly developed over the last two decades. Unfortunately, conventional resolution methods cannot simulate polycrystalline aggregates beyond tens of loading cycles, and they do not remain quantitative owing to the plastic behaviour. This work presents the development of a numerical solver for the resolution of finite element models of polycrystalline aggregates subjected to cyclic mechanical loading. The method is based on two concepts. The first consists in maintaining a constant stiffness matrix. The second uses a time/space model reduction method. In order to analyse the applicability and performance of a space-time separated representation, simulations are carried out on a three-dimensional polycrystalline aggregate under cyclic loading. Different numbers of elements per grain and two time increments per cycle are investigated. The results show a significant CPU time saving while maintaining good precision. Moreover, as the number of elements and the number of time increments per cycle increase, the advantage of the model reduction method over the standard solver grows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the complex uncertainty and multiple physical scales present in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR.
• Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.
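A first-order HDMR expands a multivariate function into a constant plus univariate component functions; in the least-squares variant, the coefficients of the component functions are fit globally from samples. A minimal single-element sketch (the paper's method additionally partitions the random domain adaptively into multiple elements):

```python
import numpy as np

def first_order_hdmr(f, samples, degree=3):
    """Fit a first-order HDMR surrogate f(x) ~ f0 + sum_i f_i(x_i),
    with each f_i a polynomial of the given degree, by a global
    least-squares fit to the sampled values."""
    X = np.asarray(samples, float)            # N x d sample points
    y = np.array([f(x) for x in X])
    N, d = X.shape
    # design matrix: constant term plus per-dimension monomials
    cols = [np.ones(N)]
    for i in range(d):
        for p in range(1, degree + 1):
            cols.append(X[:, i] ** p)
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def surrogate(x):
        feats = [1.0]
        for i in range(d):
            feats += [x[i] ** p for p in range(1, degree + 1)]
        return float(np.asarray(feats) @ coef)

    return surrogate
```

Functions with strong variable interactions need higher-order HDMR terms (or the adaptive multi-element splitting the paper proposes), but additively separable responses are captured exactly.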
Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck
2014-05-01
Modeling complex vibroacoustic systems that include poroelastic materials using finite element based methods can be unfeasible for practical applications. For this reason, analytical approaches such as the transfer matrix method are often preferred to obtain a quick estimation of the vibroacoustic parameters. However, the strong assumptions inherent in the transfer matrix method lead to a lack of accuracy in the description of the geometry of the system. As a result, the transfer matrix method is inherently limited to the high frequency range. Nowadays, hybrid substructuring procedures have become quite popular: different modeling techniques are typically sought to describe complex vibroacoustic systems over the widest possible frequency range. The flexibility and accuracy of the finite element method and the efficiency of the transfer matrix method can thus be coupled in a hybrid technique to reduce the computational burden. In this work, such a hybrid methodology is proposed. The performance of the method in predicting the vibroacoustic indicators of flat structures with attached homogeneous acoustic treatments is assessed. The results prove that, under certain conditions, the hybrid model allows for a reduction of the computational effort while preserving sufficient accuracy with respect to the full finite element solution.
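The transfer matrix method chains one 2×2 matrix per layer, relating pressure and normal velocity across the acoustic treatment; this is the inexpensive half of the hybrid pairing described above. A normal-incidence sketch for equivalent-fluid layers (illustrative parameters; poroelastic layers in the paper require larger matrices):

```python
import numpy as np

def layer_matrix(k, Z, h):
    """2x2 transfer matrix of a homogeneous fluid layer of thickness h,
    wavenumber k, and characteristic impedance Z (normal incidence)."""
    kh = k * h
    return np.array([[np.cos(kh), 1j * Z * np.sin(kh)],
                     [1j * np.sin(kh) / Z, np.cos(kh)]])

def absorption_rigid_backed(layers, Z0):
    """Normal-incidence absorption coefficient of a layered treatment
    mounted on a rigid wall; layers is a list of (k, Z, h) tuples and
    Z0 is the impedance of the incident medium."""
    T = np.eye(2, dtype=complex)
    for k, Z, h in layers:
        T = T @ layer_matrix(k, Z, h)
    Zs = T[0, 0] / T[1, 0]  # rigid backing: velocity vanishes at the back
    R = (Zs - Z0) / (Zs + Z0)
    return 1.0 - abs(R) ** 2
```

A lossless layer on a rigid wall absorbs nothing (|R| = 1); dissipation enters through complex k and Z of the porous material model.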
Model reduction in integrated controls-structures design
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.
1993-01-01
It is the objective of this paper to present a model reduction technique developed for the integrated controls-structures design of flexible structures. Integrated controls-structures design problems are typically posed as nonlinear mathematical programming problems, where the design variables consist of both structural and control parameters. In the solution process, both structural and control design variables are constantly changing; therefore, the dynamic characteristics of the structure are also changing. This presents a problem in obtaining a reduced-order model for active control design and analysis which will be valid for all design points within the design space. In other words, the frequency and number of the significant modes of the structure (modes that should be included) may vary considerably throughout the design process. This is also true as the locations and/or masses of the sensors and actuators change. Moreover, since the number of design evaluations in the integrated design process could easily run into thousands, any feasible order-reduction method should not require model reduction analysis at every design iteration. In this paper a novel and efficient technique for model reduction in the integrated controls-structures design process, which addresses these issues, is presented.
ERIC Educational Resources Information Center
Armoni, Michal; Gal-Ezer, Judith
2005-01-01
When dealing with a complex problem, solving it by reduction to simpler problems, or problems for which the solution is already known, is a common method in mathematics and other scientific disciplines, as in computer science and, specifically, in the field of computability. However, when teaching computational models (as part of computability)…
Velikina, Julia V.; Samsonov, Alexey A.
2014-01-01
Purpose To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models pre-estimated from training data. Theory We introduce the MOdel Consistency COndition (MOCCO) technique that utilizes temporal models to regularize the reconstruction without constraining the solution to be low-rank as performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Methods Our method was compared to standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE MRA) and cardiac CINE imaging. We studied sensitivity of all methods to rank-reduction and temporal subspace modeling errors. Results MOCCO demonstrated reduced sensitivity to modeling errors compared to the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. Conclusions MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. PMID:25399724
Gamma model and its analysis for phase measuring profilometry.
Liu, Kai; Wang, Yongchang; Lau, Daniel L; Hao, Qi; Hassebrook, Laurence G
2010-03-01
Phase measuring profilometry is a method of structured light illumination whose three-dimensional reconstructions are susceptible to error from nonunitary gamma in the associated optical devices. While the effects of this distortion diminish with an increasing number of employed phase-shifted patterns, gamma distortion may be unavoidable in real-time systems where the number of projected patterns is limited by the presence of target motion. A mathematical model is developed for predicting the effects of nonunitary gamma on phase measuring profilometry, while also introducing an accurate gamma calibration method and two strategies for minimizing gamma's effect on phase determination. These phase correction strategies include phase corrections with and without gamma calibration. With the reduction in noise, for three-step phase measuring profilometry, analysis of the root mean squared error of the corrected phase will show a 60x reduction in phase error when the proposed gamma calibration is performed versus 33x reduction without calibration.
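For reference, the underlying three-step demodulation that gamma distortion corrupts: with patterns shifted by 2π/3, the wrapped phase follows from an arctangent of intensity differences. The sketch below uses ideal, gamma-free intensities; the paper's contribution is the gamma model and calibration layered on top of this step.

```python
import math

def three_step_phase(I0, I1, I2):
    """Wrapped phase from three intensities captured with patterns
    I_n = A + B*cos(phi + 2*pi*n/3), n = 0, 1, 2.
    Uses the identities sqrt(3)*(I2 - I1) = 3*B*sin(phi) / sqrt(3)
    and 2*I0 - I1 - I2 = 3*B*cos(phi)."""
    return math.atan2(math.sqrt(3.0) * (I2 - I1), 2.0 * I0 - I1 - I2)
```

A nonunitary gamma replaces each I_n by a power-law distortion of itself, injecting harmonics into the sinusoids and hence a periodic ripple into the recovered phase, which the proposed calibration removes.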
An alternative to Guyan reduction of finite-element models
NASA Technical Reports Server (NTRS)
Lin, Jiguan Gene
1988-01-01
Structural modeling is a key part of structural system identification for large space structures. Finite-element structural models are commonly used in practice because of their general applicability and availability. The initial models generated by using a standard computer program such as NASTRAN, ANSYS, SUPERB, STARDYNE, STRUDL, etc., generally contain tens of thousands of degrees of freedom. The models must be reduced for purposes of identification. Not only does the magnitude of the identification effort grow exponentially as a function of the number of degrees of freedom, but numerical procedures may also break down because of accumulated round-off errors. Guyan reduction is usually applied after a static condensation. Misapplication of Guyan reduction can lead to serious modeling errors, which is unfortunate, since the accuracy of the original detailed finite-element model one tries very hard to achieve is lost in the reduction. First, why and how Guyan reduction always causes loss of accuracy is examined. An alternative approach is then introduced. The alternative can be thought of as an improvement of Guyan reduction and the Rayleigh-Ritz method, and in particular of the recent algorithm of Wilson, Yuan, and Dickens. Unlike Guyan reduction, the use of the alternative does not require any special insight, experience, or skill for partitioning the structural degrees of freedom. In addition to model condensation, this alternative approach can also be used to predict analytically, quickly, and economically which structural modes are excitable by a force actuator at a given trial location. That is, in the excitation of the structural modes for identification, it can be used to guide the placement of the force actuators.
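Guyan reduction condenses the slave degrees of freedom statically: the transformation is exact for the stiffness but maps the mass through purely static shapes, which is precisely the source of the accuracy loss examined above. A minimal sketch of the standard condensation:

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Guyan (static) condensation of stiffness K and mass M onto the
    master DOFs. Slave DOFs are assumed to follow the masters through
    the static relation x_s = -Kss^{-1} Ksm x_m, so slave inertia is
    only approximated -- the weakness discussed in the article."""
    n = K.shape[0]
    master = np.asarray(master)
    slave = np.setdiff1d(np.arange(n), master)
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    T = np.zeros((n, len(master)))
    T[master] = np.eye(len(master))
    T[slave] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T
```

On a two-DOF spring chain with K = [[2, -1], [-1, 1]] and unit masses, keeping only DOF 0 gives Kr = 1 and Mr = 2, so the reduced eigenvalue estimate is 0.5 versus the true lowest eigenvalue (3 − √5)/2 ≈ 0.382: the condensation overestimates the frequency, illustrating the loss of accuracy.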
NASA Astrophysics Data System (ADS)
Jansen Van Rensburg, G. J.; Kok, S.; Wilke, D. N.
2017-10-01
Different roll pass reduction schedules have different effects on the through-thickness properties of hot-rolled metal slabs. In order to assess or improve a reduction schedule using the finite element method, a material model is required that captures the relevant deformation mechanisms and physics. The model should also report relevant field quantities to assess variations in material state through the thickness of a simulated rolled metal slab. In this paper, a dislocation density-based material model with recrystallization is presented and calibrated on the material response of a high-strength low-alloy steel. The model has the ability to replicate and predict material response to a fair degree thanks to the physically motivated mechanisms it is built on. An example study is also presented to illustrate the possible effect different reduction schedules could have on the through-thickness material state and the ability to assess these effects based on finite element simulations.
Meng, Qing-chun; Rong, Xiao-xia; Zhang, Yi-min; Wan, Xiao-le; Liu, Yuan-yuan; Wang, Yu-zhi
2016-01-01
CO2 emission influences not only global climate change but also international economic and political situations. Thus, reducing the emission of CO2, a major greenhouse gas, has become a major issue in China and around the world as regards preserving the environmental ecology. Energy consumption from coal, oil, and natural gas is primarily responsible for the production of greenhouse gases and air pollutants such as SO2 and NOX, which are the main air pollutants in China. In this study, a mathematical multi-objective optimization method was adopted to analyze the collaborative emission reduction of three kinds of gases on the basis of their common restraints in different ways of energy consumption to develop an economic, clean, and efficient scheme for energy distribution. The first part introduces the background research, the collaborative emission reduction for three kinds of gases, the multi-objective optimization, the main mathematical modeling, and the optimization method. The second part discusses the four mathematical tools utilized in this study, which include the Granger causality test to analyze the causality between air quality and pollutant emission, a function analysis to determine the quantitative relation between energy consumption and pollutant emission, a multi-objective optimization to set up the collaborative optimization model that considers energy consumption, and an optimality condition analysis for the multi-objective optimization model to design the optimal-pole algorithm and obtain an efficient collaborative reduction scheme. In the empirical analysis, the data of pollutant emission and final consumption of energies of Tianjin in 1996-2012 was employed to verify the effectiveness of the model and analyze the efficient solution and the corresponding dominant set. In the last part, several suggestions for collaborative reduction are recommended and the drawn conclusions are stated.
System identification and model reduction using modulating function techniques
NASA Technical Reports Server (NTRS)
Shen, Yan
1993-01-01
Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are initiated for continuous-time system identification using Fourier type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of above algorithms is demonstrated by comparing them with the LS/MFT and popular predicting error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended into MIMO transfer function system identification problems. The impact due to the discrepancy in bandwidths and gains among subsystem is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high-noise rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated and the AWLS is found to be strongly favored in almost all facets.
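At its core, the WLS step solves the weighted normal equations; in the AWLS/MFT algorithm these are assembled from Fourier-modulated signal quantities and their banded covariances. A generic sketch of the estimator itself, on an ordinary regression for illustration:

```python
import numpy as np

def weighted_least_squares(A, b, W):
    """WLS estimate theta = (A^T W A)^{-1} A^T W b, where W is a
    (possibly banded) weight matrix, e.g. an inverse noise covariance."""
    AtW = A.T @ W
    return np.linalg.solve(AtW @ A, AtW @ b)
```

The adaptive variant (AWLS) re-estimates W from residuals between passes; with noise-free data any positive-definite W recovers the true parameters exactly.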
Xie, Yujing; Zhao, Laijun; Xue, Jian; Hu, Qingmi; Xu, Xiang; Wang, Hongbo
2016-12-15
How to effectively control severe regional air pollution has recently become a focus of global concern. The non-cooperative reduction model (NCRM) is still the main air pollution control pattern in China, but it is both ineffective and costly, because each province must independently fight air pollution. Thus, we proposed a cooperative reduction model (CRM), with the goal of maximizing the reduction in adverse health effects (AHEs) at the lowest cost by encouraging neighboring areas to jointly control air pollution. CRM has two parts: a model of optimal pollutant removal rates using two optimization objectives (maximizing the reduction in AHEs and minimizing pollutant reduction cost) while meeting the regional pollution control targets set by the central government, and a model that allocates the cooperation benefits (i.e., health improvement and cost reduction) among the participants according to their contributions using the Shapley value method. We applied CRM to the case of sulfur dioxide (SO2) reduction in the Yangtze River Delta region. Based on data from 2003 to 2013, and using mortality due to respiratory and cardiovascular diseases as the health endpoints, CRM saves 437 more lives than NCRM, amounting to 12.1% of the reduction under NCRM. CRM also reduces costs by US$65.8×10^6 compared with NCRM, which is 5.2% of the total cost of NCRM. Thus, CRM performs significantly better than NCRM. Each province obtains significant benefits from cooperation, which can motivate the provinces to cooperate actively in the long term. A sensitivity analysis was performed to quantify the effects of parameter values on the cooperation benefits. The results show that CRM is not sensitive to changes in each province's pollutant carrying capacity or minimum pollutant removal capacity, but is sensitive to the maximum pollutant reduction capacity. Moreover, higher cooperation benefits are generated when a province's maximum pollutant reduction capacity increases.
Copyright © 2016 Elsevier B.V. All rights reserved.
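The benefit-allocation step above rests on the Shapley value. A minimal, self-contained sketch of that computation follows; the three provinces "A", "B", "C" and all coalition savings are hypothetical stand-ins (only the grand-coalition figure of 65.8 echoes the abstract):

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Shapley value: average each player's marginal contribution
    over all possible join orders of the grand coalition."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += value[with_p] - value[coalition]
            coalition = with_p
    n_orders = factorial(len(players))
    return {p: v / n_orders for p, v in phi.items()}

# Hypothetical cooperation savings (US$ millions) for each coalition;
# provinces acting alone are assumed to save nothing over NCRM.
v = {
    frozenset(): 0.0,
    frozenset({"A"}): 0.0, frozenset({"B"}): 0.0, frozenset({"C"}): 0.0,
    frozenset({"A", "B"}): 20.0, frozenset({"A", "C"}): 30.0,
    frozenset({"B", "C"}): 10.0, frozenset({"A", "B", "C"}): 65.8,
}
alloc = shapley_values(["A", "B", "C"], v)
print(alloc)
```

By construction the allocations sum exactly to the grand-coalition benefit (efficiency), which is what lets each province see a concrete gain from cooperating.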
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao Yajun
A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward and effective to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.
Stream-wise distribution of skin-friction drag reduction on a flat plate with bubble injection
NASA Astrophysics Data System (ADS)
Qin, Shijie; Chu, Ning; Yao, Yan; Liu, Jingting; Huang, Bin; Wu, Dazhuan
2017-03-01
To investigate the stream-wise distribution of skin-friction drag reduction on a flat plate with bubble injection, both experiments and simulations of bubble drag reduction (BDR) were conducted. Drag reduction at various flow speeds and air injection rates was measured in cavitation tunnel experiments, with visualization of the bubble flow pattern carried out synchronously. A computational fluid dynamics (CFD) method, in the framework of Eulerian-Eulerian two-fluid modeling coupled with a population balance model (PBM), is used to simulate the bubbly flow along the flat plate. A wide range of bubble sizes, accounting for bubble breakup and coalescence, is modeled based on experimental bubble distribution images. Drag and lift forces are fully modeled using applicable closure models. Both the predicted drag reduction and the bubble distributions are in reasonable agreement with the experimental results. The stream-wise distribution of BDR is revealed based on the CFD-PBM numerical results. In particular, four distinct regions with different BDR characteristics are identified and discussed for the first time in this study, and the thresholds between regions are extracted and discussed. A full understanding of the stream-wise distribution of BDR is necessary to establish a universal scaling law. Moreover, the mechanism of the stream-wise distribution of BDR is analysed based on the near-wall flow parameters: the local drag reduction is a direct result of the near-wall maximum void fraction, and the near-wall velocity gradient modified by the presence of bubbles is another important factor in bubble drag reduction.
Pathogen Reduction in Human Plasma Using an Ultrashort Pulsed Laser
Tsen, Shaw-Wei D.; Kingsley, David H.; Kibler, Karen; Jacobs, Bert; Sizemore, Sara; Vaiana, Sara M.; Anderson, Jeanne; Tsen, Kong-Thon; Achilefu, Samuel
2014-01-01
Pathogen reduction is a viable approach to ensure the continued safety of the blood supply against emerging pathogens. However, the currently licensed pathogen reduction techniques are ineffective against non-enveloped viruses such as hepatitis A virus, and they introduce chemicals that raise concerns about side effects, which prevents their widespread use. In this report, we demonstrate the inactivation of both enveloped and non-enveloped viruses in human plasma using a novel chemical-free method, a visible ultrashort pulsed laser. We found that laser treatment resulted in 2-log, 1-log, and 3-log reductions in human immunodeficiency virus, hepatitis A virus, and murine cytomegalovirus in human plasma, respectively. Laser-treated plasma showed ≥70% retention for most coagulation factors tested. Furthermore, laser treatment did not alter the structure of a model coagulation factor, fibrinogen. Ultrashort pulsed lasers are a promising new method for chemical-free, broad-spectrum pathogen reduction in human plasma. PMID:25372037
Wavelet median denoising of ultrasound images
NASA Astrophysics Data System (ADS)
Macey, Katherine E.; Page, Wyatt H.
2002-05-01
Ultrasound images are contaminated with both additive and multiplicative noise, which are modeled by Gaussian and speckle noise, respectively. Distinguishing small features such as fallopian tubes in the female genital tract in this noisy environment is problematic. A new method for noise reduction, Wavelet Median Denoising, is presented. Wavelet Median Denoising consists of performing a standard noise reduction technique, median filtering, in the wavelet domain. The new method is tested on 126 images, comprising 9 original images each with 14 levels of Gaussian or speckle noise. Results for both separable and non-separable wavelets are evaluated, relative to soft-thresholding in the wavelet domain, using the signal-to-noise ratio and subjective assessment. The performance of Wavelet Median Denoising is comparable to that of soft-thresholding. Both methods are more successful in removing Gaussian noise than speckle noise. Wavelet Median Denoising outperforms soft-thresholding for a larger number of cases of speckle noise reduction than of Gaussian noise reduction. Noise reduction is more successful using non-separable wavelets than separable wavelets. When both methods are applied to ultrasound images obtained from a phantom of the female genital tract, a small improvement is seen; however, a substantial improvement is required prior to clinical use.
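The core idea, median filtering in the wavelet domain, can be sketched with a single-level Haar transform standing in for the separable and non-separable wavelets studied in the paper; the test image, noise level, and 3×3 filter size are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def haar2(x):
    """One-level 2-D orthonormal Haar transform (even-sized input assumed)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # row lowpass
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # row highpass
    def cols(y):
        return ((y[:, 0::2] + y[:, 1::2]) / np.sqrt(2),
                (y[:, 0::2] - y[:, 1::2]) / np.sqrt(2))
    LL, LH = cols(a)
    HL, HH = cols(d)
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Inverse of haar2 (perfect reconstruction)."""
    def icols(lo, hi):
        y = np.empty((lo.shape[0], lo.shape[1] * 2))
        y[:, 0::2] = (lo + hi) / np.sqrt(2)
        y[:, 1::2] = (lo - hi) / np.sqrt(2)
        return y
    a, d = icols(LL, LH), icols(HL, HH)
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wavelet_median_denoise(img, size=3):
    """Median-filter the detail subbands only; the approximation band
    carries the large-scale features."""
    LL, LH, HL, HH = haar2(img)
    return ihaar2(LL, *(median_filter(b, size=size) for b in (LH, HL, HH)))

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                      # toy "feature"
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
den = wavelet_median_denoise(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((den - clean) ** 2))
```

On this toy image the mean squared error drops after denoising, since the median filter suppresses the Gaussian noise in the detail subbands.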
A Bayesian nonparametric approach to dynamical noise reduction
NASA Astrophysics Data System (ADS)
Kaloudis, Konstantinos; Hatjispyros, Spyridon J.
2018-06-01
We propose a Bayesian nonparametric approach for the noise reduction of a given chaotic time series contaminated by dynamical noise, based on Markov Chain Monte Carlo methods. The underlying unknown noise process possibly exhibits heavy-tailed behavior. We introduce the Dynamic Noise Reduction Replicator model, with which we reconstruct the unknown dynamic equations and in parallel replicate the dynamics under dynamical perturbations with a reduced noise level. The dynamic noise reduction procedure is demonstrated specifically in the case of polynomial maps. Simulations based on synthetic time series are presented.
A systematic way for the cost reduction of density fitting methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kállay, Mihály, E-mail: kallay@mail.bme.hu
2014-12-28
We present a simple approach for reducing the size of the auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting from the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.
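The NAF construction can be illustrated on a mock integral matrix; the dimensions and the synthetic, rapidly decaying spectrum below are assumptions, standing in for real three-center integrals (P|μν):

```python
import numpy as np

rng = np.random.default_rng(1)
naux, npair = 40, 64
# Mock three-center integral matrix J[P, mu*nu] with a rapidly decaying
# singular spectrum, standing in for real (P|mu nu) integrals.
Ua = np.linalg.qr(rng.standard_normal((naux, naux)))[0]
Va = np.linalg.qr(rng.standard_normal((npair, npair)))[0][:, :naux].T
J = Ua @ np.diag(np.logspace(0, -8, naux)) @ Va

U, s, Vt = np.linalg.svd(J, full_matrices=False)
keep = s / s[0] > 1e-4            # NAF truncation threshold (assumed value)
W = U[:, keep]                    # natural auxiliary functions (columns)
J_naf = W.T @ J                   # integrals re-expressed in the NAF basis
err = np.linalg.norm(J - W @ J_naf) / np.linalg.norm(J)
print(int(keep.sum()), "of", naux, "aux functions kept; rel. error", err)
```

The fitting basis shrinks while the integral matrix is reproduced to within the truncation threshold, which is the systematic-truncation property the abstract describes.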
A diffusion modelling approach to understanding contextual cueing effects in children with ADHD
Weigard, Alexander; Huang-Pollock, Cynthia
2014-01-01
Background: Strong theoretical models suggest implicit learning deficits may exist among children with Attention Deficit Hyperactivity Disorder (ADHD). Method: We examine implicit contextual cueing (CC) effects among children with ADHD (n=72) and non-ADHD controls (n=36). Results: Using Ratcliff's drift diffusion model, we found that among controls, the CC effect is due to improvements in attentional guidance and to reductions in response threshold. Children with ADHD did not show a CC effect; although they were able to use implicitly acquired information to deploy attentional focus, they had more difficulty adjusting their response thresholds. Conclusions: Improvements in attentional guidance and reductions in response threshold together underlie the CC effect. Results are consistent with neurocognitive models of ADHD that posit sub-cortical dysfunction but intact spatial attention, and encourage the use of alternative data analytic methods when dealing with reaction time data. PMID:24798140
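Ratcliff's diffusion model referenced in the Results can be sketched as a two-boundary Euler simulation; the drift rates, boundary separation, and non-decision time below are illustrative values, not the paper's fitted parameters:

```python
import numpy as np

def simulate_ddm(v, a, s=1.0, dt=2e-3, t0=0.3, n=4000, seed=0):
    """Euler simulation of the drift-diffusion model: evidence x drifts at
    rate v (with noise scale s) between boundaries 0 and a, starting
    unbiased at a/2; t0 is the non-decision time."""
    rng = np.random.default_rng(seed)
    x = np.full(n, a / 2.0)
    t = np.zeros(n)
    done = np.zeros(n, dtype=bool)
    upper = np.zeros(n, dtype=bool)       # True = correct (upper) boundary
    while not done.all():
        live = ~done
        x[live] += v * dt + s * np.sqrt(dt) * rng.standard_normal(live.sum())
        t[live] += dt
        hit_up = live & (x >= a)
        hit_lo = live & (x <= 0.0)
        upper |= hit_up
        done |= hit_up | hit_lo
    return t + t0, upper

rt_hi, acc_hi = simulate_ddm(v=2.0, a=1.5)   # efficient attentional guidance
rt_lo, acc_lo = simulate_ddm(v=0.5, a=1.5)   # weaker guidance
print(acc_hi.mean(), acc_lo.mean(), rt_hi.mean(), rt_lo.mean())
```

A higher drift rate (better attentional guidance) yields both faster and more accurate responses, which is the mechanism the CC analysis decomposes.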
Köppl, Tobias; Santin, Gabriele; Haasdonk, Bernard; Helmig, Rainer
2018-05-06
In this work, we consider two kinds of model reduction techniques to simulate blood flow through the largest systemic arteries when a stenosis is located in a peripheral artery, i.e., an artery located far from the heart. For our simulations we place the stenosis in one of the tibial arteries of the right lower leg (the right posterior tibial artery). The model reduction techniques used are, on the one hand, dimensionally reduced models (1-D and 0-D models, the so-called mixed-dimension model) and, on the other hand, surrogate models produced by kernel methods. Both methods are combined in such a way that the mixed-dimension models yield training data for the surrogate model, where the surrogate model is parametrised by the degree of narrowing of the peripheral stenosis. By means of a well-trained surrogate model, we show that simulation data can be reproduced with satisfactory accuracy and that parameter optimisation or state estimation problems can be solved in a very efficient way. Furthermore, it is demonstrated that a surrogate model enables us to present, after a very short simulation time, the impact of a varying degree of stenosis on blood flow, obtaining a speedup of several orders of magnitude over the full model. This article is protected by copyright. All rights reserved.
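The kernel-surrogate idea, training an interpolant on full-model outputs parametrised by stenosis degree, can be sketched with a Gaussian radial basis function model; the "full model" curve, sample points, and shape parameter below are hypothetical, not the paper's mixed-dimension simulator:

```python
import numpy as np

def full_model(d):
    """Stand-in for the expensive mixed-dimension blood-flow simulation:
    a pressure-drop-like quantity rising sharply with stenosis degree d
    in [0, 0.9] (purely hypothetical curve)."""
    return 1.0 / (1.0 - 0.9 * d) ** 2

def fit_rbf(xs, ys, eps=5.0):
    """Solve for Gaussian-RBF interpolation weights (tiny ridge for stability)."""
    K = np.exp(-(eps * (xs[:, None] - xs[None, :])) ** 2)
    return np.linalg.solve(K + 1e-10 * np.eye(xs.size), ys)

def predict_rbf(xs, w, xq, eps=5.0):
    """Evaluate the trained surrogate at query points xq."""
    return np.exp(-(eps * (xq[:, None] - xs[None, :])) ** 2) @ w

xs = np.linspace(0.0, 0.9, 10)       # "training runs" of the full model
w = fit_rbf(xs, full_model(xs))
xq = np.array([0.42, 0.63])          # unseen stenosis degrees
pred = predict_rbf(xs, w, xq)
print(pred, full_model(xq))
```

Once the weights are fitted, each surrogate evaluation is a single small matrix-vector product, which is where the orders-of-magnitude speedup over a full simulation comes from.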
Surrogate based wind farm layout optimization using manifold mapping
NASA Astrophysics Data System (ADS)
Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester
2016-09-01
The high computational cost associated with high-fidelity wake models such as RANS or LES is the primary bottleneck preventing direct high-fidelity wind farm layout optimization (WFLO) using accurate CFD-based wake models. Therefore, a surrogate-based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using a surrogate-based optimization (SBO) method referred to as manifold mapping (MM). As a verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate-based methodology, and its performance was compared with that of direct optimization using the high-fidelity model. A significant reduction in computational cost was achieved using MM: a maximum computational cost reduction of 65%, while arriving at the same optimum as direct high-fidelity optimization. The similarity between the responses of the models, along with the number and position of the mapping points, strongly influences the computational efficiency of the proposed method. As a proof of concept, a realistic WFLO of a small 7-turbine wind farm is performed using the proposed surrogate-based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with far fewer fine-model simulations.
Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics
Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul
2015-03-11
Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's "Lagrangian ingredients" (the Riemannian metric, the potential-energy function, the dissipation function, and the external force) and subsequently derives reduced-order equations of motion by applying the (forced) Euler-Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.
NASA Astrophysics Data System (ADS)
Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo
2017-03-01
We present a novel hybrid scattering-order-dependent variance reduction method to accelerate the convergence rate of both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase functions. The method is built upon a newly developed theoretical framework that not only unifies forward and backward radiative transfer in a scattering-order-dependent integral equation, but also generalizes the variance reduction formalism to a wide range of simulation scenarios. In previous studies, variance reduction was achieved either by the forward truncation of the scattering phase function or by target directional importance sampling; our method combines both. A novel feature of our method is that all the tuning parameters used for the phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by scattering-order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm that remodels the integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
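The importance-sampling half of this variance reduction strategy can be illustrated on a one-dimensional toy integral with a forward-peaked integrand; the proposal density and its rate parameter are assumptions for the sketch, not MSCART's actual sampling scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

def g(x):
    """Sharply peaked integrand on [0, 1], loosely mimicking a strongly
    forward-peaked scattering phase function."""
    return np.exp(-50.0 * x ** 2)

n = 200_000
# Plain Monte Carlo: uniform samples. High variance, because most samples
# land where g is essentially zero.
xu = rng.uniform(0.0, 1.0, n)
est_uniform = g(xu).mean()
std_u = g(xu).std()

# Importance sampling: draw from a truncated exponential q concentrated
# near the peak, and weight each sample by g/q.
lam = 8.0
u = rng.uniform(0.0, 1.0, n)
xi = -np.log(1.0 - u * (1.0 - np.exp(-lam))) / lam   # inverse CDF on [0, 1]
q = lam * np.exp(-lam * xi) / (1.0 - np.exp(-lam))   # proposal density
est_is = (g(xi) / q).mean()
std_is = (g(xi) / q).std()

print(est_uniform, est_is, std_u, std_is)
```

Both estimators are unbiased, but the per-sample standard deviation of the weighted estimator is markedly smaller, so fewer photons are needed for the same accuracy.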
Transport coefficient computation based on input/output reduced order models
NASA Astrophysics Data System (ADS)
Hurst, Joshua L.
The guiding purpose of this thesis is to address the optimal material design problem when the material description is a molecular dynamics model. The end goal is to obtain a simplified, fast model that captures the property of interest so that it can be used in controller design and optimization. The approach is to examine model reduction analysis and methods that capture a specific property of interest, in this case viscosity, or more generally the complex modulus or complex viscosity. This property and other transport coefficients are defined by an input/output relationship, which motivates model reduction techniques tailored to preserve input/output behavior. In particular, Singular Value Decomposition (SVD) based methods are investigated. First, simulation methods are identified that are amenable to systems-theory analysis. For viscosity, these models are of the Gosling and Lees-Edwards type: high-order nonlinear Ordinary Differential Equations (ODEs) that employ Periodic Boundary Conditions (PBC). Properties can be calculated from the state trajectories of these ODEs. In this research, local linear approximations are rigorously derived, and special attention is given to potentials that are evaluated with PBC. For the Gosling description, LTI models are developed from state trajectories but are found to have limited success in capturing the system property, even though it is shown that full-order LTI models can be well approximated by reduced-order LTI models. For the Lees-Edwards SLLOD-type model, the nonlinear ODEs are approximated by a Linear Time Varying (LTV) model about a nominal trajectory, and both balanced truncation and Proper Orthogonal Decomposition (POD) are used to assess the plausibility of reduced-order models for this system description. An immediate application of the derived LTV models is quasilinearization, or waveform relaxation.
Quasilinearization is Newton's method applied to the ODE operator equation. It is a recursive method that solves nonlinear ODEs by solving an LTV system at each iteration to obtain a successively closer solution. LTV models are derived for both Gosling and Lees-Edwards type models. Particular attention is given to SLLOD Lees-Edwards models because they are in the form most amenable to Taylor series expansion and are the models most commonly used to examine viscosity. With linear models developed, a method is presented to calculate viscosity based on LTI Gosling models, but it is shown to have some limitations. To address these issues, LTV SLLOD models are analyzed with both balanced truncation and POD, and both show that significant order reduction is possible. By examining the singular values of both techniques, it is shown that balanced truncation has the potential to offer greater reduction, which should be expected since it is based on the input/output mapping rather than on state information alone, as in POD. Obtaining reduced-order systems that capture the property of interest is challenging. With balanced truncation, reduced-order models for 1-D LJ and FENE systems are obtained and are shown to capture the output of interest fairly well. However, numerical challenges currently limit this analysis to small-order systems; suggestions are presented to extend the method to larger systems. In addition, reduced second-order systems are obtained from POD. Here the challenge is extending the solution beyond the original period used for the projection, in particular identifying the manifold the solution travels along. The remaining challenges are presented and discussed.
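Balanced truncation, the input/output-preserving reduction favoured above, can be sketched for a small stable LTI system using the square-root gramian algorithm; the random 10-state system below is illustrative, not a molecular dynamics model:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n, r = 10, 4
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)  # make A Hurwitz
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Controllability and observability gramians: A W + W A^T + Q = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

Lc = np.linalg.cholesky(Wc)
U, s2, _ = np.linalg.svd(Lc.T @ Wo @ Lc)    # s2 = squared Hankel singular values
hsv = np.sqrt(s2)
T = Lc @ U * s2 ** -0.25                    # balancing transformation
Ti = np.linalg.inv(T)
Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
Ar, Br, Cr = Ab[:r, :r], Bb[:r], Cb[:, :r]  # truncate to r balanced states

# Compare transfer functions at one frequency against the a priori bound
w = 1.0j
G_full = (C @ np.linalg.solve(w * np.eye(n) - A, B))[0, 0]
G_red = (Cr @ np.linalg.solve(w * np.eye(r) - Ar, Br))[0, 0]
err = abs(G_full - G_red)
bound = 2.0 * hsv[r:].sum()                 # balanced-truncation error bound
print(err, bound)
```

The observed frequency-response error stays below twice the sum of the discarded Hankel singular values, which is exactly the guarantee that makes the singular values a reduction-order diagnostic.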
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narasimha S.
2012-01-01
In this paper, a modeling method based on data reduction is investigated that uses pre-analyzed MERRA atmospheric fields for quantitative estimates of the uncertainties introduced in integrated path differential absorption methods for the sensing of various molecules, including CO2. This approach extends our previously developed lidar modeling framework and allows effective on- and offline wavelength optimization and weighting-function analysis to minimize interference effects such as those due to temperature sensitivity and water vapor absorption. The new simulation methodology differs from the previous implementation in that the data reduction methods employed allow analysis of atmospheric effects over annual spans and full Earth coverage. The effectiveness of the proposed simulation approach is demonstrated with application to mixing ratio retrievals for the future ASCENDS mission. Independent analysis of multiple accuracy-limiting factors, including temperature, water vapor interference, and selected system parameters, is further used to identify favorable spectral regions as well as wavelength combinations that reduce the total errors in the retrieved XCO2 values.
NASA Astrophysics Data System (ADS)
Samanta, Gaurab; Beris, Antony; Handler, Robert; Housiadas, Kostas
2009-03-01
Karhunen-Loeve (KL) analysis of DNS data of viscoelastic turbulent channel flows reveals more information on the time-dependent dynamics of the viscoelastic modification of turbulence [Samanta et al., J. Turbulence (in press), 2008]. A selected set of KL modes can be used for data reduction modeling of these flows. However, such a model must be verified against established DNS results. For this purpose, we compared velocity and conformation statistics and probability density functions (PDFs) of relevant quantities obtained from DNS and from fields reconstructed using selected KL modes and time-dependent coefficients. While the velocity statistics show good agreement between DNS and KL reconstructions even with just hundreds of KL modes, tens of thousands of KL modes are required to adequately capture the trace of the polymer conformation resulting from DNS. New modifications to the KL method have therefore been attempted to account for the differences in conformation statistics. The applicability and impact of these modified KL methods will be discussed from the perspective of data reduction modeling.
NASA Astrophysics Data System (ADS)
Malawi, Abdulrahman A.
2013-06-01
We present a detailed explanation of the reduction method that we use to determine the angular diameters of stars occulted by the dark limb of the Moon. This is a main part of the lunar occultation observation program that has been running at the King Abdul Aziz University observatory since late 1993. The process is based on the least-squares model-fitting method of analyzing occultation data, first introduced by Nather et al. (Astron. J. 75:963, 1970).
Wang, Guang-Ye; Huang, Wen-Jun; Song, Qi; Qin, Yun-Tian; Liang, Jin-Feng
2016-12-01
Acetabular fractures have always been very challenging for orthopedic surgeons; therefore, appropriate preoperative evaluation and planning are particularly important. This study aimed to explore the application methods and clinical value of preoperative computer simulation (PCS) in treating pelvic and acetabular fractures. Spiral computed tomography (CT) was performed on 13 patients with pelvic and acetabular fractures, and the Digital Imaging and Communications in Medicine (DICOM) data were input into Mimics software to reconstruct three-dimensional (3D) models of the actual fractures for preoperative simulated reduction and fixation and to rehearse each surgical procedure. The times needed for virtual surgical modeling and for reduction and fixation were recorded. The average fracture-modeling time was 45 min (range 30-70 min), and the average time for bone reduction and fixation was 28 min (range 16-45 min). Of the surgical approaches planned for these 13 patients, 12 were ultimately adopted; 12 cases used the simulated surgical fixation, and only 1 case used a partially planned fixation method. PCS can provide accurate surgical plans and data support for actual surgeries.
NASA Astrophysics Data System (ADS)
Angel, Erin; Yaghmai, Nazanin; Matilda Jude, Cecilia; DeMarco, John J.; Cagnon, Christopher H.; Goldin, Jonathan G.; Primak, Andrew N.; Stevens, Donna M.; Cody, Dianna D.; McCollough, Cynthia H.; McNitt-Gray, Michael F.
2009-02-01
Tube current modulation was designed to reduce radiation dose in CT imaging while maintaining overall image quality. This study aims to develop a method for evaluating the effects of tube current modulation (TCM) on organ dose in CT exams of actual patient anatomy. This method was validated by simulating a TCM and a fixed tube current chest CT exam on 30 voxelized patient models and estimating the radiation dose to each patient's glandular breast tissue. This new method for estimating organ dose was compared with other conventional estimates of dose reduction. Thirty detailed voxelized models of patient anatomy were created based on image data from female patients who had previously undergone clinically indicated CT scans including the chest area. As an indicator of patient size, the perimeter of the patient was measured on the image containing at least one nipple using a semi-automated technique. The breasts were contoured on each image set by a radiologist and glandular tissue was semi-automatically segmented from this region. Previously validated Monte Carlo models of two multidetector CT scanners were used, taking into account details about the source spectra, filtration, collimation and geometry of the scanner. TCM data were obtained from each patient's clinical scan and factored into the model to simulate the effects of TCM. For each patient model, two exams were simulated: a fixed tube current chest CT and a tube current modulated chest CT. X-ray photons were transported through the anatomy of the voxelized patient models, and radiation dose was tallied in the glandular breast tissue. The resulting doses from the tube current modulated simulations were compared to the results obtained from simulations performed using a fixed mA value. The average radiation dose to the glandular breast tissue from a fixed tube current scan across all patient models was 19 mGy. The average reduction in breast dose using the tube current modulated scan was 17%. 
Results were size dependent: smaller patients obtained greater dose reductions (up to 64%), while larger patients obtained smaller reductions, and in some cases the dose actually increased with tube current modulation (up to a 41% increase). The results indicate that the radiation dose to glandular breast tissue generally decreases with the use of tube current modulated CT acquisition, but that patient size (and in some cases patient positioning) may affect the dose reduction.
Modeling and reduction with applications to semiconductor processing
NASA Astrophysics Data System (ADS)
Newman, Andrew Joseph
This thesis consists of several somewhat distinct but connected parts, with an underlying motivation in problems pertaining to control and optimization of semiconductor processing. The first part (Chapters 3 and 4) addresses problems in model reduction for nonlinear state-space control systems. In 1993, Scherpen generalized the balanced truncation method to the nonlinear setting. However, the Scherpen procedure is not easily computable and has not yet been applied in practice. We offer a method for computing a working approximation to the controllability energy function, one of the main objects involved in the method. Moreover, we show that for a class of second-order mechanical systems with dissipation, under certain conditions related to the dissipation, an exact formula for the controllability function can be derived. We then present an algorithm for a numerical implementation of the Morse-Palais lemma, which produces a local coordinate transformation under which a real-valued function with a non-degenerate critical point is quadratic on a neighborhood of the critical point. Application of the algorithm to the controllability function plays a key role in computing the balanced representation. We then apply our methods and algorithms to derive balanced realizations for nonlinear state-space models of two example mechanical systems: a simple pendulum and a double pendulum. The second part (Chapter 5) deals with modeling of rapid thermal chemical vapor deposition (RTCVD) for growth of silicon thin films, via first-principles and empirical analysis. We develop detailed process-equipment models and study the factors that influence deposition uniformity, such as temperature, pressure, and precursor gas flow rates, through analysis of experimental and simulation results. We demonstrate that temperature uniformity does not guarantee deposition thickness uniformity in a particular commercial RTCVD reactor of interest.
In the third part (Chapter 6) we continue the modeling effort, specializing to a control system for RTCVD heat transfer. We then develop and apply ad-hoc versions of prominent model reduction approaches to derive reduced models and perform a comparative study.
Guo, Xiaopeng; Ren, Dongfang; Guo, Xiaodan
2018-06-01
Recently, the Chinese state environmental protection administration has introduced several PM10 reduction policies to strictly control coal consumption and promote the adjustment of the power structure. Under this new policy environment, a suitable analysis method is required to simulate the upcoming major shift in China's electric power structure. First, a complete system dynamics model is built to simulate the evolution path of China's power structure under PM10 reduction constraints, considering both technical and economic factors. Second, scenario analyses are conducted under different clean-power capacity growth rates to seek applicable policy guidance for PM10 reduction. The results suggest the following conclusions. (1) The proportion of thermal power installed capacity will decrease to 67% in 2018 at a declining speed, and there will be an accelerated decline in 2023-2032. (2) The system dynamics model can effectively simulate the implementation of the policy; for example, the proportion of coal consumption in the forecast model is 63.3% (an accuracy rate of 95.2%), below the 2017 policy target of 65%. (3) China should promote clean power generation, such as nuclear power, to meet the PM10 reduction target.
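The stock-and-flow core of such a system dynamics capacity model can be sketched as follows; all initial capacities and growth rates are hypothetical placeholders, not the paper's calibrated values:

```python
def simulate_power_structure(years=20, thermal0=1000.0, clean0=400.0,
                             thermal_growth=0.01, clean_growth=0.08):
    """Minimal stock-and-flow sketch: two capacity stocks (GW, hypothetical)
    evolve with fixed net growth rates; returns the thermal share per year."""
    thermal, clean = thermal0, clean0
    shares = []
    for _ in range(years):
        thermal *= 1.0 + thermal_growth   # slow net additions to thermal stock
        clean *= 1.0 + clean_growth       # faster clean-capacity buildout
        shares.append(thermal / (thermal + clean))
    return shares

shares = simulate_power_structure()
print(shares[0], shares[-1])
```

Even this two-stock caricature reproduces the qualitative result: whenever clean capacity grows faster than thermal, the thermal share declines monotonically, and the decline accelerates over time. A real system dynamics model adds feedback loops (cost, demand, policy) between the stocks and flows.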
Substructural controller synthesis
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Craig, Roy R., Jr.
1989-01-01
A decentralized design procedure which combines substructural synthesis, model reduction, decentralized controller design, subcontroller synthesis, and controller reduction is proposed for the control design of flexible structures. The structure to be controlled is decomposed into several substructures, which are modeled by component mode synthesis methods. For each substructure, a subcontroller is designed by using the linear quadratic optimal control theory. Then, a controller synthesis scheme called Substructural Controller Synthesis (SCS) is used to assemble the subcontrollers into a system controller, which is to be used to control the whole structure.
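The subcontroller design step, linear quadratic optimal control for a single substructure, can be sketched as follows; the two-state modal model and the weighting matrices are hypothetical, not taken from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical substructure model: one lightly damped vibration mode
# (states: modal displacement and velocity).
A = np.array([[0.0, 1.0],
              [-4.0, -0.1]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])        # state weighting (assumed)
R = np.array([[1.0]])           # control effort weighting (assumed)

# Solve the continuous-time algebraic Riccati equation and form the
# optimal subcontroller gain for u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
Acl = A - B @ K
print(K, np.linalg.eigvals(Acl))
```

The LQR gain guarantees a stable closed loop for each substructure; the SCS step then assembles these gains into a controller for the full structure.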
Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models
NASA Astrophysics Data System (ADS)
Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura
2014-09-01
Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, and the like, are a common problem in environmental models, including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors; hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, in which simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but it suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem, Thompson et al. (2006) introduced frequency dependent nudging, where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy-resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging, which makes the method more robust for nonlinear biological models. We then assess the utility of frequency dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we nudge only in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
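An offline caricature of the frequency dependent idea: correct the model toward observations only in bands around the mean and the annual cycle, leaving the model's higher-frequency variability untouched. The signals, record length, and band width below are illustrative assumptions, not the online scheme of the paper:

```python
import numpy as np

def band_correct(model, obs, dt, centers, width):
    """Add the model-observation misfit back to the model only within
    frequency bands (cycles per time unit) centered at `centers`."""
    f = np.fft.rfftfreq(model.size, dt)
    misfit = np.fft.rfft(obs - model)
    mask = np.zeros_like(f)
    for c in centers:
        mask[np.abs(f - c) <= width] = 1.0
    return model + np.fft.irfft(misfit * mask, model.size)

dt = 1.0 / 365.0                 # daily steps, time in years
t = np.arange(0.0, 8.0, dt)      # eight years of data
obs = 3.0 + 2.0 * np.sin(2 * np.pi * t)                       # mean + annual cycle
model = (1.0 + 1.0 * np.sin(2 * np.pi * t)                    # biased mean/annual
         + 0.4 * np.sin(2 * np.pi * 30.0 * t))                # model's own HF signal
nudged = band_correct(model, obs, dt, centers=[0.0, 1.0], width=0.2)
print(nudged.mean())
```

After correction, the mean and annual cycle match the observations while the 30 cycles-per-year component survives intact, which is precisely the behavior that distinguishes this approach from conventional nudging.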
Reduced order modeling and active flow control of an inlet duct
NASA Astrophysics Data System (ADS)
Ge, Xiaoqing
Many aerodynamic applications require the modeling of compressible flows in or around a body, e.g., the design of aircraft, inlet or exhaust ducts, wind turbines, or tall buildings. Traditional methods use wind tunnel experiments and computational fluid dynamics (CFD) to investigate the spatial and temporal distribution of the flows. Although they provide a great deal of insight into the essential characteristics of the flow field, they are not suitable for control analysis and design due to the high physical/computational cost. Many model reduction methods have been studied to reduce the complexity of the flow model. There are two main approaches: linearization-based input/output modeling and proper orthogonal decomposition (POD) based model reduction. The former captures mostly the local behavior near a steady state, which is suitable for modeling laminar flow dynamics. The latter obtains a reduced order model by projecting the governing equations onto an "optimal" subspace and is able to model complex nonlinear flow phenomena. In this research we investigate various model reduction approaches and compare them in flow modeling and control design. We propose an integrated model-based control methodology and apply it to the reduced order modeling and active flow control of compressible flows within a very aggressive (length-to-exit-diameter ratio, L/D, of 1.5) inlet duct and its upstream contraction section. The approach systematically applies reduced order modeling, estimator design, sensor placement, and control design to improve the aerodynamic performance. The main contribution of this work is the development of a hybrid model reduction approach that attempts to combine the best features of input/output model identification and the POD method. We first identify a linear input/output model by using a subspace algorithm. We next project the difference between the CFD response and the identified model response onto a set of POD basis functions.
This trajectory is fit to a nonlinear dynamical model to augment the linear input/output model. Thus, the full system is decomposed into a dominant linear subsystem and a low order nonlinear subsystem. The hybrid model is then used for control design and compared with other modeling methods in CFD simulations. Numerical results indicate that the hybrid model accurately predicts the nonlinear behavior of the flow for a 2D diffuser contraction section model. It also performs best in terms of feedback control design and learning control. Since some outputs of interest (e.g., the AIP pressure recovery) are not observable during normal operations, static and dynamic estimators are designed to recreate the information from available sensor measurements. The latter also provides a state estimate for the feedback controller. Based on the reduced order models and estimators, different controllers are designed to improve the aerodynamic performance of the contraction section and inlet duct. The integrated control methodology is evaluated with CFD simulations. Numerical results demonstrate the feasibility and efficacy of active flow control based on reduced order models. Our reduced order models not only generate a good approximation of the nonlinear flow dynamics over a wide input range, but also help to design controllers that significantly improve the flow response. The tools developed for model reduction, estimator design, and control design can also be applied to wind tunnel experiments.
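The POD step described above can be sketched in a few lines. The snapshot data here are synthetic traveling waves (not the thesis's CFD fields): snapshots are stacked into a matrix, the SVD extracts the "optimal" subspace, and the state is projected onto the leading modes.

```python
import numpy as np

# Minimal POD sketch (illustrative, not the thesis code): snapshots of a
# synthetic two-wave flow field are collected into a matrix, and the SVD
# extracts the dominant modes onto which the state is projected.

nx, nt = 128, 200
x = np.linspace(0, 2 * np.pi, nx)
t = np.linspace(0, 10, nt)
# snapshot matrix: each column is the "flow field" at one time instant
snapshots = np.array([np.sin(x - 0.8 * ti) + 0.3 * np.sin(3 * (x - 2.0 * ti))
                      for ti in t]).T

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 4                              # keep the leading POD modes
Phi = U[:, :r]                     # POD basis (spatial modes)
coeffs = Phi.T @ snapshots         # reduced (modal) coordinates
recon = Phi @ coeffs               # low-order reconstruction

rel_err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
```

Each traveling sine wave contributes exactly two modes, so four modes reconstruct this synthetic field essentially exactly; for real CFD data the singular value decay dictates how many modes are needed.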
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tailed distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models, which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
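The Rayleigh quotient at the heart of the method can be illustrated with a plain generalized eigenproblem. This is only the underlying quotient max_w (w'Aw)/(w'Bw); QUADRO's sparse convex formulation is considerably more involved, and the matrices below are random stand-ins.

```python
import numpy as np

# Sketch of Rayleigh quotient maximization via the symmetric generalized
# eigenproblem (illustrative; QUADRO adds sparsity and a convex relaxation).

rng = np.random.default_rng(1)
p = 6
M = rng.standard_normal((p, p))
A = M @ M.T                          # "between-class"-type matrix (PSD)
N = rng.standard_normal((p, p))
B = N @ N.T + p * np.eye(p)          # "within-class"-type matrix (PD)

# whiten with the Cholesky factor of B, then take the top eigenvector
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
Cmat = Linv @ A @ Linv.T
evals, evecs = np.linalg.eigh(Cmat)
w = Linv.T @ evecs[:, -1]            # optimal direction, back-transformed

def rayleigh(v):
    return (v @ A @ v) / (v @ B @ v)

q_best = rayleigh(w)                 # equals the largest eigenvalue
q_rand = rayleigh(rng.standard_normal(p))
```

Any other direction attains a quotient no larger than the top eigenvalue, which is what makes the eigen-solution the maximizer.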
NASA Astrophysics Data System (ADS)
Wutsqa, D. U.; Marwah, M.
2017-06-01
In this paper, we consider a spatial median filter to reduce the noise in cervical images yielded by a colposcopy tool. The backpropagation neural network (BPNN) model is applied to the colposcopy images to classify cervical cancer. The classification process requires image feature extraction using the gray level co-occurrence matrix (GLCM) method; the resulting image features are used as inputs of the BPNN model. The benefit of noise reduction is evaluated by comparing the performance of BPNN models with and without the spatial median filter. The experimental results show that the spatial median filter can improve the accuracy of the BPNN model for cervical cancer classification.
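The noise-reduction step is a standard 3x3 spatial median filter. A minimal sketch (illustrative only; the paper applies it to colposcopy images, not the toy array below):

```python
import numpy as np

# Sketch of a 3x3 spatial median filter: each output pixel is the median of
# its 3x3 neighborhood, which removes isolated salt-and-pepper noise while
# preserving edges better than mean filtering.

def median_filter3(img):
    padded = np.pad(img, 1, mode="edge")   # edge-pad so shape is preserved
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

img = np.full((5, 5), 100, dtype=float)
img[2, 2] = 255                 # a single salt-noise pixel
clean = median_filter3(img)     # the outlier is replaced by its neighborhood median
```

The isolated 255 pixel is outvoted by its eight neighbors and replaced by 100, while uniform regions pass through unchanged.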
How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods
2007-08-01
Attack Trees for Modeling and Analysis; 2.8 Misuse and Abuse Cases; 2.9 Formal Methods; 2.9.1 Software Cost Reduction; 2.9.2 Common… modern or efficient techniques. Requirements analysis typically is either not performed at all (identified requirements are directly specified without any analysis or modeling), or analysis is restricted to functional requirements and ignores quality requirements and other nonfunctional requirements
Tangen, C M; Koch, G G
1999-03-01
In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.
Hobbs, Brian P.; Carlin, Bradley P.; Mandrekar, Sumithra J.; Sargent, Daniel J.
2011-01-01
Summary Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher than expected Type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this paper, we present several models that allow the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is elaborating the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters, in particular conferring statistical significance to the observed reduction in tumor size for the experimental regimen as compared to the control regimen. PMID:21361892
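The power prior that the commensurate approach elaborates can be sketched for the simplest case, a Gaussian mean with known variance. Here the discounting exponent a0 is fixed by hand (the paper's contribution is precisely to let the data determine it), and the data are made up for illustration.

```python
import numpy as np

# Sketch of the (conditional) power prior for a Gaussian mean with known
# variance: the historical likelihood is raised to a power a0 in [0, 1], so
# a0 = 0 ignores history and a0 = 1 pools it fully.  Illustrative only; the
# commensurate methods in the paper estimate how much borrowing is warranted.

def power_prior_posterior(y_cur, y_hist, sigma2, a0):
    n, n0 = len(y_cur), len(y_hist)
    # precision-weighted combination under a flat initial prior
    prec = n / sigma2 + a0 * n0 / sigma2
    mean = (np.sum(y_cur) / sigma2 + a0 * np.sum(y_hist) / sigma2) / prec
    return mean, 1.0 / prec

y_hist = np.array([1.8, 2.2, 2.0, 1.9, 2.1])   # historical data, mean 2.0
y_cur = np.array([0.9, 1.1, 1.0, 1.0])          # current data, mean 1.0

m_none, v_none = power_prior_posterior(y_cur, y_hist, 1.0, 0.0)  # no borrowing
m_full, v_full = power_prior_posterior(y_cur, y_hist, 1.0, 1.0)  # full pooling
m_half, v_half = power_prior_posterior(y_cur, y_hist, 1.0, 0.5)  # partial
```

As a0 grows, the posterior mean is pulled toward the (here conflicting) historical mean and the posterior variance shrinks, which is exactly the bias-variance trade-off that motivates adaptive borrowing.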
Polynomic nonlinear dynamical systems - A residual sensitivity method for model reduction
NASA Technical Reports Server (NTRS)
Yurkovich, S.; Bugajski, D.; Sain, M.
1985-01-01
The motivation for using polynomic combinations of system states and inputs to model nonlinear dynamic systems is founded upon the classical theories of analysis and function representation. A feature of such representations is the need to make available all possible monomials in these variables, up to the degree specified, so as to provide for the description of widely varying functions within a broad class. For a particular application, however, certain monomials may be quite superfluous. This paper examines the possibility of removing monomials from the model in accordance with the level of sensitivity displayed by the residuals to their absence. Critical in these studies are the effect of system input excitation and the effect of discarding monomial terms upon the model parameter set. Therefore, model reduction is approached iteratively, with inputs redesigned at each iteration to ensure sufficient excitation of the remaining monomials for parameter approximation. Examples are reported to illustrate the performance of such model reduction approaches.
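A residual-sensitivity style pruning pass can be sketched as follows. This is a simplified static-regression stand-in (the paper treats dynamic systems with iterative input redesign): fit all monomials up to degree 2, then drop every term whose removal barely changes the residual.

```python
import numpy as np

# Sketch of monomial pruning by residual sensitivity (illustrative): a term
# is kept only if refitting without it inflates the residual substantially.

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))
# "true" system uses only x1 and x2^2, plus small noise
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(200)

names = ["1", "x1", "x2", "x1^2", "x1*x2", "x2^2"]
Phi = np.column_stack([np.ones(200), X[:, 0], X[:, 1],
                       X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])

theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
base = np.linalg.norm(y - Phi @ theta)      # residual of the full model

keep = []
for k in range(Phi.shape[1]):
    cols = [j for j in range(Phi.shape[1]) if j != k]
    th, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
    resid = np.linalg.norm(y - Phi[:, cols] @ th)
    if resid > 2.0 * base:                  # residual is sensitive to this term
        keep.append(names[k])
```

Only the monomials actually present in the generating function survive the sensitivity test; the superfluous ones are discarded, shrinking the model from six terms to two.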
Shi, Chengdi; Cai, Leyi; Hu, Wei; Sun, Junying
2017-09-19
Objective: To study a method of X-ray diagnosis of unstable pelvic fractures displaced in three-dimensional (3D) space and its clinical application in closed reduction. Five models of hemipelvic displacement were made in an adult pelvic specimen. Anteroposterior radiographs of the pelvis were analyzed in PACS. The method of X-ray diagnosis was applied in closed reductions. From February 2012 to June 2016, 23 patients (15 men, 8 women; mean age, 43.4 years) with unstable pelvic fractures were included. All patients were treated by closed reduction and percutaneous cannulated screw fixation of the pelvic ring. According to Tile's classification, the patients were classified as type B1 in 7 cases, B2 in 3, B3 in 3, C1 in 5, C2 in 3, and C3 in 2. The operation time and intraoperative blood loss were recorded. Postoperative images were evaluated by Matta radiographic standards. Five models of displacement were made successfully. The X-ray features of the models were analyzed. For the clinical patients, the average operation time was 44.8 min (range, 20-90 min) and the average intraoperative blood loss was 35.7 mL (range, 20-100 mL). According to the Matta standards, 7 cases were excellent, 12 cases were good, and 4 were fair. The displacements in 3D space of unstable pelvic fractures can be diagnosed rapidly by X-ray analysis to guide closed reduction, with a satisfactory clinical outcome.
Hybrid Wing Body Aircraft System Noise Assessment with Propulsion Airframe Aeroacoustic Experiments
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Burley, Casey L.; Olson, Erik D.
2010-01-01
A system noise assessment of a hybrid wing body configuration was performed using NASA's best available aircraft models, engine model, and system noise assessment method. A propulsion airframe aeroacoustic effects experimental database for key noise sources and interaction effects was used to provide data directly in the noise assessment where prediction methods are inadequate. NASA engine and aircraft system models were created to define the hybrid wing body aircraft concept as a twin engine aircraft with a 7500 nautical mile mission. The engines were modeled as existing technology high bypass ratio turbofans. The baseline hybrid wing body aircraft was assessed at 22 dB cumulative below the FAA Stage 4 certification level. To determine the potential for noise reduction with relatively near term technologies, seven other configurations were assessed, beginning with moving the engines two fan nozzle diameters upstream of the trailing edge and then adding technologies for reduction of the highest noise sources. Aft radiated noise was expected to be the most challenging to reduce and, therefore, the experimental database focused on jet nozzle and pylon configurations that could reduce jet noise through a combination of source reduction and shielding effectiveness. The best configuration for reduction of jet noise used state-of-the-art technology chevrons with a pylon above the engine in the crown position. This configuration resulted in jet source noise reduction, favorable azimuthal directivity, noise source relocation upstream where it is more effectively shielded by the limited airframe surface, and additional fan noise attenuation from the acoustic liner on the crown pylon internal surfaces. Vertical and elevon surfaces were also assessed to add shielding area. The elevon deflection above the trailing edge showed some small additional noise reduction, whereas vertical surfaces resulted in a slight noise increase.
With the effects of the configurations from the database included, the best available noise reduction was 40 dB cumulative. Projected effects from additional technologies were assessed for an advanced noise reduction configuration including landing gear fairings and advanced pylon and chevron nozzles. Incorporating the three additional technology improvements, the aircraft noise is projected to be 42.4 dB cumulative below the Stage 4 level.
Gönen, Mehmet
2014-01-01
Coupled training of dimensionality reduction and classification was proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and in semi-supervised learning tasks. PMID:24532862
Reduction of peak energy demand based on smart appliances energy consumption adjustment
NASA Astrophysics Data System (ADS)
Powroźnik, P.; Szulim, R.
2017-08-01
In this paper, the concept of an elastic model of energy management for smart grids and micro smart grids is presented. For the proposed model, a method for reducing peak demand in a micro smart grid is defined. The idea of peak demand reduction in the elastic model of energy management is to introduce a balance between demand and supply of current power for a given micro smart grid at a given moment. The results of simulation studies are presented; they were carried out on real household data available in the UCI Machine Learning Repository. The results may have practical application in smart grid networks where there is a need for smart appliance energy consumption adjustment. The article presents a proposal to implement the elastic model of energy management as a cloud computing solution. This approach to peak demand reduction might have application particularly in a large smart grid.
Singhal, Naresh; Islam, Jahangir
2008-02-19
This paper uses the findings from a column study to develop a reactive model for exploring the interactions occurring in leachate-contaminated soils. The changes occurring in the concentrations of acetic acid, sulphate, suspended and attached biomass, Fe(II), Mn(II), calcium, carbonate ions, and pH in the column are assessed. The mathematical model considers geochemical equilibrium, kinetic biodegradation, precipitation-dissolution reactions, bacterial and substrate transport, and permeability reduction arising from bacterial growth and gas production. A two-step sequential operator splitting method is used to solve the coupled transport and biogeochemical reaction equations. The model gives satisfactory fits to experimental data and the simulations show that the transport of metals in soil is controlled by multiple competing biotic and abiotic reactions. These findings suggest that bioaccumulation and gas formation, compared to chemical precipitation, have a larger influence on hydraulic conductivity reduction.
Engine-propeller power plant aircraft community noise reduction key methods
NASA Astrophysics Data System (ADS)
Moshkov, P. A.; Samokhin, V. F.; Yakovlev, A. A.
2018-04-01
Basic methods for reducing community noise from the engine-propeller power plants of aircraft-type flying vehicles are considered, covering single propellers of different structure and arrangement as well as piston engines. On the basis of a semiempirical model, expressions are proposed for evaluating the effect of blade diameter and blade number on the tonal components of propeller noise under the condition of constant thrust. Acoustic tests performed at the Moscow Aviation Institute airfield on the whole qualitatively confirmed the obtained ratios. As an example of noise and detectability reduction, a design-and-experimental estimate of the effect of propeller diameter on the audibility boundaries of an unmanned aircraft was performed. Directions for future investigation are stated toward solving the problem of low-noise power plant design for light aircraft and unmanned aerial vehicles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Shaobu; Lu, Shuai; Zhou, Ning
In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving a reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution, and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal 'basis' of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves better reduction ratios and smaller response errors than traditional coherency aggregation methods.
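The extraction-attribution-reconstruction pipeline can be sketched on synthetic data. Everything below is illustrative (random mode mixtures standing in for measured rotor-angle responses, and a simple cosine-similarity rule standing in for the paper's attribution criterion):

```python
import numpy as np

# Sketch of DEAR-style reduction on synthetic generator responses:
# 1) SVD extracts dominant dynamic features from the response matrix,
# 2) each feature is attributed to the most similar generator,
# 3) all responses are reconstructed from the characteristic generators.

rng = np.random.default_rng(4)
tgrid = np.linspace(0, 10, 400)
f1 = np.sin(2 * np.pi * 0.4 * tgrid) * np.exp(-0.1 * tgrid)   # slow damped mode
f2 = np.sin(2 * np.pi * 1.1 * tgrid) * np.exp(-0.2 * tgrid)   # fast damped mode
W = rng.standard_normal((6, 2))
Y = W @ np.vstack([f1, f2])          # 6 generator responses, rank 2

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
k = 2
feats = Vt[:k]                       # extracted dynamic features (time signals)

# attribution: pick, for each feature, a distinct most-similar generator
Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
char_gens = []
for f in feats:
    sims = np.abs(Yn @ (f / np.linalg.norm(f)))
    sims[char_gens] = -1.0           # enforce distinct characteristic generators
    char_gens.append(int(np.argmax(sims)))

# reconstruction: express every response as a combination of the basis
Basis = Y[char_gens]
coef, *_ = np.linalg.lstsq(Basis.T, Y.T, rcond=None)
Yhat = (Basis.T @ coef).T
rel_err = np.linalg.norm(Yhat - Y) / np.linalg.norm(Y)
```

Because the synthetic response matrix has rank two, two well-chosen characteristic generators reconstruct all six responses essentially exactly; real measurements would leave a residual governed by the discarded singular values.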
Final Report. Analysis and Reduction of Complex Networks Under Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef M.; Coles, T.; Spantini, A.
2013-09-30
The project was a collaborative effort among MIT, Sandia National Laboratories (local PI Dr. Habib Najm), the University of Southern California (local PI Prof. Roger Ghanem), and The Johns Hopkins University (local PI Prof. Omar Knio, now at Duke University). Our focus was the analysis and reduction of large-scale dynamical systems emerging from networks of interacting components. Such networks underlie myriad natural and engineered systems. Examples important to DOE include chemical models of energy conversion processes, and elements of national infrastructure, e.g., electric power grids. Time scales in chemical systems span orders of magnitude, while infrastructure networks feature both local and long-distance connectivity, with associated clusters of time scales. These systems also blend continuous and discrete behavior; examples include saturation phenomena in surface chemistry and catalysis, and switching in electrical networks. Reducing size and stiffness is essential to tractable and predictive simulation of these systems. Computational singular perturbation (CSP) has been effectively used to identify and decouple dynamics at disparate time scales in chemical systems, allowing reduction of model complexity and stiffness. In realistic settings, however, model reduction must contend with uncertainties, which are often greatest in large-scale systems most in need of reduction. Uncertainty is not limited to parameters; one must also address structural uncertainties, e.g., whether a link is present in a network, and the impact of random perturbations, e.g., fluctuating loads or sources. Research under this project developed new methods for the analysis and reduction of complex multiscale networks under uncertainty, by combining computational singular perturbation (CSP) with probabilistic uncertainty quantification. CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves.
Introducing uncertainty in this context raised fundamentally new issues, e.g., how is the topology of slow manifolds transformed by parametric uncertainty? How does one construct dynamical models on these uncertain manifolds? To address these questions, we used stochastic spectral polynomial chaos (PC) methods to reformulate uncertain network models and analyzed them using CSP in probabilistic terms. Finding uncertain manifolds involved the solution of stochastic eigenvalue problems, facilitated by projection onto PC bases. These problems motivated us to explore the spectral properties of stochastic Galerkin systems. We also introduced novel methods for rank reduction in stochastic eigensystems, i.e., transformations of an uncertain dynamical system that lead to lower storage and solution complexity. These technical accomplishments are detailed below. This report focuses on the MIT portion of the joint project.
Effects of Correctional-Based Programs for Female Inmates: A Systematic Review
ERIC Educational Resources Information Center
Tripodi, Stephen J.; Bledsoe, Sarah E.; Kim, Johnny S.; Bender, Kimberly
2011-01-01
Objective: To examine the effectiveness of interventions for incarcerated women. Method: The researchers use a two-model system: the risk-reduction model for studies analyzing interventions to reduce recidivism rates, and the enhancement model for studies that target psychological and physical well-being. Results: Incarcerated women who…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saakyan, D.B.
The variant of the Sherrington-Kirkpatrick model generalized by Derrida for the case of arbitrary spin is considered. When the number of simultaneously interacting neighbors tends to infinity, a solution to the model is obtained not only by reduction to the random-energy model but also by means of the replica method with the Parisi ansatz.
The determination of third order linear models from a seventh order nonlinear jet engine model
NASA Technical Reports Server (NTRS)
Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex
1989-01-01
Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
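The "standard reduction techniques such as residualization and balancing" mentioned in the first route can be sketched for a generic stable linear model. The system below is a random 7th-order SISO model (a synthetic stand-in, not the turbojet model from the paper), reduced to 3rd order by square-root balanced truncation:

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

# Sketch of balanced truncation: balance the controllability and
# observability Gramians, then truncate the weakly coupled states.
# The model here is synthetic and only illustrates the algorithm.

rng = np.random.default_rng(3)
n, r = 7, 3
M = rng.standard_normal((n, n))
A = M - (np.abs(np.linalg.eigvals(M).real).max() + 1.0) * np.eye(n)  # force stability
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Gramians: A P + P A' + B B' = 0  and  A' Q + Q A + C' C = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# square-root balancing: P = L L', then eigendecompose L' Q L
L = cholesky(P, lower=True)
U, s, _ = svd(L.T @ Q @ L)
hsv = np.sqrt(s)                        # Hankel singular values
T = L @ U @ np.diag(s ** -0.25)         # balancing transformation
Tinv = np.diag(s ** 0.25) @ U.T @ np.linalg.inv(L)

Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
Ar, Br, Cr = Ab[:r, :r], Bb[:r, :], Cb[:, :r]   # keep the 3 strongest states

dc_full = (C @ np.linalg.solve(-A, B)).item()   # DC gain of the full model
dc_red = (Cr @ np.linalg.solve(-Ar, Br)).item() # DC gain of the reduced model
```

Balanced truncation guarantees that the reduced model stays stable and that the frequency-response error (including the DC gain mismatch) is bounded by twice the sum of the discarded Hankel singular values.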
NASA Astrophysics Data System (ADS)
Yan, Zhen-Ya; Xie, Fu-Ding; Zhang, Hong-Qing
2001-07-01
Both the direct method due to Clarkson and Kruskal and the improved direct method due to Lou are extended to reduce the high-order modified Boussinesq equation with the damping term (HMBEDT) arising in the general Fermi-Pasta-Ulam model. As a result, several types of similarity reductions are obtained. It is easy to show from the reduction results obtained that the nonlinear wave equation is not integrable in the sense of Ablowitz's conjecture. In addition, kink-shaped solitary wave solutions, which are of important physical significance, are found for the HMBEDT based on the obtained reduction equation. The project was supported by the National Natural Science Foundation of China under Grant No. 19572022, the National Key Basic Research Development Project Program of China under Grant No. G1998030600, and the Doctoral Foundation of China under Grant No. 98014119.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Y. B.; Zhu, X. W., E-mail: xiaowuzhu1026@znufe.edu.cn; Dai, H. H.
Though widely used in modelling nano- and microstructures, Eringen's differential model shows some inconsistencies, and a recent study has demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further show the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.
Velikina, Julia V; Samsonov, Alexey A
2015-11-01
To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models preestimated from training data. We introduce the model consistency condition (MOCCO) technique, which utilizes temporal models to regularize reconstruction without constraining the solution to be low-rank, as is performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Our method was compared with a standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE-MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. MOCCO demonstrated reduced sensitivity to modeling errors compared with the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE-MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
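The variance-based indices underlying this analysis can be sketched with a Monte Carlo pick-freeze estimator. The model below is a deliberately trivial stand-in, Y = X1 + 2*X2 with uniform inputs, whose exact first-order Sobol' indices are 0.2 and 0.8 (the paper's contribution is averaging such indices over alternative models and scenarios):

```python
import numpy as np

# Sketch of first-order variance-based (Sobol') sensitivity indices via the
# pick-freeze estimator: S_i = Cov(Y_A, Y_Bi) / Var(Y), where sample B has
# column i replaced ("frozen") with the values from sample A.

rng = np.random.default_rng(5)
N = 100_000

def model(X):
    return X[:, 0] + 2.0 * X[:, 1]   # illustrative toy model

XA = rng.uniform(0, 1, size=(N, 2))
XB = rng.uniform(0, 1, size=(N, 2))
yA = model(XA)
var = yA.var()

S = []
for i in range(2):
    XBi = XB.copy()
    XBi[:, i] = XA[:, i]             # freeze input i at the A sample
    yBi = model(XBi)
    S.append(((yA * yBi).mean() - yA.mean() * yBi.mean()) / var)
```

Since Var(Y) = 5/12 with contributions 1/12 from X1 and 4/12 from X2, the estimates converge to S1 = 0.2 and S2 = 0.8, correctly ranking X2 as the more important input.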
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2018-03-26
In this paper we present a framework for the reduction and linking of physiologically based pharmacokinetic (PBPK) models with models of systems biology to describe the effects of drug administration across multiple scales. To address the issue of model complexity, we propose the reduction of each type of model separately prior to being linked. We highlight the use of balanced truncation in reducing the linear components of PBPK models, whilst proper lumping is shown to be efficient in reducing typically nonlinear systems biology type models. The overall methodology is demonstrated via two example systems; a model of bacterial chemotactic signalling in Escherichia coli and a model of extracellular regulatory kinase activation mediated via the extracellular growth factor and nerve growth factor receptor pathways. Each system is tested under the simulated administration of three hypothetical compounds; a strong base, a weak base, and an acid, mirroring the parameterisation of pindolol, midazolam, and thiopental, respectively. Our method can produce up to an 80% decrease in simulation time, allowing substantial speed-up for computationally intensive applications including parameter fitting or agent based modelling. The approach provides a straightforward means to construct simplified Quantitative Systems Pharmacology models that still provide significant insight into the mechanisms of drug action. Such a framework can potentially bridge pre-clinical and clinical modelling - providing an intermediate level of model granularity between classical, empirical approaches and mechanistic systems describing the molecular scale.
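Balanced truncation, highlighted above for the linear components of PBPK models, can be illustrated with a minimal sketch. This is a generic square-root implementation for an arbitrary stable LTI system, not the authors' code; the 6-state system below is a random placeholder.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable LTI system (A, B, C) to order r by square-root balanced truncation."""
    # Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0.
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    P, Q = (P + P.T) / 2, (Q + Q.T) / 2          # enforce symmetry numerically
    Lp, Lq = cholesky(P, lower=True), cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                    # s: Hankel singular values
    Si = np.diag(1.0 / np.sqrt(s[:r]))
    T = Lp @ Vt[:r].T @ Si                       # right projection matrix
    W = Lq @ U[:, :r] @ Si                       # left projection matrix (W.T @ T = I)
    return W.T @ A @ T, W.T @ B, C @ T

# Random stable 6-state example, reduced to 2 states.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) - 6.0 * np.eye(6)   # shifted to ensure stability
B = rng.standard_normal((6, 2))
C = rng.standard_normal((2, 6))
Ar, Br, Cr = balanced_truncation(A, B, C, r=2)
```

The Hankel singular values `s` quantify how much each balanced state contributes to the input-output behaviour, which guides the choice of the reduced order `r`.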
Behavioral Treatment of Children's Fears and Phobias: A Review.
ERIC Educational Resources Information Center
Morris, Richard J.; Kratochwill, Thomas R.
1985-01-01
An overview of the behaviorally-oriented fear reduction methods for children is presented. Systematic desensitization and related procedures, flooding-related therapies, contingency management approaches, modeling procedures, and self-control methods are discussed after reviewing normative and prevalence data regarding children's fears. Research…
Space logistics simulation: Launch-on-time
NASA Technical Reports Server (NTRS)
Nii, Kendall M.
1990-01-01
During 1989-1990 the Center for Space Construction developed the Launch-On-Time (L-O-T) Model to help assess and improve the likelihood of successfully supporting space construction requiring multiple logistics delivery flights. The model established a reference by which the L-O-T probability, and improvements to it, can be judged. The measure of improvement was chosen as the percent reduction in E(S(sub N)), the total expected amount of unscheduled 'hold' time. We have also previously developed an approach to determining the reduction in E(S(sub N)) by reducing some of the causes of unscheduled holds and increasing the speed at which the problems causing the holds may be 'fixed.' We provided a mathematical (binary linear programming) model for measuring the percent reduction in E(S(sub N)) given such improvements. In this presentation we exercise the model, draw conclusions about the methods used and the data available and needed, and suggest areas of improvement for 'real world' application of the model.
Interior noise control prediction study for high-speed propeller-driven aircraft
NASA Technical Reports Server (NTRS)
Rennison, D. C.; Wilby, J. F.; Marsh, A. H.; Wilby, E. G.
1979-01-01
An analytical model was developed to predict the noise levels inside propeller-driven aircraft during cruise at M = 0.8. The model was applied to three study aircraft with fuselages of different size (wide body, narrow body and small diameter) in order to determine the noise reductions required to achieve the goal of an A-weighted sound level which does not exceed 80 dB. The model was then used to determine noise control methods which could achieve the required noise reductions. Two classes of noise control treatments were investigated: add-on treatments which can be added to existing structures, and advanced concepts which would require changes to the fuselage primary structure. Only one treatment, a double wall with limp panel, provided the required noise reductions. Weight penalties associated with the treatment were estimated for the three study aircraft.
Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray
This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
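The core idea, a POD basis built from earlier solutions that accelerates later solves in the sequence, can be sketched minimally. This is not the paper's three-stage method; it only shows plain POD of previous solutions used to seed a conjugate-gradient iteration, with a random symmetric-positive-definite matrix as a stand-in.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(1)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # symmetric positive definite stand-in

# Snapshots: solutions of earlier systems in the sequence (varying right-hand sides).
S = np.column_stack([np.linalg.solve(A, rng.standard_normal(n)) for _ in range(10)])

# POD: dominant left singular vectors of the snapshot matrix span the recycled subspace.
Phi = np.linalg.svd(S, full_matrices=False)[0][:, :5]

# New right-hand side: a Galerkin solve in the POD subspace seeds the CG iteration.
b = rng.standard_normal(n)
y = np.linalg.solve(Phi.T @ A @ Phi, Phi.T @ b)
x0 = Phi @ y

x, info = cg(A, b, x0=x0)              # CG refines the POD seed to full accuracy
```

Because the Galerkin solve minimizes the A-norm error over the subspace (which contains the zero vector), the seed `x0` is never worse than a cold start in the energy norm.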
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
Constructive methods of invariant manifolds for kinetic problems
NASA Astrophysics Data System (ADS)
Gorban, Alexander N.; Karlin, Iliya V.; Zinovyev, Andrei Yu.
2004-06-01
The concept of the slow invariant manifold is recognized as the central idea underpinning a transition from micro to macro and model reduction in kinetic theories. We present the Constructive Methods of Invariant Manifolds for model reduction in physical and chemical kinetics, developed during the last two decades. The physical problem of reduced description is studied in the most general form as a problem of constructing the slow invariant manifold. The invariance conditions are formulated as a differential equation for a manifold immersed in the phase space (the invariance equation). The equation of motion for immersed manifolds is obtained (the film extension of the dynamics). Invariant manifolds are fixed points for this equation, and slow invariant manifolds are Lyapunov-stable fixed points; thus slowness is presented as stability. A collection of methods to derive analytically and to compute numerically the slow invariant manifolds is presented. Among them, iteration methods based on incomplete linearization, the relaxation method, and the method of invariant grids are developed. The systematic use of thermodynamic structures and of the quasi-chemical representation allows the construction of approximations that are in concordance with physical restrictions. The following examples of applications are presented: nonperturbative derivation of physically consistent hydrodynamics from the Boltzmann equation and from the reversible dynamics, for Knudsen numbers Kn∼1; construction of the moment equations for nonequilibrium media and their dynamical correction (instead of extension of the list of variables) to gain more accuracy in description of highly nonequilibrium flows; determination of molecular dimensions (as diameters of equivalent hard spheres) from experimental viscosity data; model reduction in chemical kinetics; derivation and numerical implementation of constitutive equations for polymeric fluids; the limits of macroscopic description for polymer molecules, etc.
Modelling and simulation of a heat exchanger
NASA Technical Reports Server (NTRS)
Xia, Lei; Deabreu-Garcia, J. Alex; Hartley, Tom T.
1991-01-01
Two models for two different control systems are developed for a parallel heat exchanger. First, by spatially lumping the heat exchanger dynamics, a good approximate model of high system order is produced. Model reduction techniques are then applied to obtain low-order models suitable for dynamic analysis and control design. The simulation method is discussed to ensure valid simulation results.
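The spatial-lumping step can be sketched concretely: an illustrative parallel-flow heat exchanger discretised into finite cells per stream, with upwind advection and inter-stream heat transfer. The flow and heat-transfer parameters below are hypothetical placeholders, not values from the report.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parallel-flow heat exchanger, lumped into N cells per stream.
N = 20
v, h = 1.0, 0.5              # flow speed and inter-stream heat-transfer coefficient
dx = 1.0 / N
T_in1, T_in2 = 100.0, 20.0   # inlet temperatures of the hot and cold streams

def rhs(t, T):
    T1, T2 = T[:N], T[N:]
    up1 = np.concatenate(([T_in1], T1[:-1]))   # upstream neighbour, stream 1
    up2 = np.concatenate(([T_in2], T2[:-1]))   # upstream neighbour, stream 2
    dT1 = -v * (T1 - up1) / dx + h * (T2 - T1)
    dT2 = -v * (T2 - up2) / dx + h * (T1 - T2)
    return np.concatenate([dT1, dT2])

sol = solve_ivp(rhs, (0.0, 10.0), np.full(2 * N, 20.0), rtol=1e-8)
T1_out, T2_out = sol.y[N - 1, -1], sol.y[-1, -1]
```

The resulting 2N-state linear model is exactly the kind of high-order lumped approximation to which model reduction would then be applied.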
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J; Followill, D; Howell, R
2015-06-15
Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and three metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals.
Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.
Aghaie, A; Pourfatollah, A A; Bathaie, S Z; Moazzeni, S M; Khorsand Mohammad Pour, H; Sharifi, Z
2008-01-01
The safety of plasma derived medicinal products, such as immunoglobulin, depends on viral inactivation steps that are incorporated into the production process. Several attempts have been made to validate the effectiveness of these inactivation methods against a range of physicochemically diverse viruses. Treatment with solvent/detergent (S/D) and pasteurization (P) has been continuously used in our IgG production and these methods were analysed in this study as models of viral inactivation. Bovine Viral Diarrhoea Virus (BVDV), Herpes Simplex Virus (HSV) and Vesicular Stomatitis Virus (VSV) were employed as models of HCV, HBV and HIV respectively. Polio and Reo viruses were also used as viruses resistant to chemical treatment. The infectivity of a range of viruses before and after treatment with two methods of viral inactivation was measured by end point titration and their effectiveness expressed as Logarithmic Reduction Factors (LRF). Solvent/detergent treatment reduced the amount of enveloped viruses by 5-6 logs. The reduction factor was between 5-6 logs for all viruses used in the pasteurization process. A final log reduction factor was obtained as the sum of the two individual methods. Both inactivation methods have advantages and disadvantages with respect to their ability to inactivate viruses. Thus, the combination of two robust virus inactivation steps, solvent/detergent and pasteurization, increases the safety margin of immunoglobulin preparations.
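The logarithmic reduction factor arithmetic can be made concrete; the titres below are hypothetical round numbers, not the study's measurements.

```python
import math

def log_reduction_factor(titer_before, titer_after):
    """LRF = log10 of the infectivity titre before vs. after an inactivation step."""
    return math.log10(titer_before / titer_after)

# Hypothetical titres (infectious units per mL) for one model virus.
lrf_sd = log_reduction_factor(1e7, 1e1)      # solvent/detergent step: 6 logs
lrf_past = log_reduction_factor(1e7, 1e2)    # pasteurization step: 5 logs

# The overall reduction factor is the sum of the individual steps.
total_lrf = lrf_sd + lrf_past                # → 11.0
```

Summing per-step LRFs is valid because the steps act multiplicatively on the titre, which is additive on the log scale.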
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and a compiled programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output, can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
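The antithetic-variates technique can be sketched on a toy outcome model. The monotone transform below is a placeholder, not the UKPDS 68 equations: pairing each uniform draw u with 1 − u yields negatively correlated outcomes whose pairwise means have lower variance than independent draws.

```python
import numpy as np

rng = np.random.default_rng(42)

def outcome(u):
    """Toy stand-in for a simulated QALY outcome (monotone in the random draw)."""
    return 10.0 * (1.0 - np.exp(-2.0 * u))

n = 100_000
u = rng.random(n)

# Standard Monte Carlo: n independent draws.
plain = outcome(rng.random(n))

# Antithetic variates: each pair (u, 1 - u) is averaged; for a monotone outcome
# the two halves are negatively correlated, so noise partially cancels.
anti = 0.5 * (outcome(u) + outcome(1.0 - u))
```

Both estimators target the same mean, but the antithetic sample needs fewer replications for the same precision, mirroring the replication reduction reported above.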
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2018-05-01
In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields with sufficient accuracy for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
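Standard POD reduction, the baseline the paper extends, can be sketched for a 1D transient model. The diffusion model and parameters below are illustrative placeholders, and the boundary-subspace splitting proposed in the paper is not implemented here.

```python
import numpy as np

# Illustrative 1D transient groundwater head model: backward-Euler diffusion
# between two fixed-head (Dirichlet) boundaries. All values are arbitrary.
n, dt, steps = 100, 0.01, 50
x = np.linspace(0.0, 1.0, n)
L = np.zeros((n, n))                      # discrete Laplacian, zero-head ends
for i in range(1, n - 1):
    L[i, i - 1], L[i, i], L[i, i + 1] = 1.0, -2.0, 1.0
L *= (n - 1) ** 2
M = np.eye(n) - dt * L                    # backward-Euler system matrix

h0 = np.sin(np.pi * x) + 0.5 * np.sin(2 * np.pi * x) + 0.2 * np.sin(3 * np.pi * x)
h, snaps = h0.copy(), []
for _ in range(steps):                    # full-order model: collect snapshots
    h = np.linalg.solve(M, h)
    snaps.append(h.copy())

# POD basis: dominant left singular vectors of the snapshot matrix.
Phi = np.linalg.svd(np.column_stack(snaps), full_matrices=False)[0][:, :3]

# Galerkin projection: time stepping in the 3-dimensional POD subspace.
Mr = Phi.T @ M @ Phi
hr = Phi.T @ h0
for _ in range(steps):
    hr = np.linalg.solve(Mr, hr)
err = np.linalg.norm(Phi @ hr - h) / np.linalg.norm(h)
```

Here the reduced model replaces 100 unknowns per step with 3, and because the snapshots span only three diffusion modes the reconstruction error `err` is near machine precision; realistic models trade some accuracy for the speed-up.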
Cohen, Trevor; Schvaneveldt, Roger; Widdows, Dominic
2010-04-01
The discovery of implicit connections between terms that do not occur together in any scientific document underlies the model of literature-based knowledge discovery first proposed by Swanson. Corpus-derived statistical models of semantic distance such as Latent Semantic Analysis (LSA) have been evaluated previously as methods for the discovery of such implicit connections. However, LSA in particular is dependent on a computationally demanding method of dimension reduction as a means to obtain meaningful indirect inference, limiting its ability to scale to large text corpora. In this paper, we evaluate the ability of Random Indexing (RI), a scalable distributional model of word associations, to draw meaningful implicit relationships between terms in general and biomedical language. Proponents of this method have achieved comparable performance to LSA on several cognitive tasks while using a simpler and less computationally demanding method of dimension reduction than LSA employs. In this paper, we demonstrate that the original implementation of RI is ineffective at inferring meaningful indirect connections, and evaluate Reflective Random Indexing (RRI), an iterative variant of the method that is better able to perform indirect inference. RRI is shown to lead to more clearly related indirect connections and to outperform existing RI implementations in the prediction of future direct co-occurrence in the MEDLINE corpus. 2009 Elsevier Inc. All rights reserved.
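A minimal sketch of Random Indexing on a toy corpus (the documents, dimensionality, and seed counts are placeholders): sparse ternary index vectors are summed into context vectors, and indirect similarity emerges between terms that never co-occur in the same document.

```python
import numpy as np

rng = np.random.default_rng(7)
dim, nnz = 512, 8    # vector dimensionality and number of nonzero seed entries

def index_vector():
    """Sparse ternary random index vector: a few +1/-1 entries, rest zero."""
    v = np.zeros(dim)
    pos = rng.choice(dim, size=nnz, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=nnz)
    return v

docs = [
    "heart attack myocardial infarction".split(),
    "myocardial infarction chest pain".split(),
    "stock market crash".split(),
    "market crash economic loss".split(),
]
terms = sorted({t for d in docs for t in d})
index = {t: index_vector() for t in terms}

# Context vector of a term: sum of index vectors of co-occurring terms.
context = {t: np.zeros(dim) for t in terms}
for d in docs:
    for t in d:
        for u in d:
            if u != t:
                context[t] += index[u]

def sim(a, b):
    va, vb = context[a], context[b]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))
```

"heart" and "pain" never appear in the same document, yet their context vectors share the index vectors of "myocardial" and "infarction", so `sim("heart", "pain")` is high while `sim("heart", "market")` stays near zero: exactly the kind of indirect inference the abstract discusses.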
A new drilling method-Earthworm-like vibration drilling.
Wang, Peng; Ni, Hongjian; Wang, Ruihe
2018-01-01
The load transfer difficulty caused by borehole wall friction severely limits the penetration rate and extended-reach limit of complex structural wells. A new friction reduction technology termed "earthworm-like drilling" is proposed in this paper to improve the load transfer of complex structural wells. A mathematical model based on a "soft-string" model is developed and solved. The results show that earthworm-like drilling is more effective than single-point vibration drilling. The amplitude and frequency of the pulse pressure and the installation position of the shakers have a substantial impact on friction reduction and load transfer. An optimization model based on the projection gradient method is developed and used to optimize the position of three shakers in a horizontal well. The results verify the feasibility and advantages of earthworm-like drilling, and establish a solid theoretical foundation for its application in oil field drilling.
Nasadem Global Elevation Model: Methods and Progress
NASA Astrophysics Data System (ADS)
Crippen, R.; Buckley, S.; Agram, P.; Belz, E.; Gurrola, E.; Hensley, S.; Kobrick, M.; Lavalle, M.; Martin, J.; Neumann, M.; Nguyen, Q.; Rosen, P.; Shimada, J.; Simard, M.; Tung, W.
2016-06-01
NASADEM is a near-global elevation model that is being produced primarily by completely reprocessing the Shuttle Radar Topography Mission (SRTM) radar data and then merging it with refined ASTER GDEM elevations. The new and improved SRTM elevations in NASADEM result from better vertical control of each SRTM data swath via reference to ICESat elevations and from SRTM void reductions using advanced interferometric unwrapping algorithms. Remnant voids will be filled primarily by GDEM3, but with reduction of GDEM glitches (mostly related to clouds) and therefore with only minor need for secondary sources of fill.
Ensemble Learning Method for Outlier Detection and its Application to Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Nun, Isadora; Protopapas, Pavlos; Sim, Brandon; Chen, Wesley
2016-09-01
Outlier detection is necessary for automated data analysis, with specific applications spanning almost every domain from financial markets to epidemiology to fraud detection. We introduce a novel mixture of the experts outlier detection model, which uses a dynamically trained, weighted network of five distinct outlier detection methods. After dimensionality reduction, individual outlier detection methods score each data point for “outlierness” in this new feature space. Our model then uses dynamically trained parameters to weigh the scores of each method, allowing for a finalized outlier score. We find that the mixture of experts model performs, on average, better than any single expert model in identifying both artificially and manually picked outliers. This mixture model is applied to a data set of astronomical light curves, after dimensionality reduction via time series feature extraction. Our model was tested using three fields from the MACHO catalog and generated a list of anomalous candidates. We confirm that the outliers detected using this method belong to rare classes, like Novae, He-burning, and red giant stars; other outlier light curves identified have no available information associated with them. To elucidate their nature, we created a website containing the light-curve data and information about these objects. Users can attempt to classify the light curves, give conjectures about their identities, and sign up for follow up messages about the progress made on identifying these objects. This user-submitted data can be used to further train our mixture of experts model. Our code is publicly available to all who are interested.
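The score-combination step of a mixture-of-experts detector can be sketched with two simple experts and fixed weights. The paper trains the weights dynamically and uses five methods; the data, experts, and weights here are synthetic illustrations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Feature space after dimensionality reduction: inliers plus five planted outliers.
X = rng.standard_normal((300, 5))
X[:5] += 6.0                     # indices 0-4 are shifted far from the cloud

def score_distance(X):
    """Expert 1: distance to the centroid."""
    return np.linalg.norm(X - X.mean(axis=0), axis=1)

def score_knn(X, k=10):
    """Expert 2: mean distance to the k nearest neighbours."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    return np.sort(D, axis=1)[:, 1:k + 1].mean(axis=1)

# Each expert scores every point; z-normalise so the scores are comparable.
S = np.column_stack([score_distance(X), score_knn(X)])
S = (S - S.mean(axis=0)) / S.std(axis=0)

# Mixture of experts: weighted sum of per-method scores (weights fixed here,
# dynamically trained in the paper).
w = np.array([0.5, 0.5])
final = S @ w
top5 = set(np.argsort(final)[-5:])   # highest combined outlier scores
```

Combining normalised scores lets methods with different scales vote on the same final ranking, which is the mechanism by which the mixture outperforms any single expert.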
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sommer, A., E-mail: a.sommer@lte.uni-saarland.de; Farle, O., E-mail: o.farle@lte.uni-saarland.de; Dyczij-Edlinger, R., E-mail: edlinger@lte.uni-saarland.de
2015-10-15
This paper presents a fast numerical method for computing certified far-field patterns of phased antenna arrays over broad frequency bands as well as wide ranges of steering and look angles. The proposed scheme combines finite-element analysis, dual-corrected model-order reduction, and empirical interpolation. To assure the reliability of the results, improved a posteriori error bounds for the radiated power and directive gain are derived. Both the reduced-order model and the error-bounds algorithm feature offline–online decomposition. A real-world example is provided to demonstrate the efficiency and accuracy of the suggested approach.
NASA Technical Reports Server (NTRS)
Yam, Y.; Lang, J. H.; Johnson, T. L.; Shih, S.; Staelin, D. H.
1983-01-01
A model reduction procedure based on aggregation with respect to sensor and actuator influences rather than modes is presented for large systems of coupled second-order differential equations. Perturbation expressions which can predict the effects of spillover on both the aggregated and residual states are derived. These expressions lead to the development of control system design constraints which are sufficient to guarantee, to within the validity of the perturbations, that the residual states are not destabilized by control systems designed from the reduced model. A numerical example is provided to illustrate the application of the aggregation and control system design method.
Application of Lanczos vectors to control design of flexible structures
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.; Su, Tzu-Jeng
1990-01-01
This report covers research conducted during the first year of the two-year grant. The research, entitled 'Application of Lanczos Vectors to Control Design of Flexible Structures' concerns various ways to obtain reduced-order mathematical models for use in dynamic response analyses and in control design studies. This report summarizes research described in several reports and papers that were written under this contract. Extended abstracts are presented for technical papers covering the following topics: controller reduction by preserving impulse response energy; substructuring decomposition and controller synthesis; model reduction methods for structural control design; and recent literature on structural modeling, identification, and analysis.
Davidson, Shaun M; Docherty, Paul D; Murray, Rua
2017-03-01
Parameter identification is an important and widely used process across the field of biomedical engineering. However, it is susceptible to a number of potential difficulties, such as parameter trade-off, causing premature convergence at non-optimal parameter values. The proposed Dimensional Reduction Method (DRM) addresses this issue by iteratively reducing the dimension of hyperplanes where trade-off occurs, and running subsequent identification processes within these hyperplanes. The DRM was validated using clinical data to optimize 4 parameters of the widely used Bergman Minimal Model of glucose and insulin kinetics, as well as in-silico data to optimize 5 parameters of the Pulmonary Recruitment (PR) Model. Results were compared with the popular Levenberg-Marquardt (LMQ) Algorithm using a Monte-Carlo methodology, with both methods afforded equivalent computational resources. The DRM converged to a lower or equal residual value in all tests run using the Bergman Minimal Model and actual patient data. For the PR model, the DRM attained significantly lower overall median parameter error values and lower residuals in the vast majority of tests. This shows the DRM has potential to provide better resolution of optimum parameter values for the variety of biomedical models in which significant levels of parameter trade-off occur. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Denis-Bacelar, Ana M.; Chittenden, Sarah J.; Murray, Iain; Divoli, Antigoni; McCready, V. Ralph; Dearnaley, David P.; O'Sullivan, Joe M.; Johnson, Bernadette; Flux, Glenn D.
2017-04-01
Skeletal tumour burden is a biomarker of prognosis and survival in cancer patients. This study proposes a novel method based on the linear quadratic model to predict the reduction in metastatic tumour burden as a function of the absorbed doses delivered from molecular radiotherapy treatments. The range of absorbed doses necessary to eradicate all the bone lesions and to reduce the metastatic burden was investigated in a cohort of 22 patients with bone metastases from castration-resistant prostate cancer. A metastatic burden reduction curve was generated for each patient, which predicts the reduction in metastatic burden as a function of the patient mean absorbed dose, defined as the mean of all the lesion absorbed doses in any given patient. In the patient cohort studied, the median of the patient mean absorbed dose predicted to reduce the metastatic burden by 50% was 89 Gy (interquartile range: 83-105 Gy), whilst a median of 183 Gy (interquartile range: 107-247 Gy) was found necessary to eradicate all metastases in a given patient. The absorbed dose required to eradicate all the lesions was strongly correlated with the variability of the absorbed doses delivered to multiple lesions in a given patient (r = 0.98, P < 0.0001). The metastatic burden reduction curves showed a potential large reduction in metastatic burden for a small increase in absorbed dose in 91% of patients. The results indicate the range of absorbed doses required to potentially obtain a significant survival benefit. The metastatic burden reduction method provides a simple tool that could be used in routine clinical practice for patient selection and to indicate the required administered activity to achieve a predicted patient mean absorbed dose and reduction in metastatic tumour burden.
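The linear-quadratic machinery behind a metastatic burden reduction curve can be sketched. All radiosensitivity parameters, per-lesion absorbed doses, and lesion volumes below are hypothetical placeholders, not the cohort's values.

```python
import numpy as np

# Linear-quadratic cell survival; radiosensitivity values are illustrative only.
alpha, beta = 0.005, 2e-5        # Gy^-1 and Gy^-2 (hypothetical)

def surviving_fraction(dose):
    return np.exp(-alpha * dose - beta * dose ** 2)

# Hypothetical per-lesion absorbed doses (Gy) and lesion volumes (mL).
doses = np.array([60.0, 85.0, 110.0, 140.0])
volumes = np.array([4.0, 2.5, 1.0, 0.5])

def burden_reduction(scale=1.0):
    """Fractional reduction in metastatic burden when all lesion doses are
    scaled uniformly (a proxy for the patient mean absorbed dose)."""
    after = (volumes * surviving_fraction(scale * doses)).sum()
    return 1.0 - after / volumes.sum()

r1, r2 = burden_reduction(1.0), burden_reduction(2.0)
```

Sweeping the scale factor traces out a burden reduction curve for the patient; because lesions receive different doses, eradicating every lesion requires far more dose than halving the total burden, consistent with the spread reported above.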
Thomsen, Jakob Borup; Arp, Dennis Tideman; Carl, Jesper
2012-05-01
To investigate a novel method for sparing urethra in external beam radiotherapy of prostate cancer and to evaluate the efficacy of such a treatment in terms of tumour control using a mathematical model. This theoretical study includes 20 patients previously treated for prostate cancer using external beam radiotherapy. All patients had a Nickel-Titanium (Ni-Ti) stent inserted into the prostate part of urethra. The stent has been used during the treatment course as an internal marker for patient positioning prior to treatment. In this study the stent is used for delineating urethra while intensity modulated radiotherapy was used for lowering dose to urethra. Evaluation of the dose plans were performed using a tumour control probability model based on the concept of uniform equivalent dose. The feasibility of the urethra dose reduction method is validated and a reduction of about 17% is shown to be possible. Calculations suggest a nearly preserved tumour control probability. A new concept for urethra dose reduction is presented. The method relies on the use of a Ni-Ti stent as a fiducial marker combined with intensity modulated radiotherapy. Theoretical calculations suggest preserved tumour control. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Green, Christopher T.; Jurgens, Bryant; Zhang, Yong; Starn, Jeffrey; Singleton, Michael J.; Esser, Bradley K.
2016-01-01
Rates of oxygen and nitrate reduction are key factors in determining the chemical evolution of groundwater. Little is known about how these rates vary and covary in regional groundwater settings, as few studies have focused on regional datasets with multiple tracers and methods of analysis that account for effects of mixed residence times on apparent reaction rates. This study provides insight into the characteristics of residence times and rates of O2 reduction and denitrification (NO3− reduction) by comparing reaction rates using multi-model analytical residence time distributions (RTDs) applied to a data set of atmospheric tracers of groundwater age and geochemical data from 141 well samples in the Central Eastern San Joaquin Valley, CA. The RTD approach accounts for mixtures of residence times in a single sample to provide estimates of in-situ rates. Tracers included SF6, CFCs, 3H, He from 3H (tritiogenic He), 14C, and terrigenic He. Parameter estimation and multi-model averaging were used to establish RTDs with lower error variances than those produced by individual RTD models. The set of multi-model RTDs was used in combination with NO3− and dissolved gas data to estimate zero order and first order rates of O2 reduction and denitrification. Results indicated that O2 reduction and denitrification rates followed approximately log-normal distributions. Rates of O2 and NO3− reduction were correlated and, on an electron milliequivalent basis, denitrification rates tended to exceed O2 reduction rates. Estimated historical NO3− trends were similar to historical measurements. Results show that the multi-model approach can improve estimation of age distributions, and that relatively easily measured O2 rates can provide information about trends in denitrification rates, which are more difficult to estimate.
Evaluation of SSME test data reduction methods
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1994-01-01
Accurate prediction of hardware and flow characteristics within the Space Shuttle Main Engine (SSME) during transient and main-stage operation requires a significant integration of ground test data, flight experience, and computational models. The process of integrating SSME test measurements with physical model predictions is commonly referred to as data reduction. Uncertainties within both test measurements and simplified models of the SSME flow environment compound the data integration problem. The first objective of this effort was to establish an acceptability criterion for data reduction solutions. The second objective of this effort was to investigate the data reduction potential of the ROCETS (Rocket Engine Transient Simulation) simulation platform. A simplified ROCETS model of the SSME was obtained from the MSFC Performance Analysis Branch. This model was examined and tested for physical consistency. Two modules were constructed and added to the ROCETS library to independently check the mass and energy balances of selected engine subsystems including the low pressure fuel turbopump, the high pressure fuel turbopump, the low pressure oxidizer turbopump, the high pressure oxidizer turbopump, the fuel preburner, the oxidizer preburner, the main combustion chamber coolant circuit, and the nozzle coolant circuit. A sensitivity study was then conducted to determine the individual influences of forty-two hardware characteristics on fourteen high pressure region prediction variables as returned by the SSME ROCETS model.
The efficacy of serostatus disclosure for HIV Transmission risk reduction.
O'Connell, Ann A; Reed, Sandra J; Serovich, Julianne A
2015-02-01
Interventions to assist HIV+ persons in disclosing their serostatus to sexual partners can play an important role in curbing rates of HIV transmission among men who have sex with men (MSM). Based on the methods of Pinkerton and Galletly (AIDS Behav 11:698-705, 2007), we develop a mathematical probability model for evaluating effectiveness of serostatus disclosure in reducing the risk of HIV transmission and extend the model to examine the impact of serosorting. In baseline data from 164 HIV+ MSM participating in a randomized controlled trial of a disclosure intervention, disclosure is associated with a 45.0 % reduction in the risk of HIV transmission. Accounting for serosorting, a 61.2 % reduction in risk due to disclosure was observed in serodisconcordant couples. The reduction in risk for seroconcordant couples was 38.4 %. Evidence provided supports the value of serostatus disclosure as a risk reduction strategy in HIV+ MSM. Interventions to increase serostatus disclosure and that address serosorting behaviors are needed.
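The per-partnership transmission probability at the core of Pinkerton-Galletly-style models can be sketched as follows. This is a simplified illustration assuming a binomial per-act model with a per-act mixture for condom use; the per-act risk, act count, condom-use fractions, and condom efficacy below are hypothetical, not the study's calibrated values.

```python
def transmission_prob(alpha, n_acts, condom_use=0.0, condom_eff=0.9):
    """Cumulative probability of transmission over n_acts sexual acts,
    mixing protected and unprotected per-act risks (simplified)."""
    p_act = condom_use * alpha * (1 - condom_eff) + (1 - condom_use) * alpha
    return 1 - (1 - p_act) ** n_acts

# Hypothetical inputs: per-act risk, number of acts, and condom-use rates;
# disclosure is assumed to raise condom use
p_no_disc = transmission_prob(0.008, 50, condom_use=0.3)   # without disclosure
p_disc = transmission_prob(0.008, 50, condom_use=0.7)      # with disclosure
reduction = 1 - p_disc / p_no_disc
print(f"risk reduction from disclosure: {reduction:.1%}")
```

Extending the model to serosorting amounts to conditioning the per-act risk on partner serostatus before forming the same cumulative product.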
A novel description of FDG excretion in the renal system: application to metformin-treated models
NASA Astrophysics Data System (ADS)
Garbarino, S.; Caviglia, G.; Sambuceti, G.; Benvenuto, F.; Piana, M.
2014-05-01
This paper introduces a novel compartmental model describing the excretion of 18F-fluoro-deoxyglucose (FDG) in the renal system and a numerical method based on the maximum likelihood for its reduction. This approach accounts for variations in FDG concentration due to water re-absorption in renal tubules and the increase of the bladder’s volume during the FDG excretion process. From the computational viewpoint, the reconstruction of the tracer kinetic parameters is obtained by solving the maximum likelihood problem iteratively, using a non-stationary, steepest descent approach that explicitly accounts for the Poisson nature of nuclear medicine data. The reliability of the method is validated against two sets of synthetic data realized according to realistic conditions. Finally we applied this model to describe FDG excretion in the case of animal models treated with metformin. In particular we show that our approach allows the quantitative estimation of the reduction of FDG de-phosphorylation induced by metformin.
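A standard multiplicative maximum-likelihood (ML-EM) update for Poisson data, a simpler relative of the non-stationary steepest descent scheme described above, can be sketched on a toy mixing problem. The matrix and data below are hypothetical and stand in for the compartmental system matrix.

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Maximum-likelihood EM iterations for Poisson data y ~ Poisson(A @ x).
    The multiplicative update preserves nonnegativity of x."""
    x = np.ones(A.shape[1])
    norm = A.sum(axis=0)            # A^T 1, the sensitivity term
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12     # guard against division by zero
        x *= (A.T @ (y / proj)) / norm
    return x

# Toy two-compartment mixing matrix and noise-free synthetic counts
A = np.array([[0.8, 0.2],
              [0.3, 0.7],
              [0.5, 0.5]])
x_true = np.array([2.0, 5.0])
y = A @ x_true
x_est = mlem(A, y)
print(np.round(x_est, 2))
```

With noise-free, consistent data the iteration recovers the unique nonnegative solution; with real Poisson counts it converges to the maximum-likelihood estimate instead.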
NASA Astrophysics Data System (ADS)
Uno, Takanori; Ichikawa, Kouji; Mabuchi, Yuichi; Nakamura, Atsushi; Okazaki, Yuji; Asai, Hideki
In this paper, we studied the use of a common-mode noise reduction technique for in-vehicle electronic equipment in an actual instrument design. We improved the circuit model of the common-mode noise that flows to the wire harness by adding the effect of a bypass capacitor located near the LSI. We analyzed the improved circuit model using a circuit simulator and verified the effectiveness of the noise reduction condition derived from the model. It was also confirmed that offsetting the impedance mismatch in the PCB section requires a larger circuit constant than offsetting the impedance mismatch in the LSI section. An evaluation circuit board comprising an automotive microcomputer was prototyped, and experiments confirmed its common-mode noise reduction effect. The experimental results also revealed that the degree of impedance mismatch in the LSI section can be estimated by using a PCB having a known impedance. We further investigated the optimization of impedance parameters, which is difficult for actual products at present. To satisfy the noise reduction condition, which comprises numerous parameters, we proposed a design method using an optimization algorithm and an electromagnetic field simulator, and confirmed its effectiveness.
Agarwal, Sanjiv; Fulgoni, Victor L; Spence, Lisa; Samuel, Priscilla
2015-11-01
Limiting dietary sodium intake has been a consistent dietary recommendation. Using NHANES 2007-2010 data, we estimated current sodium intake and modeled the potential impact of a new sodium reduction technology on sodium intake. NHANES 2007-2010 data were used to assess current sodium intake. The National Cancer Institute method was used for usual intake determination. Suggested sodium reductions using SODA-LO® Salt Microspheres ranged from 20% to 30% in 953 foods, and usual intakes were modeled by using various reduction factors and levels of market penetration. SAS 9.2, SUDAAN 11, and NHANES survey weights were used in all calculations, with assessment across gender and age groups. Current (2007-2010) sodium intake (mg/day) exceeds recommendations across all age-gender groups and has not changed during the last decade. However, sodium intake measured as a function of food intake (mg/g food) has decreased significantly during the last decade. Two food categories contribute about two-thirds of total sodium intake: "Grain Products" and "Meat, Poultry, Fish & Mixtures". Sodium reduction, with 100% market penetration of the new technology, was estimated to be 230-300 mg/day, or 7-9% of intake, depending upon age and gender group. Sodium reduction innovations like SODA-LO® Salt Microspheres could contribute to meaningful reductions in sodium intake.
Human Health and Economic Impacts of Ozone Reductions by Income Group.
Saari, Rebecca K; Thompson, Tammy M; Selin, Noelle E
2017-02-21
Low-income households may be disproportionately affected by ozone pollution and ozone policy. We quantify how three factors affect the relative benefits of ozone policies with household income: (1) unequal ozone reductions; (2) policy delay; and (3) economic valuation methods. We model ozone concentrations under baseline and policy conditions across the full continental United States to estimate the distribution of ozone-related health impacts across nine income groups. We enhance an economic model to include these impacts across household income categories, and present its first application to evaluate the benefits of ozone reductions for low-income households. We find that mortality incidence rates decrease with increasing income. Modeled ozone levels yield a median of 11 deaths per 100 000 people in 2005. Proposed policy reduces these rates by 13%. Ozone reductions are highest among low-income households, which increases their relative welfare gains by up to 4% and decreases those of the rich by up to 8%. The median value of reductions in 2015 is $30 billion (in 2006 U.S. dollars) when reduced mortality risks are valued with willingness-to-pay, or $1 billion when valued as income from increased life expectancy. Ozone reductions were relatively twice as beneficial for the lowest-income compared to the highest-income households. The valuation approach affected benefits more than a policy delay or differential ozone reductions with income.
Numerical Implementation of the Cohesive Soil Bounding Surface Plasticity Model. Volume I.
1983-02-01
In this report from the Department of Civil Engineering, University of California, Davis, a study of various numerical means for implementing the bounding surface plasticity model for cohesive soils is presented. A comparison of implementation methods is made, covering solution methods and the reduction of the number of equations.
Kinetic study of nickel laterite reduction roasting by palm kernel shell charcoal
NASA Astrophysics Data System (ADS)
Sugiarto, E.; Putera, A. D. P.; Petrus, H. T. B. M.
2017-05-01
Demand to process nickel-bearing laterite ore increases as high-grade nickel-bearing sulfide ore is continuously depleted. Because nickel is commonly associated with iron, processing nickel laterite ore into nickel pig iron (NPI) has been developed by some industries. However, to achieve satisfactory nickel recoveries, the process requires massive consumption of high-grade metallurgical coke. Out of concern for the sustainability of coke supply and for net carbon emissions, the reduction of nickel laterite ore using a biomass-based reductant was studied. In this study, saprolitic nickel laterite ore was reduced by palm kernel shell charcoal at several temperatures (800-1000 °C). The biomass-to-laterite composition was also varied to study the reduction mechanism. X-ray diffraction and gravimetric analysis were applied to characterize the process and to identify a kinetic model of the reduction. The results show that palm kernel shell charcoal gives reduction results similar to the conventional method. Reduction, however, was carried out by carbon monoxide rather than by solid carbon. Regarding kinetics, the Ginstling-Brounshtein model predicts the reduction behaviour satisfactorily.
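The Ginstling-Brounshtein kinetic analysis can be sketched as follows: synthetic conversion data are generated from the model and the rate constant is recovered by a linear fit of g(x) against time. The rate constant and conversion fractions below are hypothetical, not the paper's measurements.

```python
import numpy as np

def ginstling_brounshtein(x):
    """Ginstling-Brounshtein diffusion-controlled model:
    g(x) = 1 - 2x/3 - (1-x)^(2/3), with g(x) = k * t."""
    return 1 - 2 * x / 3 - (1 - x) ** (2 / 3)

# Hypothetical conversion fractions from reduction roasting
x = np.array([0.05, 0.10, 0.20, 0.30])
k_true = 1.5e-3                          # assumed rate constant, 1/min
t = ginstling_brounshtein(x) / k_true    # synthetic reaction times

# Linear regression of g(x) on t recovers the rate constant as the slope
k_fit = np.polyfit(t, ginstling_brounshtein(x), 1)[0]
print(f"k = {k_fit:.2e} 1/min")
```

In practice the same linear fit would be repeated at each roasting temperature, and an Arrhenius plot of the fitted constants then yields the activation energy.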
Mastication noise reduction method for fully implantable hearing aid using piezo-electric sensor.
Na, Sung Dae; Lee, Gihyoun; Wei, Qun; Seong, Ki Woong; Cho, Jin Ho; Kim, Myoung Nam
2017-07-20
Fully implantable hearing devices (FIHDs) can be affected by biomechanical noise such as mastication noise. To reduce this noise, the mastication noise is measured with a piezo-electric sensor and attenuated based on the energy difference between signals. For the mastication experiment, an artificial skull model was constructed and instrumented with a piezo-electric sensor, which measures vibration signals better than other sensor types. A 1 kHz pure-tone sound from a standard speaker was applied to the model while the lower jawbone of the model was moved in a masticatory fashion. The correlation coefficients and signal-to-noise ratio (SNR) before and after application of the proposed method were compared. The SNR and correlation coefficient increased by 4.48 dB and 0.45, respectively. The noise was reduced using the proposed method implemented in MATLAB. In the future, an implantable microphone for real-time processing will be developed.
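The before/after SNR and correlation comparison can be illustrated with a minimal sketch, using a 1 kHz tone, synthetic broadband noise as a stand-in for mastication noise, and a crude FFT-domain mask in place of the paper's energy-difference method. All parameters are hypothetical.

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
clean = np.sin(2 * np.pi * 1000 * t)            # 1 kHz pure tone
rng = np.random.default_rng(0)
noise = 0.5 * rng.standard_normal(t.size)       # stand-in for mastication noise
noisy = clean + noise

def snr_db(ref, sig):
    """SNR of sig relative to a known reference, in dB."""
    err = sig - ref
    return 10 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2))

# Crude noise reduction stand-in: keep only a narrow band around 1 kHz
spec = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
mask = np.abs(freqs - 1000) < 50
denoised = np.fft.irfft(spec * mask, n=t.size)

corr_before = np.corrcoef(clean, noisy)[0, 1]
corr_after = np.corrcoef(clean, denoised)[0, 1]
print(f"SNR: {snr_db(clean, noisy):.1f} dB -> {snr_db(clean, denoised):.1f} dB, "
      f"corr: {corr_before:.2f} -> {corr_after:.2f}")
```

Both metrics improve because the mask discards the noise energy outside the tone's band, mirroring how the paper reports paired SNR and correlation gains.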
Singh, Jay; Chattterjee, Kalyan; Vishwakarma, C B
2018-01-01
Load frequency controllers have been designed for reduced-order models of single-area and two-area reheat hydro-thermal power systems through internal model control - proportional integral derivative (IMC-PID) control techniques. The controller design method is based on two-degree-of-freedom (2DOF) internal model control combined with a model-order reduction technique. Here, instead of the full-order system model, a reduced-order model is used for the 2DOF-IMC-PID design, and the designed controller is applied directly to the full-order system model. A logarithm-based model-order reduction technique is proposed to reduce the single-area and two-area high-order power systems for controller design. The proposed IMC-PID design based on the reduced-order model achieves good dynamic response and robustness against load disturbances on the original high-order system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
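As an illustration of model-order reduction for controller design, the following sketch applies standard balanced truncation, as a stand-in for the paper's logarithm-based method, which is not reproduced here, to a hypothetical stable system and checks DC-gain agreement against the classical error bound.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable LTI system (A, B, C) to order r by balanced truncation."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                      # s holds Hankel singular values
    T = Lc @ Vt.T @ np.diag(s ** -0.5)             # balancing transformation
    Ti = np.diag(s ** -0.5) @ U.T @ Lo.T           # its inverse
    return (Ti @ A @ T)[:r, :r], (Ti @ B)[:r], (C @ T)[:, :r], s

# Hypothetical stable 4th-order SISO system, reduced to order 2
rng = np.random.default_rng(1)
A = -np.diag([1.0, 2.0, 10.0, 20.0]) + 0.1 * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 1))
C = rng.standard_normal((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)

# DC gains of full and reduced models agree within 2 * (sigma_3 + sigma_4)
dc_full = (C @ np.linalg.solve(-A, B))[0, 0]
dc_red = (Cr @ np.linalg.solve(-Ar, Br))[0, 0]
print(f"DC gain: full {dc_full:.4f}, reduced {dc_red:.4f}")
```

Balanced truncation preserves stability and comes with an a priori H-infinity error bound, which is what makes "design on the reduced model, apply to the full model" defensible.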
Characteristics of Reduction Gear in Electric Agricultural Vehicle
NASA Astrophysics Data System (ADS)
Choi, W. S.; Pratama, P. S.; Supeno, D.; Jeong, S. W.; Byun, J. Y.; Woo, J. H.; Lee, E. S.; Park, C. S.
2018-03-01
In an electric agricultural machine, a reduction gear is needed to convert the high-speed rotation generated by the DC motor to the lower-speed rotation used by the vehicle. The reduction gear consists of several spur gears. Spur gears are the most easily visualized gears that transmit motion between two parallel shafts, and they are easy to produce. Modelling and simulation of the spur gears in a DC-motor reduction gear is important for predicting the actual motion behaviour. A pair of spur gear teeth in action is generally subjected to two types of cyclic stress: contact stress and bending stress. These stresses may not attain their maximum values at the same point of contact. The resulting fatigue failures can be minimized by analysing the problem during the design stage and creating a proper tooth surface profile with proper manufacturing methods. To improve gear life expectancy, modal and stress analyses of the reduction gear are simulated in this study using ANSYS Workbench, based on the finite element method (FEM). The modal analysis was done to understand the deformation behaviour of the reduction gear under vibration. FEM static stress analysis is also performed on the reduction gear to simulate the bending stress and contact stress behaviour of the gear teeth.
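The bending-stress side of such an analysis can be illustrated with the classical Lewis formula, a hand-calculation counterpart to the FEM results. The load, face width, module, and form factor below are hypothetical values, not taken from the paper.

```python
def lewis_bending_stress(w_t, face_width, module, form_factor):
    """Lewis formula (metric form): sigma = W_t / (b * m * Y).
    Gives stress in MPa when W_t is in N and b, m are in mm."""
    return w_t / (face_width * module * form_factor)

# Hypothetical reduction-gear tooth: 200 N tangential load, b = 10 mm,
# module m = 2 mm, Lewis form factor Y = 0.32
sigma = lewis_bending_stress(200.0, 10.0, 2.0, 0.32)
print(f"root bending stress = {sigma:.2f} MPa")
```

The FEM analysis refines this estimate by resolving the actual fillet geometry and load sharing, but the Lewis value is a useful sanity check on the simulated root stress.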
Method for Estimating Thread Strength Reduction of Damaged Parent Holes with Inserts
NASA Technical Reports Server (NTRS)
Johnson, David L.; Stratton, Troy C.
2005-01-01
During normal assembly and disassembly of bolted-joint components, thread damage and/or deformation may occur. If threads are overloaded, thread damage/deformation can also be anticipated. Typical inspection techniques (e.g. using GO-NO GO gages) may not provide adequate visibility of the extent of thread damage. More detailed inspection techniques have provided actual pitch-diameter profiles of damaged-hardware holes. A method to predict the reduction in thread shear-out capacity of damaged threaded holes has been developed. This method was based on testing and analytical modeling. Test samples were machined to simulate damaged holes in the hardware of interest. Test samples containing pristine parent-holes were also manufactured from the same bar-stock material to provide baseline results for comparison purposes. After the particular parent-hole thread profile was machined into each sample, a helical insert was installed into the threaded hole. These samples were tested in a specially designed fixture to determine the maximum load required to shear out the parent threads. It was determined from the pristine-hole samples that, for the specific material tested, each individual thread could resist an average load of 3980 pounds. The shear-out loads of the holes having modified pitch diameters were compared to the ultimate loads of the specimens with pristine holes. An equivalent number of missing helical coil threads was then determined based on the ratio of shear-out loads for each thread configuration. These data were compared with the results from a finite element model (FEM). The model gave insights into the ability of the thread loads to redistribute for both pristine and simulated damage configurations. In this case, it was determined that the overall potential reduction in thread load-carrying capability in the hardware of interest was equal to having up to three fewer threads in the hole that bolt threads could engage.
One-half of this potential reduction was due to local pitch-diameter variations, and the other half was due to overall pitch-diameter enlargement beyond a Class 2 fit. This result was important in that thread shear capacity was the limiting structural capability for this particular hardware design. The details of the method development, including the supporting testing, data reduction, and comparison with analytical model results, are discussed hereafter.
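The equivalent-missing-threads ratio described above reduces to a one-line calculation. In this sketch the 3980-pound per-thread load comes from the text, while the pristine and damaged shear-out loads are hypothetical illustration values.

```python
def equivalent_missing_threads(pristine_load, damaged_load, load_per_thread):
    """Express a shear-out load reduction as an equivalent number of
    missing threads, following the ratio approach described in the text."""
    return (pristine_load - damaged_load) / load_per_thread

# Hypothetical loads (lb); 3980 lb per thread is the measured average
n_missing = equivalent_missing_threads(39800.0, 27860.0, 3980.0)
print(f"equivalent missing threads: {n_missing:.1f}")
```

A damaged hole whose shear-out load drops by three thread-loads is thus treated, for strength accounting, as a pristine hole with three fewer engaged threads.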
NASA Astrophysics Data System (ADS)
Wang, Y. B.; Zhu, X. W.; Dai, H. H.
2016-08-01
Though widely used in modelling nano- and micro-structures, Eringen's differential model shows some inconsistencies, and a recent study has demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, further showing the advantages of the analytical results obtained. Additionally, the once-controversial nonlocal bar problem in the literature appears to be well resolved by the reduction method.
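The two-phase constitutive relation and the essence of the reduction can be sketched as follows, assuming the standard exponential Eringen kernel; sign conventions and the form of the constitutive boundary conditions vary across sources, so this is illustrative rather than the paper's exact formulation.

```latex
% Two-phase local/nonlocal constitutive law with exponential kernel:
\sigma(x) = \xi_1 E\,\varepsilon(x)
          + \xi_2 E \int_0^L \frac{1}{2\kappa}\, e^{-|x-s|/\kappa}\,\varepsilon(s)\,ds,
\qquad \xi_1 + \xi_2 = 1 .
% Because the kernel is the Green's function of (1 - \kappa^2\, d^2/dx^2),
% applying that operator to the bending moment
% M = EI\big(\xi_1 w'' + \xi_2 \int_0^L K\, w''\,ds\big) eliminates the integral:
M(x) - \kappa^2 M''(x) = EI\left( w''(x) - \xi_1 \kappa^2 w''''(x) \right),
% which, together with constitutive boundary conditions at x = 0 and x = L,
% is the differential form with mixed boundary conditions used for solution.
```

Solving this ODE for each loading case is what yields the explicit exact solutions, including for the cantilever.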
An, Yan; Zou, Zhihong; Li, Ranran
2014-01-01
A large number of parameters are acquired during practical water quality monitoring. If all the parameters are used in water quality assessment, the computational complexity will definitely increase. In order to reduce the input space dimensions, a fuzzy rough set was introduced to perform attribute reduction. Then, an attribute recognition theoretical model and the entropy method were combined to assess water quality in the Harbin reach of the Songhuajiang River in China. A dataset consisting of ten parameters was collected from January to October in 2012. The fuzzy rough set was applied to reduce the ten parameters to four parameters: BOD5, NH3-N, TP, and F. coli (Reduct A). Considering that DO is a usual parameter in water quality assessment, another reduct, including DO, BOD5, NH3-N, TP, TN, F, and F. coli (Reduct B), was obtained. The assessment results of Reduct B show good consistency with those of Reduct A, which means that DO is not always necessary to assess water quality. The results with attribute reduction are not exactly the same as those without attribute reduction, which can be attributed to the α value decided by subjective experience. The assessment with attribute reduction markedly reduces computational complexity, and its results are acceptable and reliable. The model proposed in this paper enhances the water quality assessment system. PMID:24675643
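The entropy method used for weighting can be sketched as follows: indicators whose values vary little across samples carry little information and receive near-zero weight. The monitoring matrix below is hypothetical, not the Songhuajiang data.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: rows are samples, columns are indicators.
    Assumes all entries are positive, benefit-normalized values."""
    P = X / X.sum(axis=0)                          # proportion per sample
    n = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy per indicator
    d = 1 - E                                      # degree of diversification
    return d / d.sum()                             # normalized weights

# Hypothetical normalized monitoring data: 4 samples x 3 indicators;
# the third indicator is constant and should get (near-)zero weight
X = np.array([[0.2, 0.9, 0.5],
              [0.4, 0.8, 0.5],
              [0.6, 0.1, 0.5],
              [0.8, 0.2, 0.5]])
w = entropy_weights(X)
print(np.round(w, 3))
```

These weights would then feed the attribute recognition model, so that uninformative indicators barely influence the water quality grade.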
Interval type-2 fuzzy PID controller for uncertain nonlinear inverted pendulum system.
El-Bardini, Mohammad; El-Nagar, Ahmad M
2014-05-01
In this paper, an interval type-2 fuzzy proportional-integral-derivative controller (IT2F-PID) is proposed for controlling an inverted pendulum on a cart system with an uncertain model. The proposed controller is designed using a new type-reduction method that we have proposed, called the simplified type-reduction method. The proposed IT2F-PID controller is able to handle the effect of structure uncertainties due to the structure of the interval type-2 fuzzy logic system (IT2-FLS). The results of the proposed IT2F-PID controller using the new type-reduction method are compared with those of an IT2F-PID controller using the uncertainty-bound method and of a type-1 fuzzy PID controller (T1F-PID). The simulation and practical results show that the performance of the proposed controller is significantly improved compared with the T1F-PID controller. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
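Type reduction can be illustrated with the closed-form Nie-Tan method, one well-known simplified alternative to iterative Karnik-Mendel type reduction; it is not necessarily the simplification proposed in the paper. The membership functions below are hypothetical.

```python
import numpy as np

def nie_tan_defuzzify(x, mu_lower, mu_upper):
    """Nie-Tan closed-form type reduction plus defuzzification for an
    interval type-2 fuzzy set: centroid of the averaged footprint."""
    mu = mu_lower + mu_upper
    return np.sum(x * mu) / np.sum(mu)

# Hypothetical interval type-2 set on [-1, 1]: triangular upper membership
# centered at 0.2, lower membership a scaled copy (footprint of uncertainty)
x = np.linspace(-1, 1, 201)
mu_upper = np.clip(1 - np.abs(x - 0.2), 0, 1)
mu_lower = 0.8 * mu_upper
y = nie_tan_defuzzify(x, mu_lower, mu_upper)
print(f"crisp output: {y:.3f}")
```

Because the membership peak is truncated asymmetrically by the domain edge, the crisp output lands slightly below the 0.2 peak; a closed form like this avoids the iteration that makes Karnik-Mendel costly inside a control loop.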
Beyramysoltan, Samira; Rajkó, Róbert; Abdollahi, Hamid
2013-08-12
The results obtained by soft-modeling multivariate curve resolution methods are often not unique and are questionable because of rotational ambiguity: a range of feasible solutions fit the experimental data equally well and fulfill the constraints. In the chemometric literature, surveying constraints useful for reducing rotational ambiguity remains a significant challenge. It is worthwhile to study the effects of applying constraints on the reduction of rotational ambiguity, since this can help in choosing which constraints to impose in multivariate curve resolution methods when analyzing data sets. In this work, we have investigated the effect of an equality constraint on decreasing the rotational ambiguity. To calculate all feasible solutions corresponding to a known spectrum, a novel systematic grid search method based on species-based particle swarm optimization is proposed for a three-component system. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Blumenthal, Brennan T.; Elmiligui, Alaa; Geiselhart, Karl A.; Campbell, Richard L.; Maughmer, Mark D.; Schmitz, Sven
2016-01-01
The present paper examines potential propulsive and aerodynamic benefits of integrating a Boundary-Layer Ingestion (BLI) propulsion system into a typical commercial aircraft using the Common Research Model (CRM) geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment is used to generate engine conditions for CFD analysis. Improvements to the BLI geometry are made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model are shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.4% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from Boundary-Layer Ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.
NASA Technical Reports Server (NTRS)
Blumenthal, Brennan
2016-01-01
This thesis will examine potential propulsive and aerodynamic benefits of integrating a boundary-layer ingestion (BLI) propulsion system with a typical commercial aircraft using the Common Research Model geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment will be used to generate engine conditions for CFD analysis. Improvements to the BLI geometry will be made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model are shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.3% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from boundary-layer ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.
Computational methods of robust controller design for aerodynamic flutter suppression
NASA Technical Reports Server (NTRS)
Anderson, L. R.
1981-01-01
The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth-order random examples. A literature review of robust controller design methods follows, which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
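The algebraic Riccati equation at the heart of Riccati iteration can, for small systems, be solved directly with SciPy; the sketch below uses a hypothetical 2-state regulator (not the turbofan model) and checks closed-loop stability of the resulting gain.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state LQR problem; SciPy's direct CARE solver stands in
# for the iterative Riccati solution technique discussed in the text
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)       # solves A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)            # optimal state-feedback gain
eig = np.linalg.eigvals(A - B @ K)         # closed-loop poles
print("closed-loop stable:", bool(np.all(eig.real < 0)))
```

For the large or ill-conditioned problems the report targets, iterative schemes with better numerical behavior replace this direct solve, but the residual check below applies to any solver.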
Model and Data Reduction for Control, Identification and Compressed Sensing
NASA Astrophysics Data System (ADS)
Kramer, Boris
This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations, which arise in optimal control, filtering and model reduction. We present a projection-based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solving AREs. Numerical examples for n ≥ 10^6 unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time-invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples: a mass-spring-damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter-dependent dynamical systems. We address this by using local parametric reduced-order models, which can be used online.
Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new, compressed sensing based classification algorithm is developed which incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise, and results in superior identification of the correct parameter. Numerical examples consist of a Navier-Stokes, as well as a Boussinesq flow application.
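The projection-based Riccati solver of Chapter 3 can be sketched as follows. This is an illustrative reconstruction, not the author's code: it uses a generic orthonormal basis `V` in place of POD modes, and the function names `projected_are` and `are_residual` are ours.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def projected_are(A, B, Q, V):
    """Galerkin projection of the continuous ARE onto range(V).

    Solve the small, projected ARE and lift the solution back:
    X ≈ V P V^T.  V must have orthonormal columns (e.g. POD modes
    extracted from solution samples of the large problem).
    """
    Ar = V.T @ A @ V
    Br = V.T @ B
    Qr = V.T @ Q @ V
    R = np.eye(B.shape[1])
    P = solve_continuous_are(Ar, Br, Qr, R)
    return V @ P @ V.T

def are_residual(A, B, Q, X):
    """Frobenius norm of A'X + XA - XBB'X + Q (zero at the exact solution)."""
    return np.linalg.norm(A.T @ X + X @ A - X @ B @ B.T @ X + Q)
```

With a full basis the projection is exact; with a truncated POD basis the residual measures the quality of the low-rank approximation.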
Isostable reduction with applications to time-dependent partial differential equations.
Wilson, Dan; Moehlis, Jeff
2016-07-01
Isostables and isostable reduction, analogous to isochrons and phase reduction for oscillatory systems, are useful in the study of nonlinear equations which asymptotically approach a stationary solution. In this work, we present a general method for isostable reduction of partial differential equations, with the potential power to reduce the dimensionality of a nonlinear system from infinity to 1. We illustrate the utility of this reduction by applying it to two different models with biological relevance. In the first example, isostable reduction of the Fokker-Planck equation provides the necessary framework to design a simple control strategy to desynchronize a population of pathologically synchronized oscillatory neurons, as might be relevant to Parkinson's disease. Another example analyzes a nonlinear reaction-diffusion equation with relevance to action potential propagation in a cardiac system.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
NASA Astrophysics Data System (ADS)
Nemati, Maedeh; Shateri Najaf Abady, Ali Reza; Toghraie, Davood; Karimipour, Arash
2018-01-01
The incorporation of different equations of state into a single-component multiphase lattice Boltzmann model is considered in this paper. The original pseudopotential model is first detailed, and several cubic equations of state, the Redlich-Kwong, Redlich-Kwong-Soave, and Peng-Robinson, are then incorporated into the lattice Boltzmann model. Numerical simulations are compared on the basis of density ratios and spurious currents to detail phase separation in these non-ideal single-component systems. The paper demonstrates that both the scheme for the inter-particle interaction force term and the method of incorporating the force term matter for accuracy and stability. Among the incorporation methods, the velocity shifting method is shown to give accurate and stable results. The Kupershtokh scheme also makes it possible to achieve large density ratios (up to 10^4) and to reproduce the coexistence curve with high accuracy. A significant reduction of the spurious currents at the vapor-liquid interface is another observation. The Redlich-Kwong-Soave and Peng-Robinson EOSs produced high density ratios and reduced spurious currents, in closer accordance with the Maxwell construction results.
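As a concrete example of the cubic equations of state mentioned above, here is the Redlich-Kwong pressure in lattice units. The constants a = 2/49, b = 2/21, R = 1 are common illustrative choices in the pseudopotential LB literature, not necessarily the values used in this paper.

```python
import numpy as np

def redlich_kwong_pressure(rho, T, a=2.0 / 49.0, b=2.0 / 21.0, R=1.0):
    """Redlich-Kwong equation of state p(rho, T).

    Below the critical temperature the isotherm is non-monotonic in
    density (a van der Waals-type loop), which is what drives phase
    separation in the pseudopotential model.
    """
    return (rho * R * T / (1.0 - b * rho)
            - a * rho ** 2 / (np.sqrt(T) * (1.0 + b * rho)))
```

In a pseudopotential simulation this p(rho, T) would be fed into the effective-mass (interaction potential) function; here it only illustrates the EOS itself.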
NASA Astrophysics Data System (ADS)
Migliorelli, Carolina; Alonso, Joan F.; Romero, Sergio; Mañanas, Miguel A.; Nowak, Rafał; Russi, Antonio
2016-04-01
Objective. Medically intractable epilepsy is a common condition that affects 40% of epileptic patients, who generally have to undergo resective surgery. Magnetoencephalography (MEG) has been increasingly used to identify the epileptogenic foci through equivalent current dipole (ECD) modeling, one of the most accepted methods to obtain an accurate localization of interictal epileptiform discharges (IEDs). Modeling requires that MEG signals are adequately preprocessed to reduce interferences, a task that has been greatly improved by the use of blind source separation (BSS) methods. MEG recordings are highly sensitive to metallic interferences originated inside the head by implanted intracranial electrodes, dental prostheses, etc., and also coming from external sources such as pacemakers or vagal stimulators. To reduce these artifacts, a BSS-based fully automatic procedure was recently developed and validated, showing an effective reduction of metallic artifacts in simulated and real signals (Migliorelli et al 2015 J. Neural Eng. 12 046001). The main objective of this study was to evaluate its effects on the detection of IEDs and ECD modeling in patients with focal epilepsy and metallic interference. Approach. A comparison between the resulting positions of ECDs was performed: without removing metallic interference; rejecting only channels with large metallic artifacts; and after BSS-based reduction. Measures of dispersion and distance of ECDs were defined to analyze the results. Main results. The relationship between the artifact-to-signal ratio and ECD fitting showed that higher values of metallic interference produced highly scattered dipoles. Results revealed a significant reduction in dispersion using the BSS-based reduction procedure, yielding feasible locations of ECDs in contrast to the other two approaches. Significance.
The automatic BSS-based method can be applied to MEG datasets affected by metallic artifacts as a processing step to improve the localization of epileptic foci.
A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2001-01-01
An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
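The response-surface idea in the abstract above can be illustrated with a drastically simplified 1-D stand-in: fit a cheap surrogate to sampled objective values, then minimize the surrogate instead of the expensive simulation. A least-squares quadratic replaces the paper's neural networks here, and the function names are ours.

```python
import numpy as np

def fit_quadratic_surface(x, y):
    """Least-squares quadratic response surface y ~ c0 + c1 x + c2 x^2,
    fitted to (design variable, objective) samples from a simulation."""
    A = np.column_stack([np.ones_like(x), x, x ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def surface_minimum(coef):
    """Minimizer of the fitted parabola (assumes coef[2] > 0)."""
    return -coef[1] / (2.0 * coef[2])
```

A design loop would re-sample near this predicted optimum and refit, traversing the design space through a sequence of such surfaces as the abstract describes.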
Real-time data reduction capabilities at the Langley 7 by 10 foot high speed tunnel
NASA Technical Reports Server (NTRS)
Fox, C. H., Jr.
1980-01-01
The 7 by 10 foot high speed tunnel performs a wide range of tests employing a variety of model installation methods. To support the reduction of static data from this facility, a generalized wind tunnel data reduction program had been developed for use on the Langley central computer complex. The capabilities of a version of this generalized program adapted for real time use on a dedicated on-site computer are discussed. The input specifications, instructions for the console operator, and full descriptions of the algorithms are included.
Study on Noise Prediction Model and Control Schemes for Substation
Gao, Yang; Liu, Songtao
2014-01-01
With the government's emphasis on environmental issues of power transmission and transformation projects, noise pollution has become a prominent problem. The noise from working transformers, reactors, and other electrical equipment in a substation has a negative effect on the ambient environment. This paper focuses on using acoustic simulation software to predict and control substation noise. According to the characteristics of substation noise and the techniques of noise reduction, a substation's acoustic field model was established with the SoundPLAN software to predict the extent of substation noise. On this basis, four noise control schemes were proposed to provide helpful references for noise control during a new substation's design and construction process. The feasibility and application effect of these control schemes were verified using simulation modeling. The simulation results show that the substation always exceeds noise limits at its boundary under conventional measures. The excess noise can be efficiently reduced by taking the corresponding noise reduction methods. PMID:24672356
Reduction of collisional-radiative models for transient, atomic plasmas
NASA Astrophysics Data System (ADS)
Abrantes, Richard June; Karagozian, Ann; Bilyeu, David; Le, Hai
2017-10-01
Interactions between plasmas and any radiation field, whether from lasers or plasma emissions, introduce many computational challenges. One of these challenges involves resolving the atomic physics, which can influence other physical phenomena in the radiated system. In this work, a collisional-radiative (CR) model with reduction capabilities is developed to capture the atomic physics at a reduced computational cost. Although the model is designed to accommodate any element, it is currently supplemented by LANL's argon database, which includes the relevant collisional and radiative processes for all of the ionic stages. Using the detailed data set as the true solution, reduction mechanisms in the form of Boltzmann grouping, uniform grouping, and quasi-steady-state (QSS) are implemented and compared against it. Effects of the grouping methods on the transient plasma are compared. Distribution A: Approved for public release; unlimited distribution, PA (Public Affairs) Clearance Number 17449. This work was supported by the Air Force Office of Scientific Research (AFOSR), Grant Number 17RQCOR463 (Dr. Jason Marshall).
NASA Astrophysics Data System (ADS)
Schulz, Wolfgang; Hermanns, Torsten; Al Khawli, Toufik
2017-07-01
Decision making for competitive production in high-wage countries is a daily challenge in which rational and irrational methods are used. The design of decision-making processes is an intriguing, discipline-spanning science. However, there are gaps in understanding the impact of the known mathematical and procedural methods on the usage of rational choice theory. Benjamin Franklin's rule for decision making, formulated in London in 1772 and called "Prudential Algebra" in the sense of prudential reasons, already contains one of the major ingredients of Meta-Modelling: a single algebraic value labelling the results (criteria settings) of alternative decisions (parameter settings). This work describes advances in Meta-Modelling techniques applied to multi-dimensional and multi-criteria optimization by identifying the persistence level of the corresponding Morse-Smale complex. Implementations for laser cutting and laser drilling are presented, including the generation of fast and frugal Meta-Models with controlled error based on mathematical model reduction. Reduced Models are derived to avoid any unnecessary complexity. Both model reduction and analysis of the multi-dimensional parameter space are used to enable interactive communication between Discovery Finders and Invention Makers. Emulators and visualizations of a metamodel are introduced as components of Virtual Production Intelligence, making the methods of Scientific Design Thinking applicable and getting the developer as well as the operator more skilled.
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
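The sketching idea — compress the observations with a random matrix before solving — can be illustrated on an over-determined linear inverse problem. This is a toy stand-in for RGA (which sketches within a geostatistical formulation), and the function name is ours.

```python
import numpy as np

def sketched_lstsq(G, d, k, seed=0):
    """Approximately solve min ||G m - d|| via a random Gaussian sketch.

    S is a k x n_obs sketching matrix with k << n_obs; the reduced
    problem min ||S G m - S d|| retains the information content of the
    observations while its cost scales with k instead of n_obs.
    """
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((k, G.shape[0])) / np.sqrt(k)
    m, *_ = np.linalg.lstsq(S @ G, S @ d, rcond=None)
    return m
```

For consistent (noise-free) data the sketched solution matches the full least-squares solution whenever the sketched operator keeps full column rank.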
NASA Technical Reports Server (NTRS)
Koshak, William; Krider, E. Philip; Murray, Natalie; Boccippio, Dennis
2007-01-01
A "dimensional reduction" (DR) method is introduced for analyzing lightning field changes whereby the number of unknowns in a discrete two-charge model is reduced from the standard eight to just four. The four unknowns are found by performing a numerical minimization of a chi-squared goodness-of-fit function. At each step of the minimization, an Overdetermined Fixed Matrix (OFM) method is used to immediately retrieve the best "residual source". In this way, all 8 parameters are found, yet a numerical search of only 4 parameters is required. The inversion method is applied to the understanding of lightning charge retrievals. The accuracy of the DR method has been assessed by comparing retrievals with data provided by the Lightning Detection And Ranging (LDAR) instrument. Because lightning effectively deposits charge within thundercloud charge centers and because LDAR traces the geometrical development of the lightning channel with high precision, the LDAR data provides an ideal constraint for finding the best model charge solutions. In particular, LDAR data can be used to help determine both the horizontal and vertical positions of the model charges, thereby eliminating dipole ambiguities. The results of the LDAR-constrained charge retrieval method have been compared to the locations of optical pulses/flash locations detected by the Lightning Imaging Sensor (LIS).
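The structure of the DR method — search numerically over a few nonlinear parameters while the remaining linear parameters are recovered immediately at each step by least squares — is a separable (variable-projection-style) fit. The sketch below illustrates that structure on a generic model; the functions and the exponential basis are hypothetical illustrations, not the lightning charge model of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def separable_fit(basis, x, y, theta0):
    """Fit y ~ basis(theta, x) @ amp by searching only over theta.

    At every step of the nonlinear search the linear amplitudes
    (the analogue of the OFM 'residual source' retrieval) are found
    directly by least squares, so the numerical minimization runs
    over far fewer unknowns than the full model has.
    """
    def chi2(theta):
        G = basis(theta, x)                       # design matrix for theta
        amp, *_ = np.linalg.lstsq(G, y, rcond=None)
        return float(np.sum((y - G @ amp) ** 2))  # goodness-of-fit
    res = minimize(chi2, theta0, method="Nelder-Mead")
    G = basis(res.x, x)
    amp, *_ = np.linalg.lstsq(G, y, rcond=None)
    return res.x, amp
```

With a single decay-rate parameter and one amplitude, a 2-parameter fit reduces to a 1-parameter search, mirroring the 8-to-4 reduction described above.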
A method to assess the potential effects of air pollution mitigation on healthcare costs.
Sætterstrøm, Bjørn; Kruse, Marie; Brønnum-Hansen, Henrik; Bønløkke, Jakob Hjort; Flachs, Esben Meulengracht; Sørensen, Jan
2012-01-01
The aim of this study was to develop a method to assess the potential effects of air pollution mitigation on healthcare costs and to apply this method to assess the potential savings related to a reduction in fine particulate matter in Denmark. The effects of air pollution on health were used to identify "exposed" individuals (i.e., cases). Coronary heart disease, stroke, chronic obstructive pulmonary disease, and lung cancer were considered to be associated with air pollution. We used propensity score matching, two-part estimation, and Lin's method to estimate healthcare costs. Subsequently, we multiplied the number of cases saved by mitigation by the healthcare costs to arrive at an estimate of healthcare cost savings. The potential cost saving in the healthcare system arising from a modelled reduction in air pollution was estimated at €0.1-2.6 million per 100,000 inhabitants for the four diseases. We have illustrated an application of a method to assess the potential changes in healthcare costs due to a reduction in air pollution. The method relies on a large volume of administrative data and combines a number of established methods for epidemiological analysis.
Masoudi, Reza; Soleimani, Mohammad Ali; Yaghoobzadeh, Ameneh; Baraz, Shahram; Hakim, Ashrafalsadat; Chan, Yiong H
2017-01-01
Education is a fundamental component for patients with diabetes to achieve good glycemic control. In addition, selecting the appropriate method of education is one of the most effective factors in the quality of life. The present study aimed to evaluate the effect of face-to-face education, problem-based learning, and the Goldstein systematic training model on the quality of life (QOL) and fatigue among caregivers of patients with diabetes. This randomized clinical trial was conducted in Hajar Hospital (Shahrekord, Iran) in 2012. The study subjects consisted of 105 family caregivers of patients with diabetes. The participants were randomly assigned to three intervention groups (35 caregivers in each group). For each group, 5-h training sessions were held separately. QOL and fatigue were evaluated immediately before and after the intervention, and after 1, 2, 3, and 4 months of intervention. There was a significant increase in QOL for all three groups. Both the problem-based learning and the Goldstein method showed desirable QOL improvement over time. The most effective educational intervention for fatigue reduction during the 4-month post-intervention period was the Goldstein method. A significant reduction in fatigue was observed in all three groups after the intervention (P < 0.001). The results of the present study illustrate that problem-based learning and the Goldstein systematic training model improve the QOL of caregivers of patients with diabetes. In addition, the Goldstein systematic training model had the greatest effect on the reduction of fatigue within 4 months of the intervention.
2015-06-01
and tools, called model-integrated computing (MIC) [3], relies on the use of domain-specific modeling languages for creating models of the system to be... hence giving reflective capabilities to it. We have followed the MIC method here: we designed a domain-specific modeling language for modeling... are produced one-off and not for the mass market, the scope for price reduction based on the market demands is non-existent. Processes to create...
Scalable Learning for Geostatistics and Speaker Recognition
2011-01-01
of prior knowledge of the model or due to improved robustness requirements). Both these methods have their own advantages and disadvantages. The use... application. If the data is well-correlated and low-dimensional, any prior knowledge available on the data can be used to build a parametric model. In the... absence of prior knowledge, non-parametric methods can be used. If the data is high-dimensional, PCA-based dimensionality reduction is often the first...
Isma’eel, Hussain A.; Sakr, George E.; Almedawar, Mohamad M.; Fathallah, Jihan; Garabedian, Torkom; Eddine, Savo Bou Zein
2015-01-01
Background. High dietary salt intake is directly linked to hypertension and cardiovascular diseases (CVDs). Predicting behaviors regarding salt intake habits is vital to guide interventions and increase their effectiveness. We aim to compare the accuracy of an artificial neural network (ANN) based tool that predicts behavior from key knowledge questions along with clinical data in a high cardiovascular risk cohort relative to the least squares model (LSM) method. Methods. We collected knowledge, attitude and behavior data on 115 patients. A behavior score was calculated to classify patients' behavior towards reducing salt intake. Accuracy comparison between ANN and regression analysis was calculated using the bootstrap technique with 200 iterations. Results. Starting from a 69-item questionnaire, a reduced model was developed that included the eight knowledge items found to yield the highest accuracy of 62% CI (58-67%). The best prediction accuracy in the full and reduced models was attained by ANN, at 66% and 62%, respectively, compared to the full and reduced LSM at 40% and 34%, respectively. The average relative increase in accuracy of ANN over LSM was 82% for the full model and 102% for the reduced model. Conclusions. Using ANN modeling, we can predict salt reduction behaviors with 66% accuracy. The statistical model has been implemented in an online calculator and can be used in clinics to estimate a patient's behavior. This will support future research to further prove the clinical utility of this tool to guide therapeutic salt reduction interventions in high cardiovascular risk individuals. PMID:26090333
In the United States, regional-scale photochemical models are being used to design emission control strategies needed to meet the relevant National Ambient Air Quality Standards (NAAQS) within the framework of the attainment demonstration process. Previous studies have shown that...
NASA Technical Reports Server (NTRS)
Samba, A. S.
1985-01-01
The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail, and its performance on a three-parameter model problem is illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the Customized Reduction of Augmented Triangles (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm: it is tailored to the pipeline architecture of the VPS 32 and, as a consequence, is implicitly vectorizable.
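The conventional point cyclic reduction algorithm that VCR adapts can be sketched for the tridiagonal case as follows. This is a serial, illustrative version, not the vectorized VPS 32 implementation; each level eliminates every other unknown, halving the system until one equation remains, then back-substitutes.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by point cyclic reduction.

    a: sub-diagonal (a[0] must be 0), b: main diagonal,
    c: super-diagonal (c[-1] must be 0), d: right-hand side.
    Requires len(b) == 2**k - 1 for some k.
    """
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    if n == 1:
        return d / b
    odd = np.arange(1, n, 2)              # unknowns kept in the reduced system
    alpha = -a[odd] / b[odd - 1]          # eliminate the left neighbor
    beta = -c[odd] / b[odd + 1]           # eliminate the right neighbor
    a2 = alpha * a[odd - 1]
    b2 = b[odd] + alpha * c[odd - 1] + beta * a[odd + 1]
    c2 = beta * c[odd + 1]
    d2 = d[odd] + alpha * d[odd - 1] + beta * d[odd + 1]
    x = np.zeros(n)
    x[odd] = cyclic_reduction(a2, b2, c2, d2)   # recurse on the half-size system
    even = np.arange(0, n, 2)
    left = np.where(even > 0, x[even - 1], 0.0)
    right = np.where(even < n - 1, x[np.minimum(even + 1, n - 1)], 0.0)
    x[even] = (d[even] - a[even] * left - c[even] * right) / b[even]
    return x
```

The combine/back-substitute steps at each level are independent across rows, which is exactly the structure a pipelined vector machine like the VPS 32 exploits.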
Optimization Design of Minimum Total Resistance Hull Form Based on CFD Method
NASA Astrophysics Data System (ADS)
Zhang, Bao-ji; Zhang, Sheng-long; Zhang, Hui
2018-06-01
In order to reduce the resistance and improve the hydrodynamic performance of a ship, two hull form design methods are proposed based on the potential flow theory and viscous flow theory. The flow fields are meshed using body-fitted mesh and structured grids. The parameters of the hull modification function are the design variables. A three-dimensional modeling method is used to alter the geometry. The Non-Linear Programming (NLP) method is utilized to optimize a David Taylor Model Basin (DTMB) model 5415 ship under the constraints, including the displacement constraint. The optimization results show an effective reduction of the resistance. The two hull form design methods developed in this study can provide technical support and theoretical basis for designing green ships.
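The constrained-optimization step above — minimize resistance subject to a displacement constraint — can be illustrated with SLSQP on toy functions. The `resistance` and `displacement` functions here are hypothetical smooth stand-ins, not the hull modification model of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def resistance(p):
    """Hypothetical smooth resistance surrogate in two design variables."""
    return (p[0] - 1.0) ** 2 + 2.0 * (p[1] + 0.5) ** 2

def displacement(p):
    """Hypothetical displacement model; held fixed by the constraint."""
    return p[0] + p[1]

# minimize resistance subject to displacement(p) == 0.5
res = minimize(resistance, x0=[0.0, 0.0], method="SLSQP",
               constraints=[{"type": "eq",
                             "fun": lambda p: displacement(p) - 0.5}])
```

In the real problem the objective would be evaluated by the flow solver on the modified geometry, with the displacement constraint keeping the optimized hull hydrostatically equivalent to the original.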
Steinmann, Eike; Gravemann, Ute; Friesland, Martina; Doerrbecker, Juliane; Müller, Thomas H; Pietschmann, Thomas; Seltsam, Axel
2013-05-01
Contamination of blood products with hepatitis C virus (HCV) can cause infections resulting in acute and chronic liver diseases. Pathogen reduction methods such as photodynamic treatment with methylene blue (MB) plus visible light as well as irradiation with shortwave ultraviolet (UVC) light were developed to inactivate viruses and other pathogens in plasma and platelet concentrates (PCs), respectively. So far, their inactivation capacities for HCV have only been tested in inactivation studies using model viruses for HCV. Recently, a HCV infection system for the propagation of infectious HCV in cell culture was developed. Inactivation studies were performed with cell culture-derived HCV and bovine viral diarrhea virus (BVDV), a model for HCV. Plasma units or PCs were spiked with high titers of cell culture-grown viruses. After treatment of the blood units with MB plus light (Theraflex MB-Plasma system, MacoPharma) or UVC (Theraflex UV-Platelets system, MacoPharma), residual viral infectivity was assessed using sensitive cell culture systems. HCV was sensitive to inactivation by both pathogen reduction procedures. HCV in plasma was efficiently inactivated by MB plus light below the detection limit already by 1/12 of the full light dose. HCV in PCs was inactivated by UVC irradiation with a reduction factor of more than 5 log. BVDV was less sensitive to the two pathogen reduction methods. Functional assays with human HCV offer an efficient tool to directly assess the inactivation capacity of pathogen reduction procedures. Pathogen reduction technologies such as MB plus light treatment and UVC irradiation have the potential to significantly reduce transfusion-transmitted HCV infections. © 2012 American Association of Blood Banks.
Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.
2010-01-01
A central goal of human genetics is to identify and characterize susceptibility genes for common complex human diseases. An important challenge in this endeavor is the modeling of gene-gene interaction, or epistasis, which can result in non-additivity of genetic effects. The multifactor dimensionality reduction (MDR) method was developed as a machine learning alternative to parametric logistic regression for detecting interactions in the absence of significant marginal effects. The goal of MDR is to reduce the dimensionality inherent in modeling combinations of polymorphisms using a computational approach called constructive induction. Here, we propose a Robust Multifactor Dimensionality Reduction (RMDR) method that performs constructive induction using Fisher's exact test rather than a predetermined threshold. The advantage of this approach is that only those genotype combinations that are determined to be statistically significant are considered in the MDR analysis. We use two simulation studies to demonstrate that this approach will increase the success rate of MDR when there are only a few genotype combinations that are significantly associated with case-control status, and we show that there is no loss of success rate when this is not the case. We then apply the RMDR method to the detection of gene-gene interactions in genotype data from a population-based study of bladder cancer in New Hampshire. PMID:21091664
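The RMDR filter can be sketched for a single two-locus model: each genotype combination is labeled high or low risk only if its case/control split passes Fisher's exact test. This is an illustrative reconstruction of the idea (function name and labeling conventions are ours), not the authors' implementation.

```python
import numpy as np
from scipy.stats import fisher_exact

def rmdr_cells(g1, g2, y, alpha=0.05):
    """Label each two-locus genotype cell 1 (high risk) or 0 (low risk),
    keeping only cells whose case/control split is significant by
    Fisher's exact test; non-significant cells get -1 (unclassified)."""
    n_case, n_ctrl = int((y == 1).sum()), int((y == 0).sum())
    labels = {}
    for cell in {(int(i), int(j)) for i, j in zip(g1, g2)}:
        in_cell = (g1 == cell[0]) & (g2 == cell[1])
        a = int((in_cell & (y == 1)).sum())    # cases in this cell
        b = int((in_cell & (y == 0)).sum())    # controls in this cell
        _, p = fisher_exact([[a, b], [n_case - a, n_ctrl - b]])
        if p < alpha:
            # high risk when the cell's case fraction beats the sample's
            labels[cell] = 1 if a * n_ctrl > b * n_case else 0
        else:
            labels[cell] = -1
    return labels
```

Standard MDR would label every cell by the case/control ratio alone; the Fisher filter is what makes the reduction "robust" to sparsely populated cells.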
The Kadomtsev–Petviashvili equation as a source of integrable model equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maccari, A.
1996-12-01
A new integrable and nonlinear partial differential equation (PDE) in 2+1 dimensions is obtained, by an asymptotically exact reduction method based on Fourier expansion and spatiotemporal rescaling, from the Kadomtsev–Petviashvili equation. The integrability property is explicitly demonstrated by exhibiting the corresponding Lax pair, which is obtained by applying the reduction technique to the Lax pair of the Kadomtsev–Petviashvili equation. This model equation is likely to be of applicative relevance, because it may be considered a consistent approximation of a large class of nonlinear evolution PDEs. © 1996 American Institute of Physics.
Shock Isolation Elements Testing for High Input Loadings. Volume II. Foam Shock Isolation Elements.
Descriptors: shock absorbers; guided missile silos; expanded plastics; shock (mechanics) reduction; test methods; shock waves; strain (mechanics); loads (forces); mathematical models; nuclear explosions; hardening.
Descriptors: shock absorbers; guided missile silos; springs; shock (mechanics) reduction; torsion bars; elastomers; damping; equations of motion; model tests; test methods; nuclear explosions; hardening.
Viterbi sparse spike detection and a compositional origin to ultralow-velocity zones
NASA Astrophysics Data System (ADS)
Brown, Samuel Paul
Accurate interpretation of seismic travel times and amplitudes at both the exploration and global scales is complicated by the band-limited nature of seismic data. We present a stochastic method, Viterbi sparse spike detection (VSSD), to reduce a seismic waveform into a most probable constituent spike train. Model waveforms are constructed from a set of candidate spike trains convolved with a source wavelet estimate. For each model waveform, a profile hidden Markov model (HMM) is constructed to represent the waveform as a stochastic generative model with a linear topology corresponding to a sequence of samples. The Viterbi algorithm is employed to simultaneously find the optimal nonlinear alignment between a model waveform and the seismic data, and to assign a score to each candidate spike train. The most probable travel times and amplitudes are inferred from the alignments of the highest scoring models. Our analyses show that the method can resolve closely spaced arrivals below traditional resolution limits and that travel time estimates are robust in the presence of random noise and source wavelet errors. We applied the VSSD method to constrain the elastic properties of an ultralow-velocity zone (ULVZ) at the core-mantle boundary beneath the Coral Sea. We analyzed vertical-component short-period ScP waveforms for 16 earthquakes occurring in the Tonga-Fiji trench recorded at the Alice Springs Array (ASAR) in central Australia. These waveforms show strong pre- and postcursory seismic arrivals consistent with ULVZ layering. We used the VSSD method to measure differential travel times and amplitudes of the postcursor arrival ScSP and the precursor arrival SPcP relative to ScP. We compare our measurements to a database of approximately 340,000 synthetic seismograms, finding that these data are best fit by a ULVZ model with an S-wave velocity reduction of 24%, a P-wave velocity reduction of 23%, a thickness of 8.5 km, and a density increase of 6%.
We simultaneously constrain both P- and S-wave velocity reductions as a 1:1 ratio inside this ULVZ. This 1:1 ratio is not consistent with a partial melt origin to ULVZs. Rather, we demonstrate that a compositional origin is more likely.
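The profile-HMM alignment machinery above is specific to the study, but the Viterbi recursion at its core is generic and can be sketched directly. The two-state toy model below (sticky states, each strongly preferring one symbol) is hypothetical, not the seismic model:

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Most probable hidden-state path of an HMM (log-space Viterbi).

    obs    : sequence of observation symbol indices
    log_A  : (S, S) log transition matrix, log_A[i, j] = log P(j | i)
    log_B  : (S, O) log emission matrix
    log_pi : (S,) log initial-state distribution
    """
    S, T = log_A.shape[0], len(obs)
    delta = np.empty((T, S))            # best log score ending in each state
    psi = np.zeros((T, S), dtype=int)   # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # rows: from-state, cols: to-state
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)       # backtrace from the best final state
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path, delta[-1].max()

# Hypothetical two-state example: sticky states, each preferring one symbol.
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.8, 0.2], [0.2, 0.8]])
log_B = np.log([[0.9, 0.1], [0.1, 0.9]])
path, score = viterbi([0, 0, 1, 1], log_A, log_B, log_pi)   # path -> [0, 0, 1, 1]
```

In the paper's setting, the hidden states would be positions along a candidate model waveform rather than two abstract symbols, and the score ranks candidate spike trains.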
Model reduction for Space Station Freedom
NASA Technical Reports Server (NTRS)
Williams, Trevor
1992-01-01
Model reduction is an important practical problem in the control of flexible spacecraft, and a considerable amount of work has been carried out on this topic. Two of the best known methods developed are modal truncation and internal balancing. Modal truncation is simple to implement but can give poor results when the structure possesses clustered natural frequencies, as often occurs in practice. Balancing avoids this problem but has the disadvantages of high computational cost, possible numerical sensitivity problems, and no physical interpretation for the resulting balanced 'modes'. The purpose of this work is to examine the performance of the subsystem balancing technique developed by the investigator when tested on a realistic flexible space structure, in this case a model of the Permanently Manned Configuration (PMC) of Space Station Freedom. This method retains the desirable properties of standard balancing while overcoming the three difficulties listed above. It achieves this by first decomposing the structural model into subsystems of highly correlated modes. Each subsystem is approximately uncorrelated with all others, so balancing them separately and then combining yields results comparable to balancing the entire structure directly. The operation count reduction obtained by the new technique is considerable: a factor of roughly r^2 if the system decomposes into r equal subsystems. Numerical accuracy is also improved significantly, as the matrices being operated on are of reduced dimension, and the modes of the reduced-order model now have a clear physical interpretation: they are, to first order, linear combinations of repeated-frequency modes.
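The subsystem technique builds on standard internal balancing, which itself can be sketched compactly. Below is a minimal square-root balanced-truncation sketch, not the investigator's subsystem algorithm; the two-mode structural model with closely spaced frequencies (1.0 and 1.05 rad/s) and its input/output vectors are invented for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def balance_and_truncate(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system (A, B, C)."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    Zc = np.linalg.cholesky((Wc + Wc.T) / 2)        # symmetrize before factoring
    Zo = np.linalg.cholesky((Wo + Wo.T) / 2)
    U, hsv, Vt = np.linalg.svd(Zo.T @ Zc)           # Hankel singular values
    T = Zc @ Vt.T @ np.diag(hsv ** -0.5)            # balancing transformation
    Tinv = np.diag(hsv ** -0.5) @ U.T @ Zo.T
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], hsv

def mode(w, zeta):   # one 2x2 modal block (hypothetical frequency and damping)
    return np.array([[0.0, 1.0], [-w**2, -2.0 * zeta * w]])

# Two lightly damped, closely spaced modes -- the case where plain modal
# truncation struggles:
A = np.zeros((4, 4)); A[:2, :2] = mode(1.0, 0.02); A[2:, 2:] = mode(1.05, 0.02)
B = np.array([[0.0], [1.0], [0.0], [0.3]])
C = np.array([[1.0, 0.0, 0.3, 0.0]])
Ar, Br, Cr, hsv = balance_and_truncate(A, B, C, r=2)
```

The subsystem variant described in the abstract would apply this procedure to each block of correlated modes separately, cutting the dense-matrix work from one system of size n to r systems of size n/r.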
Fulgoni, Victor L; Agarwal, Sanjiv; Spence, Lisa; Samuel, Priscilla
2014-12-18
Because excessive dietary sodium intake is a major contributor to hypertension, a reduction in dietary sodium has been recommended for the US population. Using the National Health and Nutrition Examination Survey (NHANES) 2007-2010 data, we estimated current sodium intake in US population ethnic subgroups and modeled the potential impact of a new sodium reduction technology on sodium intake. NHANES 2007-2010 data were analyzed using the National Cancer Institute method to estimate usual intake in population subgroups. The potential impact of SODA-LO® Salt Microspheres sodium reduction technology on sodium intake was modeled using suggested sodium reductions of 20-30% in 953 foods and assuming various market penetrations. SAS 9.2, SUDAAN 11, and NHANES survey weights were used in all calculations, with assessment across age, gender, and ethnic groups. Current sodium intake across all population subgroups exceeds the Dietary Guidelines 2010 recommendations and has not changed during the last decade. However, sodium intake measured as a function of food intake has decreased significantly during the last decade for all ethnicities. "Grain Products" and "Meat, Poultry, Fish, & Mixtures" contribute about two-thirds of total sodium intake. Sodium reduction using SODA-LO® Salt Microspheres sodium reduction technology (with 100% market penetration) was estimated to be 185-323 mg/day, or 6.3-8.4% of intake, depending upon age, gender and ethnic group. Current sodium intake in US ethnic subgroups exceeds the recommendations, and sodium reduction technologies could potentially help reduce dietary sodium intake among those groups.
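The market-penetration modeling step reduces to simple proportional bookkeeping on usual-intake estimates. A toy sketch of that arithmetic follows; all numbers are illustrative, not NHANES estimates or the study's results:

```python
def modeled_intake(total_mg, category_share, reduction, penetration):
    """Sodium intake after reformulation, under simple proportional assumptions.

    total_mg       : usual daily sodium intake (mg/day)
    category_share : fraction of intake from the reformulatable foods
    reduction      : sodium reduction achieved in those foods (e.g. 0.25 = 25%)
    penetration    : market penetration of the reformulated products (0..1)
    """
    return total_mg * (1.0 - category_share * reduction * penetration)

# Illustrative numbers only: 3600 mg/day usual intake, two-thirds of it from
# affected food groups, a 25% in-product reduction, 50% market penetration.
new_intake = modeled_intake(3600.0, 2.0 / 3.0, 0.25, 0.5)   # about 3300 mg/day
saving = 3600.0 - new_intake                                # about 300 mg/day
```

The real analysis applies food-specific reductions across 953 foods and propagates them through the NCI usual-intake model; this sketch only shows the shape of the calculation.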
Permafrost thaw strongly reduces allowable CO2 emissions for 1.5°C and 2°C
NASA Astrophysics Data System (ADS)
Kechiar, M.; Gasser, T.; Kleinen, T.; Ciais, P.; Huang, Y.; Burke, E.; Obersteiner, M.
2017-12-01
We quantify how the inclusion of carbon emission from permafrost thaw impacts the budgets of allowable anthropogenic CO2 emissions. We use the compact Earth system model OSCAR v2.2, which we expand with a permafrost module calibrated to emulate the behavior of the complex models JSBACH, ORCHIDEE and JULES. When using the "exceedance" method and with permafrost thaw turned off, we find budgets very close to the CMIP5 models' estimates reported by the IPCC. With permafrost thaw turned on, the total budgets are reduced by 3-4%. This corresponds to a 33-45% reduction of the remaining budget for 1.5°C, and a 9-13% reduction for 2°C. When using the "avoidance" method, however, permafrost thaw reduces the total budget by 3-7%, which corresponds to reductions of 33-56% and 56-79% of the remaining budget for 1.5°C and 2°C, respectively. The avoidance method relies on many scenarios that actually peak below the target, whereas the exceedance method overlooks the carbon emitted by thawed permafrost after the temperature target is reached, which explains the difference. If we use only the subset of scenarios in which there are no net-negative emissions, the permafrost-induced reduction in total budgets rises to 6-15%. Permafrost thaw therefore makes the emission budgets strongly path-dependent. We also estimate budgets of needed carbon capture in scenarios overshooting the temperature targets. Permafrost thaw strongly increases these capture budgets: in the case of a 1.5°C target overshot by 0.5°C, which is in line with the Paris agreement, about 30% more carbon must be captured. Our conclusions are threefold. First, inclusion of permafrost thaw systematically reduces the emission budgets, and very strongly so if the temperature target is overshot. Second, the exceedance method, which is the only one that complex models can follow, only partially accounts for the effect of slow nonlinear processes such as permafrost thaw, leading to overestimated budgets.
Third, the newfound strong path-dependency of the budgets renders the concept more delicate to use. For instance, using a budget that implicitly assumes a large development of negative emission technologies may lead to missing the target if these are not as scalable as anticipated.
Poisson-Gaussian Noise Analysis and Estimation for Low-Dose X-ray Images in the NSCT Domain.
Lee, Sangyoon; Lee, Min Seok; Kang, Moon Gi
2018-03-29
The noise distribution of images obtained by X-ray sensors in low-dosage situations can be analyzed using the Poisson and Gaussian mixture model. Multiscale conversion is one of the most popular noise reduction methods used in recent years. Estimation of the noise distribution of each subband in the multiscale domain is the most important factor in performing noise reduction, with non-subsampled contourlet transform (NSCT) representing an effective method for scale and direction decomposition. In this study, we use artificially generated noise to analyze and estimate the Poisson-Gaussian noise of low-dose X-ray images in the NSCT domain. The noise distribution of the subband coefficients is analyzed using the noiseless low-band coefficients and the variance of the noisy subband coefficients. The noise-after-transform also follows a Poisson-Gaussian distribution, and the relationship between the noise parameters of the subband and the full-band image is identified. We then analyze noise of actual images to validate the theoretical analysis. Comparison of the proposed noise estimation method with an existing noise reduction method confirms that the proposed method outperforms traditional methods.
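The Poisson-Gaussian model underlying this analysis has a convenient property: the conditional variance is affine in the signal level, Var[z | y] = a*y + sigma^2, so both parameters can be recovered by a straight-line fit of local variance against local mean. A sketch in the image domain follows (this is not the paper's NSCT-subband estimator; the parameter values a = 2, sigma = 3 and the flat-patch levels are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_gaussian(clean, a, sigma):
    """Simulate low-dose X-ray sensor noise: scaled Poisson plus additive Gaussian."""
    return a * rng.poisson(clean / a) + rng.normal(0.0, sigma, clean.shape)

# For this model the conditional variance is affine in the signal level:
#   Var[z | y] = a * y + sigma**2
# so (a, sigma**2) can be recovered by fitting local variance against local
# mean, here using synthetic flat patches at several intensity levels:
levels = np.linspace(20.0, 200.0, 10)
means, variances = [], []
for y in levels:
    patch = add_poisson_gaussian(np.full(100_000, y), a=2.0, sigma=3.0)
    means.append(patch.mean())
    variances.append(patch.var())
a_hat, s2_hat = np.polyfit(means, variances, 1)   # slope ~ a, intercept ~ sigma^2
```

The paper's contribution is to track how these parameters transform into each NSCT subband; the sketch only shows the full-band mean-variance relationship they start from.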
Isothermal reduction kinetics of Panzhihua ilmenite concentrate under 30vol% CO-70vol% N2 atmosphere
NASA Astrophysics Data System (ADS)
Zhang, Ying-yi; Lü, Wei; Lü, Xue-wei; Li, Sheng-ping; Bai, Chen-guang; Song, Bing; Han, Ke-xi
2017-03-01
The reduction of ilmenite concentrate in 30vol% CO-70vol% N2 atmosphere was characterized by thermogravimetric and differential thermogravimetric (TG-DTG) analysis methods at temperatures from 1073 to 1223 K. The isothermal reduction results show that the reduction process comprised two stages; the corresponding apparent activation energy was obtained by the iso-conversional and model-fitting methods. For the first stage, the effect of temperature on the conversion degree was not obvious; the phase-boundary chemical reaction was the controlling step, with an apparent activation energy of 15.55-40.71 kJ·mol-1. For the second stage, when the temperature was greater than 1123 K, the reaction rate and the conversion degree increased sharply with increasing temperature, and random nucleation and subsequent growth were the controlling steps, with an apparent activation energy ranging from 182.33 to 195.95 kJ·mol-1. For the whole reduction process, the average activation energy and pre-exponential factor were 98.94-118.33 kJ·mol-1 and 1.820-1.816 min-1, respectively.
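Apparent activation energies like those quoted are conventionally extracted from an Arrhenius fit of the model-fitted rate constants. A minimal sketch with synthetic rate constants over the study's temperature range (the assumed Ea = 100 kJ/mol and A = 1.8 min^-1 are illustrative stand-ins, not the measured data):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_fit(T, k):
    """Apparent activation energy Ea and pre-exponential factor A from rate
    constants k(T), via the linearized law ln k = ln A - Ea / (R * T)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(k), 1)
    return -slope * R, np.exp(intercept)     # Ea in J/mol, A in the units of k

# Synthetic rate constants at the study's temperatures (K); Ea and A assumed.
T = np.array([1073.0, 1123.0, 1173.0, 1223.0])
k = 1.8 * np.exp(-100e3 / (R * T))
Ea, A = arrhenius_fit(T, k)
```

An iso-conversional analysis repeats this fit at fixed conversion degrees instead of using a single model-fitted rate constant per temperature.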
Wang, Li; Xi, Feng Ming; Li, Jin Xin; Liu, Li Li
2016-09-01
Taking 39 industries in Liaoning Province from 2003 to 2012 as independent decision-making units and considering the benefits of energy, economy and environment, we combined the direction distance function and the radial DEA method to estimate and decompose the energy conservation and carbon emission reduction efficiency of these industries. The carbon emission of each industry was calculated and treated as an undesirable output in the efficiency model. The results showed obvious heterogeneity in energy saving and carbon emission reduction efficiency across industries in Liaoning Province. Overall efficiency was not high in any industry, but it presented a rising trend. Improvements in pure technical efficiency and scale efficiency, especially the latter, were the main means to enhance energy saving and carbon emission reduction efficiency. To improve the efficiency of each industry, we propose that Liaoning Province adjust its industrial structure, encourage the development of low-carbon, high-benefit industries, improve its scientific and technological level, adjust industry scale reasonably, optimize the energy structure, and develop renewable and clean energy.
NASA Astrophysics Data System (ADS)
Yates, S. R.; Ashworth, D. J.; Zheng, W.; Knuteson, J.; van Wesenbeeck, I. J.
2016-07-01
Fumigating soil is important for the production of many high-value vegetable, fruit, and tree crops, but fumigants are toxic pesticides with relatively high volatility, which can lead to significant atmospheric emissions. A field experiment was conducted to measure emissions and subsurface diffusion of a mixture of 1,3-dichloropropene (1,3-D) and chloropicrin after shank injection to bare soil at 61 cm depth (i.e., deep injection). Three on-field methods, the aerodynamic (ADM), integrated horizontal flux (IHF), and theoretical profile shape (TPS) methods, were used to obtain fumigant flux density and cumulative emission values. Two air dispersion models (CALPUFF and ISCST3) were also used to back-calculate the flux density using air concentration measurements surrounding the fumigated field. Emissions were continuously measured for 16 days and the daily peak emission rates for the five methods ranged from 13 to 33 μg m-2 s-1 for 1,3-D and 0.22-3.2 μg m-2 s-1 for chloropicrin. Total 1,3-D mass lost to the atmosphere was approximately 23-41 kg ha-1, or 15-27% of the applied active ingredient and total mass loss of chloropicrin was <2%. Based on the five methods, deep injection reduced total emissions by approximately 2-24% compared to standard fumigation practices where fumigant injection is at 46 cm depth. Given the relatively wide range in emission-reduction percentages, a fumigant diffusion model was used to predict the percentage reduction in emissions by injecting at 61 cm, which yielded a 21% reduction in emissions. Significant reductions in emissions of 1,3-D and chloropicrin are possible by injecting soil fumigants deeper in soil.
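Whichever flux method is used (ADM, IHF, TPS, or back-calculation), turning a flux-density time series into a cumulative emission and a percent-of-applied figure is a unit conversion plus a time integration. A sketch of that step (the constant-flux example values are hypothetical, not the field data):

```python
import numpy as np

def cumulative_emission(t_s, flux, applied_kg_ha):
    """Cumulative fumigant emission from a flux-density time series.

    t_s           : measurement times (s)
    flux          : flux density at those times (ug m^-2 s^-1)
    applied_kg_ha : mass of active ingredient applied (kg ha^-1)
    Returns (total emission in kg/ha, percent of applied mass).
    """
    t_s, flux = np.asarray(t_s, float), np.asarray(flux, float)
    # Trapezoidal integration over time gives ug m^-2:
    total_ug_m2 = np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(t_s))
    total_kg_ha = total_ug_m2 * 1e-5        # 1 ug/m^2 = 1e-9 kg/m^2 = 1e-5 kg/ha
    return total_kg_ha, 100.0 * total_kg_ha / applied_kg_ha

# One day of a constant 1 ug m^-2 s^-1 flux over a field dosed at 100 kg/ha:
total, pct = cumulative_emission([0.0, 86400.0], [1.0, 1.0], 100.0)
```

In the study the series spans 16 days of varying flux; the same integration applied to each of the five flux estimates yields the 23-41 kg/ha range quoted.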
NASA Astrophysics Data System (ADS)
Xing, Jia; Ding, Dian; Wang, Shuxiao; Zhao, Bin; Jang, Carey; Wu, Wenjing; Zhang, Fenfen; Zhu, Yun; Hao, Jiming
2018-06-01
As one common precursor for both PM2.5 and O3 pollution, NOx receives great attention because its control can be beneficial for reducing both PM2.5 and O3. However, the effectiveness of NOx controls for reducing PM2.5 and O3 is largely influenced by the ambient levels of NH3 and VOC, exhibiting strong nonlinearities characterized as NH3-limited/NH3-poor and NOx-/VOC-limited conditions, respectively. Quantification of such nonlinearities is a prerequisite for making suitable policy decisions, but limitations of existing methods have been recognized. In this study, a new method was developed by fitting multiple simulations of a chemical transport model (the Community Multiscale Air Quality Modeling System, CMAQ) with a set of polynomial functions (denoted as pf-RSM) to quantify responses of ambient PM2.5 and O3 concentrations to changes in precursor emissions. The accuracy of the pf-RSM is carefully examined to meet the criteria of a mean normalized error within 2% and a maximal normalized error within 10% by using 40 training samples with marginal processing. An advantage of the pf-RSM method is that the nonlinearity in PM2.5 and O3 responses to precursor emission changes can be characterized by quantitative indicators, including (1) a peak ratio (denoted as PR) representing VOC-limited or NOx-limited conditions, (2) a suggested ratio of VOC reduction to NOx reduction to avoid increasing O3 under VOC-limited conditions, (3) a flex ratio (denoted as FR) representing NH3-poor or NH3-rich conditions, and (4) enhanced benefits in PM2.5 reductions from simultaneous reduction of NH3 at the same reduction rate as NOx. A case study in the Beijing-Tianjin-Hebei region suggested that most urban areas present strong VOC-limited conditions with a PR from 0.4 to 0.8 in July, implying that the NOx emission reduction rate needs to be greater than 20-60% to pass the transition from VOC-limited to NOx-limited conditions. A simultaneous VOC control (a ratio of VOC reduction to NOx reduction of about 0.5-1.2) can avoid increasing O3 during the transition. For PM2.5, most urban areas present strong NH3-rich conditions with an FR from 0.75 to 0.95, implying that NH3 is sufficiently abundant to neutralize the extra nitric acid produced by an additional 5-35% of NOx emissions. Enhanced benefits in PM2.5 reductions from simultaneous reduction of NH3 were estimated to be 0.04-0.15 µg m-3 of PM2.5 per 1% reduction of NH3 along with NOx, with greater benefits in July, when the NH3-rich conditions are not as strong as in January. Thus, the newly developed pf-RSM model successfully quantifies the enhanced effectiveness of NOx control, and simultaneous reduction of VOC and NH3 with NOx can assure the control effectiveness for PM2.5 and O3.
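The pf-RSM idea — fit a polynomial surface to a handful of model runs, then read indicators such as the peak ratio off the fitted surface — can be illustrated on a synthetic response. The quadratic "chemistry" below is invented purely for illustration (its built-in peak at a NOx ratio of 0.6 plays the role of the PR) and is not CMAQ output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented "model runs": O3 as a smooth function of NOx and VOC emission
# ratios (1.0 = baseline). This stands in for CMAQ output; it is not real
# atmospheric chemistry.
def o3_response(e_nox, e_voc):
    return 60.0 + 25.0 * e_voc - 30.0 * (e_nox - 0.6) ** 2

samples = rng.uniform(0.0, 1.2, size=(40, 2))       # 40 training runs
x1, x2 = samples[:, 0], samples[:, 1]               # NOx, VOC emission ratios
y = o3_response(x1, x2)

# Least-squares fit of a full quadratic polynomial response surface:
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Peak ratio": the NOx emission ratio maximizing fitted O3 at baseline VOC
# (x2 = 1): solve d/dx1 [c1*x1 + c3*x1*x2 + c4*x1^2] = 0 at x2 = 1.
pr = -(coef[1] + coef[3]) / (2.0 * coef[4])
```

The real pf-RSM uses higher-order polynomial terms and more precursors, but the workflow of fitting 40 training simulations and differentiating the surface is the same in spirit.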
NASA Astrophysics Data System (ADS)
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|.
The validation exercise indicated a large improvement in model performance with about 40-85% reduction in 1-NSE, and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
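The screening stage of such a sensitivity analysis can be sketched with a one-at-a-time perturbation ranking. This is a simplified stand-in for the LH-OAT procedure (which averages OAT effects over Latin-hypercube points); the four-parameter response function and step size are hypothetical:

```python
import numpy as np

def oat_screening(model, p0, rel_step=0.1):
    """One-at-a-time sensitivity screening: perturb each parameter by a
    relative step around a base point and rank the response changes."""
    base = model(p0)
    effects = np.empty(len(p0))
    for i in range(len(p0)):
        p = p0.copy()
        p[i] *= (1.0 + rel_step)
        effects[i] = abs(model(p) - base)
    return effects

# Hypothetical response with two influential and two inert parameters:
def response(p):
    return 3.0 * p[0] + p[1] ** 2 + 1e-6 * p[2] + 0.0 * p[3]

effects = oat_screening(response, np.array([1.0, 2.0, 1.0, 1.0]))
keep = np.argsort(effects)[::-1][:2]    # retain only the sensitive parameters
```

In the paper, the retained parameters then go to the MARS-based Sobol' analysis and the surrogate-assisted multi-objective calibration; the inert ones are fixed at defaults.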
A qualitative analysis of case managers' use of harm reduction in practice.
Tiderington, Emmy; Stanhope, Victoria; Henwood, Benjamin F
2013-01-01
The harm reduction approach has become a viable framework within the field of addictions, yet there is limited understanding about how this approach is implemented in practice. For people who are homeless and have co-occurring psychiatric and substance use disorders, the Housing First model has shown promising results in employing such an approach. This qualitative study utilizes ethnographic methods to explore case managers' use of harm reduction within Housing First with a specific focus on the consumer-provider relationship. Analysis of observational data and in-depth interviews with providers and consumers revealed how communication between the two regarding the consumer's substance use interacted with the consumer-provider relationship. From these findings emerged a heuristic model of harm reduction practice that highlighted the profound influence of relationship quality on the paths of communication regarding substance use. This study provides valuable insight into how harm reduction is implemented in clinical practice that ultimately has public health implications in terms of more effectively addressing high rates of addiction that contribute to homelessness and health disparities. Copyright © 2013 Elsevier Inc. All rights reserved.
Rocha, João; Roebeling, Peter; Rial-Rivas, María Ermitas
2015-12-01
The extensive use of fertilizers has become one of the most challenging environmental issues in agricultural catchment areas. In order to reduce the negative impacts from agricultural activities and to accomplish the objectives of the European Water Framework Directive we must consider the implementation of sustainable agricultural practices. In this study, we assess sustainable agricultural practices based on reductions in N-fertilizer application rates (from 100% to 0%) and N-application methods (single, split and slow-release) across key agricultural land use classes in the Vouga catchment, Portugal. The SWAT model was used to relate sustainable agricultural practices, agricultural yields and N-NO3 water pollution deliveries. Results show that crop yields as well as N-NO3 exportation rates decrease with reductions in N-application rates and single N-application methods lead to lower crop yields and higher N-NO3 exportation rates as compared to split and slow-release N-application methods. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten H.
2016-09-01
PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole placement or Ziegler-Nichols, and then extensive aeroelastic simulations are used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. In the traditional tuning approaches, the properties of the different open-loop and closed-loop transfer functions of the system are not normally considered. In this paper, an assessment of the pole-placement tuning method is presented based on robustness measures. Then a constrained optimization setup is suggested to automatically tune the wind turbine controller subject to robustness constraints. The properties of the system, such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate the controller performance and formulate the optimization problem. The cost function is the integral absolute error (IAE) of the rotational speed from a disturbance modeled as a step in wind speed. A linearized model of the DTU 10-MW reference wind turbine is obtained using HAWCStab2. Thereafter, the model is reduced with model order reduction. Trade-off curves are given to assess the tunings of the pole-placement method, and a constrained optimization problem is solved to find the best tuning.
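The constrained-tuning setup — minimize the IAE after a step disturbance subject to a bound on the maximum sensitivity Ms — can be reproduced on a toy plant. This is not the HAWCStab2 turbine model: the first-order plant G(s) = 1/(s + 1), the Ms <= 1.6 bound, the gain grid, and the use of grid search instead of a smooth optimizer are all illustrative choices:

```python
import numpy as np

def sensitivity_peak(kp, ki, w=np.logspace(-2, 2, 400)):
    """Max sensitivity Ms for PI control of the toy plant G(s) = 1/(s + 1)."""
    s = 1j * w
    L = (kp + ki / s) / (s + 1.0)           # loop transfer function C(s)G(s)
    return np.abs(1.0 / (1.0 + L)).max()

def iae_step_disturbance(kp, ki, dt=0.002, T=10.0):
    """IAE of the output after a unit step disturbance at the plant input,
    via forward-Euler simulation of the closed loop."""
    x = z = 0.0                             # plant state, integrator state
    iae = 0.0
    for _ in range(int(T / dt)):
        u = -(kp * x + ki * z)              # PI law on the measured output
        x += dt * (-x + u + 1.0)            # plant: xdot = -x + u + d, d = 1
        z += dt * x
        iae += abs(x) * dt
    return iae

# Constrained tuning by grid search: minimize IAE subject to Ms <= 1.6.
best = None
for kp in np.linspace(0.5, 8.0, 16):
    for ki in np.linspace(0.5, 8.0, 16):
        if sensitivity_peak(kp, ki) <= 1.6:
            iae = iae_step_disturbance(kp, ki)
            if best is None or iae < best[0]:
                best = (iae, kp, ki)
```

The paper solves the analogous problem on a reduced-order turbine model with a proper optimizer; the toy version only illustrates how the Ms constraint and the IAE objective interact.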
Emissions reductions from expanding state-level renewable portfolio standards.
Johnson, Jeremiah X; Novacheck, Joshua
2015-05-05
In the United States, state-level Renewable Portfolio Standards (RPS) have served as key drivers for the development of new renewable energy. This research presents a method to evaluate emissions reductions and costs attributable to new or expanded RPS programs by integrating a comprehensive economic dispatch model and a renewable project selection model. The latter model minimizes incremental RPS costs, accounting for renewable power purchase agreements (PPAs), displaced generation and capacity costs, and net changes to a state's imports and exports. We test this method on potential expansions to Michigan's RPS, evaluating target renewable penetrations of 10% (business as usual or BAU), 20%, 25%, and 40%, with varying times to completion. Relative to the BAU case, these expanded RPS policies reduce the CO2 intensity of generation by 13%, 18%, and 33% by 2035, respectively. SO2 emissions intensity decreased by 13%, 20%, and 34% for each of the three scenarios, while NOx reductions totaled 12%, 17%, and 31%, relative to the BAU case. For CO2 and NOx, absolute reductions in emissions intensity were not as large due to an increasing trend in emissions intensity in the BAU case driven by load growth. Over the study period (2015 to 2035), the absolute CO2 emissions intensity increased by 1% in the 20% RPS case and decreased by 6% and 22% for the 25% and 40% cases, respectively. Between 26% and 31% of the CO2, SO2, and NOx emissions reductions attributable to the expanded RPS occur in neighboring states, underscoring the challenges of quantifying local emissions reductions from state-level energy policies with an interconnected grid. Without federal subsidies, the cost of CO2 mitigation using an RPS in Michigan is between $28 and $34/t CO2 when RPS targets are met. The optimal renewable build plan is sensitive to the capacity credit for solar but insensitive to the value for wind power.
Wang, Tien-Hsiang; Ma, Hsu; Tseng, Ching-Shiow; Chou, Yi-Hong; Cai, Kun-Lin
Surgical navigation systems have been an important tool in maxillofacial surgery, helping surgeons create a presurgical plan, locate lesions, and provide guidance. For secondary facial bone reductions, a good presurgical plan and proper execution are the key to success. Previous studies used predetermined markers and screw holes as navigation references; however, unexpected situations may occur, making the predetermined surgical plan unreliable. Instead of determining positions preoperatively, this study proposes a method that surgeons can use intraoperatively to choose surface markers in a more flexible manner. Eight zygomatic fractures were created in four skull models, and preoperative computed tomography (CT) image data were imported into a self-developed navigation program for presurgical planning. This program also calculates the ideal positions of navigation references points for screw holes. During reduction surgery, markers on fractured bone are selected, registered, and calculated as free navigation reference points (FNRPs). The surface markers and FNRPs are used to monitor the position of the dislocated bone. Titanium bone plates were prefabricated on stereolithography models for osteosynthesis. Two reductions with only FNRPs, as well as six reductions with FNRPs and prefabricated bone plates, were successfully performed. Postoperative CT data were obtained, and surgical errors in the six-reduction group were evaluated. The average deviation from the screw hole drilling positions was 0.92 ± 0.38 mm. The average deviation included displacement and rotation of the zygomas. The mean displacement was 0.83 ± 0.38 mm, and the average rotations around the x, y, and z axes were 0.66 ± 0.59°, 0.77 ± 0.54°, and 0.79 ± 0.42°, respectively. The results show that combining presurgical planning and the developed navigation program to generate FNRPs for assisting in secondary zygoma reduction is an accurate and practical method. 
Further study is necessary to prove its clinical value.
Active Subspace Methods for Data-Intensive Inverse Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qiqi
2017-04-27
The project has developed theory and computational tools to exploit active subspaces to reduce the dimension in statistical calibration problems. This dimension reduction enables MCMC methods to calibrate otherwise intractable models. The same theoretical and computational tools can also reduce the measurement dimension for calibration problems that use large stores of data.
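The core computation behind active subspaces is an eigendecomposition of the sampled gradient covariance C = E[grad f grad f^T]; the dominant eigenvectors span the directions along which the model actually varies. A minimal sketch on an invented rank-one test function (the direction w and sample counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 10-parameter function that truly varies only along one direction w,
# f(x) = (w . x)^2, so its active subspace is span{w}. w is invented.
w = np.array([3.0, 1.0] + [0.0] * 8)

def grad_f(x):
    return 2.0 * (w @ x) * w                # gradient of f(x) = (w . x)^2

X = rng.uniform(-1.0, 1.0, size=(500, 10))  # Monte Carlo parameter samples
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)                        # sampled E[grad f grad f^T]
eigval, eigvec = np.linalg.eigh(C)          # eigenvalues in ascending order
active_dir = eigvec[:, -1]                  # dominant direction ~ +/- w/||w||
```

In a calibration setting, MCMC is then run in the low-dimensional coordinates y = W1^T x spanned by the leading eigenvectors rather than in the full parameter space.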
NASA Astrophysics Data System (ADS)
Fomina, E. V.; Kozhukhova, N. I.; Sverguzova, S. V.; Fomin, A. E.
2018-05-01
In this paper, the regression equations method for the design of a construction material was studied. Regression and polynomial equations representing the correlation between the studied parameters were proposed. The logic design and software interface of the regression equations method focus on parameter optimization to provide an energy-saving effect at the design stage of autoclaved aerated concrete, considering the replacement of traditionally used quartz sand by a coal-mining by-product, argillite. The mathematical model, represented by a quadratic polynomial for the design of experiments, was obtained using calculated and experimental data. This allowed the estimation of the relationship between the composition and the final properties of the aerated concrete. The response surface, graphically presented in a nomogram, allowed the estimation of concrete properties in response to variation of the composition within the x-space. The optimal range of argillite content was obtained, leading to a reduction of raw materials demand and development of the target plastic strength of aerated concrete, as well as a reduction of curing time before autoclave treatment. Generally, this method allows the design of autoclaved aerated concrete with the required performance without additional resource and time costs.
Shao, Kan; Small, Mitchell J
2011-10-01
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
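The quantal-linear model named above has a closed-form BMD once its slope parameter is estimated: for extra risk, BMR = 1 - exp(-b * BMD), so BMD = -ln(1 - BMR)/b. A maximum-likelihood sketch follows; the bioassay counts are invented, and this deliberately omits the paper's MCMC, model-averaging, and BMDL machinery:

```python
import numpy as np
from scipy.optimize import minimize

# Quantal-linear dose-response: P(d) = g + (1 - g) * (1 - exp(-b * d)).
# Invented bioassay data (dose, animals tested, animals responding):
dose = np.array([0.0, 1.0, 3.0, 10.0])
n = np.array([50, 50, 50, 50])
k = np.array([2, 10, 22, 45])

def neg_log_lik(theta):
    g, b = theta
    p = g + (1.0 - g) * (1.0 - np.exp(-b * dose))
    p = np.clip(p, 1e-9, 1.0 - 1e-9)        # guard the logs
    return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

fit = minimize(neg_log_lik, x0=[0.05, 0.1],
               bounds=[(1e-6, 0.5), (1e-6, 5.0)])
g_hat, b_hat = fit.x

# Benchmark dose at 10% extra risk, from the fitted slope:
bmr = 0.10
bmd = -np.log(1.0 - bmr) / b_hat
```

The paper replaces this point estimate with a posterior distribution over (g, b) from MCMC, averages BMD predictions across models via BMA, and takes a lower percentile as the BMDL.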
Advanced Fluid Reduced Order Models for Compressible Flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tezaur, Irina Kalashnikova; Fike, Jeffrey A.; Carlberg, Kevin Thomas
This report summarizes fiscal year (FY) 2017 progress towards developing and implementing, within the SPARC in-house finite volume flow solver, advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov-Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we briefly overview the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.
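The POD half of the POD/LSPG method reduces to an SVD of a snapshot matrix followed by projection onto the leading left singular vectors. A minimal sketch on random synthetic snapshots follows (not the SPARC solver; a plain Galerkin projection is shown for brevity, whereas LSPG instead minimizes the residual in a least-squares sense at each step):

```python
import numpy as np

rng = np.random.default_rng(3)

n, m, r = 200, 40, 3                    # state size, snapshot count, ROM size
# Synthetic snapshots with hidden low-rank structure (5 underlying modes):
modes = np.linalg.qr(rng.normal(size=(n, 5)))[0]
snapshots = modes @ rng.normal(size=(5, m))     # columns are state snapshots

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]                          # POD basis (n x r, orthonormal)

A = rng.normal(size=(n, n)) / np.sqrt(n)        # stand-in full-order operator
A_r = Phi.T @ A @ Phi                   # projected r x r reduced operator

energy = (s[:r] ** 2).sum() / (s ** 2).sum()    # snapshot energy captured
```

In practice the snapshots come from full-order CFD runs, r is chosen from the singular-value decay (the `energy` criterion), and the reduced operator is evaluated through the solver's nonlinear residual rather than a fixed matrix.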
Methodological development for selection of significant predictors explaining fatal road accidents.
Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco
2016-05-01
Identification of the most relevant factors for explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose is still an open research question. In this paper we propose a methodological development for model selection which addresses both explanatory variable selection and adequate model selection. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov Chain Monte Carlo method, with the model parameters assigned non-informative prior distributions. The final model is built using the results of the variable selection. The proposed methodology was applied to the number of fatal accidents in Spain during 2000-2011. This indicator experienced the greatest reduction internationally during those years, making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures. Published by Elsevier Ltd.
[Predicting the impact of climate change in the next 40 years on the yield of maize in China].
Ma, Yu-ping; Sun, Lin-li; E, You-hao; Wu, Wei
2015-01-01
Climate change will significantly affect agricultural production in China. Combining the integral regression model with the latest climate projections makes it possible to assess the impact of future climate change on crop yield. In this paper, a correlation model of maize yield and meteorological factors was first established for different provinces in China using the integral regression method; the impact of climate change over the next 40 years on China's maize production was then evaluated by combining this model with the latest climate predictions, and the underlying causes were analyzed. The results showed that if the current rates of maize variety improvement and scientific and technological development remain constant, maize yield in China would show an increasing trend of reduction over the next 40 years, generally within 5%. Under the A2 climate change scenario, the region with the greatest reduction of maize yield would be the Northeast, except during 2021-2030, with reductions generally in the range of 2.3%-4.2%. Maize yield reduction would also be high in the Northwest, the Southwest, and the middle and lower reaches of the Yangtze River after 2031. Under the B2 scenario, the reduction of 5.3% in the Northeast in 2031-2040 would be the greatest across all regions; other regions with considerable maize yield reduction would mainly be the Northwest and the Southwest. Reduction in maize yield in North China would be small, generally within 2%, under either scenario, and yield in South China would be almost unchanged. The reduction of maize yield in most regions would be greater under the A2 scenario than under the B2 scenario, except for the period 2021-2030. The effect of ten-day precipitation on maize yield in northern China would be almost entirely positive, whereas the effect of ten-day average temperature on maize yield in all regions would be generally negative.
The main cause of maize yield reduction would be temperature increase in most provinces, and precipitation decrease in a few. Assessments of the future change of maize yield in China based on different methods are not consistent. Further evaluation needs to consider changes in maize varieties and in scientific and technological progress, and to enhance the reliability of evaluation models.
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
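The core idea, counting (locally) active dynamical modes from the singular values of a sensitivity matrix, can be sketched as follows. The sensitivity matrix here is synthetic, constructed to have two dominant directions, and the tolerance is an assumed value rather than the paper's error-controlled criterion.

```python
import numpy as np

def active_modes(sensitivity, tol=1e-3):
    """Minimal local model dimension: number of singular values of the
    sensitivity matrix above a relative tolerance."""
    s = np.linalg.svd(sensitivity, compute_uv=False)
    return int(np.sum(s / s[0] > tol))

# Hypothetical sensitivity matrix for a 5-species kinetic model in which only
# two directions in state space are dynamically active on this time interval
rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((5, 5)))
V, _ = np.linalg.qr(rng.standard_normal((5, 5)))
sens = U @ np.diag([10.0, 1.0, 1e-6, 1e-8, 1e-10]) @ V.T

print(active_modes(sens))
```

Repeating this along a state trajectory yields the piecewise minimal model dimension the abstract describes.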
2011-01-01
Background A large proportion of disease burden is attributed to behavioural risk factors. However, funding for public health programs in Australia remains limited. Government and non-government organisations are interested in the productivity effects on society from reducing chronic diseases. We aimed to estimate the potential health status and economic benefits to society following a feasible reduction in the prevalence of six behavioural risk factors: tobacco smoking; inadequate fruit and vegetable consumption; high risk alcohol consumption; high body mass index; physical inactivity; and intimate partner violence. Methods Simulation models were developed for the 2008 Australian population. A realistic reduction in current risk factor prevalence was determined using the best available evidence combined with expert consensus. Avoidable disease, deaths, Disability Adjusted Life Years (DALYs) and health sector costs were estimated. Productivity gains included workforce (friction cost method), household production and leisure time. Multivariable uncertainty analyses and correction for the joint effects of risk factors on health status were undertaken. Consistent methods and data sources were used. Results Over the lifetime of the 2008 Australian adult population, total opportunity cost savings of AUD2,334 million (95% Uncertainty Interval AUD1,395 to AUD3,347; 64% in the health sector) were found if feasible reductions in the risk factors were achieved. There would be 95,000 fewer DALYs (a reduction of about 3.6% in total DALYs for Australia); 161,000 fewer new cases of disease; 6,000 fewer deaths; a reduction of 5 million days in workforce absenteeism; and 529,000 additional days of leisure time. Conclusions Reductions in common behavioural risk factors may provide substantial benefits to society. For example, the total potential annual cost savings in the health sector represent approximately 2% of total annual health expenditure in Australia. 
Our findings contribute important new knowledge about productivity effects, including the potential for increased household and leisure activities, associated with chronic disease prevention. The selection of targets for risk factor prevalence reduction is an important policy decision and a useful approach for future analyses. Similar approaches could be applied in other countries if the data are available. PMID:21689461
Modeling of the oxygen reduction reaction for dense LSM thin films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Liu, Jian; Yu, Yang
In this study, the oxygen reduction reaction mechanism is investigated using numerical methods on a dense thin (La1-xSrx)yMnO3±δ film deposited on a YSZ substrate. This 1-D continuum model consists of defect chemistry and elementary oxygen reduction reaction steps coupled via reaction rates. The defect chemistry model contains eight species, including cation vacancies on the A- and B-sites. The oxygen vacancy concentration is calculated by solving species transport equations in multiphysics simulations. Owing to the simple geometry of a dense thin film, the oxygen reduction reaction was reduced to three elementary steps: surface adsorption and dissociation, incorporation on the surface, and charge transfer across the LSM/YSZ interface. The numerical simulations allow for calculation of the temperature- and oxygen partial pressure-dependent properties of LSM. The parameters of the model are calibrated with experimental impedance data for various oxygen partial pressures at different temperatures. The results indicate that surface adsorption and dissociation is the rate-determining step in the ORR of LSM thin films. With the fine-tuned parameters, further quantitative analysis is performed. The activation energy of the oxygen exchange reaction and the dependence of oxygen non-stoichiometry on oxygen partial pressure are also calculated and verified against literature results.
Modeling of the oxygen reduction reaction for dense LSM thin films
Yang, Tao; Liu, Jian; Yu, Yang; ...
2017-10-17
Space vehicle acoustics prediction improvement for payloads. [space shuttle
NASA Technical Reports Server (NTRS)
Dandridge, R. E.
1979-01-01
The modal analysis method was extensively modified for the prediction of space vehicle noise reduction in the shuttle payload enclosure, and this program was adapted to the IBM 360 computer. The predicted noise reduction levels for two test cases were compared with experimental results to determine the validity of the analytical model for predicting space vehicle payload noise environments in the 10 Hz one-third octave band regime. The prediction approach for the two test cases generally gave reasonable magnitudes and trends when compared with the measured noise reduction spectra. The discrepancies in the predictions could be corrected primarily by improved modeling of the vehicle structural walls and of the enclosed acoustic space to obtain a more accurate assessment of normal modes. Techniques for improving and expanding the noise prediction for a payload environment are also suggested.
Stream temperature investigations: field and analytic methods
Bartholow, J.M.
1989-01-01
Alternative public domain stream and reservoir temperature models are contrasted with SNTEMP. A distinction is made between steady-flow and dynamic-flow models and their respective capabilities. Regression models are offered as an alternative approach for some situations, with appropriate mathematical formulas suggested. Appendices provide information on State and Federal agencies that are good data sources, vendors for field instrumentation, and small computer programs useful in data reduction.
Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements
NASA Technical Reports Server (NTRS)
Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.
2012-01-01
We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
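The benefit of "log after averaging" over "averaging before log" can be illustrated with a toy simulation: with multiplicative noise on the pulse-return energies, per-shot logs acquire a Jensen's-inequality bias, while averaging the energies before taking the log largely removes it. The noise level and DAOD value below are illustrative, not instrument parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
true_daod = 0.5       # "true" differential absorption optical depth (illustrative)
n_shots = 20000

# Per-shot off-line and on-line return energies with multiplicative noise,
# clipped away from zero so the per-shot logarithm is defined
e_off = (1.0 + 0.3 * rng.standard_normal(n_shots)).clip(0.05)
e_on = np.exp(-true_daod) * e_off * (1.0 + 0.3 * rng.standard_normal(n_shots)).clip(0.05)

# "Log after averaging": average the energies first, then take the log ratio
daod_avg_first = float(np.log(e_off.mean() / e_on.mean()))
# "Averaging before log": per-shot logs, biased high by Jensen's inequality
daod_log_first = float(np.mean(np.log(e_off / e_on)))

print(round(daod_avg_first, 3), round(daod_log_first, 3))
```

The averaged-energy estimate lands close to the true value, while the per-shot-log estimate carries a systematic positive bias, which is the effect the abstract's error reduction strategy targets.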
Automated data processing and radioassays.
Samols, E; Barrows, G H
1978-04-01
Radioassays include (1) radioimmunoassays, (2) competitive protein-binding assays based on competition for limited antibody or specific binding protein, (3) immunoradiometric assays, based on competition for excess labeled antibody, and (4) radioreceptor assays. Most mathematical models describing the relationship between labeled ligand binding and unlabeled ligand concentration have been based on the law of mass action or the isotope dilution principle. These models provide useful data reduction programs, but are theoretically unsatisfactory because competitive radioassay usually is not based on classical dilution principles, labeled and unlabeled ligand do not have to be identical, antibodies (or receptors) are frequently heterogeneous, equilibrium usually is not reached, and there is probably steric and cooperative influence on binding. An alternative, more flexible mathematical model, based on the probability of binding collisions being restricted by the surface area of reactive divalent sites on antibody and on univalent antigen, has been derived. Application of these models to automated data reduction allows standard curves to be fitted by a mathematical expression, and unknown values are calculated from binding data. The virtues and pitfalls of point-to-point data reduction, linear transformations, and curvilinear fitting approaches are presented. A third-order polynomial using the square root of concentration closely approximates the mathematical model based on probability, and in our experience this method provides the most acceptable results with all varieties of radioassays. With this curvilinear system, linear point connection should be used between the zero standard and the beginning of significant dose response, and also towards saturation. The importance of limiting the range of reported automated assay results to that portion of the standard curve that delivers optimal sensitivity is stressed. 
Published methods for automated data reduction of Scatchard plots for radioreceptor assay are limited by calculation of a single mean K value. The quality of the input data is generally the limiting factor in achieving good precision with automated as it is with manual data reduction. The major advantages of computerized curve fitting include: (1) handling large amounts of data rapidly and without computational error; (2) providing useful quality-control data; (3) indicating within-batch variance of the test results; (4) providing ongoing quality-control charts and between assay variance.
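The recommended curvilinear fit, a third-order polynomial in the square root of concentration, can be sketched with synthetic standards (generated here from a smooth curve plus noise, so all values are illustrative): fit the cubic, then invert it numerically to read unknown concentrations off the standard curve.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic standards: percent bound vs. concentration, generated from a
# smooth curve (cubic in sqrt(conc)) plus small noise -- illustrative only
conc = np.array([0.0, 0.5, 1.0, 2.5, 5.0, 10.0, 25.0, 50.0])
x = np.sqrt(conc)
bound = np.polyval([0.2, -2.0, -10.0, 100.0], x) + 0.5 * rng.standard_normal(x.size)

# Third-order polynomial in sqrt(concentration), as the abstract recommends
coeffs = np.polyfit(x, bound, 3)
rmse = float(np.sqrt(np.mean((np.polyval(coeffs, x) - bound) ** 2)))

# Unknowns: invert the fitted standard curve numerically on a fine grid
grid = np.linspace(x.min(), x.max(), 2001)
curve = np.polyval(coeffs, grid)

def estimate_conc(b):
    """Concentration whose fitted response is closest to the observed binding."""
    return float(grid[np.argmin(np.abs(curve - b))] ** 2)

print(round(rmse, 3), round(estimate_conc(25.0), 1))
```

In practice the abstract also advises restricting reported results to the steep, sensitive portion of the curve; the grid inversion above makes that restriction easy to enforce by limiting the grid range.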
NASA Astrophysics Data System (ADS)
Elliott, R.; Pickles, C. A.
2017-09-01
Nickeliferous limonitic laterite ores are becoming increasingly attractive as a source of metallic nickel as the costs associated with recovering nickel from the sulphide ores increase. Unlike the sulphide ores, however, the laterite ores are not amenable to concentration by conventional mineral processing techniques such as froth flotation. One potential concentrating method would be the pyrometallurgical solid state reduction of the nickeliferous limonitic ores at relatively low temperatures, followed by beneficiation via magnetic separation. A number of reductants can be utilized in the reduction step, and in this research, a thermodynamic model has been developed to investigate the reduction of a nickeliferous limonitic laterite by hydrogen. The nickel recovery to the ferronickel phase was predicted to be greater than 95 % at temperatures of 673-873 K. Reductant additions above the stoichiometric requirement resulted in high recoveries over a wider temperature range, but the nickel grade of the ferronickel decreased.
Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua
2013-01-01
Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs) which are hard to diagnose using the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistical regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied. The results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. All in all, it was found that using curvelet-based textural features after dimensionality reduction and using clinical predictors, the highest accuracy rate was achieved with SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
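The dimensionality-reduction step can be sketched with PCA via SVD on a hypothetical feature matrix standing in for the curvelet textures; the classifier stage is omitted, and the data are synthetic with a known low-dimensional structure.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project centered features onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = float((s[:n_components] ** 2).sum() / (s ** 2).sum())
    return Xc @ Vt[:n_components].T, explained

# Hypothetical feature matrix: 100 nodules x 60 correlated texture features
# built from 5 latent factors, standing in for the curvelet features
rng = np.random.default_rng(4)
latent = rng.standard_normal((100, 5))
mixing = rng.standard_normal((5, 60))
X = latent @ mixing + 0.05 * rng.standard_normal((100, 60))

Z, explained = pca_reduce(X, n_components=5)
print(Z.shape, round(explained, 4))
```

The reduced matrix `Z` (plus clinical predictors) would then feed the downstream classifiers (LASSO, SVM, or multilevel model) compared in the study.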
Jiang, Wei; Chen, Yaxin; He, Xiaoxia; Hu, Shiwei; Li, Shijie; Liu, Yu
2018-01-15
The tyramine/glucose Maillard reaction was proposed as an emerging tool for tyramine reduction in a model system and two commercial soy sauce samples. The model system was composed of tyramine and glucose in buffer solutions with or without NaCl. The results showed that tyramine was reduced in the model system, and the reduction rate was affected by temperature, heating time, initial pH value, NaCl concentration, initial glucose concentration and initial tyramine concentration. Changes in fluorescence intensity and ultraviolet-visible (UV-vis) absorption spectra showed three stages of the Maillard reaction between tyramine and glucose. Cytotoxicity assay demonstrated that tyramine/glucose Maillard reaction products (MRPs) were significantly less toxic than that of tyramine (p<0.05). Moreover, tyramine concentration in soy sauce samples was significantly reduced when heated with the addition of glucose (p<0.05). Experimental results showed that the tyramine/glucose Maillard reaction is a promising method for tyramine reduction in foods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantages of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
Multi-linear model set design based on the nonlinearity measure and H-gap metric.
Shaghaghi, Davood; Fatehi, Alireza; Khaki-Sedigh, Ali
2017-05-01
This paper proposes a model bank selection method for a large class of nonlinear systems with wide operating ranges. In particular, nonlinearity measure and H-gap metric are used to provide an effective algorithm to design a model bank for the system. Then, the proposed model bank is accompanied with model predictive controllers to design a high performance advanced process controller. The advantage of this method is the reduction of excessive switch between models and also decrement of the computational complexity in the controller bank that can lead to performance improvement of the control system. The effectiveness of the method is verified by simulations as well as experimental studies on a pH neutralization laboratory apparatus which confirms the efficiency of the proposed algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Vijayalakshmi, Subramanian; Nadanasabhapathi, Shanmugam; Kumar, Ranganathan; Sunny Kumar, S
2018-03-01
The presence of aflatoxin, a carcinogenic and toxigenic secondary metabolite produced by Aspergillus species, in food matrices has been a major worldwide problem for years. Food processing methods such as roasting, extrusion, etc. have been employed for effective destruction of aflatoxins, which are known for their thermo-stable nature. High temperature treatment, however, adversely affects the nutritive and other quality attributes of the food, motivating the application of non-thermal processing techniques such as ultrasonication, gamma irradiation, high pressure processing, pulsed electric field (PEF), etc. The present study focused on analysing the efficacy of the PEF process in reducing the toxin content, which was subsequently quantified using HPLC. The process parameters for model systems of different pH (potato dextrose agar) artificially spiked with an aflatoxin mix standard were optimized using response surface methodology. The effects of pH (4-10), pulse width (10-26 µs) and output voltage (20-65%) on the responses, aflatoxin B1 reduction and total aflatoxin reduction (%), fitted a 2FI model and a quadratic model, respectively. The response surface plots obtained were of the saddle point type, with no minimum or maximum response at the centre point. Numerical optimization showed that the predicted and actual values were similar, confirming the adequacy of the fitted models and demonstrating the potential application of PEF in toxin reduction.
Aliabadi, Mohsen; Biabani, Azam; Golmohammadi, Rostam; Farhadian, Maryam
2018-05-28
Actual noise reduction of earmuffs is considered one of the main challenges in evaluating the effectiveness of a hearing conservation program. The current study aimed to determine the real-world noise attenuation of current hearing protection devices in typical workplaces using a field microphone in real ear (FMIRE) method. In this cross-sectional study, five common earmuffs were investigated among 50 workers in two industrial factories with different noise characteristics. Noise reduction during earmuff use was measured according to the ISO 11904 standard (field microphone in real ear method), using a noise dosimeter (SVANTEK model SV 102) equipped with an SV 25 microphone. The actual insertion losses (IL) of the tested earmuffs in octave bands were lower than the labeled insertion loss data (p < 0.05). The frequency character of the noise to which workers are exposed has a noticeable effect on the actual noise reduction of earmuffs (p < 0.05). The results suggest that the proportion of time earmuffs are worn has a considerable impact on the effective noise reduction during the workday. Data about the ambient noise characteristics are a key criterion when evaluating the acoustic performance of hearing protectors in any workplace. Comfort should be considered one of the most important criteria for long-term use and effective wearing of hearing protection devices. FMIRE could facilitate rapid and simple measurement of the actual performance of the current earmuffs employed by workers during different work activities.
An Eigensystem Realization Algorithm (ERA) for modal parameter identification and model reduction
NASA Technical Reports Server (NTRS)
Juang, J. N.; Pappa, R. S.
1985-01-01
A method, called the Eigensystem Realization Algorithm (ERA), is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular value decomposition technique to derive the basic formulation of minimum order realization which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system modes and noise modes. For illustration of the algorithm, examples are shown using simulation data and experimental data for a rectangular grid structure.
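A minimal SISO version of the ERA construction can be sketched as follows: stack the Markov (impulse-response) parameters into shifted Hankel matrices, take an SVD, and form the minimum-order realization. The test system and its modes are illustrative, and the noise-mode accuracy indicators described in the abstract are omitted.

```python
import numpy as np

def era(markov, r, m=20):
    """Minimum-order realization (A, B, C) from Markov (impulse-response)
    parameters via the Eigensystem Realization Algorithm."""
    H0 = np.array([[markov[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr = U[:, :r], Vt[:r].T
    Sroot = np.diag(np.sqrt(s[:r]))
    Sinv = np.diag(1.0 / np.sqrt(s[:r]))
    A = Sinv @ Ur.T @ H1 @ Vr @ Sinv   # reduced state matrix
    B = (Sroot @ Vr.T)[:, :1]          # reduced input matrix
    C = (Ur @ Sroot)[:1, :]            # reduced output matrix
    return A, B, C

# Illustrative SISO test system with modes at 0.9 and 0.5
A_true = np.diag([0.9, 0.5])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, -1.0]])
markov = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item()
          for k in range(60)]

A, B, C = era(markov, r=2)
eigs = sorted(np.linalg.eigvals(A).real)
print([round(e, 6) for e in eigs])
```

With noise-free data of true order `r`, the identified eigenvalues match the system modes to machine precision; with test data, the singular value spectrum of `H0` is what separates system modes from noise modes.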
NASA Astrophysics Data System (ADS)
Kornilov, V. I.; Boiko, A. V.
2017-10-01
Problems of experimental modeling of the process of air blowing into turbulent boundary layer of incompressible fluid through finely perforated wall are discussed. Particular attention is paid to the analysis of both the main factors responsible for the effectiveness of blowing and the possibility of studying the factors in artificially generated turbulent boundary layer. It was shown that uniformity of the injected gas is one of the main requirements to enhance the effectiveness of this method of flow control. An example of the successful application of this technology exhibiting a significant reduction of the turbulent skin friction is provided.
NASA Astrophysics Data System (ADS)
Lafranchi, B. W.; Goldstein, A. H.; Cohen, R. C.
2011-02-01
Observations of NOx in the Sacramento, CA region show that mixing ratios decreased by 30% between 2001 and 2008. Here we use an observation-based method to quantify net ozone production rates in the outflow from the Sacramento metropolitan region and examine the O3 decrease resulting from reductions in NOx emissions. This observational method does not rely on assumptions about detailed chemistry of ozone production, rather it is an independent means to verify and test these assumptions. We use an instantaneous steady-state model as well as a detailed 1-D plume model to aid in interpretation of the ozone production inferred from observations. In agreement with the models, the observations show that early in the plume, the NOx dependence for Ox (Ox = O3 + NO2) production is strongly coupled with temperature, suggesting that temperature-dependent biogenic VOC emissions can drive Ox production between NOx-limited and NOx-suppressed regimes. As a result, NOx reductions were found to be most effective at higher temperatures over the 7 year period. We show that violations of the California 1-hour O3 standard (90 ppb) in the region have been decreasing linearly with decreases in NOx (at a given temperature) and predict that reductions of NOx concentrations (and presumably emissions) by an additional 30% (relative to 2007 levels) will eliminate violations of the state 1 h standard in the region. If current trends continue, a 30% decrease in NOx is expected by 2012, and an end to violations of the 1 h standard in the Sacramento region appears to be imminent.
Nonholonomic Hamiltonian Method for Meso-macroscale Simulations of Reacting Shocks
NASA Astrophysics Data System (ADS)
Fahrenthold, Eric; Lee, Sangyup
2015-06-01
The seamless integration of macroscale, mesoscale, and molecular scale models of reacting shock physics has been hindered by dramatic differences in the model formulation techniques normally used at different scales. In recent research the authors have developed the first unified discrete Hamiltonian approach to multiscale simulation of reacting shock physics. Unlike previous work, the formulation employs reacting thermomechanical Hamiltonian formulations at all scales, including the continuum, and uses a nonholonomic modeling approach to systematically couple the models developed at all scales. Example applications of the method show meso-macroscale shock-to-detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.
Konfino, Jonatan; Mekonnen, Tekeshe A.; Coxson, Pamela G.; Ferrante, Daniel; Bibbins-Domingo, Kirsten
2013-01-01
Background Cardiovascular disease (CVD) is the leading cause of death in adults in Argentina. Sodium reduction policies targeting processed foods were implemented in 2011 in Argentina, but the impact has not been evaluated. The aims of this study are to use Argentina-specific data on sodium excretion and project the impact of Argentina’s sodium reduction policies under two scenarios - the 2-year intervention currently being undertaken or a more persistent 10 year sodium reduction strategy. Methods We used Argentina-specific data on sodium excretion by sex and projected the impact of the current strategy on sodium consumption and blood pressure decrease. We assessed the projected impact of sodium reduction policies on CVD using the Cardiovascular Disease (CVD) Policy Model, adapted to Argentina, modeling two alternative policy scenarios over the next decade. Results Our study finds that the initiative to reduce sodium consumption currently in place in Argentina will have substantial impact on CVD over the next 10 years. Under the current proposed policy of 2-year sodium reduction, the mean sodium consumption is projected to decrease by 319–387 mg/day. This decrease is expected to translate into an absolute reduction of systolic blood pressure from 0.93 mmHg to 1.81 mmHg. This would avert about 19,000 all-cause mortality, 13,000 total myocardial infarctions, and 10,000 total strokes over the next decade. A more persistent sodium reduction strategy would yield even greater CVD benefits. Conclusion The impact of the Argentinean initiative would be effective in substantially reducing mortality and morbidity from CVD. This paper provides evidence-based support to continue implementing strategies to reduce sodium consumption at a population level. PMID:24040085
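The projection logic can be reduced to a back-of-envelope sketch: a linear sodium-to-blood-pressure slope (a hypothetical value, not the CVD Policy Model's internal dose-response) scales the policy's estimated daily sodium cut into an expected systolic blood pressure change.

```python
def sbp_reduction(sodium_cut_mg_per_day, slope_mmhg_per_gram=2.9):
    """Expected systolic BP decrease (mmHg) for a daily sodium cut (mg),
    under an assumed linear dose-response slope (hypothetical value)."""
    return sodium_cut_mg_per_day / 1000.0 * slope_mmhg_per_gram

# The abstract's projected sodium decrease of 319-387 mg/day maps, under the
# assumed slope, to a systolic blood pressure decrease in mmHg
low = sbp_reduction(319)
high = sbp_reduction(387)
print(round(low, 2), round(high, 2))
```

The full model then propagates the blood pressure change into averted myocardial infarctions, strokes, and deaths via population risk equations, which this sketch does not attempt.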
Hydroplaning on multi lane facilities.
DOT National Transportation Integrated Search
2012-11-01
The primary findings of this research can be highlighted as follows. Models that provide estimates of wet weather speed reduction, as well as analytical and empirical methods for the prediction of hydroplaning speeds of trailers and heavy trucks, wer...
Spectral Quasi-Equilibrium Manifold for Chemical Kinetics.
Kooshkbaghi, Mahdi; Frouzakis, Christos E; Boulouchos, Konstantinos; Karlin, Iliya V
2016-05-26
The Spectral Quasi-Equilibrium Manifold (SQEM) method is a model reduction technique for chemical kinetics based on entropy maximization under constraints built by the slowest eigenvectors at equilibrium. The method is revisited here and discussed and validated through the Michaelis-Menten kinetic scheme, and the quality of the reduction is related to the temporal evolution and the gap between eigenvalues. SQEM is then applied to detailed reaction mechanisms for the homogeneous combustion of hydrogen, syngas, and methane mixtures with air in adiabatic constant pressure reactors. The system states computed using SQEM are compared with those obtained by direct integration of the detailed mechanism, and good agreement between the reduced and the detailed descriptions is demonstrated. The SQEM reduced model of hydrogen/air combustion is also compared with another similar technique, the Rate-Controlled Constrained-Equilibrium (RCCE). For the same number of representative variables, SQEM is found to provide a more accurate description.
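To give a flavor of what kinetic model reduction achieves on the Michaelis-Menten scheme, the sketch below integrates the full mechanism and compares it with the classical quasi-steady-state reduction, in which the enzyme-substrate complex is slaved to a low-dimensional manifold. This is a simpler reduction than SQEM's entropy-based construction, and all rate constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Full Michaelis-Menten mechanism: E + S <-> ES -> E + P
# Illustrative rate constants (assumed for this sketch, not from the paper).
k1, km1, k2 = 100.0, 1.0, 1.0
e0, s0 = 0.01, 1.0            # total enzyme, initial substrate
Km = (km1 + k2) / k1          # Michaelis constant
vmax = k2 * e0

def integrate_full(t_end, dt=1e-3):
    """Explicit-Euler integration of the full (stiff) two-variable mechanism."""
    s, c = s0, 0.0            # substrate and complex ES concentrations
    for _ in range(int(t_end / dt)):
        ds = -k1 * (e0 - c) * s + km1 * c
        dc = k1 * (e0 - c) * s - (km1 + k2) * c
        s, c = s + dt * ds, c + dt * dc
    return s

def integrate_reduced(t_end, dt=1e-3):
    """Reduced model: ES eliminated via the quasi-steady-state manifold."""
    s = s0
    for _ in range(int(t_end / dt)):
        s += dt * (-vmax * s / (Km + s))
    return s
```

With a small enzyme loading (e0 much less than Km + s0), the one-variable reduced model tracks the full mechanism closely, which is the kind of agreement the abstract reports for SQEM against detailed combustion mechanisms.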
Karayianni, Katerina N; Grimaldi, Keith A; Nikita, Konstantina S; Valavanis, Ioannis K
2015-01-01
This paper aims to enlighten the complex etiology beneath obesity by analysing data from a large nutrigenetics study, in which nutritional and genetic factors associated with obesity were recorded for around two thousand individuals. In our previous work, these data have been analysed using artificial neural network methods, which identified optimised subsets of factors to predict one's obesity status. These methods did not reveal though how the selected factors interact with each other in the obtained predictive models. For that reason, parallel Multifactor Dimensionality Reduction (pMDR) was used here to further analyse the pre-selected subsets of nutrigenetic factors. Within pMDR, predictive models using up to eight factors were constructed, further reducing the input dimensionality, while rules describing the interactive effects of the selected factors were derived. In this way, it was possible to identify specific genetic variations and their interactive effects with particular nutritional factors, which are now under further study.
NASA Astrophysics Data System (ADS)
Lettmann, K.; Kirchner, J.; Schnetger, B.; Wolff, J. O.; Brumsack, H. J.
2016-12-01
Rising CO2-emissions accompanying the industrial revolution are the main drivers of climate change and ocean acidification. Several methods have been developed to capture CO2 from effluents and reduce emissions. Here, we consider a promising approach that mimics natural limestone weathering: CO2 in effluent gas streams reacts with calcium carbonate in a limestone suspension. The resulting bicarbonate-rich solution can be released into natural systems. In comparison to classical carbon capture and storage (CCS) methods, this artificial limestone weathering is cheaper and does not involve toxic chemical compounds. Additionally, there is no need for the controversially discussed underground storage of CO2. The reduction of CO2-emissions has become more important for European industries since the EU introduced a system that limits the amount of allowable CO2-emissions. Therefore, large CO2 emitters are forced to find cheap methods for emission reduction, as they often cannot circumvent CO2-production. The method mentioned above is of particular interest for power plants located close to the coast that already use seawater for cooling purposes. Thus, it is important to estimate the environmental effects if several coastal power plants were to release large amounts of bicarbonate-rich waters into coastal waters, e.g. the North Sea. In a first pilot study, the unstructured-grid finite-volume community ocean model (FVCOM) was combined with a chemical submodule (mocsy 2.0) to model the hydrodynamic circulation and mixing of bicarbonate-rich effluents from a gas power plant located at the German North Sea coast. Here, we present the first preliminary results of this project, which include modelled changes of the North Sea carbonate system and changes in pH value after the introduction of these bicarbonate-rich waters on short time scales of up to one year.
NASA Astrophysics Data System (ADS)
Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric
2018-02-01
Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). 
Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.
Air pollution response to changing weather and power plant emissions in the eastern United States
NASA Astrophysics Data System (ADS)
Bloomer, Bryan Jaye
Air pollution in the eastern United States causes human sickness and death as well as damage to crops and materials. NOX emission reduction is observed to improve air quality. Effectively reducing pollution in the future requires understanding the connections between smog, precursor emissions, weather, and climate change. Numerical models predict global warming will exacerbate smog over the next 50 years. My analysis of 21 years of CASTNET observations quantifies a climate change penalty. I calculate, for data collected prior to 2002, a climate penalty factor of ~3.3 ppb O3/°C across the power plant dominated receptor regions in the rural, eastern U.S. Recent reductions in NOX emissions decreased the climate penalty factor to ~2.2 ppb O3/°C. Prior to 1995, power plant emissions of CO2, SO2, and NOX were estimated with fuel sampling and analysis methods. Currently, emissions are measured with continuous emission monitoring systems (CEMS) installed directly in stacks. My comparison of the two methods shows CO2 and SO2 emissions are ~5% lower when inferred from fuel sampling; greater differences are found for NOX emissions. CEMS are the method of choice for emission inventories and commodity trading and should be the standard against which other methods are evaluated for global greenhouse gas trading policies. I used CEMS data and applied chemistry transport modeling to evaluate improvements in air quality observed by aircraft during the North American electrical blackout of 2003. An air quality model produced substantial reductions in O3, but not as much as observed. The study highlights weaknesses in the model as commonly used for evaluating a single day event and suggests areas for further investigation. A new analysis and visualization method quantifies local-daily to hemispheric-seasonal scale relationships between weather and air pollution, confirming improved air quality despite increasing temperatures across the eastern U.S. 
Climate penalty factors indicate amplified smog formation in areas of the world with rising temperatures and increasing emissions. Tools developed in this dissertation provide data for model evaluation and methods for establishing air quality standards with an adequate margin of safety for cleaning the air and protecting the public's health in a world with changing climate.
Salter-Blanc, Alexandra; Bylaska, Eric J.; Johnston, Hayley; ...
2015-02-11
The evaluation of new energetic nitroaromatic compounds (NACs) for use in green munitions formulations requires models that can predict their environmental fate. The susceptibility of energetic NACs to nitro reduction might be predicted from correlations between rate constants (k) for this reaction and one-electron reduction potentials (E1NAC), but the mechanistic implications of such correlations are inconsistent with evidence from other methods. To address this inconsistency, we have reevaluated existing kinetic data using a (non-linear) free-energy relationship (FER) based on the Marcus theory of outer-sphere electron transfer. For most reductants, the results are inconsistent with rate limitation by an initial, outer-sphere electron transfer, suggesting that the strong correlation between k and E1NAC is justified only as an empirical model. This empirical correlation was used to calibrate a new quantitative structure-activity relationship (QSAR) using previously reported values of k for non-energetic NAC reduction by Fe(II) porphyrin and newly reported values of E1NAC determined using density functional theory at the B3LYP/6-311++G(2d,2p) level with the COSMO solvation model. The QSAR was then validated for energetic NACs using newly measured kinetic data for 2,4,6-trinitrotoluene (TNT), 2,4-dinitrotoluene (2,4-DNT), and 2,4-dinitroanisole (DNAN). The data show close agreement with the QSAR, supporting its applicability to energetic NACs.
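The empirical correlation underlying such a QSAR is a linear free-energy relationship between log k and the one-electron reduction potential. The sketch below fits and applies one on a hypothetical calibration set; the potentials and rate constants are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration set (invented values, for illustration only):
# one-electron reduction potentials E1 (V) and log10 rate constants.
e1 = np.array([-0.49, -0.44, -0.40, -0.35, -0.30])
logk = np.array([-1.9, -1.2, -0.7, 0.1, 0.8])

# Linear free-energy relationship: log k = a * (E1 / 0.059 V) + b,
# fitted by ordinary least squares.
x = e1 / 0.059
a, b = np.polyfit(x, logk, 1)

def predict_logk(e1_new):
    """Predict log k for a new compound from its reduction potential."""
    return a * e1_new / 0.059 + b
```

A positive slope a encodes the expected trend: compounds with less negative reduction potentials are reduced faster. Validation would proceed as in the abstract, by comparing predictions against newly measured rate constants.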
SAR Speckle Noise Reduction Using Wiener Filter
NASA Technical Reports Server (NTRS)
Joo, T. H.; Held, D. N.
1983-01-01
Synthetic aperture radar (SAR) images are degraded by speckle. A multiplicative speckle noise model for SAR images is presented. Using this model, a Wiener filter is derived by minimizing the mean-squared error using the known speckle statistics. Implementation of the Wiener filter is discussed and experimental results are presented. Finally, possible improvements to this method are explored.
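A minimal local-statistics variant of this idea can be sketched as follows. The filter shrinks each pixel toward its local mean in proportion to the estimated local signal-to-noise ratio, under a multiplicative speckle model; this is a simplified Lee-style spatial sketch, not the paper's exact frequency-domain derivation, and the window size and noise-variance estimate are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wiener_speckle_filter(img, win=5, noise_var=None):
    """Local Wiener filter under a multiplicative speckle model.

    observed = reflectivity * speckle, with E[speckle] = 1.
    A minimal sketch: window size `win` and the speckle-variance
    estimate are assumptions, not values from the paper.
    """
    mean = uniform_filter(img, win)                 # local mean
    var = uniform_filter(img**2, win) - mean**2     # local variance
    if noise_var is None:
        noise_var = float(np.mean(var))             # crude speckle-variance estimate
    gain = np.clip(var - noise_var, 0.0, None) / np.where(var > 0, var, 1.0)
    # Wiener estimate: keep local mean, pass only the excess variation.
    return mean + gain * (img - mean)
```

In smooth regions the gain is near zero and the output approaches the local mean (strong despeckling); near strong scatterers the local variance dominates the noise variance and detail is preserved.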
Future-year ozone prediction for the United States using updated models and inputs.
Collet, Susan; Kidokoro, Toru; Karamchandani, Prakash; Shah, Tejas; Jung, Jaegun
2017-08-01
The relationship between emission reductions and changes in ozone can be studied using photochemical grid models. These models are updated with new information as it becomes available. The primary objective of this study was to update the previous Collet et al. studies by using the most up-to-date (at the time the study was done) modeling emission tools, inventories, and meteorology available to conduct ozone source attribution and sensitivity studies. Results show future-year (2030) design values for 8-hr ozone concentrations were lower than base-year (2011) values. The ozone source attribution results for selected cities showed that boundary conditions were the dominant contributors to ozone concentrations at the western U.S. locations and were important for many of the eastern U.S. locations. Point sources were generally more important in the eastern United States than in the western United States. The contributions of on-road mobile emissions were less than 5 ppb at a majority of the cities selected for analysis. The higher-order decoupled direct method (HDDM) results showed that in most of the locations selected for analysis, NOx emission reductions were more effective than VOC emission reductions in reducing ozone levels. The source attribution results from this study provide useful information on the important source categories and provide some initial guidance on future emission reduction strategies.
NASA Astrophysics Data System (ADS)
Bailey, Bernard Charles
Increasing the optical range of target detection and recognition continues to be an area of great interest in the ocean environment. Light attenuation limits radiative and information transfer for image formation in water. These limitations are difficult to surmount in conventional underwater imaging system design. Methods for the formation of images in scattering media generally rely upon temporal or spatial methodologies. Some interesting designs have been developed in an attempt to circumvent or overcome the scattering problem. This document describes a variation of the spatial interferometric technique that relies upon projected spatial gratings with subsequent detection against a coherent return signal for the purpose of noise reduction and image enhancement. A model is developed that simulates the projected structured illumination through turbid water to a target and its return to a detector. The model shows an unstructured backscatter superimposed on a structured return signal. The model can predict the effect on received signal to noise of variations in the projected spatial frequency and turbidity. The model has been extended to predict what a camera would actually see so that various noise reduction schemes can be modeled. Finally, some water tank tests are presented validating original hypothesis and model predictions. The method is advantageous in not requiring temporal synchronization between reference and signal beams and may use a continuous illumination source. Spatial coherency of the beam allows detection of the direct return, while scattered light appears as a noncoherent noise term. Both model and illumination method should prove to be valuable tools in ocean research.
NASA Astrophysics Data System (ADS)
Lombardozzi, D.; Bonan, G. B.; Levis, S.; Sparks, J. P.
2010-12-01
Humans are indirectly increasing concentrations of surface ozone (O3) through industrial processes. Ozone is known to have negative impacts on plants, including reductions in crop yields, plant growth, and visible leaf injury. Research also suggests that O3 exposure differentially affects photosynthesis and transpiration because biochemical aspects of photosynthesis are damaged in addition to stomatal conductance, the common link that controls both processes. However, most models incorporate O3 damage as a decrease in photosynthesis, with stomatal conductance responding linearly through the coupling of photosynthesis and conductance calculations. The observed differential effects of O3 on photosynthesis and conductance are not explicitly expressed in most modeling efforts, potentially causing larger decreases in transpiration. We ran five independent simulations of the CLM that compare current methods of incorporating O3 as a decrease in photosynthesis to a new method of separating photosynthesis and transpiration responses to O3 by independently modifying each parameter. We also determine the magnitude of both direct decreases to photosynthesis and transpiration and decreases caused by feedbacks in each parameter. Results show that traditional methods of modeling O3 effects by decreasing photosynthesis cause linear decreases in predicted transpiration that are ~20% larger than observed decreases in transpiration. However, modeled decreases in photosynthesis and transpiration that are incorporated independently of one another predict observed decreases in photosynthesis and improve transpiration predictions by ~13%. Therefore, models best predict carbon and water fluxes when incorporating O3-induced decreases in photosynthesis and transpiration independently.
Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds
NASA Astrophysics Data System (ADS)
Abdo, Mohammad Gamal Mohammad Mostafa
This thesis develops robust reduced order modeling (ROM) techniques to achieve the efficiency needed to render feasible the use of high fidelity tools for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques that can quantify the credibility of the reduction, measured by upper bounds on the reduction errors over the envisaged range of ROM model application. Our objective is two-fold. First, further developments of ROM techniques are proposed for cases in which conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM model predictions over the full range of model application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bound by the error estimated from the fitting residual. 
Dimensionality reduction techniques, however, employ a different philosophy to render the reduction, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshots' variations. Once determined, ROM model application involves constraining the variables to the active subspaces. In doing so, the contribution from the variables' discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theorem was initially presented for linear matrix operators; the thesis extends its results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace, determined using a given set of snapshots generated either with the full high fidelity model or with other models of lower fidelity, can be assessed. This provides insight to the analyst on the type of snapshots required to reach a reduction that satisfies user-defined preset tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus will be on reducing the effective dimensionality of the various data streams such as the cross-section data and the neutron flux. The developed methods will be applied to representative assembly level calculations, where the sizes of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation. (Abstract shortened by ProQuest.)
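The snapshot-projection idea described above can be sketched generically with an SVD: collect snapshots as columns, pick the smallest rank whose singular values capture a user-defined energy fraction, and constrain new states to that subspace. This is a plain POD/SVD sketch, not the thesis's multi-level algorithm or its probabilistic error bound; the energy threshold is an assumed stand-in for the user-defined tolerance.

```python
import numpy as np

def active_subspace(snapshots, energy=0.99):
    """Reduced basis capturing a user-defined share of snapshot variation.

    snapshots: (n_vars, n_snaps) matrix whose columns are model states.
    Returns the basis U_r and the retained rank r.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)        # cumulative energy fraction
    r = int(np.searchsorted(frac, energy)) + 1   # smallest rank meeting the target
    return U[:, :r], r

def constrain(x, basis):
    """Project a new state onto the active subspace."""
    return basis @ (basis.T @ x)
```

Any state lying in the span of the snapshots is reproduced exactly after projection; the discarded singular values indicate how much of the snapshot variation the reduction can miss, which is the quantity the thesis bounds probabilistically.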
Inverse modeling methods for indoor airborne pollutant tracking: literature review and fundamentals.
Liu, X; Zhai, Z
2007-12-01
Reduction in indoor environment quality calls for effective control and improvement measures. Accurate and prompt identification of contaminant sources ensures that they can be quickly removed and contaminated spaces isolated and cleaned. This paper discusses the use of inverse modeling to identify potential indoor pollutant sources with limited pollutant sensor data. The study reviews various inverse modeling methods for advection-dispersion problems and summarizes the methods into three major categories: forward, backward, and probability inverse modeling methods. The adjoint probability inverse modeling method is indicated as an appropriate model for indoor air pollutant tracking because it can quickly find source location, strength and release time without prior information. The paper introduces the principles of the adjoint probability method and establishes the corresponding adjoint equations for both multi-zone airflow models and computational fluid dynamics (CFD) models. The study proposes a two-stage inverse modeling approach integrating both multi-zone and CFD models, which can provide a rapid estimate of indoor pollution status and history for a whole building. Preliminary case study results indicate that the adjoint probability method is feasible for indoor pollutant inverse modeling. The proposed method can help identify contaminant source characteristics (location and release time) with limited sensor outputs. This will ensure an effective and prompt execution of building management strategies and thus achieve a healthy and safe indoor environment. The method can also help design optimal sensor networks.
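The adjoint-probability idea can be illustrated in one dimension: for advection-diffusion transport, the backward location probability of an instantaneous release is proportional to the transport Green's function evaluated backward from the sensor. The velocity, diffusivity, and candidate grid below are invented for illustration; the paper's adjoint equations for multi-zone and CFD models are considerably richer.

```python
import numpy as np

def source_location_probability(x_sensor, t_elapsed, candidates, u=0.1, D=0.05):
    """Backward location probability over candidate source positions.

    1-D advection-diffusion Green's function sketch (illustrative only;
    u, D, and the domain are assumptions, not values from the paper).
    """
    spread = 4.0 * D * t_elapsed
    # Gaussian plume centered where a release would have to start
    # to reach the sensor after drifting a distance u * t_elapsed.
    g = np.exp(-(x_sensor - candidates - u * t_elapsed) ** 2 / spread)
    g /= np.sqrt(np.pi * spread)
    return g / g.sum()          # normalize over the candidate grid

candidates = np.linspace(0.0, 10.0, 501)
p = source_location_probability(x_sensor=6.0, t_elapsed=20.0, candidates=candidates)
```

The most probable source sits upstream of the sensor by the advection distance, and the width of the probability peak grows with diffusion, mirroring how the adjoint method recovers location and release time without prior information.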
Pattin, Kristine A.; White, Bill C.; Barney, Nate; Gui, Jiang; Nelson, Heather H.; Kelsey, Karl R.; Andrew, Angeline S.; Karagas, Margaret R.; Moore, Jason H.
2008-01-01
Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free data mining method for detecting, characterizing, and interpreting epistasis in the absence of significant main effects in genetic and epidemiologic studies of complex traits such as disease susceptibility. The goal of MDR is to change the representation of the data using a constructive induction algorithm to make nonadditive interactions easier to detect using any classification method such as naïve Bayes or logistic regression. Traditionally, MDR constructed variables have been evaluated with a naïve Bayes classifier that is combined with 10-fold cross validation to obtain an estimate of predictive accuracy or generalizability of epistasis models. Traditionally, we have used permutation testing to statistically evaluate the significance of models obtained through MDR. The advantage of permutation testing is that it controls for false-positives due to multiple testing. The disadvantage is that permutation testing is computationally expensive. This is an important issue that arises in the context of detecting epistasis on a genome-wide scale. The goal of the present study was to develop and evaluate several alternatives to large-scale permutation testing for assessing the statistical significance of MDR models. Using data simulated from 70 different epistasis models, we compared the power and type I error rate of MDR using a 1000-fold permutation test with hypothesis testing using an extreme value distribution (EVD). We find that this new hypothesis testing method provides a reasonable alternative to the computationally expensive 1000-fold permutation test and is 50 times faster. We then demonstrate this new method by applying it to a genetic epidemiology study of bladder cancer susceptibility that was previously analyzed using MDR and assessed using a 1000-fold permutation test. PMID:18671250
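The cost trade-off can be made concrete with a toy statistic. The sketch below runs a full label-permutation test for an accuracy-style statistic and, alongside it, fits a Gumbel (extreme value) distribution to a much smaller null sample and reads the p-value from its tail. The data and statistic are toy stand-ins, not the MDR pipeline, and fitting the EVD to raw null statistics (rather than maxima across models, as in the paper) is a deliberate simplification.

```python
import numpy as np
from scipy.stats import gumbel_r

def accuracy(pred, y):
    """Toy test statistic: agreement between a predictor and labels."""
    return float(np.mean(pred == y))

def significance(pred, y, n_perm=1000, seed=0):
    """Permutation p-value plus a cheap EVD-based alternative."""
    rng = np.random.default_rng(seed)
    obs = accuracy(pred, y)
    null = np.array([accuracy(pred, rng.permutation(y))
                     for _ in range(n_perm)])
    # Empirical permutation p-value (the expensive baseline).
    p_perm = (1 + int(np.sum(null >= obs))) / (1 + n_perm)
    # EVD alternative: fit a Gumbel to a small null sample and use its
    # upper tail, avoiding most of the permutations.
    loc, scale = gumbel_r.fit(null[:50])
    p_evd = float(gumbel_r.sf(obs, loc, scale))
    return p_perm, p_evd
```

For a strongly associated predictor, both routes declare significance, but the EVD route needs only a fraction of the permutations, which is the speedup the abstract reports.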
Risk factors of chronic periodontitis on healing response: a multilevel modelling analysis.
Song, J; Zhao, H; Pan, C; Li, C; Liu, J; Pan, Y
2017-09-15
Chronic periodontitis is a multifactorial polygenetic disease with an increasing number of associated factors that have been identified over recent decades. Longitudinal epidemiologic studies have demonstrated that these risk factors are related to the progression of the disease. A traditional multivariate regression model was used to find risk factors associated with chronic periodontitis. However, standard statistical procedures require that observations be independent. Multilevel modelling (MLM) data analysis has been widely used in recent years; it thoroughly accounts for the hierarchical structure of the data, decomposes the error terms into different levels, and provides a new analytic method and framework for solving this problem. The purpose of our study is to investigate the relationship between clinical periodontal indices and the risk factors of chronic periodontitis through MLM analysis and to identify high-risk individuals in the clinical setting. Fifty-four patients with moderate to severe periodontitis were included. They were treated by means of non-surgical periodontal therapy and then made follow-up visits regularly at 3, 6, and 12 months after therapy. Each patient answered a questionnaire survey and underwent measurement of clinical periodontal parameters. Compared with baseline, probing depth (PD) and clinical attachment loss (CAL) improved significantly after non-surgical periodontal therapy with regular follow-up visits at 3, 6, and 12 months. The null model and variance component models with no independent variables included were initially obtained to investigate the variance of the PD and CAL reductions across all three levels, and they showed a statistically significant difference (P < 0.001), thus establishing that MLM data analysis was necessary. Site-level had effects on PD and CAL reduction; those variables could explain 77-78% of PD reduction and 70-80% of CAL reduction at 3, 6, and 12 months. 
Other levels only explain 20-30% of PD and CAL reductions. Site-level had the greatest effect on PD and CAL reduction. Non-surgical periodontal therapy with regular follow-up visits had a remarkable curative effect. All three levels had a substantial influence on the reduction of PD and CAL. Site-level had the largest effect on PD and CAL reductions.
2011-01-01
Background Insecticide-treated mosquito nets (ITNs) and indoor-residual spraying have been scaled-up across sub-Saharan Africa as part of international efforts to control malaria. These interventions have the potential to significantly impact child survival. The Lives Saved Tool (LiST) was developed to provide national and regional estimates of cause-specific mortality based on the extent of intervention coverage scale-up. We compared the percent reduction in all-cause child mortality estimated by LiST against measured reductions in all-cause child mortality from studies assessing the impact of vector control interventions in Africa. Methods We performed a literature search for appropriate studies and compared reductions in all-cause child mortality estimated by LiST to 4 studies that estimated changes in all-cause child mortality following the scale-up of vector control interventions. The following key parameters measured by each study were applied to available country projections: baseline all-cause child mortality rate, proportion of mortality due to malaria, and population coverage of vector control interventions at baseline and follow-up years. Results The percent reduction in all-cause child mortality estimated by the LiST model fell within the confidence intervals around the measured mortality reductions for all 4 studies. Two of the LiST estimates overestimated the mortality reductions by 6.1 and 4.2 percentage points (33% and 35% relative to the measured estimates), while two underestimated the mortality reductions by 4.7 and 6.2 percentage points (22% and 25% relative to the measured estimates). Conclusions The LiST model did not systematically under- or overestimate the impact of ITNs on all-cause child mortality. These results show the LiST model to perform reasonably well at estimating the effect of vector control scale-up on child mortality when compared against measured data from studies across a range of malaria transmission settings. 
The LiST model appears to be a useful tool in estimating the potential mortality reduction achieved from scaling-up malaria control interventions. PMID:21501453
Comparison of four different reduction methods for anterior dislocation of the shoulder.
Guler, Olcay; Ekinci, Safak; Akyildiz, Faruk; Tirmik, Uzeyir; Cakmak, Selami; Ugras, Akin; Piskin, Ahmet; Mahirogullari, Mahir
2015-05-28
Shoulder dislocations account for almost 50% of all major joint dislocations and are mainly anterior. The aim of this comparative retrospective study was to evaluate different maneuvers, performed without anesthesia, for reducing a dislocated shoulder. Patients were treated with different reduction maneuvers, including various forms of traction and external rotation, in the emergency departments of four training hospitals between 2009 and 2012. Each of the four hospitals had a different treatment protocol, applying one of four maneuvers: the Spaso, Chair, Kocher, and Matsen methods. Thirty-nine patients were treated by the Spaso method, 47 by the Chair reduction method, 40 by the Kocher method, and 27 patients by Matsen's traction-countertraction method. All patients' demographic data were recorded. Dislocation number, reduction time, time interval between dislocation and reduction, and associated complications in the pre- and post-reduction period were recorded prospectively. No anesthetic method was used for the reduction. All of the methods used included traction and some external rotation. The Chair method had the shortest reduction time. All surgeons involved in the study agreed that the Kocher and Matsen methods needed more force for the reduction: patients could contract their muscles because of the pain with these two methods. The Spaso method includes flexion of the shoulder and blocks muscle contraction somewhat. The Chair method was found to be the easiest because the patients could not contract their muscles while sitting on a chair with the affected arm at their side. We suggest that the Chair method is an effective and fast reduction maneuver that may be an alternative for the treatment of anterior shoulder dislocations. Further prospective studies with larger sample sizes are needed to compare the safety of the different reduction techniques.
Stereo Sound Field Controller Design Using Partial Model Matching on the Frequency Domain
NASA Astrophysics Data System (ADS)
Kumon, Makoto; Miike, Katsuhiro; Eguchi, Kazuki; Mizumoto, Ikuro; Iwai, Zenta
The objective of sound field control is to make the acoustic characteristics of a listening room close to those of the desired system. Conventional methods apply feedforward controllers, such as digital filters, to achieve this objective. However, feedback controllers are also necessary in order to attenuate noise or to compensate for the uncertainty of the acoustic characteristics of the listening room. Since acoustic characteristics are well modeled in the frequency domain, it is efficient to design controllers with respect to frequency responses, but it is difficult to design a multi-input, multi-output (MIMO) control system over a wide frequency range. In the present study, a partial model matching method in the frequency domain was adopted because this method requires only sampled data, rather than complex mathematical models of the plant, in order to design controllers for MIMO systems. The partial model matching method was applied to design two-degree-of-freedom controllers for acoustic equalization and noise reduction. Experiments demonstrated the effectiveness of the proposed method.
Lobdell, Danelle T.; Isakov, Vlad; Baxter, Lisa; Touma, Jawad S.; Smuts, Mary Beth; Özkaynak, Halûk
2011-01-01
Background New approaches to link health surveillance data with environmental and population exposure information are needed to examine the health benefits of risk management decisions. Objective We examined the feasibility of conducting a local assessment of the public health impacts of cumulative air pollution reduction activities from federal, state, local, and voluntary actions in the City of New Haven, Connecticut (USA). Methods Using a hybrid modeling approach that combines regional and local-scale air quality data, we estimated ambient concentrations for multiple air pollutants [e.g., PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter), NOx (nitrogen oxides)] for baseline year 2001 and projected emissions for 2010, 2020, and 2030. We assessed the feasibility of detecting health improvements in relation to reductions in air pollution for 26 different pollutant–health outcome linkages using both sample size and exploratory epidemiological simulations to further inform decision-making needs. Results Model projections suggested decreases (~ 10–60%) in pollutant concentrations, mainly attributable to decreases in pollutants from local sources between 2001 and 2010. Models indicated considerable spatial variability in the concentrations of most pollutants. Sample size analyses supported the feasibility of identifying linkages between reductions in NOx and improvements in all-cause mortality, prevalence of asthma in children and adults, and cardiovascular and respiratory hospitalizations. Conclusion Substantial reductions in air pollution (e.g., ~ 60% for NOx) are needed to detect health impacts of environmental actions using traditional epidemiological study designs in small communities like New Haven. In contrast, exploratory epidemiological simulations suggest that it may be possible to demonstrate the health impacts of PM reductions by predicting intraurban pollution gradients within New Haven using coupled models. PMID:21335318
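The sample-size feasibility analysis described above can be illustrated with a standard two-proportion power calculation. A minimal sketch, assuming hypothetical prevalence figures rather than the New Haven estimates:

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size to detect a drop from prevalence p1 to p2
    (two-sided two-proportion z-test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    term = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
            + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return math.ceil((term / (p1 - p2)) ** 2)

# Hypothetical: detecting a 10% -> 8% drop in asthma prevalence
n = n_per_group(0.10, 0.08)
```

The thousands of subjects required per group illustrate why only large pollutant reductions are detectable with traditional designs in small communities.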
NASA Astrophysics Data System (ADS)
Liu, Bin; Gan, Hong
2018-06-01
Rapid social and economic development results in increased demand for water resources. This can lead to the unsustainable development and exploitation of water resources, which in turn causes significant environmental problems. Conventional water resource management approaches, such as supply and demand management strategies, frequently fail to restore regional water balance. This paper introduces the concept of water consumption balance, the balance between actual evapotranspiration (ET) and target ET, and establishes a framework to realize regional water balance. The framework consists of three stages: (1) determination of target ET and actual ET; (2) quantification of the water-saving requirements for the region; and (3) reduction of actual ET by implementing various water-saving management strategies. Using this framework, a case study was conducted for Guantao County, China. The SWAT model was utilized to aid in the selection of the best water-saving management strategy by comparing the ET of different irrigation methods and crop pattern adjustments. Simulation results revealed that determination of SWAT model parameters using remote sensing ET is feasible and that the model is a valuable tool for ET management. Irrigation was found to have a greater influence on the ET of winter wheat than on that of maize, indicating that reduction in winter wheat cultivation is the most effective way to reduce regional ET. However, the effect of water-saving irrigation methods on the reduction of ET was not obvious, indicating that it would be difficult to achieve regional ET reduction using water-saving irrigation methods alone. Furthermore, selecting the best water-saving management strategy by relying solely on the amount of reduced ET was insufficient, because it ignored the impact of water conservation measures on the livelihood of the agricultural community. Incorporating these considerations with our findings, we recommend changing the current irrigation method to sprinkler irrigation and replacing 20% of the winter wheat-maize cultivated area with cotton as the best strategy to achieve water balance in the study area.
NASA Technical Reports Server (NTRS)
Chevallier, J. P.; Vaucheret, X.
1986-01-01
A synthesis of current trends in the reduction and computation of wall effects is presented. The points discussed include: (1) for two-dimensional transonic tests, various techniques for controlling boundary conditions are used, with adaptive walls offering high precision in determining reference conditions and residual corrections; a reduction in the boundary-layer effects of the lateral walls is obtained at T2; (2) for three-dimensional tests, methods for the reduction of wall effects are still seldom applied, due to a lesser need and to their complexity; (3) the supports holding the model or the probes have to be taken into account in the estimation of perturbation effects.
Synapse fits neuron: joint reduction by model inversion.
van der Scheer, H T; Doelman, A
2017-08-01
In this paper, we introduce a novel simplification method for dealing with physical systems that can be thought to consist of two subsystems connected in series, such as a neuron and a synapse. The aim of our method is to help find a simple, yet convincing model of the full cascade-connected system, assuming that a satisfactory model of one of the subsystems, e.g., the neuron, is already given. Our method allows us to validate a candidate model of the full cascade against data at a finer scale. In our main example, we apply our method to part of the squid's giant fiber system. We first postulate a simple, hypothetical model of cell-to-cell signaling based on the squid's escape response. Then, given a FitzHugh-type neuron model, we derive the verifiable model of the squid giant synapse that this hypothesis implies. We show that the derived synapse model accurately reproduces synaptic recordings, hence lending support to the postulated, simple model of cell-to-cell signaling, which thus, in turn, can be used as a basic building block for network models.
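The FitzHugh-type dynamics mentioned above can be illustrated with the classic FitzHugh-Nagumo equations. The sketch below uses the common textbook parameter values, not those fitted in the paper:

```python
def simulate_fhn(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=30000):
    """Forward-Euler integration of the FitzHugh-Nagumo equations."""
    v, w = -1.0, -0.5
    trace = []
    for _ in range(steps):
        dv = v - v ** 3 / 3 - w + I       # fast voltage-like variable
        dw = eps * (v + a - b * w)        # slow recovery variable
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return trace

vs = simulate_fhn()
amplitude = max(vs) - min(vs)   # sustained relaxation oscillations
```

For this forcing current the model sits on a limit cycle, so the voltage variable spikes repeatedly rather than settling to rest.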
When Could a Stigma Program to Address Mental Illness in the Workplace Break Even?
Dewa, Carolyn S; Hoch, Jeffrey S
2014-01-01
Objective: To explore basic requirements for a stigma program to produce sufficient savings to pay for itself (that is, break even). Methods: A simple economic model was developed to compare reductions in total short-term disability (SDIS) cost relative to a stigma program’s costs. A 2-way sensitivity analysis is used to illustrate conditions under which this break-even scenario occurs. Results: Using estimates from the literature for the SDIS costs, this analysis shows that a stigma program can provide value added even if there is no reduction in the length of an SDIS leave. To break even, a stigma program with no reduction in the length of an SDIS leave would need to prevent at least 2.5 SDIS claims in an organization of 1000 workers. Similarly, a stigma program can break even with no reduction in the number of SDIS claims if it is able to reduce SDIS episodes by at least 7 days in an organization of 1000 employees. Conclusions: Modelling results, such as those presented in our paper, provide information to help occupational health payers become prudent buyers in the mental health marketplace. While in most cases the required reductions seem modest, the real test of both the model and the program occurs once a stigma program is piloted and evaluated in a real-world setting. PMID:25565701
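The break-even logic can be sketched in a few lines. The dollar figures below are hypothetical placeholders, not the literature estimates used in the paper:

```python
def claims_prevented_to_break_even(program_cost, cost_per_claim):
    """SDIS claims a stigma program must prevent to pay for itself."""
    return program_cost / cost_per_claim

def days_saved_per_claim_to_break_even(program_cost, n_claims, cost_per_day):
    """Days each SDIS episode must shorten, claim count unchanged."""
    return program_cost / (n_claims * cost_per_day)

# Hypothetical figures for an organization of 1000 workers
claims_needed = claims_prevented_to_break_even(25_000, 10_000)
days_needed = days_saved_per_claim_to_break_even(25_000, 18, 200)
```

Varying the two cost inputs over plausible ranges reproduces the kind of 2-way sensitivity analysis the abstract describes.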
Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Adarsh; Nelson, Austin; Prabakar, Kumaraguru
As advanced grid-support functions (AGFs) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time systems and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a Monte Carlo method that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced-order models in OpenDSS, and subsequently implemented in the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model, with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the chosen feeders could be analyzed.
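The paper's Monte Carlo algorithm is considerably more sophisticated; as a hedged illustration of what "reducing" a radial feeder means, the sketch below lumps series impedances and loads into a single equivalent that preserves the end-to-end voltage drop under a uniform-voltage assumption (all values hypothetical):

```python
def reduce_radial_feeder(sections):
    """sections: (series_impedance_ohm, load_kw) listed from the substation
    outward on a radial feeder. Returns (Z_equivalent, total_load) such that
    the total load flowing through Z_equivalent reproduces the end-to-end
    voltage drop of the detailed model."""
    total_load = sum(load for _, load in sections)
    downstream = total_load
    z_eq = 0.0
    for z, load in sections:
        z_eq += z * downstream / total_load   # weight each Z by the load it carries
        downstream -= load
    return z_eq, total_load

z_eq, total = reduce_radial_feeder([(0.1, 100.0), (0.2, 50.0), (0.3, 50.0)])
```

Real reduction algorithms must also preserve fault levels and the nodes where PV is connected, which is where the Monte Carlo search comes in.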
Visualizing phylogenetic tree landscapes.
Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A
2017-02-02
Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods used to display at once, in two or three dimensions, the relationships among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in two and three dimensions the relationships among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in two and three dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperformed the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes.
We demonstrate for all three mtDNA alignments that 3D projections significantly increase the fit between the tree-to-tree distances and can facilitate the interpretation of the relationship among phylogenetic trees. We demonstrate that the choice of dimensionality reduction method can significantly influence the spatial relationship among a large set of competing phylogenetic trees. We highlight the importance of selecting a dimensionality reduction method to visualize large multi-locus phylogenetic landscapes and demonstrate that 3D projections of mitochondrial tree landscapes better capture the relationship among the trees being compared.
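A minimal sketch of stochastic-gradient stress minimization, the ingredient behind the CCA + SGD approach: it embeds a distance matrix into 2-D by repeatedly nudging random point pairs toward their target distance. The toy matrix below is a unit square, not tree-to-tree distances:

```python
import math
import random

def sgd_embed(D, dim=2, iters=6000, lr=0.05, seed=7):
    """Embed a symmetric distance matrix D into `dim` dimensions by SGD."""
    n = len(D)
    rng = random.Random(seed)
    X = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        diff = [X[i][k] - X[j][k] for k in range(dim)]
        d = math.sqrt(sum(c * c for c in diff)) or 1e-9
        g = lr * (d - D[i][j]) / d        # move the pair toward its target distance
        for k in range(dim):
            X[i][k] -= g * diff[k]
            X[j][k] += g * diff[k]
    return X

# Four "trees" whose pairwise distances form a unit square (exactly embeddable in 2-D)
s2 = math.sqrt(2.0)
D = [[0, 1, s2, 1], [1, 0, 1, s2], [s2, 1, 0, 1], [1, s2, 1, 0]]
X = sgd_embed(D)
err = max(abs(math.dist(X[i], X[j]) - D[i][j])
          for i in range(4) for j in range(4) if i != j)
```

Because the target configuration is exactly embeddable, the residual stress after convergence is essentially zero; for real tree landscapes the residual measures how much structure the projection loses.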
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony Leonard; Phillippe Chatelain; Michael Rebel
Heavy ground vehicles, especially those involved in long-haul freight transportation, consume a significant part of our nation's energy supply. It is therefore of utmost importance to improve their efficiency, both to reduce emissions and to decrease reliance on imported oil. At highway speeds, more than half of the power consumed by a typical semi truck goes into overcoming aerodynamic drag, a fraction which increases with speed and crosswind. Thanks to better tools and increased awareness, recent years have seen substantial aerodynamic improvements by the truck industry, such as tractor/trailer height matching, radiator area reduction, and swept fairings. However, there remains substantial room for improvement as understanding of turbulent fluid dynamics grows. The group's research effort focused on vortex particle methods, a novel approach for computational fluid dynamics (CFD). Where common CFD methods solve or model the Navier-Stokes equations on a grid which stretches from the truck surface outward, vortex particle methods solve the vorticity equation on a Lagrangian basis of smooth particles and do not require a grid. They worked to advance the state of the art in vortex particle methods, improving their ability to handle the complicated, high-Reynolds-number flow around heavy vehicles. Specific challenges that they addressed include strategies to accurately capture vorticity generation and the resultant forces at the truck wall, handling the aerodynamics of spinning bodies such as tires, application of the method to the GTS model, computation time reduction through improved integration methods, a closest-point transform for particle methods in complex geometries, and work on large eddy simulation (LES) turbulence modeling.
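The core operation of a vortex particle method is evaluating the velocity induced by a set of vorticity-carrying particles. A minimal 2-D sketch using a smoothed Biot-Savart kernel (the actual solver is 3-D and far more elaborate):

```python
import math

def induced_velocity(particles, x, y, delta=0.0):
    """Velocity at (x, y) induced by 2-D point vortices.
    particles: iterable of (px, py, circulation).
    delta > 0 smooths the kernel (the classic vortex-blob regularization)."""
    u = v = 0.0
    for px, py, gamma in particles:
        dx, dy = x - px, y - py
        r2 = dx * dx + dy * dy + delta * delta
        u += -gamma * dy / (2.0 * math.pi * r2)
        v += gamma * dx / (2.0 * math.pi * r2)
    return u, v

# A single vortex of circulation 2*pi induces speed 1/r at radius r
u, v = induced_velocity([(0.0, 0.0, 2.0 * math.pi)], 1.0, 0.0)
```

In a full simulation this evaluation is performed for every particle at every time step, which is why fast summation and improved integration methods matter so much.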
Ulissi, Zachary W.; Tang, Michael T.; Xiao, Jianping; ...
2017-07-27
Bimetallic catalysts are promising for the most difficult thermal and electrochemical reactions, but modeling the many diverse active sites on polycrystalline samples is an open challenge. Here, we present a general framework for addressing this complexity in a systematic and predictive fashion. Active sites for every stable low-index facet of a bimetallic crystal are enumerated and cataloged, yielding hundreds of possible active sites. The activity of these sites is explored in parallel using a neural-network-based surrogate model to share information between the many density functional theory (DFT) relaxations, resulting in activity estimates with an order of magnitude fewer explicit DFT calculations. Sites with interesting activity were found and provide targets for follow-up calculations. This process was applied to the electrochemical reduction of CO2 on nickel gallium bimetallics and indicated that most facets had similar activity to Ni surfaces, but a few exposed Ni sites with a very favorable on-top CO configuration. This motif emerged naturally from the predictive modeling and represents a class of intermetallic CO2 reduction catalysts. These sites rationalize recent experimental reports of nickel gallium activity and why previous materials screens missed this exciting material. Most importantly these methods suggest that bimetallic catalysts will be discovered by studying facet reactivity and diversity of active sites more systematically.
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
2016-01-01
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method—named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)—for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results. PMID:26778864
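QUADRO itself solves a regularized convex program; but the unregularized core task, maximizing a generalized Rayleigh quotient w'Aw / w'Bw, reduces to a leading-eigenvector problem. A minimal sketch using power iteration on 2x2 toy matrices (not the paper's estimator):

```python
import math

def rayleigh_maximize(A, B, iters=200):
    """Maximize w'Aw / w'Bw via power iteration on inv(B) @ A (2x2 case)."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    Binv = [[B[1][1] / det, -B[0][1] / det],
            [-B[1][0] / det, B[0][0] / det]]
    M = [[sum(Binv[i][k] * A[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    w = [1.0, 1.0]
    for _ in range(iters):                  # converges to the leading eigenvector
        w = [M[0][0] * w[0] + M[0][1] * w[1],
             M[1][0] * w[0] + M[1][1] * w[1]]
        nrm = math.hypot(w[0], w[1])
        w = [w[0] / nrm, w[1] / nrm]
    quad = lambda Q: sum(w[i] * Q[i][j] * w[j] for i in range(2) for j in range(2))
    return w, quad(A) / quad(B)

w, r = rayleigh_maximize([[3.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

With B the identity this is ordinary power iteration; the sparse, robust, fourth-moment machinery of QUADRO is what makes the high-dimensional version nontrivial.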
NASA Astrophysics Data System (ADS)
Hu-ping, YANY; Chong-wei, ZHONG; Fei-fei, YAN; Cheng-yi, TANG
2018-03-01
In recent years, the energy crisis and the greenhouse effect have caused wide public concern; if these issues are not resolved quickly, they will disrupt people's lives. In response, many countries around the world have implemented policies to reduce energy consumption and greenhouse gas emissions. In our country, the electric power industry has contributed greatly to daily life and the development of industry, but it is also an industry of high consumption and high emissions. To realize the sustainable development of society, energy conservation and emission reduction in the power industry must be an important part of reaching this goal. In this context, power generation trading has become a hot topic in energy conservation and emission reduction. By shifting generation among units with different efficiencies and coal consumption rates, it is possible to reduce coal consumption, network losses, and greenhouse gas emissions while increasing social benefit. This article puts forward an optimal energy model on the basis of guaranteeing safety and environmental protection. The IEEE 30-, 39-, 57-, and 118-bus systems are used as examples, with control groups set up to prove the practicality of the presented model. The model is solved using an interior-point method.
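The intuition behind shifting generation toward units with lower coal consumption rates can be illustrated with a simple merit-order dispatch. The paper solves a full interior-point optimal power flow with network constraints; this sketch ignores the network, and the unit data are hypothetical:

```python
def merit_order_dispatch(units, demand_mw):
    """units: (name, capacity_MW, coal_g_per_kwh).
    Loads the most coal-efficient units first; returns {name: output_MW}."""
    out = {}
    remaining = demand_mw
    for name, cap, rate in sorted(units, key=lambda u: u[2]):
        out[name] = min(cap, remaining)
        remaining -= out[name]
    return out

units = [("A", 200.0, 320.0), ("B", 150.0, 290.0), ("C", 100.0, 350.0)]
plan = merit_order_dispatch(units, 300.0)
# Hourly coal burn in tonnes: MW * 1000 kWh/h * g/kWh / 1e6 g-per-tonne
coal_tonnes = sum(plan[n] * 1000.0 * r / 1e6 for n, _, r in units)
```

Any dispatch that moves energy from unit C to unit B saves 60 g of coal per kWh, which is exactly the trade that generation trading monetizes.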
Precision GPS ephemerides and baselines
NASA Technical Reports Server (NTRS)
1991-01-01
Based on the research in the area of precise ephemerides for GPS satellites, the following observations can be made pertaining to the status of, and future work needed regarding, orbit accuracy. Several aspects need to be addressed in discussing the determination of precise orbits, such as force models, kinematic models, measurement models, and data reduction/estimation methods. Although each of these aspects was studied at CSR, only points pertaining to force modeling are addressed here.
Design of a practical model-observer-based image quality assessment method for CT imaging systems
NASA Astrophysics Data System (ADS)
Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana
2014-03-01
The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluation of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance evaluations as well as further system optimizations. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance. However, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) are estimated by a shuffle method. Both simulation and real-data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
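The CHO figure of merit is the Hotelling SNR computed from channelized data. A minimal two-channel sketch with made-up numbers (not CT measurements):

```python
import math
from statistics import NormalDist

def hotelling_snr(dmean, S):
    """Hotelling observer SNR for a channel-mean difference `dmean` and
    2x2 channel covariance S: snr^2 = dmean' inv(S) dmean."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    t = [Sinv[0][0] * dmean[0] + Sinv[0][1] * dmean[1],   # Hotelling template
         Sinv[1][0] * dmean[0] + Sinv[1][1] * dmean[1]]
    return math.sqrt(dmean[0] * t[0] + dmean[1] * t[1])

snr = hotelling_snr([1.0, 0.5], [[1.0, 0.3], [0.3, 1.0]])
auc = NormalDist().cdf(snr / math.sqrt(2.0))   # Gaussian, equal-covariance case
```

In practice the covariance matrix must be estimated from repeated scans, which is exactly where the LOOL estimator reduces the data requirement.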
Mays, V M
1995-01-01
This exploratory study examined the use of two components (small and large groups) of a community-based intervention, the Focused Support Group (FSG) model, to alleviate employment-related stressors in Black women. Participants were assigned to small groups based on occupational status. Groups met for five weekly 3-hr sessions in didactic or small- and large-group formats. Two evaluations following the didactic session and the small and large group sessions elicited information on satisfaction with each of the formats, self-reported change in stress, awareness of interpersonal and sociopolitical issues affecting Black women in the labor force, assessing support networks, and usefulness of specific discussion topics to stress reduction. Results indicated the usefulness of the small- and large-group formats in reduction of self-reported stress and increases in personal and professional sources of support. Discussions on race and sex discrimination in the workplace were effective in overall stress reduction. The study highlights labor force participation as a potential source of stress for Black women, and supports the development of culture- and gender-appropriate community interventions as viable and cost-effective methods for stress reduction.
NASA Astrophysics Data System (ADS)
Erfanian, A.; Fomenko, L.; Wang, G.
2016-12-01
The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates. It has been a primary reference for conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, where model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This gives the new method a theoretical advantage in addition to its reduced computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
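The cost argument can be made concrete with a scalar toy model. The quadratic "RCM" below is purely illustrative (not RegCM), and the forcing values are hypothetical:

```python
def toy_rcm(forcing):
    """Stand-in for an RCM: a nonlinear response to its boundary forcing."""
    return 2.0 * forcing + 0.1 * forcing ** 2

true_forcing = 10.0
gcm_forcings = [9.0, 10.5, 11.0, 9.5]   # biased GCM estimates whose mean is 10.0

erf = toy_rcm(sum(gcm_forcings) / len(gcm_forcings))              # ONE run on averaged IBCs
mme = sum(toy_rcm(f) for f in gcm_forcings) / len(gcm_forcings)   # FOUR separate runs
truth = toy_rcm(true_forcing)
```

When the GCM biases cancel in the averaged forcing, the single ERF run recovers the truth exactly here, while the four-run MME average carries a nonlinearity error; real RCMs are of course far less forgiving than this toy.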
Finite element analysis using NASTRAN applied to helicopter transmission vibration/noise reduction
NASA Technical Reports Server (NTRS)
Howells, R. W.; Sciarra, J. J.
1975-01-01
A finite element NASTRAN model of the complete forward rotor transmission housing for the Boeing Vertol CH-47 helicopter was developed and applied to reduce transmission vibration/noise at its source. In addition to a description of the model, a technique for vibration/noise prediction and reduction is outlined. Also included are the dynamic response as predicted by NASTRAN, test data, the use of strain energy methods to optimize the housing for minimum vibration/noise, and determination of design modifications which will be manufactured and tested. The techniques presented are not restricted to helicopters but are applicable to any power transmission system. The transmission housing model developed can be used further to evaluate static and dynamic stresses, thermal distortions, deflections and load paths, fail-safety/vulnerability, and composite materials.
VAMPnets for deep learning of molecular kinetics.
Mardt, Andreas; Pasquali, Luca; Wu, Hao; Noé, Frank
2018-01-02
There is an increasing demand for computing the relevant structures, equilibria, and long-timescale kinetics of biomolecular processes, such as protein-drug binding, from high-throughput molecular dynamics simulations. Current methods employ transformation of simulated coordinates into structural features, dimension reduction, clustering the dimension-reduced data, and estimation of a Markov state model or related model of the interconversion rates between molecular structures. This handcrafted approach demands a substantial amount of modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the variational approach for Markov processes (VAMP) to develop a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states, thus combining the whole data processing pipeline in a single end-to-end framework. Our method performs as well as or better than state-of-the-art Markov modeling methods and provides easily interpretable few-state kinetic models.
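The final step of the classical pipeline, estimating a Markov state model from a discretized trajectory, is compact enough to sketch (VAMPnets learn this mapping end-to-end instead; the two-state trajectory below is a toy):

```python
def estimate_msm(traj, n_states, lag=1):
    """Row-normalized transition matrix from a discrete state trajectory."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(traj[:-lag], traj[lag:]):
        counts[a][b] += 1          # count transitions at the chosen lag time
    T = []
    for row in counts:
        s = sum(row)
        T.append([c / s if s else 0.0 for c in row])
    return T

traj = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0]
T = estimate_msm(traj, 2)
```

Every upstream choice (features, dimension reduction, clustering) changes `traj` and hence `T`, which is precisely the error-compounding problem VAMPnets are designed to remove.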
NASA Astrophysics Data System (ADS)
Exbrayat, Jean-François; Bloom, A. Anthony; Falloon, Pete; Ito, Akihiko; Smallman, T. Luke; Williams, Mathew
2018-02-01
Multi-model averaging techniques provide opportunities to extract additional information from large ensembles of simulations. In particular, present-day model skill can be used to evaluate models' potential performance in future climate simulations. Multi-model averaging methods have been used extensively in climate and hydrological sciences, but they have not been used to constrain projected plant productivity responses to climate change, which is a major uncertainty in Earth system modelling. Here, we use three global observationally oriented estimates of current net primary productivity (NPP) to perform a reliability ensemble averaging (REA) analysis using 30 global simulations of the 21st century change in NPP based on the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) business-as-usual emissions scenario. We find that the three REA methods support an increase in global NPP by the end of the 21st century (2095-2099) compared to 2001-2005, which is 2-3% stronger than the ensemble ISIMIP mean value of 24.2 Pg C yr-1. Using REA also leads to a 45-68% reduction in the global uncertainty of the 21st century NPP projection, which strengthens confidence in the resilience of the CO2 fertilization effect to climate change. This reduction in uncertainty is especially clear for boreal ecosystems, although it may be an artefact of the lack of representation of nutrient limitations on NPP in most models. Conversely, the large uncertainty that remains on the sign of the NPP response in semi-arid regions points to the need for better observations and model development in these regions.
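One common flavor of reliability ensemble averaging weights each model by its present-day skill. A minimal sketch with hypothetical NPP numbers (full REA criteria also include model convergence, which is omitted here):

```python
def rea_average(hist, future, obs, eps=1e-9):
    """Weight each model's future value by its inverse present-day bias."""
    weights = [1.0 / (abs(h - obs) + eps) for h in hist]
    return sum(w * f for w, f in zip(weights, future)) / sum(weights)

obs_npp = 54.0                 # hypothetical observed present-day NPP (Pg C / yr)
hist = [50.0, 54.5, 60.0]      # three models' present-day NPP
future = [60.0, 66.0, 75.0]    # the same models' end-of-century NPP

rea = rea_average(hist, future, obs_npp)
plain_mean = sum(future) / len(future)
```

The weighted estimate is pulled toward the projection of the model that best matches present-day observations, which is how REA shrinks the ensemble spread.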
Cavity radiation model for solar central receivers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipps, F.W.
1981-01-01
The Energy Laboratory of the University of Houston has developed a computer simulation program called CREAM (Cavity Radiation Exchange Analysis Model) for application to the solar central receiver system. The zone generating capability of CREAM has been used in several solar re-powering studies. CREAM contains a geometric configuration factor generator based on Nusselt's method. A formulation of Nusselt's method provides support for the FORTRAN subroutine NUSSELT. Numerical results from NUSSELT are compared to analytic values and values from Sparrow's method. Sparrow's method is based on a double contour integral and its reduction to a single integral, which is approximated by Gaussian methods. Nusselt's method is adequate for the intended engineering applications, but Sparrow's method is found to be an order of magnitude more efficient in many situations.
NASA Astrophysics Data System (ADS)
Raghupathy, Arun; Ghia, Karman; Ghia, Urmila
2008-11-01
Compact Thermal Models (CTMs) to represent IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with fewer computational resources can be used effectively in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom or variables in such a computation. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary-condition-independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
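A minimal sketch of the POD step described above: the modes are the leading left singular vectors of a snapshot matrix, and projecting onto them gives the reduced representation. The decaying-sine snapshots below are invented stand-ins for transient 1D heat-equation solutions:

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD modes are the leading left singular vectors of the snapshot matrix."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

# Invented snapshot data: a decaying sine profile standing in for
# transient 1D heat-equation solutions (50 grid points, 20 time samples).
x = np.linspace(0.0, 1.0, 50)
times = np.linspace(0.0, 1.0, 20)
X = np.array([np.exp(-t) * np.sin(np.pi * x) for t in times]).T

Phi, s = pod_basis(X, r=1)
X_r = Phi @ (Phi.T @ X)   # projection onto the reduced basis
err = np.linalg.norm(X - X_r) / np.linalg.norm(X)
```

In a full reduced-order model the governing equations would then be Galerkin-projected onto `Phi`; here the snapshot field is separable, so a single mode already reconstructs it almost exactly.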
Guiding Conformation Space Search with an All-Atom Energy Potential
Brunette, TJ; Brock, Oliver
2009-01-01
The most significant impediment for protein structure prediction is the inadequacy of conformation space search. Conformation space is too large and the energy landscape too rugged for existing search methods to consistently find near-optimal minima. To alleviate this problem, we present model-based search, a novel conformation space search method. Model-based search uses highly accurate information obtained during search to build an approximate, partial model of the energy landscape. Model-based search aggregates information in the model as it progresses, and in turn uses this information to guide exploration towards regions most likely to contain a near-optimal minimum. We validate our method by predicting the structure of 32 proteins, ranging in length from 49 to 213 amino acids. Our results demonstrate that model-based search is more effective at finding low-energy conformations in high-dimensional conformation spaces than existing search methods. The reduction in energy translates into structure predictions of increased accuracy. PMID:18536015
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. 
The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error combined with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
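The dimension reduction step can be illustrated with a single-level Haar DWT, the simplest wavelet (the study compares several wavelets, and the rainfall series below is invented): estimating only the approximation coefficients halves the number of unknowns to infer from streamflow.

```python
import math

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse single-level Haar DWT."""
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return x

# Invented storm hyetograph (mm per time step).
rain = [0.0, 0.0, 4.0, 6.0, 2.0, 0.0, 0.0, 0.0]
a, d = haar_dwt(rain)
# Inverting with the details zeroed keeps only the approximation
# coefficients: half as many parameters for the MCMC inversion to estimate.
smooth = haar_idwt(a, [0.0] * len(a))
```

Deeper decomposition levels reduce the dimension further at the cost of temporal detail, which is the trade-off the abstract describes for lower-order decomposition structures.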
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
NASA Technical Reports Server (NTRS)
Martos, Borja; Kiszely, Paul; Foster, John V.
2011-01-01
As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeeds with defined confidence bounds. This method has been demonstrated in subscale flight tests and has shown small 2-sigma error bounds with a significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrates the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included as well as recommendations for piloting technique.
Bifurcation Analysis and Application for Impulsive Systems with Delayed Impulses
NASA Astrophysics Data System (ADS)
Church, Kevin E. M.; Liu, Xinzhi
In this article, we present a systematic approach to bifurcation analysis of impulsive systems with autonomous or periodic right-hand sides that may exhibit delayed impulse terms. Methods include Lyapunov-Schmidt reduction and center manifold reduction. Both methods are presented abstractly in the context of the stroboscopic map associated to a given impulsive system, and are illustrated by way of two in-depth examples: the analysis of a SIR model of disease transmission with seasonality and unevenly distributed moments of treatment, and a scalar logistic differential equation with a delayed census impulsive harvesting effort. It is proven that in some special cases, the logistic equation can exhibit a codimension two bifurcation at a 1:1 resonance point.
Application of empirical and dynamical closure methods to simple climate models
NASA Astrophysics Data System (ADS)
Padilla, Lauren Elizabeth
This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature make TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. 
In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were the dominant sources of error. Using a reduced order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss in accuracy. The MMF and a comparable alternative model performed equally well compared to direct numerical simulation.
Yang, Juan; Li, Lu-jin; Wang, Kun; He, Ying-chun; Sheng, Yu-cheng; Xu, Ling; Huang, Xiao-hui; Guo, Feng; Zheng, Qing-shan
2011-01-01
Aim: To evaluate race differences in the pharmacodynamics of rosuvastatin in Western and Asian hypercholesterolemia patients using a population pharmacodynamic (PPD) model generated and validated using published clinical efficacy trials. Methods: Published randomized trials with at least 4 weeks of rosuvastatin treatment in hypercholesterolemia patients were used for model building and validation. Population pharmacodynamic analyses were performed to describe the dose-response relationship with the mean values of LDL-C reduction (%) from dose-ranging trials using NONMEM software. Baseline LDL-C and race were analyzed as the potential covariates. Model robustness was evaluated using the bootstrap method and the data-splitting method, and Monte Carlo simulation was performed to assess the predictive performance of the PPD model with the mean effects from the one-dose trials. Results: Of the 36 eligible trials, 14 dose-ranging trials were used in model development and 22 one-dose trials were used for model prediction. The dose-response of rosuvastatin was successfully described by a simple Emax model with a fixed E0, which provided a common Emax and an approximate twofold difference in ED50 for Westerners and Asians. The PPD model was demonstrated to be stable and predictive. Conclusion: The race differences in the pharmacodynamics of rosuvastatin are consistent with those observed in the pharmacokinetics of the drug, confirming that there is no significant difference in the exposure-response relationship for LDL-C reduction between Westerners and Asians. The study suggests that for a new compound with a mechanism of action similar to that of rosuvastatin, its efficacy in Western populations plus its pharmacokinetics in bridging studies in Asian populations may be used to support a registration of the new compound in Asian countries. PMID:21151159
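The Emax form named above, E = E0 + Emax·dose/(ED50 + dose), is easy to sketch with a common Emax and a twofold ED50 difference between populations; the parameter values below are hypothetical, not the fitted study estimates:

```python
def emax_response(dose, e0, emax, ed50):
    """Simple Emax dose-response: E = E0 + Emax * dose / (ED50 + dose)."""
    return e0 + emax * dose / (ed50 + dose)

# Hypothetical parameters (not the study's fitted values): a common Emax
# for LDL-C percent change, with a twofold ED50 difference.
EMAX = -60.0          # maximal LDL-C change, %
ED50_WEST = 10.0      # mg
ED50_ASIAN = 5.0      # mg, twofold lower

west = emax_response(10.0, 0.0, EMAX, ED50_WEST)
asian = emax_response(10.0, 0.0, EMAX, ED50_ASIAN)
```

At any given dose, the lower ED50 yields a larger LDL-C reduction, which is how a twofold ED50 shift expresses the pharmacokinetic race difference while the maximal effect stays shared.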
Synthesis of Refractory Compounds with Gasless Combustion Reactions.
1983-09-01
either Al or Mg as the reducing agent. With Al as the reductant, the stoichiometric equation is 2MoO3 + B2O3 + 6Al → 2MoB + 3Al2O3. Using the methods ... calculated to be 267.3 kcal/mole. With Mg as the reductant, the stoichiometric equation for the ... reaction conditions also are assumed for each model. Conservation of energy and heat diffusion equations are applied to a characteristic control volume
Sufficient Dimension Reduction for Longitudinally Measured Predictors
Pfeiffer, Ruth M.; Forzani, Liliana; Bura, Efstathia
2013-01-01
We propose a method to combine several predictors (markers) that are measured repeatedly over time into a composite marker score without assuming a model and only requiring a mild condition on the predictor distribution. Assuming that the first and second moments of the predictors can be decomposed into a time and a marker component via a Kronecker product structure, that accommodates the longitudinal nature of the predictors, we develop first moment sufficient dimension reduction techniques to replace the original markers with linear transformations that contain sufficient information for the regression of the predictors on the outcome. These linear combinations can then be combined into a score that has better predictive performance than the score built under a general model that ignores the longitudinal structure of the data. Our methods can be applied to either continuous or categorical outcome measures. In simulations we focus on binary outcomes and show that our method outperforms existing alternatives using the AUC, the area under the receiver-operator characteristics (ROC) curve, as a summary measure of the discriminatory ability of a single continuous diagnostic marker for binary disease outcomes. PMID:22161635
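The assumed Kronecker decomposition of the predictor moments can be shown directly; the time and marker covariance components below are hypothetical:

```python
import numpy as np

# Hypothetical time (2 visits) and marker (2 markers) covariance components;
# the covariance of the vectorized longitudinal measurements is their
# Kronecker product, which is the structural assumption in the abstract.
Sigma_T = np.array([[1.0, 0.5],
                    [0.5, 1.0]])
Sigma_M = np.array([[2.0, 0.3],
                    [0.3, 1.0]])
Sigma = np.kron(Sigma_T, Sigma_M)   # covariance of the 4-vector of measurements
```

The payoff of the structure is parsimony: a T×M longitudinal design needs only a T×T and an M×M component instead of a full (TM)×(TM) covariance, which is what makes the sufficient-reduction directions estimable.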
A model for solving the prescribed burn planning problem.
Rachmawati, Ramya; Ozlen, Melih; Reinke, Karin J; Hearne, John W
2015-01-01
The increasing frequency of destructive wildfires, with a consequent loss of life and property, has led to fire and land management agencies initiating extensive fuel management programs. This involves long-term planning of fuel reduction activities such as prescribed burning or mechanical clearing. In this paper, we propose a mixed integer programming (MIP) model that determines when and where fuel reduction activities should take place. The model takes into account multiple vegetation types in the landscape, their tolerance to frequency of fire events, and keeps track of the age of each vegetation class in each treatment unit. The objective is to minimise fuel load over the planning horizon. The complexity of scheduling fuel reduction activities has led to the introduction of sophisticated mathematical optimisation methods. While these approaches can provide optimum solutions, they can be computationally expensive, particularly for fuel management planning which extends across the landscape and spans long term planning horizons. This raises the question of how much better exact modelling approaches compare to simpler heuristic approaches in their solutions. To answer this question, the proposed model is run using an exact MIP (using a commercial MIP solver) and two heuristic approaches that decompose the problem into multiple single-period sub-problems. The Knapsack Problem (KP), which is the first heuristic approach, solves the single-period problems using an exact MIP approach. The second heuristic approach solves the single-period sub-problem using a greedy heuristic approach. The three methods are compared in terms of model tractability, computational time and objective values. The model was tested using randomised data from 711 treatment units in the Barwon-Otway district of Victoria, Australia. Solutions for the exact MIP could be obtained only for planning horizons of up to 15 years using a standard implementation of CPLEX.
Both heuristic approaches can solve significantly larger problems, involving 100-year or even longer planning horizons. Furthermore there are no substantial differences in the solutions produced by the three approaches. It is concluded that for practical purposes a heuristic method is to be preferred to the exact MIP approach.
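The second heuristic can be sketched as a single-period greedy selection; the unit data and budget below are invented, and the actual model also tracks vegetation age and fire-frequency tolerance:

```python
def greedy_burn_plan(units, budget):
    """One period of the decomposed problem: treat the units with the
    highest fuel load reduced per unit treatment cost until the budget
    runs out (a stand-in for the paper's greedy single-period heuristic)."""
    chosen, spent = [], 0.0
    for name, fuel, cost in sorted(units, key=lambda u: u[2] / u[1]):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

# Invented treatment units: (name, fuel load reduced, treatment cost).
units = [("A", 10.0, 2.0), ("B", 8.0, 4.0), ("C", 3.0, 3.0)]
chosen, spent = greedy_burn_plan(units, budget=6.0)
```

Solving one such sub-problem per period scales linearly in the horizon length, which is why the heuristics handle 100-year horizons that the monolithic MIP cannot.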
Hierarchical optimization for neutron scattering problems
Bao, Feng; Archibald, Rick; Bansal, Dipanshu; ...
2016-03-14
In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.
1999-01-01
Drag reduction tests were conducted on the LASRE/X-33 flight experiment. The LASRE experiment is a flight test of a roughly 20% scale model of an X-33 forebody with a single aerospike engine at the rear. The experiment apparatus is mounted on top of an SR-71 aircraft. This paper suggests a method for reducing base drag by adding surface roughness along the forebody. Calculations show a potential for base drag reductions of 8-14%. Flight results corroborate the base drag reduction, with actual reductions of 15% in the high-subsonic flight regime. An unexpected result of this experiment is that drag benefits were shown to persist well into the supersonic flight regime. Flight results show no overall net drag reduction. Applied surface roughness causes forebody pressures to rise and offset base drag reductions. Apparently the grit displaced streamlines outward, causing forebody compression. Results of the LASRE drag experiments are inconclusive and more work is needed. Clearly, however, the forebody grit application works as a viable drag reduction tool.
Effect of reduction degree on the adsorption properties of graphene sponge for dyes
NASA Astrophysics Data System (ADS)
Yu, Baowei; Chen, Lingyun; Wu, Ruihan; Liu, Xiaoyang; Li, Hongliang; Yang, Hua; Ming, Zhu; Bai, Yitong; Yang, Sheng-Tao
2017-04-01
Graphene sponge (GS) is usually prepared by reducing graphene oxide for the adsorption of pollutants. Different reduction methods lead to different reduction degrees, but the relationship between reduction degree and adsorption performance is still unexplored. In this study, we prepared three GS samples of different reduction degrees and compared their adsorption properties for different dyes. Taking methylene blue (MB) as the model dye, the adsorption isotherms, kinetics and influencing factors were investigated. The adsorptions of different dyes on three GS samples were also compared. Our results indicated that the adsorption of MB on GS was inhibited at high reduction degree by reducing the electrostatic interaction between oxygen containing groups and MB molecules. The adsorption kinetics slowed down at lower reduction degree. The pH showed a more significant influence for highly reduced GS, which should be assigned to the deprotonation of hydroxyl groups at high pH. Ionic strength had a negligible effect on the adsorption. Beyond that, the dye properties also regulated the adsorption. The implication for the design of better GS adsorbents based on reduction degree is discussed.
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.
1977-01-01
Methods for the reduction of remotely sensed data and its application in hydrologic land use assessment, surface water inventory, and soil property studies are presented. LANDSAT data is used to provide quantitative parameters and coefficients to construct watershed transfer functions for a hydrologic planning model aimed at estimating peak outflow from rainfall inputs.
NASA Astrophysics Data System (ADS)
Bergion, Viktor; Sokolova, Ekaterina; Åström, Johan; Lindhe, Andreas; Sörén, Kaisa; Rosén, Lars
2017-01-01
Waterborne outbreaks of gastrointestinal diseases are of great concern to drinking water producers and can give rise to substantial costs to the society. The World Health Organisation promotes an approach where the emphasis is on mitigating risks close to the contamination source. In order to handle microbial risks efficiently, there is a need for systematic risk management. In this paper we present a framework for microbial risk management of drinking water systems. The framework incorporates cost-benefit analysis as a decision support method. The hydrological Soil and Water Assessment Tool (SWAT) model, which was set up for the Stäket catchment area in Sweden, was used to simulate the effects of four different mitigation measures on microbial concentrations. The modelling results showed that the two mitigation measures that resulted in a significant (p < 0.05) reduction of Cryptosporidium spp. and Escherichia coli concentrations were a vegetative filter strip linked to cropland and improved treatment (by one Log10 unit) at the wastewater treatment plants. The mitigation measure with a vegetative filter strip linked to grazing areas resulted in a significant reduction of Cryptosporidium spp., but not of E. coli concentrations. The mitigation measure with enhancing the removal efficiency of all on-site wastewater treatment systems (total removal of 2 Log10 units) did not achieve any significant reduction of E. coli or Cryptosporidium spp. concentrations. The SWAT model was useful when characterising the effect of different mitigation measures on microbial concentrations. Hydrological modelling implemented within an appropriate risk management framework is a key decision support element as it identifies the most efficient alternative for microbial risk reduction.
Modelling Electrical Energy Consumption in Automotive Paint Shop
NASA Astrophysics Data System (ADS)
Oktaviandri, Muchamad; Safiee, Aidil Shafiza Bin
2018-03-01
Industry players are seeking ways to reduce operational cost to sustain themselves in a challenging economic climate. One key aspect is energy cost reduction. However, energy reduction strategies often struggle with obstructions that slow down their realization and implementation. Discrete event simulation is an approach actively discussed in current research to overcome such obstructions because of its flexibility and comprehensiveness. Meanwhile, in the automotive industry, the paint shop is considered the largest energy-consuming area, reported to consume about 50%-70% of overall plant consumption. Hence, this project aims at providing a tool to model and simulate energy consumption in the paint shop area through a case study at XYZ Company, one of the automotive companies located at Pekan, Pahang. The simulation model was developed using Tecnomatix Plant Simulation software version 13. The model was accurate to within ±5% for energy consumption and ±15% for maximum demand after validation against the real system. Two different energy-saving scenarios were tested. Scenario 1 was based on a production scheduling approach under low demand, which resulted in energy savings of up to 30%. Scenario 2 was based on substituting a high-power compressor with a lower-power compressor, which resulted in energy consumption savings of approximately 1.42% and a maximum demand reduction of about 1.27%. This approach would help managers and engineers justify the worthiness of investment in implementing the reduction strategies.
Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y
2012-01-01
A method to reduce noise in time-domain diffuse optical tomography (DOT) is proposed. Poisson noise, which contaminates time-resolved photon counting data, is reduced by use of maximum a posteriori estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing the probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performance of the Poisson noise reduction is demonstrated in experiments on image reconstruction in time-domain DOT. In simulations, the proposed method reduced the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image was smoothed by the noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening among other uses, is improved by the proposed noise reduction.
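A simplified 1-D stand-in for the approach: MAP estimation under a Poisson likelihood with a quadratic smoothness prior (a crude surrogate for the Markov-random-process model), maximized by gradient ascent. The count data are invented:

```python
def map_poisson_denoise(y, beta=0.2, step=0.05, iters=3000):
    """MAP estimate of a smooth intensity lam under Poisson counting noise:
    maximize sum(y*log(lam) - lam) - beta * sum((lam[i] - lam[i+1])**2)
    by gradient ascent. A crude 1-D surrogate for the paper's MRF prior."""
    n = len(y)
    lam = [max(float(v), 1.0) for v in y]
    for _ in range(iters):
        grad = []
        for i in range(n):
            g = y[i] / lam[i] - 1.0               # Poisson log-likelihood term
            if i > 0:
                g -= 2.0 * beta * (lam[i] - lam[i - 1])
            if i < n - 1:
                g -= 2.0 * beta * (lam[i] - lam[i + 1])
            grad.append(g)
        lam = [max(l + step * g, 1e-6) for l, g in zip(lam, grad)]
    return lam

# Invented photon counts with one noisy spike.
y = [10, 12, 9, 11, 30, 10, 11, 9, 12, 10]
est = map_poisson_denoise(y)
```

The prior term pulls the estimate toward its neighbors, so isolated spikes (the signature of Poisson noise at low counts) are damped while the overall level is preserved.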
Effects of radon mitigation vs smoking cessation in reducing radon-related risk of lung cancer.
Mendez, D; Warner, K E; Courant, P N
1998-01-01
OBJECTIVES: The purpose of this paper is to provide smokers with information on the relative benefits of mitigating radon and quitting smoking in reducing radon-related lung cancer risk. METHODS: The standard radon risk model, linked with models characterizing residential radon exposure and patterns of moving to new homes, was used to estimate the risk reduction produced by remediating high-radon homes, quitting smoking, or both. RESULTS: Quitting smoking reduces lung cancer risk from radon more than does reduction of radon exposure itself. CONCLUSIONS: Smokers should understand that, in addition to producing other health benefits, quitting smoking dominates strategies to deal with the problem posed by radon. PMID:9585753
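The comparison can be sketched with a toy multiplicative risk model; every parameter value below is a hypothetical placeholder, not a coefficient of the standard radon risk model used in the paper:

```python
def lung_cancer_risk(smoker, radon_pci_l):
    """Toy multiplicative lifetime-risk model. All parameters are
    hypothetical placeholders, not the standard radon risk model."""
    baseline = 0.004                  # hypothetical never-smoker baseline risk
    smoking_rr = 15.0 if smoker else 1.0
    err_per_pci_l = 0.10              # hypothetical excess relative risk per pCi/L
    return baseline * smoking_rr * (1.0 + err_per_pci_l * radon_pci_l)

HIGH_RADON = 8.0   # pCi/L, above the 4 pCi/L action level
LOW_RADON = 1.3    # pCi/L, roughly a typical indoor level

smoker_mitigates = lung_cancer_risk(True, LOW_RADON)   # remediates, keeps smoking
smoker_quits = lung_cancer_risk(False, HIGH_RADON)     # quits, keeps high radon
```

Because the smoking multiplier dwarfs the radon term, quitting lowers the combined risk far more than remediation alone, which is the qualitative point of the abstract.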
Modern CACSD using the Robust-Control Toolbox
NASA Technical Reports Server (NTRS)
Chiang, Richard Y.; Safonov, Michael G.
1989-01-01
The Robust-Control Toolbox is a collection of 40 M-files which extend the capability of PC/PRO-MATLAB to do modern multivariable robust control system design. Included are robust analysis tools like singular values and structured singular values, robust synthesis tools like continuous/discrete H(exp 2)/H infinity synthesis and Linear Quadratic Gaussian Loop Transfer Recovery methods and a variety of robust model reduction tools such as Hankel approximation, balanced truncation and balanced stochastic truncation, etc. The capabilities of the toolbox are described and illustrated with examples to show how easily they can be used in practice. Examples include structured singular value analysis, H infinity loop-shaping and large space structure model reduction.
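One of the model reduction tools mentioned, balanced truncation, can be sketched as follows, here in discrete time with gramians obtained by fixed-point iteration (this is not the toolbox's own M-file implementation, and the test system is invented):

```python
import numpy as np

def gramian_disc(A, B, iters=300):
    """Discrete Lyapunov solution P = A P A^T + B B^T by fixed-point iteration."""
    P = B @ B.T
    for _ in range(iters):
        P = A @ P @ A.T + B @ B.T
    return P

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable discrete-time system."""
    Lc = np.linalg.cholesky(gramian_disc(A, B))        # controllability factor
    Lo = np.linalg.cholesky(gramian_disc(A.T, C.T))    # observability factor
    U, s, Vt = np.linalg.svd(Lo.T @ Lc)                # s = Hankel singular values
    T = Lc @ Vt.T @ np.diag(s ** -0.5)                 # balancing transform
    Ti = np.diag(s ** -0.5) @ U.T @ Lo.T
    return (Ti @ A @ T)[:r, :r], (Ti @ B)[:r], (C @ T)[:, :r], s

# Stable two-state system with one weakly coupled, fast-decaying state.
A = np.array([[0.9, 0.0], [0.0, 0.1]])
B = np.array([[1.0], [0.1]])
C = np.array([[1.0, 0.1]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=1)
```

Truncating states with small Hankel singular values keeps the input-output behavior nearly intact, which is what makes the method attractive for large space structure models.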
EEG data reduction by means of autoregressive representation and discriminant analysis procedures.
Blinowska, K J; Czerwosz, L T; Drabik, W; Franaszczuk, P J; Ekiert, H
1981-06-01
A program for automatic evaluation of EEG spectra, providing considerable reduction of data, was devised. Artefacts were eliminated in two steps: first, the longer duration eye movement artefacts were removed by a fast and simple 'moving integral' method, then occasional spikes were identified by means of a detection function defined in the formalism of the autoregressive (AR) model. The evaluation of power spectra was performed by means of an FFT and autoregressive representation, which made possible the comparison of both methods. The spectra obtained by means of the AR model had much smaller statistical fluctuations and better resolution, enabling us to follow the time changes of the EEG pattern. Another advantage of the autoregressive approach was the parametric description of the signal. This last property appeared to be essential in distinguishing the changes in the EEG pattern. In a drug study, the application of the coefficients of the AR model as input parameters in the discriminant analysis, instead of arbitrarily chosen frequency bands, brought a significant improvement in distinguishing the effects of the medication. The favourable properties of the AR model are connected with the fact that the above approach fulfils the maximum entropy principle. This means that the method describes in a maximally consistent way the available information and is free from additional assumptions, which is not the case for the FFT estimate.
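The AR spectral estimation idea can be sketched for an AR(2) model fitted by the Yule-Walker equations; the resonant signal below is synthetic, not EEG:

```python
import cmath, random

def yule_walker_ar2(x):
    """Fit AR(2) coefficients a1, a2 in x[t] = a1*x[t-1] + a2*x[t-2] + e[t]
    from the first three autocovariances (Yule-Walker, via Cramer's rule)."""
    n = len(x)
    m = sum(x) / n
    x = [v - m for v in x]
    r = [sum(x[t] * x[t - k] for t in range(k, n)) / n for k in range(3)]
    det = r[0] ** 2 - r[1] ** 2
    a1 = (r[0] * r[1] - r[1] * r[2]) / det
    a2 = (r[0] * r[2] - r[1] ** 2) / det
    return a1, a2

def ar2_spectrum(a1, a2, f):
    """AR(2) power spectrum (unit innovation variance); f in cycles/sample."""
    z = cmath.exp(-2j * cmath.pi * f)
    return 1.0 / abs(1.0 - a1 * z - a2 * z * z) ** 2

# Synthetic resonant AR(2) signal standing in for an EEG channel.
random.seed(11)
x = [0.0, 0.0]
for _ in range(4000):
    x.append(1.5 * x[-1] - 0.8 * x[-2] + random.gauss(0.0, 1.0))
a1, a2 = yule_walker_ar2(x[500:])
```

The two fitted coefficients summarize the whole spectral shape, which is the "parametric description" that the abstract says fed the discriminant analysis in place of fixed frequency bands.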
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations: the deviational schemes considered in this study lead either to instabilities, in the case of two-weight methods, or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be reduced dramatically, especially for low Mach numbers. To assess the method, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, variance reduction based on parallel processes proved very robust and effective.
Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap
NASA Astrophysics Data System (ADS)
Spiwok, Vojtěch; Králová, Blanka
2011-12-01
Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and the corresponding transition structures, which are inaccessible by an unbiased simulation. This scheme allows one to use essentially any parameter of the system as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to the 3D space can be used as a general-purpose mapping for dimensionality reduction, beyond the context of molecular modeling.
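A minimal analogue of this workflow, using scikit-learn's Isomap on a synthetic swiss-roll manifold in place of the 72-dimensional conformation coordinates (the data and parameters here are illustrative, not the paper's):

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# "Training" points stand in for the ad hoc generated conformations.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# Nonlinear reduction from the 3-D ambient space to a 2-D embedding
# (the paper maps 72-D conformations to 3-D).
iso = Isomap(n_neighbors=10, n_components=2)
Y = iso.fit_transform(X)

# Out-of-sample mapping: new points are embedded without refitting,
# analogous to mapping conformations visited during the biased simulation.
X_new, _ = make_swiss_roll(n_samples=50, random_state=1)
Y_new = iso.transform(X_new)
```

The out-of-sample `transform` step is the crucial piece for metadynamics: the bias potential must be evaluated on conformations that were not in the original training set.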
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Kai; Fu, Shubin; Gibson, Richard L.
2015-04-14
It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as the finite-difference and finite-element methods, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and the interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine-scale medium property variations and allows us to greatly reduce the degrees of freedom required to implement the modeling, compared with the conventional finite-element method for the wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin versions of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that it can effectively model elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.
Advection modes by optimal mass transfer
NASA Astrophysics Data System (ADS)
Iollo, Angelo; Lombardi, Damiano
2014-02-01
Classical model reduction techniques approximate the solution of a physical model by a limited number of global modes. These modes are usually determined by variants of principal component analysis. Global modes can lead to reduced models that perform well in terms of stability and accuracy. However, when the physics of the model is mainly characterized by advection, the nonlocal representation of the solution by global modes essentially reduces to a Fourier expansion. In this paper we describe a method to determine a low-order representation of advection. This method is based on the solution of Monge-Kantorovich mass transfer problems. Examples of application to point vortex scattering, the Korteweg-de Vries equation, and Hurricane Dean advection are discussed.
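The weakness of global modes for advection can be seen in a small numerical experiment: a snapshot matrix of a traveling pulse needs many POD (SVD) modes, while the same pulse decaying in place is captured by a single mode. The sketch below is an illustration of that observation, not an example from the paper.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
times = np.linspace(0.0, 0.5, 60)

# Snapshots of a traveling Gaussian pulse: advection-dominated dynamics.
travel = np.array([np.exp(-((x - 0.2 - t) / 0.05) ** 2) for t in times]).T

# Snapshots of a fixed pulse that only decays in amplitude: no transport.
decay = np.array([np.exp(-t) * np.exp(-((x - 0.5) / 0.05) ** 2) for t in times]).T

# POD = SVD of the snapshot matrix; squared singular values measure mode energy.
s_travel = np.linalg.svd(travel, compute_uv=False)
s_decay = np.linalg.svd(decay, compute_uv=False)

# Energy fraction captured by the leading mode: essentially 100% without
# advection, far less when the structure moves through the domain.
e_travel = s_travel[0] ** 2 / np.sum(s_travel ** 2)
e_decay = s_decay[0] ** 2 / np.sum(s_decay ** 2)
```

The slowly decaying singular-value spectrum of the traveling case is the Fourier-expansion-like behavior the abstract refers to, and it is what motivates replacing global modes by transport (optimal mass transfer) maps.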
Proposal for a model to assess the effect of seismic activity on the triggering of debris flows
NASA Astrophysics Data System (ADS)
Vidar Vangelsten, Bjørn; Liu, Zhongqiang; Eidsvig, Unni; Luna, Byron Quan; Nadim, Farrokh
2013-04-01
Landslides triggered by earthquakes are a serious threat to many communities around the world and in some cases are known to have caused 25-50% of earthquake fatalities. Seismic shaking can contribute to the triggering of debris flows either during the seismic event or indirectly, by increasing the susceptibility of the slope to debris flow during intense rainfall in the period after the seismic event. The paper proposes a model to quantify both effects. The model is based on an infinite slope formulation in which precipitation and earthquakes influence the slope stability as follows: (1) During the shaking, the factor of safety is reduced by cyclic pore pressure build-up, where the cyclic pore pressure is modelled as a function of earthquake duration and intensity (measured as the number of equivalent shear stress cycles and the cyclic shear stress magnitude) and in-situ soil conditions (measured as the average normalised shear stress). The model is calibrated using cyclic triaxial and direct simple shear (DSS) test data on clay and sand. (2) After the shaking, the factor of safety is modified using a combined empirical and analytical model that links observed earthquake-induced changes in rainfall thresholds for the triggering of debris flows to an equivalent reduction in soil shear strength. The empirical part uses data from past earthquakes to propose a conceptual model linking a site-specific reduction factor for the rainfall intensity threshold (needed to trigger debris flows) to earthquake magnitude, distance from the epicentre and time elapsed since the earthquake. The analytical part is a hydrological model for transient rainfall infiltration into an infinite slope, used to translate the change in rainfall intensity threshold into an equivalent reduction in soil shear strength. This is generalised into a functional form giving a site-specific shear strength reduction factor as a function of earthquake history and soil conditions.
The model is suitable for hazard and risk assessment at local and regional scales for earthquake- and rainfall-induced landslides. The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under grant agreement No. 265138, New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe (MATRIX).
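The infinite-slope limit-equilibrium calculation at the core of such a model can be sketched as follows. This is the standard textbook form with hypothetical soil parameters; the paper's cyclic pore-pressure model and post-seismic strength-reduction factor would enter through `u` and the strength parameters.

```python
import numpy as np

def infinite_slope_fs(c_eff, phi_deg, gamma, z, beta_deg, u):
    """Factor of safety of an infinite slope: effective cohesion c_eff [kPa],
    friction angle phi [deg], unit weight gamma [kN/m^3], failure depth z [m],
    slope angle beta [deg], pore pressure u [kPa] on the slip surface."""
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    tau = gamma * z * np.sin(beta) * np.cos(beta)      # driving shear stress
    sigma_n = gamma * z * np.cos(beta) ** 2            # total normal stress
    return (c_eff + (sigma_n - u) * np.tan(phi)) / tau

# Hypothetical slope: 30 deg, 2 m deep, gamma = 19 kN/m^3, c' = 5 kPa, phi' = 32 deg.
fs_dry = infinite_slope_fs(5.0, 32.0, 19.0, 2.0, 30.0, u=0.0)
# A co-seismic pore-pressure build-up of 10 kPa pushes the slope toward failure.
fs_shaken = infinite_slope_fs(5.0, 32.0, 19.0, 2.0, 30.0, u=10.0)
```

With these illustrative numbers the factor of safety drops from about 1.39 to about 1.01, showing how a modest pore-pressure increase during shaking can bring a stable slope to the verge of triggering.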
FW/CADIS-O: An Angle-Informed Hybrid Method for Neutron Transport
NASA Astrophysics Data System (ADS)
Munk, Madicken
The development of methods for deep-penetration radiation transport is of continued importance for radiation shielding, nonproliferation, nuclear threat reduction, and medical applications. As these applications become more ubiquitous, the need for transport methods that can accurately and reliably model such systems will persist. For these types of systems, hybrid methods are often the best choice to obtain a reliable answer in a short amount of time. Hybrid methods leverage the speed and uniform uncertainty distribution of a deterministic solution to bias Monte Carlo transport and reduce the variance in the solution. At present, the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) hybrid methods are the gold standard for modeling systems with deeply penetrating radiation. They use an adjoint scalar flux to generate variance reduction parameters for Monte Carlo. However, in problems with strong anisotropy in the flux, CADIS and FW-CADIS are not as effective at reducing the problem variance as they are in isotropic problems. This dissertation covers the theoretical background, implementation, and characterization of a set of angle-informed hybrid methods that can be applied to strongly anisotropic deep-penetration radiation transport problems. These methods use a forward-weighted adjoint angular flux to generate variance reduction parameters for Monte Carlo, thereby leveraging both adjoint and contributon theory for variance reduction. They have been named CADIS-O and FW-CADIS-O. To characterize CADIS-O, several characterization problems with flux anisotropies were devised. These problems contain different physical mechanisms by which flux anisotropy is induced. Additionally, a series of novel metrics for quantifying flux anisotropy is used to characterize the methods beyond the standard Figure of Merit (FOM) and relative error metrics.
As a result, a more thorough investigation into the effects of anisotropy and the degree of anisotropy on Monte Carlo convergence is possible. The results from the characterization of CADIS-O show that it performs best in strongly anisotropic problems that have preferential particle flowpaths, but only if the flowpaths are not composed of air. Further, the characterization of the method's sensitivity to deterministic angular discretization showed that CADIS-O is less sensitive to discretization than CADIS for both quadrature order and PN order, although more variation in the results was observed in response to changing quadrature order than PN order. In addition, as a result of the forward-normalization in the O-methods, ray-effect mitigation was observed in many of the characterization problems. The characterization of the CADIS-O method in this dissertation outlines a path forward for further hybrid methods development. In particular, the method's response to changes in quadrature order and PN order, together with its ray-effect mitigation, strongly indicates that it is more resilient than its predecessors to strong anisotropies in the flux. With further characterization, the full potential of the O-methods can be realized. The methods can then be applied to geometrically complex, materially diverse problems and help advance system modeling in deep-penetration radiation transport problems with strong anisotropies in the flux.
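The CADIS recipe that these methods build on can be stated compactly: a deterministic adjoint (importance) flux defines a biased source and consistent statistical weights. The one-dimensional toy below illustrates only that basic scalar-flux recipe (values are made up; real implementations operate on full space-energy meshes, and the O-methods replace the scalar adjoint with a forward-weighted angular one).

```python
import numpy as np

# Four spatial cells: the source sits in cell 0, the detector beyond cell 3.
q = np.array([1.0, 0.0, 0.0, 0.0])           # forward source distribution
phi_adj = np.array([1e-2, 1e-1, 1.0, 10.0])  # adjoint flux = importance to detector

# Detector response estimate from the adjoint identity R = <q, phi_adj>.
R = q @ phi_adj

# CADIS biased source: sample source particles in proportion to importance.
q_biased = q * phi_adj / R

# Consistent statistical weights (weight-window centers): w = R / phi_adj,
# so particles moving toward the detector are split, those moving away rouletted.
w = R / phi_adj
```

Because `w` decreases monotonically toward the detector here, particles are progressively split as they penetrate, which is exactly how these methods keep the deep-penetration variance under control.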
Noise reduction of a tilt-rotor aircraft including effects on weight and performance
NASA Technical Reports Server (NTRS)
Gibs, J.; Stepniewski, W. Z.; Spencer, R.; Kohler, G.
1973-01-01
Various methods for far-field noise reduction of a tilt-rotor acoustic signature, and the performance and weight tradeoffs which result from modification of the noise sources, are considered in this report. In order to provide a realistic approach for the investigation, the Boeing tilt-rotor flight research aircraft (Model 222) was selected as the baseline. This aircraft has undergone considerable engineering development; its rotor has been manufactured and tested in the Ames full-scale wind tunnel. The study reflects the current state of the art of aircraft design for far-field acoustic signature reduction and is not based solely on an engineering feasibility aircraft. This report supplements a previous study investigating reduction of the noise signature through management of the terminal flight trajectory.
A visual servo-based teleoperation robot system for closed diaphyseal fracture reduction.
Li, Changsheng; Wang, Tianmiao; Hu, Lei; Zhang, Lihai; Du, Hailong; Zhao, Lu; Wang, Lifeng; Tang, Peifu
2015-09-01
Common fracture treatments include open reduction and intramedullary nailing technology. However, these methods have disadvantages such as intraoperative X-ray radiation, delayed union or nonunion and postoperative rotation. Robots provide a novel solution to the aforementioned problems while posing new challenges. Against this scientific background, we develop a visual servo-based teleoperation robot system. In this article, we present a robot system, analyze the visual servo-based control system in detail and develop path planning for fracture reduction, inverse kinematics, and output forces of the reduction mechanism. A series of experimental tests is conducted on a bone model and an animal bone. The experimental results demonstrate the feasibility of the robot system. The robot system uses preoperative computed tomography data to realize high precision and perform minimally invasive teleoperation for fracture reduction via the visual servo-based control system while protecting surgeons from radiation. © IMechE 2015.
NASA Astrophysics Data System (ADS)
Yan, Zhen-Ya
2001-10-01
In this paper, similarity reductions of Boussinesq-like equations with nonlinear dispersion (simply called B(m,n) equations), u_tt = (u^n)_xx + (u^m)_xxxx, which generalize the Boussinesq equation u_tt = (u^2)_xx + u_xxxx and the modified Boussinesq equation u_tt = (u^3)_xx + u_xxxx, are considered using the direct reduction method. As a result, several new types of similarity reductions are found. Based on the reduction equations and some simple transformations, we obtain the solitary wave solutions of B(1,n) equations and the compacton solutions (solitary waves with the property that, after colliding with other compactons, they re-emerge with the same coherent shape) of B(m,m) equations, respectively. The project was supported by the National Key Basic Research Development Project Program of China under Grant No. G1998030600 and the Doctoral Foundation of China under Grant No. 98014119.
Jeon, Sangchoon; Walkup, John T; Woods, Douglas W.; Peterson, Alan; Piacentini, John; Wilhelm, Sabine; Katsovich, Lily; McGuire, Joseph F.; Dziura, James; Scahill, Lawrence
2014-01-01
Objective To compare three statistical strategies for classifying positive treatment response based on a dimensional measure (Yale Global Tic Severity Scale [YGTSS]) and a categorical measure (Clinical Global Impression-Improvement [CGI-I]). Method Subjects (N=232; 69.4% male; ages 9-69 years) with Tourette syndrome or chronic tic disorder participated in one of two 10-week, randomized controlled trials comparing behavioral treatment to supportive therapy. The YGTSS and CGI-I were rated by clinicians blind to treatment assignment. We examined the percent reduction in the YGTSS-Total Tic Score (TTS) against Much Improved or Very Much Improved on the CGI-I, performed a signal detection analysis (SDA), and built a mixture model to classify dimensional response based on the change in the YGTSS-TTS. Results A 25% decrease on the YGTSS-TTS predicted positive response on the CGI-I during the trial. The SDA showed that a 25% reduction in the YGTSS-TTS provided optimal sensitivity (87%) and specificity (84%) for predicting positive response. Using a mixture model without consideration of the CGI-I, dimensional response was defined by a 23% (or greater) reduction on the YGTSS-TTS. The odds ratio (OR) of positive response on the CGI-I for the behavioral intervention (OR=5.68, 95% CI=[2.99, 10.78]) was greater than that for the dimensional response (OR=2.86, 95% CI=[1.65, 4.99]). Conclusion A twenty-five percent reduction on the YGTSS-TTS is highly predictive of positive response by all three analytic methods. For trained raters, however, tic severity alone does not drive the classification of positive response. PMID:24001701
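The percent-reduction rule and its agreement with the categorical rating are simple to compute. The sketch below uses toy data, not the trial's, to show the 25% cutoff and the sensitivity/specificity calculation the SDA optimizes.

```python
import numpy as np

def percent_reduction(baseline, endpoint):
    """Percent drop from baseline, e.g. YGTSS-TTS before vs. after treatment."""
    return 100.0 * (baseline - endpoint) / baseline

def sens_spec(pred, truth):
    """Sensitivity and specificity of a binary responder rule vs. a reference."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    sens = (pred & truth).sum() / truth.sum()
    spec = (~pred & ~truth).sum() / (~truth).sum()
    return sens, spec

# Toy baseline/endpoint tic scores and CGI-I responder flags (illustrative only).
base = np.array([30, 28, 25, 32, 27, 24])
post = np.array([18, 26, 14, 30, 16, 23])
cgi_resp = np.array([1, 0, 1, 0, 1, 0])

# Apply the 25% cutoff the abstract reports as optimal.
pred = percent_reduction(base, post) >= 25
sens, spec = sens_spec(pred, cgi_resp)
```

On this toy data the cutoff separates responders perfectly; on real trial data the abstract reports 87% sensitivity and 84% specificity.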
Local spatiotemporal time-frequency peak filtering method for seismic random noise reduction
NASA Astrophysics Data System (ADS)
Liu, Yanping; Dang, Bo; Li, Yue; Lin, Hongbo
2014-12-01
To achieve a higher level of seismic random noise suppression, the Radon transform has been adopted to implement spatiotemporal time-frequency peak filtering (TFPF) in our previous studies. Those studies performed TFPF in the full-aperture Radon domain, including linear Radon and parabolic Radon. Although the superiority of this method over conventional TFPF has been tested on synthetic seismic models and field seismic data, it still has some limitations. Both full-aperture linear Radon and parabolic Radon are applicable and effective in relatively simple situations (e.g., curved reflection events with regular geometry) but inapplicable in complicated situations such as reflection events with irregular shapes, or interlaced events with quite different slope or curvature parameters. Therefore, the Radon transform must be applied in a localized fashion, adapting the transform to the local character of the data variations. In this article, we propose to adopt a local Radon transform, referred to as piecewise full-aperture Radon, to realize spatiotemporal TFPF, called local spatiotemporal TFPF. Through experiments on synthetic seismic models and field seismic data, this study demonstrates the advantage of our method in seismic random noise reduction and reflection event recovery in relatively complicated seismic data.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function, called bundling, that relaxes the requirement for large numbers of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a "bundle" of similar model parametrizations replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and the computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and a script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on the predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
Mekonnen, Tekeshe A.; Odden, Michelle C.; Coxson, Pamela G.; Guzman, David; Lightwood, James; Wang, Y. Claire; Bibbins-Domingo, Kirsten
2013-01-01
Background Consumption of sugar-sweetened beverages (SSBs) has risen over the past two decades, with over 10 million Californians drinking one or more SSBs per day. High SSB intake is associated with risk of type 2 diabetes, obesity, hypertension, and coronary heart disease (CHD). Reduction of SSB intake and the potential impact on health outcomes in California and among racial, ethnic, and low-income sub-groups has not been quantified. Methods We projected the impact of reduced SSB consumption on health outcomes among all Californians and California subpopulations from 2013 to 2022. We used the CVD Policy Model - CA, an established computer simulation of diabetes and heart disease adapted to California. We modeled a reduction in SSB intake by 10-20%, as has been projected to result from a proposed penny-per-ounce excise tax on SSBs, and modeled varying effects of this reduction on health parameters including body mass index, blood pressure, and diabetes risk. We projected avoided cases of diabetes and CHD, and associated health care cost savings in 2012 US dollars. Results Over the next decade, a 10-20% SSB consumption reduction is projected to result in a 1.8-3.4% decline in new cases of diabetes and additional drops of 0.5-1% in incident CHD cases and 0.5-0.9% in total myocardial infarctions. The greatest reductions are expected in African Americans, Mexican Americans, and those with limited income regardless of race and ethnicity. This reduction in SSB consumption is projected to yield $320-620 million in medical cost savings associated with diabetes cases averted and an additional savings of $14-27 million in diabetes-related CHD costs avoided. Conclusions A reduction of SSB consumption could yield substantial population health benefits and cost savings for California. In particular, racial, ethnic, and low-income subgroups of California could reap the greatest health benefits. PMID:24349119
Kulish-Sklyanin-type models: Integrability and reductions
NASA Astrophysics Data System (ADS)
Gerdjikov, V. S.
2017-08-01
We start with a Riemann-Hilbert problem (RHP) related to BD.I-type symmetric spaces SO(2r+1)/S(O(2r-2s+1) ⊗ O(2s)), s ≥ 1. We consider two RHPs: the first is formulated on the real axis R in the complex-λ plane; the second, on R ⊗ iR. The first RHP for s = 1 allows solving the Kulish-Sklyanin (KS) model; the second RHP is related to a new type of KS model. We consider an important example of nontrivial deep reductions of the KS model and show its effect on the scattering matrix. In particular, we obtain new two-component nonlinear Schrödinger equations. Finally, using the Wronski relations, we show that the inverse scattering method for KS models can be understood as generalized Fourier transforms. We thus find a way to characterize all the fundamental properties of KS models, including the hierarchy of equations and the hierarchy of their Hamiltonian structures.
Formal modeling of a system of chemical reactions under uncertainty.
Ghosh, Krishnendu; Schlipf, John
2014-10-01
We describe a novel formalism representing a system of chemical reactions, with imprecise rates of reactions and concentrations of chemicals, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for construction of efficient model abstractions with uncertainty in data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of extracellular-signal-regulated kinase (ERK) pathway.
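As a toy analogue of the two approximations (the actual algorithms operate on a formal model of the reaction system and CTL queries; the rate names and values below are illustrative), consider propagating an uncertain first-order decay rate either by its interval midpoint or as a full interval:

```python
def step_midpoint(conc, k_lo, k_hi, dt):
    """One Euler step of d[A]/dt = -k[A] using the midpoint of the rate interval."""
    k = 0.5 * (k_lo + k_hi)
    return conc - k * conc * dt

def step_interval(conc_lo, conc_hi, k_lo, k_hi, dt):
    """One Euler step propagating the whole interval; for decay of a nonnegative
    concentration, the lower bound shrinks fastest with the largest rate."""
    return conc_lo - k_hi * conc_lo * dt, conc_hi - k_lo * conc_hi * dt

# Rate known only to lie in [0.8, 1.2]; initial concentration 1.0 (illustrative).
mid = step_midpoint(1.0, 0.8, 1.2, 0.01)
lo, hi = step_interval(1.0, 1.0, 0.8, 1.2, 0.01)
```

The midpoint approximation yields a single cheap trajectory inside the envelope that the interval approximation tracks; the trade-off between the two is exactly what makes model checking with imprecise rates computationally feasible.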
Zepeda-Tello, Rodrigo; Rodrigues, Eliane R.; Colchero-Aragonés, Arantxa; Rojas-Martínez, Rosalba; Lazcano-Ponce, Eduardo; Hernández-Ávila, Mauricio; Rivera-Dommarco, Juan; Meza, Rafael
2017-01-01
Study question What effect on body mass index, obesity and diabetes can we expect from the 1-peso-per-litre tax on sugar-sweetened beverages in Mexico? Methods Using recently published estimates of the reductions in beverage purchases due to the tax, we modelled its expected long-term impacts on body mass index (BMI), obesity and diabetes. Microsimulations based on a nationally representative dataset were used to estimate the impact of the tax on BMI and obesity. A Markov population model, built upon an age-period-cohort model of diabetes incidence, was used to estimate the impact on diagnosed diabetes in Mexico. To analyse the potential of tax increases we also modelled a 2-peso-per-litre tax scenario. Study answer and limitations Ten years after the implementation of the tax, we expect an average reduction of 0.15 kg/m2 per person, which translates into a 2.54% reduction in obesity prevalence. People in the lowest level of socioeconomic status and those between 20 and 35 years of age showed the largest reductions in BMI and in overweight and obesity prevalence. Simulations show that by 2030, under the current implementation of 1 peso per litre, the tax would prevent 86 to 134 thousand cases of diabetes. Overall, the 2-peso-per-litre scenario is expected to produce twice the reduction. These estimates assume the tax effect on consumption remains stable over time. Sensitivity analyses were conducted to assess the robustness of findings; similar results were obtained with various parameter assumptions and alternative modelling approaches. What this study adds The sugar-sweetened beverage tax in Mexico is expected to produce sizable and sustained reductions in obesity and diabetes. Increasing the tax could produce larger benefits. While encouraging, estimates will need to be updated once data on direct changes in consumption become available. PMID:28520716
Henne, Erik; Kesten, Steven; Herth, Felix J F
2013-01-01
A method of achieving endoscopic lung volume reduction for emphysema has been developed that utilizes precise amounts of thermal energy, in the form of water vapor, to ablate lung tissue. This study evaluates the energy output and implications of the commercial InterVapor system and compares it to the clinical trial system. Two methods of evaluating the energy output of the vapor systems were used: a direct energy measurement and a quantification of the resultant thermal profile in a lung model. Direct measurement of total energy and the component attributable to gas (vapor energy) was performed by condensing vapor in a water bath and measuring the temperature and mass changes. Infrared images of a lung model were taken after vapor delivery, and the images were quantified to characterize the thermal profile. The total energy and vapor energy of the InterVapor system were measured at various dose levels and compared to the clinical trial system at a dose of 10.0 cal/g. An InterVapor dose of 8.5 cal/g was found to have the most similar vapor energy output with the smallest associated reduction in total energy. This was supported by characterization of the thermal profile in the lung model, which demonstrated that the profile of InterVapor at 8.5 cal/g did not exceed that of the clinical trial system. Considering both total energy and vapor energy is important during the development of clinical vapor applications. For InterVapor, a closer study of both energy types justified a reduced target vapor-dosing range for lung volume reduction. The clinical implication is a potential improvement in the risk profile. Copyright © 2013 S. Karger AG, Basel.
Gupta, Amit O; Jain, Sourav; Dawane, Jayshree Shriram
2017-01-01
Introduction The incidence of arthritis is quite high, and there is a need to search for natural products that halt the progression of the disease or provide symptomatic relief without significant adverse effects. Aim This study aimed at evaluating the anti-inflammatory and analgesic activities of topical Pterocarpus santalinus in an animal model of chronic inflammation. Materials and Methods Albino rats of either sex were divided into five groups of six rats each (Group I - Control, Group II - Gel base, Group III - P. santalinus paste, Group IV - P. santalinus gel, Group V - Diclofenac gel). Chronic inflammation was induced on day 0 by injecting 0.1 ml Complete Freund's Adjuvant (CFA) into the sub-plantar tissue of the left hind paw of the rats. Topical treatment was started from day 12 till day 28. Body weight and paw volume (plethysmometer) were assessed on days 0, 12 and 28. Pain assessment was done using the Randall and Selitto paw withdrawal method. Data were analysed using GraphPad Prism version 5. The unpaired Student's t-test and ANOVA followed by Tukey's test were used for comparison among groups. Results Only topical P. santalinus gel significantly reduced body weight (p=0.02), owing to the reduction in inflammatory oedema of the left limb. P. santalinus gel also showed a significant reduction (p=0.03) in the paw volume of rats compared to the other groups. There was a significant reduction in pain threshold (gm/sec) due to chronic inflammation with all the study drugs (p<0.05), but with P. santalinus gel this reduction was less (p<0.001). Conclusion The gel showed significant anti-inflammatory and mild analgesic activity on topical application in a rat model of chronic inflammation. PMID:28892928
NASA Astrophysics Data System (ADS)
Lafranchi, B. W.; Goldstein, A. H.; Cohen, R. C.
2011-07-01
Observations of NOx in the Sacramento, CA region show that mixing ratios decreased by 30 % between 2001 and 2008. Here we use an observation-based method to quantify net ozone (O3) production rates in the outflow from the Sacramento metropolitan region and examine the O3 decrease resulting from reductions in NOx emissions. This observational method does not rely on assumptions about the detailed chemistry of ozone production; rather, it is an independent means to verify and test these assumptions. We use an instantaneous steady-state model as well as a detailed 1-D plume model to aid in interpretation of the ozone production inferred from observations. In agreement with the models, the observations show that early in the plume, the NOx dependence of Ox (Ox = O3 + NO2) production is strongly coupled with temperature, suggesting that temperature-dependent biogenic VOC emissions and other temperature-related effects can drive Ox production between NOx-limited and NOx-suppressed regimes. As a result, NOx reductions were found to be most effective at higher temperatures over the 7-year period. We show that violations of the California 1-h O3 standard (90 ppb) in the region have been decreasing linearly with decreases in NOx (at a given temperature) and predict that reductions of NOx concentrations (and presumably emissions) by an additional 30 % (relative to 2007 levels) will eliminate violations of the state 1-h standard in the region. If current trends continue, a 30 % decrease in NOx is expected by 2012, and an end to violations of the 1-h standard in the Sacramento region appears to be imminent.
Consumer preference for dinoprostone vaginal gel using stated preference discrete choice modelling.
Taylor, Susan; Armour, Carol
2003-01-01
To assess consumer preference for two methods of induction of labour using stated preference discrete choice modelling. The methods of induction were artificial rupture of the membranes (ARM) plus oxytocin and dinoprostone (prostaglandin E(2)) vaginal gel, followed by oxytocin if necessary. Consumer preference was measured in terms of willingness to pay for each of the attributes. These attributes were the method of administration, place of care, length of time from induction to delivery, need for epidural anaesthetic, type of delivery and cost. Levels were assigned to each of the attributes. Pregnant women attending a public hospital antenatal clinic were asked to read a description of the two methods and then to choose between them in 18 different scenarios in which the levels of the attributes were varied. Women were willing to pay 11 Australian dollars for a 1% reduction in the chance of needing oxytocin as well as the gel and 55 Australian dollars for every 1 hour reduction in the length of time from induction to delivery. For a 1% reduction in the chance of needing an epidural anaesthetic or Caesarean section, women expressed a willingness to pay of 20 Australian dollars and 90 Australian dollars, respectively. All estimates were obtained in 1998 and expressed in Australian dollars (1 Australian dollar = 0.63 US dollars). Women valued the less invasive method of administration of the gel and the associated greater freedom of movement during labour. However, they valued the shorter time from induction to delivery associated with ARM plus oxytocin more highly. A policy which allows women access to the gel for up to two doses would accommodate this consumer preference.
Low-dimensional, morphologically accurate models of subthreshold membrane potential
Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.
2009-01-01
The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations. For, in order to understand how a cell distinguishes between input patterns we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate and fire model. PMID:19172386
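The time-domain reduction named above, Balanced Truncation, can be sketched for a generic stable linear system x' = Ax + Bu, y = Cx. The following is a minimal numpy illustration of the square-root algorithm under assumed toy matrices, not the authors' quasi-active neuron code:

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 via Kronecker products (column-major vec)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(K, -Q.ravel(order="F")).reshape(n, n, order="F")

def balanced_truncation(A, B, C, r):
    """Reduce (A, B, C) to order r; also return the Hankel singular values."""
    Wc = lyap(A, B @ B.T)        # controllability Gramian
    Wo = lyap(A.T, C.T @ C)      # observability Gramian
    L = np.linalg.cholesky(Wc)
    U, s2, _ = np.linalg.svd(L.T @ Wo @ L)   # s2 = Hankel singular values squared
    hsv = np.sqrt(s2)
    T = L @ U * hsv ** -0.5      # balancing transformation (columns scaled)
    Ti = np.linalg.inv(T)
    return (Ti @ A @ T)[:r, :r], (Ti @ B)[:r], (C @ T)[:, :r], hsv
```

Discarding states with small Hankel singular values keeps the input-output map nearly intact; the H-infinity error is bounded by twice the sum of the truncated singular values.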
NASA Astrophysics Data System (ADS)
Miura, Yasunari; Sugiyama, Yuki
2017-12-01
We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, a dimensionality-reduction technique, to systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed by these coarse variables. We apply this method to the analysis of a traffic model, the optimal velocity model, and reveal a bifurcation structure featuring a transition to the emergence of a moving cluster as a traffic jam.
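The diffusion-map construction itself fits in a few lines: build a Gaussian kernel on pairwise distances, row-normalize it into a Markov matrix, and use the leading non-trivial eigenvectors as coarse variables. This numpy sketch assumes a kernel scale eps and is not the authors' implementation:

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2, t=1):
    """Diffusion-map coordinates for the rows of X (one configuration per row).
    eps is the Gaussian-kernel scale, t the diffusion time."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
    K = np.exp(-d2 / eps)                                  # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)                   # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # drop the trivial constant eigenvector (eigenvalue 1); scale by eigenvalue^t
    return vecs[:, 1:n_coords + 1] * vals[1:n_coords + 1] ** t
```

On well-separated clusters the first diffusion coordinate splits the data by sign, which is the sense in which a few coordinates capture the macroscopic state.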
Multiscale model reduction for shale gas transport in poroelastic fractured media
NASA Astrophysics Data System (ADS)
Akkutlu, I. Yucel; Efendiev, Yalchin; Vasilyeva, Maria; Wang, Yuhe
2018-01-01
Inherently coupled flow and geomechanics processes in fractured shale media have implications for shale gas production. The system involves highly complex geo-textures comprising a heterogeneous anisotropic fracture network spatially embedded in an ultra-tight matrix. In addition, nonlinearities due to viscous flow, diffusion, and desorption in the matrix and high-velocity gas flow in the fractures complicate the transport. In this paper, we develop a multiscale model reduction approach to couple gas flow and geomechanics in fractured shale media. A Discrete Fracture Model (DFM) is used to treat the complex network of fractures on a fine grid. The coupled flow and geomechanics equations are solved using a fixed-stress splitting scheme, solving the pressure equation with a continuous Galerkin method and the displacement equation with an interior penalty discontinuous Galerkin method. We develop a coarse-grid approximation and coupling using the Generalized Multiscale Finite Element Method (GMsFEM). GMsFEM constructs the multiscale basis functions in a systematic way to capture the fracture networks and their interactions with the shale matrix. Numerical results and an error analysis are provided, showing that the proposed approach accurately captures the coupled process using a few multiscale basis functions, i.e., a small fraction of the degrees of freedom of the fine-scale problem.
Bi, Jian
2010-01-01
As the desire to promote health increases, reductions of certain ingredients, for example, sodium, sugar, and fat in food products, are widely requested. However, such reduction is not risk free in sensory and marketing aspects. Over-reduction may change the taste and influence the flavor of a product and lead to a decrease in consumers' overall liking or purchase intent for the product. This article uses the benchmark dose (BMD) methodology to determine an appropriate reduction. Calculations of the BMD and the one-sided lower confidence limit of the BMD (BMDL) are illustrated. The article also discusses how to calculate the BMD and BMDL for overdispersed binary data in replicated testing based on a corrected beta-binomial model. USEPA Benchmark Dose Software (BMDS) was used and S-Plus programs were developed. The method can thus be used to determine an appropriate reduction of ingredients such as sodium, sugar, and fat in food products, considering both health benefits and sensory or marketing risk.
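For a quantal dose-response model the BMD has a closed form. The sketch below uses the quantal-linear model from BMDS as an assumed example, with the ingredient reduction playing the role of dose and consumer rejection the role of response; the parameter values are illustrative, not the article's:

```python
import math

def extra_risk(d, p0, b):
    """Quantal-linear model P(d) = p0 + (1 - p0) * (1 - exp(-b d));
    extra risk is (P(d) - P(0)) / (1 - P(0)) = 1 - exp(-b d)."""
    p = p0 + (1.0 - p0) * (1.0 - math.exp(-b * d))
    return (p - p0) / (1.0 - p0)

def bmd(b, bmr=0.10):
    """Reduction level at which the extra risk of consumer rejection equals BMR."""
    return -math.log(1.0 - bmr) / b
```

The BMDL would then be obtained by profiling the likelihood of b, which is what BMDS automates.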
A Molecular Dynamic Modeling of Hemoglobin-Hemoglobin Interactions
NASA Astrophysics Data System (ADS)
Wu, Tao; Yang, Ye; Sheldon Wang, X.; Cohen, Barry; Ge, Hongya
2010-05-01
In this paper, we present a study of hemoglobin-hemoglobin interaction using model reduction methods. We begin with a simple spring-mass system with given parameters (mass and stiffness). With this known system, we compare the mode superposition method with Singular Value Decomposition (SVD)-based Principal Component Analysis (PCA). Through PCA we are able to recover the principal direction of this system, namely the modal direction. This modal direction is matched with the eigenvector derived from mode superposition analysis. The same technique is then implemented in a much more complicated hemoglobin-hemoglobin molecule interaction model, in which thousands of atoms in hemoglobin molecules are coupled with tens of thousands of T3 water molecule models. In this model, complex inter-atomic and inter-molecular potentials are replaced by nonlinear springs. We employ the same method to obtain the most significant modes and their frequencies of this complex dynamical system. More complex physical phenomena can then be further studied with these coarse-grained models.
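The spring-mass benchmark can be reproduced in a few lines: simulate a 2-DOF chain dominated by its lowest mode, then check that the leading SVD/PCA direction of the snapshot matrix recovers that mode shape. This is a generic sketch with assumed unit masses and stiffnesses, not the authors' model:

```python
import numpy as np

# 2-DOF spring-mass chain: M = I, fixed-free stiffness matrix with k = 1
K = np.array([[2.0, -1.0], [-1.0, 1.0]])
w2, modes = np.linalg.eigh(K)          # squared natural frequencies and mode shapes

# synthesize a trajectory dominated by the lowest mode
t = np.linspace(0, 20, 400)
x = 1.0 * np.outer(modes[:, 0], np.cos(np.sqrt(w2[0]) * t)) \
  + 0.1 * np.outer(modes[:, 1], np.cos(np.sqrt(w2[1]) * t))

# SVD-based PCA of the snapshot matrix: leading left singular vector
U, s, Vt = np.linalg.svd(x - x.mean(axis=1, keepdims=True), full_matrices=False)
alignment = abs(U[:, 0] @ modes[:, 0])   # |cosine| between PCA direction and mode shape
```

Because the first mode carries most of the variance, the principal direction aligns with the first eigenvector of K, which is the matching the abstract describes.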
Estimation of social value of statistical life using willingness-to-pay method in Nanjing, China.
Yang, Zhao; Liu, Pan; Xu, Xin
2016-10-01
Rational decision making regarding safety-related investment programs depends greatly on the economic valuation of traffic crashes. The primary objective of this study was to estimate the social value of statistical life in the city of Nanjing in China. A stated preference survey was conducted to investigate travelers' willingness to pay for traffic risk reduction. Face-to-face interviews were conducted at stations, shopping centers, schools, and parks in different districts in the urban area of Nanjing. The respondents were categorized into two groups: motorists and non-motorists. Both a binary logit model and a mixed logit model were developed for the two groups. The results revealed that the mixed logit model is superior to the fixed-coefficient binary logit model. The factors that significantly affect people's willingness to pay for risk reduction include income, education, gender, age, driving age (for motorists), occupation, whether the charged fees would be used to improve private vehicle equipment (for motorists), reduction in fatality rate, and change in travel cost. The Monte Carlo simulation method was used to generate the distribution of the value of statistical life (VSL). Based on the mixed logit model, the VSL had a mean value of 3,729,493 RMB ($586,610) with a standard deviation of 2,181,592 RMB ($343,142) for motorists, and a mean of 3,281,283 RMB ($505,318) with a standard deviation of 2,376,975 RMB ($366,054) for non-motorists. Using the tax system to illustrate the contribution of different income groups to social funds, the social value of statistical life was estimated. The average social value of statistical life was found to be 7,184,406 RMB ($1,130,032). Copyright © 2016 Elsevier Ltd. All rights reserved.
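The Monte Carlo step works by drawing the random risk coefficient from the mixed-logit mixing distribution and taking the marginal rate of substitution between risk and cost for each draw. The coefficient values and the lognormal mixing assumption below are illustrative placeholders, not the paper's estimates:

```python
import numpy as np

# Monte Carlo distribution of the value of statistical life (VSL) from a mixed
# logit fit: VSL is the marginal rate of substitution between the (random)
# fatality-risk coefficient and the (fixed) travel-cost coefficient.
rng = np.random.default_rng(0)
n_draws = 100_000

beta_risk = rng.lognormal(mean=0.0, sigma=0.5, size=n_draws)  # utility per unit risk reduction
beta_cost = 2.0e-4                                            # utility per RMB of travel cost

vsl = beta_risk / beta_cost          # RMB per statistical life, one value per draw
vsl_mean, vsl_sd = vsl.mean(), vsl.std()
```

A lognormal mixing distribution keeps the risk coefficient positive, which is why it is a common choice for willingness-to-pay attributes.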
The relationship between stochastic and deterministic quasi-steady state approximations.
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R
2015-11-23
The quasi-steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using its deterministic counterpart, providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
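The deterministic QSSA can be illustrated with the classic Michaelis-Menten example: integrate the full mass-action model of E + S <-> C -> E + P alongside the reduced non-elementary rate equation in a regime (E0 much smaller than S0 + Km) where the approximation should hold. All parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

# Full mass-action model versus its deterministic QSSA reduction
# dS/dt = -Vmax * S / (Km + S), with E0 << S0 + Km so the QSSA is valid.
k1, km1, k2 = 10.0, 1.0, 1.0
E0, S0 = 0.1, 10.0
Km, Vmax = (km1 + k2) / k1, k2 * E0

def full_rhs(y):
    S, C = y        # substrate and enzyme-substrate complex
    return np.array([-k1 * (E0 - C) * S + km1 * C,
                      k1 * (E0 - C) * S - (km1 + k2) * C])

def qssa_rhs(S):
    return -Vmax * S / (Km + S)        # non-elementary Michaelis-Menten rate

def rk4(f, y, dt, steps):
    for _ in range(steps):
        a = f(y); b = f(y + dt / 2 * a); c = f(y + dt / 2 * b); d = f(y + dt * c)
        y = y + dt / 6 * (a + 2 * b + 2 * c + d)
    return y

S_full = rk4(full_rhs, np.array([S0, 0.0]), 1e-3, 5000)[0]   # substrate at t = 5
S_qssa = rk4(qssa_rhs, np.array([S0]), 1e-3, 5000)[0]
```

The residual gap between the two trajectories is of order E0, the substrate sequestered in the complex, which is exactly the kind of discrepancy the paper's validity analysis tracks.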
Electric Power Distribution System Model Simplification Using Segment Substitution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
Skeletal Mechanism Generation of Surrogate Jet Fuels for Aeropropulsion Modeling
NASA Astrophysics Data System (ADS)
Sung, Chih-Jen; Niemeyer, Kyle E.
2010-05-01
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented, with skeletal reductions of two important hydrocarbon components, n-heptane and n-decane, relevant to surrogate jet fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP): DRGEP is first applied to efficiently remove many unimportant species, and sensitivity analysis then removes further unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each previous method, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal.
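The error-propagation step of DRGEP — scoring each species by the maximum over graph paths of the product of direct interaction coefficients from a target species — maps onto a Dijkstra-style max-product search. A toy sketch with hypothetical species names and coefficients, not a real mechanism:

```python
import heapq

def drgep_importance(edges, target):
    """Overall interaction coefficients R[s]: maximum over paths from the target
    of the product of direct interaction coefficients (edge weights in [0, 1])."""
    R = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        neg_r, u = heapq.heappop(heap)
        r = -neg_r
        if r < R.get(u, 0.0):
            continue                    # stale queue entry
        for v, w in edges.get(u, {}).items():
            rv = r * w                  # coefficient propagated along this path
            if rv > R.get(v, 0.0):
                R[v] = rv
                heapq.heappush(heap, (-rv, v))
    return R

def skeletal_species(edges, targets, threshold):
    """Species kept in the skeletal mechanism for a given cutoff threshold."""
    keep = set(targets)
    for t in targets:
        R = drgep_importance(edges, t)
        keep |= {s for s, r in R.items() if r >= threshold}
    return keep
```

Sweeping the threshold upward and checking the induced error against a tolerance is what yields the minimal skeletal mechanism before the sensitivity-analysis stage.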
O'Regan, Barry; Devine, Maria; Bhopal, Sats
2013-01-01
Stable anatomical fracture reduction and segment control before miniplate fixation can be difficult to achieve in comminuted midfacial fractures. Fracture mobilization and reduction methods include Gillies elevation, malar hook, and Dingman elevators. No single method is used universally. Disadvantages include imprecise segment alignment and poor segment stability/control. We have employed screw-wire osteo-traction (SWOT) to address this problem. A literature review revealed two published reports. The aims were to evaluate the SWOT technique effectiveness as a fracture reduction method and to examine rates of revision fixation and plate removal. We recruited 40 consecutive patients requiring open reduction and internal fixation of multisegment midfacial fractures (2009–2012) and employed miniplate osteosynthesis in all patients. SWOT was used as a default reduction method in all patients. The rates of successful fracture reduction achieved by SWOT alone or in combination and of revision fixation and plate removal, were used as outcome indices of the reduction method effectiveness. The SWOT technique achieved satisfactory anatomical reduction in 27/40 patients when used alone. Other reduction methods were also used in 13/40 patients. No patient required revision fixation and three patients required late plate removal. SWOT can be used across the midface fracture pattern in conjunction with other methods or as a sole reduction method before miniplate fixation. PMID:24436763
Model predictive control based on reduced order models applied to belt conveyor system.
Chen, Wei; Li, Xin
2016-11-01
In the paper, a model predictive controller based on a reduced order model is proposed to control a belt conveyor system, an electro-mechanical complex system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced order model of the belt conveyor system is presented. Because of the error bound between the full-order and reduced order models, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Al-Rabadi, Anas N.
2009-10-01
This research introduces a new method of intelligent control of the Buck converter using a newly developed small-signal model of the pulse width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then a numerical technique from robust control, linear matrix inequality (LMI) optimization, is used to determine the permutation matrix [P] so that a complete system transformation {[B˜], [C˜], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the model of the Buck converter and thus uses a simpler controller that produces the desired system response for performance enhancement.
Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.
2010-01-01
Epistasis or gene-gene interaction is a fundamental component of the genetic architecture of complex traits such as disease susceptibility. Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free method to detect epistasis when there are no significant marginal genetic effects. However, in many studies of complex disease, other covariates such as age of onset and smoking status could have a strong main effect and may potentially interfere with MDR's ability to achieve its goal. In this paper, we present a simple and computationally efficient sampling method to adjust for covariate effects in MDR. We use simulation to show that after adjustment, MDR has sufficient power to detect true gene-gene interactions. We also compare our method with the state-of-the-art technique in covariate adjustment. The results suggest that our proposed method performs similarly, but is more computationally efficient. We then apply this new method to an analysis of a population-based bladder cancer study in New Hampshire. PMID:20924193
Vijayaraghavan, Krish; Seigneur, Christian; Bronson, Rochelle; Chen, Shu-Yun; Karamchandani, Prakash; Walters, Justin T; Jansen, John J; Brandmeyer, Jo Ellen; Knipping, Eladio M
2010-03-01
The contrasting effects of point source nitrogen oxides (NOx) and sulfur dioxide (SO2) air emission reductions on regional atmospheric nitrogen deposition are analyzed for the case study of a coal-fired power plant in the southeastern United States. The effect of potential emission reductions at the plant on nitrogen deposition to Escambia Bay and its watershed on the Florida-Alabama border is simulated using the three-dimensional Eulerian Community Multiscale Air Quality (CMAQ) model. A method to quantify the relative and individual effects of NOx versus SO2 controls on nitrogen deposition using air quality modeling results obtained from the simultaneous application of NOx and SO2 emission controls is presented and discussed using the results from CMAQ simulations conducted with NOx-only and SO2-only emission reductions; the method applies only to cases in which ambient inorganic nitrate is present mostly in the gas phase; that is, in the form of gaseous nitric acid (HNO3). In such instances, the individual effects of NOx and SO2 controls on nitrogen deposition can be approximated by the effects of combined NOx + SO2 controls on the deposition of NOy, (the sum of oxidized nitrogen species) and reduced nitrogen species (NHx), respectively. The benefit of controls at the plant in terms of the decrease in nitrogen deposition to Escambia Bay and watershed is less than 6% of the overall benefit due to regional Clean Air Interstate Rule (CAIR) controls.
Office-based deep sedation for pediatric ophthalmologic procedures using a sedation service model.
Lalwani, Kirk; Tomlinson, Matthew; Koh, Jeffrey; Wheeler, David
2012-01-01
Aims. (1) To assess the efficacy and safety of pediatric office-based sedation for ophthalmologic procedures using a pediatric sedation service model. (2) To assess the reduction in hospital charges of this model of care delivery compared to the operating room (OR) setting for similar procedures. Background. Sedation is used to facilitate pediatric procedures and to immobilize patients for imaging and examination. We believe that the pediatric sedation service model can be used to facilitate office-based deep sedation for brief ophthalmologic procedures and examinations. Methods. After IRB approval, all children who underwent office-based ophthalmologic procedures at our institution between January 1, 2000 and July 31, 2008 were identified using the sedation service database and the electronic health record. A comparison of hospital charges between similar procedures in the operating room was performed. Results. A total of 855 procedures were reviewed. Procedure completion rate was 100% (C.I. 99.62-100). There were no serious complications or unanticipated admissions. Our analysis showed a significant reduction in hospital charges (average of $1287 per patient) as a result of absent OR and recovery unit charges. Conclusions. Pediatric ophthalmologic minor procedures can be performed using a sedation service model with significant reductions in hospital charges.
Wavelet packets for multi- and hyper-spectral imagery
NASA Astrophysics Data System (ADS)
Benedetto, J. J.; Czaja, W.; Ehler, M.; Flake, C.; Hirn, M.
2010-01-01
State-of-the-art dimension reduction and classification schemes in multi- and hyper-spectral imaging rely primarily on the information contained in the spectral component. To better capture the joint spatial and spectral data distribution, we combine the Wavelet Packet Transform with the linear dimension reduction method of Principal Component Analysis. Each spectral band is decomposed by means of the Wavelet Packet Transform, and we consider a joint entropy across all the spectral bands as a tool to exploit the spatial information. Dimension reduction is then applied to the Wavelet Packet coefficients. We present examples of this technique for hyper-spectral satellite imaging. We also investigate the role of various shrinkage techniques to model non-linearity in our approach.
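A heavily simplified version of this pipeline — a one-level 2-D Haar transform of each spectral band followed by PCA across the spectral dimension of the coefficients — can be sketched with numpy alone. The paper uses full wavelet packet decompositions and a joint-entropy criterion; this sketch omits both and is only a structural illustration:

```python
import numpy as np

def haar2(band):
    """One-level 2-D Haar transform of a single band (even height and width)."""
    a = (band[0::2] + band[1::2]) / 2.0          # vertical averages
    d = (band[0::2] - band[1::2]) / 2.0          # vertical details
    rows = np.vstack([a, d])
    return np.hstack([(rows[:, 0::2] + rows[:, 1::2]) / 2.0,
                      (rows[:, 0::2] - rows[:, 1::2]) / 2.0])

def wavelet_pca(cube, k=2):
    """cube: (bands, H, W). Haar-transform each band spatially, then reduce the
    spectral dimension of the coefficient vectors with PCA."""
    coeffs = np.stack([haar2(b) for b in cube])
    X = coeffs.reshape(cube.shape[0], -1).T       # one row per spatial location
    Xc = X - X.mean(axis=0, keepdims=True)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, s                       # k-dimensional scores per location
```

Running PCA on wavelet coefficients rather than raw pixels is what lets the spatial structure of each band influence the spectral reduction.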
Reduction of parameters in Finite Unified Theories and the MSSM
NASA Astrophysics Data System (ADS)
Heinemeyer, Sven; Mondragón, Myriam; Tracas, Nicholas; Zoupanos, George
2018-02-01
The method of reduction of couplings developed by W. Zimmermann, combined with supersymmetry, can lead to realistic quantum field theories, where the gauge and Yukawa sectors are related. It is the basis to find all-loop Finite Unified Theories, where the β-function vanishes to all-loops in perturbation theory. It can also be applied to the Minimal Supersymmetric Standard Model, leading to a drastic reduction in the number of parameters. Both Finite Unified Theories and the reduced MSSM lead to successful predictions for the masses of the third generation of quarks and the Higgs boson, and also predict a heavy supersymmetric spectrum, consistent with the non-observation of supersymmetry so far.
NASA Astrophysics Data System (ADS)
Chen, Guangyan; Xia, Huaijian; Chen, Meiling; Wang, Dong; Jia, Sujin
2017-10-01
Energy-saving and emission-reduction policies affect the development of energy-intensive industry and thereby electricity demand, so studying the business cycle of the electricity industry helps in understanding the state of the national economy. This paper analyses the influence of energy saving and emission reduction on the power generation structure and pollutant emissions of the power industry. We construct a composite boom index for the electricity market and use it to study the volatility characteristics and trends of the market. This provides a method by which enterprises and the government can infer the overall state of the national economy from power data.
[Vitamin K3-induced activation of molecular oxygen in glioma cells].
Krylova, N G; Kulagova, T A; Semenkova, G N; Cherenkevich, S N
2009-01-01
It has been shown by fluorescence analysis that the rate of hydrogen peroxide generation in human U251 glioma cells under the effect of lipophilic (menadione) or hydrophilic (vikasol) analogues of vitamin K3 was different. Analyzing the experimental data, we conclude that menadione undergoes one- and two-electron reduction by intracellular reductases in glioma cells. Reduced forms of menadione interact with molecular oxygen, leading to reactive oxygen species (ROS) generation. A theoretical model of ROS generation including two competitive processes of one- and two-electron reduction of menadione has been proposed. Rate constants of ROS generation mediated by the one-electron reduction process have been estimated.
Aurumskjöld, Marie-Louise; Ydström, Kristina; Tingberg, Anders; Söderberg, Marcus
2017-01-01
The number of computed tomography (CT) examinations is increasing, leading to an increase in total patient exposure. It is therefore important to optimize CT scan imaging conditions in order to reduce the radiation dose. The introduction of iterative reconstruction methods has enabled an improvement in image quality and a reduction in radiation dose. To investigate how image quality depends on reconstruction method and to discuss patient dose reduction resulting from the use of hybrid and model-based iterative reconstruction. An image quality phantom (Catphan® 600) and an anthropomorphic torso phantom were examined on a Philips Brilliance iCT. The image quality was evaluated in terms of CT numbers, noise, noise power spectra (NPS), contrast-to-noise ratio (CNR), low-contrast resolution, and spatial resolution for different scan parameters and dose levels. The images were reconstructed using filtered back projection (FBP) and different settings of hybrid (iDose4) and model-based (IMR) iterative reconstruction methods. iDose4 decreased the noise by 15-45% compared with FBP, depending on the level of iDose4. The IMR reduced the noise even further, by 60-75% compared to FBP. The results are independent of dose. The NPS showed changes in the noise distribution for different reconstruction methods. The low-contrast resolution and CNR were improved with iDose4, and the improvement was even greater with IMR. There is great potential to reduce noise and thereby improve image quality by using hybrid or, in particular, model-based iterative reconstruction methods, or to lower radiation dose and maintain image quality. © The Foundation Acta Radiologica 2016.
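The noise power spectrum used in such comparisons is typically estimated from an ensemble of mean-subtracted noise ROIs. A generic numpy sketch of the standard 2-D NPS definition, not the authors' measurement code:

```python
import numpy as np

def nps_2d(rois, pixel_mm):
    """2-D noise power spectrum (area-normalized) from an ensemble of noise ROIs
    of identical shape; rois has shape (n_rois, ny, nx)."""
    rois = np.asarray(rois, dtype=float)
    _, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove each ROI's mean
    power = np.abs(np.fft.fft2(rois)) ** 2                # per-ROI periodograms
    return power.mean(axis=0) * pixel_mm ** 2 / (nx * ny)
```

By Parseval's theorem, summing the NPS over all frequency bins and dividing by the ROI area recovers the average pixel-noise variance, a useful sanity check on the normalization.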
Final technical report provides test methods used and verification results to be published on ETV web sites. The ETS UV System Model UVL-200-4 was tested to validate the UV dose delivered by the system using biodosimetry and a set line approach. The set line for 40 mJ/cm2 Red...
Zhang, Wandi; Chen, Feng; Wang, Zijia; Huang, Jianling; Wang, Bo
2017-11-01
Public transportation automatic fare collection (AFC) systems are able to continuously record large amounts of passenger travel information, providing massive, low-cost data for research on regulations pertaining to public transport. These data can be used not only to analyze characteristics of passengers' trips but also to evaluate transport policies that promote a travel mode shift and emission reduction. In this study, models combining card, survey, and geographic information systems (GIS) data are established with a research focus on the private driving restriction policies being implemented in an ever-increasing number of cities. The study aims to evaluate the impact of these policies on the travel mode shift, as well as relevant carbon emission reductions. The private driving restriction policy implemented in Beijing is taken as an example. The impact of the restriction policy on the travel mode shift from cars to subways is analyzed through a model based on metro AFC data. The routing paths of these passengers are also analyzed based on the GIS method and on survey data, while associated carbon emission reductions are estimated. The analysis method used in this study can provide reference for the application of big data in evaluating transport policies. Motor vehicles have become the most prevalent source of emissions and subsequently air pollution within Chinese cities. The evaluation of the effects of driving restriction policies on the travel mode shift and vehicle emissions will be useful for other cities in the future. Transport big data, playing an important support role in estimating the travel mode shift and emission reduction considered, can help related departments to estimate the effects of traffic jam alleviation and environment improvement before the implementation of these restriction policies and provide a reference for relevant decisions.
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
1997-01-01
The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. 
One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
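The kind of standard order-reduction mentioned above can be illustrated with a toy modal-truncation sketch. Everything below (the eigenvalues, the input/output coefficients, and the DC-gain ranking criterion) is invented for illustration and is not the NASA/University of Akron software:

```python
# Toy modal truncation: a stable linear model in modal (diagonal) form,
# dx_i/dt = lam_i*x_i + b_i*u,  y = sum_i c_i*x_i, is reduced by keeping
# only the modes that contribute most to the steady-state (DC) gain.
lam = [-1.0, -50.0, -200.0, -1000.0]   # mode eigenvalues (all stable), assumed
b = [1.0, 2.0, 5.0, 10.0]              # input coefficients, assumed
c = [1.0, 0.5, 0.1, 0.05]              # output coefficients, assumed

def dc_gain(modes):
    # DC gain of a modal system: sum of -c_i*b_i/lam_i over retained modes
    return sum(-c[i] * b[i] / lam[i] for i in modes)

# rank modes by their DC-gain contribution and keep the top two
ranked = sorted(range(len(lam)), key=lambda i: abs(c[i] * b[i] / lam[i]),
                reverse=True)
kept = sorted(ranked[:2])

full_gain = dc_gain(range(len(lam)))
reduced_gain = dc_gain(kept)
rel_err = abs(full_gain - reduced_gain) / abs(full_gain)
print(kept, rel_err)   # two retained states, sub-1% steady-state error
```

The fast, weakly coupled modes are dropped with little change to the low-frequency response, which is the essential idea behind reducing a model "by several orders of magnitude without significantly changing the dynamic response."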
Clinical and histopathological study of the TriPollar home-use device for body treatments.
Boisnic, Sylvie; Branchet, Marie-Christine; Birnstiel, Oliver; Beilin, Ghislaine
2010-01-01
Professional non-invasive treatments for body contouring based on radiofrequency (RF) have become popular in aesthetic clinics due to proven efficacy and safety. A new home-use RF device for body treatments has been developed based on TriPollar technology. Our objective was to evaluate the TriPollar home-use device for circumference reduction, cellulite improvement and skin tightening using objective and subjective methods. An ex-vivo human skin model was used for histological and biochemical evaluations of the TriPollar clinical effect. Additionally, twenty-four subjects used the new device on the abdomen and thigh areas and the circumference reduction was measured. Ex-vivo models indicated a significant increase of 82% in hypodermal glycerol release. Histology revealed a 34% alteration in adipocyte appearance. Collagen synthesis increased by 31% following TriPollar treatment. A significant average reduction of 2.4 cm was measured on the treated thighs. On the control thighs a lesser, non-significant reduction was found. Average abdominal laxity was reduced from 1.4 at baseline to 0.8 following treatments. A modest reduction in abdominal circumference was also measured, although it was not significant. The reported results demonstrate the safety and efficacy of the new TriPollar home-use device for body contouring and skin tightening. Treatment may lead to a discrete circumference reduction and moderate laxity improvement.
Patino, Manuel; Fuentes, Jorge M; Hayano, Koichi; Kambadakone, Avinash R; Uyeda, Jennifer W; Sahani, Dushyant V
2015-02-01
OBJECTIVE. The objective of our study was to compare the performance of three hybrid iterative reconstruction techniques (IRTs) (ASiR, iDose4, SAFIRE) and their respective strengths for image noise reduction on low-dose CT examinations using filtered back projection (FBP) as the standard reference. Also, we compared the performance of these three hybrid IRTs with two model-based IRTs (Veo and IMR) for image noise reduction on low-dose examinations. MATERIALS AND METHODS. An anthropomorphic abdomen phantom was scanned at 100 and 120 kVp and different tube current-exposure time products (25-100 mAs) on three CT systems (for ASiR and Veo, Discovery CT750 HD; for iDose4 and IMR, Brilliance iCT; and for SAFIRE, Somatom Definition Flash). Images were reconstructed using FBP and using IRTs at various strengths. Nine noise measurements (mean ROI size, 423 mm²) on extracolonic fat for the different strengths of IRTs were recorded and compared with FBP using ANOVA. Radiation dose, which was measured as the volume CT dose index and dose-length product, was also compared. RESULTS. There were no significant differences in radiation dose and image noise among the scanners when FBP was used (p > 0.05). Gradual image noise reduction was observed with each increasing increment of hybrid IRT strength, with a maximum noise suppression of approximately 50% (48.2-53.9%). Similar noise reduction was achieved on the scanners by applying specific hybrid IRT strengths. Maximum noise reduction was higher on model-based IRTs (68.3-81.1%) than hybrid IRTs (48.2-53.9%) (p < 0.05). CONCLUSION. When constant scanning parameters are used, radiation dose and image noise on FBP are similar for CT scanners made by different manufacturers. Significant image noise reduction is achieved on low-dose CT examinations rendered with IRTs. The image noise on various scanners can be matched by applying specific hybrid IRT strengths. 
Model-based IRTs attain substantially higher noise reduction than hybrid IRTs irrespective of the radiation dose.
NASA Astrophysics Data System (ADS)
Lyashenko, Ya. A.; Popov, V. L.
2018-01-01
A dynamic model of nanostructuring burnishing of the surface of metallic parts, taking plastic deformation into account, is proposed. To describe plasticity, the method of dimension reduction, supplemented with a plasticity criterion, is used. The model considers the action of the normal burnishing force and the tangential friction force. The effects of the coefficient of friction and of periodic oscillation of the burnishing force on the burnishing kinetics are investigated.
Mathematical neuroscience: from neurons to circuits to systems.
Gutkin, Boris; Pinto, David; Ermentrout, Bard
2003-01-01
Applications of mathematics and computational techniques to our understanding of neuronal systems are provided. Reduction of membrane models to simplified canonical models demonstrates how neuronal spike-time statistics follow from simple properties of neurons. Averaging over space allows one to derive a simple model for the whisker barrel circuit and use this to explain and suggest several experiments. Spatio-temporal pattern formation methods are applied to explain the patterns seen in the early stages of drug-induced visual hallucinations.
van Manen's method and reduction in a phenomenological hermeneutic study.
Heinonen, Kristiina
2015-03-01
To describe van Manen's method and the concept of reduction in a study that used a phenomenological hermeneutic approach. Nurse researchers have used van Manen's method in different ways. Participants' lifeworlds are described in depth, but descriptions of reduction have been brief. A literature and knowledge review and a manual search of research articles were conducted in the Web of Science, PubMed, CINAHL and PsycINFO databases, without applying a time period, to identify uses of van Manen's method. This paper shows how van Manen's method has been used in nursing research and gives some examples of van Manen's reduction. Reduction enables us to conduct in-depth phenomenological hermeneutic research and understand people's lifeworlds. As there are many variations in adapting reduction, it is complex and confusing. This paper contributes to the discussion of phenomenology, hermeneutic study and reduction. It opens up reduction as a method for researchers to use.
Grebenstein, Patricia E.; Burroughs, Danielle; Roiko, Samuel A.; Pentel, Paul R.; LeSage, Mark G.
2015-01-01
Background The FDA is considering reducing the nicotine content in tobacco products as a population-based strategy to reduce tobacco addiction. Research is needed to determine the threshold level of nicotine needed to maintain smoking and the extent of compensatory smoking that could occur during nicotine reduction. Sources of variability in these measures across sub-populations also need to be identified so that policies can take into account the risks and benefits of nicotine reduction in vulnerable populations. Methods The present study examined these issues in a rodent nicotine self-administration model of nicotine reduction policy to characterize individual differences in nicotine reinforcement thresholds, degree of compensation, and elasticity of demand during progressive reduction of the unit nicotine dose. The ability of individual differences in baseline nicotine intake and nicotine pharmacokinetics to predict responses to dose reduction was also examined. Results Considerable variability in the reinforcement threshold, compensation, and elasticity of demand was evident. High baseline nicotine intake was not correlated with the reinforcement threshold, but predicted less compensation and less elastic demand. Higher nicotine clearance predicted low reinforcement thresholds, greater compensation, and less elastic demand. Less elastic demand also predicted lower reinforcement thresholds. Conclusions These findings suggest that baseline nicotine intake, nicotine clearance, and the essential value of nicotine (i.e. elasticity of demand) moderate the effects of progressive nicotine reduction in rats and warrant further study in humans. They also suggest that smokers with fast nicotine metabolism may be more vulnerable to the risks of nicotine reduction. PMID:25891231
75 FR 12753 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-17
... effective at improving health care quality. While evidence-based approaches for decisionmaking have become standard in healthcare, this has been limited in laboratory medicine. No single- evidence-based model for... (LMBP) initiative to develop new systematic evidence reviews methods for making evidence-based...
Sources of Mercury Exposure for U.S. Seafood Consumers: Implications for Policy
Recent policies attempting to reduce adverse effects of methylmercury exposure from fish consumption in the U.S. have targeted reductions in anthropogenic emissions from U.S. sources. Methods: We use models that simulate global atmospheric chemistry (GEOS-Chem); the fate, transp...
Estimation of PM2.5 Concentration Efficiency and Potential Public Mortality Reduction in Urban China
Yu, Anyu; Jia, Guangshe; You, Jianxin
2018-01-01
Particulate matter 2.5 (PM2.5) is a serious air pollutant in China that poses substantial risks to public health. To reduce the pollution and the corresponding public mortality, this paper proposes a method that incorporates slacks-based data envelopment analysis (DEA) and an integrated exposure risk (IER) model. By identifying the relationship between PM2.5 concentration and mortality, the potential PM2.5 concentration efficiency and mortality reduction were measured. The proposed method was applied to 243 Chinese cities in 2015. Several implications emerge. (1) There are urban disparities in the estimated results across China. The geographic distribution of urban mortality reduction is consistent with that of PM2.5 concentration efficiency, but some inconsistency also exists. (2) Pollution reduction and public health improvement should be addressed among China's cities, especially those in the northern coastal, eastern coastal, and middle Yellow River areas. The PM2.5 concentration reduction experience of cities in the southern coastal area could be advocated throughout China. (3) Environmental considerations should be part of production adjustment in the cities of central China. The updating of technology is suggested for specific cities and should be considered by policymakers. PMID:29543783
Reduction of Dynamic Loads in Mine Lifting Installations
NASA Astrophysics Data System (ADS)
Kuznetsov, N. K.; Eliseev, S. V.; Perelygina, A. Yu
2018-01-01
This article addresses the reduction of the dynamic loads that arise in transitional operating modes of mine lifting installations, which lead to severe oscillations of the lifting vessels and reduced operating efficiency and reliability. Known methods and means of reducing dynamic loads and oscillations in similar equipment are analysed. It is shown that an approach based on the concept of inverse problems of dynamics can be an effective method for solving this problem. The article describes the design model of a one-ended lifting installation in the form of a two-mass oscillatory system, in which the inertial elements are the mass of the lifting vessel and the reduced mass of the engine, reducer, drum and pulley. A simplified mathematical model of this system is given, together with the results of a study of the efficiency of an active method for reducing the dynamic loads of the lifting installation based on the concept of inverse problems of dynamics.
Computations of Flow over a Hump Model Using Higher Order Method with Turbulence Modeling
NASA Technical Reports Server (NTRS)
Balakumar, P.
2005-01-01
Turbulent separated flow over a two-dimensional hump is computed by solving the RANS equations with the k-omega (SST) turbulence model for the baseline, steady suction and oscillatory blowing/suction flow control cases. The flow equations and the turbulence model equations are solved using a fifth-order accurate weighted essentially nonoscillatory (WENO) scheme for space discretization and a third-order, total variation diminishing (TVD) Runge-Kutta scheme for time integration. Qualitatively, the computed pressure distributions exhibit the same behavior as those observed in the experiments. The computed separation regions are much longer than those observed experimentally. However, the percentage reduction in the separation region in the steady suction case is closer to what was measured in the experiment. The computations did not predict the expected reduction in the separation length in the oscillatory case. The predicted turbulent quantities are two to three times smaller than the measured values, pointing towards the deficiencies in existing turbulence models when they are applied to strong steady/unsteady separated flows.
Evaluation of wetland implementation strategies on phosphorus reduction at a watershed scale
NASA Astrophysics Data System (ADS)
Abouali, Mohammad; Nejadhashemi, A. Pouyan; Daneshvar, Fariborz; Adhikari, Umesh; Herman, Matthew R.; Calappi, Timothy J.; Rohn, Bridget G.
2017-09-01
Excessive nutrient use in agricultural practices is a major cause of water quality degradation around the world, which results in eutrophication of the freshwater systems. Among the nutrients, phosphorus enrichment has recently drawn considerable attention due to major environmental issues such as Lake Erie and Chesapeake Bay eutrophication. One approach for mitigating the impacts of excessive nutrients on water resources is the implementation of wetlands. However, proper site selection for wetland implementation is the key for effective water quality management at the watershed scale, which is the goal of this study. In this regard, three conventional and two pseudo-random targeting methods were considered. A watershed model called the Soil and Water Assessment Tool (SWAT) was coupled with another model called System for Urban Stormwater Treatment and Analysis IntegratioN (SUSTAIN) to simulate the impacts of wetland implementation scenarios in the Saginaw River watershed, located in Michigan. The inter-group similarities of the targeting strategies were investigated and it was shown that the level of similarity increases as the target area increases (0.54-0.86). In general, the conventional targeting method based on phosphorus load generated per unit area at the subwatershed scale had the highest average reduction among all the scenarios (44.46 t/year). However, when considering the total area of implemented wetlands, the conventional method based on long-term impacts of wetland implementation showed the highest amount of phosphorus reduction (36.44 t/year).
Novel Framework for Reduced Order Modeling of Aero-engine Components
NASA Astrophysics Data System (ADS)
Safi, Ali
The present study focuses on the popular dynamic reduction methods used in design of complex assemblies (millions of degrees of freedom) where numerous iterations are involved to achieve the final design. Aerospace manufacturers such as Rolls Royce and Pratt & Whitney are actively seeking techniques that reduce computational time while maintaining accuracy of the models. This involves modal analysis of components with complex geometries to determine the dynamic behavior due to non-linearity and complicated loading conditions. In such a case the sub-structuring and dynamic reduction techniques prove to be an efficient tool to reduce design cycle time. The components whose designs are finalized can be dynamically reduced to mass and stiffness matrices at the boundary nodes in the assembly. These matrices conserve the dynamics of the component in the assembly, and thus avoid repeated calculations during the analysis runs for design modification of other components. This thesis presents a novel framework in terms of modeling and meshing of any complex structure, in this case an aero-engine casing. In this study the effect of meshing techniques on the run time is highlighted. The modal analysis is carried out using an extremely fine mesh to ensure all minor details in the structure are captured correctly in the Finite Element (FE) model. This is used as the reference model, to compare against the results of the reduced model. The study also shows the conditions/criteria under which dynamic reduction can be implemented effectively, proving the accuracy of the Craig-Bampton (C.B.) method and the limitations of static condensation. The study highlights the longer runtime needed to produce the reduced matrices of components compared to the overall runtime of the complete unreduced model, although once the components are reduced, the assembly runs are significantly faster.
Hence the decision to use Component Mode Synthesis (CMS) is to be taken judiciously considering the number of iterations that may be required during the design cycle.
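The static condensation whose limitations the thesis examines can be shown on a minimal spring-chain example. The springs and the master/slave partitioning below are made up for illustration; Guyan condensation eliminates the interior (slave) degree of freedom exactly for static loads:

```python
# Guyan (static) condensation sketch: two springs in a chain,
# node1 --k1-- node2 --k2-- node3, with the interior node 2 condensed out.
k1, k2 = 2.0, 3.0   # spring stiffnesses, assumed

# Partitioned stiffness, DOF order (node1, node3 | node2):
Kmm = [[k1, 0.0], [0.0, k2]]   # master-master (end nodes)
Kms = [[-k1], [-k2]]           # master-slave coupling
Ksm = [[-k1, -k2]]             # slave-master coupling
Kss = [[k1 + k2]]              # slave-slave (interior node)

# K_red = Kmm - Kms * inv(Kss) * Ksm   (Kss is 1x1 here, so inversion is trivial)
inv_kss = 1.0 / Kss[0][0]
K_red = [[Kmm[i][j] - Kms[i][0] * inv_kss * Ksm[0][j] for j in range(2)]
         for i in range(2)]

# The condensed 2-DOF model must behave like the series spring k1*k2/(k1+k2)
ke = k1 * k2 / (k1 + k2)
print(K_red, ke)
```

The condensed matrix reproduces the series-spring stiffness exactly for static loads; it is the neglect of the slave DOF's inertia that limits static condensation dynamically, which is what Craig-Bampton remedies by adding fixed-interface modes.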
Reductive capacity measurement of waste forms for secondary radioactive wastes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Um, Wooyong; Yang, Jung-Seok; Serne, R. Jeffrey
2015-12-01
The reductive capacities of dry ingredients and final solid waste forms were measured using both the Cr(VI) and Ce(IV) methods and the results were compared. Blast furnace slag (BFS), sodium sulfide, SnF2, and SnCl2 used as dry ingredients to make various waste forms showed significantly higher reductive capacities compared to other ingredients regardless of which method was used. Although the BFS exhibits appreciable reductive capacity, it requires greater amounts of time to fully react. In almost all cases, the Ce(IV) method yielded larger reductive capacity values than those from the Cr(VI) method and can be used as an upper bound for the reductive capacity of the dry ingredients and waste forms, because the Ce(IV) method subjects the solids to a strong acid (low pH) condition that dissolves much more of the solids. Because the Cr(VI) method relies on a neutral pH condition, the Cr(VI) method can be used to estimate primarily the waste form surface-related and readily dissolvable reductive capacity. However, the Cr(VI) method does not measure the total reductive capacity of the waste form, the long-term reductive capacity afforded by very slowly dissolving solids, or the reductive capacity present in the interior pores and internal locations of the solids.
On the precision of quasi steady state assumptions in stochastic dynamics
NASA Astrophysics Data System (ADS)
Agarwal, Animesh; Adams, Rhys; Castellani, Gastone C.; Shouval, Harel Z.
2012-07-01
Many biochemical networks have complex multidimensional dynamics and there is a long history of methods that have been used for dimensionality reduction for such reaction networks. Usually a deterministic mass action approach is used; however, in small volumes, there are significant fluctuations from the mean which the mass action approach cannot capture. In such cases stochastic simulation methods should be used. In this paper, we evaluate the applicability of one such dimensionality reduction method, the quasi-steady state approximation (QSSA) [L. Michaelis and M. L. Menten, "Die Kinetik der Invertinwirkung," Biochem. Z. 49, 333-369 (1913)] for dimensionality reduction in case of stochastic dynamics. First, the applicability of the QSSA approach is evaluated for a canonical system of enzyme reactions. Application of QSSA to such a reaction system in a deterministic setting leads to Michaelis-Menten reduced kinetics which can be used to derive the equilibrium concentrations of the reaction species. In the case of stochastic simulations, however, the steady state is characterized by fluctuations around the mean equilibrium concentration. Our analysis shows that a QSSA based approach for dimensionality reduction captures well the mean of the distribution as obtained from a full dimensional simulation but fails to accurately capture the distribution around that mean. Moreover, the QSSA approximation is not unique. We have then extended the analysis to a simple bistable biochemical network model proposed to account for the stability of synaptic efficacies; the substrate of learning and memory [J. E. Lisman, "A mechanism of memory storage insensitive to molecular turnover: A bistable autophosphorylating kinase," Proc. Natl. Acad. Sci. U.S.A. 82, 3055-3057 (1985)], 10.1073/pnas.82.9.3055. Our analysis shows that a QSSA based dimensionality reduction method results in errors as big as two orders of magnitude in predicting the residence times in the two stable states.
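The deterministic QSSA reduction for the canonical enzyme system can be sketched as follows. The rate constants and initial concentrations below are illustrative, not taken from the paper; the full mass-action ODEs for E + S <-> ES -> E + P are integrated alongside the Michaelis-Menten-reduced model:

```python
# Deterministic QSSA sketch (illustrative constants, forward-Euler integration)
k1, km1, k2 = 10.0, 1.0, 1.0        # binding, unbinding, catalysis rates
E0, S0 = 0.1, 1.0                    # total enzyme and initial substrate
Km, Vmax = (km1 + k2) / k1, k2 * E0  # Michaelis constant and max velocity
dt, steps = 0.001, 30000             # integrate to t = 30

# Full mass-action system
S, ES, P = S0, 0.0, 0.0
for _ in range(steps):
    E = E0 - ES                               # enzyme conservation
    dS = -k1 * E * S + km1 * ES
    dES = k1 * E * S - (km1 + k2) * ES
    S, ES, P = S + dt * dS, ES + dt * dES, P + dt * k2 * ES

# QSSA-reduced (Michaelis-Menten) system: dP/dt = Vmax*S/(Km+S)
Sr, Pr = S0, 0.0
for _ in range(steps):
    rate = Vmax * Sr / (Km + Sr)
    Sr, Pr = Sr - dt * rate, Pr + dt * rate

print(P, Pr)   # full vs reduced product concentrations agree closely
```

In this deterministic setting the reduced model tracks the full one well (E0 is small relative to S0 + Km); the paper's point is that agreement in the mean does not carry over to the fluctuation statistics of the stochastic dynamics.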
Iterative deblending of simultaneous-source data using a coherency-pass shaping operator
NASA Astrophysics Data System (ADS)
Zu, Shaohuan; Zhou, Hui; Mao, Weijian; Zhang, Dong; Li, Chao; Pan, Xiao; Chen, Yangkang
2017-10-01
Simultaneous-source acquisition offers great economic savings, but it brings an unprecedented challenge: removing the crosstalk interference in the recorded seismic data. In this paper, we propose a novel iterative method to separate the simultaneous-source data based on a coherency-pass shaping operator. The coherency-pass filter is used to constrain the model, that is, the unblended data to be estimated, in the shaping regularization framework. In the simultaneous-source survey, the incoherent interference from adjacent shots greatly increases the rank of the frequency-domain Hankel matrix that is formed from the blended record. Thus, a method based on rank reduction is capable of separating the blended record to some extent. However, the shortcoming is that it may leave residual noise when there is strong blending interference. We propose to cascade the rank reduction and thresholding operators to deal with this issue. In the initial iterations, we adopt a small rank to severely separate the blended interference and a large thresholding value as a strong constraint to remove the residual noise in the time domain. In the later iterations, since more and more events have been recovered, we weaken the constraint by increasing the rank and shrinking the threshold to recover weak events and to guarantee convergence. In this way, the combined rank reduction and thresholding strategy acts as a coherency-pass filter, which passes only the coherent high-amplitude component after rank reduction instead of passing both signal and noise as in traditional rank-reduction-based approaches. Two synthetic examples are tested to demonstrate the performance of the proposed method. In addition, the application to two field data sets (common receiver gathers and stacked profiles) further validates the effectiveness of the proposed method.
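The cascade of rank reduction followed by thresholding can be illustrated on a toy matrix. The rank-1 "signal" plus a single spike standing in for incoherent blending interference, and the use of power iteration for the rank-1 approximation, are simplifications invented here, not the authors' Hankel-matrix operator:

```python
# Toy rank-reduction + thresholding cascade (not the paper's actual operator)
u, v = [1.0, 2.0, 3.0], [1.0, 1.0, 2.0]
signal = [[ui * vj for vj in v] for ui in u]          # coherent rank-1 "events"
M = [[x for x in row] for row in signal]
M[0][2] += 0.3                                         # incoherent "blending" spike

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# Power iteration on M^T M yields the dominant right singular vector
Mt = [[M[i][j] for i in range(3)] for j in range(3)]
w = [1.0, 0.0, 0.0]
for _ in range(100):
    w = matvec(Mt, matvec(M, w))
    norm = sum(x * x for x in w) ** 0.5
    w = [x / norm for x in w]

Mw = matvec(M, w)
sigma = sum(x * x for x in Mw) ** 0.5                  # dominant singular value
left = [x / sigma for x in Mw]
rank1 = [[sigma * left[i] * w[j] for j in range(3)] for i in range(3)]

# Thresholding pass: keep only high-amplitude (coherent) samples
tau = 0.5
deblended = [[x if abs(x) > tau else 0.0 for x in row] for row in rank1]
err = max(abs(deblended[i][j] - signal[i][j])
          for i in range(3) for j in range(3))
print(err)   # small residual relative to the unit-scale signal
```

The rank-reduction step suppresses most of the incoherent spike because it lies outside the signal's rank-1 subspace; the threshold then cleans what leaks through, mirroring the "coherency-pass" cascade described above.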
NASA Technical Reports Server (NTRS)
Deker, H.
1971-01-01
The West German tracking stations are equipped with ballistic cameras. Plate measurement and plate reduction must therefore follow photogrammetric methods. Approximately 100 star positions and 200 satellite positions are measured on each plate. The mathematical model for spatial rotation of the bundle of rays is extended by including terms for distortion and internal orientation of the camera as well as by providing terms for refraction which are computed for the measured coordinates of the star positions on the plate. From the measuring accuracy of the plate coordinates it follows that the timing accuracy for the exposures has to be about one millisecond, in order to obtain a homogeneous system.
Shape control of an adaptive wing for transonic drag reduction
NASA Astrophysics Data System (ADS)
Austin, Fred; Van Nostrand, William C.
1995-05-01
Theory and experiments to control the static shape of flexible structures by employing internal translational actuators are summarized and plans to extend the work to adaptive wings are presented. Significant reductions in the shock-induced drag are achievable during transonic cruise by small adaptive modifications to the wing cross-sectional profile. Actuators are employed as truss elements of active ribs to deform the wing cross section. An adaptive-rib model was constructed, and experiments validated the shape-control theory. Plans for future development under an ARPA/AFWAL contract include payoff assessments of the method on an actual aircraft, the development of inchworm TERFENOL-D actuators, and the development of a method to optimize the wing cross-sectional shapes by direct-drag measurements.
Adaptive model reduction for continuous systems via recursive rational interpolation
NASA Technical Reports Server (NTRS)
Lilly, John H.
1994-01-01
A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
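The MDFT monitoring of individual frequency bins can be sketched with the classic sliding-DFT recurrence, a minimal stand-in for (not a reproduction of) the paper's implementation:

```python
import cmath

# Sliding DFT sketch: after the first window, each new sample updates a single
# DFT bin in O(1) via X_k <- (X_k - x_oldest + x_newest) * exp(+j*2*pi*k/N).
def direct_bin(window, k):
    # one DFT bin computed directly, used here only for verification
    N = len(window)
    return sum(window[n] * cmath.exp(-2j * cmath.pi * k * n / N)
               for n in range(N))

N, k = 8, 2                                   # window length and monitored bin
signal = [float(t % 5) for t in range(30)]    # arbitrary evolving test signal

Xk = direct_bin(signal[:N], k)                # initialize from the first window
twiddle = cmath.exp(2j * cmath.pi * k / N)
for t in range(N, len(signal)):
    Xk = (Xk - signal[t - N] + signal[t]) * twiddle   # O(1) per new sample

err = abs(Xk - direct_bin(signal[-N:], k))
print(err)   # recursive and direct bin values agree to rounding error
```

This constant-cost update per sample is what makes continuous monitoring of selected frequency points practical for on-line identification.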
NASA Astrophysics Data System (ADS)
Kenway, Gaetan K. W.
This thesis presents new tools and techniques developed to address the challenging problem of high-fidelity aerostructural optimization with respect to large numbers of design variables. A new mesh-movement scheme is developed that is both computationally efficient and sufficiently robust to accommodate large geometric design changes and aerostructural deformations. A fully coupled Newton-Krylov method is presented that accelerates the convergence of aerostructural systems, provides a 20% performance improvement over the traditional nonlinear block Gauss-Seidel approach, and can handle more flexible structures. A coupled adjoint method is used that efficiently computes derivatives for a gradient-based optimization algorithm. The implementation uses only machine-accurate derivative techniques and is verified to yield fully consistent derivatives by comparing against the complex-step method. The fully coupled large-scale adjoint solution method is shown to have 30% better performance than the segregated approach. The parallel scalability of the coupled adjoint technique is demonstrated on an Euler Computational Fluid Dynamics (CFD) model with more than 80 million state variables coupled to a detailed structural finite-element model of the wing with more than 1 million degrees of freedom. Multi-point high-fidelity aerostructural optimizations of a long-range wide-body, transonic transport aircraft configuration are performed using the developed techniques. The aerostructural analysis employs Euler CFD with a 2 million cell mesh and a structural finite element model with 300 000 DOF. Two design optimization problems are solved: one where takeoff gross weight is minimized, and another where fuel burn is minimized. Each optimization uses a multi-point formulation with 5 cruise conditions and 2 maneuver conditions. The optimization problems have 476 design variables, and optimal results are obtained within 36 hours of wall time using 435 processors. 
The TOGW minimization results in a 4.2% reduction in TOGW with a 6.6% fuel burn reduction, while the fuel burn optimization resulted in a 11.2% fuel burn reduction with no change to the takeoff gross weight.
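The gap between block Gauss-Seidel and fully coupled Newton iteration can be illustrated on a made-up two-variable "aerostructural" fixed-point problem; the equations below are invented and far simpler than the thesis's CFD/FEM coupling:

```python
# Toy coupled system (assumed for illustration): aero load L = 1/(1 + d),
# structural deflection d = 0.5*L, solved by both coupling strategies.
def residual(L, d):
    return L - 1.0 / (1.0 + d), d - 0.5 * L

def norm(r):
    return (r[0] ** 2 + r[1] ** 2) ** 0.5

tol = 1e-10

# Nonlinear block Gauss-Seidel: alternate the two "disciplines"
L, d, n_gs = 1.0, 0.0, 0
while norm(residual(L, d)) > tol and n_gs < 200:
    L = 1.0 / (1.0 + d)      # aero solve with frozen structure
    d = 0.5 * L              # structural solve with frozen load
    n_gs += 1

# Fully coupled Newton: solve the 2x2 linearized system each iteration
L, d, n_newton = 1.0, 0.0, 0
while norm(residual(L, d)) > tol and n_newton < 50:
    r1, r2 = residual(L, d)
    j11, j12 = 1.0, 1.0 / (1.0 + d) ** 2   # Jacobian of the residual
    j21, j22 = -0.5, 1.0
    det = j11 * j22 - j12 * j21
    L -= (r1 * j22 - r2 * j12) / det       # Cramer's rule for the 2x2 solve
    d -= (r2 * j11 - r1 * j21) / det
    n_newton += 1

print(n_gs, n_newton)   # Newton converges in far fewer coupled iterations
```

Gauss-Seidel converges linearly at the rate of the interdisciplinary coupling strength, while Newton converges quadratically near the solution, which is the qualitative source of the speedup reported above.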
Drug-target interaction prediction using ensemble learning and dimensionality reduction.
Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong
2017-10-01
Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the amount of their interactions, are increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve the classification performance, it is also worthwhile to design an ensemble learning framework to enhance the performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction. Copyright © 2017 Elsevier Inc. All rights reserved.
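The subspacing-plus-aggregation idea can be sketched with a toy ensemble. The synthetic data, the nearest-centroid base learner, and all parameters below are invented stand-ins for the paper's Decision Tree and Kernel Ridge Regression learners:

```python
import random

# Toy ensemble with random feature subspacing (invented data and learners)
random.seed(0)

def make_point(label, dim=6):
    # class 0 centered at 0, class 1 centered at 1 on every feature, plus noise
    return [label + random.gauss(0, 0.3) for _ in range(dim)], label

data = [make_point(i % 2) for i in range(40)]

def centroid(points):
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def train_base(data, feats):
    # trivial nearest-centroid learner restricted to the chosen feature subset
    c0 = centroid([[x[i] for i in feats] for x, y in data if y == 0])
    c1 = centroid([[x[i] for i in feats] for x, y in data if y == 1])
    def score(x):
        xs = [x[i] for i in feats]
        d0 = sum((a - b) ** 2 for a, b in zip(xs, c0))
        d1 = sum((a - b) ** 2 for a, b in zip(xs, c1))
        return 1.0 if d1 < d0 else 0.0
    return score

# each base learner sees a random 3-of-6 feature subspace (injects diversity)
ensemble = [train_base(data, random.sample(range(6), 3)) for _ in range(7)]

def predict(x):
    return sum(m(x) for m in ensemble) / len(ensemble)   # averaged scores

acc = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
print(acc)
```

Subspacing decorrelates the base learners so that averaging their scores reduces variance, which is the mechanism the paper exploits with its far richer feature sets and learners.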
Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J
2014-05-01
In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
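The temporal stage can be illustrated with a minimal scalar Kalman filter tracking a constant pixel intensity. The process/measurement variances and the deterministic alternating "noise" are made up; the paper's filter additionally uses an affine background motion model and a per-pixel process-noise estimate:

```python
# Toy per-pixel temporal Kalman filter (assumed variances, not the paper's)
def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q                   # predict: state variance grows by q
        k = p / (p + r)             # Kalman gain balances prior vs measurement
        x = x + k * (z - x)         # update estimate toward the measurement
        p = (1 - k) * p             # posterior variance shrinks
        out.append(x)
    return out

true_val = 10.0
noise = [0.5 if i % 2 == 0 else -0.5 for i in range(50)]   # stand-in noise
z = [true_val + n for n in noise]
est = kalman_1d(z)
print(est[-1])   # far closer to 10.0 than any single noisy measurement
```

Where this temporal filtering removes most of the noise, a subsequent spatial Wiener stage can deconvolve aggressively; where it cannot (e.g., local motion), the spatial stage must also denoise, which is the division of labor described above.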
Kim, Hyung Chul; Wallington, Timothy J; Sullivan, John L; Keoleian, Gregory A
2015-08-18
Lightweighting is a key strategy to improve vehicle fuel economy. Assessing the life-cycle benefits of lightweighting requires a quantitative description of the use-phase fuel consumption reduction associated with mass reduction. We present novel methods of estimating mass-induced fuel consumption (MIF) and fuel reduction values (FRVs) from fuel economy and dynamometer test data in the U.S. Environmental Protection Agency (EPA) database. In the past, FRVs have been measured using experimental testing. We demonstrate that FRVs can be mathematically derived from coast down coefficients in the EPA vehicle test database avoiding additional testing. MIF and FRVs calculated for 83 different 2013 MY vehicles are in the ranges 0.22-0.43 and 0.15-0.26 L/(100 km 100 kg), respectively, and increase to 0.27-0.53 L/(100 km 100 kg) with powertrain resizing to retain equivalent vehicle performance. We show how use-phase fuel consumption can be estimated using MIF and FRVs in life cycle assessments (LCAs) of vehicle lightweighting from total vehicle and vehicle component perspectives with, and without, powertrain resizing. The mass-induced fuel consumption model is illustrated by estimating lifecycle greenhouse gas (GHG) emission benefits from lightweighting a grille opening reinforcement component using magnesium or carbon fiber composite for 83 different vehicle models.
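The idea of deriving mass-induced fuel consumption from road-load ("coast-down") coefficients can be sketched as follows. All numbers (rolling-resistance coefficient, drivetrain efficiency, stop frequency, speed terms) are invented for illustration and are not the EPA test data used in the paper:

```python
# Illustrative FRV sketch: road load F(v) = A + B*v + C*v^2, where the rolling
# term A = m*g*Crr and the kinetic energy of re-acceleration scale with mass m.
g, Crr = 9.81, 0.009          # gravity; rolling-resistance coeff. (assumed)
eta, lhv = 0.25, 32e6         # drivetrain efficiency; fuel energy J/L (assumed)

def fuel_per_100km(m, v=25.0, stops_per_100km=20):
    A = m * g * Crr                              # mass-dependent rolling load, N
    B, C = 1.0, 0.35                             # speed terms, mass-independent
    tractive = (A + B * v + C * v * v) * 100_000  # J per 100 km of cruising
    kinetic = stops_per_100km * 0.5 * m * v * v   # J lost re-accelerating
    return (tractive + kinetic) / (eta * lhv)     # litres per 100 km

base, light = fuel_per_100km(1500.0), fuel_per_100km(1400.0)
frv = base - light   # litres per 100 km saved per 100 kg removed
print(frv)
```

Only the terms that scale with mass contribute to the difference, which is why the FRV can be read off the coast-down coefficients and a drive cycle rather than requiring a separate lightweighted test vehicle.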
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitrios
Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify the wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The speckle suppression performance on the selected set of US images was quantified via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those of SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures.
Conclusions: A new wavelet-based EFCM clustering model was introduced for noise reduction and detail preservation. The proposed method improves the overall US image quality, which in turn could affect the decision-making on whether additional imaging and/or intervention is needed.
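The Speckle Suppression Index reported above is, in one common definition, the ratio of the coefficient of variation of the denoised image to that of the original; values below 1 indicate suppression, and lower is better. A minimal implementation, assuming that definition:

```python
import numpy as np

def speckle_suppression_index(original, denoised):
    """SSI as the ratio of coefficients of variation (one common form).

    Lower is better; the abstract reports 0.61 for EFCM versus
    0.71 (SRI) and 0.73 (Pizurica's method).
    """
    def cv(im):
        im = np.asarray(im, dtype=float)
        return im.std() / im.mean()   # coefficient of variation
    return cv(denoised) / cv(original)
```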
New modeling method for the dielectric relaxation of a DRAM cell capacitor
NASA Astrophysics Data System (ADS)
Choi, Sujin; Sun, Wookyung; Shin, Hyungsoon
2018-02-01
This study proposes a new method for automatically synthesizing the equivalent circuit of the dielectric relaxation (DR) characteristic in dynamic random access memory (DRAM) without frequency-dependent capacitance measurement. Charge loss due to DR can be observed by a voltage drop at the storage node, and this phenomenon can be analyzed by an equivalent circuit. The Havriliak-Negami model is used to accurately determine the electrical characteristic parameters of an equivalent circuit. The DRAM sensing operation is performed in HSPICE simulations to verify this new method. The simulation demonstrates that the storage node voltage drop resulting from DR and the reduction in the sensing voltage margin, which has a critical impact on DRAM read operation, can be accurately estimated using this new method.
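The Havriliak-Negami relaxation model mentioned above has a standard closed form for the complex permittivity. The sketch below evaluates it; the parameter values in the test are illustrative, not those fitted to a DRAM cell capacitor in the paper.

```python
def havriliak_negami(omega, eps_inf, d_eps, tau, alpha, beta):
    """Complex permittivity of the Havriliak-Negami relaxation model:

        eps*(w) = eps_inf + d_eps / (1 + (1j*w*tau)**alpha)**beta

    omega: angular frequency (rad/s); tau: relaxation time;
    0 < alpha, beta <= 1 are the broadening/asymmetry exponents
    (alpha = beta = 1 recovers the Debye model).
    """
    return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** alpha) ** beta
```

In the low-frequency limit this tends to eps_inf + d_eps and in the high-frequency limit to eps_inf, which is what makes it useful for fitting the dispersive part of a cell capacitor's response.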
NASA Astrophysics Data System (ADS)
Tzabiras, John; Spiliotopoulos, Marios; Kokkinos, Kostantinos; Fafoutis, Chrysostomos; Sidiropoulos, Pantelis; Vasiliades, Lampros; Papaioannou, George; Loukas, Athanasios; Mylopoulos, Nikitas
2015-04-01
The overall objective of this work is the development of an Information System which could be used by stakeholders for the purposes of water management as well as for planning and strategic decision-making in semi-arid areas. An integrated modeling system has been developed and applied to evaluate the sustainability of water resources management strategies in Lake Karla watershed, Greece. The modeling system, developed in the framework of the "HYDROMENTOR" research project, is based on a GIS modelling approach which uses remote sensing data and includes coupled models for the simulation of surface water and groundwater resources, the operation of hydrotechnical projects (reservoir operation and irrigation works) and the estimation of water demands at several spatial scales. Lake Karla basin was the region where the system was tested, but the methodology may be the basis for future analysis elsewhere. Two base scenarios, each alone and combined with three management measures, were investigated, giving eight water management scenarios in total: i) Base scenario without operation of the reservoir and the designed Lake Karla district irrigation network (actual situation) • Reduction of channel losses • Alteration of irrigation methods • Introduction of greenhouse cultivation; ii) Base scenario including the operation of the reservoir and the Lake Karla district irrigation network • Reduction of channel losses • Alteration of irrigation methods • Introduction of greenhouse cultivation. The results show that, under the existing water resources management, the water deficit of Lake Karla watershed is very large. However, the operation of the reservoir and the cooperative Lake Karla district irrigation network, coupled with water demand management measures such as reduction of water distribution system losses and alteration of irrigation methods, could alleviate the problem and lead to sustainable and ecological use of water resources in the study area.
Acknowledgements: This study has been supported by the research project "Hydromentor" funded by the Greek General Secretariat of Research and Technology in the framework of the E.U. co-funded National Action "Cooperation"
NASA Astrophysics Data System (ADS)
Li, Xueying; Peng, Ying; Zhang, Jing
2017-03-01
Under the background of a low carbon economy, this paper examines the impact of carbon tax policy on supply chain network emission reduction. The integer linear programming method is used to establish a supply chain network emission reduction model; the model considers the cost of CO2 emissions and analyses the impact of different carbon prices on cost and carbon emissions in supply chains. The results show that the implementation of a carbon tax policy can reduce CO2 emissions in the building supply chain, but a further increase in the carbon price does not always produce an additional reduction effect and may bring a financial burden to the enterprise. This paper presents a reasonable carbon price range and provides decision makers with strategies towards realizing a low carbon building supply chain in an economical manner.
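The mechanism by which a carbon price shifts sourcing decisions can be shown with a deliberately tiny toy: a single-echelon, continuous relaxation of the kind of integer program the paper formulates. All supplier names, costs, and capacities below are made up for illustration; a greedy sort on effective unit cost is optimal only for this single-constraint fractional case, not for the paper's full network ILP.

```python
def choose_suppliers(options, demand, carbon_price):
    """Minimise production cost + carbon_price * emissions for one product.

    options: list of dicts with keys 'name', 'cost' (per unit),
             'emis' (emissions per unit), 'cap' (capacity).
    Greedy on effective unit cost (cost + carbon_price * emis), which is
    optimal for this fractional single-constraint relaxation.
    Returns (plan, production_cost, emissions); if demand exceeds total
    capacity, the shortfall is silently left unmet in this sketch.
    """
    ranked = sorted(options, key=lambda o: o['cost'] + carbon_price * o['emis'])
    plan, remaining, cost, emissions = [], demand, 0.0, 0.0
    for o in ranked:
        take = min(o['cap'], remaining)
        if take <= 0:
            continue
        plan.append((o['name'], take))
        cost += take * o['cost']
        emissions += take * o['emis']
        remaining -= take
        if remaining == 0:
            break
    return plan, cost, emissions
```

Raising `carbon_price` flips the ranking toward low-emission suppliers, lowering emissions at higher production cost, which is the trade-off the paper quantifies to find a reasonable carbon price range.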
NASA Astrophysics Data System (ADS)
Reimann, S.; Vollmer, M. K.; Henne, S.; Brunner, D.; Emmenegger, L.; Manning, A.; Fraser, P. J.; Krummel, P. B.; Dunse, B. L.; DeCola, P.; Tarasova, O. A.
2016-12-01
In the recently adopted Paris Agreement the community of signatory states has agreed to limit the future global temperature increase between +1.5 °C and +2.0 °C, compared to pre-industrial times. To achieve this goal, emission reduction targets have been submitted by individual nations (called Intended Nationally Determined Contributions, INDCs). Inventories will be used for checking progress towards these envisaged goals. These inventories are calculated by combining information on specific activities (e.g. passenger cars, agriculture) with activity-related, typically IPCC-sanctioned, emission factors - the so-called bottom-up method. These calculated emissions are reported on an annual basis and are checked by external bodies by using the same method. A second independent method estimates emissions by translating greenhouse gas measurements made at regionally representative stations into regional/global emissions using meteorologically-based transport models. In recent years this so-called top-down approach has been substantially advanced into a powerful tool and emission estimates at the national/regional level have become possible. This method is already used in Switzerland, in the United Kingdom and in Australia to estimate greenhouse gas emissions and independently support the national bottom-up emission inventories within the UNFCCC framework. Examples of the comparison of the two independent methods will be presented and the added-value will be discussed. The World Meteorological Organization (WMO) and partner organizations are currently developing a plan to expand this top-down approach and to expand the globally representative GAW network of ground-based stations and remote-sensing platforms and integrate their information with atmospheric transport models. 
This Integrated Global Greenhouse Gas Information System (IG3IS) initiative will help nations to improve the accuracy of their country-based emissions inventories and their ability to evaluate the success of emission reductions strategies. This could foster trans-national collaboration on methodologies for estimation of emissions. Furthermore, more accurate emission knowledge will clarify the value of emission reduction efforts and could encourage countries to strengthen their reduction pledges.
A Unified Development of Basis Reduction Methods for Rotor Blade Analysis
NASA Technical Reports Server (NTRS)
Ruzicka, Gene C.; Hodges, Dewey H.; Rutkowski, Michael (Technical Monitor)
2001-01-01
The axial foreshortening effect plays a key role in rotor blade dynamics, but approximating it accurately in reduced basis models has long posed a difficult problem for analysts. Recently, though, several methods have been shown to be effective in obtaining accurate, reduced basis models for rotor blades. These methods are the axial elongation method, the mixed finite element method, and the nonlinear normal mode method. The main objective of this paper is to demonstrate the close relationships among these methods, which are seemingly disparate at first glance. First, the difficulties inherent in obtaining reduced basis models of rotor blades are illustrated by examining the modal reduction accuracy of several blade analysis formulations. It is shown that classical, displacement-based finite elements are ill-suited for rotor blade analysis because they cannot accurately represent the axial strain in modal space, and that this problem may be solved by employing the axial force as a variable in the analysis. It is shown that the mixed finite element method is a convenient means for accomplishing this, and the derivation of a mixed finite element for rotor blade analysis is outlined. A shortcoming of the mixed finite element method is that it increases the number of variables in the analysis. It is demonstrated that this problem may be rectified by solving for the axial displacements in terms of the axial forces and the bending displacements. Effectively, this procedure constitutes a generalization of the widely used axial elongation method to blades of arbitrary topology. The procedure is developed first for a single element, and then extended to an arbitrary assemblage of elements of arbitrary type. Finally, it is shown that the generalized axial elongation method is essentially an approximate solution for an invariant manifold that can be used as the basis for a nonlinear normal mode.
Poisson sigma models, reduction and nonlinear gauge theories
NASA Astrophysics Data System (ADS)
Signori, Daniele
This dissertation comprises two main lines of research. Firstly, we study non-linear gauge theories for principal bundles, where the structure group is replaced by a Lie groupoid. We follow the approach of Moerdijk-Mrcun and establish its relation with the existing physics literature. In particular, we derive a new formula for the gauge transformation which closely resembles and generalizes the classical formulas found in Yang-Mills gauge theories. Secondly, we give a field theoretic interpretation of the BRST (Becchi-Rouet-Stora-Tyutin) and BFV (Batalin-Fradkin-Vilkovisky) methods for the reduction of coisotropic submanifolds of Poisson manifolds. The generalized Poisson sigma models that we define are related to the deformation quantization problems of coisotropic submanifolds using homotopical algebras.
NASA Astrophysics Data System (ADS)
Sun, Dihua; Chen, Dong; Zhao, Min; Liu, Weining; Zheng, Linjiang
2018-07-01
In this paper, the general nonlinear car-following model with multiple time delays is investigated in order to describe the reaction of a vehicle to driving behavior. Platoon stability and string stability criteria are obtained for the general nonlinear car-following model. The Burgers equation and the Korteweg-de Vries (KdV) equation, together with their solitary wave solutions, are derived by adopting the reductive perturbation method. We investigate the properties of a typical optimal velocity model using both analytic and numerical methods to estimate the impact of delays on the evolution of traffic congestion. The numerical results show that the stability of traffic flow is more sensitive to time delays in sensing relative movement than to time delays in sensing host motion.
Noise Reduction Design of the Volute for a Centrifugal Compressor
NASA Astrophysics Data System (ADS)
Song, Zhen; Wen, Huabing; Hong, Liangxing; Jin, Yudong
2017-08-01
In order to effectively control the aerodynamic noise of a compressor, this paper takes a marine exhaust turbocharger compressor as its research object. According to different design concepts for the volute section, tongue, and exit cone, six volute models were established. The finite volume method is used to calculate the flow field, while the finite element method is used for the acoustic calculation. The different structural designs were compared and analysed from three aspects: noise level, isentropic efficiency, and static pressure recovery coefficient. The results showed that model 1 performed best in the volute section analysis, model 3 performed best in the tongue analysis, and model 6 performed best in the exit cone analysis.
Modeling the electrophoretic separation of short biological molecules in nanofluidic devices
NASA Astrophysics Data System (ADS)
Fayad, Ghassan; Hadjiconstantinou, Nicolas
2010-11-01
Via comparisons with Brownian Dynamics simulations of the worm-like-chain and rigid-rod models, and the experimental results of Fu et al. [Phys. Rev. Lett., 97, 018103 (2006)], we demonstrate that, for the purposes of low-to-medium field electrophoretic separation in periodic nanofilter arrays, sufficiently short biomolecules can be modeled as point particles, with their orientational degrees of freedom accounted for using partition coefficients. This observation is used in the present work to build a particularly simple and efficient Brownian Dynamics simulation method. Particular attention is paid to the model's ability to quantitatively capture experimental results using realistic values of all physical parameters. A variance-reduction method is developed for efficiently simulating arbitrarily small forcing electric fields.
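The point-particle reduction the abstract argues for amounts to an overdamped Brownian dynamics update, x_{k+1} = x_k + μFΔt + sqrt(2DΔt)ξ with ξ ~ N(0, 1). The sketch below implements that one-dimensional update only; the partition-coefficient treatment of orientational entropy at the nanofilter constrictions, and the variance-reduction scheme, are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def bd_trajectory(steps=10_000, dt=1e-3, mobility=1.0, force=0.5,
                  diffusivity=1.0, seed=0):
    """Overdamped Brownian dynamics of a point particle in a constant field.

    Euler-Maruyama update: drift mu*F*dt plus Gaussian kick of variance
    2*D*dt. Returns the full position trajectory (length steps + 1).
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(steps + 1)
    kick = np.sqrt(2.0 * diffusivity * dt)
    for k in range(steps):
        x[k + 1] = x[k] + mobility * force * dt + kick * rng.standard_normal()
    return x
```

Separation in a periodic nanofilter array then comes from giving different molecule classes different effective mobilities/partition coefficients and comparing mean transit times.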
Reduction of initial shock in decadal predictions using a new initialization strategy
NASA Astrophysics Data System (ADS)
He, Yujun; Wang, Bin; Liu, Mimi; Liu, Li; Yu, Yongqiang; Liu, Juanjuan; Li, Ruizhe; Zhang, Cheng; Xu, Shiming; Huang, Wenyu; Liu, Qun; Wang, Yong; Li, Feifei
2017-08-01
A novel full-field initialization strategy based on the dimension-reduced projection four-dimensional variational data assimilation (DRP-4DVar) is proposed to alleviate the well-known initial shock occurring in the early years of decadal predictions. It generates consistent initial conditions, which best fit the monthly mean oceanic analysis data along the coupled model trajectory in 1 month windows. Three indices to measure the initial shock intensity are also proposed. Results indicate that this method does reduce the initial shock in decadal predictions by Flexible Global Ocean-Atmosphere-Land System model, Grid-point version 2 (FGOALS-g2) compared with the three-dimensional variational data assimilation-based nudging full-field initialization for the same model and is comparable to or even better than the different initialization strategies for other fifth phase of the Coupled Model Intercomparison Project (CMIP5) models. Better hindcasts of global mean surface air temperature anomalies can be obtained than in other FGOALS-g2 experiments. Due to the good model response to external forcing and the reduction of initial shock, higher decadal prediction skill is achieved than in other CMIP5 models.
Solitons, τ-functions and hamiltonian reduction for non-Abelian conformal affine Toda theories
NASA Astrophysics Data System (ADS)
Ferreira, L. A.; Miramontes, J. Luis; Guillén, Joaquín Sánchez
1995-02-01
We consider the Hamiltonian reduction of the "two-loop" Wess-Zumino-Novikov-Witten model (WZNW) based on an untwisted affine Kac-Moody algebra G. The resulting reduced models, called Generalized Non-Abelian Conformal Affine Toda (G-CAT), are conformally invariant and a wide class of them possesses soliton solutions; these models constitute non-Abelian generalizations of the conformal affine Toda models. Their general solution is constructed by the Leznov-Saveliev method. Moreover, the dressing transformations leading to the solutions in the orbit of the vacuum are considered in detail, as well as the τ-functions, which are defined for any integrable highest weight representation of G, irrespectively of its particular realization. When the conformal symmetry is spontaneously broken, the G-CAT model becomes a generalized affine Toda model, whose soliton solutions are constructed. Their masses are obtained exploring the spontaneous breakdown of the conformal symmetry, and their relation to the fundamental particle masses is discussed. We also introduce what we call the two-loop Virasoro algebra, describing extended symmetries of the two-loop WZNW models.
Sibbitt, Wilmer; Sibbitt, Randy R; Michael, Adrian A; Fu, Druce I; Draeger, Hilda T; Twining, Jon M; Bankhurst, Arthur D
2006-04-01
To evaluate physician control of needle and syringe during aspiration-injection syringe procedures by comparing the new reciprocating procedure syringe to a traditional conventional syringe. Twenty-six physicians were tested for their individual ability to control the reciprocating and conventional syringes in typical aspiration-injection procedures using a novel quantitative needle-based displacement procedure model. Subsequently, the physicians performed 48 clinical aspiration-injection (arthrocentesis) procedures on 32 subjects randomized to the reciprocating or conventional syringes. Clinical outcomes included procedure time, patient pain, and operator satisfaction. Multivariate modeling methods were used to determine the experimental variables in the syringe control model most predictive of clinical outcome measures. In the model system, the reciprocating syringe significantly improved physician control of the syringe and needle, with a 66% reduction in unintended forward penetration (p < 0.001) and a 68% reduction in unintended retraction (p < 0.001). In clinical arthrocentesis, improvements were also noted: 30% reduction in procedure time (p < 0.03), 57% reduction in patient pain (p < 0.001), and a 79% increase in physician satisfaction (p < 0.001). The variables in the experimental system (unintended forward penetration, unintended retraction, and operator satisfaction) independently predicted the outcomes of procedure time, patient pain, and physician satisfaction in the clinical study (p ≤ 0.001). The reciprocating syringe reduces procedure time and patient pain and improves operator satisfaction with the procedure syringe. The reciprocating syringe improves physician performance in both the validated quantitative needle-based displacement model and in real aspiration-injection syringe procedures, including arthrocentesis.
Cohen-Mazor, Meital; Mathur, Prabodh; Stanley, James R.L.; Mendelsohn, Farrell O.; Lee, Henry; Baird, Rose; Zani, Brett G.; Markham, Peter M.; Rocha-Singh, Krishna
2014-01-01
Objective: To evaluate the safety and effectiveness of different bipolar radiofrequency system algorithms in interrupting the renal sympathetic nerves and reducing renal norepinephrine in a healthy porcine model. Methods: A porcine model (N = 46) was used to investigate renal norepinephrine levels and changes to renal artery tissues and nerves following percutaneous renal denervation with radiofrequency bipolar electrodes mounted on a balloon catheter. Parameters of the radiofrequency system (i.e. electrode length and energy delivery algorithm), and the effects of single and longitudinal treatments along the artery were studied with a 7-day model in which swine received unilateral radiofrequency treatments. Additional sets of animals were used to examine norepinephrine and histological changes 28 days following bilateral percutaneous radiofrequency treatment or surgical denervation; untreated swine were used for comparison of renal norepinephrine levels. Results: Seven days postprocedure, norepinephrine concentrations decreased proportionally to electrode length, with 81, 60 and 38% reductions (vs. contralateral control) using 16, 4 and 2-mm electrodes, respectively. Applying a temperature-control algorithm with the 4-mm electrodes increased efficacy, with a mean 89.5% norepinephrine reduction following a 30-s treatment at 68°C. Applying this treatment along the entire artery length affected more nerves vs. a single treatment, resulting in superior norepinephrine reduction 28 days following bilateral treatment. Conclusion: Percutaneous renal artery application of bipolar radiofrequency energy demonstrated safety and resulted in a significant renal norepinephrine content reduction and renal nerve injury compared with untreated controls in porcine models. PMID:24875181
Reduction of background noise induced by wind tunnel jet exit vanes
NASA Technical Reports Server (NTRS)
Martin, R. M.; Brooks, T. F.; Hoad, D. R.
1985-01-01
The NASA-Langley 4 x 7 m wind tunnel develops low frequency flow pulsations at certain velocity ranges during open throat mode operation, affecting the aerodynamics of the flow and degrading the resulting model test data. Triangular vanes attached to the trailing edge of flat steel rails, mounted 10 cm from the inside of the jet exit walls, have been used to reduce this effect; attention is presently given to methods used to reduce the inherent noise generation of the vanes while retaining their pulsation reduction features.
NASA Astrophysics Data System (ADS)
Arasoglu, Tülin; Derman, Serap; Mansuroglu, Banu
2016-01-01
The aim of the present study was to evaluate the antimicrobial activity of nanoparticle and free formulations of the CAPE compound using different methods and comparing the results in the literature for the first time. In parallel with this purpose, encapsulation of CAPE with the PLGA nanoparticle system (CAPE-PLGA-NPs) and characterization of the nanoparticles were carried out. Afterwards, the antimicrobial activity of free CAPE and CAPE-PLGA-NPs was determined using agar well diffusion, disk diffusion, broth microdilution and reduction percentage methods. P. aeruginosa, E. coli, S. aureus and methicillin-resistant S. aureus (MRSA) were chosen as model bacteria since they have different cell wall structures. CAPE-PLGA-NPs with a particle size of 214.0 ± 8.80 nm and an encapsulation efficiency of 91.59 ± 4.97% were prepared using the oil-in-water (o-w) single-emulsion solvent evaporation method. The microbiological results indicated that free CAPE did not have any antimicrobial activity in any of the applied methods, whereas CAPE-PLGA-NPs had significant antimicrobial activity in both the broth dilution and reduction percentage methods. CAPE-PLGA-NPs showed moderate antimicrobial activity against S. aureus and MRSA strains, particularly in hourly measurements at 30.63 and 61.25 μg ml-1 concentrations (both p < 0.05), whereas they failed to show antimicrobial activity against the Gram-negative bacteria (P. aeruginosa and E. coli, p > 0.05). In the reduction percentage method, in which the highest antimicrobial activity was observed, the antimicrobial effect on S. aureus was more long-standing (3 days) and higher in reduction percentage (over 90%). The antibacterial activity of CAPE-PLGA-NPs may be related to higher penetration into cells due to the low solubility of free CAPE in the aqueous medium.
Additionally, the biocompatible and biodegradable PLGA nanoparticles could be an alternative to solvents such as ethanol, methanol or DMSO. Consequently, the obtained results show that the choice of method is extremely important and will influence the results. Thus, the broth microdilution and reduction percentage methods can be recommended as reliable and useful screening methods for determining the antimicrobial activity of PLGA nanoparticle formulations used particularly in drug delivery systems, compared to both the agar well and disk diffusion methods.
Clinical and MRI activity as determinants of sample size for pediatric multiple sclerosis trials
Verhey, Leonard H.; Signori, Alessio; Arnold, Douglas L.; Bar-Or, Amit; Sadovnick, A. Dessa; Marrie, Ruth Ann; Banwell, Brenda
2013-01-01
Objective: To estimate sample sizes for pediatric multiple sclerosis (MS) trials using new T2 lesion count, annualized relapse rate (ARR), and time to first relapse (TTFR) endpoints. Methods: Poisson and negative binomial models were fit to new T2 lesion and relapse count data, and negative binomial time-to-event and exponential models were fit to TTFR data of 42 children with MS enrolled in a national prospective cohort study. Simulations were performed by resampling from the best-fitting model of new T2 lesion count, number of relapses, or TTFR, under various assumptions of the effect size, trial duration, and model parameters. Results: Assuming a 50% reduction in new T2 lesions over 6 months, 90 patients/arm are required, whereas 165 patients/arm are required for a 40% treatment effect. Sample sizes for 2-year trials using relapse-related endpoints are lower than that for 1-year trials. For 2-year trials and a conservative assumption of overdispersion (ϑ), sample sizes range from 70 patients/arm (using ARR) to 105 patients/arm (TTFR) for a 50% reduction in relapses, and 230 patients/arm (ARR) to 365 patients/arm (TTFR) for a 30% relapse reduction. Assuming a less conservative ϑ, 2-year trials using ARR require 45 patients/arm (60 patients/arm for TTFR) for a 50% reduction in relapses and 145 patients/arm (200 patients/arm for TTFR) for a 30% reduction. Conclusion: Six-month phase II trials using new T2 lesion count as an endpoint are feasible in the pediatric MS population; however, trials powered on ARR or TTFR will need to be 2 years in duration and will require multicentered collaboration. PMID:23966255
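The resampling logic behind such sample-size estimates can be sketched by Monte Carlo power simulation with negative-binomial relapse counts. This is a simplified stand-in for the paper's approach (which resampled from models fitted to the actual pediatric cohort): here the control-arm mean, overdispersion `theta`, and the normal-approximation two-sample test are all illustrative assumptions.

```python
import numpy as np

def power_by_simulation(n_per_arm, control_mean, effect, theta,
                        n_sims=2000, alpha_z=1.959964, seed=1):
    """Monte Carlo power for a two-arm trial with NB-distributed counts.

    effect: fractional reduction in mean relapse count in the treated arm.
    theta: overdispersion parameter; NB parameterised via p = theta/(theta+m).
    Uses a simple two-sample z-test on arm means (illustrative choice).
    Returns the estimated power (fraction of simulated trials rejecting H0).
    """
    rng = np.random.default_rng(seed)
    m1, m2 = control_mean, control_mean * (1.0 - effect)
    p1, p2 = theta / (theta + m1), theta / (theta + m2)
    hits = 0
    for _ in range(n_sims):
        a = rng.negative_binomial(theta, p1, n_per_arm)
        b = rng.negative_binomial(theta, p2, n_per_arm)
        se = np.sqrt(a.var(ddof=1) / n_per_arm + b.var(ddof=1) / n_per_arm)
        if se > 0 and abs(a.mean() - b.mean()) / se > alpha_z:
            hits += 1
    return hits / n_sims
```

Sweeping `n_per_arm` until the simulated power crosses the target (e.g. 80%) reproduces the general shape of the abstract's findings: smaller treatment effects and more conservative overdispersion both inflate the required sample size.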
1978-12-01
multinational corporation in the 1960s placed extreme emphasis on the need for effective and efficient noise suppression devices. Phase I of work...through model and engine testing applicable to an afterburning turbojet engine. Suppressor designs were based primarily on empirical methods. Phase II...using "ray" acoustics. This method is in contrast to the purely empirical method which consists of the curve-fitting of normalized data. In order to
Using SCR methods to analyze requirements documentation
NASA Technical Reports Server (NTRS)
Callahan, John; Morrison, Jeffery
1995-01-01
Software Cost Reduction (SCR) methods are being utilized to analyze and verify selected parts of NASA's EOS-DIS Core System (ECS) requirements documentation. SCR is being used as a spot-inspection tool. Through this formal and systematic application of the SCR requirements methods, insights are gained as to whether the requirements are internally inconsistent or incomplete as the scenarios of intended usage evolve in the Operations Concept (OC) documentation. Thus, by modelling the scenarios and requirements as mode charts using the SCR methods, we have been able to identify problems within and between the documents.
Modelling the impact of vector control interventions on Anopheles gambiae population dynamics
2011-01-01
Background Intensive anti-malaria campaigns targeting the Anopheles population have demonstrated substantial reductions in adult mosquito density. Understanding the population dynamics of Anopheles mosquitoes throughout their whole lifecycle is important to assess the likely impact of vector control interventions alone and in combination as well as to aid the design of novel interventions. Methods An ecological model of Anopheles gambiae sensu lato populations incorporating a rainfall-dependent carrying capacity and density-dependent regulation of mosquito larvae in breeding sites is developed. The model is fitted to adult mosquito catch and rainfall data from 8 villages in the Garki District of Nigeria (the 'Garki Project') using Bayesian Markov Chain Monte Carlo methods and prior estimates of parameters derived from the literature. The model is used to compare the impact of vector control interventions directed against adult mosquito stages - long-lasting insecticide treated nets (LLIN), indoor residual spraying (IRS) - and directed against aquatic mosquito stages, alone and in combination on adult mosquito density. Results A model in which density-dependent regulation occurs in the larval stages via a linear association between larval density and larval death rates provided a good fit to seasonal adult mosquito catches. The effective mosquito reproduction number in the presence of density-dependent regulation is dependent on seasonal rainfall patterns and peaks at the start of the rainy season. In addition to killing adult mosquitoes during the extrinsic incubation period, LLINs and IRS also result in less eggs being oviposited in breeding sites leading to further reductions in adult mosquito density. Combining interventions such as the application of larvicidal or pupacidal agents that target the aquatic stages of the mosquito lifecycle with LLINs or IRS can lead to substantial reductions in adult mosquito density. 
Conclusions Density-dependent regulation of anopheline larvae in breeding sites ensures robust, stable mosquito populations that can persist in the face of intensive vector control interventions. Selecting combinations of interventions that target different stages in the vector's lifecycle will result in maximum reductions in mosquito density. PMID:21798055
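The core mechanism, larval death rates rising linearly with larval density, can be captured in a minimal two-compartment (larvae/adults) sketch. This omits the rainfall-dependent carrying capacity, the full aquatic stage structure, and the Bayesian fitting of the actual model, and every parameter value below is illustrative rather than taken from the Garki fits.

```python
def mosquito_dynamics(days=365, egg_rate=10.0, maturation=0.1,
                      base_larval_death=0.3, density_coeff=0.002,
                      adult_death=0.1, dt=0.01):
    """Larval/adult mosquito model with density-dependent larval death.

    dL/dt = egg_rate*A - [base_larval_death*(1 + density_coeff*L)
                          + maturation]*L
    dA/dt = maturation*L - adult_death*A
    Integrated with explicit Euler; returns final (L, A).
    Raising adult_death mimics an adult-stage intervention (LLIN/IRS),
    which also lowers egg deposition and hence larval recruitment.
    """
    L, A = 100.0, 10.0
    for _ in range(int(days / dt)):
        dL = egg_rate * A - (base_larval_death * (1.0 + density_coeff * L)
                             + maturation) * L
        dA = maturation * L - adult_death * A
        L = max(L + dt * dL, 0.0)
        A = max(A + dt * dA, 0.0)
    return L, A
```

With these defaults the system settles at L* = A* = 16000 (from egg_rate = base_larval_death*(1 + density_coeff*L*) + maturation with A* = L*), and doubling the adult death rate lowers the equilibrium adult density, illustrating the abstract's point that density dependence buffers, but does not eliminate, the effect of adult-stage control.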
30 CFR 550.302 - Definitions concerning air quality.
Code of Federal Regulations, 2013 CFR
2013-07-01
... pollutant means any combination of agents for which the Environmental Protection Agency (EPA) has... is calculated by air quality modeling (or other methods determined by the Administrator of EPA to be... available control technology (BACT) means an emission limitation based on the maximum degree of reduction...
30 CFR 250.302 - Definitions concerning air quality.
Code of Federal Regulations, 2010 CFR
2010-07-01
... any combination of agents for which the Environmental Protection Agency (EPA) has established... by air quality modeling (or other methods determined by the Administrator of EPA to be reliable) not... control technology (BACT) means an emission limitation based on the maximum degree of reduction for each...
Spray Drift Reduction Evaluations of Spray Nozzles Using a Standardized Testing Protocol
2010-07-01
Drop Size Characteristics in a Spray Using Optical Nonimaging Light-Scattering Instruments,” Annual Book of ASTM Standards, Vol. 14-02, ASTM...Test Method for Determining Liquid Drop Size Characteristics in a Spray Using Optical Nonimaging Light-Scattering Instruments 22. AGDISP Model
Community Intervention Model to Reduce Inappropriate Antibiotic Use
ERIC Educational Resources Information Center
Alder, Stephen; Wuthrich, Amy; Haddadin, Bassam; Donnelly, Sharon; Hannah, Elizabeth Lyon; Stoddard, Greg; Benuzillo, Jose; Bateman, Kim; Samore, Matthew
2010-01-01
Background: The Inter-Mountain Project on Antibiotic Resistance and Therapy (IMPART) is an intervention that addresses emerging antimicrobial resistance and the reduction of unnecessary antimicrobial use. Purpose: This study assesses the design and implementation of the community intervention component of IMPART. Methods: The study was conducted…
78 FR 9698 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-11
... effective at improving health care quality. While evidence-based approaches for decision-making have become standard in healthcare, this has been limited in laboratory medicine. No single-evidence-based model for... (LMBP) initiative to develop new systematic evidence reviews methods for making evidence-based...
NASA Astrophysics Data System (ADS)
Venkatesh, Aranya
Increasing concerns about the environmental impacts of fossil fuels used in the U.S. transportation and electricity sectors have spurred interest in alternate energy sources, such as natural gas and biofuels. Life cycle assessment (LCA) methods can be used to estimate the environmental impacts of incumbent energy sources and potential impact reductions achievable through the use of alternate energy sources. Some recent U.S. climate policies have used the results of LCAs to encourage the use of low carbon fuels to meet future energy demands in the U.S. However, the LCA methods used to estimate potential reductions in environmental impact have some drawbacks. First, the LCAs are predominantly based on deterministic approaches that do not account for any uncertainty inherent in life cycle data and methods. Such methods overstate the accuracy of the point estimate results, which could in turn lead to incorrect and consequently expensive decision-making. Second, system boundaries considered by most LCA studies tend to be limited (considered a manifestation of uncertainty in LCA). Although LCAs can estimate the benefits of transitioning to energy systems of lower environmental impact, they may not be able to characterize real world systems perfectly. Improved modeling of energy systems mechanisms can provide more accurate representations of reality and define more likely limits on potential environmental impact reductions. This dissertation quantitatively and qualitatively examines the limitations in LCA studies outlined previously. The first three research chapters address the uncertainty in life cycle greenhouse gas (GHG) emissions associated with petroleum-based fuels, natural gas and coal consumed in the U.S. The uncertainty in life cycle GHG emissions from fossil fuels was found to range between 13 and 18% of their respective mean values. 
For instance, the 90% confidence interval of the life cycle GHG emissions of average natural gas consumed in the U.S. was found to range from -8 to +9% (17%) of the mean value of 66 g CO2e/MJ. Results indicate that uncertainty affects the conclusions of comparative life cycle assessments, especially when differences in average environmental impacts between two competing fuels/products are small. In the final two research chapters of this thesis, system boundary limitations in LCA are addressed. Simplified economic dispatch models are developed to examine changes in regional power plant dispatch that occur when coal power plants are retired and when natural gas prices drop. These models better reflect reality by estimating the order in which existing power plants are dispatched to meet electricity demand based on short-run marginal costs. Results indicate that the reduction in air emissions is lower than suggested by LCA studies, since those studies generally do not capture the complexity of regional electricity grids, which are predominantly driven by comparative fuel prices. For instance, this study estimates 7-15% reductions in emissions with low natural gas prices. Although this is a significant reduction in itself, it is still lower than the benefits reported in traditional life cycle comparisons of coal and natural gas-based power (close to 50%), mainly due to the effects of plant dispatch.
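The merit-order (economic dispatch) logic described above can be sketched in a few lines. The plant capacities, marginal costs and emission rates below are invented for illustration, not the dissertation's regional data:

```python
# Each plant: (name, capacity in MW, short-run marginal cost in $/MWh, emissions in tCO2/MWh)
plants = [
    ("coal_1", 500.0, 25.0, 1.0),
    ("gas_1",  400.0, 40.0, 0.4),
    ("gas_2",  300.0, 45.0, 0.4),
]

def dispatch(plants, demand_mw):
    """Dispatch plants in merit order: cheapest short-run marginal cost first."""
    generation = {}
    remaining = demand_mw
    for name, cap, cost, _ in sorted(plants, key=lambda p: p[2]):
        g = min(cap, remaining)
        generation[name] = g
        remaining -= g
    return generation

def total_emissions(plants, generation):
    rate = {name: er for name, _, _, er in plants}
    return sum(g * rate[name] for name, g in generation.items())

demand = 800.0
base = total_emissions(plants, dispatch(plants, demand))

# A natural gas price drop moves gas ahead of coal in the merit order
cheap_gas = [(n, c, 15.0 if n.startswith("gas") else mc, er) for n, c, mc, er in plants]
low = total_emissions(cheap_gas, dispatch(cheap_gas, demand))
```

With these made-up numbers, cheaper gas displaces coal generation and total emissions fall, which is the mechanism behind the 7-15% estimate in the abstract.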
NASA Astrophysics Data System (ADS)
Kehs, Joshua Paul
It is well documented in the literature that boat-tailed base cavities reduce the drag on blunt-based bodies. The majority of the previous work has focused on the final result, namely reporting the resulting drag reduction or base pressure increase, without examining the means by which such a device changes the fluid flow to produce those results. The current work investigates the underlying physical means by which these devices change the flow around the body so as to reduce the overall drag. A canonical model with square cross section was developed for the purpose of studying the flow field around a blunt-based body. The boat-tailed base cavity tested consisted of 4 panels of length equal to half the width of the body extending from the edges of the base at an angle of 12° towards the model's center axis. Drag and surface pressure measurements were made at Reynolds numbers based on width from 2.3x10^5 to 3.6x10^5 in the Clarkson University high-speed wind tunnel over a range of pitch and yaw angles. Cross-stream hotwire wake surveys were used to identify wake width and turbulence intensities aft of the body at Reynolds numbers of 2.3x10^5 to 3.0x10^5. Particle Image Velocimetry (PIV) was used to quantify the flow field in the wake of the body, including the mean flow, vorticity, and turbulence measurements. The results indicated that the boat-tailed aft cavity decreases the drag significantly due to increased pressure on the base. Hotwire measurements indicated a reduction in wake width as well as a reduction in turbulence in the wake. PIV measurements indicated a significant reduction in wake turbulence and revealed that there exists a co-flowing stream that exits the cavity parallel to the free stream, reducing the shear in the flow at the flow separation point. The reduction in shear at the separation point indicated the method by which the turbulence was reduced. 
The reduction in turbulence combined with the reduction in wake size provided the mechanism of drag reduction by limiting the rate of entrainment of fluid in the recirculating wake to the free stream and by limiting the area over which this entrainment occurs.
Wavelet-Based Motion Artifact Removal for Electrodermal Activity
Chen, Weixuan; Jaques, Natasha; Taylor, Sara; Sano, Akane; Fedor, Szymon; Picard, Rosalind W.
2017-01-01
Electrodermal activity (EDA) recording is a powerful, widely used tool for monitoring psychological or physiological arousal. However, analysis of EDA is hampered by its sensitivity to motion artifacts. We propose a method for removing motion artifacts from EDA, measured as skin conductance (SC), using a stationary wavelet transform (SWT). We modeled the wavelet coefficients as a Gaussian mixture distribution corresponding to the underlying skin conductance level (SCL) and skin conductance responses (SCRs). The goodness-of-fit of the model was validated on ambulatory SC data. We evaluated the proposed method in comparison with three previous approaches. Our method achieved a greater reduction of artifacts while retaining motion-artifact-free data. PMID:26737714
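As a rough illustration of the idea (not the authors' Gaussian-mixture coefficient model), here is a minimal one-level stationary Haar transform with hard thresholding of detail coefficients. Real EDA processing would use a deeper SWT, a fitted coefficient model, and a measured signal:

```python
import numpy as np

def haar_swt(x):
    """One-level stationary (undecimated) Haar transform, circular boundary."""
    xs = np.roll(x, -1)
    approx = 0.5 * (x + xs)
    detail = 0.5 * (x - xs)
    return approx, detail

def haar_iswt(approx, detail):
    """Inverse: average the two redundant single-sample reconstructions."""
    return 0.5 * ((approx + detail) + np.roll(approx - detail, 1))

# Synthetic skin-conductance trace: slow tonic level plus sharp motion-artifact spikes
n = 512
t = np.linspace(0.0, 10.0, n)
clean = 2.0 + 0.5 * np.sin(0.5 * t)
noisy = clean.copy()
noisy[[100, 300]] += 5.0

approx, detail = haar_swt(noisy)
detail[np.abs(detail) > 1.0] = 0.0   # suppress artifact-scale detail coefficients
recovered = haar_iswt(approx, detail)
```

Without thresholding the transform reconstructs the signal exactly; zeroing the large detail coefficients attenuates the spikes while leaving slow SCL/SCR-like content untouched.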
van der Ster, Björn J P; Bennis, Frank C; Delhaas, Tammo; Westerhof, Berend E; Stok, Wim J; van Lieshout, Johannes J
2017-01-01
Introduction: In the initial phase of hypovolemic shock, mean blood pressure (BP) is maintained by sympathetically mediated vasoconstriction, rendering BP monitoring insensitive for detecting blood loss early. Late detection can result in reduced tissue oxygenation and eventually cellular death. We hypothesized that a machine learning algorithm that interprets currently used and new hemodynamic parameters could facilitate the detection of impending hypovolemic shock. Method: In 42 young, healthy subjects (27 female; mean (sd) age: 24 (4) years), central blood volume (CBV) was progressively reduced by application of -50 mmHg lower body negative pressure until the onset of pre-syncope. A support vector machine was trained to classify samples into normovolemia (class 0), initial phase of CBV reduction (class 1) or advanced CBV reduction (class 2). Nine models making use of different features were computed to compare the sensitivity and specificity of different non-invasively derived hemodynamic signals. Model features included: volumetric hemodynamic parameters (stroke volume and cardiac output), BP curve dynamics, near-infrared spectroscopy determined cortical brain oxygenation, end-tidal carbon dioxide pressure, thoracic bio-impedance, and middle cerebral artery transcranial Doppler (TCD) blood flow velocity. Model performance was tested by quantifying the predictions with three methods: sensitivity and specificity, absolute error, and quantification of the log odds ratio of class 2 vs. class 0 probability estimates. Results: The combination with maximal sensitivity and specificity for classes 1 and 2 was found for the model comprising volumetric features (class 1: 0.73-0.98 and class 2: 0.56-0.96). Overall lowest model error was found for the models comprising TCD curve hemodynamics. 
Using probability estimates the best combination of sensitivity for class 1 (0.67) and specificity (0.87) was found for the model that contained the TCD cerebral blood flow velocity derived pulse height. The highest combination for class 2 was found for the model with the volumetric features (0.72 and 0.91). Conclusion: The most sensitive models for the detection of advanced CBV reduction comprised data that describe features from volumetric parameters and from cerebral blood flow velocity hemodynamics. In a validated model of hemorrhage in humans these parameters provide the best indication of the progression of central hypovolemia.
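A minimal sketch of the three-class SVM with probability estimates, including the class 2 vs. class 0 log odds used above. The features are synthetic stand-ins, not the study's hemodynamic data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in features (imagine stroke volume, cardiac output):
# class 0 = normovolemia, 1 = initial CBV reduction, 2 = advanced CBV reduction
centers = np.array([[0.0, 0.0], [3.0, 0.0], [6.0, 0.0]])
X = np.vstack([c + rng.normal(0.0, 0.8, size=(100, 2)) for c in centers])
y = np.repeat([0, 1, 2], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# Log odds of advanced reduction (class 2) vs. normovolemia (class 0)
proba = clf.predict_proba(X_te)           # columns follow clf.classes_ = [0, 1, 2]
log_odds = np.log(proba[:, 2] / proba[:, 0])
```

`probability=True` enables Platt-scaled class probabilities, which is what makes the log-odds quantification possible on top of the hard classifications.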
Reducing salt in food; setting product-specific criteria aiming at a salt intake of 5 g per day
Dötsch-Klerk, M; PMM Goossens, W; Meijer, G W; van het Hof, K H
2015-01-01
Background/Objectives: There is an increasing public health concern regarding high salt intake, which is generally between 9 and 12 g per day, and much higher than the 5 g recommended by World Health Organization. Several relevant sectors of the food industry are engaged in salt reduction, but it is a challenge to reduce salt in products without compromising on taste, shelf-life or expense for consumers. The objective was to develop globally applicable salt reduction criteria as guidance for product reformulation. Subjects/Methods: Two sets of product group-specific sodium criteria were developed to reduce salt levels in foods to help consumers reduce their intake towards an interim intake goal of 6 g/day, and—on the longer term—5 g/day. Data modelling using survey data from the United States, United Kingdom and Netherlands was performed to assess the potential impact on population salt intake of cross-industry food product reformulation towards these criteria. Results: Modelling with 6 and 5 g/day criteria resulted in estimated reductions in population salt intake of 25 and 30% for the three countries, respectively, the latter representing an absolute decrease in the median salt intake of 1.8–2.2 g/day. Conclusions: The sodium criteria described in this paper can serve as guidance for salt reduction in foods. However, to enable achieving an intake of 5 g/day, salt reduction should not be limited to product reformulation. A multi-stakeholder approach is needed to make consumers aware of the need to reduce their salt intake. Nevertheless, dietary impact modelling shows that product reformulation by food industry has the potential to contribute substantially to salt-intake reduction. PMID:25690867
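The dietary-impact modelling step can be sketched as follows. The consumption distributions and sodium criteria are invented placeholders, not the paper's survey data or published criteria:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people = 5000

# Hypothetical daily consumption (g/day) of three product groups per person
consumption = {
    "bread":  rng.gamma(4.0, 30.0, n_people),
    "soups":  rng.gamma(2.0, 50.0, n_people),
    "snacks": rng.gamma(2.0, 20.0, n_people),
}
# Hypothetical current sodium densities and reformulation criteria (mg Na per 100 g)
current_na  = {"bread": 500.0, "soups": 350.0, "snacks": 800.0}
criteria_na = {"bread": 400.0, "soups": 250.0, "snacks": 500.0}

def daily_salt_g(na_per_100g):
    """Per-person salt intake in g/day; salt = sodium x 2.5."""
    sodium_mg = sum(consumption[g] * na_per_100g[g] / 100.0 for g in consumption)
    return sodium_mg * 2.5 / 1000.0

before = float(np.median(daily_salt_g(current_na)))
# Reformulation: cap each product group's sodium at its criterion
reformulated = {g: min(current_na[g], criteria_na[g]) for g in current_na}
after = float(np.median(daily_salt_g(reformulated)))
```

The paper's modelling works the same way in spirit: apply product-group sodium caps to individual-level survey data and compare the resulting population median intake to the baseline.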
Simulation of Thermographic Responses of Delaminations in Composites with Quadrupole Method
NASA Technical Reports Server (NTRS)
Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.; Cramer, K. Elliott
2016-01-01
The application of the quadrupole method for simulating thermal responses of delaminations in carbon fiber reinforced epoxy composite materials is presented. The method solves for the flux at the interface containing the delamination. From the interface flux, the temperature at the surface is calculated. While the results presented are for single sided measurements with flash heating, expansion of the technique to arbitrary temporal flux heating or through transmission measurements is simple. The quadrupole method is shown to have two distinct advantages relative to finite element or finite difference techniques. First, it is straightforward to incorporate arbitrarily shaped delaminations into the simulation. Second, the quadrupole method enables calculation of the thermal response at only the times of interest. This, combined with a significant reduction in the number of degrees of freedom for the same simulation quality, results in a reduction of the computation time by at least an order of magnitude. Therefore, it is a more viable technique for model based inversion of thermographic data. Results for simulations of delaminations in composites are presented and compared to measurements and finite element method results.
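A minimal sketch of the quadrupole formalism for the simplest case: a single homogeneous layer under flash heating with an adiabatic rear face, inverted numerically with the Gaver-Stehfest algorithm. The paper's delamination-bearing, multi-layer case composes more quadrupole matrices, but the Laplace-domain structure is the same. Material values below are arbitrary illustrative numbers:

```python
import math

def stehfest_invert(F, t, N=12):
    """Numerically invert a Laplace-domain function F(s) at time t (Gaver-Stehfest)."""
    ln2 = math.log(2.0)
    total = 0.0
    for i in range(1, N + 1):
        Vi = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            Vi += (k ** (N // 2) * math.factorial(2 * k)
                   / (math.factorial(N // 2 - k) * math.factorial(k)
                      * math.factorial(k - 1) * math.factorial(i - k)
                      * math.factorial(2 * k - i)))
        total += (-1) ** (N // 2 + i) * Vi * F(i * ln2 / t)
    return total * ln2 / t

# Single layer, Dirac (flash) heating on the front face, adiabatic rear face
k, a, Lth = 1.0, 1.0e-6, 1.0e-3   # conductivity W/(m K), diffusivity m^2/s, thickness m
Q = 1000.0                        # absorbed pulse energy per unit area, J/m^2

def front_face_temp(s):
    # Quadrupole relation: theta_front = Q * cosh(qL) / (k q sinh(qL)), q = sqrt(s/a)
    q = math.sqrt(s / a)
    return Q * math.cosh(q * Lth) / (k * q * math.sinh(q * Lth))

T_early = stehfest_invert(front_face_temp, 0.01)  # semi-infinite-like early decay
T_late  = stehfest_invert(front_face_temp, 5.0)   # approaches adiabatic plateau
plateau = Q * a / (k * Lth)                       # = Q/(rho*c*L), since rho*c = k/a
```

Evaluating the inversion only at the measurement times of interest is exactly the efficiency advantage the abstract cites over time-marching finite element or finite difference schemes.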
NASA Astrophysics Data System (ADS)
Yang, Jia Sheng
2018-06-01
In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce the control consumption as well as protect the actuator while satisfying the requirement on system performance. First, we introduce a dynamic model of the offshore platform with low-order main modes based on a mode reduction method in numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since it is difficult to solve this non-convex optimization model directly, we use a relaxation method with matrix operations to transform it into a convex optimization model. Thus, it can be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.
Rotational relaxation of molecular hydrogen at moderate temperatures
NASA Technical Reports Server (NTRS)
Sharma, S. P.
1994-01-01
Using a coupled rotation-vibration-dissociation model the rotational relaxation times for molecular hydrogen as a function of final temperature (500-5000 K), in a hypothetical scenario of sudden compression, are computed. The theoretical model is based on a master equation solver. The bound-bound and bound-free transition rates have been computed using a quasiclassical trajectory method. A review of the available experimental data on the rotational relaxation of hydrogen is presented, with a critical overview of the method of measurements and data reduction, including the sources of errors. These experimental data are then compared with the computed results.
User interest modeling based on scenarios and browsed content
NASA Astrophysics Data System (ADS)
Zhao, Yang
2017-08-01
User interest modeling is the core of personalized service; situational (contextual) information affects user preferences and should be reflected in the model. This paper proposes a method of user interest modeling based on scenario information: a set of scenarios approximating the user's current one is obtained by calculating situational similarity, and the "user - interest items - scenarios" three-dimensional model is reduced in dimension by pre-filtering on the situation. The topics a user is interested in are identified from browsed content, the page content is analyzed to extract keywords of interest for each topic, and a hierarchical vector space model of user interest is built from them. The experimental results show that the interest predictions of the scenario-based model are within 9% error, demonstrating that the approach is effective.
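The situational pre-filtering step can be sketched with a simple cosine similarity over scenario vectors. The feature encoding and scenario names below are made up for illustration:

```python
import numpy as np

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical scenario vectors: [morning, evening, at_home, at_work, weekend]
scenarios = {
    "weekday_office":  np.array([1.0, 0.0, 0.0, 1.0, 0.0]),
    "weekend_home":    np.array([0.0, 1.0, 1.0, 0.0, 1.0]),
    "weekday_commute": np.array([1.0, 0.0, 0.0, 0.0, 0.0]),
}
current = np.array([1.0, 0.0, 0.0, 1.0, 0.0])  # the user's current situation

# Pre-filtering: keep only scenarios sufficiently similar to the current one,
# so the three-dimensional model is evaluated on a reduced scenario set
sims = {name: cosine_similarity(current, vec) for name, vec in scenarios.items()}
similar = [name for name, s in sorted(sims.items(), key=lambda kv: -kv[1]) if s > 0.5]
```

Restricting the "user - interest items - scenarios" model to the `similar` set is the dimension-reduction-by-pre-filtering idea described in the abstract.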
NASA Astrophysics Data System (ADS)
Wu, Yu-liang; Jiang, Ze-yi; Zhang, Xin-xin; Wang, Peng; She, Xue-feng
2013-07-01
A mathematical model was established to describe the direct reduction of pellets in a rotary hearth furnace (RHF). In the model, heat transfer, mass transfer, and gas-solid chemical reactions were taken into account. The behaviors of iron metallization and dezincification were analyzed by the numerical method, which was validated by experimental data of the direct reduction of pellets in a Si-Mo furnace. The simulation results show that if the production targets of iron metallization and dezincification are up to 80% and 90%, respectively, the furnace temperature for high-temperature sections must be set higher than 1300°C. Moreover, an undersupply of secondary air by 20% will lead to a decline in iron metallization rate of discharged pellets by 10% and a decrease in dezincing rate by 13%. In addition, if the residence time of pellets in the furnace is over 20 min, its further extension will hardly lead to an obvious increase in production indexes under the same furnace temperature curve.
Goetzel, Ron Z.; Tabrizi, Maryam; Henke, Rachel Mosher; Benevent, Richele; Brockbank, Claire v. S.; Stinson, Kaylan; Trotter, Margo; Newman, Lee S.
2015-01-01
Objective To determine whether changes in health risks for workers in small businesses can produce medical and productivity cost savings. Methods A 1-year pre- and posttest study tracked changes in 10 modifiable health risks for 2458 workers at 121 Colorado businesses that participated in a comprehensive worksite health promotion program. Risk reductions were entered into a return-on-investment (ROI) simulation model. Results Reductions were recorded in 10 risk factors examined, including obesity (−2.0%), poor eating habits (−5.8%), poor physical activity (−6.5%), tobacco use (−1.3%), high alcohol consumption (−1.7%), high stress (−3.5%), depression (−2.3%), high blood pressure (−0.3%), high total cholesterol (−0.9%), and high blood glucose (−0.2%). The ROI model estimated medical and productivity savings of $2.03 for every $1.00 invested. Conclusions Pooled data suggest that small businesses can realize a positive ROI from effective risk reduction programs. PMID:24806569
Simplifying silicon burning: Application of quasi-equilibrium to (alpha) network nucleosynthesis
NASA Technical Reports Server (NTRS)
Hix, W. R.; Thielemann, F.-K.; Khokhlov, A. M.; Wheeler, J. C.
1997-01-01
While the need for accurate calculation of nucleosynthesis and the resulting rate of thermonuclear energy release within hydrodynamic models of stars and supernovae is clear, the computational expense of these nucleosynthesis calculations often forces a compromise in accuracy to reduce the computational cost. To redress this trade-off of accuracy for speed, the authors present an improved nuclear network which takes advantage of quasi-equilibrium in order to reduce the number of independent nuclei, and hence the computational cost of nucleosynthesis, without significant reduction in accuracy. In this paper they discuss the first application of this method, the further reduction in size of the minimal alpha network. The resultant QSE-reduced alpha network is twice as fast as the conventional alpha network it replaces and requires the tracking of half as many abundance variables, while accurately estimating the rate of energy generation. Such reduction in cost is particularly necessary for future generations of multi-dimensional supernova models.
Kim, Jongsik; McNamara, Nicholas D; Her, Theresa H; Hicks, Jason C
2013-11-13
This work describes a novel method for the preparation of titanium oxide nanoparticles supported on amorphous carbon with nanoporosity (Ti/NC) via the post-synthetic modification of a Zn-based MOF with an amine functionality, IRMOF-3, with titanium isopropoxide followed by its carbothermal pyrolysis. This material exhibited high purity, high surface area (>1000 m(2)/g), and a high dispersion of metal oxide nanoparticles while maintaining a small particle size (~4 nm). The material was shown to be a promising catalyst for oxidative desulfurization of diesel using dibenzothiophene as a model compound as it exhibited enhanced catalytic activity as compared with titanium oxide supported on activated carbon via the conventional incipient wetness impregnation method. The formation mechanism of Ti/NC was also proposed based on results obtained when the carbothermal reduction temperature was varied.
Global quasi-linearization (GQL) versus QSSA for a hydrogen-air auto-ignition problem.
Yu, Chunkan; Bykov, Viatcheslav; Maas, Ulrich
2018-04-25
A recently developed automatic reduction method for systems of chemical kinetics, the so-called Global Quasi-Linearization (GQL) method, has been implemented to study and reduce the dimensions of a homogeneous combustion system. The results of applying the GQL and the Quasi-Steady State Assumption (QSSA) are compared. A number of drawbacks of the QSSA are discussed, e.g. the selection criteria for QSS species and their sensitivity to system parameters, initial conditions, etc. To overcome these drawbacks, the GQL approach has been developed as a robust, automatic and scaling-invariant method for a global analysis of the system timescale hierarchy and subsequent model reduction. In this work the auto-ignition problem of the hydrogen-air system is considered in a wide range of system parameters and initial conditions. The potential of the suggested approach to overcome most of the drawbacks of the standard approaches is illustrated.
Modeling of sonochemistry in water in the presence of dissolved carbon dioxide.
Authier, Olivier; Ouhabaz, Hind; Bedogni, Stefano
2018-07-01
CO2 capture and utilization (CCU) is a process that captures CO2 emissions from sources such as fossil fuel power plants and reuses them so that they will not enter the atmosphere. Among the various ways of recycling CO2, reduction reactions are extensively studied at lab-scale. However, CO2 reduction by standard methods is difficult. Sonochemistry may be used in CO2 gas mixtures bubbled through water subjected to ultrasound waves. Indeed, the sonochemical reduction of CO2 in water has already been investigated by some authors, showing that fuel species (CO and H2) are obtained in the final products. The aim of this work is to model, for a single bubble, the close coupling of the mechanisms of bubble dynamics with the kinetics of gas-phase reactions in the bubble that can lead to CO2 reduction. An estimation of time-scales is used to define the controlling steps and consequently to solve a reduced model. The calculation of the concentration of free radicals and gases formed in the bubble is undertaken over many cycles to look at the effects of ultrasound frequency, pressure amplitude, initial bubble radius and bubble composition in CO2. The strong effect of bubble composition on the CO2 reduction rate is confirmed in accordance with experimental data from the literature. When the initial fraction of CO2 in the bubble is low, bubble growth and collapse are slightly modified with respect to simulation without CO2, and chemical reactions leading to CO2 reduction are promoted. However, the peak collapse temperature depends on the thermal properties of CO2 and greatly decreases as the CO2 content in the bubble increases. The model shows that initial bubble radius, ultrasound frequency and pressure amplitude play a critical role in CO2 reduction. Hence, in the case of a bubble with an initial radius of around 5 μm, CO2 reduction appears to be more favorable at a frequency around 300 kHz than at a low frequency of around 20 kHz. 
Finally, the industrial application of ultrasound to CO2 reduction in water would be largely dependent on sonochemical efficiency. Under the conditions tested, this process does not seem to be sufficiently efficient. Copyright © 2018 Elsevier B.V. All rights reserved.
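A sketch of the bubble-dynamics half of such a model: a bare Rayleigh-Plesset equation for a 5 μm bubble driven at 300 kHz. This omits the heat/mass transfer and chemical kinetics the paper couples to it; the fluid parameters are standard water values and the drive amplitude is chosen arbitrarily to keep the oscillation mild:

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, mu, sigma = 1000.0, 1.0e-3, 0.072   # density, viscosity, surface tension (SI)
p0, kappa = 101325.0, 1.4                # ambient pressure, polytropic exponent
R0 = 5.0e-6                              # initial bubble radius, m (as in the abstract)
f = 300.0e3                              # driving frequency, Hz (as in the abstract)
pa = 0.5 * p0                            # acoustic pressure amplitude (arbitrary)
w = 2.0 * np.pi * f
pg0 = p0 + 2.0 * sigma / R0              # equilibrium gas pressure inside the bubble

def rayleigh_plesset(t, y):
    R, Rdot = y
    p_gas = pg0 * (R0 / R) ** (3.0 * kappa)          # polytropic gas compression
    p_inf = p0 + pa * np.sin(w * t)                  # driving acoustic field
    p_wall = p_gas - 2.0 * sigma / R - 4.0 * mu * Rdot / R
    Rddot = (p_wall - p_inf) / (rho * R) - 1.5 * Rdot ** 2 / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, [0.0, 5.0 / f], [R0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-12, max_step=1.0 / (50.0 * f))
R = sol.y[0]
```

At stronger drive amplitudes the collapse becomes violent and stiff, which is where the paper's time-scale analysis and reduced chemistry model come into play.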
Reduction of initial shock in decadal predictions using a new initialization strategy
NASA Astrophysics Data System (ADS)
He, Yujun; Wang, Bin
2017-04-01
Initial shock is a well-known problem occurring in the early years of a decadal prediction when assimilating full-field observations into a coupled model, which directly affects the prediction skill. To alleviate this problem, we propose a novel full-field initialization method based on dimension-reduced projection four-dimensional variational data assimilation (DRP-4DVar). Different from the available solution strategies including anomaly assimilation and bias correction, it substantially reduces the initial shock by generating more consistent initial conditions for the coupled model, which, along with the model trajectory in one-month windows, best fit the monthly mean analysis data of oceanic temperature and salinity. We evaluate the performance of initialized hindcast experiments according to three proposed indices to measure the intensity of the initial shock. The results indicate that this strategy can obviously reduce the initial shock in decadal predictions by FGOALS-g2 (the Flexible Global Ocean-Atmosphere-Land System model, Grid-point Version 2) compared with the commonly-used nudging full-field initialization for the same model as well as the different full-field initialization strategies for other CMIP5 (the fifth phase of the Coupled Model Intercomparison Project) models whose decadal prediction results are available. It is also comparable to or even better than the anomaly initialization methods. Better hindcasts of global mean surface air temperature anomaly are obtained due to the reduction of initial shock by the new initialization scheme.
Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap.
Spiwok, Vojtěch; Králová, Blanka
2011-12-14
Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and corresponding transition structures inaccessible by an unbiased simulation. This scheme allows essentially any parameter of the system to be used as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general purpose mapping for dimensionality reduction, beyond the context of molecular modeling. © 2011 American Institute of Physics
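The Isomap embedding step can be sketched with scikit-learn on a synthetic manifold standing in for the 72-dimensional conformation coordinates; the swiss roll here is only a stand-in, not molecular data:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# A nonlinearly embedded 2D manifold in 3D, analogous to conformations in 72D
X, t = make_swiss_roll(n_samples=800, random_state=0)

# Isomap approximates geodesic distances on the manifold and embeds them
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# The leading embedding coordinate should track the "unrolled" manifold parameter t
corr = np.corrcoef(emb[:, 0], t)[0, 1]
```

A linear method such as PCA cannot unroll this manifold, which is the point the abstract's opening sentences make about nonlinear collective motions.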
Online dimensionality reduction using competitive learning and Radial Basis Function network.
Tomenko, Vladimir
2011-06-01
A general-purpose dimensionality reduction method should preserve data interrelations at all scales. Additional desired features include online projection of new data, processing of nonlinearly embedded manifolds and large amounts of data. The proposed method, called RBF-NDR, combines these features. RBF-NDR is comprised of two modules. The first module learns manifolds by utilizing modified topology representing networks and geodesic distance in data space and approximates sampled or streaming data with a finite set of reference patterns, thus achieving scalability. Using input from the first module, the dimensionality reduction module constructs mappings between observation and target spaces. Introduction of a specific loss function and synthesis of the training algorithm for the Radial Basis Function network result in global preservation of data structures and online processing of new patterns. The RBF-NDR was applied for feature extraction and visualization and compared with Principal Component Analysis (PCA), neural network for Sammon's projection (SAMANN) and Isomap. With respect to feature extraction, the method outperformed PCA and yielded increased performance of the model describing the wastewater treatment process. As for visualization, RBF-NDR produced superior results compared to PCA and SAMANN and matched Isomap. For the Topic Detection and Tracking corpus, the method successfully separated semantically different topics. Copyright © 2011 Elsevier Ltd. All rights reserved.
How to Compress Sequential Memory Patterns into Periodic Oscillations: General Reduction Rules
Zhang, Kechen
2017-01-01
A neural network with symmetric reciprocal connections always admits a Lyapunov function, whose minima correspond to the memory states stored in the network. Networks with suitable asymmetric connections can store and retrieve a sequence of memory patterns, but the dynamics of these networks cannot be characterized as readily as that of the symmetric networks due to the lack of established general methods. Here, a reduction method is developed for a class of asymmetric attractor networks that store sequences of activity patterns as associative memories, as in a Hopfield network. The method projects the original activity pattern of the network to a low-dimensional space such that sequential memory retrievals in the original network correspond to periodic oscillations in the reduced system. The reduced system is self-contained and provides quantitative information about the stability and speed of sequential memory retrievals in the original network. The time evolution of the overlaps between the network state and the stored memory patterns can also be determined from extended reduced systems. The reduction procedure can be summarized by a few reduction rules, which are applied to several network models, including coupled networks and networks with time-delayed connections, and the analytical solutions of the reduced systems are confirmed by numerical simulations of the original networks. Finally, a local learning rule that provides an approximation to the connection weights involving the pseudoinverse is also presented. PMID:24877729
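The core idea, asymmetric Hebbian couplings that map each stored pattern onto its successor so the network cycles through the sequence, can be sketched as a bare discrete-time caricature (not the continuous reduced system derived in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 4                           # neurons, patterns in the stored cycle
xi = rng.choice([-1.0, 1.0], size=(P, N))

# Asymmetric Hebbian weights mapping each pattern onto its successor (cyclically)
W = sum(np.outer(xi[(mu + 1) % P], xi[mu]) for mu in range(P)) / N

s = xi[0].copy()
overlaps = []
for step in range(1, 9):
    s = np.where(W @ s >= 0.0, 1.0, -1.0)          # synchronous sign update
    overlaps.append(float(s @ xi[step % P]) / N)   # overlap with the expected pattern
```

Tracking the overlaps with the stored patterns, as done here, is exactly the low-dimensional projection that the paper's reduction rules formalize: the N-neuron dynamics collapse to a periodic orbit in overlap space.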
NASA Astrophysics Data System (ADS)
Wang, Yu; Jiang, Wenchun; Luo, Yun; Zhang, Yucai; Tu, Shan-Tung
2017-12-01
The reduction and re-oxidation of the anode have significant effects on the integrity of the solid oxide fuel cell (SOFC) sealed by the glass-ceramic (GC). The mechanical failure is mainly controlled by the stress distribution. Therefore, a three-dimensional model of the SOFC is established to investigate the stress evolution during reduction and re-oxidation by the finite element method (FEM) in this paper, and the failure probability is calculated using the Weibull method. The results demonstrate that the reduction of the anode can decrease the thermal stresses and reduce the failure probability due to the volumetric contraction and porosity increase. The re-oxidation can result in a remarkable increase of the thermal stresses, and the failure probabilities of anode, cathode, electrolyte and GC all increase to 1, which is mainly due to the large linear strain rather than the porosity decrease. The cathode and electrolyte fail as soon as the linear strains reach about 0.03% and 0.07%, respectively. Therefore, the re-oxidation should be controlled to ensure the integrity, and a lower re-oxidation temperature can decrease the stress and failure probability.
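The Weibull failure-probability step can be sketched as a weakest-link sum over element stresses from an FEM solution. The stresses, volumes and Weibull parameters below are invented for illustration, not the paper's values:

```python
import numpy as np

def weibull_failure_probability(stress, volume, m, sigma0, V0=1.0):
    """Weakest-link failure probability from element-wise stresses (tension only)."""
    risk = np.sum(volume * (np.maximum(stress, 0.0) / sigma0) ** m) / V0
    return 1.0 - np.exp(-risk)

# Hypothetical element stresses (MPa) and volumes (mm^3) from FEM post-processing
stress = np.array([40.0, 55.0, 60.0, 35.0, 70.0])
volume = np.array([1.0, 1.2, 0.8, 1.0, 0.5])

pf_reduced    = weibull_failure_probability(stress, volume, m=7.0, sigma0=150.0)
pf_reoxidized = weibull_failure_probability(3.0 * stress, volume, m=7.0, sigma0=150.0)
```

Because the stress enters with exponent m, the moderate stress increase on re-oxidation drives the failure probability rapidly toward 1, matching the behavior described in the abstract.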
Sensitivity Analysis for Probabilistic Neural Network Structure Reduction.
Kowalski, Piotr A; Kusy, Maciej
2018-05-01
In this paper, we propose the use of local sensitivity analysis (LSA) for the structure simplification of the probabilistic neural network (PNN). Three algorithms are introduced. The first applies LSA to the PNN input layer, reducing it by selecting significant features of the input patterns. The second utilizes LSA to remove redundant pattern neurons from the network. The third combines the first two and shows how they can work together. A PNN with a product kernel estimator is used, in which each multiplicand computes a one-dimensional Cauchy function; the smoothing parameter is therefore calculated separately for each dimension by means of the plug-in method. The classification qualities of the reduced and full-structure PNN are compared. Furthermore, we evaluate the performance of PNN when global sensitivity analysis (GSA) and common reduction methods are applied, both in the input layer and the pattern layer. The models are tested on classification problems from eight repository data sets. A 10-fold cross-validation procedure is used to determine the prediction ability of the networks. Based on the obtained results, it is shown that LSA can be used as an alternative PNN reduction approach.
Stacul, Stefano; Squeglia, Nunziante
2018-02-15
A Boundary Element Method (BEM) approach was developed for the analysis of pile groups. The proposed method includes: the non-linear behavior of the soil, via a hyperbolic modulus reduction curve; the non-linear response of reinforced concrete pile sections, taking into account the influence of tension stiffening; the influence of suction, modeled using the Modified Kovacs model by increasing the stiffness of shallow portions of soil; and the pile group shadowing effect, modeled using an approach similar to that proposed in the Strain Wedge Model for pile group analyses. The proposed BEM method saves computational effort compared to more sophisticated codes such as VERSAT-P3D, PLAXIS 3D and FLAC-3D, and provides reliable results using input data from a standard site investigation. The reliability of the method was verified by comparing computed results with data from full-scale and centrifuge tests on single piles and pile groups. A comparison is presented between measured and computed data for a laterally loaded fixed-head pile group composed of reinforced concrete bored piles. The results of the proposed method are shown to be in good agreement with those obtained in situ.
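The hyperbolic modulus reduction curve used for the soil can be sketched as G = G0 / (1 + |γ|/γ_ref); the small-strain modulus and reference strain below are assumed values for illustration:

```python
import numpy as np

def secant_modulus(gamma, g0, gamma_ref):
    """Hyperbolic modulus reduction: G = G0 / (1 + |gamma| / gamma_ref).
    gamma     : shear strain
    g0        : small-strain shear modulus
    gamma_ref : reference strain (assumed curve parameter)"""
    return g0 / (1.0 + np.abs(gamma) / gamma_ref)

g0 = 50e6            # Pa, assumed small-strain shear modulus
gamma_ref = 1e-3     # assumed reference strain
g_small = secant_modulus(0.0, g0, gamma_ref)    # full stiffness at zero strain
g_ref = secant_modulus(1e-3, g0, gamma_ref)     # half of G0 at the reference strain
```

The curve degrades the secant stiffness smoothly with strain amplitude, which is how the BEM approach captures soil non-linearity without a full constitutive model.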
Evaluation of Cost Leadership Strategy in Shipping Enterprises with Simulation Model
NASA Astrophysics Data System (ADS)
Ferfeli, Maria V.; Vaxevanou, Anthi Z.; Damianos, Sakas P.
2009-08-01
The present study attempts to evaluate the cost leadership strategy that prevails in certain shipping enterprises and to create simulation models based on the strategic model STAIR, an alternative method for evaluating strategic applications. The aim is to determine whether the cost leadership strategy creates competitive advantage [1]; this is achieved via simulation, which captures the interactions between the operations of an enterprise and strategic decision-making under conditions of uncertainty while reducing the risk undertaken.
An error reduction algorithm to improve lidar turbulence estimates for wind energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew
Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability.
The accuracy of machine-learning methods in L-TERRA was highly dependent on the input variables and training dataset used, suggesting that machine learning may not be the best technique for reducing lidar turbulence intensity (TI) error. Future work will include the use of a lidar simulator to better understand how different factors affect lidar turbulence error and to determine how these errors can be reduced using information from a stand-alone lidar.
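As a rough illustration of one class of physics-based correction that L-TERRA applies (not its actual implementation), instrument-noise variance can be subtracted from the measured wind-speed variance before forming the turbulence intensity:

```python
import numpy as np

def corrected_turbulence_intensity(u_mean, measured_var, noise_var):
    """Remove instrument-noise variance from the measured wind-speed variance
    before forming turbulence intensity TI = sigma_u / u_mean (a simplified
    sketch of one L-TERRA-style correction, not the actual algorithm)."""
    corrected_var = max(measured_var - noise_var, 0.0)
    return np.sqrt(corrected_var) / u_mean

ti_raw = np.sqrt(0.64) / 8.0                              # TI from raw variance
ti_corr = corrected_turbulence_intensity(8.0, 0.64, 0.09)  # illustrative values
```

Because noise variance adds to the true variance, the uncorrected TI is biased high; removing an estimated noise floor pulls the lidar TI back toward the tower value.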
Simon, Heather; Baker, Kirk R; Akhtar, Farhan; Napelenok, Sergey L; Possiel, Norm; Wells, Benjamin; Timin, Brian
2013-03-05
In setting primary ambient air quality standards, the EPA's responsibility under the law is to establish standards that protect public health. As part of the current review of the ozone National Ambient Air Quality Standard (NAAQS), the US EPA evaluated the health exposure and risks associated with ambient ozone pollution using a statistical approach to adjust recent air quality to simulate just meeting the current standard level, without specifying emission control strategies. One drawback of this purely statistical concentration rollback approach is that it does not take into account spatial and temporal heterogeneity of ozone response to emissions changes. The application of the higher-order decoupled direct method (HDDM) in the community multiscale air quality (CMAQ) model is discussed here to provide an example of a methodology that could incorporate this variability into the risk assessment analyses. Because this approach includes a full representation of the chemical production and physical transport of ozone in the atmosphere, it does not require assumed background concentrations, which have been applied to constrain estimates from past statistical techniques. The CMAQ-HDDM adjustment approach is extended to measured ozone concentrations by determining typical sensitivities at each monitor location and hour of the day based on a linear relationship between first-order sensitivities and hourly ozone values. This approach is demonstrated by modeling ozone responses for monitor locations in Detroit and Charlotte to domain-wide reductions in anthropogenic NOx and VOCs emissions. As seen in previous studies, ozone response calculated using HDDM compared well to brute-force emissions changes up to approximately a 50% reduction in emissions. A new stepwise approach is developed here to apply this method to emissions reductions beyond 50% allowing for the simulation of more stringent reductions in ozone concentrations. 
Compared to previous rollback methods, this application of modeled sensitivities to ambient ozone concentrations provides a more realistic spatial response of ozone concentrations at monitors inside and outside the urban core and at hours of both high and low ozone concentrations.
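The stepwise sensitivity adjustment can be sketched as repeated application of a truncated Taylor expansion; the ozone value, sensitivities, and step size below are illustrative, and unlike the actual method the sensitivities are held fixed between steps:

```python
def adjust_ozone(o3, s1, s2, reduction, step=0.5):
    """Adjust an ozone value for a fractional emissions reduction using first-
    and second-order sensitivities, applied in steps of at most `step` so the
    Taylor expansion is never stretched beyond ~50%. In the full method the
    sensitivities would be re-evaluated along the way; here they are held
    fixed purely for illustration."""
    remaining = reduction
    while remaining > 1e-12:
        d = min(step, remaining)
        o3 = o3 - s1 * d + 0.5 * s2 * d ** 2   # ppb change for a cut of size d
        remaining -= d
    return o3

# Illustrative numbers: 60 ppb ozone; s1, s2 in ppb per unit fractional NOx cut.
adjusted = adjust_ozone(60.0, s1=10.0, s2=4.0, reduction=1.0)
```

Splitting a 100% cut into 50% steps is what lets the expansion remain in the regime where, as the abstract notes, sensitivity-based estimates track brute-force emissions changes well.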
NASA Astrophysics Data System (ADS)
Divakov, D.; Sevastianov, L.; Nikolaev, N.
2017-01-01
The paper deals with a numerical solution of the problem of waveguide propagation of polarized light in a smoothly irregular transition between closed regular waveguides, using the incomplete Galerkin method. The method consists in a Kantorovich-style change of variables that reduces the Helmholtz equation to a system of ordinary differential equations, together with the formulation of boundary conditions for the resulting system. The boundary problem for the ODE system is formulated in the computer algebra system Maple, and the stated boundary problem is solved using Maple's numerical-methods libraries.
Effects of Vibrations on Metal Forming Process: Analytical Approach and Finite Element Simulations
NASA Astrophysics Data System (ADS)
Armaghan, Khan; Christophe, Giraud-Audine; Gabriel, Abba; Régis, Bigot
2011-01-01
Vibration-assisted forming is one of the most recent and beneficial techniques used to improve forming processes. The effects of vibration on metal forming processes can be attributed to two causes. First, the volume effect links the lowering of the yield stress to the influence of vibration on dislocation movement. Second, the surface effect explains the lowering of the effective coefficient of friction by periodic reduction of the contact area. This work is related to vibration-assisted forming in the viscoplastic domain. The impact of a change in vibration waveform has been analyzed. For this purpose, two analytical models have been developed for two different vibration waveforms (sinusoidal and triangular). These models were developed on the basis of the slice method, which is used to find the required forming force for the process. The final relationships show that applying a triangular waveform in the forming process is more beneficial than sinusoidal vibration in terms of reduced forming force. Finite Element Method (FEM) simulations performed using Forge2008® confirmed the results of the analytical models. The ratio of vibration speed to upper die speed is a critical factor in the reduction of the forming force.
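The cycle-averaged force comparison can be sketched numerically for a viscoplastic law F ~ sign(v)|v/v0|^m with a vibration speed superposed on the die speed; the rate-sensitivity exponent and speed ratios are assumed values, and this toy average merely stands in for the full slice-method models:

```python
import numpy as np

def mean_forming_force(waveform, v0, v_amp, m=0.15, n=200000):
    """Cycle-averaged normalized forming force for a viscoplastic law
    F ~ sign(v)|v/v0|^m, with a vibration speed superposed on the die
    speed v0. m is an assumed strain-rate sensitivity exponent."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    if waveform == "sinusoidal":
        v = v0 + v_amp * np.sin(2.0 * np.pi * t)
    else:                                   # zero-mean triangular wave
        v = v0 + v_amp * (2.0 * np.abs(2.0 * t - 1.0) - 1.0)
    return float(np.mean(np.sign(v) * np.abs(v / v0) ** m))

f_sin = mean_forming_force("sinusoidal", v0=1.0, v_amp=3.0)
f_tri = mean_forming_force("triangular", v0=1.0, v_amp=3.0)
f_tri_weak = mean_forming_force("triangular", v0=1.0, v_amp=0.5)
```

The ratio v_amp/v0 controls whether the die speed reverses within a cycle, which is where most of the mean-force reduction comes from; this mirrors the abstract's observation that the vibration-to-die speed ratio is the critical factor.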
Rodovalho, Edmo da Cunha; Lima, Hernani Mota; de Tomi, Giorgio
2016-05-01
The mining operations of loading and haulage have an energy source that is highly dependent on fossil fuels. In mining companies that select trucks for haulage, this input is the main component of mining costs. How can the impact of operational aspects on the diesel consumption of haulage operations in surface mines be assessed? There are many studies relating truck fuel consumption to several variables, but a methodology that prioritizes higher-impact variables under each specific condition is not available. Generic models may not apply to all operational settings presented in the mining industry. This study aims to create a method for the analysis, identification, and prioritization of variables related to the fuel consumption of haul trucks in open pit mines. For this purpose, statistical analysis techniques and mathematical modelling tools using multiple linear regression are applied. The model is shown to be suitable because the results give a good description of the fuel consumption behaviour. In the practical application of the method, the reduction of diesel consumption reached 10%. The implementation requires no large-scale investments or very long deadlines and can be applied to mining haulage operations in other settings.
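The regression-and-prioritization step can be sketched with ordinary least squares on synthetic haulage data; the variables, coefficients, and ranking criterion below are illustrative assumptions, not the study's data or final model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Illustrative operational variables (not the study's data):
payload = rng.uniform(150, 240, n)          # t
grade = rng.uniform(0, 10, n)               # % ramp grade
distance = rng.uniform(1, 6, n)             # km hauled
fuel = 20 + 0.5 * payload + 6.0 * grade + 2.0 * distance + rng.normal(0, 5, n)

# Multiple linear regression via least squares.
X = np.column_stack([np.ones(n), payload, grade, distance])
beta, *_ = np.linalg.lstsq(X, fuel, rcond=None)

# Prioritize variables by standardized effect size |beta_i| * std(x_i).
effects = np.abs(beta[1:]) * X[:, 1:].std(axis=0)
ranking = np.argsort(effects)[::-1]   # indices into (payload, grade, distance)
```

Ranking by standardized effect size, rather than raw coefficients, is one simple way to prioritize the higher-impact variables under a given operational condition.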
Jackson, Charlotte; Mangtani, Punam; Fine, Paul; Vynnycky, Emilia
2014-01-01
Background Changes in children’s contact patterns between termtime and school holidays affect the transmission of several respiratory-spread infections. Transmission of varicella zoster virus (VZV), the causative agent of chickenpox, has also been linked to the school calendar in several settings, but temporal changes in the proportion of young children attending childcare centres may have influenced this relationship. Methods We used two modelling methods (a simple difference equations model and a Time series Susceptible Infectious Recovered (TSIR) model) to estimate fortnightly values of a contact parameter (the per capita rate of effective contact between two specific individuals), using GP consultation data for chickenpox in England and Wales from 1967–2008. Results The estimated contact parameters were 22–31% lower during the summer holiday than during termtime. The relationship between the contact parameter and the school calendar did not change markedly over the years analysed. Conclusions In England and Wales, reductions in contact between children during the school summer holiday lead to a reduction in the transmission of VZV. These estimates are relevant for predicting how closing schools and nurseries may affect an outbreak of an emerging respiratory-spread pathogen. PMID:24932994
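The simple difference-equation approach can be sketched as follows: if C_{t+1} = beta_t * S_t * C_t, the contact parameter for each interval is recovered by division; the case and susceptible counts below are synthetic, not the GP consultation data:

```python
def contact_parameters(cases, susceptibles):
    """Per-capita effective contact parameter beta_t from the simple
    difference-equation model C_{t+1} = beta_t * S_t * C_t (a sketch;
    real estimates need reporting and under-ascertainment corrections)."""
    return [cases[t + 1] / (susceptibles[t] * cases[t])
            for t in range(len(cases) - 1)]

# Synthetic fortnightly series: growth slows in the final 'holiday' interval.
S = [10000, 9900, 9800, 9750]
C = [100, 200, 300, 225]
betas = contact_parameters(C, S)
holiday_reduction = 1.0 - betas[-1] / betas[0]   # fractional drop vs. first interval
```

A fall in the estimated beta during holiday fortnights, relative to termtime, is the signature the study quantifies at 22-31% for the summer holiday.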
Hybrid CMS methods with model reduction for assembly of structures
NASA Technical Reports Server (NTRS)
Farhat, Charbel
1991-01-01
Future on-orbit structures will be designed and built in several stages, each with specific control requirements. Therefore there must be a methodology which can predict the dynamic characteristics of the assembled structure, based on the dynamic characteristics of the subassemblies and their interfaces. The methodology developed by CSC to address this issue is Hybrid Component Mode Synthesis (HCMS). HCMS distinguishes itself from standard component mode synthesis algorithms in the following features: (1) it does not require the subcomponents to have displacement compatible models, which makes it ideal for analyzing the deployment of heterogeneous flexible multibody systems, (2) it incorporates a second-level model reduction scheme at the interface, which makes it much faster than other algorithms and therefore suitable for control purposes, and (3) it does answer specific questions such as 'how does the global fundamental frequency vary if I change the physical parameters of substructure k by a specified amount?'. Because it is based on an energy principle rather than displacement compatibility, this methodology can also help the designer to define an assembly process. Current and future efforts are devoted to applying the HCMS method to design and analyze docking and berthing procedures in orbital construction.
An open-source model and solution method to predict co-contraction in the finger.
MacIntosh, Alexander R; Keir, Peter J
2017-10-01
A novel open-source biomechanical model of the index finger and an electromyography (EMG)-constrained static optimization solution method are developed with the goal of improving co-contraction estimates and providing a means to assess tendon tension distribution through the finger. The Intrinsic model has four degrees of freedom and seven muscles (with a 14-component extensor mechanism). A novel plugin developed for the OpenSim modelling software applied the EMG-constrained static optimization solution method. Ten participants performed static pressing in three finger postures and five dynamic free-motion tasks. Index finger 3D kinematics, force (5, 15, 30 N), and EMG (4 extrinsic muscles and the first dorsal interosseous) were used in the analysis. The Intrinsic model predicted 29% greater co-contraction during static pressing than the existing model. Further, tendon tension distribution patterns and forces, known to be essential to produce finger action, were determined by the model across all postures. The Intrinsic model and custom solution method improved co-contraction estimates to facilitate force propagation through the finger. These tools improve our interpretation of loads in the finger to develop better rehabilitation and workplace injury risk reduction strategies.
Takahashi, Hideaki; Ohno, Hajime; Kishi, Ryohei; Nakano, Masayoshi; Matubayasi, Nobuyuki
2008-11-28
The isoalloxazine ring (flavin ring) is a part of the coenzyme flavin adenine dinucleotide and acts as an active site in the oxidation of a substrate. We have computed the free energy change Δμ_red associated with one-electron reduction of the flavin ring immersed in water by utilizing the recently developed quantum mechanical/molecular mechanical method combined with the theory of energy representation (QM/MM-ER method). As a novel treatment in implementing the QM/MM-ER method, we have identified the excess charge to be attached to the flavin ring as a solute, while the remaining molecules, i.e., the flavin ring and surrounding water molecules, are treated as solvent species. The reduction free energy can then be decomposed into the contribution Δμ_red^QM due to the oxidant described quantum chemically and the free energy Δμ_red^MM due to the water molecules represented by a classical model. As the sum of these contributions, the total reduction free energy Δμ_red has been obtained as -80.1 kcal/mol. To examine the accuracy and efficiency of this approach, we have also conducted the Δμ_red calculation using the conventional scheme in which Δμ_red is constructed from the solvation free energies of the flavin ring in the oxidized and reduced states. The conventional scheme has been implemented with the QM/MM-ER method and the calculated Δμ_red has been estimated as -81.0 kcal/mol, in excellent agreement with the value given by the new approach. The present approach is efficient, in particular, for computing the free energy change of a reaction occurring in a protein, since it enables one to circumvent the numerical problem brought about by subtracting the huge solvation free energies of the protein in the two states before and after the reduction.
Tsantis, Stavros; Spiliopoulos, Stavros; Skouroliakou, Aikaterini; Karnabatidis, Dimitrios; Hazle, John D; Kagadis, George C
2014-07-01
Speckle suppression in ultrasound (US) images of various anatomic structures is performed via a novel speckle noise reduction algorithm. The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify the wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance on the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those of SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures.
A new wavelet-based EFCM clustering model was introduced toward noise reduction and detail preservation. The proposed method improves the overall US image quality, which in turn could affect the decision-making on whether additional imaging and/or intervention is needed.
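The evaluation metrics reported above can be sketched directly; the images below are synthetic (a uniform patch with multiplicative gamma speckle), and the simple linear shrinkage is only a stand-in for the actual wavelet-domain EFCM filtering:

```python
import numpy as np

def psnr(reference, denoised, max_val=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(denoised, float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def speckle_suppression_index(noisy, denoised):
    """SSI = (std/mean of denoised) / (std/mean of noisy); lower is better."""
    return (denoised.std() / denoised.mean()) / (noisy.std() / noisy.mean())

rng = np.random.default_rng(3)
clean = np.full((64, 64), 120.0)                        # uniform 'tissue' patch
speckle = rng.gamma(shape=16.0, scale=1.0 / 16.0, size=clean.shape)
noisy = clean * speckle                                 # multiplicative speckle
denoised = noisy.mean() + 0.3 * (noisy - noisy.mean())  # stand-in for shrinkage
```

A lower SSI and a higher PSNR after filtering are exactly the directions of improvement the study reports for the EFCM method.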
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Nam Lyong; Lee, Sang-Seok; Graduate School of Engineering, Tottori University, 4-101 Koyama-Minami, Tottori
2013-07-15
The projection-reduction method introduced by the present authors is known to give a validated theory for optical transitions in systems of electrons interacting with phonons. In this work, using this method, we derive the linear and first-order nonlinear optical conductivities for an electron-impurity system and examine whether the expressions faithfully satisfy the quantum mechanical philosophy, in the same way as for the electron-phonon systems. The result shows that the Fermi distribution function for electrons, energy denominators, and electron-impurity coupling factors are contained properly, in an organized manner, along with the absorption of photons for each electron transition process in the final expressions. Furthermore, the result is shown to be represented properly by schematic diagrams, as in the formulation of electron-phonon interaction. Therefore, in conclusion, we claim that this method can be applied in modeling optical transitions of electrons interacting with both impurities and phonons.
A new drilling method—Earthworm-like vibration drilling
Wang, Peng; Wang, Ruihe
2018-01-01
The load transfer difficulty caused by borehole wall friction severely limits the penetration rate and extended-reach limit of complex structural wells. A new friction reduction technology termed “earthworm-like drilling” is proposed in this paper to improve the load transfer of complex structural wells. A mathematical model based on a “soft-string” model is developed and solved. The results show that earthworm-like drilling is more effective than single-point vibration drilling. The amplitude and frequency of the pulse pressure and the installation position of the shakers have a substantial impact on friction reduction and load transfer. An optimization model based on the projection gradient method is developed and used to optimize the position of three shakers in a horizontal well. The results verify the feasibility and advantages of earthworm-like drilling, and establish a solid theoretical foundation for its application in oil field drilling. PMID:29641615
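A minimal soft-string-style sketch shows the intended friction-reduction mechanism: drag accumulates element by element along the string, and elements covered by a shaker get an assumed reduced effective friction factor; all numbers are illustrative, not from the paper's model:

```python
import numpy as np

def hookload(normal_forces, mu, vibrated=None, mu_reduction=0.6):
    """Soft-string style drag in a horizontal section: friction accumulates
    element by element, and elements covered by a shaker get an assumed
    reduced effective friction factor (the earthworm-like effect)."""
    normal_forces = np.asarray(normal_forces, float)
    if vibrated is None:
        vibrated = np.zeros(normal_forces.shape, dtype=bool)
    mu_eff = np.where(vibrated, mu * (1.0 - mu_reduction), mu)
    return float(np.sum(mu_eff * normal_forces))   # axial force lost to friction

N_side = np.full(50, 2.0e3)                  # N, element side forces (illustrative)
no_vib = hookload(N_side, mu=0.35)
with_vib = hookload(N_side, mu=0.35,
                    vibrated=np.arange(50) % 10 < 3)  # shakers cover 30% of string
```

Placing the vibrated intervals along the string is the design variable the paper optimizes; in this toy version, covering more elements simply removes more of the accumulated drag.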
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other.
Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
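The "reduce then sample" idea can be sketched with a toy Bayesian inverse problem: build a cheap surrogate of the forward model from snapshots, then run Metropolis sampling against the surrogate; the forward model, the surrogate (plain interpolation rather than projection-based reduction), and the noise level are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def full_model(theta):
    """Stand-in 'expensive' forward model (an assumed form, for illustration)."""
    return np.array([np.sin(theta), theta ** 2, np.exp(-theta)])

# Build a cheap surrogate ("reduce") from snapshots of the full model.
thetas_train = np.linspace(-2.0, 2.0, 21)
snapshots = np.array([full_model(t) for t in thetas_train])

def reduced_model(theta):
    # Linear interpolation in snapshot space -- a crude stand-in for
    # projection-based model reduction.
    return np.array([np.interp(theta, thetas_train, snapshots[:, i])
                     for i in range(3)])

data = full_model(0.7) + 0.05 * rng.normal(size=3)   # synthetic observations

def log_post(theta, forward):
    if abs(theta) > 2.0:                     # flat prior on [-2, 2]
        return -np.inf
    r = data - forward(theta)
    return -0.5 * np.sum(r ** 2) / 0.05 ** 2

# Metropolis sampling ("sample") against the cheap surrogate.
theta, samples = 0.0, []
lp = log_post(theta, reduced_model)
for _ in range(4000):
    prop = theta + 0.1 * rng.normal()
    lp_prop = log_post(prop, reduced_model)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
posterior_mean = float(np.mean(samples[1000:]))
```

Every posterior evaluation calls only the cheap surrogate, which is the source of the large computational savings; as long as the surrogate is faithful over the prior range, the computed posterior is barely affected.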
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models
Cowley, Benjamin R.; Doiron, Brent; Kohn, Adam
2016-01-01
Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction—shared dimensionality and percent shared variance—with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure. PMID:27926936
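The two outputs tracked above (shared dimensionality and percent shared variance) can be approximated from an eigendecomposition of the sample covariance; this crude stand-in for factor analysis uses synthetic activity with a known five-dimensional shared component:

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials, d_shared = 60, 2000, 5

# Synthetic activity: a low-dimensional shared component plus private noise.
L = rng.normal(size=(n_neurons, d_shared))        # loading matrix
z = rng.normal(size=(d_shared, n_trials))         # shared latent factors
private = rng.normal(scale=0.3, size=(n_neurons, n_trials))
X = L @ z + private

cov = np.cov(X)
evals = np.linalg.eigvalsh(cov)[::-1]             # eigenvalues, descending

# Shared dimensionality: eigenvalues needed for 95% of total variance.
cum = np.cumsum(evals) / evals.sum()
dim_est = int(np.searchsorted(cum, 0.95) + 1)

# Percent shared variance: fraction captured by the true shared modes.
percent_shared = evals[:d_shared].sum() / evals.sum()
```

Repeating such an estimate while subsampling neurons and trials is the kind of scaling analysis the study performs; factor analysis proper additionally separates per-neuron private variances rather than lumping them into the eigenvalue floor.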
Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.
Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer
2017-08-16
Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: (a) compare the range in projected impacts that arises from using different adaptation modeling methods; (b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; (c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.
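The two recommended adaptation methods can be sketched with a simple linear-above-threshold exposure-response model. All temperatures, thresholds, slopes, and baseline values below are hypothetical placeholders, not the paper's data; the point is only to show how a threshold shift and a slope reduction each lower projected heat-attributable deaths:

```python
# Linear-above-threshold model of daily heat-attributable deaths.
def heat_deaths(temps, threshold, slope, baseline_deaths=100.0):
    """Excess deaths proportional to degrees above the heat threshold."""
    return sum(baseline_deaths * slope * max(t - threshold, 0.0) for t in temps)

summer_temps = [24.0, 27.5, 31.0, 33.5, 29.0, 35.0, 26.0]  # hypothetical, deg C

no_adapt = heat_deaths(summer_temps, threshold=28.0, slope=0.03)
# Absolute threshold shift: the population tolerates 2 C more before risk rises.
shifted = heat_deaths(summer_temps, threshold=30.0, slope=0.03)
# Slope reduction: risk still starts at 28 C but rises a third more slowly.
flattened = heat_deaths(summer_temps, threshold=28.0, slope=0.02)

print(f"no adaptation:    {no_adapt:.1f}")
print(f"threshold shift:  {shifted:.1f}")
print(f"slope reduction:  {flattened:.1f}")
```

The choice between the two methods matters because a threshold shift removes impacts on moderately hot days entirely, while a slope reduction scales down impacts on every hot day, which is one way impact ranges can diverge between adaptation modeling methods.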
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Pan, X; Stayman, J
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
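The core contrast between analytic and model-based reconstruction can be sketched in a few lines: instead of a single filtered-backprojection step, a model-based method poses reconstruction as an optimization over a forward model of the scanner and solves it iteratively. The toy linear system and regularization below are illustrative stand-ins, not a clinical CT forward model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal sketch of model-based reconstruction: solve the penalized
# least-squares problem  min_x ||A x - y||^2 + lam ||x||^2  by gradient
# descent, where A is a toy linear forward model of the imaging system.
n_pix, n_meas = 64, 96
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
x_true = rng.random(n_pix)
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)   # noisy measurements

lam = 1e-3       # quadratic penalty weight (prior information hook)
step = 0.2       # gradient step size
x = np.zeros(n_pix)
for _ in range(1000):
    grad = A.T @ (A @ x - y) + lam * x
    x -= step * grad

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative reconstruction error:", rel_err)
```

Real systems replace the quadratic penalty with edge-preserving or prior-image terms and the dense matrix with a projector, but the iterative structure (forward project, compare to data, backproject the residual, regularize) is the same, which is also where the computational load discussed above comes from.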
Methods to Improve the Maintenance of the Earth Catalog of Satellites During Severe Solar Storms
NASA Technical Reports Server (NTRS)
Wilkin, Paul G.; Tolson, Robert H.
1998-01-01
The objective of this thesis is to investigate methods to improve the ability to maintain the inventory of orbital elements of Earth satellites during periods of atmospheric disturbance brought on by severe solar activity. Existing techniques do not account for such atmospheric dynamics, resulting in tracking errors of several seconds in predicted crossing time. Two techniques are examined to reduce these tracking errors. First, density predicted from various atmospheric models is fit to the orbital decay rate for a number of satellites. An orbital decay model is then developed that could be used to reduce tracking errors by accounting for atmospheric changes. The second approach utilizes a Kalman filter to estimate the orbital decay rate of a satellite after every observation. The new information is used to predict the next observation. Results from the first approach demonstrated the feasibility of building an orbital decay model based on predicted atmospheric density. Correlation of atmospheric density to orbital decay was as high as 0.88. However, it is clear that contemporary atmospheric models need further improvement in modeling density perturbations in the polar regions brought on by solar activity. The second approach resulted in a dramatic reduction in tracking errors for certain satellites during severe solar storms. For example, in the limited cases studied, the reduction in tracking errors ranged from 25 to 79 percent.
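The second approach (estimating decay rate after every observation and using it to predict the next) maps naturally onto a two-state linear Kalman filter. The sketch below tracks a timing error and its drift rate as a proxy for decay; the state definition, noise levels, and measurement model are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 1.0                       # one orbit between observations (arbitrary units)
F = np.array([[1.0, dt],       # state: [crossing-time error, decay-driven drift rate]
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])     # only the crossing-time error is observed
Q = np.diag([1e-4, 1e-4])      # process noise (storm-driven atmospheric variability)
R = np.array([[0.05**2]])      # measurement noise

x_true = np.array([0.0, 0.3])  # true state with an initial drift rate of 0.3
x = np.zeros(2)                # filter estimate
P = np.eye(2)                  # estimate covariance

for _ in range(200):
    # Truth propagates; variability perturbs the drift rate each orbit.
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0.0, 0.05, size=1)
    # Predict to the next crossing, then update with the new observation.
    x = F @ x
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("estimated drift rate:", x[1], "true (drifting) rate:", x_true[1])
```

Because the drift rate is re-estimated after every pass, the predicted next crossing adapts to storm-driven density changes without an explicit atmospheric model, which is the mechanism behind the error reductions reported above.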
Xiao, Wei; Jin, Xianbo; Deng, Yuan; Wang, Dihua; Hu, Xiaohong; Chen, George Z
2006-08-11
The electrochemical reduction of solid SiO2 (quartz) to Si is studied in molten CaCl2 at 1173 K. Experimental observations agree well with a novel penetration model based on the electrochemistry of the dynamic conductor|insulator|electrolyte three-phase interlines. The findings show that the reduction of a cylindrical quartz pellet at certain potentials is mainly determined by the diffusion of the O(2-) ions and by the ohmic polarisation in the reduction-generated porous silicon layer. The reduction rate increases with the overpotential to a maximum, after which the process is retarded, most likely due to precipitation of CaO in the reaction region (cathodic passivation). Data are reported on the reduction rate, current efficiency, and energy consumption during the electroreduction of quartz under potentiostatic conditions. These theoretical and experimental findings form the basis for an in-depth discussion on the optimisation of the electroreduction method for the production of silicon.
78 FR 18322 - Marine Mammals; File No. 17751
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
... reduction of sea ice in the Arctic with the goal of developing predictive ecosystem models. Research methods... applied in due form for a permit to conduct research on gray (Eschrichtius robustus) and killer (Orcinus..., Chukchi Sea, and Arctic Ocean. The objectives of the research are to examine the distribution and movement...
Role of Metabolomics in Environmental Chemical Exposure and Risk Assessment
The increasing demand for the reduction, replacement, and refinement of the use of animal models in exposure assessments has stimulated the pursuit of alternative methods. This has included not only the use of the in vitro systems (e.g., cell cultures) in lieu of in vivo whole an...
A method is presented and applied for evaluating an air quality model’s changes in pollutant concentrations stemming from changes in emissions while explicitly accounting for the uncertainties in the base emission inventory. Specifically, the Community Multiscale Air Quality (CMA...