Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua
2018-02-01
A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is obtained automatically. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform-refinement CVP method, at a lower computational cost. Two well-known flight level altitude tracking problems and one minimum-time cost problem are tested as illustrations, with the uniform-refinement CVP method adopted as the comparative baseline. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost, while the control quality is efficiently improved.
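As a rough illustration of the grid-refinement idea, the sketch below estimates an instantaneous frequency from the analytic signal of a trial control trajectory and concentrates grid points where that frequency is high. This is a minimal sketch, not the paper's algorithm: a full HHT would first decompose the signal into intrinsic mode functions, and the function name refine_grid and the density heuristic are hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

def refine_grid(t, u, base_points=10, max_extra=40):
    """Place more grid points where the control signal u(t) varies fast."""
    analytic = hilbert(u)                      # analytic signal u + i*H[u]
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.abs(np.gradient(phase, t))  # instantaneous frequency
    # Point density proportional to local frequency; invert its CDF to
    # obtain a non-uniform grid (heuristic stand-in for the HHT criterion)
    density = inst_freq + inst_freq.mean() + 1e-12
    cdf = np.cumsum(density)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
    n = base_points + max_extra
    return np.interp(np.linspace(0.0, 1.0, n), cdf, t)

t = np.linspace(0.0, 10.0, 2001)
u = np.sin(t) + 0.5 * np.sin(8.0 * t**1.5)     # toy control trajectory
print(refine_grid(t, u)[:5])                   # grid clusters where u oscillates
```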
A Fast Vector Radiative Transfer Model for Atmospheric and Oceanic Remote Sensing
NASA Astrophysics Data System (ADS)
Ding, J.; Yang, P.; King, M. D.; Platnick, S. E.; Meyer, K.
2017-12-01
A fast vector radiative transfer model is developed in support of atmospheric and oceanic remote sensing. This model is capable of simulating the Stokes vector observed at the top of the atmosphere (TOA) and at the terrestrial surface by considering absorption, scattering, and emission. The gas absorption is parameterized in terms of atmospheric gas concentrations, temperature, and pressure. The parameterization scheme combines a regression method and the correlated-k distribution method, and can easily be integrated with multiple-scattering computations. The approach is more than four orders of magnitude faster than a line-by-line radiative transfer model, with errors of less than 0.5% in terms of transmissivity. A two-component approach is utilized to solve the vector radiative transfer equation (VRTE). The VRTE solver separates the phase matrices of aerosol and cloud into forward and diffuse parts, and thus the solution is also separated. The forward solution can be expressed by a semi-analytical equation based on the small-angle approximation, and serves as the source of the diffuse part. The diffuse part is solved by the adding-doubling method. The adding-doubling implementation is computationally efficient because the diffuse component needs far fewer spherical function expansion terms. The simulated Stokes vectors at both the TOA and the surface have accuracy comparable to counterparts based on numerically rigorous methods.
Liu, Ping; Li, Guodong; Liu, Xinggao
2015-09-01
Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the differential equations embedded in the generated nonlinear programming (NLP) problem, limits its wide application. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve optimization efficiency for industrial dynamic processes; it employs costate gradient formulae and a fast approximate scheme for solving the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustrations. The results show that the proposed fast approach saves at least 90% of the computation time compared with the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes.
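For readers unfamiliar with CVP itself, the sketch below shows the plain (not fast) transcription: the control is held piecewise constant on a time grid, the state equations are integrated numerically, and the resulting finite-dimensional NLP is handed to a generic solver. The toy dynamics and the SciPy-based setup are illustrative assumptions, not the paper's benchmark problems.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Toy problem: minimize J = integral(x1^2 + u^2) dt with x1' = u, x1(0) = 1,
# accumulated in a second state x2 so that J = x2(T).
def simulate(params, T=1.0):
    grid = np.linspace(0.0, T, len(params) + 1)
    def rhs(t, x):
        # Piecewise-constant control: look up the active interval
        i = min(np.searchsorted(grid, t, side='right') - 1, len(params) - 1)
        u = params[i]
        return [u, x[0]**2 + u**2]
    sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1]          # objective value J

# The NLP over the 8 control parameters (gradients by finite differences)
res = minimize(simulate, x0=np.zeros(8), method='SLSQP')
print(res.x)                     # optimal piecewise-constant control levels
```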
Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Park, Michael A.
2006-01-01
An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
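The cost structure described here can be seen in a small linear model problem: one adjoint solve replaces m forward sensitivity solves, after which each design-variable derivative is an explicit matrix-vector product. The quadratic objective, random matrices, and variable names below are hypothetical; only the adjoint identity dJ/dd_i = -lambda^T (dK/dd_i) u is the point.

```python
import numpy as np

# Model problem: K(d) u = f with K = K0 + sum_i d_i * K_i, objective J = q @ u
rng = np.random.default_rng(0)
n, m = 6, 3
K0 = np.eye(n) * 4.0
Ki = [rng.standard_normal((n, n)) * 0.1 for _ in range(m)]
f, q = rng.standard_normal(n), rng.standard_normal(n)
d = rng.standard_normal(m)

K = K0 + sum(di * Kd for di, Kd in zip(d, Ki))
u = np.linalg.solve(K, f)

# Adjoint: one extra linear solve yields the gradient for all m variables
lam = np.linalg.solve(K.T, q)
grad = np.array([-lam @ (Kd @ u) for Kd in Ki])

# Finite-difference check of the first gradient component
eps = 1e-6
dp = d.copy(); dp[0] += eps
Kp = K0 + sum(di * Kd for di, Kd in zip(dp, Ki))
print(grad[0], (q @ np.linalg.solve(Kp, f) - q @ u) / eps)
```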
A linear-RBF multikernel SVM to classify big text corpora.
Romero, R; Iglesias, E L; Borrajo, L
2015-01-01
Support vector machines (SVMs) are a powerful technique for classification. However, SVMs are not well suited to classifying large datasets or text corpora, because their training complexity is highly dependent on the input size. Recent developments in the literature on SVMs and other kernel methods emphasize the need to consider multiple kernels, or parameterizations of kernels, because they provide greater flexibility. This paper presents a multikernel SVM for managing high-dimensional data, providing an automatic parameterization with low computational cost and improving results over SVMs parameterized by a brute-force search. The model consists of spreading the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while training is significantly faster than for several other SVM classifiers.
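A minimal sketch of the multikernel idea, assuming scikit-learn: a convex combination of a linear and an RBF Gram matrix is itself a valid kernel and can be passed to SVC as a callable. The weighting and the synthetic data are illustrative; the paper's term-slice clustering construction is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def multi_kernel(gamma=0.1, w=0.5):
    # Convex combination of a linear and an RBF kernel (still a valid kernel)
    def k(X, Y):
        lin = X @ Y.T
        sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return w * lin + (1.0 - w) * np.exp(-gamma * sq)
    return k

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel=multi_kernel(gamma=0.05, w=0.3)).fit(Xtr, ytr)
print(clf.score(Xte, yte))
```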
A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Garg, Devendra P.
1998-01-01
This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.
Flow Charts: Visualization of Vector Fields on Arbitrary Surfaces
Li, Guo-Shi; Tricoche, Xavier; Weiskopf, Daniel; Hansen, Charles
2009-01-01
We introduce a novel flow visualization method called Flow Charts, which uses a texture atlas approach for the visualization of flows defined over curved surfaces. In this scheme, the surface and its associated flow are segmented into overlapping patches, which are then parameterized and packed in the texture domain. This scheme allows accurate particle advection across multiple charts in the texture domain, providing a flexible framework that supports various flow visualization techniques. The use of surface parameterization enables flow visualization techniques requiring the global view of the surface over long time spans, such as Unsteady Flow LIC (UFLIC), particle-based Unsteady Flow Advection Convolution (UFAC), or dye advection. It also prevents visual artifacts normally associated with view-dependent methods. Represented as textures, Flow Charts can be naturally integrated into hardware accelerated flow visualization techniques for interactive performance. PMID:18599918
Godino-Llorente, J I; Gómez-Vilda, P
2004-02-01
It is well known that vocal and voice diseases do not necessarily cause perceptible changes in the acoustic voice signal. Acoustic analysis is a useful tool to diagnose voice diseases, being a complementary technique to other methods based on direct observation of the vocal folds by laryngoscopy. In the present paper, two neural-network-based classification approaches applied to the automatic detection of voice disorders are studied. The structures studied are the multilayer perceptron and learning vector quantization, fed with short-term vectors calculated according to the well-known Mel-frequency cepstral coefficient parameterization. The paper shows that these architectures allow the detection of voice disorders--including glottic cancer--under highly reliable conditions. Within this context, the learning vector quantization methodology proved more reliable than the multilayer perceptron architecture, yielding 96% frame accuracy under similar working conditions.
Circular Conditional Autoregressive Modeling of Vector Fields.
Modlin, Danny; Fuentes, Montse; Reich, Brian
2012-02-01
As hurricanes approach landfall, there are several hazards for which coastal populations must be prepared. Damaging winds, torrential rains, and tornadoes play havoc with both the coast and inland areas; but the biggest seaside menace to life and property is the storm surge. Wind fields are used as the primary forcing for numerical forecasts of the coastal ocean response to hurricane-force winds, such as the height of the storm surge and the degree of coastal flooding. Unfortunately, developments in deterministic modeling of these forcings have been hindered by computational expense. In this paper, we present a multivariate spatial model for vector fields, which we apply to hurricane winds. We parameterize the wind vector at each site in polar coordinates and specify a circular conditional autoregressive (CCAR) model for the vector direction, and a spatial CAR model for speed. We apply our framework for vector fields to hurricane surface wind fields for Hurricane Floyd of 1999 and compare our CCAR model to prior methods that decompose the wind vector into its N-S and W-E cardinal components.
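The polar re-parameterization at the heart of the model is a one-line change of coordinates, sketched below with NumPy; the CAR and CCAR priors would then be placed on speed and direction respectively. The sample values are illustrative.

```python
import numpy as np

# Wind given as eastward (u) and northward (v) components at each site
u = np.array([3.0, -1.2, 5.5])
v = np.array([4.0,  2.5, -0.5])

speed = np.hypot(u, v)          # non-negative: modeled with a spatial CAR prior
direction = np.arctan2(v, u)    # circular variable: modeled with a CCAR prior

# Back-transform after modeling in polar coordinates
u_back = speed * np.cos(direction)
v_back = speed * np.sin(direction)
print(np.allclose(u, u_back), np.allclose(v, v_back))
```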
The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware
NASA Astrophysics Data System (ADS)
Kathiara, Jainik
There has been increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating point operations. Due to the complexity and expense of floating point hardware, these algorithms are usually converted to fixed point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed function embedded blocks are added to FPGAs, and hence implementation of floating point hardware becomes a feasible option. In this research we have implemented a high performance, autonomous floating point vector coprocessor (FPVC) that works independently within an embedded processor system. We present a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of the FPVC results in greater overall performance for most applications, along with improved peak performance compared to other approaches. By parameterizing vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also initiated the design of a software library for various computational kernels, each of which adapts the FPVC's configuration to provide maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of the FPVC on a Xilinx Virtex 5 using the embedded PowerPC.
Patch-based image reconstruction for PET using prior-image derived dictionaries
NASA Astrophysics Data System (ADS)
Tahaei, Marzieh S.; Reader, Andrew J.
2016-09-01
In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
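A minimal sketch of building a patch-based dictionary from a prior image, assuming scikit-learn's patch extractor; the sparse MLEM/ADMM coefficient estimation of the paper is not shown, and the array shapes are illustrative.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
mr_image = rng.random((64, 64))          # stand-in for the subject's MR prior

# Extract patches and stack them as columns of a (hypothetical) dictionary
patches = extract_patches_2d(mr_image, patch_size=(8, 8),
                             max_patches=500, random_state=0)
D = patches.reshape(len(patches), -1).T  # each column: one basis vector
D /= np.linalg.norm(D, axis=0)           # normalize the basis vectors
print(D.shape)                           # (64, 500): 8x8 patches as columns
```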
Parameterized post-Newtonian cosmology
NASA Astrophysics Data System (ADS)
Sanghai, Viraj A. A.; Clifton, Timothy
2017-03-01
Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC).
Normalized Implicit Radial Models for Scattered Point Cloud Data without Normal Vectors
2009-03-23
Betatron motion with coupling of horizontal and vertical degrees of freedom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lebedev, V. A.; Bogacz, S. A.
Presently, there are two frequently used parameterizations of linear x-y coupled motion in accelerator physics: the Edwards-Teng and Mais-Ripken parameterizations. The article is devoted to an analysis of the close relationship between the two representations, thus adding clarity to their physical meaning. It also discusses the relationship between the eigen-vectors, the beta-functions, second-order moments and the bilinear form representing the particle ellipsoid in the 4D phase space. It then considers a further development of the Mais-Ripken parameterization in which the particle motion is described by ten parameters: four beta-functions, four alpha-functions and two betatron phase advances. In comparison with the Edwards-Teng parameterization, the chosen parameterization has the advantage that it works equally well for the analysis of coupled betatron motion in circular accelerators and in transfer lines. The relationships among second-order moments, eigen-vectors and beta-functions can be useful in interpreting tracking results and experimental data. As an example, the developed formalism is applied to the FNAL electron cooler and Derbenev's vertex-to-plane adapter.
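The objects involved can be made concrete with a few lines of NumPy: a one-turn transfer matrix is symplectic with respect to the 4x4 form J, and its eigen-phases carry the betatron tunes. The toy matrix below is uncoupled (a real machine adds coupling blocks), so it only illustrates the symplecticity check and tune extraction.

```python
import numpy as np

# 4x4 symplectic form for (x, x', y, y') phase space
S2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
J = np.block([[S2, np.zeros((2, 2))], [np.zeros((2, 2)), S2]])

def one_turn_matrix(mux, muy):
    """Toy uncoupled one-turn matrix; coupling would add off-diagonal blocks."""
    def rot(mu):
        c, s = np.cos(mu), np.sin(mu)
        return np.array([[c, s], [-s, c]])
    M = np.zeros((4, 4))
    M[:2, :2], M[2:, 2:] = rot(mux), rot(muy)
    return M

M = one_turn_matrix(2 * np.pi * 0.28, 2 * np.pi * 0.31)
print(np.allclose(M.T @ J @ M, J))     # symplecticity check: M^T J M = J
vals, vecs = np.linalg.eig(M)          # eigen-vectors carry the optics
print(np.angle(vals) / (2 * np.pi))    # betatron tunes: +-0.28, +-0.31
```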
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by 'splitting' a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
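The offline/online split can be sketched in a few lines, assuming scikit-learn's KMeans: state variables are clustered from snapshot data offline, and a basis vector is 'split' online into children with disjoint support that sum back to the parent. The tree recursion and the dual-weighted-residual selection of the paper are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
snapshots = rng.random((200, 30))    # 200 state variables, 30 snapshots
phi = rng.random(200)                # one reduced-basis vector

# Offline: cluster state variables by their rows of the snapshot matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(snapshots)

# Online 'split': children with disjoint support summing to the parent
children = [np.where(labels == c, phi, 0.0) for c in range(2)]
print(np.allclose(sum(children), phi))   # parent is recovered exactly
```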
Zhang, Jie; Xiao, Wendong; Zhang, Sen; Huang, Shoudong
2017-04-17
Device-free localization (DFL) is becoming one of the new technologies in the wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In the radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from the conventional machine learning approaches for wireless localization, in which the above differential RSS measurements are trivially used as the only input features, we introduce a parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of the ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantages that the affected links used by the ELM in the online phase can be different from those used for training in the offline phase, and it is more robust in dealing with uncertain combinations of the detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach.
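A minimal ELM, independent of the geometrical feature extraction above: hidden-layer weights are drawn at random and only the output weights are fitted, by a single least-squares solve, which is what makes ELM training fast. The feature and target shapes below are placeholders for the link features and target coordinates.

```python
import numpy as np

def elm_train(X, Y, n_hidden=64, seed=0):
    """Minimal extreme learning machine: random hidden layer, linear solve."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                        # random feature map
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights in one shot
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.random((500, 6))     # e.g. link intercepts + differential RSS features
Y = rng.random((500, 2))     # target coordinates (x, y)
W, b, beta = elm_train(X, Y)
print(elm_predict(X[:3], W, b, beta))
```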
A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem
NASA Technical Reports Server (NTRS)
Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.
2003-01-01
In this paper we present a comparison of trajectory optimization approaches for the minimum-fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), quasi-Newton, and Nelder-Mead simplex. Several cost function parameterizations are considered for the direct approach. We choose one direct approach that appears to be the most flexible. Both the direct and indirect methods are applied to a variety of test cases chosen to demonstrate the performance of each method in different flight regimes. The first test case is a simple circular-to-circular coplanar rendezvous. The second test case is an elliptic-to-elliptic line-of-apsides rotation. The final test case is an orbit phasing maneuver sequence in a highly elliptic orbit. For each test case we present a comparison of the performance of all methods considered in this paper.
Betatron motion with coupling of horizontal and vertical degrees of freedom
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. A. Bogacz; V. A. Lebedev
2002-11-21
The Courant-Snyder parameterization of one-dimensional linear betatron motion is generalized to two-dimensional coupled linear motion. To represent the 4 x 4 symplectic transfer matrix, the following ten parameters were chosen: four beta-functions, four alpha-functions and two betatron phase advances, which have a meaning similar to the Courant-Snyder parameterization. Such a parameterization works equally well for weak and strong coupling and can be useful for analysis of coupled betatron motion in circular accelerators as well as in transfer lines. Similarly, the transfer matrix, the bilinear form describing the phase space ellipsoid and the second-order moments are related to the eigen-vectors. The corresponding equations can be useful in interpreting tracking results and experimental data.
Actinide electronic structure and atomic forces
NASA Astrophysics Data System (ADS)
Albers, R. C.; Rudin, Sven P.; Trinkle, Dallas R.; Jones, M. D.
2000-07-01
We have developed a new method[1] of fitting tight-binding parameterizations based on functional forms developed at the Naval Research Laboratory.[2] We have applied these methods to actinide metals and report our success using them (see below). The fitting procedure uses first-principles local-density-approximation (LDA) linear augmented plane-wave (LAPW) band structure techniques[3] to first calculate the electronic band structure and total energy for fcc, bcc, and simple cubic crystal structures for the actinide of interest. The tight-binding parameterization is then chosen to fit the detailed energy eigenvalues of the bands along symmetry directions, and the symmetry of the parameterization is constrained to agree with the correct symmetry of the LDA band structure at each eigenvalue and k-vector that is fitted. By fitting to a range of different volumes and the three different crystal structures, we find that the resulting parameterization is robust and appears to accurately calculate other crystal structures and properties of interest.
NASA Astrophysics Data System (ADS)
Awatey, M. T.; Irving, J.; Oware, E. K.
2016-12-01
Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple equally plausible geologic features that honor the limited noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the iterations progress, starting from the coefficients corresponding to the highest-ranked basis vectors and moving toward those of the least informative ones. We found this gradual increment in the sampling window to be more stable than resampling all the coefficients from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs, whereas bimodality in plume morphology was not theorized. We show that uncertainty quantification using McMC can proceed in the reduced-dimensionality space while accounting for the physics of the underlying process.
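A minimal sketch of the POD step with NumPy's SVD: the training-image ensemble is centered, the right singular vectors form the basis, and a truncation rank is chosen from the cumulative energy; a model is then projected to coefficients and reconstructed. The 95% energy threshold and array sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
TIs = rng.random((40, 1000))       # 40 training images, 1000 cells each

mean = TIs.mean(axis=0)
U, s, Vt = np.linalg.svd(TIs - mean, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.95)) + 1  # rank capturing 95% variance
basis = Vt[:r]                              # POD basis vectors (r x 1000)

model = rng.random(1000)                    # a starting model
coeff = basis @ (model - mean)              # project into reduced space
recon = mean + basis.T @ coeff              # reconstruct in full space
print(r, np.linalg.norm(model - recon))     # rank and truncation error
```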
A description of rotations for DEM models of particle systems
NASA Astrophysics Data System (ADS)
Campello, Eduardo M. B.
2015-06-01
In this work, we show how a vector parameterization of rotations can be adopted to describe the rotational motion of particles within the framework of the discrete element method (DEM). It is based on the use of a special rotation vector, called Rodrigues rotation vector, and accounts for finite rotations in a fully exact manner. The use of fictitious entities such as quaternions or complicated structures such as Euler angles is thereby circumvented. As an additional advantage, stick-slip friction models with inter-particle rolling motion are made possible in a consistent and elegant way. A few examples are provided to illustrate the applicability of the scheme. We believe that simple vector descriptions of rotations are very useful for DEM models of particle systems.
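For reference, the axis-angle form of the Rodrigues rotation formula is a few lines of NumPy. Note that the paper's Rodrigues parameter carries a tan(theta/2) scaling; the sketch below uses the plain axis-angle vector, which encodes the same rotation.

```python
import numpy as np

def rotate(v, r):
    """Rotate vector v by the rotation vector r (axis * angle, radians)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return v.copy()                    # negligible rotation
    n = r / theta                          # unit rotation axis
    return (v * np.cos(theta)
            + np.cross(n, v) * np.sin(theta)
            + n * (n @ v) * (1.0 - np.cos(theta)))

v = np.array([1.0, 0.0, 0.0])
r = np.array([0.0, 0.0, np.pi / 2])        # 90 degrees about z
print(rotate(v, r))                        # ~ [0, 1, 0]
```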
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man
2015-06-01
Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ << 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 - σ in order to unify the parameterization for the full range of model resolutions so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 - σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
Stochastic Convection Parameterizations: The Eddy-Diffusivity/Mass-Flux (EDMF) Approach (Invited)
NASA Astrophysics Data System (ADS)
Teixeira, J.
2013-12-01
In this presentation it is argued that moist convection parameterizations need to be stochastic in order to be realistic - even in deterministic atmospheric prediction systems. A new unified convection and boundary layer parameterization (EDMF) that optimally combines the Eddy-Diffusivity (ED) approach for smaller-scale boundary layer mixing with the Mass-Flux (MF) approach for larger-scale plumes is discussed. It is argued that for realistic simulations stochastic methods have to be employed in this new unified EDMF. Positive results from the implementation of the EDMF approach in atmospheric models are presented.
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
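The 'convex subproblem' structure is easy to see in code: once the data parameterization and knot vector are fixed (the parts the firefly algorithm and De Boor refinement search over), fitting the spline is a linear least-squares problem. The sketch below, assuming SciPy's make_lsq_spline, solves that inner problem for a fixed uniform knot vector.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)
x = np.sort(rng.random(200))                      # fixed data parameterization
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(200)

k = 3                                             # cubic spline
interior = np.linspace(0.0, 1.0, 10)[1:-1]        # fixed interior knots
t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]

# With x and t fixed, the control points solve a linear least-squares problem
spl = make_lsq_spline(x, y, t, k=k)
print(np.mean((spl(x) - y) ** 2))                 # residual of the inner fit
```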
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.
1997-01-01
A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and a Gauss-Seidel algorithm for the three-dimensional geometry; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (which is an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization have been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.
3D surface parameterization using manifold learning for medial shape representation
NASA Astrophysics Data System (ADS)
Ward, Aaron D.; Hamarneh, Ghassan
2007-03-01
The choice of 3D shape representation for anatomical structures determines the effectiveness with which segmentation, visualization, deformation, and shape statistics are performed. Medial axis-based shape representations have attracted considerable attention due to their inherent ability to encode information about the natural geometry of parts of the anatomy. In this paper, we propose a novel approach, based on nonlinear manifold learning, to the parameterization of medial sheets and object surfaces based on the results of skeletonization. For each single-sheet figure in an anatomical structure, we skeletonize the figure, and classify its surface points according to whether they lie on the upper or lower surface, based on their relationship to the skeleton points. We then perform nonlinear dimensionality reduction on the skeleton, upper, and lower surface points, to find the intrinsic 2D coordinate system of each. We then center a planar mesh over each of the low-dimensional representations of the points, and map the meshes back to 3D using the mappings obtained by manifold learning. Correspondence between mesh vertices, established in their intrinsic 2D coordinate spaces, is used in order to compute the thickness vectors emanating from the medial sheet. We show results of our algorithm on real brain and musculoskeletal structures extracted from MRI, as well as an artificial multi-sheet example. The main advantages to this method are its relative simplicity and noniterative nature, and its ability to correctly compute nonintersecting thickness vectors for a medial sheet regardless of both the amount of coincident bending and thickness in the object, and of the incidence of local concavities and convexities in the object's surface.
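As a stand-in for the paper's manifold-learning step, the sketch below flattens a curved 3D point sheet to intrinsic 2D coordinates with Isomap from scikit-learn; the skeletonization, upper/lower surface classification, and mesh mapping are not shown, and the particular manifold learner is an assumption.

```python
import numpy as np
from sklearn.manifold import Isomap

# Toy 'medial sheet': a bent 3D surface sampled as a point cloud
rng = np.random.default_rng(0)
a, b = rng.random(400), rng.random(400)
pts = np.c_[a, b, 0.3 * np.sin(3 * a)]   # curved sheet embedded in 3D

# Nonlinear dimensionality reduction to the intrinsic 2D coordinates
uv = Isomap(n_neighbors=10, n_components=2).fit_transform(pts)
print(uv.shape)                           # (400, 2) chart coordinates
```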
Betatron motion with coupling of horizontal and vertical degrees of freedom
Lebedev, V. A.; Bogacz, S. A.
2010-10-21
Presently, there are two frequently used parameterizations of linear x-y coupled motion in accelerator physics: the Edwards-Teng and Mais-Ripken parameterizations. The article is devoted to an analysis of the close relationship between the two representations, thus adding clarity to their physical meaning. It also discusses the relationship between the eigen-vectors, the beta-functions, second-order moments and the bilinear form representing the particle ellipsoid in the 4D phase space. It then considers a further development of the Mais-Ripken parameterization in which the particle motion is described by ten parameters: four beta-functions, four alpha-functions and two betatron phase advances. In comparison with the Edwards-Teng parameterization, the chosen parameterization has the advantage that it works equally well for the analysis of coupled betatron motion in circular accelerators and in transfer lines. In addition, the relationship considered between second-order moments, eigen-vectors and beta-functions can be useful in interpreting tracking results and experimental data. As an example, the developed formalism is applied to the FNAL electron cooler and Derbenev's vertex-to-plane adapter.
Barbu, Corentin; Dumonteil, Eric; Gourbière, Sébastien
2010-01-01
Background: Chagas disease is a major parasitic disease in Latin America, prevented in part by vector control programs that reduce domestic populations of triatomines. However, the design of control strategies adapted to non-domiciliated vectors, such as Triatoma dimidiata, remains a challenge because it requires an accurate description of their spatio-temporal distributions, and a proper understanding of the underlying dispersal processes. Methodology/Principal Findings: We combined extensive spatio-temporal data sets describing house infestation dynamics by T. dimidiata within a village, and spatially explicit population dynamics models in a selection model approach. Several models were implemented to provide theoretical predictions under different hypotheses on the origin of the dispersers and their dispersal characteristics, which we compared with the spatio-temporal pattern of infestation observed in the field. The best models fitted the dynamic of infestation described by a one-year time series, and also predicted with very good accuracy the infestation process observed during a second replicate one-year time series. The parameterized models gave key insights into the dispersal of these vectors: i) about 55% of the triatomines infesting houses came from the peridomestic habitat, the rest corresponding to immigration from the sylvatic habitat, ii) dispersing triatomines were 5-15 times more attracted by houses than by the peridomestic area, and iii) the moving individuals spread on average over rather small distances, typically 40-60 m/15 days. Conclusion/Significance: Since these dispersal characteristics are associated with much higher abundance of insects in the periphery of the village, we discuss the possibility that spatially targeted interventions allow for optimizing the efficacy of vector control activities within villages. Such optimization could prove very useful in the context of limited resources devoted to vector control. PMID:20689823
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
Approximate optimal tracking control for near-surface AUVs with wave disturbances
NASA Astrophysics Data System (ADS)
Yang, Qing; Su, Hao; Tang, Gongyou
2016-10-01
This paper considers the optimal trajectory tracking control problem for near-surface autonomous underwater vehicles (AUVs) in the presence of wave disturbances. An approximate optimal tracking control (AOTC) approach is proposed. Firstly, a six-degrees-of-freedom (six-DOF) AUV model in its body-fixed coordinate system is decoupled and simplified, and a nonlinear control model of AUVs in the vertical plane is then given. An exosystem model of wave disturbances is also constructed based on the Hirom approximation formula. Secondly, the time-parameterized desired trajectory to be tracked by the AUV is represented by the exosystem. The coupled two-point boundary value (TPBV) problem of optimal tracking control for AUVs is then derived from the theory of quadratic optimal control. By using a recently developed successive approximation approach to construct sequences, the coupled TPBV problem is transformed into a problem of solving two decoupled linear differential sequences of state vectors and adjoint vectors. By iteratively solving the two equation sequences, the AOTC law is obtained, which consists of a nonlinear optimal feedback term, an expected output tracking term, a feedforward disturbance rejection term, and a nonlinear compensatory term. Furthermore, a wave disturbance observer model is designed in order to solve the physical realizability problem. Simulation is carried out using the REMUS AUV model to demonstrate the effectiveness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark
2018-04-01
Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matter; such models express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including, but not restricted to, civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator parameterization) and the inherent lack of knowledge in areas where there are no observations, combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making, it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper's focus is set on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole vector sampling is proposed as a more rigorous alternative than dip vector sampling for planar features, and the use of a Bayesian approach to disturbance distribution parameterization is suggested. The influence of incorrect disturbance distributions is discussed, and propositions are made and evaluated on synthetic and realistic cases to address the issues identified. The distribution of the errors of the observed data (i.e., scedasticity) is shown to affect the quality of prior distributions for MCUE. Results demonstrate that the proposed workflows improve the reliability of uncertainty estimation and diminish the occurrence of artifacts.
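The MCUE loop itself is compact, as the hypothetical sketch below shows: each draw perturbs the input pole vectors from a disturbance distribution, a model is rebuilt, and the plausible models are merged into a probabilistic one. The Gaussian small-angle perturbation stands in for the von Mises-Fisher-type distributions discussed in the paper, and build_model is a placeholder for the implicit modeling engine.

```python
import numpy as np

def perturb_pole(pole, sigma_deg=5.0, rng=None):
    """Small-angle Gaussian perturbation of a unit pole vector (a simple
    stand-in for a von Mises-Fisher-type disturbance distribution)."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy = pole + np.radians(sigma_deg) * rng.standard_normal(3)
    return noisy / np.linalg.norm(noisy)

def mcue(poles, build_model, n_draws=200, seed=0):
    rng = np.random.default_rng(seed)
    models = [build_model([perturb_pole(p, rng=rng) for p in poles])
              for _ in range(n_draws)]
    return np.mean(models, axis=0)          # probabilistic merged model

# Hypothetical usage: build_model maps perturbed poles to a model output
poles = [np.array([0.0, 0.0, 1.0])]
toy_model = lambda ps: float(ps[0][2] > 0.995)   # placeholder engine
print(mcue(poles, toy_model))               # fraction of draws near vertical
```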
NASA Technical Reports Server (NTRS)
Freitas, Saulo R.; Grell, Georg; Molod, Andrea; Thompson, Matthew A.
2017-01-01
We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, mid, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over the land. Here, we briefly introduce the recent developments, implementation, and preliminary results of this parameterization in the NASA GEOS modeling system.
Stochastic control system parameter identifiability
NASA Technical Reports Server (NTRS)
Lee, C. H.; Herget, C. J.
1975-01-01
The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The knowledge of the system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.
The application of depletion curves for parameterization of subgrid variability of snow
C. H. Luce; D. G. Tarboton
2004-01-01
Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snowcovered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...
Vector Observation-Aided/Attitude-Rate Estimation Using Global Positioning System Signals
NASA Technical Reports Server (NTRS)
Oshman, Yaakov; Markley, F. Landis
1997-01-01
A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
NASA Astrophysics Data System (ADS)
Liu, Yuefeng; Duan, Zhuoyi; Chen, Song
2017-10-01
Aerodynamic shape optimization aimed at improving the efficiency of an aircraft has always been a challenging task, especially when the configuration is complex. In this paper, a hybrid FFD-RBF surface parameterization approach is proposed for designing a civil transport wing-body configuration. This approach is simple and efficient, with the FFD technique used to parameterize the wing shape and the RBF interpolation approach used to handle the update of the wing-body junction. Furthermore, combined with the Cuckoo Search algorithm and a Kriging surrogate model with an expected-improvement adaptive sampling criterion, an aerodynamic shape optimization design system has been established. Finally, aerodynamic shape optimization of the DLR F4 wing-body configuration is carried out as a study case, and the results show that the proposed approach is effective.
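The RBF half of the hybrid parameterization can be sketched with SciPy's RBFInterpolator: displacements prescribed at surface control points are interpolated smoothly onto nearby mesh nodes, which is one common way to handle the junction update. The shapes, kernel choice, and point sets below are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# Surface control points (e.g. FFD-displaced wing skin) and displacements
ctrl_pts = rng.random((50, 3))
ctrl_disp = 0.01 * rng.standard_normal((50, 3))

# RBF interpolation propagates the deformation to junction/volume nodes
rbf = RBFInterpolator(ctrl_pts, ctrl_disp, kernel='thin_plate_spline')
volume_nodes = rng.random((1000, 3))
deformed = volume_nodes + rbf(volume_nodes)
print(deformed.shape)                      # (1000, 3) updated node positions
```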
NASA Astrophysics Data System (ADS)
Huang, Dong; Liu, Yangang
2014-12-01
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
Subgrid-scale parameterization and low-frequency variability: a response theory approach
NASA Astrophysics Data System (ADS)
Demaeyer, Jonathan; Vannitsem, Stéphane
2016-04-01
Weather and climate models are limited in the range of spatial and temporal scales they can resolve. Due to the huge space- and time-scale ranges involved in Earth system dynamics, the effects of many subgrid processes must be parameterized. These parameterizations have an impact on forecasts and projections, and they may also affect the low-frequency variability present in the system (such as that associated with ENSO or the NAO). An important question is therefore what impact stochastic parameterizations have on the low-frequency variability generated by the system and by its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on Ruelle's response theory, proposed in Wouters and Lucarini (2012). We test this approach on a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), for which part of the atmospheric modes are treated as unresolved. A natural separation of the phase space into a slow invariant set and its fast complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, fluctuation, and long-memory terms. Application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach to scale separation opens new avenues for subgrid-scale parameterization in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory. Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03003.
Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2014-05-01
The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components, such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow, whereas subgrid-scale parameterizations estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation), which have a significant influence on the resolved scales due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach, which unifies turbulence and moist convection components, produces better results than the other PBL schemes. For that reason, the TEMF scheme was chosen as the PBL scheme to optimize for the Intel Many Integrated Core (MIC) architecture, which allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations performed were quite generic: they included vectorization of the code to utilize the vector units inside each CPU, and improved memory access through scalarization of some intermediate arrays. The results show that the optimizations improved MIC performance by 14.8x. Furthermore, they increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
Approaches for Subgrid Parameterization: Does Scaling Help?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-04-01
Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, the scaling law has been slow to enter "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in the atmosphere and in the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of variables in the subgrid scale in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis, but the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation suggests that the subgrid parameterization problem may be viewed as a type of mesh-refinement problem in numerical modelling, and a link between the subgrid parameterization and downscaling problems follows along this line. The mode decomposition approach would also be the best framework for linking the traditional parameterizations with the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum, but exploiting this knowledge in an operational parameterization is a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation; this problem is already hard enough. Looked at differently, the scaling law is a very concise way of characterizing many subgrid-scale variabilities. We may even argue that the scaling law can provide almost complete subgrid-scale information for constructing a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called the "closure" in the parameterization problem, and it is known to be a tough problem. We should also realize that studies of scaling behavior tend to be statistical, in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes by a scaling law perfectly when the first few leading modes are specified?
Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say, under an appropriate mode decomposition procedure. However, the RNG is an analytical tool, and it is extremely hard to apply to real, complex geophysical systems. It appears that we still have a long way to go before we can begin to exploit the scaling law to construct operational subgrid parameterizations in an effective manner.
Wang, Lei; Beg, Faisal; Ratnanather, Tilak; Ceritoglu, Can; Younes, Laurent; Morris, John C.; Csernansky, John G.; Miller, Michael I.
2010-01-01
In large-deformation diffeomorphic metric mapping (LDDMM), the diffeomorphic matching of images is modeled as evolution in time, or a flow, of an associated smooth velocity vector field v controlling the evolution. The initial momentum parameterizes the whole geodesic and encodes the shape and form of the target image. Thus, methods such as principal component analysis (PCA) of the initial momentum lead to analysis of anatomical shape and form in target images without being restricted to the small-deformation assumption inherent in the analysis of linear displacements. We apply this approach to a study of dementia of the Alzheimer type (DAT). The left hippocampus in the DAT group shows significant shape abnormality, and the right hippocampus shows a similar pattern of abnormality. Further, PCA of the initial momentum leads to correct classification of 12 out of 18 DAT subjects and 22 out of 26 control subjects. PMID:17427733
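A sketch of the PCA-plus-classification step on vectorized initial momenta, using scikit-learn. The stand-in data, dimensionality, and logistic-regression classifier are assumptions for illustration, not the study's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Stand-in data: one flattened initial-momentum field per subject
X = rng.standard_normal((44, 3000))   # 18 DAT + 26 controls, as in the study
y = np.array([1] * 18 + [0] * 26)

pca = PCA(n_components=10)            # low-dimensional shape coordinates
scores = pca.fit_transform(X)
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, scores, y, cv=5).mean())   # chance-level here: data are noise
```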
How to assess the impact of a physical parameterization in simulations of moist convection?
NASA Astrophysics Data System (ADS)
Grabowski, Wojciech
2017-04-01
A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or a large-eddy simulation model) consists of a fluid flow solver combined with the required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field, which is affected by the feedback between the physics and the dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation describes a novel modeling methodology, piggybacking, that allows the impact of a physical parameterization on cloud dynamics to be studied with confidence. The focus will be on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized invigoration of deep convection in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
NASA Astrophysics Data System (ADS)
Brasseur, James; Paes, Paulo; Chamecki, Marcelo
2017-11-01
Large-eddy simulation (LES) of the high Reynolds number rough-wall boundary layer requires both a subfilter-scale model for the unresolved inertial term and a "surface stress model" (SSM) for the space-time local surface momentum flux. Standard SSMs assume proportionality between the local surface shear stress vector and the local resolved-scale velocity vector at the first grid level. Because the proportionality coefficient incorporates a surface roughness scale z0 within a functional form taken from the law-of-the-wall (LOTW), it is commonly stated that LOTW is "assumed," and therefore "forced" on the LES. We show that this is not the case; the LOTW form is the "drag law" used to relate friction velocity to mean resolved velocity at the first grid level, consistent with z0 as the height where the mean velocity vanishes. Whereas standard SSMs do not force LOTW on the prediction, we show that the parameterized roughness does not match the "true" z0 when LOTW is not predicted, or does not exist. By extrapolating mean velocity, we show a serious mismatch between the true z0 and the parameterized z0 in the presence of a spurious "overshoot" in the normalized mean velocity gradient. We shall discuss the source of the problem and its potential resolution.
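A minimal sketch of the standard SSM described above: a law-of-the-wall drag law sets the friction velocity at the first grid level, and the local stress vector is aligned with the local resolved velocity. The von Karman constant and the sample values are assumptions.

```python
import numpy as np

KAPPA = 0.4   # von Karman constant (assumed value)

def surface_stress(u1, v1, z1, z0):
    """Standard LES surface stress model: the drag law
    u* = kappa * |U(z1)| / ln(z1/z0) sets the stress magnitude, and the
    stress vector is anti-parallel to the resolved velocity at the
    first grid level. Returns kinematic stresses (m^2/s^2)."""
    speed = np.sqrt(u1**2 + v1**2)
    ustar = KAPPA * speed / np.log(z1 / z0)
    tau_x = -(ustar**2) * u1 / np.maximum(speed, 1e-12)
    tau_y = -(ustar**2) * v1 / np.maximum(speed, 1e-12)
    return tau_x, tau_y

# First-grid-level winds (m/s) on a small patch, z1 = 10 m, z0 = 0.1 m
u1 = np.array([[5.0, 6.0], [4.0, 5.5]])
v1 = np.array([[1.0, 0.5], [0.8, 1.2]])
print(surface_stress(u1, v1, z1=10.0, z0=0.1))
```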
NASA Astrophysics Data System (ADS)
Badawy, B.; Fletcher, C. G.
2017-12-01
The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range of each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction in the plausible ranges of the parameter values, and hence in their uncertainty, which can lead to a significant reduction in model uncertainty. The implementation and results of this study will be presented and discussed in detail.
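A sketch of the emulate-then-rank workflow described above, with stand-in data. The model choices mirror the abstract (an SVR emulator and random-forest permutation importance), but every number below is a placeholder.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X_train = rng.uniform(0.0, 1.0, (400, 6))   # 400 training cases, 6 snow parameters
y_train = X_train[:, 0] * 2 + X_train[:, 3] + rng.normal(0, 0.05, 400)  # stand-in CLASS output

# Emulate the dynamical model with support vector regression
emulator = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)

# Rank parameter influence with random-forest permutation importance
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
imp = permutation_importance(rf, X_train, y_train, n_repeats=10, random_state=0)
print(imp.importances_mean)   # parameters 0 and 3 should dominate in this toy setup
```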
Determination of Energy Independent Neutron Densities using Dirac Phenomenology based on the RIA
NASA Astrophysics Data System (ADS)
Clark, B. C.; Kerr, L. J.; Hama, S.; Mercer, R. L.
2002-04-01
A new method for extracting neutron densities from intermediate-energy elastic proton-nucleus scattering observables, using a global Dirac phenomenological (DP) approach based on the Relativistic Impulse Approximation (RIA), is presented (B. C. Clark et al., BAPS Vol. 46, No. 7, pg. 139, 2001). We have considered data sets for ^40Ca, ^48Ca and ^208Pb at energies from 500 MeV to 1040 MeV. The global fits are successful in reproducing the data and in predicting data sets not included in the analysis. Using this global DP approach we have obtained energy-independent neutron densities. The vector point proton density distribution, ρ^p_v, is determined from the empirical charge density after unfolding the proton form factor. The other densities, ρ^n_v, ρ^p_s, ρ^n_s, are parameterized using the cosh form given in our paper on global DP optical potentials (E. D. Cooper et al., Phys. Rev. C 47, 297, 1993). Neutron skin thicknesses extracted using the global analysis are compared to predictions from theoretical models.
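For reference, one common "cosh" density parameterization is the symmetrized Fermi form sketched below; whether this matches the exact convention of Cooper et al. (1993) should be checked against the cited paper.

```latex
% Symmetrized Fermi ("cosh") density form, a common three-parameter
% parameterization; the exact convention of the cited paper may differ.
\[
  \rho(r) \;=\; \rho_{0}\,\frac{\sinh(R/a)}{\cosh(r/a) + \cosh(R/a)},
\]
% with central density rho_0, half-density radius R, and diffuseness a.
```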
Sparse Bayesian learning machine for real-time management of reservoir releases
NASA Astrophysics Data System (ADS)
Khalil, Abedalrazq; McKee, Mac; Kemblowski, Mariush; Asefa, Tirusew
2005-11-01
Water scarcity and uncertainties in forecasting future water availabilities present serious problems for basin-scale water management. These problems create a need for intelligent prediction models that learn and adapt to their environment in order to provide water managers with decision-relevant information related to the operation of river systems. This manuscript presents examples of state-of-the-art techniques for forecasting that combine excellent generalization properties and sparse representation within a Bayesian paradigm. The techniques are demonstrated as decision tools to enhance real-time water management. A relevance vector machine, which is a probabilistic model, has been used in an online fashion to provide confident forecasts given knowledge of some state and exogenous conditions. In practical applications, online algorithms should recognize changes in the input space and account for drift in system behavior. Support vector machines lend themselves particularly well to the detection of drift and hence to the initiation of adaptation in response to a recognized shift in system structure. The resulting model will normally have a structure and parameterization that suits the information content of the available data. The utility and practicality of this proposed approach have been demonstrated with an application in a real case study involving real-time operation of a reservoir in a river basin in southern Utah.
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-01-01
This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.
NASA Technical Reports Server (NTRS)
Smalley, L. L.
1975-01-01
The coordinate independence of gravitational radiation and the parameterized post-Newtonian approximation from which it is extended are described. The general consistency of the field equations with Bianchi identities, gauge conditions, and the Newtonian limit of the perfect fluid equations of hydrodynamics are studied. A technique of modification is indicated for application to vector-metric or double metric theories, as well as to scalar-tensor theories.
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali
2017-01-01
Widespread implementation of electronic databases has improved the accessibility of plaintext clinical information for supplementary use. Numerous machine learning techniques, such as supervised machine learning approaches or ontology-based approaches, have been employed to obtain useful information from plaintext clinical data. This study proposes an automatic multi-class classification system to predict accident-related causes of death from plaintext autopsy reports through expert-driven feature selection with supervised automatic text classification decision models. Accident-related autopsy reports were obtained from one of the largest hospitals in Kuala Lumpur. These reports belong to nine different accident-related causes of death. A master feature vector was prepared by extracting features from the collected autopsy reports using unigrams with lexical categorization. This master feature vector was used to detect the cause of death [according to the International Classification of Diseases, version 10 (ICD-10)] through five automated feature selection schemes, the proposed expert-driven approach, five feature subset sizes, and five machine learning classifiers. Model performance was evaluated using precisionM, recallM, F-measureM, accuracy, and area under the ROC curve. Four baselines were used to compare the results with the proposed system. Random forest and J48 decision models parameterized using expert-driven feature selection yielded the highest evaluation measures (approaching 85% to 90% for most metrics) with a feature subset size of 30. The proposed system also showed approximately 14% to 16% improvement in overall accuracy compared with the existing techniques and the four baselines. The proposed system is feasible and practical to use for automatic classification of ICD-10-related causes of death from autopsy reports. It assists pathologists in accurately and rapidly determining the underlying cause of death based on autopsy findings. Furthermore, the proposed expert-driven feature selection approach and the findings are generally applicable to other kinds of plaintext clinical reports.
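A sketch of the unigram-features-plus-decision-model pipeline described above, using scikit-learn. The toy documents and labels are placeholders, and chi-squared selection stands in for the paper's five feature-selection schemes.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestClassifier

# Placeholder autopsy-report snippets and ICD-10-style labels
docs = ["blunt force trauma to head", "drowning fluid in lungs",
        "thermal burns full thickness", "blunt trauma chest impact"]
labels = ["V01", "W65", "X00", "V01"]

pipeline = Pipeline([
    ("unigrams", CountVectorizer(ngram_range=(1, 1))),   # unigram features
    ("select", SelectKBest(chi2, k=5)),                  # feature subset (k=30 in the study)
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
pipeline.fit(docs, labels)
print(pipeline.predict(["blunt impact trauma"]))
```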
Observational and Modeling Studies of Clouds and the Hydrological Cycle
NASA Technical Reports Server (NTRS)
Somerville, Richard C. J.
1997-01-01
Our approach involved validating parameterizations directly against measurements from field programs, and using this validation to tune existing parameterizations and to guide the development of new ones. We have used a single-column model (SCM) to make the link between observations and parameterizations of clouds, including explicit cloud microphysics (e.g., prognostic cloud liquid water used to determine cloud radiative properties). Surface and satellite radiation measurements were used to provide an initial evaluation of the performance of the different parameterizations. The results of this evaluation were then used to develop improved cloud and cloud-radiation schemes, which were tested in GCM experiments.
Importance of Physico-Chemical Properties of Aerosols in the Formation of Arctic Ice Clouds
NASA Astrophysics Data System (ADS)
Keita, S. A.; Girard, E.
2014-12-01
Ice clouds play an important role in the Arctic weather and climate system, but interactions between aerosols, clouds, and radiation are poorly understood. Consequently, it is essential to fully understand their properties and especially their formation process. Extensive measurements from ground-based sites and satellite remote sensing reveal the existence of two types of ice clouds (TICs) in the Arctic during the polar night and early spring. TIC-1 are composed of non-precipitating, very small (radar-unseen) ice crystals, whereas TIC-2 are detected by both sensors and are characterized by a low concentration of large precipitating ice crystals. It is hypothesized that TIC-2 formation is linked to the acidification of aerosols, which inhibits the ice-nucleating properties of ice nuclei (IN). As a result, the IN concentration is reduced in these regions, resulting in a lower concentration of larger ice crystals. Over the past 10 years, several parameterizations of homogeneous and heterogeneous ice nucleation have been developed to reflect the various physical and chemical properties of aerosols. These parameterizations are derived from laboratory studies on aerosols of different chemical compositions. The parameterizations also follow two main approaches: stochastic (nucleation is a probabilistic, time-dependent process) and singular (nucleation occurs at fixed conditions of temperature and humidity and is time-independent). This research aims to better understand the formation process of TICs using newly developed ice nucleation parameterizations. For this purpose, we implement several parameterizations (covering both approaches) in the Limited Area version of the Global Multiscale Environmental Model (GEM-LAM) and use them to simulate ice clouds observed during the Indirect and Semi-Direct Arctic Cloud (ISDAC) campaign in Alaska. We use both approaches, but special attention is focused on the new parameterizations of the singular approach. Simulation results for the TIC-2 observed on April 15th and 25th (polluted or acidic cases) and the TIC-1 observed on April 5th (non-polluted cases) will be presented.
Uncertainty in Modeling Dust Mass Balance and Radiative Forcing from Size Parameterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Chun; Chen, Siyu; Leung, Lai-Yung R.
2013-11-05
This study examines the uncertainties in simulating the mass balance and radiative forcing of mineral dust due to biases in the aerosol size parameterization. Simulations are conducted quasi-globally (180°W-180°E and 60°S-70°N) using the WRF-Chem model with three different approaches to represent the aerosol size distribution (8-bin, 4-bin, and 3-mode). The biases of the 3-mode and 4-bin approaches against the relatively more accurate 8-bin approach in simulating dust mass balance and radiative forcing are identified. Compared to the 8-bin approach, the 4-bin approach simulates similar but coarser size distributions of dust particles in the atmosphere, while the 3-mode approach retains more fine dust particles but fewer coarse dust particles due to the prescribed σg of each mode. Although the 3-mode approach yields up to 10 days longer dust mass lifetime over remote oceanic regions than the 8-bin approach, the three size approaches produce similar dust mass lifetimes (3.2 days to 3.5 days) on quasi-global average, reflecting that the global dust mass lifetime is mainly determined by the dust mass lifetime near the dust source regions. With the same global dust emission (~6000 Tg yr-1), the 8-bin approach produces a dust mass loading of 39 Tg, while the 4-bin and 3-mode approaches produce 3% (40.2 Tg) and 25% (49.1 Tg) higher dust mass loadings, respectively. The difference in dust mass loading between the 8-bin approach and the 4-bin or 3-mode approaches has large spatial variations, with generally smaller relative differences (<10%) near the surface over the dust source regions. The three size approaches also result in significantly different dry and wet deposition fluxes and number concentrations of dust. The difference in dust aerosol optical depth (AOD) (a factor of 3) among the three size approaches is much larger than their difference (25%) in dust mass loading. Compared to the 8-bin approach, the 4-bin approach yields stronger dust absorptivity, while the 3-mode approach yields weaker dust absorptivity. Overall, on quasi-global average, the three size parameterizations result in a significant difference of a factor of 2~3 in dust surface cooling (-1.02~-2.87 W m-2) and atmospheric warming (0.39~0.96 W m-2), and in a tremendous difference of a factor of ~10 in dust TOA cooling (-0.24~-2.20 W m-2). An uncertainty of a factor of 2 is quantified in dust emission estimation due to the different size parameterizations. This study also highlights the uncertainties in modeling dust mass and number loading, deposition fluxes, and radiative forcing resulting from different size parameterizations, and motivates further investigation of the impact of size parameterizations on modeling dust impacts on air quality, climate, and ecosystems.
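The bin-versus-mode contrast above comes down to how a lognormal size distribution is discretized. The sketch below compares a sectional (bin) and a modal representation of the same dust mode; the mode parameters are placeholders, not WRF-Chem values.

```python
import numpy as np

def lognormal_dndlnd(d, n_tot, d_med, sigma_g):
    """Number distribution dN/dlnD of a lognormal mode."""
    return (n_tot / (np.sqrt(2 * np.pi) * np.log(sigma_g))
            * np.exp(-0.5 * (np.log(d / d_med) / np.log(sigma_g)) ** 2))

# Sectional: integrate the mode into 8 diameter bins (0.1-10 um)
edges = np.logspace(-1, 1, 9)
d_mid = np.sqrt(edges[:-1] * edges[1:])
bins = lognormal_dndlnd(d_mid, 1000.0, 1.0, 2.0) * np.diff(np.log(edges))

# Modal: carry only (N, median D, sigma_g); sigma_g is prescribed and
# fixed, which is the source of the fine/coarse bias discussed above.
mode = (bins.sum(), 1.0, 2.0)
print(bins, mode)
```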
Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...
2015-06-30
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to the number of subcolumns.
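A sketch of the Monte Carlo microphysics interface idea: draw subcolumn samples from an assumed subgrid distribution, evaluate a process rate on each sample, and average. The Gaussian subgrid PDF and the toy autoconversion rate are assumptions for illustration, not CAM's actual scheme.

```python
import numpy as np

def autoconversion(ql):
    """Placeholder nonlinear process rate (Kessler-like ql^2 dependence)."""
    return 1e-3 * np.maximum(ql, 0.0) ** 2

def grid_mean_rate(ql_mean, ql_std, n_subcolumns=100, rng=None):
    """Monte Carlo interface: sample subgrid variability of cloud liquid,
    evaluate microphysics on each subcolumn, and average. Because the
    rate is nonlinear, this differs from evaluating at the grid mean."""
    rng = np.random.default_rng(0) if rng is None else rng
    samples = rng.normal(ql_mean, ql_std, n_subcolumns)
    return autoconversion(samples).mean()

print(grid_mean_rate(0.3, 0.15))   # exceeds the grid-mean rate below
print(autoconversion(0.3))         # subgrid variability enhances the rate
```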
Emergence and Prevalence of Human Vector-Borne Diseases in Sink Vector Populations
Rascalou, Guilhem; Pontier, Dominique; Menu, Frédéric; Gourbière, Sébastien
2012-01-01
Vector-borne diseases represent a major public health concern in most tropical and subtropical areas, and an emerging threat for more developed countries. Our understanding of the ecology, evolution and control of these diseases relies predominantly on theory and data on pathogen transmission in large self-sustaining ‘source’ populations of vectors representative of highly endemic areas. However, there are numerous places where environmental conditions are less favourable to vector populations, but where immigration allows them to persist. We built an epidemiological model to investigate the dynamics of six major human vector-borne diseases in such non-self-sustaining ‘sink’ vector populations. The model was parameterized through a review of the literature, and we performed extensive sensitivity analysis to look at the emergence and prevalence of the pathogen that could be encountered in these populations. Despite the low vector abundance in typical sink populations, all six human diseases were able to spread in 15–55% of cases after accidental introduction. The rate of spread was much more strongly influenced by vector longevity, immigration and feeding rates than by transmission and virulence of the pathogen. Prevalence in humans remained lower than 5% for dengue, leishmaniasis and Japanese encephalitis, but was substantially higher for diseases with longer durations of infection: malaria and the American and African trypanosomiases. Vector-related parameters were again the key factors, although their influence was lower than on pathogen emergence. Our results emphasize the need for ecology and evolution to be considered in the context of metapopulations made of a mosaic of sink and source habitats, and for vector control programs to be designed not only to target areas of high vector density, but to work at a larger spatial scale. PMID:22629337
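A sketch of a Ross-Macdonald-style vector-host model with vector immigration, the kind of 'sink population' setting described above. All parameter values are arbitrary placeholders, not the reviewed literature values.

```python
import numpy as np
from scipy.integrate import odeint

def sink_model(y, t, lam, mu, a, b, c, r, n_h):
    """Susceptible/infectious vectors sustained by immigration lam (a
    sink population), coupled to an SIS host of size n_h. a: biting
    rate, b and c: transmission probabilities, mu: vector mortality,
    r: host recovery rate."""
    s_v, i_v, i_h = y
    ds_v = lam - a * c * (i_h / n_h) * s_v - mu * s_v
    di_v = a * c * (i_h / n_h) * s_v - mu * i_v
    di_h = a * b * i_v * (1 - i_h / n_h) - r * i_h
    return [ds_v, di_v, di_h]

t = np.linspace(0, 365, 1000)
y = odeint(sink_model, [50.0, 1.0, 0.0], t,
           args=(5.0, 0.1, 0.3, 0.5, 0.5, 0.01, 1000.0))
print(y[-1])   # long-run vector and host infection levels
```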
Agishev, Ravil; Comerón, Adolfo; Rodriguez, Alejandro; Sicard, Michaël
2014-05-20
In this paper, we show a renewed approach to the generalized methodology for atmospheric lidar assessment, which uses dimensionless parameterization as a core component. It is based on a series of our previous works in which the problem of universal parameterization across many lidar technologies was described and analyzed from different points of view. The modernized dimensionless parameterization concept, applied to relatively new silicon photomultiplier detectors (SiPMs) and traditional photomultiplier (PMT) detectors for remote-sensing instruments, allows predicting lidar receiver performance in the presence of sky background. The renewed approach can be widely used to evaluate a broad range of lidar system capabilities for a variety of lidar remote-sensing applications, as well as to serve as a basis for selecting appropriate lidar system parameters for a specific application. Such a modernized methodology provides a generalized, uniform, and objective approach for the evaluation of a broad range of lidar types and systems (aerosol, Raman, DIAL) operating on different targets (backscatter or topographic) and under intense sky background conditions. It can be used within the lidar community to compare different lidar instruments.
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
Keywords: computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection, stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts
Data error and highly parameterized groundwater models
Hill, M.C.
2008-01-01
Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright © 2008 IAHS Press.
A Novel Continuation Power Flow Method Based on Line Voltage Stability Index
NASA Astrophysics Data System (ADS)
Zhou, Jianfang; He, Yuqing; He, Hongbin; Jiang, Zhuohan
2018-01-01
A novel continuation power flow method based on a line voltage stability index is proposed in this paper. The line voltage stability index is used to select the parameterized lines, and the selection is continually updated as the load changes. The calculation stages of the continuation power flow are determined from the angle changes of the direction vector of the predicted development-trend equation, and an adaptive step-length control strategy is used to compute the next prediction direction and step according to the calculation stage. The proposed method has a clear physical concept and high computing speed, and it accounts for the local characteristics of voltage instability, which reflect the weak nodes and weak areas in a power system. Because the PV curves are computed more completely, the proposed method has certain advantages in analysing the voltage stability margin of large-scale power grids.
Clustering Tree-structured Data on Manifold
Lu, Na; Miao, Hongyu
2016-01-01
Tree-structured data usually contain both topological and geometrical information, and are necessarily considered on a manifold instead of in Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called the Topology-Attribute matrix (T-A matrix), so the data clustering task can be conducted on a matrix manifold. We incorporate the structure constraints embedded in the data into the non-negative matrix factorization method to determine meta-trees from the T-A matrix, and the signature vector of each single tree can then be extracted by meta-tree decomposition. The meta-tree space turns out to be a cone space, in which we explore the distance metric and implement the clustering algorithm based on concepts like the Fréchet mean. Finally, the T-A matrix based clustering (TAMBAC) framework is evaluated and compared using both simulated data and real retinal images to illustrate its efficiency and accuracy. PMID:26660696
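A sketch of the factorization step: non-negative matrix factorization of stacked, vectorized T-A matrices to extract meta-trees and per-tree signature vectors. Scikit-learn's plain NMF stands in for the structure-constrained variant in the paper, and all shapes are placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Stand-in data: 30 trees, each a flattened 20x20 T-A matrix (non-negative)
X = rng.random((30, 400))

nmf = NMF(n_components=5, init="nndsvd", max_iter=500, random_state=0)
signatures = nmf.fit_transform(X)   # per-tree signature vectors (30 x 5)
meta_trees = nmf.components_        # meta-trees, each reshapable to 20x20

# Signatures live in a cone (non-negative combinations); plain k-means
# is a stand-in for a clustering metric suited to that space.
print(KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(signatures))
```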
Understanding uncertainty in temperature effects on vector-borne disease: a Bayesian approach
Johnson, Leah R.; Ben-Horin, Tal; Lafferty, Kevin D.; McNally, Amy; Mordecai, Erin A.; Paaijmans, Krijn P.; Pawar, Samraat; Ryan, Sadie J.
2015-01-01
Extrinsic environmental factors influence the distribution and population dynamics of many organisms, including insects that are of concern for human health and agriculture. This is particularly true for vector-borne infectious diseases like malaria, which is a major source of morbidity and mortality in humans. Understanding the mechanistic links between environment and population processes for these diseases is key to predicting the consequences of climate change on transmission and for developing effective interventions. An important measure of the intensity of disease transmission is the reproductive number R0. However, understanding the mechanisms linking R0 and temperature, an environmental factor driving disease risk, can be challenging because the data available for parameterization are often poor. To address this, we show how a Bayesian approach can help identify critical uncertainties in components of R0 and how this uncertainty is propagated into the estimate of R0. Most notably, we find that different parameters dominate the uncertainty at different temperature regimes: bite rate from 15°C to 25°C; fecundity across all temperatures, but especially ~25–32°C; mortality from 20°C to 30°C; parasite development rate at ~15–16°C and again at ~33–35°C. Focusing empirical studies on these parameters and corresponding temperature ranges would be the most efficient way to improve estimates of R0. While we focus on malaria, our methods apply to improving process-based models more generally, including epidemiological, physiological niche, and species distribution models.
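A sketch of propagating parameter uncertainty into R0, using the classic Ross-Macdonald expression R0 = m a^2 b c e^(-mu*tau) / (r mu) as a stand-in for the paper's temperature-dependent components. The priors below are arbitrary placeholders; a full Bayesian treatment would sample posteriors fitted to thermal-response data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Stand-in parameter uncertainty at one fixed temperature (all priors arbitrary)
a = rng.normal(0.3, 0.05, n)        # bite rate (1/day)
mu = np.clip(rng.normal(0.12, 0.03, n), 1e-3, None)   # vector mortality (1/day)
bc = rng.beta(5, 5, n)              # product of transmission probabilities b*c
tau = rng.normal(10.0, 1.5, n)      # extrinsic incubation period (days)
m, r = 2.0, 1 / 100.0               # vectors per host; host recovery rate

r0 = m * a**2 * bc * np.exp(-mu * tau) / (r * mu)

print(np.percentile(r0, [2.5, 50, 97.5]))   # uncertainty propagated into R0
# Rank correlations indicate which parameter dominates the spread,
# mirroring the study's per-temperature sensitivity decomposition.
print(np.corrcoef(np.vstack([a, mu, bc, tau, r0]))[-1, :-1])
```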
Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
Newton Algorithms for Analytic Rotation: An Implicit Function Approach
ERIC Educational Resources Information Center
Boik, Robert J.
2008-01-01
In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and the western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and the modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem-analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations tested were indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criterion and the estimated variance. Model parameterization had more influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use, as the best fitting methods have the most barriers to their application in terms of data and software requirements. PMID:25853472
Non-perturbational surface-wave inversion: A Dix-type relation for surface waves
Haney, Matt; Tsai, Victor C.
2015-01-01
We extend the approach underlying the well-known Dix equation in reflection seismology to surface waves. Within the context of surface wave inversion, the Dix-type relation we derive for surface waves allows accurate depth profiles of shear-wave velocity to be constructed directly from phase velocity data, in contrast to perturbational methods. The depth profiles can subsequently be used as an initial model for nonlinear inversion. We provide examples of the Dix-type relation for under-parameterized and over-parameterized cases. In the under-parameterized case, we use the theory to estimate crustal thickness, crustal shear-wave velocity, and mantle shear-wave velocity across the Western U.S. from phase velocity maps measured at 8-, 20-, and 40-s periods. By adopting a thin-layer formalism and an over-parameterized model, we show how a regularized inversion based on the Dix-type relation yields smooth depth profiles of shear-wave velocity. In the process, we quantitatively demonstrate the depth sensitivity of surface-wave phase velocity as a function of frequency and the accuracy of the Dix-type relation. We apply the over-parameterized approach to a near-surface data set within the frequency band from 5 to 40 Hz and find overall agreement between the inverted model and the result of full nonlinear inversion.
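For reference, the classical Dix relation in reflection seismology, which the surface-wave relation derived here generalizes, converts rms velocities and travel times to an interval velocity:

```latex
% Classical Dix relation (reflection seismology): the interval velocity
% between two reflectors follows from rms velocities and travel times.
\[
  V_{\mathrm{int}}^{2} \;=\;
  \frac{t_{2}\,V_{\mathrm{rms},2}^{2} - t_{1}\,V_{\mathrm{rms},1}^{2}}{t_{2}-t_{1}}
\]
```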
NASA Technical Reports Server (NTRS)
Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean
1990-01-01
A Charney-Branscome-based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.
Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-07-01
Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing the subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing as it deals with a distribution of variables in subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as their mode-decomposition basis. However, the methodology can easily be generalized into any decomposition basis. Among those, the wavelet is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that the subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
Tumor Burden Analysis on Computed Tomography by Automated Liver and Tumor Segmentation
Linguraru, Marius George; Richbourg, William J.; Liu, Jianfei; Watt, Jeremy M.; Pamulapati, Vivek; Wang, Shijun; Summers, Ronald M.
2013-01-01
The paper presents the automated computation of hepatic tumor burden from abdominal CT images of diseased populations, including images with inconsistent enhancement. The automated segmentation of livers is addressed first. A novel three-dimensional (3D) affine-invariant shape parameterization is employed to compare local shape across organs. By generating a regular sampling of the organ's surface, this parameterization can be effectively used to compare features of a set of closed 3D surfaces point-to-point, while avoiding common problems with the parameterization of concave surfaces. From an initial segmentation of the livers, the areas of atypical local shape are determined using training sets. A geodesic active contour locally corrects the segmentations of the livers in abnormal images. Graph cuts segment the hepatic tumors using shape and enhancement constraints. Liver segmentation errors are reduced significantly and all tumors are detected. Finally, support vector machines and feature selection are employed to reduce the number of false tumor detections. A tumor detection true positive fraction of 100% is achieved at 2.3 false positives/case, and the tumor burden is estimated with 0.9% error. Results from the test data demonstrate the method's robustness in analyzing livers from difficult clinical cases, allowing the temporal monitoring of patients with hepatic cancer. PMID:22893379
Pattanayak, Sujata; Mohanty, U C; Osuri, Krishna K
2012-01-01
The present study investigates the performance of different cumulus convection, planetary boundary layer, land surface process, and microphysics parameterization schemes in the simulation of the very severe cyclonic storm (VSCS) Nargis (2008), which developed in the central Bay of Bengal on 27 April 2008. For this purpose, the nonhydrostatic mesoscale model (NMM) dynamic core of the weather research and forecasting (WRF) system is used. Model-simulated track positions and intensity, in terms of minimum central mean sea level pressure (MSLP), maximum surface wind (10 m), and precipitation, are verified against observations provided by the India Meteorological Department (IMD) and the Tropical Rainfall Measurement Mission (TRMM). The estimated optimum combination is reinvestigated with six different initial conditions of the same case to draw firmer conclusions about the performance of WRF-NMM. A few more diagnostic fields, such as vertical velocity, vorticity, and heat fluxes, are also evaluated. The results indicate that cumulus convection plays an important role in the movement of the cyclone, and the PBL has a crucial role in the intensification of the storm. The combination of Simplified Arakawa Schubert (SAS) convection, Yonsei University (YSU) PBL, NMM land surface, and Ferrier microphysics parameterization schemes in WRF-NMM gives the best track and intensity forecast, with minimum vector displacement error.
NASA Technical Reports Server (NTRS)
Elsaesser, Gregory
2015-01-01
Cold pools are increasingly being recognized as important players in the evolution of both shallow and deep convection; hence the incorporation of cold pool processes into a number of recently developed convective parameterizations. Unfortunately, observations serving to inform cold pool parameterization development are limited to select field programs and limited radar domains. However, a number of recent studies have noted that cold pools are often associated with arcs/lines of shallow clouds traversing 10–100 km in visible satellite imagery. Boundary layer thermodynamic perturbations are plausible at such scales, coincident with such mesoscale features, and atmospheric signatures of features at these spatial scales are potentially observable from satellites. In this presentation, we discuss recent work that uses multi-sensor, high-resolution satellite products to observe mesoscale wind vector fluctuations and boundary layer temperature depressions attributed to cold pools produced by antecedent convection. The relationship to subsequent convection, as well as to convective system longevity, is discussed. As improvements in satellite technology occur and efforts to reduce noise in high-resolution orbital products progress, satellite pixel-level (10 km) thermodynamic and dynamic (e.g., mesoscale convergence) parameters can increasingly serve as useful benchmarks for constraining convective parameterization development, including for regimes where organized convection contributes substantially to the cloud and rainfall climatology.
Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni
2018-06-01
Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique for dealing with this problem consists of reducing the amount of energy advected within the propagation scheme, and it is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms, and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminate plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.
2013-01-01
The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory, in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of the estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used. A geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed parameter applications.
Hosseinbor, A. Pasha; Chung, Moo K.; Koay, Cheng Guan; Schaefer, Stacey M.; van Reekum, Carien M.; Schmitz, Lara Peschke; Sutterer, Matt; Alexander, Andrew L.; Davidson, Richard J.
2015-01-01
Image-based parcellation of the brain often leads to multiple disconnected anatomical structures, which pose significant challenges for analyses of morphological shapes. Existing shape models, such as the widely used spherical harmonic (SPHARM) representation, assume topological invariance, so they are unable to simultaneously parameterize multiple disjoint structures. In such a situation, SPHARM has to be applied separately to each individual structure. We present a novel surface parameterization technique using 4D hyperspherical harmonics in representing multiple disjoint objects as a single analytic function, terming it HyperSPHARM. The underlying idea behind HyperSPHARM is to stereographically project an entire collection of disjoint 3D objects onto the 4D hypersphere and subsequently simultaneously parameterize them with the 4D hyperspherical harmonics. Hence, HyperSPHARM allows for a holistic treatment of multiple disjoint objects, unlike SPHARM. In an imaging dataset of healthy adult human brains, we apply HyperSPHARM to the hippocampi and amygdalae. The HyperSPHARM representations are employed as a data smoothing technique, while the HyperSPHARM coefficients are utilized in a support vector machine setting for object classification. HyperSPHARM yields nearly identical results to SPHARM, as will be shown in the paper. Its key advantage over SPHARM is computational: HyperSPHARM possesses greater computational efficiency than SPHARM because it can parameterize multiple disjoint structures using far fewer basis functions, and stereographic projection obviates SPHARM's burdensome surface flattening. In addition, HyperSPHARM can handle any type of topology, unlike SPHARM, whose analysis is confined to topologically invariant structures. PMID:25828650
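To make the projection step above concrete, the following sketch maps the pooled surface vertices of several disjoint 3D objects onto the unit 4D hypersphere via inverse stereographic projection, the first stage of HyperSPHARM. Fitting the 4D hyperspherical harmonic coefficients is not shown, and the `scale` normalization is an assumed preprocessing choice, not part of the published method:

```python
import numpy as np

def stereographic_to_s3(points, scale=1.0):
    """Inverse stereographic projection of 3D points onto the unit
    4D hypersphere S^3. `scale` is a hypothetical normalization so the
    pooled point cloud sits well inside the projection's useful range."""
    x = np.asarray(points, dtype=float) / scale
    r2 = np.sum(x**2, axis=1, keepdims=True)      # |x|^2 for each point
    u123 = 2.0 * x / (1.0 + r2)                   # first three coordinates
    u4 = (r2 - 1.0) / (r2 + 1.0)                  # fourth coordinate
    return np.hstack([u123, u4])                  # (N, 4), unit-norm rows

# All disjoint structures are pooled and projected together, which is
# what lets one set of 4D harmonics represent them jointly.
pts = np.random.randn(100, 3)
u = stereographic_to_s3(pts, scale=3.0)
assert np.allclose(np.linalg.norm(u, axis=1), 1.0)
```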
Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav
2007-01-01
The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for the quality of the parameters. We have improved EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically: C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that were not parameterized yet, specifically for Br, I, Fe and Zn. We have also performed a crossover validation of all obtained parameters using all training sets that included the relevant elements and confirmed that the calculated parameters provide accurate charges.
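Since the abstract centers on fitting EEM parameters, a minimal sketch of how the method itself turns parameters into charges may help: electronegativity equalization reduces to one linear solve. The functional form is the standard EEM system; the parameter names `A`, `B`, `kappa`, their meaning as fitted element-specific constants, and the defaults are placeholders, not the parameters from this paper:

```python
import numpy as np

def eem_charges(coords, A, B, kappa=1.0, total_charge=0.0):
    """Solve the EEM linear system for atomic partial charges.

    Equalizing electronegativity gives, for each atom i,
        A_i + B_i*q_i + kappa * sum_{j != i} q_j / R_ij = chi_bar,
    together with sum_i q_i = Q. The common electronegativity chi_bar
    is solved for alongside the charges q.
    """
    coords = np.asarray(coords, float)
    n = len(coords)
    R = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, :n] = kappa / np.where(R[i] > 0, R[i], np.inf)  # Coulomb terms
        M[i, i] = B[i]                 # hardness-like self term
        M[i, n] = -1.0                 # unknown chi_bar column
        rhs[i] = -A[i]
    M[n, :n] = 1.0                     # total-charge constraint row
    rhs[n] = total_charge
    sol = np.linalg.solve(M, rhs)
    return sol[:n]                     # charges; sol[n] is chi_bar
```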
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
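For readers unfamiliar with the parameterized runup model referred to above, here is a minimal sketch of the widely used Stockdon et al. (2006) formulation (offshore wave height, peak period, and foreshore beach slope in, R2% out). The coefficients are the published general-case values; the dissipative-beach limit of that paper is not included:

```python
import numpy as np

def runup_r2(H0, T, beta, g=9.81):
    """2% exceedance runup from the Stockdon et al. (2006) parameterization.
    H0: deep-water significant wave height [m], T: peak period [s],
    beta: foreshore beach slope [-]."""
    L0 = g * T**2 / (2.0 * np.pi)                 # deep-water wavelength
    setup = 0.35 * beta * np.sqrt(H0 * L0)        # wave-induced setup
    swash = np.sqrt(H0 * L0 * (0.563 * beta**2 + 0.004))  # total swash
    return 1.1 * (setup + swash / 2.0)

print(runup_r2(H0=2.0, T=10.0, beta=0.08))  # ~1.4 m for these conditions
```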
10 Ways to Improve the Representation of MCSs in Climate Models
NASA Astrophysics Data System (ADS)
Schumacher, C.
2017-12-01
1. The first way to improve the representation of mesoscale convective systems (MCSs) in global climate models (GCMs) is to recognize that MCSs are important to climate. That may be obvious to most of the people attending this session, but it cannot be taken for granted in the wider community. The fact that MCSs produce a large fraction of the global rainfall and that they dramatically impact the atmosphere via transports of heat, moisture, and momentum must be continuously stressed. 2-4. There have traditionally been three approaches to representing MCSs and/or their impacts in GCMs. The first is to focus on improving cumulus parameterizations by implementing things like cold pools that are assumed to better organize convection. The second is to focus on including mesoscale processes in the cumulus parameterization, such as mesoscale vertical motions. The third is to just buy your way out with higher resolution, using techniques like super-parameterization or global cloud-resolving model runs. All of these approaches have their pros and cons, but none of them satisfactorily solves the MCS climate modeling problem. 5-10. Looking forward, there is active discussion and there are new ideas in the modeling community on how to better represent convective organization in models. A number of ideas are a dramatic shift from the traditional plume-based cumulus parameterizations of most GCMs, such as implementing mesoscale parameterizations based on their physical impacts (e.g., via heating), on empirical relationships derived from big data/machine learning, or on stochastic approaches. Regardless of the technique employed, smart evaluation processes using observations are paramount to refining and constraining the inevitable tunable parameters in any parameterization.
A new approach to ultrasonic elasticity imaging
NASA Astrophysics Data System (ADS)
Hoerig, Cameron; Ghaboussi, Jamshid; Fatemi, Mostafa; Insana, Michael F.
2016-04-01
Biomechanical properties of soft tissues can provide information regarding the local health status. Often the cells in pathological tissues can be found to form a stiff extracellular environment, which is a sensitive, early diagnostic indicator of disease. Quasi-static ultrasonic elasticity imaging provides a way to image the mechanical properties of tissues. Strain images provide a map of the relative tissue stiffness, but ambiguities and artifacts limit its diagnostic value. Accurately mapping intrinsic mechanical parameters of a region may increase diagnostic specificity. However, the inverse problem, whereby force and displacement estimates are used to estimate a constitutive matrix, is ill conditioned. Our method avoids many of the issues involved with solving the inverse problem, such as unknown boundary conditions and incomplete information about the stress field, by building an empirical model directly from measured data. Surface force and volumetric displacement data gathered during imaging are used in conjunction with the AutoProgressive method to teach artificial neural networks the stress-strain relationship of tissues. The Autoprogressive algorithm has been successfully used in many civil engineering applications and to estimate ocular pressure and corneal stiffness; here, we are expanding its use to any tissues imaged ultrasonically. We show that force-displacement data recorded with an ultrasound probe and displacements estimated at a few points in the imaged region can be used to estimate the full stress and strain vectors throughout an entire model while only assuming conservation laws. We will also demonstrate methods to parameterize the mechanical properties based on the stress-strain response of trained neural networks. This method is a fundamentally new approach to medical elasticity imaging that for the first time provides full stress and strain vectors from one set of observation data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.
1993-10-01
One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection in an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved by the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes in reproducing the growth and life cycle of a convective system can then be evaluated. These 2-D tests will form the basis for further 3-D realistic simulations which have the model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).
NASA Technical Reports Server (NTRS)
Tapiador, Francisco; Tao, Wei-Kuo; Angelis, Carlos F.; Martinez, Miguel A.; Marcos, Cecilia; Rodriguez, Antonio; Hou, Arthur; Shi, Jainn Jong
2012-01-01
Ensembles of numerical model forecasts are of interest to operational early warning forecasters as the spread of the ensemble provides an indication of the uncertainty of the alerts, and the mean value is deemed to outperform the forecasts of the individual models. This paper explores two ensembles on a severe weather episode in Spain, aiming to ascertain the relative usefulness of each one. One ensemble uses sensible choices of physical parameterizations (precipitation microphysics, land surface physics, and cumulus physics) while the other follows a perturbed initial conditions approach. The results show that, depending on the parameterizations, large differences can be expected in terms of storm location, spatial structure of the precipitation field, and rain intensity. It is also found that the spread of the perturbed initial conditions ensemble is smaller than the dispersion due to physical parameterizations. This confirms that in severe weather situations operational forecasts should address moist physics deficiencies to realize the full benefits of the ensemble approach, in addition to optimizing initial conditions. The results also provide insights into differences in simulations arising from ensembles of weather models using several combinations of different physical parameterizations.
Exploratory High-Fidelity Aerostructural Optimization Using an Efficient Monolithic Solution Method
NASA Astrophysics Data System (ADS)
Zhang, Jenmy Zimi
This thesis is motivated by the desire to discover fuel efficient aircraft concepts through exploratory design. An optimization methodology based on tightly integrated high-fidelity aerostructural analysis is proposed, which has the flexibility, robustness, and efficiency to contribute to this goal. The present aerostructural optimization methodology uses an integrated geometry parameterization and mesh movement strategy, which was initially proposed for aerodynamic shape optimization. This integrated approach provides the optimizer with a large amount of geometric freedom for conducting exploratory design, while allowing for efficient and robust mesh movement in the presence of substantial shape changes. In extending this approach to aerostructural optimization, this thesis has addressed a number of important challenges. A structural mesh deformation strategy has been introduced to translate consistently the shape changes described by the geometry parameterization to the structural model. A three-field formulation of the discrete steady aerostructural residual couples the mesh movement equations with the three-dimensional Euler equations and a linear structural analysis. Gradients needed for optimization are computed with a three-field coupled adjoint approach. A number of investigations have been conducted to demonstrate the suitability and accuracy of the present methodology for use in aerostructural optimization involving substantial shape changes. Robustness and efficiency in the coupled solution algorithms is crucial to the success of an exploratory optimization. This thesis therefore also focuses on the design of an effective monolithic solution algorithm for the proposed methodology. This involves using a Newton-Krylov method for the aerostructural analysis and a preconditioned Krylov subspace method for the coupled adjoint solution. Several aspects of the monolithic solution method have been investigated. These include appropriate strategies for scaling and matrix-vector product evaluation, as well as block preconditioning techniques that preserve the modularity between subproblems. The monolithic solution method is applied to problems with varying degrees of fluid-structural coupling, as well as a wing span optimization study. The monolithic solution algorithm typically requires 20%-70% less computing time than its partitioned counterpart. This advantage increases with increasing wing flexibility. The performance of the monolithic solution method is also much less sensitive to the choice of the solution parameter.
NASA Technical Reports Server (NTRS)
Suarez, M. J.; Arakawa, A.; Randall, D. A.
1983-01-01
A planetary boundary layer (PBL) parameterization for general circulation models (GCMs) is presented. It uses a mixed-layer approach in which the PBL is assumed to be capped by discontinuities in the mean vertical profiles. Both clear and cloud-topped boundary layers are parameterized. Particular emphasis is placed on the formulation of the coupling between the PBL and both the free atmosphere and cumulus convection. For this purpose a modified sigma-coordinate is introduced in which the PBL top and the lower boundary are both coordinate surfaces. The use of a bulk PBL formulation with this coordinate is extensively discussed. Results are presented from a July simulation produced by the UCLA GCM. PBL-related variables are shown, to illustrate the various regimes the parameterization is capable of simulating.
Importance of Chemical Composition of Ice Nuclei on the Formation of Arctic Ice Clouds
NASA Astrophysics Data System (ADS)
Keita, Setigui Aboubacar; Girard, Eric
2016-09-01
Ice clouds play an important role in the Arctic weather and climate system, but interactions between aerosols, clouds and radiation remain poorly understood. Consequently, it is essential to fully understand their properties and especially their formation process. Extensive measurements from ground-based sites and satellite remote sensing reveal the existence of two Types of Ice Clouds (TICs) in the Arctic during the polar night and early spring. TICs-1 are composed of non-precipitating small (radar-unseen) ice crystals of less than 30 μm in diameter. The second type, TICs-2, are detected by radar and are characterized by a low concentration of large precipitating ice crystals (>30 μm). To explain these differences, we hypothesized that TICs-2 formation is linked to the acidification of aerosols, which inhibits the ice nucleating properties of ice nuclei (IN). As a result, the IN concentration is reduced in these regions, resulting in a lower concentration of larger ice crystals. Water vapor available for deposition being the same, these crystals reach a larger size. Current weather and climate models cannot simulate these different types of ice clouds. This problem is partly due to the parameterizations implemented for ice nucleation. Over the past 10 years, several parameterizations of homogeneous and heterogeneous ice nucleation on IN of different chemical compositions have been developed. These parameterizations are based on two approaches: stochastic (nucleation is a probabilistic, time-dependent process) and singular (nucleation occurs at fixed conditions of temperature and humidity and is time-independent). The best approach remains unclear. This research aims to better understand the formation process of Arctic TICs using recently developed ice nucleation parameterizations. For this purpose, we have implemented these ice nucleation parameterizations into the Limited Area version of the Global Multiscale Environmental Model (GEM-LAM) and used them to simulate ice clouds observed during the Indirect and Semi-Direct Aerosol Campaign (ISDAC) in Alaska. Simulation results of the TICs-2 observed on April 15th and 25th (acidic cases) and TICs-1 observed on April 5th (non-acidic cases) are presented. Our results show that the stochastic approach based on classical nucleation theory with the appropriate contact angle performs best. Parameterizations of ice nucleation based on the singular approach tend to overestimate the ice crystal concentration in TICs-1 and TICs-2. Classical nucleation theory using the appropriate contact angle is thus the best approach to simulate the ice clouds investigated in this research.
Computation at a coordinate singularity
NASA Astrophysics Data System (ADS)
Prusa, Joseph M.
2018-05-01
Coordinate singularities are sometimes encountered in computational problems. An important example involves global atmospheric models used for climate and weather prediction. Classical spherical coordinates can be used to parameterize the manifold - that is, generate a grid for the computational spherical shell domain. This particular parameterization offers significant benefits such as orthogonality and exact representation of curvature and connection (Christoffel) coefficients. But it also exhibits two polar singularities, and at or near these points typical continuity/integral constraints on dependent fields and their derivatives are generally inadequate and lead to poor model performance and erroneous results. Other parameterizations have been developed that eliminate polar singularities, but problems of weaker singularities and enhanced grid noise compared to spherical coordinates (away from the poles) persist. In this study reparameterization invariance of geometric objects (scalars, vectors and the forms generated by their covariant derivatives) is utilized to generate asymptotic forms for dependent fields of interest valid in the neighborhood of a pole. The central concept is that such objects cannot be altered by the metric structure of a parameterization. The new boundary conditions enforce symmetries that are required for transformations of geometric objects. They are implemented in an implicit polar filter of a structured-grid, nonhydrostatic global atmospheric model that is simulating idealized Held-Suarez flows. A series of test simulations using different configurations of the asymptotic boundary conditions is made, along with control simulations that use the default model numerics with no absorber, at three different grid sizes. Typically the test simulations are ∼20% faster in wall clock time than the controls, resulting from a decrease in noise at the poles in all cases. In the control simulations adverse numerical effects from the polar singularity are observed to increase with grid resolution. In contrast, test simulations demonstrate robust polar behavior independent of grid resolution.
Gradient-based adaptation of general gaussian kernels.
Glasmachers, Tobias; Igel, Christian
2005-10-01
Gradient-based optimization of gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.
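The exponential-map idea above can be sketched compactly: writing the kernel metric as M = expm(A) for a symmetric A makes every unconstrained gradient step on A yield a valid (symmetric positive definite) metric, and fixing the trace of A fixes det(M) = exp(tr A), one handle on overall kernel size. The gradient computation itself is not reproduced here; this only illustrates the parameterization:

```python
import numpy as np
from scipy.linalg import expm

def general_gaussian_kernel(X, Y, A):
    """Gaussian kernel k(x, y) = exp(-(x-y)^T M (x-y)) with M = expm(A).
    A is an unconstrained symmetric parameter matrix; M is then always
    symmetric positive definite, covering scaling and rotation."""
    A = 0.5 * (A + A.T)                     # enforce symmetry
    M = expm(A)                             # SPD metric on input space
    d = X[:, None, :] - Y[None, :, :]       # pairwise differences
    q = np.einsum('ijk,kl,ijl->ij', d, M, d)
    return np.exp(-q)

X = np.random.randn(5, 3)
K = general_gaussian_kernel(X, X, np.zeros((3, 3)))  # A=0 -> ordinary kernel
```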
NASA Astrophysics Data System (ADS)
Neggers, Roel
2016-04-01
Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model intercomparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time-scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.
NASA Astrophysics Data System (ADS)
Benioff, Paul
2015-05-01
The purpose of this paper is to put the description of number scaling and its effects on physics and geometry on a firmer foundation, and to make it more understandable. A main point is that two different concepts, number and number value, are combined in the usual representations of number structures. This is valid as long as just one structure of each number type is being considered. It is not valid when different structures of each number type are being considered. Elements of base sets of number structures, considered by themselves, have no meaning. They acquire meaning or value as elements of a number structure. Fiber bundles over a space or space-time manifold, M, are described. The fiber consists of a collection of many real or complex number structures and vector space structures. The structures are parameterized by a real or complex scaling factor, s. A vector space at a fiber level, s, has, as scalars, real or complex number structures at the same level. Connections are described that relate scalar and vector space structures at both neighbor M locations and at neighbor scaling levels. Scalar and vector structure valued fields are described and covariant derivatives of these fields are obtained. Two complex vector fields, each with one real and one imaginary field, appear, with one complex field associated with positions in M and the other with position-dependent scaling factors. A derivation of the covariant derivative for scalar and vector valued fields gives the same vector fields. The derivation shows that the complex vector field associated with scaling fiber levels is the gradient of a complex scalar field. Use of these results in gauge theory shows that the imaginary part of the vector field associated with M positions acts like the electromagnetic field. The physical relevance of the other three fields, if any, is not known.
NASA Astrophysics Data System (ADS)
Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna
2018-01-01
We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extremely dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used; this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm^-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm^-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.
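Because misuse outside the fitted ranges is a common pitfall with such parameterizations, a small guard encoding the validity ranges quoted above may be useful; the function name and structure are illustrative only:

```python
def in_validity_range(T, h2so4, rh, ion_induced=False):
    """Check inputs against the stated validity ranges:
    neutral: 165-400 K, [H2SO4] 1e4-1e13 cm^-3, RH 0.001-100 %;
    ion-induced: 195-400 K, [H2SO4] 1e4-1e16 cm^-3, RH 1e-5-100 %."""
    if ion_induced:
        return (195.0 <= T <= 400.0 and 1e4 <= h2so4 <= 1e16
                and 1e-5 <= rh <= 100.0)
    return (165.0 <= T <= 400.0 and 1e4 <= h2so4 <= 1e13
            and 1e-3 <= rh <= 100.0)
```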
NASA Astrophysics Data System (ADS)
Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.
2016-12-01
Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice nucleation ability of different aerosol species (e.g., desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.
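The ice nucleation active site (INAS) density framework mentioned above converts a fitted surface density n_s(T) into a frozen fraction per particle. A sketch with a purely hypothetical exponential fit; the coefficients `a` and `b` are placeholders, not the published desert-dust or soot values:

```python
import numpy as np

def frozen_fraction(T, surface_area, a=-0.5, b=8.0):
    """INAS-density form: f = 1 - exp(-n_s(T) * A_particle).
    n_s(T) = exp(a*(T - 273.15) + b) [m^-2] is illustrative only;
    T in K, surface_area in m^2 per particle."""
    n_s = np.exp(a * (T - 273.15) + b)   # active sites per unit area
    return 1.0 - np.exp(-n_s * surface_area)
```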
Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization
NASA Astrophysics Data System (ADS)
Teixeira, J.
2015-12-01
Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flow. Because of this fact, turbulence and convection in the atmosphere have to be parameterized - i.e. equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently a variety of different models have been developed that attempt to simulate the atmosphere using variable resolution. A key problem, however, is that parameterizations are in general not explicitly aware of the resolution - the scale-awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, which not only is in itself a multi-scale parameterization but is also particularly well suited to deal with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary-layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.
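The decomposition at the heart of EDMF can be stated in one line; below is the canonical single-plume form for the subgrid vertical flux of a scalar φ (multi-plume variants replace the mass-flux term by a sum over plumes):

```latex
\overline{w'\phi'} \;=\;
\underbrace{-K_\phi\,\frac{\partial \bar{\phi}}{\partial z}}_{\text{eddy diffusivity (local turbulence)}}
\;+\;
\underbrace{M\left(\phi_u - \bar{\phi}\right)}_{\text{mass flux (coherent plumes)}}
```

Here K_φ is the eddy diffusivity, M the convective mass flux, and φ_u the updraft value of φ.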
Experiments with a Regional Vector-Vorticity Model, and Comparison with Other Models
NASA Astrophysics Data System (ADS)
Konor, C. S.; Dazlich, D. A.; Jung, J.; Randall, D. A.
2017-12-01
The Vector-Vorticity Model (VVM) is an anelastic model with a unique dynamical core that predicts the three-dimensional vorticity instead of the three-dimensional momentum. The VVM is used in the CRMs of the Global Quasi-3D Multiscale Modeling Framework, which is discussed by Joon-Hee Jung and collaborators elsewhere in this session. We are updating the physics package of the VVM, replacing it with the physics package of the System for Atmospheric Modeling (SAM). The new physics package includes a double-moment microphysics, Mellor-Yamada turbulence, Monin-Obukhov surface fluxes, and the RRTMG radiation parameterization. We briefly describe the VVM and show results from standard test cases, including TWP-ICE. We compare the results with those obtained using the earlier physics. We also show results from experiments on convection aggregation in radiative-convective equilibrium, and compare with those obtained using both SAM and the Regional Atmospheric Modeling System (RAMS).
Parameterized reduced-order models using hyper-dual numbers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
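Hyper-dual numbers, mentioned above as the derivative engine for the parameterized ROMs, deliver exact first and second derivatives with no step-size tuning. A minimal sketch (only the operations needed for the tiny example; the full arithmetic and the Craig-Bampton machinery are not shown):

```python
import math

class HyperDual:
    """a + b*e1 + c*e2 + d*e1*e2 with e1^2 = e2^2 = 0.
    Evaluating f(HyperDual(x, 1.0, 1.0, 0.0)) yields f(x) in .a,
    f'(x) in .b and .c, and f''(x) in .d, free of truncation and
    subtractive-cancellation error."""
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __mul__(self, o):
        if not isinstance(o, HyperDual):
            o = HyperDual(o)
        return HyperDual(
            self.a * o.a,
            self.a * o.b + self.b * o.a,
            self.a * o.c + self.c * o.a,
            self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)

    def sin(self):
        s, c = math.sin(self.a), math.cos(self.a)
        return HyperDual(s, c * self.b, c * self.c,
                         c * self.d - s * self.b * self.c)

x = HyperDual(0.7, 1.0, 1.0, 0.0)
y = (x * x).sin()        # f(x) = sin(x^2)
# y.b == 2*x*cos(x^2) and y.d == 2*cos(x^2) - 4*x^2*sin(x^2), exactly
```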
A note on: "A Gaussian-product stochastic Gent-McWilliams parameterization"
NASA Astrophysics Data System (ADS)
Jansen, Malte F.
2017-02-01
This note builds on a recent article by Grooms (2016), which introduces a new stochastic parameterization for eddy buoyancy fluxes. The closure proposed by Grooms accounts for the fact that eddy fluxes arise as the product of two approximately Gaussian variables, which in turn leads to a distinctly non-Gaussian distribution. The directionality of the stochastic eddy fluxes, however, remains somewhat ad hoc and depends on the reference frame of the chosen coordinate system. This note presents a modification of the approach proposed by Grooms, which eliminates this shortcoming. Eddy fluxes are computed based on a stochastic mixing length model, which leads to a frame-invariant formulation. As in the original closure proposed by Grooms, eddy fluxes are proportional to the product of two Gaussian variables, and the parameterization reduces to the Gent and McWilliams parameterization for the mean buoyancy fluxes.
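The distributional point above, that a product of two Gaussians is distinctly non-Gaussian, is easy to verify numerically; the correlation value below is arbitrary, and the two factors only stand in schematically for the mixing-length and gradient factors of the closure:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n, rho = 200_000, 0.5                    # rho: assumed factor correlation
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
flux = z1 * z2                           # product of two Gaussians

print(flux.mean())      # ~rho: the mean "flux" is set by the correlation
print(kurtosis(flux))   # strongly positive: heavy tails, non-Gaussian
```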
NASA Astrophysics Data System (ADS)
Medellín, G.; Brinkkemper, J. A.; Torres-Freyermuth, A.; Appendini, C. M.; Mendoza, E. T.; Salles, P.
2016-01-01
We present a downscaling approach for the study of wave-induced extreme water levels at a location on a barrier island in Yucatán (Mexico). Wave information from a 30-year wave hindcast is validated with in situ measurements at 8 m water depth. The maximum dissimilarity algorithm is employed for the selection of 600 representative cases, encompassing different combinations of wave characteristics and tidal level. The selected cases are propagated from 8 m water depth to the shore using the coupling of a third-generation wave model and a phase-resolving non-hydrostatic nonlinear shallow-water equation model. Extreme wave run-up, R2%, is estimated for the simulated cases and can be further employed to reconstruct the 30-year time series using an interpolation algorithm. Downscaling results show run-up saturation during more energetic wave conditions and modulation owing to tides. The latter suggests that the R2% can be parameterized using a hyperbolic-like formulation with dependency on both wave height and tidal level. The new parametric formulation is in agreement with the downscaling results (r^2 = 0.78), allowing a fast calculation of wave-induced extreme water levels at this location. Finally, an assessment of beach vulnerability to wave-induced extreme water levels is conducted at the study area by employing the two approaches (reconstruction/parameterization) and a storm impact scale. The 30-year extreme water level hindcast allows the calculation of beach vulnerability as a function of return periods. It is shown that the downscaling-derived parameterization provides reasonable results as compared with the numerical approach. This methodology can be extended to other locations and can be further improved by incorporating the storm surge contributions to the extreme water level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, Taylor; Guo, Yi; Veers, Paul
Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.
Analysis of sensitivity to different parameterization schemes for a subtropical cyclone
NASA Astrophysics Data System (ADS)
Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.
2018-05-01
A sensitivity analysis of diverse WRF model physical parameterization schemes is carried out over the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation there. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to carry out a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes had stabilized. Furthermore, to determine the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space is performed. Consequently, the combination of parameterizations including the Tiedtke cumulus schemes was again the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.
Lightning Scaling Laws Revisited
NASA Technical Reports Server (NTRS)
Boccippio, D. J.; Arnold, James E. (Technical Monitor)
2000-01-01
Scaling laws relating storm electrical generator power (and hence lightning flash rate) to charge transport velocity and storm geometry were originally posed by Vonnegut (1963). These laws were later simplified to yield simple parameterizations for lightning based upon cloud top height, with separate parameterizations derived over land and ocean. It is demonstrated that the most recent ocean parameterization: (1) yields predictions of storm updraft velocity which appear inconsistent with observation, and (2) is formally inconsistent with the theory from which it purports to derive. Revised formulations consistent with Vonnegut's original framework are presented. These demonstrate that Vonnegut's theory is, to first order, consistent with observation. The implications of assuming that flash rate is set by the electrical generator power, rather than the electrical generator current, are examined. The two approaches yield significantly different predictions about the dependence of charge transfer per flash on storm dimensions, which should be empirically testable. The two approaches also differ significantly in their explanation of regional variability in lightning observations.
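For orientation, the cloud-top-height parameterizations the abstract revisits are simple power laws. A sketch using the widely cited land values of Williams (1985); the revised ocean formulation discussed in the abstract has a different, much weaker height dependence and is not reproduced here:

```python
def flash_rate_land(cloud_top_height_km, a=3.44e-5, p=4.9):
    """Land flash rate F = a * H**p in flashes per minute, H in km
    (coefficients as commonly quoted for Williams, 1985)."""
    return a * cloud_top_height_km ** p

print(flash_rate_land(12.0))   # ~7 flashes/min for a 12 km storm top
```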
Modeling respiratory motion for reducing motion artifacts in 4D CT images.
Zhang, Yongbin; Yang, Jinzhong; Zhang, Lifei; Court, Laurence E; Balter, Peter A; Dong, Lei
2013-04-01
Four-dimensional computed tomography (4D CT) images have been recently adopted in radiation treatment planning for thoracic and abdominal cancers to explicitly define respiratory motion and anatomy deformation. However, significant image distortions (artifacts) exist in 4D CT images that may affect accurate tumor delineation and the shape representation of normal anatomy. In this study, the authors present a patient-specific respiratory motion model, based on principal component analysis (PCA) of motion vectors obtained from deformable image registration, with the main goal of reducing image artifacts caused by irregular motion during 4D CT acquisition. For a 4D CT image set of a specific patient, the authors calculated displacement vector fields relative to a reference phase, using an in-house deformable image registration method. The authors then used PCA to decompose each of the displacement vector fields into linear combinations of principal motion bases. The authors have demonstrated that the regular respiratory motion of a patient can be accurately represented by a subspace spanned by three principal motion bases and their projections. These projections were parameterized using a spline model to allow the reconstruction of the displacement vector fields at any given phase in a respiratory cycle. Finally, the displacement vector fields were used to deform the reference CT image to synthesize CT images at the selected phase with much reduced image artifacts. The authors evaluated the performance of the in-house deformable image registration method using benchmark datasets consisting of ten 4D CT sets annotated with 300 landmark pairs that were approved by physicians. The initial large discrepancies across the landmark pairs were significantly reduced after deformable registration, and the accuracy was similar to or better than that reported by state-of-the-art methods. The proposed motion model was quantitatively validated on 4D CT images of a phantom and a lung cancer patient by comparing the synthesized images and the original images at different phases. The synthesized images matched well with the original images. The motion model was used to reduce irregular motion artifacts in the 4D CT images of three lung cancer patients. Visual assessment indicated that the proposed approach could reduce severe image artifacts. The shape distortions around the diaphragm and tumor regions were mitigated in the synthesized 4D CT images. The authors have derived a mathematical model to represent the regular respiratory motion from a patient-specific 4D CT set and have demonstrated its application in reducing irregular motion artifacts in 4D CT images. The authors' approach can mitigate shape distortions of anatomy caused by irregular breathing motion during 4D CT acquisition.
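The pipeline described above (PCA of registered displacement vector fields, three retained motion bases, spline-parameterized projections) can be sketched in a few lines; the array shapes, helper names, and periodic-spline choice are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def build_motion_model(dvfs, phases, n_modes=3):
    """dvfs: (P, V) array, one flattened displacement vector field per
    phase, registered to a reference phase; phases: (P,) in [0, 1).
    Returns a callable giving a synthesized DVF at any phase."""
    mean = dvfs.mean(axis=0)
    U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    bases = Vt[:n_modes]                    # principal motion bases
    proj = U[:, :n_modes] * S[:n_modes]     # projections per phase
    order = np.argsort(phases)              # periodic spline over the cycle
    t = np.append(phases[order], phases[order][0] + 1.0)
    y = np.vstack([proj[order], proj[order][:1]])
    spline = CubicSpline(t, y, bc_type='periodic', axis=0)
    return lambda phase: mean + spline(phase % 1.0) @ bases
```

The DVF returned at a chosen phase would then be used to warp the reference CT and synthesize an artifact-reduced image at that phase.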
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean occurs on scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In the oceanic general circulation models typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
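One concrete example of parameterizing vertical mixing "in terms of the larger-scale fields" is the classical Richardson-number scheme of Pacanowski and Philander (1981); the sketch below uses the commonly quoted constants, shown for illustration rather than as a recommendation:

```python
def pp81_mixing(Ri, nu0=1e-2, alpha=5.0, n=2, nu_b=1e-4, kappa_b=1e-5):
    """Vertical viscosity and diffusivity [m^2/s] from the gradient
    Richardson number Ri = N^2 / (du/dz)^2, computed from the model's
    resolved stratification and shear."""
    Ri = max(Ri, 0.0)                             # intended for Ri >= 0
    nu = nu0 / (1.0 + alpha * Ri) ** n + nu_b
    kappa = nu0 / (1.0 + alpha * Ri) ** (n + 1) + kappa_b
    return nu, kappa
```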
Incommensurate crystallography without additional dimensions.
Kocian, Philippe
2013-07-01
It is shown that the Euclidean group of translations, when treated as a Lie group, generates translations not only in Euclidean space but on any space, curved or not. Translations are then not necessarily vectors (straight lines); they can be any curve compatible with the parameterization of the considered space. In particular, attention is drawn to the fact that one and only one finite and free module of the Lie algebra of the group of translations can generate both modulated and non-modulated lattices, the modulated character being given only by the parameterization of the space in which the lattice is generated. Moreover, it is shown that the diffraction pattern of a structure is directly linked to the action of that free and finite module. In the Fourier transform of a whole structure, the Fourier transform of the electron density of one unit cell (i.e. the structure factor) appears concretely, whether the structure is modulated or not. Thus, there exists a neat separation: the geometrical aspect on the one hand and the action of the group on the other, without requiring additional dimensions.
A Code Generation Approach for Auto-Vectorization in the Spade Compiler
NASA Astrophysics Data System (ADS)
Wang, Huayong; Andrade, Henrique; Gedik, Buğra; Wu, Kun-Lung
We describe an auto-vectorization approach for the Spade stream processing programming language, comprising two ideas. First, we provide support for vectors as a primitive data type. Second, we provide a C++ library with architecture-specific implementations of a large number of pre-vectorized operations as the means to support language extensions. We evaluate our approach with several stream processing operators, contrasting Spade's auto-vectorization with the native auto-vectorization provided by the GNU gcc and Intel icc compilers.
NASA Astrophysics Data System (ADS)
Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien
2006-03-01
Subdivision surfaces and parameterization are desirable for many algorithms that are commonly used in Medical Image Analysis. However, extracting an accurate surface and parameterization can be difficult for many anatomical objects of interest, due to noisy segmentations and the inherent variability of the object. The thin cartilages of the knee are an example of this, especially after damage is incurred from injuries or conditions like osteoarthritis. As a result, the cartilages can have different topologies or exist in multiple pieces. In this paper we present a topology preserving (genus 0) subdivision-based parametric deformable model that is used to extract the surfaces of the patella and tibial cartilages in the knee. These surfaces have minimal thickness in areas without cartilage. The algorithm inherently incorporates several desirable properties, including: shape based interpolation, sub-division remeshing and parameterization. To illustrate the usefulness of this approach, the surfaces and parameterizations of the patella cartilage are used to generate a 3D statistical shape model.
Whys and Hows of the Parameterized Interval Analyses: A Guide for the Perplexed
NASA Astrophysics Data System (ADS)
Elishakoff, I.
2013-10-01
Novel elements of the parameterized interval analysis developed in [1, 2] are emphasized in this response to Professor E.D. Popova, and possibly to others who may be perplexed by the parameterized interval analysis. It is also shown that the overwhelming majority of comments by Popova [3] are based on a misreading of our paper [1]. Partial responsibility for this misreading can be attributed to the fact that the explanations provided in [1] were laconic; these could have been more extensive in view of the novelty of our approach [1, 2]. It is our duty, therefore, to reiterate, in this response, the whys and hows of the parameterization of intervals, introduced in [1] to incorporate possibly available information on dependencies between the various intervals describing the problem at hand. This possibility appears to have been discarded by standard interval analysis, which may, as a result, lead to overdesign and possibly to the divorce of engineers from the otherwise beautiful interval analysis.
Nicolas, Gaëlle; Chevalier, Véronique; Tantely, Luciano Michaël; Fontenille, Didier; Durand, Benoît
2014-12-01
Rift Valley fever (RVF) is a vector-borne zoonotic disease that causes high morbidity and mortality in ruminants. In 2008-2009, an RVF outbreak affected the whole of Madagascar, including the Anjozorobe district located in the Madagascar highlands. An entomological survey showed the absence of Aedes among the potential RVF virus (RVFV) vector species identified in this area, and an overall low abundance of mosquitoes due to unfavorable climatic conditions during winter. No serological or virological sign of infection was observed in wild terrestrial mammals of the area, suggesting an absence of a wild RVFV reservoir. However, a three-year serological and virological follow-up in cattle showed recurrent RVFV circulation. The objective of this study was to understand the key determinants of this unexpected recurrent transmission. To achieve this goal, a spatial deterministic discrete-time metapopulation model combined with a cattle trade network was designed and parameterized to reproduce the local conditions using observational data collected in the area. Three scenarios that could explain the recurrent RVFV circulation in the area were analyzed: (i) RVFV overwintering thanks to direct transmission between cattle when viraemic cows calve, vectors being absent during the winter, (ii) low-level vector-based circulation during winter thanks to a residual vector population, without direct transmission between cattle, (iii) a combination of both above-mentioned mechanisms. Multi-model inference methods resulted in a model incorporating both low-level RVFV winter vector-borne transmission and direct transmission between animals when viraemic cows calve. Predictions satisfactorily reproduced field observations, with 84% of cattle infections attributed to vector-borne transmission and 16% to direct transmission. These results appeared robust according to the sensitivity analysis. The interplay between agricultural work in rice fields, the seasonality of vector proliferation, and cattle exchange practices could be a key element in understanding RVFV circulation in this area of the Madagascar highlands.
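To illustrate how the two retained transmission routes combine in a discrete-time step, here is a deliberately minimal single-patch S-I-R sketch; all rates and names are placeholders (the actual model is a spatial metapopulation coupled through the cattle trade network, with seasonally forced vector dynamics):

```python
import numpy as np

def rvf_step(S, I, R, m, beta_v, p_calving, beta_d, gamma=0.25):
    """One time step combining vector-borne transmission (seasonal
    mosquito abundance m) and direct transmission at calving of
    viraemic cows. All parameter values are illustrative."""
    N = S + I + R
    lam = beta_v * m * I / N                # vector-borne force of infection
    lam += beta_d * p_calving * I / N       # direct route via calvings
    new_inf = S * (1.0 - np.exp(-lam))      # discrete-time infection event
    recov = gamma * I                       # recovery (assumed rate)
    return S - new_inf, I + new_inf - recov, R + recov
```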
NASA Astrophysics Data System (ADS)
Jablonski, A.
2018-01-01
Growing availability of synchrotron facilities stimulates interest in quantitative applications of hard X-ray photoemission spectroscopy (HAXPES) using linearly polarized radiation. An advantage of this approach is the possibility of continuously varying the radiation energy, which makes it possible to control the sampling depth of a measurement. Quantitative applications are based on an accurate and reliable theory relating the measured spectral features to the needed characteristics of the surface region of solids. A major complication in the case of polarized radiation is the involved structure of the photoemission cross-section for hard X-rays. In the present work, details of the relevant formalism are described and algorithms implementing this formalism for different experimental configurations are proposed. The photoelectron signal intensity may be considerably affected by variation in the positioning of the polarization vector with respect to the surface plane. This information is critical for any quantitative application of HAXPES with polarized X-rays. Different quantitative applications based on photoelectrons with energies up to 10 keV are considered here: (i) determination of surface composition, (ii) estimation of sampling depth, and (iii) measurements of an overlayer thickness. Parameters facilitating these applications (mean escape depths, information depths, effective attenuation lengths) were calculated for a number of photoelectron lines in four elemental solids (Si, Cu, Ag and Au) in different experimental configurations and locations of the polarization vector. One of the considered configurations, with the polarization vector located in a plane perpendicular to the surface, is recommended for quantitative applications of HAXPES. In this configuration, it was found that the considered parameters vary weakly over the range of photoelectron emission angles from normal emission to about 50° with respect to the surface normal. The averaged values of the mean escape depth and effective attenuation length were approximated with accurate predictive formulas. The predicted effective attenuation lengths were compared with published values; the major discrepancies observed can be ascribed to a possibly discontinuous structure of the deposited overlayer.
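As an illustration of how an effective attenuation length (EAL) enters an overlayer-thickness measurement, a minimal sketch (Python) of the usual exponential-attenuation model; the function name and all numbers are generic XPS practice, not values from this paper.

    import math

    def overlayer_thickness(i_sub, i_sub_clean, eal_nm, theta_deg):
        """Thickness from substrate-signal attenuation, assuming
        I = I0 * exp(-d / (EAL * cos(theta)))."""
        return (eal_nm * math.cos(math.radians(theta_deg))
                * math.log(i_sub_clean / i_sub))

    # substrate line attenuated to 40% of its clean intensity at 30 deg emission
    print(overlayer_thickness(0.4, 1.0, eal_nm=5.0, theta_deg=30.0))  # ~3.97 nm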
An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers
Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.
2016-01-01
Here we present a new empirical method to estimate the SCL for marine-terminating glaciers using high-resolution observations. We use the empirically-determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.
NASA Astrophysics Data System (ADS)
Zhou, Bing; Greenhalgh, S. A.
2011-10-01
2.5-D modeling and inversion techniques are much closer to reality than traditional 2-D seismic wave modeling and inversion. The sensitivity kernels required in full-waveform seismic tomographic inversion are the Fréchet derivatives of the displacement vector with respect to the independent anisotropic model parameters of the subsurface. They give the sensitivity of the seismograms to changes in the model parameters. This paper applies two methods, called 'the perturbation method' and 'the matrix method', to derive the sensitivity kernels for 2.5-D seismic waveform inversion. We show that the two methods yield the same explicit expressions for the Fréchet derivatives under a constant-block model parameterization, and are applicable to both the line-source (2-D) and the point-source (2.5-D) cases. The method involves two Green's function vectors and their gradients, as well as the derivatives of the elastic modulus tensor with respect to the independent model parameters. The two Green's function vectors are the responses of the displacement vector to two directed unit vectors located at the source and geophone positions, respectively; they can generally be obtained by numerical methods. The gradients of the Green's function vectors may be approximated in the same manner as the differential computations in the forward modeling. The derivatives of the elastic modulus tensor with respect to the independent model parameters can be obtained analytically, depending on the class of medium anisotropy. Explicit expressions are given for two special cases: isotropic and tilted transversely isotropic (TTI) media. Numerical examples are given for the latter case, which involves five independent elastic moduli (or Thomsen parameters) plus one angle defining the symmetry axis.
Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian
2017-01-01
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469
Modeling, simulation, and analysis of optical remote sensing systems
NASA Technical Reports Server (NTRS)
Kerekes, John Paul; Landgrebe, David A.
1989-01-01
Remote sensing of the Earth's resources from space-based sensors has evolved in the past 20 years from a scientific experiment to a commonly used technological tool. The scientific applications and engineering aspects of remote sensing systems have been studied extensively. However, most of these studies have been aimed at understanding individual aspects of the remote sensing process, while relatively few have studied their interrelations. A motivation for studying these interrelationships has arisen with the advent of highly sophisticated configurable sensors as part of the Earth Observing System (EOS) proposed by NASA for the 1990s. Two approaches to investigating remote sensing systems are developed. In one approach, detailed models of the scene, the sensor, and the processing aspects of the system are implemented in a discrete simulation. This approach is useful in creating simulated images with desired characteristics for use in sensor or processing algorithm development. A less complete, but computationally simpler, method based on a parametric model of the system is also developed. In this analytical model the various informational classes are parameterized by their spectral mean vector and covariance matrix. These class statistics are modified by models for the atmosphere, the sensor, and the processing algorithms, and an estimate is made of the resulting classification accuracy among the informational classes. These models are applied to the study of the proposed High Resolution Imaging Spectrometer (HRIS). The interrelationships among observational conditions, sensor effects, and processing choices are investigated with several interesting results.
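A minimal sketch (Python) of the parametric-model idea: informational classes carried as Gaussian (mean, covariance) statistics, a sensor model adding noise covariance, and a pairwise classification-error bound computed from the Bhattacharyya distance. The three-band numbers are hypothetical.

    import numpy as np

    def bhattacharyya(m1, c1, m2, c2):
        """Bhattacharyya distance between two Gaussian class models."""
        c = 0.5 * (c1 + c2)
        dm = m2 - m1
        term1 = 0.125 * dm @ np.linalg.solve(c, dm)
        term2 = 0.5 * np.log(np.linalg.det(c) /
                             np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
        return term1 + term2

    # two hypothetical informational classes in a 3-band space
    m1, m2 = np.array([0.20, 0.30, 0.25]), np.array([0.22, 0.33, 0.27])
    c1 = c2 = np.diag([1e-4, 1e-4, 1e-4])
    sensor_noise = np.diag([5e-5, 5e-5, 5e-5])   # added by the sensor model
    b = bhattacharyya(m1, c1 + sensor_noise, m2, c2 + sensor_noise)
    print("pairwise error bound <= %.3f" % (0.5 * np.exp(-b)))  # equal priors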
Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction
NASA Technical Reports Server (NTRS)
Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.
2013-01-01
The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than to the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced sudden pixel sensitivity dropouts (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients, derived in the fit of pixel time series to the CBV, as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series that is correlated with the CBV, as well as relative pixel gain, proper motion, and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties of these quantities.
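A minimal sketch (Python) of the pixel-level idea: least-squares fitting of each pixel time series, rather than the co-added flux, against a set of cotrending basis vectors. Shapes and data here are illustrative stand-ins, not Kepler products.

    import numpy as np

    n_cadences, n_pixels, n_cbv = 4000, 25, 8
    rng = np.random.default_rng(1)
    cbv = rng.standard_normal((n_cadences, n_cbv))       # basis vectors
    pixels = rng.standard_normal((n_cadences, n_pixels)) # pixel time series

    coeffs, *_ = np.linalg.lstsq(cbv, pixels, rcond=None)  # (n_cbv, n_pixels)
    corrected = pixels - cbv @ coeffs
    # The spatial pattern of each CBV's coefficients across the aperture
    # can then be compared with PRF spatial derivatives, as described above.
    print(coeffs.shape, corrected.std())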
NASA Astrophysics Data System (ADS)
Freitas, S.; Grell, G. A.; Molod, A.
2017-12-01
We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization (Grell and Freitas, 2014) is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale- and aerosol-awareness functionalities. Scale dependence for deep convection is implemented either through the method described by Arakawa et al. (2011) or through lateral spreading of the subsidence terms. Aerosol effects are included through the dependence of autoconversion and evaporation on the CCN number concentration. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition among shallow, congestus, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in the model simulation of the diurnal cycle of convection over land. Also, a beta-pdf is now employed to represent the normalized mass flux profile. This opens up an additional avenue for applying stochasticism in the scheme.
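To illustrate the beta-pdf representation of a normalized mass flux profile, a small sketch (Python); the shape parameters are hypothetical tuning values, not those of the GEOS implementation.

    import numpy as np
    from scipy.stats import beta

    # z_n: height normalized by cloud depth; the shape parameters
    # control the level at which the mass flux peaks.
    z_n = np.linspace(0.0, 1.0, 21)
    profile = beta.pdf(z_n, a=2.0, b=3.0)
    profile /= profile.max()      # normalized mass flux profile, peak = 1
    print(profile.round(2))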
Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology
NASA Astrophysics Data System (ADS)
Jin, Z.; Azzari, G.; Lobell, D. B.
2016-12-01
Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate various parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM while significantly reducing its uncertainty.
USDA-ARS?s Scientific Manuscript database
Given a time series of potential evapotranspiration and rainfall data, there are at least two approaches for estimating vertical percolation rates. One approach involves solving Richards' equation (RE) with a plant uptake model. An alternative approach involves applying a simple soil moisture accoun...
NASA Astrophysics Data System (ADS)
Subramanian, Aneesh C.; Palmer, Tim N.
2017-06-01
Stochastic schemes to represent model uncertainty in the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system have helped improve its probabilistic forecast skill over the past decade, both by improving its reliability and by reducing the ensemble mean error. The largest uncertainties in the model arise from the physics parameterizations. In the tropics, the parameterization of moist convection presents a major challenge for the accurate prediction of weather and climate. Superparameterization is a promising alternative strategy for including the effects of moist convection through explicit turbulent fluxes calculated from a cloud-resolving model (CRM) embedded within a global climate model (GCM). In this paper, we compare the impact of initial random perturbations in embedded CRMs, within the ECMWF ensemble prediction system, with the stochastically perturbed physical tendency (SPPT) scheme as a way to represent model uncertainty in medium-range tropical weather forecasts. We focus especially on forecasts of tropical convection and dynamics during the MJO events of October-November 2011. These are well-studied events for MJO dynamics, as they were also heavily observed during the DYNAMO field campaign. We show that a multiscale ensemble modeling approach helps improve forecasts of certain aspects of tropical convection during the MJO events, while it also tends to deteriorate certain large-scale dynamic fields with respect to the stochastically perturbed physical tendencies approach used operationally at ECMWF.
Jirousková, Zuzana; Vareková, Radka Svobodová; Vanek, Jakub; Koca, Jaroslav
2009-05-01
The electronegativity equalization method (EEM) was developed by Mortier et al. as a semiempirical method based on density-functional theory. After parameterization, in which the EEM parameters A(i), B(i), and the adjusting factor kappa are obtained, this approach can be used to calculate the average electronegativity and the charge distribution in a molecule. The aim of this work is to perform the EEM parameterization using the Merz-Kollman-Singh (MK) charge distribution scheme obtained from B3LYP/6-31G* and HF/6-31G* calculations. To achieve this goal, we selected a set of 380 organic molecules from the Cambridge Structural Database (CSD) and used the methodology that was recently successfully applied to EEM parameterization for calculating HF/STO-3G Mulliken charges on large sets of molecules. In the case of B3LYP/6-31G* MK charges, we have improved the EEM parameters for the already parameterized elements, specifically C, H, N, O, and F. Moreover, we have also developed EEM parameters for S, Br, Cl, and Zn, which had not yet been parameterized for this level of theory and basis set. In the case of HF/6-31G* MK charges, we have developed EEM parameters for C, H, N, O, S, Br, Cl, F, and Zn, none of which had previously been parameterized for this level of theory and basis set. The obtained EEM parameters were verified by a previously developed validation procedure and used for charge calculation on a different set of 116 organic molecules from the CSD. The calculated EEM charges are in very good agreement with the quantum mechanically obtained ab initio charges. © 2008 Wiley Periodicals, Inc.
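Once parameters are available, EEM charges follow from a linear system: electronegativity equalization gives A_i + B_i q_i + kappa * sum_{j!=i} q_j / R_ij = chi_mean for each atom, plus total-charge conservation. A minimal sketch (Python; the diatomic geometry and parameter values are hypothetical, not the fitted values of this work):

    import numpy as np

    def eem_charges(A, B, kappa, coords, total_charge=0.0):
        """Solve the EEM linear system for atomic charges q and the
        equalized electronegativity chi_mean."""
        n = len(A)
        R = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        M = np.zeros((n + 1, n + 1))
        rhs = np.zeros(n + 1)
        for i in range(n):
            M[i, i] = B[i]
            for j in range(n):
                if j != i:
                    M[i, j] = kappa / R[i, j]
            M[i, n] = -1.0            # -chi_mean column
            rhs[i] = -A[i]
        M[n, :n] = 1.0                # charge conservation row
        rhs[n] = total_charge
        return np.linalg.solve(M, rhs)[:n]

    # toy diatomic with hypothetical parameters and a 1.1 A bond
    q = eem_charges(A=np.array([2.0, 3.0]), B=np.array([9.0, 11.0]),
                    kappa=0.5, coords=np.array([[0.0, 0.0, 0.0],
                                                [1.1, 0.0, 0.0]]))
    print(q)  # equal and opposite partial charges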
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaboud, M.; Aad, G.; Abbott, B.
2017-07-21
The production of a Z boson and a photon in association with a high-mass dijet system is studied using 20.2 fb⁻¹ of proton-proton collision data at a centre-of-mass energy of √s = 8 TeV recorded with the ATLAS detector in 2012 at the Large Hadron Collider. Final states with a photon and a Z boson decaying into a pair of either electrons, muons, or neutrinos are analysed. Electroweak and total pp → Zγjj cross-sections are extracted in two fiducial regions with different sensitivities to electroweak production processes. Quartic couplings of vector bosons are studied in regions of phase space with an enhanced contribution from pure electroweak production, sensitive to vector-boson scattering processes VV → Zγ. No deviations from Standard Model predictions are observed, and constraints are placed on anomalous couplings parameterized by higher-dimensional operators using effective field theory.
Barbu, Corentin; Dumonteil, Eric; Gourbière, Sébastien
2011-01-01
Background: Chagas disease is a major neglected tropical disease with deep socio-economic effects throughout Central and South America. Vector control programs have consistently reduced domestic populations of triatomine vectors, but non-domiciliated vectors still have to be controlled efficiently. Designing control strategies targeting these vectors is challenging, as it requires a quantitative description of the spatio-temporal dynamics of village infestation, which can only be gained from combinations of extensive field studies and spatial population dynamic modelling. Methodology/Principal Findings: A spatially explicit population dynamic model was combined with a two-year field study of T. dimidiata infestation dynamics in the village of Teya, Mexico. The parameterized model fitted and predicted accurately both the intra-annual variation and the spatial gradient in vector abundance. Five different control strategies were then applied in concentric rings to mimic spatial designs targeting the periphery of the village, where vectors were most abundant. Indoor insecticide spraying and insect screens reduced vector abundance by up to 80% (when applied to the whole village), and half of this effect was obtained when control was applied only to the 33% of households closest to the village periphery. Peri-domicile cleaning was able to eliminate up to 60% of the vectors, but at the periphery of the village it had little effect, as it is ineffective against sylvatic insects. The use of lethal traps and the management of house attractiveness provided similar levels of control. However, this required either house attractiveness to be null, or ≥5 lethal traps, at least as attractive as houses, to be installed in each household. Conclusion/Significance: Insecticide and insect screens used in houses at the periphery of the village can contribute to reducing house infestation in more central untreated zones. However, this beneficial effect remains insufficient for a single spatially targeted strategy to offer protection to all households. Most efficiently, control should combine the use of insect screens in outer zones, to reduce infestation by both sylvatic and peri-domiciliated vectors, and cleaning of the peri-domicile in the centre of the village, where sylvatic vectors are absent. The design of such spatially mixed control strategies offers a promising avenue for reducing the economic cost associated with the control of non-domiciliated vectors. PMID:21610862
A Survey of Phase Variable Candidates of Human Locomotion
Villarreal, Dario J.; Gregg, Robert D.
2014-01-01
Studies show that the human nervous system is able to parameterize gait cycle phase using sensory feedback. In the field of bipedal robots, the concept of a phase variable has been used successfully to mimic this behavior by parameterizing the gait cycle in a time-independent manner. This approach has been applied to control a powered transfemoral prosthetic leg, but the proposed phase variable was limited to the stance period of the prosthesis only. In order to achieve a more robust controller, we attempt to find a new phase variable that fully parameterizes the gait cycle of a prosthetic leg. The angle of the hip with respect to a global reference frame is able to monotonically parameterize both the stance and swing periods of the gait cycle. This survey examines multiple phase variable candidates involving the hip angle with respect to a global reference frame across multiple tasks, including level-ground walking, running, and stair negotiation. In particular, we propose a novel phase variable candidate that monotonically parameterizes the whole gait cycle across all tasks, and does so particularly well during level-ground walking. In addition to furthering the design of robust robotic prosthetic leg controllers, this survey could help neuroscientists and physicians study human locomotion across tasks from a time-independent perspective. PMID:25570873
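One common way to build a monotonic phase variable from a joint angle is the phase-portrait construction atan2(scaled velocity, centered angle). A minimal sketch (Python) on a synthetic hip-angle trace; the signal, scaling, and centering are hypothetical, not the candidates evaluated in this survey.

    import numpy as np

    t = np.linspace(0.0, 1.0, 200)                 # one normalized stride
    hip = 0.35 * np.cos(2 * np.pi * t) + 0.05      # synthetic global hip angle (rad)
    hip_vel = np.gradient(hip, t)

    k = 1.0 / (2 * np.pi)                          # velocity scaling constant
    phase = np.unwrap(np.arctan2(-k * hip_vel, hip - hip.mean()))
    phase = (phase - phase[0]) / (phase[-1] - phase[0])  # map to [0, 1]
    print(np.all(np.diff(phase) > 0))              # monotonicity over the cycle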
Adenoviral Vector Immunity: Its Implications and circumvention strategies
Ahi, Yadvinder S.; Bangari, Dinesh S.; Mittal, Suresh K.
2014-01-01
Adenoviral (Ad) vectors have emerged as a promising gene delivery platform for a variety of therapeutic and vaccine purposes during the last two decades. However, the presence of preexisting Ad immunity and the rapid development of Ad vector immunity still pose significant challenges to the clinical use of these vectors. The innate inflammatory response following Ad vector administration may lead to systemic toxicity, drastically limit vector transduction efficiency, and significantly abbreviate the duration of transgene expression. Currently, a number of approaches are being extensively pursued to overcome these drawbacks by strategies that target either the host or the Ad vector. In addition, significant progress has been made in the development of novel Ad vectors based on less prevalent human Ad serotypes and nonhuman Ads. This review provides an update on our current understanding of immune responses to Ad vectors and delineates various approaches for eluding Ad vector immunity. Approaches targeting the host and those targeting the vector are discussed in light of their promises and limitations. PMID:21453277
NASA Astrophysics Data System (ADS)
Hansen, S. K.; Berkowitz, B.
2014-12-01
Recently, we developed an alternative CTRW formulation which uses a "latching" upscaling scheme to rigorously map continuous or fine-scale stochastic solute motion onto discrete transitions on an arbitrarily coarse lattice (with spacing potentially on the meter scale or more). This approach enables model simplification, among many other things. Under advection, for example, we see that many relevant anomalous transport problems may be mapped into 1D, with latching to a sequence of successive, uniformly spaced planes. In this formulation (which we term RP-CTRW), the spatial transition vector may generally be made deterministic, with CTRW waiting time distributions encapsulating all the stochastic behavior. We demonstrate the excellent performance of this technique, alongside Pareto-distributed waiting times, in explaining experiments across a variety of scales using only two degrees of freedom. An interesting new application of the RP-CTRW technique is the analysis of radial (push-pull) tracer tests. Given modern computational power, random walk simulations are a natural fit for the inverse problem of inferring subsurface parameters from push-pull test data, and we propose them as an alternative to the classical type-curve approach. In particular, we explore the visibility of heterogeneity through non-Fickian behavior in push-pull tests, and illustrate the ability of a radial RP-CTRW technique to encapsulate this behavior using a sparse parameterization which has predictive value.
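A minimal 1-D sketch (Python) of the RP-CTRW idea as described above: deterministic unit transitions between uniformly spaced planes, with Pareto-distributed waiting times carrying all the stochasticity. Parameters are illustrative, not fitted values.

    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_planes = 20000, 50
    alpha = 1.5                                   # Pareto tail exponent
    # arrival time at plane n = sum of the first n waiting times
    waits = rng.pareto(alpha, size=(n_particles, n_planes)) + 1.0
    arrivals = waits.cumsum(axis=1)

    t_obs = 100.0
    position = (arrivals <= t_obs).sum(axis=1)    # planes passed by t_obs
    print("mean plume position:", position.mean())
    print("late-time tail mass:", (position < 10).mean())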
Dommergues, Laure; Zumbo, Betty; Cardinale, Eric
2015-01-01
Rift Valley fever (RVF) is a zoonotic vector-borne disease causing abortion storms in cattle and human epidemics in Africa. Our aim was to evaluate RVF persistence in a seasonal and isolated population and to apply it to Mayotte Island (Indian Ocean), where the virus was still silently circulating four years after its last known introduction in 2007. We proposed a stochastic model to estimate RVF persistence over several years and under four seasonal patterns of vector abundance. Firstly, the model predicted a wide range of virus spread patterns, from obligate persistence in a constant or tropical environment (without needing vertical transmission or reintroduction) to frequent extinctions in a drier climate. We then identified for each scenario of seasonality the parameters that most influenced prediction variations. Persistence was sensitive to vector lifespan and biting rate in a tropical climate, and to host viraemia duration and vector lifespan in a drier climate. The first epizootic peak was primarily sensitive to viraemia duration and thus likely to be controlled by vaccination, whereas subsequent peaks were sensitive to vector lifespan and biting rate in a tropical climate, and to host birth rate and viraemia duration in arid climates. Finally, we parameterized the model according to Mayotte known environment. Mosquito captures estimated the abundance of eight potential RVF vectors. Review of RVF competence studies on these species allowed adjusting transmission probabilities per bite. Ruminant serological data since 2004 and three new cross-sectional seroprevalence studies are presented. Transmission rates had to be divided by more than five to best fit observed data. Five years after introduction, RVF persisted in more than 10% of the simulations, even under this scenario of low transmission. Hence, active surveillance must be maintained to better understand the risk related to RVF persistence and to prevent new introductions. PMID:26147799
Transverse momentum dependent (TMD) parton distribution functions: Status and prospects*
Angeles-Martinez, R.; Bacchetta, A.; Balitsky, Ian I.; ...
2015-01-01
In this study, we review transverse momentum dependent (TMD) parton distribution functions, their application to topical issues in high-energy physics phenomenology, and their theoretical connections with QCD resummation, evolution and factorization theorems. We illustrate the use of TMDs via examples of multi-scale problems in hadronic collisions. These include transverse momentum q_T spectra of Higgs and vector bosons for low q_T, and azimuthal correlations in the production of multiple jets associated with heavy bosons at large jet masses. We discuss computational tools for TMDs, and present the application of a new tool, TMDlib, to parton density fits and parameterizations.
Spielman, Stephanie J; Wilke, Claus O
2016-11-01
The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Modeling of the Wegener Bergeron Findeisen process—implications for aerosol indirect effects
NASA Astrophysics Data System (ADS)
Storelvmo, T.; Kristjánsson, J. E.; Lohmann, U.; Iversen, T.; Kirkevåg, A.; Seland, Ø.
2008-10-01
A new parameterization of the Wegener-Bergeron-Findeisen (WBF) process has been developed, and implemented in the general circulation model CAM-Oslo. The new parameterization scheme has important implications for the process of phase transition in mixed-phase clouds. The new treatment of the WBF process replaces a previous formulation, in which the onset of the WBF effect depended on a threshold value of the mixing ratio of cloud ice. As no observational guidance for such a threshold value exists, the previous treatment added uncertainty to estimates of aerosol effects on mixed-phase clouds. The new scheme takes subgrid variability into account when simulating the WBF process, allowing for smoother phase transitions in mixed-phase clouds compared to the previous approach. The new parameterization yields a model state which gives reasonable agreement with observed quantities, allowing for calculations of aerosol effects on mixed-phase clouds involving a reduced number of tunable parameters. Furthermore, we find a significant sensitivity to perturbations in ice nuclei concentrations with the new parameterization, which leads to a reversal of the traditional cloud lifetime effect.
A Thermal Infrared Radiation Parameterization for Atmospheric Studies
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)
2001-01-01
This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes absorption by the major gaseous absorbers (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, different approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed using either the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
Lu, Chunsong; Liu, Yangang; Zhang, Guang J.; ...
2016-02-01
This work examines the relationships of entrainment rate to vertical velocity, buoyancy, and turbulent dissipation rate by applying stepwise principal component regression to observational data from shallow cumulus clouds collected during the Routine AAF [Atmospheric Radiation Measurement (ARM) Aerial Facility] Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) field campaign over the ARM Southern Great Plains (SGP) site near Lamont, Oklahoma. The cumulus clouds during the RACORO campaign simulated using a large eddy simulation (LES) model are also examined with the same approach. The analysis shows that a combination of multiple variables can better represent entrainment rate in both the observations and LES than any single-variable fitting. Three commonly used parameterizations are also tested on the individual cloud scale. A new parameterization is therefore presented that relates entrainment rate to vertical velocity, buoyancy and dissipation rate; the effects of treating clouds as ensembles and of humid shells surrounding cumulus clouds on the new parameterization are discussed. Physical mechanisms underlying the relationships of entrainment rate to vertical velocity, buoyancy and dissipation rate are also explored.
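A sketch (Python) of the multi-variable regression idea: principal components of (vertical velocity, buoyancy, dissipation rate) regressed against entrainment rate. The data below are synthetic stand-ins with a hypothetical relation, since the campaign data are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 500
    w = rng.gamma(2.0, 1.0, n)               # vertical velocity (m/s)
    buoy = rng.normal(0.01, 0.005, n)        # buoyancy (m/s^2)
    diss = rng.gamma(1.5, 2e-3, n)           # dissipation rate (m^2/s^3)
    ent = 0.4 / w + 20.0 * diss + rng.normal(0, 0.02, n)  # toy relation

    X = np.column_stack([w, buoy, diss])
    Xs = (X - X.mean(0)) / X.std(0)          # standardize
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    pcs = Xs @ Vt.T                          # principal components
    G = np.column_stack([np.ones(n), pcs[:, :2]])   # keep leading 2 PCs
    coef, *_ = np.linalg.lstsq(G, ent, rcond=None)
    pred = G @ coef
    ss_res = ((ent - pred) ** 2).sum()
    ss_tot = ((ent - ent.mean()) ** 2).sum()
    print("R^2 = %.2f" % (1 - ss_res / ss_tot))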
NASA Astrophysics Data System (ADS)
Oh, D.; Noh, Y.; Hoffmann, F.; Raasch, S.
2017-12-01
The Lagrangian cloud model (LCM) is a fundamentally new approach to cloud simulation, in which the flow field is simulated by large eddy simulation and droplets are treated as Lagrangian particles undergoing cloud microphysics. The LCM enables us to investigate raindrop formation and to examine the parameterization of cloud microphysics directly by tracking the history of individual Lagrangian droplets. Analysis of the magnitude of raindrop formation and of the background physical conditions at the moment at which each Lagrangian droplet grows from a cloud droplet to a raindrop in a shallow cumulus cloud reveals how, and under which conditions, raindrops are formed. It also provides information on how autoconversion and accretion appear and evolve within a cloud, and how they are affected by various factors such as cloud water mixing ratio, rain water mixing ratio, aerosol concentration, drop size distribution, and dissipation rate. Based on these results, the parameterizations of autoconversion and accretion, such as Kessler (1969), Tripoli and Cotton (1980), Beheng (1994), and Khairoutdinov and Kogan (2000), are examined, and modifications to improve the parameterizations are proposed.
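For reference, one of the bulk schemes examined here, the Khairoutdinov and Kogan (2000) autoconversion rate, has the widely cited power-law form below; a minimal sketch (Python) with hypothetical in-cloud values.

    def kk2000_autoconversion(qc, nc):
        """Khairoutdinov and Kogan (2000) autoconversion rate:
        dqr/dt = 1350 * qc**2.47 * nc**-1.79,
        with qc in kg/kg, nc in cm^-3, and the rate in kg/kg/s."""
        return 1350.0 * qc**2.47 * nc**-1.79

    print(kk2000_autoconversion(qc=5e-4, nc=100.0))  # ~2.5e-9 kg/kg/s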
NASA Astrophysics Data System (ADS)
Charles, T. K.; Paganin, D. M.; Dowd, R. T.
2016-08-01
Intrinsic emittance is often the limiting factor for brightness in fourth-generation light sources, and as such a good understanding of the factors affecting intrinsic emittance is essential in order to be able to decrease it. Here we present a parameterization model describing the proportional increase in emittance induced by cathode surface roughness. One major benefit of the parameterization approach presented here is that it takes the complexity of a Monte Carlo model and reduces the results to a straightforward empirical model. The resulting models describe the proportional increase in transverse momentum introduced by surface roughness, and are applicable to various metal types, photon wavelengths, applied electric fields, and cathode surface terrains. The analysis includes the increase in emittance due to changes in the electric field induced by roughness, as well as the increase in transverse momentum resulting from the spatially varying surface normal. We also compare the results of the parameterization model to an analytical model which employs various approximations to produce a more compact expression at the cost of a reduction in accuracy.
Dommert, M; Reginatto, M; Zboril, M; Fiedler, F; Helmbrecht, S; Enghardt, W; Lutz, B
2017-11-28
Bonner sphere measurements are typically analyzed using unfolding codes. It is well known that it is difficult to get reliable estimates of uncertainties from standard unfolding procedures. An alternative approach is to analyze the data using Bayesian parameter estimation. This method provides reliable estimates of the uncertainties of neutron spectra, leading to rigorous estimates of the uncertainties of the dose. We extend previous Bayesian approaches and apply the method to stray neutrons in proton therapy environments by introducing a new parameterized model which describes the main features of the expected neutron spectra. The parameterization is based on information that is available from measurements and detailed Monte Carlo simulations. The approach has been validated with the results of an experiment using Bonner spheres carried out at the experimental hall of the OncoRay proton therapy facility in Dresden. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Tang, Gao; Jiang, FanHuag; Li, JunFeng
2015-11-01
Near-Earth asteroids have attracted considerable interest, and developments in low-thrust propulsion technology make complex deep-space exploration missions possible. A mission that starts from low-Earth orbit, uses a low-thrust electric propulsion system to rendezvous with a near-Earth asteroid, and brings a sample back is investigated. By dividing the mission into five segments, the complex mission is solved separately, and different methods are used to find optimal trajectories for each segment. Multiple revolutions around the Earth and multiple Moon gravity assists are used to decrease the fuel consumption needed to escape from the Earth. To avoid the possible numerical difficulties of indirect methods, a direct method that parameterizes the switching times and the direction of the thrust vector is proposed. To maximize the mass of the sample, optimal control theory and a homotopic approach are applied to find the optimal trajectory. Direct methods for finding the proper time to brake the spacecraft using a Moon gravity assist are also proposed. Practical techniques, including both direct and indirect methods, are investigated to optimize trajectories for the different segments, and they can easily be extended to other missions and more precise dynamic models.
NASA Astrophysics Data System (ADS)
Huang, Melin; Huang, Bormin; Huang, Allen H.-L.
2015-10-01
Cumulus parameterization schemes are responsible for the sub-grid-scale effects of convective and/or shallow clouds, and are intended to represent vertical fluxes due to unresolved updrafts and downdrafts and the compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. All of the schemes provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one such scheme in the Weather Research and Forecasting (WRF) model. The National Centers for Environmental Prediction (NCEP) has worked to optimize the BMJ scheme for operational application. As there are no interactions among horizontal grid points, the scheme is very well suited to parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its support for efficient parallelization and vectorization, allows us to optimize the BMJ scheme. Compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi 7120P coprocessor improves performance by 2.4x and 17.0x, respectively.
The BGS magnetic field candidate models for the 12th generation IGRF
NASA Astrophysics Data System (ADS)
Hamilton, Brian; Ridley, Victoria A.; Beggan, Ciarán D.; Macmillan, Susan
2015-05-01
We describe the candidate models submitted by the British Geological Survey for the 12th generation International Geomagnetic Reference Field. These models are extracted from a spherical harmonic 'parent model' derived from vector and scalar magnetic field data from satellite and observatory sources. These data cover the period 2009.0 to 2014.7 and include measurements from the recently launched European Space Agency (ESA) Swarm satellite constellation. The parent model's internal field time dependence for degrees 1 to 13 is represented by order-6 B-splines with knots at yearly intervals. The parent model's degree-1 external field time dependence is described by periodic functions for the annual and semi-annual signals and by dependence on the 20-min Vector Magnetic Disturbance index. Signals induced by these external fields are also parameterized. Satellite data are weighted by spatial density and by two different noise estimators: (a) the standard deviation along segments of the satellite track and (b) a larger-scale noise estimator defined in terms of a measure of vector activity at the magnetic observatories geographically closest to the sample point. Forecasting of the magnetic field secular variation beyond the span of data is by advection of the main field using core surface flows.
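For context, the internal part of such a spherical harmonic model is conventionally written as a scalar potential with Gauss coefficients, here time-dependent via the B-spline expansion described above; this is the standard form, not quoted from the paper:

    V(r,\theta,\varphi,t) = a \sum_{n=1}^{N} \sum_{m=0}^{n}
      \left(\frac{a}{r}\right)^{n+1}
      \left[g_n^m(t)\cos m\varphi + h_n^m(t)\sin m\varphi\right]
      P_n^m(\cos\theta),
    \qquad \mathbf{B} = -\nabla V,

where a is the geomagnetic reference radius, (r, θ, φ) are geocentric coordinates, and P_n^m are the Schmidt semi-normalized associated Legendre functions.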
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nesterenko, A. V.
The dispersive approach to QCD, which properly embodies the intrinsically nonperturbative constraints originating in the kinematic restrictions on the relevant physical processes and extends the applicability range of perturbation theory towards the infrared domain, is briefly overviewed. A study of OPAL (update 2012) and ALEPH (update 2014) experimental data on inclusive τ lepton hadronic decay in the vector and axial-vector channels within the dispersive approach is presented.
NASA Astrophysics Data System (ADS)
Tomassini, Lorenzo; Field, Paul R.; Honnert, Rachel; Malardel, Sylvie; McTaggart-Cowan, Ron; Saitou, Kei; Noda, Akira T.; Seifert, Axel
2017-03-01
A stratocumulus-to-cumulus transition, as observed in a cold air outbreak over the North Atlantic Ocean, is compared in global climate and numerical weather prediction models and a large-eddy simulation model as part of the Working Group on Numerical Experimentation "Grey Zone" project. The focus of the project is to investigate to what degree current convection and boundary layer parameterizations behave in a scale-adaptive manner in situations where the model resolution approaches the scale of convection. Global model simulations were performed at a wide range of resolutions, with convective parameterizations turned on and off. The models successfully simulate the transition between the observed boundary layer structures, from a well-mixed stratocumulus to a deeper, partly decoupled cumulus boundary layer. There are indications that surface fluxes are generally underestimated. The amounts of cloud liquid water and cloud ice, and likely precipitation, are underpredicted, suggesting deficiencies in the strength of vertical mixing in shear-dominated boundary layers. Regulation by precipitation and mixed-phase cloud microphysical processes also plays an important role in this case. With convection parameterizations switched on, the profiles of atmospheric liquid water and cloud ice are essentially resolution-insensitive. This, however, does not imply that the convection parameterizations are scale-aware: even at the highest resolutions considered here, simulations with convective parameterizations do not converge toward the results of the convection-off experiments. Convection and boundary layer parameterizations interact strongly, suggesting the need for a unified treatment of convective and turbulent mixing when addressing scale-adaptivity.
Shrinkage Degree in L2-Rescale Boosting for Regression.
Xu, Lin; Lin, Shaobo; Wang, Yao; Xu, Zongben
2017-08-01
L2-rescale boosting (L2-RBoosting) is a variant of L2-Boosting which can essentially improve the generalization performance of L2-Boosting. The key feature of L2-RBoosting lies in introducing a shrinkage degree to rescale the ensemble estimate in each iteration. Thus, the shrinkage degree determines the performance of L2-RBoosting. The aim of this paper is to develop a concrete analysis of how to determine the shrinkage degree in L2-RBoosting. We propose two feasible ways to select the shrinkage degree: the first is to parameterize the shrinkage degree, and the other is to develop a data-driven approach. After rigorously analyzing the importance of the shrinkage degree in L2-RBoosting, we compare the pros and cons of the proposed methods. We find that although these approaches can reach the same learning rates, the structure of the final estimator of the parameterized approach is better, which sometimes yields better generalization capability when the number of samples is finite. We therefore recommend parameterizing the shrinkage degree of L2-RBoosting. We also present an adaptive parameter-selection strategy for the shrinkage degree and verify its feasibility through both theoretical analysis and numerical verification. The obtained results enhance the understanding of L2-RBoosting and give guidance on how to use it for regression tasks.
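A minimal sketch (Python) of boosting with a rescaling step: at each iteration the current ensemble is shrunk by a degree s_k before the new weak learner (a regression stump here) is added. The sequence s_k = 2/(k+2) and the stump learner are hypothetical choices for illustration, not the paper's parameterization.

    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.uniform(-1, 1, 300)
    y = np.sin(3 * x) + rng.normal(0, 0.1, 300)

    def fit_stump(x, r):
        """Best single-split stump for residuals r."""
        best = (np.inf, 0.0, r.mean(), r.mean())
        for thr in np.quantile(x, np.linspace(0.05, 0.95, 19)):
            left, right = r[x <= thr].mean(), r[x > thr].mean()
            pred = np.where(x <= thr, left, right)
            err = ((r - pred) ** 2).sum()
            if err < best[0]:
                best = (err, thr, left, right)
        return best[1:]

    F = np.zeros_like(y)
    for k in range(100):
        s_k = 2.0 / (k + 2.0)                  # shrinkage degree
        thr, left, right = fit_stump(x, y - (1 - s_k) * F)
        h = np.where(x <= thr, left, right)
        F = (1 - s_k) * F + s_k * h            # rescale, then add
    print("training MSE: %.4f" % ((y - F) ** 2).mean())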
Biodiversity can help prevent malaria outbreaks in tropical forests.
Laporta, Gabriel Zorello; Lopez de Prado, Paulo Inácio Knegt; Kraenkel, Roberto André; Coutinho, Renato Mendes; Sallum, Maria Anice Mureb
2013-01-01
Plasmodium vivax is a widely distributed, neglected parasite that can cause malaria and death in tropical areas. It is associated with an estimated 80-300 million cases of malaria worldwide. Brazilian tropical rain forests encompass host- and vector-rich communities, in which two hypothetical mechanisms could play a role in the dynamics of malaria transmission. The first mechanism is the dilution effect caused by the presence of wild warm-blooded animals, which can act as dead-end hosts for Plasmodium parasites. The second is diffuse mosquito vector competition, in which vector and non-vector mosquito species compete for blood feeding upon a defensive host. Considering that the World Health Organization Malaria Eradication Research Agenda calls for novel strategies to eliminate malaria transmission locally, we used mathematical modeling to assess those two mechanisms in a pristine tropical rain forest, where the primary vector is present but malaria is absent. The Ross-Macdonald model and a biodiversity-oriented model were parameterized using newly collected data and data from the literature. The basic reproduction number (R0) estimated employing the Ross-Macdonald model indicated that malaria cases should occur in the study location; however, no malaria cases have been reported since 1980. In contrast, the biodiversity-oriented model corroborated the absence of malaria transmission. In addition, the diffuse competition mechanism was negatively correlated with the risk of malaria transmission, which suggests a protective effect provided by the forest ecosystem. There is a non-linear, unimodal correlation between the mechanism of dead-end transmission of parasites and the risk of malaria transmission, suggesting a protective effect only under certain circumstances (e.g., a high abundance of wild warm-blooded animals). To achieve biological conservation and to eliminate Plasmodium parasites in human populations, the World Health Organization Malaria Eradication Research Agenda should take biodiversity issues into consideration.
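For reference, the classical Ross-Macdonald basic reproduction number has the closed form below; a minimal sketch (Python) with hypothetical forest-site values, not the parameterization fitted in this study.

    import math

    def ross_macdonald_r0(m, a, b, c, mu, n, r):
        """Classical Ross-Macdonald basic reproduction number:
        R0 = m * a**2 * b * c * exp(-mu * n) / (r * mu),
        with m the vector-to-host ratio, a the biting rate, b and c the
        transmission probabilities per bite, mu the vector mortality rate,
        n the extrinsic incubation period, and r the host recovery rate."""
        return m * a**2 * b * c * math.exp(-mu * n) / (r * mu)

    print(ross_macdonald_r0(m=2.0, a=0.3, b=0.1, c=0.5,
                            mu=0.1, n=10.0, r=0.01))  # ~3.3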
Robustness of Hierarchical Modeling of Skill Association in Cognitive Diagnosis Models
ERIC Educational Resources Information Center
Templin, Jonathan L.; Henson, Robert A.; Templin, Sara E.; Roussos, Louis
2008-01-01
Several types of parameterizations of attribute correlations in cognitive diagnosis models use the reduced reparameterized unified model. The general approach presumes an unconstrained correlation matrix with K(K - 1)/2 parameters, whereas the higher order approach postulates K parameters, imposing a unidimensional structure on the correlation…
Improved Stratospheric Temperature Retrievals for Climate Reanalysis
NASA Technical Reports Server (NTRS)
Rokke, L.; Joiner, J.
1999-01-01
The Data Assimilation Office (DAO) is embarking on plans to generate a twenty-year reanalysis data set of climatic atmospheric variables. One focus will be the evaluation of the dynamics of the stratosphere. The Stratospheric Sounding Unit (SSU), flown as part of the TIROS Operational Vertical Sounder (TOVS), is one of the primary stratospheric temperature sensors flown consistently throughout the reanalysis period. Seven unique sensors made the measurements over time, with individual instrument characteristics that need to be addressed. The stratospheric temperatures being assimilated across satellite platforms will profoundly impact the reanalysis dynamical fields. To quantify aspects of instrument and retrieval bias, we are carefully collecting and analyzing all available information on the sensors, their instrument anomalies, forward model errors, and retrieval biases. For the retrieval of stratospheric temperatures, we adapted the minimum variance approach of Jazwinski (1970) and Rodgers (1976) and applied it to the SSU soundings. In our algorithm, the state vector contains an initial guess of temperature from a model six-hour forecast provided by the Goddard EOS Data Assimilation System (GEOS/DAS). This is combined with an a priori covariance matrix, a forward model parameterization, and specifications of instrument noise characteristics. A quasi-Newtonian iteration is used to obtain convergence of the retrieved state to the measurement vector. This algorithm also enables us to analyze and address the systematic errors associated with the unique characteristics of the cell pressures on the individual SSU instruments and the resolving power of the instruments with respect to vertical gradients in the stratosphere. Preliminary results of the improved retrievals and their assimilation, as well as baseline calculations of bias and rms error between the NESDIS operational product and collocated ground measurements, will be presented.
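A minimal sketch (Python) of one quasi-Newtonian (Gauss-Newton) step of a minimum-variance retrieval in the standard Rodgers form; the toy matrices, dimensions, and the linear forward model are hypothetical stand-ins, not SSU quantities.

    import numpy as np

    def gauss_newton_step(x, xa, y, F, K, Sa, Se):
        """x: state; xa: prior (forecast); y: measurements; F: forward
        model value at x; K: Jacobian; Sa/Se: prior and noise covariances."""
        lhs = K.T @ np.linalg.solve(Se, K) + np.linalg.inv(Sa)
        rhs = K.T @ np.linalg.solve(Se, y - F) + np.linalg.solve(Sa, xa - x)
        return x + np.linalg.solve(lhs, rhs)

    nlev, nchan = 5, 3
    rng = np.random.default_rng(4)
    K = rng.normal(size=(nchan, nlev)) * 0.1
    xa = np.full(nlev, 230.0)                   # forecast prior (K)
    x_true = xa + rng.normal(0, 2.0, nlev)
    y = K @ x_true + rng.normal(0, 0.1, nchan)  # simulated radiances
    x = xa.copy()
    for _ in range(5):                          # iterate to convergence
        x = gauss_newton_step(x, xa, y, K @ x, K,
                              np.eye(nlev) * 4.0, np.eye(nchan) * 0.01)
    print(np.abs(x - x_true).mean())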
A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction
NASA Astrophysics Data System (ADS)
Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur
2009-07-01
For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulence schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (Eddy Diffusivity/Mass Flux), based on a single updraft, that improves the representation of dry thermals and shallow convective clouds and preserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, being specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is taken proportional to the sub-cloud-layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover, and the projection onto the non-conservative variables, is handled by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme in representing three different boundary layer regimes. For dry convective cases, the parameterization enables a correct representation of the countergradient zone, where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and preserve a realistic evolution of stratocumulus (EUROCS/FIRE).
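For reference, the generic EDMF decomposition of the turbulent flux of a conserved variable φ, which the scheme above instantiates (a standard textbook form, not quoted from the paper):

    \overline{w'\phi'} \;=\; -K\,\frac{\partial \overline{\phi}}{\partial z}
      \;+\; M\left(\phi_u - \overline{\phi}\right),
    \qquad M_{\mathrm{sfc}} \propto w_*,

where K is the eddy diffusivity, M the (kinematic) mass flux of the single updraft, and φ_u the updraft value of φ; the second relation expresses the surface mass-flux closure in terms of the convective velocity scale w*.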
Distributed parameterization of complex terrain
NASA Astrophysics Data System (ADS)
Band, Lawrence E.
1991-03-01
This paper addresses the incorporation of high-resolution topography, soils, and vegetation information into the simulation of land surface processes in atmospheric circulation models (ACMs). Recent work has concentrated on detailed representation of one-dimensional exchange processes, implicitly assuming surface homogeneity over the atmospheric grid cell. Two approaches could be taken to incorporate heterogeneity: integration of a surface model over distributed, discrete portions of the landscape, or over a distribution function of the model parameters. However, the computational burden and parameter-intensive nature of current land surface models in ACMs limits the number of independent model runs and parameterizations that are feasible for operational purposes. Therefore, simplifications in the representation of the vertical exchange processes may be necessary to incorporate the effects of landscape variability and the horizontal divergence of energy and water. The strategy is then to trade off the detail and rigor of point exchange calculations for the ability to repeat those calculations over extensive, complex terrain. It is clear that the parameterization process for this approach must be automated, such that large spatial databases collected from remotely sensed images, digital terrain models, and digital maps can be efficiently summarized and transformed into the appropriate parameter sets. Ideally, the landscape should be partitioned into surface units that maximize between-unit variance while minimizing within-unit variance, although it is recognized that some level of surface heterogeneity will be retained at all scales. Therefore, the geographic data processing necessary to automate the distributed parameterization should be able to estimate or predict parameter distributional information within each surface unit.
NASA Astrophysics Data System (ADS)
Rosolem, R.; Rahman, M.; Kollet, S. J.; Wagener, T.
2017-12-01
Understanding the impacts of land cover and climate changes on terrestrial hydrometeorology is important across a range of spatial and temporal scales. Earth System Models (ESMs) provide a robust platform for evaluating these impacts. However, current ESMs generally lack the representation of key hydrological processes (e.g., preferential water flow and direct interactions with aquifers). The typical "free drainage" conceptualization of land models can misrepresent the magnitude of those interactions, consequently affecting the exchange of energy and water at the surface as well as estimates of groundwater recharge. Recent studies show the benefits of explicitly simulating the interactions between subsurface and surface processes in similar models. However, such parameterizations are often computationally demanding, resulting in limited application for large/global-scale studies. Here, we take a different approach in developing a novel parameterization for groundwater dynamics. Instead of directly adding another complex process to an established land model, we examine a set of comprehensive experimental scenarios using a robust and established three-dimensional hydrological model to develop a simpler parameterization that represents aquifer-to-land-surface interactions. The main goal of our developed parameterization is to simultaneously maximize the computational gain (i.e., "efficiency") while minimizing simulation errors in comparison to the full 3D model (i.e., "robustness") to allow for easy implementation in ESMs globally. Our study focuses primarily on understanding the dynamics of both groundwater recharge and discharge. Preliminary results show that our proposed approach significantly reduces the computational demand while model deviations from the full 3D model remain small for these processes.
NASA Astrophysics Data System (ADS)
Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu
2018-03-01
Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I_P′), and velocity-impedance-II (α″, β″ and I_S′). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density profile can be over-estimated, under-estimated or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.
NASA Astrophysics Data System (ADS)
Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.
2009-10-01
A method is presented to parameterize the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources into large-scale models. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on the atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, operating via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact on atmospheric ozone of aircraft NOx emissions are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the north Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization to transport emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during the plume dissipation. Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be introduced in large-scale models, such as ship exhausts, provided that the plume life cycle, the type of emissions, and the major reactions involved in the nonlinear chemical systems can be determined with sufficient accuracy.
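A minimal sketch of the fuel-tracer bookkeeping described above, under assumed parameter values (TAU, K_O3 and the single-box setting are illustrative, not the published coefficients): emissions enter a plume tracer that decays with a characteristic lifetime, releasing NOx to the grid scale, while an effective rate stands in for in-plume ozone chemistry.

```python
import numpy as np

TAU = 3.0 * 3600.0   # assumed plume lifetime [s]
K_O3 = 1.0e-6        # assumed effective in-plume O3 loss rate [1/s]

def plume_step(fuel, nox_grid, o3, emission, dt):
    """Advance plume tracer, grid-scale NOx and O3 by one time step dt [s]."""
    released = fuel * (1.0 - np.exp(-dt / TAU))  # plume NOx handed to grid scale
    fuel = fuel + emission * dt - released       # tracer transported in plume form
    nox_grid = nox_grid + released               # conversion conserves NOx mass
    if fuel > 0.0:                               # effective in-plume O3 destruction
        o3 = o3 * np.exp(-K_O3 * dt)
    return fuel, nox_grid, o3
```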
NASA Astrophysics Data System (ADS)
Basarab, B.; Fuchs, B.; Rutledge, S. A.
2013-12-01
Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare to observed flash rates. For the 6 June storm, a preliminary analysis of aircraft observations of storm inflow and outflow is presented in order to place flash rates (and other lightning statistics) in the context of storm chemistry. An approach to a possibly improved LNOx parameterization scheme using different lightning metrics such as flash area will be discussed.
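For context, a hedged sketch of the kind of updraft-based flash-rate parameterization evaluated above. Cloud-resolving chemistry studies often use a Price-and-Rind-type power law in the maximum vertical velocity; the coefficient and exponent below are illustrative assumptions rather than values fitted to the DC3 storms.

```python
def diagnosed_flash_rate(w_max, a=5.0e-6, b=4.5):
    """Diagnose a total flash rate [flashes/min] from max updraft speed [m/s]."""
    return a * w_max ** b

# Illustrative use: diagnose a rate for a severe storm with a 50 m/s peak
# updraft, to be compared against an LMA-derived observed flash rate.
print(diagnosed_flash_rate(50.0))
```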
Dynamically consistent parameterization of mesoscale eddies. Part III: Deterministic approach
NASA Astrophysics Data System (ADS)
Berloff, Pavel
2018-07-01
This work continues development of dynamically consistent parameterizations for representing mesoscale eddy effects in non-eddy-resolving and eddy-permitting ocean circulation models and focuses on the classical double-gyre problem, in which the main dynamic eddy effects maintain the eastward jet extension of the western boundary currents and its adjacent recirculation zones via the eddy backscatter mechanism. Despite its fundamental importance, this mechanism remains poorly understood; in this paper we first study it and then propose and test a novel parameterization of it. We start by decomposing the reference eddy-resolving flow solution into the large-scale and eddy components defined by spatial filtering, rather than by the Reynolds decomposition. Next, we find that the eastward jet and its recirculations are robustly present not only in the large-scale flow itself, but also in the rectified time-mean eddies, and in the transient rectified eddy component, which consists of highly anisotropic ribbons of opposite-sign potential vorticity anomalies straddling the instantaneous eastward jet core and being responsible for its continuous amplification. The transient rectified component is separated from the flow by a novel remapping method. We hypothesize that the above three components of the eastward jet are ultimately driven by the small-scale transient eddy forcing via the eddy backscatter mechanism, rather than by the mean eddy forcing and large-scale nonlinearities. We verify this hypothesis by progressively turning down the backscatter and observing the induced flow anomalies. The backscatter analysis leads us to formulate the key eddy parameterization hypothesis: in an eddy-permitting model, at least partially resolved eddy backscatter can be significantly amplified to improve the flow solution. Such amplification is a simple and novel eddy parameterization framework, implemented here in terms of local, deterministic flow roughening controlled by a single parameter. We test the parameterization skills in a hierarchy of non-eddy-resolving and eddy-permitting modifications of the original model and demonstrate that it can indeed be highly efficient for restoring the eastward jet extension and its adjacent recirculation zones. The new deterministic parameterization framework not only combines remarkable simplicity with good performance but is also dynamically transparent; it therefore provides a powerful alternative to the common eddy diffusion and emerging stochastic parameterizations.
NASA Astrophysics Data System (ADS)
Grell, G. A.; Freitas, S. R.; Olson, J.; Bela, M.
2017-12-01
We will start by providing a summary of the latest cumulus parameterization modeling efforts at NOAA's Earth System Research Laboratory (ESRL), on both regional and global scales. The physics package includes a scale-aware parameterization of subgrid cloudiness feedback to radiation (coupled PBL, microphysics, radiation, shallow and congestus type convection), the stochastic Grell-Freitas (GF) scale- and aerosol-aware convective parameterization, and an aerosol-aware microphysics package. GF is based on a stochastic approach originally implemented by Grell and Devenyi (2002) and described in more detail in Grell and Freitas (2014, ACP). It was expanded to include PDFs for vertical mass flux, as well as modifications to improve the diurnal cycle. This physics package will be used on different scales, spanning global to cloud resolving, to look at the impact on scalar transport and numerical weather prediction.
Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE
NASA Astrophysics Data System (ADS)
Schneider, Uwe; Hälg, Roger A.; Baiocco, Giorgio; Lomax, Tony
2016-08-01
The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate, besides the dose, also the energy spectra, in order to obtain quantities which could be a measure of the biological effectiveness and test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was Monte Carlo simulated using GEANT. Based on the simulated neutron spectra for three different proton beam energies a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with a satisfying precision up to 85 cm away from the proton pencil beam when compared to the results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factors and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy. The pencil beam algorithm has been extended using the developed parameterizations in order to calculate the neutron energy, quality factor and RBE.
Saa, Pedro; Nielsen, Lars K.
2015-01-01
Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. In particular, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies, provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. GRASP integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetic space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > -2 kJ/mol), a transition region (-2 > ΔGr > -20 kJ/mol) and a constant elasticity region (ΔGr < -20 kJ/mol). We also applied this framework to model more complex kinetic behaviours, such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach appropriately described the kinetic behaviour of these enzymes and provided insights into the particular features underpinning the observed kinetics. Overall, this framework will enable systematic parameterization and sampling of enzymatic reactions. PMID:25874556
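A worked illustration of the three elasticity regions reported above, using the simplest reversible mass-action rate v = k*(S - P/Keq) as a stand-in for GRASP's full models (an assumption made for illustration): its scaled substrate elasticity is 1/(1 - exp(ΔGr/RT)), which is steep near equilibrium and saturates to about 1 far from it.

```python
import numpy as np

RT = 2.577  # kJ/mol at roughly 310 K
for dG in (-0.5, -2.0, -10.0, -20.0, -40.0):       # reaction Gibbs energy [kJ/mol]
    elasticity = 1.0 / (1.0 - np.exp(dG / RT))     # scaled substrate elasticity
    print(f"dG = {dG:6.1f} kJ/mol -> elasticity = {elasticity:.2f}")
```

For these values the elasticity falls from roughly 5.7 near equilibrium to essentially 1 below -20 kJ/mol, mirroring the steep, transition and constant regions described in the abstract.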
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuang, Zhiming; Gentine, Pierre
Over the duration of this project, we have made the following advances. 1) We have developed a novel approach to obtain a Lagrangian view of convection from high-resolution numerical models through Lagrangian tracking. This approach nicely complements the more traditionally used Eulerian statistics. We have applied this approach to a range of problems. 2) We have looked into improving and extending our parameterizations based on stochastically entraining parcels, developed previously for shallow convection. 3) This grant also supported our effort on a paper where we compared cumulus parameterizations and cloud resolving models in terms of their linear response functions. This work will help the community to better evaluate and develop cumulus parameterizations. 4) We have applied Lagrangian tracking to shallow convection, and to deep convection with and without convective organization, to better characterize their dynamics and the transition between them. 5) We have devised a novel way of using Lagrangian tracking to identify cold pools, an area identified as of great interest by the ASR community. Our algorithm has a number of advantages and in particular can handle merging cold pools more gracefully than existing techniques. 6) We demonstrated that we can, for the first time, correctly reproduce both the diurnal and seasonal cycles of the hydrologic cycle in the Amazon using a strategy that explicitly represents convection but parameterizes the large-scale circulation. In addition, we showed that the main cause of the wet season is the presence of an early morning fog, which insulates the surface from top-of-atmosphere shortwave radiation. In essence this fog makes the day shorter because radiation cannot penetrate to the surface in the early morning. This is why all fluxes are reduced in the wet season compared to the dry season. 7) We have investigated the life cycle of cold pools and the role of surface diabatic heating. We show that surface heating can kill cold pools and reduce the number of large cold pools and the organization of convection. The effect is quite dramatic over land, where the entire distribution of cold pools is modified, and the cold pools are much warmer and more humid with surface diabatic heating below them. The PI and the co-PI continue to work together on parameterization of cold pools.
Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.
Kamesh, Reddi; Rani, K Yamuna
2016-09-01
A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
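A minimal sketch of the kind of orthonormal input-trajectory parameterization described above (the basis shown, a constant plus sine modes, is an assumption for illustration; the paper's exact basis may differ): the optimizer then searches over a few coefficients instead of a finely discretized control profile.

```python
import numpy as np

def control_profile(t, tf, coeffs):
    """Reconstruct the control u(t) on [0, tf] from orthonormal-basis coefficients."""
    u = coeffs[0] * np.ones_like(t)                    # constant mode
    for k, c in enumerate(coeffs[1:], start=1):
        u = u + c * np.sqrt(2.0) * np.sin(k * np.pi * t / tf)  # sine modes
    return u

t = np.linspace(0.0, 1.0, 101)
u = control_profile(t, 1.0, [0.5, 0.2, -0.1])  # three numbers define the whole trajectory
```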
Parameterized reduced order models from a single mesh using hyper-dual numbers
NASA Astrophysics Data System (ADS)
Brake, M. R. W.; Fike, J. A.; Topping, S. D.
2016-06-01
In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, potentially could necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers, and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is two-fold: accuracy of the derivatives to machine precision, and the need to generate only a single mesh of the system of interest. The theory is applied to a stepped beam system in order to demonstrate proof of concept. The results demonstrate that the hyper-dual number multivariate parameterization of geometric variations, which are largely neglected in the literature, is accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the foundation to create a parameterized reduced order model from a single mesh is expected to reduce dramatically the necessary time to analyze multiple realizations of a component's possible geometry.
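A minimal, self-contained sketch of hyper-dual arithmetic (a simplified implementation, not the authors' code): a number f + f1*e1 + f2*e2 + f12*e1*e2 with e1^2 = e2^2 = 0 propagates exact first and second derivatives through a computation, which is the machine-precision property exploited above.

```python
import math

class HyperDual:
    """x = f + f1*e1 + f2*e2 + f12*e1*e2, with e1**2 = e2**2 = 0."""
    def __init__(self, f, f1=0.0, f2=0.0, f12=0.0):
        self.f, self.f1, self.f2, self.f12 = f, f1, f2, f12

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f + o.f, self.f1 + o.f1,
                         self.f2 + o.f2, self.f12 + o.f12)

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f * o.f,
                         self.f1 * o.f + self.f * o.f1,
                         self.f2 * o.f + self.f * o.f2,
                         self.f12 * o.f + self.f1 * o.f2
                         + self.f2 * o.f1 + self.f * o.f12)

def sin(h):
    """Hyper-dual sine, from the truncated Taylor expansion about h.f."""
    s, c = math.sin(h.f), math.cos(h.f)
    return HyperDual(s, c * h.f1, c * h.f2, c * h.f12 - s * h.f1 * h.f2)

# Exact d2/dx2 of f(x) = x*sin(x) at x = 1: seed both e1 and e2 with 1.
x = HyperDual(1.0, 1.0, 1.0, 0.0)
y = x * sin(x)
print(y.f12)  # equals 2*cos(1) - sin(1), to machine precision
```

Because there is no differencing step, there is no step-size tuning and no subtractive cancellation, in contrast to the finite difference formulations mentioned above.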
NASA Technical Reports Server (NTRS)
Oshman, Yaakov; Markley, Landis
1998-01-01
A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
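A minimal sketch of the exponentially autocorrelated (first-order Gauss-Markov) acceleration model mentioned above, in discrete-time form; the correlation time and steady-state standard deviation are illustrative assumptions.

```python
import numpy as np

TAU, SIGMA, DT = 100.0, 1e-4, 1.0   # correlation time [s], std [rad/s^2], step [s]
phi = np.exp(-DT / TAU)             # discrete state-transition factor
q = SIGMA**2 * (1.0 - phi**2)       # driving-noise variance per step

rng = np.random.default_rng(0)
alpha = np.zeros(3)                 # angular-acceleration state [rad/s^2]
for _ in range(10):
    alpha = phi * alpha + rng.normal(0.0, np.sqrt(q), size=3)
```

In the filter, phi enters the state-transition matrix for the acceleration components and q their process-noise covariance, which is how the uncertain spacecraft dynamic model is avoided.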
Parameterization of single-scattering properties of snow
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Kokhanovsky, Alexander; Guyot, Gwennole; Jourdan, Olivier; Nousiainen, Timo
2015-04-01
Snow consists of non-spherical ice grains of various shapes and sizes, which are surrounded by air and sometimes covered by films of liquid water. Still, in many studies, homogeneous spherical snow grains have been assumed in radiative transfer calculations, due to the convenience of using Mie theory. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat scattering phase function typical of deformed non-spherical particles, this is still a rather ad-hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ=0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function as functions of the size parameter and the real and imaginary parts of the refractive index. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons with spheres and distorted Koch fractals. Further evaluation and validation of the proposed approach against (e.g.) bidirectional reflectance and polarization measurements for snow is planned. At any rate, it seems safe to assume that the OHC selected here provides a substantially better basis for representing the single-scattering properties of snow than spheres do. Moreover, the parameterizations developed here are analytic and simple to use, and they can also be applied to the treatment of dirty snow following (e.g.) the approach of Kokhanovsky (The Cryosphere, 7, 1325-1331, doi:10.5194/tc-7-1325-2013, 2013). This should make them an attractive option for use in radiative transfer applications involving snow.
Data-driven parameterization of the generalized Langevin equation
Lei, Huan; Baker, Nathan A.; Li, Xiantao
2016-11-29
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grain variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
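A minimal sketch of the lowest-order case, with assumed parameters throughout: a first-order rational kernel K(s) = k0/(s + g) in the Laplace domain corresponds to an exponential memory kernel, and the GLE then embeds in an extended Markovian system with one auxiliary variable; the noise amplitude is chosen so the fluctuation-dissipation theorem holds for unit temperature.

```python
import numpy as np

K0, G, DT, M = 1.0, 2.0, 1e-3, 1.0   # kernel amplitude, decay rate, step, mass
rng = np.random.default_rng(1)

x, v, z = 1.0, 0.0, 0.0              # position, velocity, auxiliary memory variable
for _ in range(10_000):
    force = -x                        # harmonic potential U(x) = x**2 / 2
    v += DT * (force / M + z)
    x += DT * v
    # dz = (-G*z - K0*v) dt + sqrt(2*K0*G*kT) dW, with kT = 1 assumed here;
    # this reproduces the exponential kernel K(t) = K0 * exp(-G*t).
    z += DT * (-G * z - K0 * v) + np.sqrt(2.0 * K0 * G * DT) * rng.normal()
```

Higher-order rational approximations add further auxiliary variables in the same way, one per pole of the fitted kernel.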
Estimating top-of-atmosphere thermal infrared radiance using MERRA-2 atmospheric data
NASA Astrophysics Data System (ADS)
Kleynhans, Tania; Montanaro, Matthew; Gerace, Aaron; Kanan, Christopher
2017-05-01
Thermal infrared satellite images have been widely used in environmental studies. However, satellites have limited temporal resolution, e.g., 16-day Landsat or 1- to 2-day Terra MODIS. This paper investigates the use of the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis data product, produced by NASA's Global Modeling and Assimilation Office (GMAO), to predict global top-of-atmosphere (TOA) thermal infrared radiance. The high temporal resolution of the MERRA-2 data product presents opportunities for novel research and applications. Various methods were applied to estimate TOA radiance from MERRA-2 variables, namely (1) a parameterized physics-based method, (2) linear regression models and (3) non-linear Support Vector Regression. Model prediction accuracy was evaluated using temporally and spatially coincident Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared data as reference data. This research found that Support Vector Regression with a radial basis function kernel produced the lowest error rates. Sources of errors are discussed and defined. Further research is currently being conducted to train deep learning models to predict TOA thermal radiance.
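A small sketch of the best-performing setup named above, RBF-kernel Support Vector Regression, using scikit-learn; the synthetic arrays stand in for MERRA-2 predictor variables and coincident MODIS TOA radiances, and the hyperparameters are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                             # stand-in MERRA-2 predictors
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)   # stand-in TOA radiance

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:400], y[:400])                               # train on coincident samples
rmse = np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2))
print(f"hold-out RMSE: {rmse:.3f}")
```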
NASA Astrophysics Data System (ADS)
Yoshida, Yuki; Karakida, Ryo; Okada, Masato; Amari, Shun-ichi
2017-04-01
Weight normalization, a newly proposed optimization method for neural networks by Salimans and Kingma (2016), decomposes the weight vector of a neural network into a radial length and a direction vector, and the decomposed parameters follow their steepest descent update. They reported that learning with weight normalization achieves better converging speed in several tasks, including image recognition and reinforcement learning, than learning with the conventional parameterization. However, it remains theoretically unexplained how weight normalization improves the converging speed. In this study, we applied a statistical mechanical technique to analyze on-line learning in single-layer linear and nonlinear perceptrons with weight normalization. By deriving order parameters of the learning dynamics, we confirmed quantitatively that weight normalization realizes fast converging speed by automatically tuning the effective learning rate, regardless of the nonlinearity of the neural network. This property is realized when the initial value of the radial length is near the global minimum; therefore, our theory suggests that it is important to choose the initial value of the radial length appropriately when using weight normalization.
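A minimal sketch of the reparameterization analyzed above, w = g * v / ||v||, with steepest descent applied to (g, v); the learning rate is illustrative.

```python
import numpy as np

def weight(g, v):
    """Compose the effective weight vector from radial length g and direction v."""
    return g * v / np.linalg.norm(v)

def wn_step(g, v, grad_w, lr=0.1):
    """One steepest-descent step on (g, v), given the gradient w.r.t. w."""
    norm = np.linalg.norm(v)
    grad_g = grad_w @ v / norm                          # chain rule, radial part
    grad_v = (g / norm) * (grad_w - grad_g * v / norm)  # component orthogonal to v
    return g - lr * grad_g, v - lr * grad_v
```

Because grad_v is orthogonal to v, updates can only grow ||v||, which shrinks the effective step size g/||v|| over training; this is the automatic learning-rate tuning that the order-parameter analysis quantifies.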
Mixing Efficiency in the Ocean.
Gregg, M C; D'Asaro, E A; Riley, J J; Kunze, E
2018-01-03
Mixing efficiency is the ratio of the net change in potential energy to the energy expended in producing the mixing. Parameterizations of efficiency and of related mixing coefficients are needed to estimate diapycnal diffusivity from measurements of the turbulent dissipation rate. Comparing diffusivities from microstructure profiling with those inferred from the thickening rate of four simultaneous tracer releases has verified, within observational accuracy, 0.2 as the mixing coefficient over a 30-fold range of diapycnal diffusivities. Although some mixing coefficients can be estimated from pycnocline measurements, at present mixing efficiency must be obtained from channel flows, laboratory experiments, and numerical simulations. Reviewing the different approaches demonstrates that estimates and parameterizations for mixing efficiency and coefficients are not converging beyond the at-sea comparisons with tracer releases, leading to recommendations for a community approach to address this important issue.
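As a worked example of how the verified coefficient enters practice, an Osborn-type relation K_rho = Gamma * epsilon / N^2 converts a measured dissipation rate into a diapycnal diffusivity; the epsilon and N^2 values below are illustrative.

```python
GAMMA = 0.2     # mixing coefficient verified against the tracer releases
EPS = 1.0e-9    # measured turbulent dissipation rate [W/kg] (illustrative)
N2 = 1.0e-5     # buoyancy frequency squared [1/s^2] (illustrative)

K_rho = GAMMA * EPS / N2   # diapycnal diffusivity [m^2/s]
print(K_rho)               # 2e-05 m^2/s for these values
```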
Efficient statistical mapping of avian count data
Royle, J. Andrew; Wikle, C.K.
2005-01-01
We develop a spatial modeling framework for count data that is efficient to implement in high-dimensional prediction problems. We consider spectral parameterizations for the spatially varying mean of a Poisson model. The spectral parameterization of the spatial process is very computationally efficient, enabling effective estimation and prediction in large problems using Markov chain Monte Carlo techniques. We apply this model to creating avian relative abundance maps from North American Breeding Bird Survey (BBS) data. Variation in the ability of observers to count birds is modeled as spatially independent noise, resulting in over-dispersion relative to the Poisson assumption. This approach represents an improvement over existing approaches used for spatial modeling of BBS data which are either inefficient for continental scale modeling and prediction or fail to accommodate important distributional features of count data thus leading to inaccurate accounting of prediction uncertainty.
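A minimal sketch of the spectral idea described above (the 1-D grid, mode damping and coefficient draws are assumptions made for illustration): the log of the spatially varying Poisson mean is expanded in a Fourier basis, so the high-dimensional spatial field is governed by a modest number of spectral coefficients that MCMC can sample efficiently.

```python
import numpy as np

n = 64                                  # 1-D spatial grid, for simplicity
k = np.fft.fftfreq(n)                   # spatial frequencies of the basis
rng = np.random.default_rng(0)

# spectral coefficients with variance decaying at high frequency
coef = rng.normal(size=n) * np.exp(-(k * n / 8.0) ** 2)
log_mu = np.real(np.fft.ifft(coef)) * np.sqrt(n)   # spatially varying log-mean
counts = rng.poisson(np.exp(log_mu))               # simulated counts given the field
```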
NASA Astrophysics Data System (ADS)
Xu, Gang; Li, Ming; Mourrain, Bernard; Rabczuk, Timon; Xu, Jinlan; Bordas, Stéphane P. A.
2018-01-01
In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain by a simple boundary, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations including Bézier extraction and subdivision are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information generation of quadrilateral decomposition, the optimal placement of interior Bézier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch partition with high quality. Finally, after the imposition of C1/G1-continuity constraints on the interface of neighboring Bézier patches with respect to each quad in the quadrangulation, the high-quality Bézier patch parameterization is obtained by a C1-constrained local optimization method to achieve uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples which are compared to results obtained by the skeleton-based parameterization approach.
An interactive local flattening operator to support digital investigations on artwork surfaces.
Pietroni, Nico; Massimiliano, Corsini; Cignoni, Paolo; Scopigno, Roberto
2011-12-01
Analyzing either high-frequency shape detail or any other 2D fields (scalar or vector) embedded over a 3D geometry is a complex task, since detaching the detail from the overall shape can be tricky. An alternative approach is to move to the 2D space, reducing shape reasoning to easier image processing techniques. In this paper we propose a novel framework for the analysis of 2D information distributed over 3D geometry, based on a locally smooth parametrization technique that allows us to treat local 3D data in terms of image content. The proposed approach has been implemented as a sketch-based system that allows the user to design, with a few gestures, a set of (possibly overlapping) parameterizations of rectangular portions of the surface. We demonstrate that, due to the locality of the parametrization, the distortion is under an acceptable threshold, while discontinuities can be avoided since the parametrized geometry is always homeomorphic to a disk. We show the effectiveness of the proposed technique to solve specific Cultural Heritage (CH) tasks: the analysis of chisel marks over the surface of an unfinished sculpture and the local comparison of multiple photographs mapped over the surface of an artwork. For this very difficult task, we believe that our framework and the corresponding tool are the first steps toward a computer-based shape reasoning system, able to support CH scholars with a medium they are more used to. © 2011 IEEE
Two Approaches to Calibration in Metrology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark
2014-04-01
Inferring mathematical relationships with quantified uncertainty from measurement data is common to computational science and metrology. Sufficient knowledge of measurement process noise enables Bayesian inference. Otherwise, an alternative approach is required, here termed compartmentalized inference, because collection of uncertain data and model inference occur independently. Bayesian parameterized model inference is compared to a Bayesian-compatible compartmentalized approach for ISO-GUM compliant calibration problems in renewable energy metrology. In either approach, model evidence can help reduce model discrepancy.
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate the toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate, and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least-squares approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
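To make the MCMC idea concrete, here is a minimal random-walk Metropolis sketch on a toy one-parameter model; the prior, likelihood and "internal dose" data are invented purely for illustration and are not bromate PBTK values.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 0.5, size=20)      # toy "measured internal doses"

def log_post(theta):
    log_prior = -0.5 * ((theta - 1.0) / 2.0) ** 2          # N(1, 2^2) prior
    log_lik = -0.5 * np.sum(((data - theta) / 0.5) ** 2)   # N(theta, 0.5^2) likelihood
    return log_prior + log_lik

theta, chain = 1.0, []
for _ in range(5000):
    proposal = theta + 0.1 * rng.normal()                  # random-walk proposal
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal                                   # Metropolis accept
    chain.append(theta)
posterior = np.array(chain[1000:])        # discard burn-in; spread = parameter uncertainty
```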
Lamberton, Poppy H L; Cheke, Robert A; Walker, Martin; Winskill, Peter; Crainey, J Lee; Boakye, Daniel A; Osei-Atweneboana, Mike Y; Tirados, Iñaki; Wilson, Michael D; Tetteh-Kumah, Anthony; Otoo, Sampson; Post, Rory J; Basañez, María-Gloria
2016-08-05
Vector-biting behaviour is important for vector-borne disease (VBD) epidemiology. The proportion of blood meals taken on humans (the human blood index, HBI), is a component of the biting rate per vector on humans in VBD transmission models. Humans are the definitive host of Onchocerca volvulus, but the simuliid vectors feed on a range of animals and HBI is a key indicator of the potential for human onchocerciasis transmission. Ghana has a diversity of Simulium damnosum complex members, which are likely to vary in their HBIs, an important consideration for parameterization of onchocerciasis control and elimination models. Host-seeking and ovipositing S. damnosum (sensu lato) (s.l.) were collected from seven villages in four Ghanaian regions. Taxa were morphologically and molecularly identified. Blood meals from individually stored blackfly abdomens were used for DNA profiling, to identify previous host choice. Household, domestic animal, wild mammal and bird surveys were performed to estimate the density and diversity of potential blood hosts of blackflies. A total of 11,107 abdomens of simuliid females (which would have obtained blood meal(s) previously) were tested, with blood meals successfully amplified in 3,772 (34 %). A single-host species was identified in 2,857 (75.7 %) of the blood meals, of which 2,162 (75.7 %) were human. Simulium soubrense Beffa form, S. squamosum C and S. sanctipauli Pra form were the most anthropophagic (HBI = 0.92, 0.86 and 0.70, respectively); S. squamosum E, S. yahense and S. damnosum (sensu stricto) (s.s.)/S. sirbanum were the most zoophagic (HBI = 0.44, 0.53 and 0.63, respectively). The degree of anthropophagy decreased (but not statistically significantly) with increasing ratio of non-human/human blood hosts. Vector to human ratios ranged from 139 to 1,198 blackflies/person. DNA profiling can successfully identify blood meals from host-seeking and ovipositing blackflies. Host choice varies according to sibling species, season and capture site/method. There was no evidence that HBI is vector and/or host density dependent. Transmission breakpoints will vary among locations due to differing cytospecies compositions and vector abundances.
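The HBI arithmetic reported above can be checked directly; the counts below are those given in the abstract.

```python
single_host = 2857    # blood meals with a single host species identified
human = 2162          # of those, meals taken on humans
hbi = human / single_host
print(f"overall HBI = {hbi:.3f}")   # 0.757, matching the reported 75.7%
```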
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved sub grid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
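A minimal sketch contrasting the two perturbed-parameter designs described above (the Gaussian distribution, AR(1) form and correlation are illustrative assumptions, not the EPPES-estimated values): fixed draws held constant per ensemble member versus a stochastically varying parameter along the forecast.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_steps = 10, 100
mu, sd, rho = 1.0, 0.2, 0.95          # parameter mean, spread, autocorrelation

fixed = rng.normal(mu, sd, size=n_members)          # constant over the forecast

varying = np.empty((n_members, n_steps))            # stochastically varying scheme
varying[:, 0] = rng.normal(mu, sd, size=n_members)
for t in range(1, n_steps):
    varying[:, t] = (mu + rho * (varying[:, t - 1] - mu)
                     + np.sqrt(1.0 - rho**2) * sd * rng.normal(size=n_members))
```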
Strategy for long-term 3D cloud-resolving simulations over the ARM SGP site and preliminary results
NASA Astrophysics Data System (ADS)
Lin, W.; Liu, Y.; Song, H.; Endo, S.
2011-12-01
Parametric representations of cloud/precipitation processes continue to be required in climate simulations with increasingly high spatial resolution or with emerging adaptive-mesh frameworks, and it is becoming ever more critical that such parameterizations be scale-aware. Continuous cloud measurements at DOE's ARM sites have provided a strong observational basis for novel cloud parameterization research at various scales. Despite significant progress in our observational ability, there are important cloud-scale physical and dynamical quantities that are either not currently observable or insufficiently sampled. To complement the long-term ARM measurements, we have explored an optimal strategy to carry out long-term 3-D cloud-resolving simulations over the ARM SGP site using the Weather Research and Forecasting (WRF) model with multi-domain nesting. The factors that are considered to have important influences on the simulated cloud fields include domain size, spatial resolution, model top, forcing data set, model physics and the growth of model errors. Hydrometeor advection, which may play a significant role in hydrological processes within the observational domain but is often lacking, and the limitations imposed by domain-wide uniform forcing in conventional cloud-system-resolving model simulations are at least partly accounted for in our approach. Conventional and probabilistic verification approaches are employed first for selected cases to optimize the model's capability of faithfully reproducing the observed means and statistical distributions of cloud-scale quantities. This then forms the basis of our setup for long-term cloud-resolving simulations over the ARM SGP site. The model results will facilitate parameterization research, as well as understanding and dissecting parameterization deficiencies in climate models.
Gsflow-py: An integrated hydrologic model development tool
NASA Astrophysics Data System (ADS)
Gardner, M.; Niswonger, R. G.; Morton, C.; Henson, W.; Huntington, J. L.
2017-12-01
Integrated hydrologic modeling encompasses a vast number of processes and specifications, variable in time and space, and development of model datasets can be arduous. Model input construction techniques have not been formalized or made easily reproducible. Creating the input files for integrated hydrologic models (IHMs) requires complex GIS processing of raster and vector datasets from various sources. Developing stream network topology that is consistent with the model-resolution digital elevation model is important for robust simulation of surface water and groundwater exchanges. Distribution of meteorologic parameters over the model domain is difficult in complex terrain at the model resolution scale, but is necessary to drive realistic simulations. Historically, development of input data for IHMs has required extensive GIS and computer programming expertise, which has restricted the use of IHMs to research groups with available financial, human, and technical resources. Here we present a series of Python scripts that provide a formalized technique for the parameterization and development of integrated hydrologic model inputs for GSFLOW. With some modifications, this process could be applied to any regular-grid hydrologic model. This Python toolkit automates many of the necessary and laborious processes of parameterization, including stream network development and cascade routing, land coverages, and meteorological distribution over the model domain.
NASA Astrophysics Data System (ADS)
Garrett, T. J.; Alva, S.; Glenn, I. B.; Krueger, S. K.
2015-12-01
There are two possible approaches for parameterizing sub-grid cloud dynamics in a coarser grid model. The most common is to use a fine scale model to explicitly resolve the mechanistic details of clouds to the best extent possible, and then to parameterize the resulting cloud-state behaviors for the coarser grid. A second is to invoke physical intuition and some very general theoretical principles from equilibrium statistical mechanics. This approach avoids any requirement to resolve time-dependent processes in order to arrive at a suitable solution. The second approach is widely used elsewhere in the atmospheric sciences: for example, the Planck function for blackbody radiation is derived this way, where no mention is made of the complexities of modeling a large ensemble of time-dependent radiation-dipole interactions in order to obtain the "grid-scale" spectrum of thermal emission by the blackbody as a whole. We find that this statistical approach may be equally suitable for modeling convective clouds. Specifically, we make the physical argument that the dissipation of buoyant energy in convective clouds is done through mixing across a cloud perimeter. From thermodynamic reasoning, one might then anticipate that vertically stacked isentropic surfaces are characterized by a power law d ln N / d ln P = -1, where N(P) is the number of clouds of perimeter P. In a Giga-LES simulation of convective clouds within a 100 km square domain we find that such a power law does appear to characterize simulated cloud perimeters along isentropes, provided a sufficiently large cloud sample. The suggestion is that it may be possible to parameterize certain important aspects of cloud state without appealing to computationally expensive dynamic simulations.
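The proposed scaling can be checked numerically: bin cloud perimeters along an isentrope in logarithmic bins and fit the log-log slope. The perimeter sample below is synthetic, constructed so that d ln N / d ln P is close to -1, standing in for the Giga-LES cloud population.

```python
import numpy as np

rng = np.random.default_rng(0)
perims = rng.pareto(1.0, size=20_000) + 1.0   # toy sample with N(P) ~ P**-1 per log bin
bins = np.logspace(0.0, 3.0, 20)              # logarithmic perimeter bins
counts, edges = np.histogram(perims, bins=bins)
centers = np.sqrt(edges[1:] * edges[:-1])     # geometric bin centers
mask = counts > 0
slope = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)[0]
print(f"d ln N / d ln P ~ {slope:.2f}")       # close to -1 for this sample
```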
Multiclass Reduced-Set Support Vector Machines
NASA Technical Reports Server (NTRS)
Tang, Benyang; Mazzoni, Dominic
2006-01-01
There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our method builds on Burges' approach of constructing each reduced-set vector as the pre-image of a vector in kernel space, but we extend this by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.
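To give a concrete sense of the pre-image step, here is a minimal sketch, assuming an RBF kernel and a toy kernel-space expansion, of finding one reduced-set vector with SciPy's differential evolution. This is an illustrative stand-in, not the authors' implementation (which also re-optimizes the SVM weights and bias).

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))            # support vectors (toy data)
alpha = rng.normal(size=30)             # their kernel-space weights
gamma = 0.5

def k(a, b):
    """RBF kernel, vectorized over the first argument."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def objective(z):
    # Minimizing ||Psi - phi(z)||^2 over z is, up to terms constant in z,
    # equivalent to maximizing sum_i alpha_i k(x_i, z), since k(z, z) = 1.
    return -np.dot(alpha, k(X, z))

bounds = [(-3, 3), (-3, 3)]
res = differential_evolution(objective, bounds, seed=2, tol=1e-8)
print("reduced-set vector (pre-image):", res.x)
```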
Barnard, Annette-Christi; Nijhof, Ard M.; Fick, Wilma; Stutzer, Christian; Maritz-Olivier, Christine
2012-01-01
The availability of genome sequencing data in combination with knowledge of expressed genes via transcriptome and proteome data has greatly advanced our understanding of arthropod vectors of disease. Not only have we gained insight into vector biology, but also into their respective vector-pathogen interactions. By combining the strengths of postgenomic databases and reverse genetic approaches such as RNAi, the number of available drug and vaccine targets, as well as the number of transgenes for subsequent transgenic or paratransgenic approaches, has expanded. These are now paving the way for in-field control strategies of vectors and their pathogens. Addressing basic scientific questions, such as understanding the core components of the vector RNAi machinery, is vital, as this allows for the transfer of basic RNAi machinery components into RNAi-deficient vectors, thereby expanding the genetic toolbox of these RNAi-deficient vectors and pathogens. In this review, we focus on the current knowledge of arthropod vector RNAi machinery and the impact of RNAi on understanding vector biology and vector-pathogen interactions for which vector genomic data are available on VectorBase. PMID:24705082
A vector space model approach to identify genetically related diseases.
Sarkar, Indra Neil
2012-01-01
The relationship between diseases and their causative genes can be complex, especially in the case of polygenic diseases. Further exacerbating the challenges in their study is that many genes may be causally related to multiple diseases. This study explored the relationship between diseases through the adaptation of an approach pioneered in the context of information retrieval: vector space models. A vector space model approach was developed that bridges gene-disease knowledge inferred across three knowledge bases: Online Mendelian Inheritance in Man, GenBank, and Medline. The approach was then used to identify potentially related diseases for two target diseases: Alzheimer disease and Prader-Willi syndrome. In both cases, a set of plausible diseases was identified that may warrant further exploration. This study furthers seminal work by Swanson et al. that demonstrated the potential for mining literature for putative correlations. Using a vector space modeling approach, information from both biomedical literature and genomic resources (like GenBank) can be combined towards the identification of putative correlations of interest. To this end, the relevance of the diseases predicted in this study using the vector space modeling approach was validated based on supporting literature. The results suggest that a vector space model approach may be a useful means to identify potential relationships between complex diseases, and thereby enable the coordination of gene-based findings across multiple complex diseases.
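A minimal sketch of the underlying idea: represent each disease as a weighted vector over genes and rank candidate diseases by cosine similarity to a target. The disease names and gene weights below are invented placeholders, not data from OMIM, GenBank, or Medline.

```python
import numpy as np

# Hypothetical disease -> gene-weight profiles (e.g., TF-IDF-style weights)
profiles = {
    "disease_A": {"APP": 0.9, "PSEN1": 0.7, "APOE": 0.5},
    "disease_B": {"APOE": 0.6, "LDLR": 0.8},
    "disease_C": {"SNRPN": 0.9, "NDN": 0.7},
}

genes = sorted({g for p in profiles.values() for g in p})

def to_vector(profile):
    """Embed a sparse gene-weight profile in the shared gene space."""
    return np.array([profile.get(g, 0.0) for g in genes])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

target = to_vector(profiles["disease_A"])
for name, prof in profiles.items():
    if name != "disease_A":
        print(name, round(cosine(target, to_vector(prof)), 3))
```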
Attitude Estimation or Quaternion Estimation?
NASA Technical Reports Server (NTRS)
Markley, F. Landis
2003-01-01
The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.
NASA Astrophysics Data System (ADS)
Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.
2018-01-01
This paper dwells upon a variance parameterization method. Variance, or dimensional, parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated in a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the model tree. In this research the authors consider a parameterization method for machine tooling used in manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method allows tooling design time to be reduced significantly when a part’s geometric parameters change. The method can also reduce the time needed for design and engineering preproduction, in particular for the development of control programs for CNC equipment and coordinate measuring machines, and can automate the release of design and engineering documentation. Variance parameterization also helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
Building integral projection models: a user's guide
Rees, Mark; Childs, Dylan Z; Ellner, Stephen P; Coulson, Tim
2014-01-01
In order to understand how changes in individual performance (growth, survival or reproduction) influence population dynamics and evolution, ecologists are increasingly using parameterized mathematical models. For continuously structured populations, where some continuous measure of individual state influences growth, survival or reproduction, integral projection models (IPMs) are commonly used. We provide a detailed description of the steps involved in constructing an IPM, explaining how to: (i) translate your study system into an IPM; (ii) implement your IPM; and (iii) diagnose potential problems with your IPM. We emphasize how the study organism's life cycle, and the timing of censuses, together determine the structure of the IPM kernel and important aspects of the statistical analysis used to parameterize an IPM using data on marked individuals. An IPM based on population studies of Soay sheep is used to illustrate the complete process of constructing, implementing and evaluating an IPM fitted to sample data. We then look at very general approaches to parameterizing an IPM, using a wide range of statistical techniques (e.g. maximum likelihood methods, generalized additive models, nonparametric kernel density estimators). Methods for selecting models for parameterizing IPMs are briefly discussed. We conclude with key recommendations and a brief overview of applications that extend the basic model. The online Supporting Information provides commented R code for all our analyses. PMID:24219157
Local Minima Free Parameterized Appearance Models
Nguyen, Minh Hoai; De la Torre, Fernando
2010-01-01
Parameterized Appearance Models (PAMs) (e.g., Eigentracking, Active Appearance Models, Morphable Models) are commonly used to model the appearance and shape variation of objects in images. While PAMs have numerous advantages relative to alternative approaches, they have at least two drawbacks. First, they are especially prone to local minima in the fitting process. Second, often few if any of the local minima of the cost function correspond to acceptable solutions. To solve these problems, this paper proposes a method to learn a cost function by explicitly optimizing so that the local minima occur at, and only at, the places corresponding to the correct fitting parameters. To the best of our knowledge, this is the first paper to address the problem of learning a cost function to explicitly model local properties of the error surface to fit PAMs. Synthetic and real examples show improvement in alignment performance in comparison with traditional approaches. PMID:21804750
Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.
Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo
2016-09-01
In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed as impulses undergoing group velocity dispersion while propagating along a multipath neural connection. Mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs is proposed using chirp models. A Particle Swarm Optimization algorithm is used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. The method implementation in Matlab technical computing language is provided online.
NASA Technical Reports Server (NTRS)
Havelund, Klaus
2014-01-01
We present a form of automaton, referred to as data automata, suited for monitoring sequences of data-carrying events, for example emitted by an executing software system. This form of automaton allows states to be parameterized with data, forming named records, which are stored in an efficiently indexed data structure, a form of database. This very explicit approach differs from other automaton-based monitoring approaches. Data automata are also characterized by allowing transition conditions to refer to other parameterized states, and by allowing transition sequences. The presented automaton concept is inspired by rule-based systems, especially the Rete algorithm, which is one of the well-established algorithms for executing rule-based systems. We present an optimized external DSL for data automata, as well as a comparable unoptimized internal DSL (API) in the Scala programming language, in order to compare the two solutions. An evaluation compares these two solutions to several other monitoring systems.
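As a rough illustration of the idea (not the paper's Scala DSLs), the sketch below monitors a sequence of data-carrying events with states parameterized by the data they carry and stored in an indexed structure (a dictionary); it checks that every `open(f)` is eventually matched by `close(f)` and that no file is closed without being open.

```python
# Minimal data-automaton-style monitor: states are parameterized by the
# file name carried in each event and kept in an indexed structure (a dict).
class OpenCloseMonitor:
    def __init__(self):
        self.open_files = {}          # file name -> parameterized state record

    def event(self, name, f):
        if name == "open":
            self.open_files[f] = {"file": f}   # create a parameterized state
        elif name == "close":
            if f not in self.open_files:       # condition refers to stored state
                print(f"violation: close of unopened file {f!r}")
            else:
                del self.open_files[f]

    def end(self):
        for f in self.open_files:
            print(f"violation: file {f!r} never closed")

m = OpenCloseMonitor()
for ev in [("open", "a"), ("open", "b"), ("close", "a"), ("close", "a")]:
    m.event(*ev)                      # reports the second close of "a"
m.end()                               # reports that "b" was never closed
```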
Optimal Variable-Structure Control Tracking of Spacecraft Maneuvers
NASA Technical Reports Server (NTRS)
Crassidis, John L.; Vadali, Srinivas R.; Markley, F. Landis
1999-01-01
An optimal control approach using variable-structure (sliding-mode) tracking for large angle spacecraft maneuvers is presented. The approach expands upon a previously derived regulation result using a quaternion parameterization for the kinematic equations of motion. This parameterization is used since it is free of singularities. The main contribution of this paper is the utilization of a simple term in the control law that produces a maneuver to the reference attitude trajectory in the shortest distance. Also, a multiplicative error quaternion between the desired and actual attitude is used to derive the control law. Sliding-mode switching surfaces are derived using an optimal-control analysis. Control laws are given using either external torque commands or reaction wheel commands. Global asymptotic stability is shown for both cases using a Lyapunov analysis. Simulation results are shown which use the new control strategy to stabilize the motion of the Microwave Anisotropy Probe spacecraft.
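To make the quaternion bookkeeping concrete, the following is a minimal sketch (not the paper's control law) of the multiplicative error quaternion between the actual and reference attitudes, with a sign test on the scalar part as one simple way of commanding the rotation of shortest distance; scalar-last quaternion ordering is assumed.

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product for scalar-last quaternions [x, y, z, w]."""
    pv, pw = p[:3], p[3]
    qv, qw = q[:3], q[3]
    v = pw * qv + qw * pv + np.cross(pv, qv)
    w = pw * qw - np.dot(pv, qv)
    return np.concatenate([v, [w]])

def quat_conj(q):
    """Conjugate (= inverse for a unit quaternion)."""
    return np.array([-q[0], -q[1], -q[2], q[3]])

def error_quaternion(q, q_des):
    """Multiplicative attitude error dq such that q = dq (x) q_des."""
    dq = quat_mult(q, quat_conj(q_des))
    if dq[3] < 0:          # -dq is the same physical attitude; flip to
        dq = -dq           # take the shorter of the two rotations
    return dq

q = np.array([0.0, 0.0, np.sin(0.4), np.cos(0.4)])       # current attitude
q_des = np.array([0.0, 0.0, np.sin(0.1), np.cos(0.1)])   # reference attitude
print(error_quaternion(q, q_des))   # small rotation about z (0.6 rad)
```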
Use of machine learning methods to reduce predictive error of groundwater models.
Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal
2014-01-01
Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameter and data lead to both random and systematic error even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, the instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of the structure in the error of the physically-based model. © 2013, National GroundWater Association.
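A minimal sketch of the error-correction idea, with synthetic data standing in for the real models: fit support vector regression to the residuals of a "physical" model and add the predicted error back to its output. scikit-learn's SVR is used here as an illustrative stand-in for the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(200, 1))                  # e.g., time or location
truth = np.sin(X[:, 0]) + 0.1 * X[:, 0]                # "observed" heads
physical = np.sin(X[:, 0])                             # biased physical model
residual = truth - physical                            # structured model error

# Data-driven model (DDM) learns the structure in the residual
ddm = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, residual)

X_new = np.array([[2.5], [7.0]])
corrected = np.sin(X_new[:, 0]) + ddm.predict(X_new)   # physical + DDM
print(corrected)                                       # close to the truth
print(np.sin(X_new[:, 0]) + 0.1 * X_new[:, 0])
```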
Parametric analysis for matched pair survival data.
Manatunga, A K; Oakes, D
1999-12-01
Hougaard's (1986) bivariate Weibull distribution with positive stable frailties is applied to matched pairs survival data when either or both components of the pair may be censored and covariate vectors may be of arbitrary fixed length. When there is no censoring, we quantify the corresponding gain in Fisher information over a fixed-effects analysis. With the appropriate parameterization, the results take a simple algebraic form. An alternative marginal ("independence working model") approach to estimation is also considered. This method ignores the correlation between the two survival times in the derivation of the estimator, but provides a valid estimate of standard error. It is shown that when the correlation between the two survival times is high and the ratio of the within-pair variability to the between-pair variability of the covariates is high, the fixed-effects analysis captures most of the information about the regression coefficient but the independence working model does badly. When the correlation is low, and/or most of the variability of the covariates occurs between pairs, the reverse is true. The random effects model is applied to data on skin grafts, and on loss of visual acuity among diabetics. In conclusion, some extensions of the methods are indicated and placed in the wider context of generalized estimating equation methodology.
Sustainable thresholds for cooperative epidemiological models.
Barrios, Edwin; Gajardo, Pedro; Vasilieva, Olga
2018-05-22
In this paper, we introduce a method for computing sustainable thresholds for controlled cooperative models described by a system of ordinary differential equations, a property shared by a wide class of compartmental models in epidemiology. The set of sustainable thresholds refers to constraints (e.g., maximal "allowable" number of human infections; maximal "affordable" budget for disease prevention, diagnosis and treatments; etc.), parameterized by thresholds, that can be sustained by applying an admissible control strategy starting at the given initial state and lasting the whole period of the control intervention. This set, determined by the initial state of the dynamical system, virtually provides useful information for more efficient (or cost-effective) decision-making by exhibiting the trade-offs between different types of constraints and allowing the user to assess future outcomes of control measures on transient behavior of the dynamical system. In order to accentuate the originality of our approach and to reveal its potential significance in real-life applications, we present an example relying on the 2013 dengue outbreak in Cali, Colombia, where we compute the set of sustainable thresholds (in terms of the maximal "affordable" budget and the maximal "allowable" levels of active infections among human and vector populations) that could be sustained during the epidemic outbreak. Copyright © 2018 Elsevier Inc. All rights reserved.
An algorithm for deriving core magnetic field models from the Swarm data set
NASA Astrophysics Data System (ADS)
Rother, Martin; Lesur, Vincent; Schachtschneider, Reyko
2013-11-01
In view of an optimal exploitation of the Swarm data set, we have prepared and tested software dedicated to the determination of accurate core magnetic field models and of the Euler angles between the magnetic sensors and the satellite reference frame. The dedicated core field model estimation is derived directly from the GFZ Reference Internal Magnetic Model (GRIMM) inversion and modeling family. The data selection techniques and the model parameterizations are similar to those used for the derivation of the second (Lesur et al., 2010) and third versions of GRIMM, although the use of observatory data is not planned in the framework of the application to Swarm. The regularization technique applied during the inversion process smooths the magnetic field model in time. The algorithm to estimate the Euler angles is also derived from the CHAMP studies. The inversion scheme includes Euler angle determination with a quaternion representation for describing the rotations. It has been built to handle possible weak time variations of these angles. The modeling approach and software were initially validated on a simple, noise-free, synthetic data set and on CHAMP vector magnetic field measurements. We present results of test runs applied to the synthetic Swarm test data set.
A volumetric conformal mapping approach for clustering white matter fibers in the brain
Gupta, Vikash; Prasad, Gautam; Thompson, Paul
2017-01-01
The human brain may be considered a genus-0 shape, topologically equivalent to a sphere. Various methods have been used in the past to transform the brain surface to that of a sphere using the harmonic energy minimization methods employed for cortical surface matching. However, very few methods have studied volumetric parameterization of the brain using a spherical embedding. Volumetric parameterization is typically used for complicated geometric problems like shape matching, morphing and isogeometric analysis. Using conformal mapping techniques, we can establish a bijective mapping between the brain and the topologically equivalent sphere. Our hypothesis is that shape analysis problems are simplified when the shape is defined in an intrinsic coordinate system. Our goal is to establish such a coordinate system for the brain. The efficacy of the method is demonstrated with a white matter clustering problem. Initial results show promise for future investigation of this parameterization technique and its application to other problems in computational anatomy, such as registration and segmentation. PMID:29177252
Parameterized examination in econometrics
NASA Astrophysics Data System (ADS)
Malinova, Anna; Kyurkchiev, Vesselin; Spasov, Georgi
2018-01-01
The paper presents a parameterization of the basic types of exam questions in econometrics. This parameterization is used to automate and facilitate the process of examination, assessment and self-preparation for a large number of students. The proposed parameterization of test questions reduces the time required to author tests and course assignments. It enables tutors to generate a large number of different but equivalent dynamic questions (with dynamic answers) on a certain topic, which are automatically assessed. The presented methods are implemented in DisPeL (Distributed Platform for e-Learning) and provide questions in the areas of filtering and smoothing of time-series data, forecasting, and the building and analysis of single-equation econometric models. Questions also cover elasticity, average and marginal characteristics, product and cost functions, measurement of monopoly power, supply, demand and equilibrium price, consumer and producer surplus, etc. Several approaches are used to enable the required numerical computations in DisPeL: integration of third-party mathematical libraries, development of our own procedures from scratch, and wrapping of our legacy math codes in order to modernize and reuse them.
On constraining pilot point calibration with regularization in PEST
Fienen, M.N.; Muffels, C.T.; Hunt, R.J.
2009-01-01
Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
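The stabilizing role of Tikhonov regularization can be illustrated with a tiny linear example: append preferred-value equations, weighted by a regularization factor, to the observation equations and solve the augmented least-squares system. This is a schematic sketch of the general idea, not PEST's actual algorithm or control variables.

```python
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_pp = 8, 20                       # fewer observations than pilot points
J = rng.normal(size=(n_obs, n_pp))        # sensitivities of obs to pilot points
k_true = rng.normal(size=n_pp)
obs = J @ k_true + 0.01 * rng.normal(size=n_obs)

mu = 1.0                                  # regularization weight
k_pref = np.zeros(n_pp)                   # preferred (e.g., homogeneous) values

# Augmented system: fit observations while pulling estimates toward k_pref,
# which makes the otherwise underdetermined problem solvable
A = np.vstack([J, mu * np.eye(n_pp)])
b = np.concatenate([obs, mu * k_pref])
k_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("ill-posed without regularization:", n_obs, "<", n_pp)
print("regularized estimate norm:", np.linalg.norm(k_est))
```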
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehmann, Benjamin V.; Mao, Yao-Yuan; Becker, Matthew R.
2016-12-28
Empirical methods for connecting galaxies to their dark matter halos have become essential for interpreting measurements of the spatial statistics of galaxies. In this work, we present a novel approach for parameterizing the degree of concentration dependence in the abundance matching method. This new parameterization provides a smooth interpolation between two commonly used matching proxies: the peak halo mass and the peak halo maximal circular velocity. The parameterization controls the amount of dependence of galaxy luminosity on halo concentration at a fixed halo mass; effectively, this interpolation scheme enables abundance matching models to have adjustable assembly bias in the resulting galaxy catalogs. With the new $400\,\mathrm{Mpc}\,h^{-1}$ DarkSky Simulation, whose larger volume provides lower sample variance, we further show that low-redshift two-point clustering and satellite fraction measurements from SDSS can already provide a joint constraint on this concentration dependence and the scatter within the abundance matching framework.
NASA Astrophysics Data System (ADS)
Alzubadi, A. A.
2015-06-01
Nuclear many-body systems are usually described by a mean field built upon a nucleon-nucleon effective interaction. In this work, we investigate ground-state properties of the sulfur isotopes covering a wide range from the line of stability up to the dripline region (30-44S). For this purpose the Hartree-Fock mean-field theory in coordinate space with the Skyrme parameterization SkM* has been utilized. In particular, we calculate the nuclear charge, neutron, proton, and mass densities, the associated radii, the neutron skin thickness and the binding energy. The charge form factors have also been investigated using the SkM*, SkO, SkE, SLy4 and Skxs15 Skyrme parameterizations, and the results obtained with this theoretical approach are compared with the available experimental data. To investigate the potential energy surface as a function of the quadrupole deformation for the sulfur isotopic chain, Skyrme-Hartree-Fock-Bogoliubov theory has been adopted with the SLy4 parameterization.
Gladish, James C; Duncan, Donald D
2017-01-20
Herein, we discuss the remote assessment of the subwavelength organizational structure of a medium. Specifically, we use spectral imaging polarimetry, as the vector nature of polarized light enables it to interact with optical anisotropies within a medium, while the spectral aspect of polarization is sensitive to small-scale structure. The ability to image these effects allows for inference of spatial structural organization parameters. This work describes a methodology for revealing structural organization by exploiting the Stokes/Mueller formalism and by utilizing measurements from a spectral imaging polarimeter constructed from liquid crystal variable retarders and a liquid crystal tunable filter. We provide results to validate the system and then show results from measurements on a mineral sample.
The seasonal-cycle climate model
NASA Technical Reports Server (NTRS)
Marx, L.; Randall, D. A.
1981-01-01
The seasonal-cycle run, which will become the control run for comparison with runs utilizing codes and parameterizations developed by outside investigators, is discussed. The climate model currently exists in two parallel versions: one running on the Amdahl and the other running on the CYBER 203. These two versions are as nearly identical as machine capability and the requirement for high-speed performance will allow. Developmental changes are made on the Amdahl/CMS version for ease of testing and rapidity of turnaround. The changes are subsequently incorporated into the CYBER 203 version using vectorization techniques where speed improvement can be realized. The 400-day seasonal-cycle run serves as a control run for both medium- and long-range climate forecasts as well as sensitivity studies.
Parameterizing the Transport Pathways for Cell Invasion in Complex Scaffold Architectures
Ashworth, Jennifer C.; Mehr, Marco; Buxton, Paul G.; Best, Serena M.
2016-01-01
Interconnecting pathways through porous tissue engineering scaffolds play a vital role in determining nutrient supply, cell invasion, and tissue ingrowth. However, the global use of the term “interconnectivity” often fails to describe the transport characteristics of these pathways, giving no clear indication of their potential to support tissue synthesis. This article uses new experimental data to provide a critical analysis of reported methods for the description of scaffold transport pathways, ranging from qualitative image analysis to thorough structural parameterization using X-ray Micro-Computed Tomography. In the collagen scaffolds tested in this study, it was found that the proportion of pore space perceived to be accessible dramatically changed depending on the chosen method of analysis. Measurements of % interconnectivity as defined in this manner varied as a function of direction and connection size, and also showed a dependence on measurement length scale. As an alternative, a method for transport pathway parameterization was investigated, using percolation theory to calculate the diameter of the largest sphere that can travel to infinite distance through a scaffold in a specified direction. As proof of principle, this approach was used to investigate the invasion behavior of primary fibroblasts in response to independent changes in pore wall alignment and pore space accessibility, parameterized using the percolation diameter. The result was that both properties played a distinct role in determining fibroblast invasion efficiency. This example therefore demonstrates the potential of the percolation diameter as a method of transport pathway parameterization, to provide key structural criteria for application-based scaffold design. PMID:26888449
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, provided one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
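The decomposition can be illustrated on a toy problem: split the sum of two component objectives with a consensus constraint and alternate between the separate component minimizations and multiplier updates. The quadratic objectives below are placeholders for the body-wave and surface-wave misfit functions, and the scaled-multiplier (ADMM-style) update is one standard realization of the augmented Lagrangian scheme.

```python
import numpy as np

# Two component objectives f_i(m) = 0.5*||A_i m - d_i||^2 standing in for
# the travel-time and group-velocity misfits; consensus constraint m1 = m2.
rng = np.random.default_rng(5)
A1, d1 = rng.normal(size=(15, 4)), rng.normal(size=15)
A2, d2 = rng.normal(size=(12, 4)), rng.normal(size=12)

rho = 1.0                             # augmented Lagrangian penalty
z = np.zeros(4)                       # common (consensus) model
u1 = np.zeros(4); u2 = np.zeros(4)    # scaled Lagrange multipliers
I = np.eye(4)

for _ in range(100):
    # Separate subproblem solves (each could run on its own machine)
    m1 = np.linalg.solve(A1.T @ A1 + rho * I, A1.T @ d1 + rho * (z - u1))
    m2 = np.linalg.solve(A2.T @ A2 + rho * I, A2.T @ d2 + rho * (z - u2))
    z = 0.5 * (m1 + u1 + m2 + u2)     # steer toward a common model
    u1 += m1 - z                      # multiplier updates
    u2 += m2 - z

# Direct solution of the full (stacked) problem for comparison
m_full = np.linalg.lstsq(np.vstack([A1, A2]),
                         np.concatenate([d1, d2]), rcond=None)[0]
print("difference from joint solution:", np.linalg.norm(z - m_full))
```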
A feature selection approach towards progressive vector transmission over the Internet
NASA Astrophysics Data System (ADS)
Miao, Ru; Song, Jia; Feng, Min
2017-09-01
WebGIS has been widely applied for visualizing and sharing geospatial information over the Internet. In order to improve the efficiency of client applications, a web-based progressive vector transmission approach is proposed: important features should be selected and transferred first, so methods for measuring the importance of features must be considered in progressive transmission. However, studies on progressive transmission of large-volume vector data have mostly focused on map generalization in the field of cartography, and have rarely discussed the quantitative selection of geographic features. This paper applies information theory to measure feature importance in vector maps. A measurement model for the information content of vector features is defined to address the feature selection problem; the model involves a geometry factor, a spatial distribution factor and a thematic attribute factor. Moreover, a real-time transport protocol (RTP)-based progressive transmission method is presented to improve the delivery of vector data. To clearly demonstrate the essential methodology and key techniques, a prototype for web-based progressive vector transmission is presented, and an experiment on progressive selection and transmission of vector features is conducted. The experimental results indicate that our approach clearly improves the performance and end-user experience of delivering and manipulating large vector data over the Internet.
Okia, Michael; Okui, Peter; Lugemwa, Myers; Govere, John M; Katamba, Vincent; Rwakimari, John B; Mpeka, Betty; Chanda, Emmanuel
2016-04-14
Integrated vector management (IVM) is the recommended approach for controlling some vector-borne diseases (VBDs). In the face of current challenges to disease vector control, IVM is vital to achieving national targets set for VBD control. Though global efforts, especially for combating malaria, now focus on elimination and eradication, IVM remains useful for Uganda, which is principally still in the control phase of the malaria continuum. This paper outlines the processes undertaken to consolidate tactical planning and implementation frameworks for IVM in Uganda. The Uganda National Malaria Control Programme, with its efforts to implement an IVM approach to vector control, was the 'case' for this study. Integrated management of malaria vectors in Uganda remained an underdeveloped component of malaria control policy. In 2012, knowledge and perceptions of malaria vector control policy and IVM were assessed, and recommendations for a specific IVM policy were made. In 2014, a thorough vector control needs assessment (VCNA) was conducted according to WHO recommendations. The findings of the VCNA informed the development of the national IVM strategic guidelines. Information sources for this study included all available data and accessible archived documentary records on VBD control in Uganda. The literature was reviewed, adapted to the local context and translated into the consolidated tactical framework. WHO recommends implementation of IVM as the main strategy for vector control and has encouraged member states to adopt the approach. However, many VBD-endemic countries lack IVM policy frameworks to guide implementation of the approach. In Uganda most VBDs coexist and could be managed more effectively in tandem. In order to successfully control malaria and other VBDs and move towards their elimination, the country needs to scale up proven and effective vector control interventions and also learn from the experience of other countries. The IVM strategy is important in consolidating inter-sectoral collaboration and coordination, and in providing the tactical direction for effective deployment of vector control interventions along the five key elements of the approach, aligning them with the contemporary epidemiology of VBDs in the country. Uganda has successfully established an evidence-based IVM approach and consolidated strategic planning and operational frameworks for VBD control. However, operationalizing the implementation arrangements outlined in the national strategic guidelines, managing insecticide resistance and improving vector surveillance remain imperative. In addition, strengthened information, education and communication/behaviour change communication, collaboration and coordination will be crucial in scaling up and using vector control interventions.
The Vector-Ballot Approach for Online Voting Procedures
NASA Astrophysics Data System (ADS)
Kiayias, Aggelos; Yung, Moti
Looking at current cryptographic-based e-voting protocols, one can distinguish three basic design paradigms (or approaches): (a) Mix-Networks based, (b) Homomorphic Encryption based, and (c) Blind Signatures based. Each of the three possesses different advantages and disadvantages w.r.t. the basic properties of (i) efficient tallying, (ii) universal verifiability, and (iii) allowing write-in ballot capability (in addition to predetermined candidates). In fact, none of the approaches results in a scheme that simultaneously achieves all three. This is unfortunate, since the three basic properties are crucial for efficiency, integrity and versatility (flexibility), respectively. Further, one can argue that a serious business offering of voting technology should offer a flexible technology that achieves various election goals with a single user interface. This motivates our goal, which is to suggest a new "vector-ballot" based approach for secret-ballot e-voting that is based on three new notions: Provably Consistent Vector Ballot Encodings, Shrink-and-Mix Networks and Punch-Hole-Vector-Ballots. At the heart of our approach is the combination of mix networks and homomorphic encryption under a single user interface; given this, it is rather surprising that it achieves much more than any of the previous approaches for e-voting achieved in terms of the basic properties. Our approach is presented in two generic designs called "homomorphic vector-ballots with write-in votes" and "multi-candidate punch-hole vector-ballots"; both of our designs can be instantiated over any homomorphic encryption function.
NASA Astrophysics Data System (ADS)
McFarquhar, G. M.; Finlon, J.; Um, J.; Nesbitt, S. W.; Borque, P.; Chase, R.; Wu, W.; Morrison, H.; Poellot, M.
2017-12-01
Parameterizations of fall speed-dimension (V-D), mass-dimension (m-D) and projected area-dimension (A-D) relationships are needed for the development of model parameterization and remote sensing retrieval schemes. An approach for deriving such relations is discussed here that improves upon previously developed schemes in the following aspects: 1) surfaces are used to characterize uncertainties in derived coefficients; 2) all derived relations are internally consistent; and 3) multiple bulk measures are used to derive parameter coefficients. In this study, data collected by two-dimensional optical array probes (OAPs) installed on the University of North Dakota Citation aircraft during the Mid-Latitude Continental Convective Clouds Experiment (MC3E) and during the Olympic Mountains Experiment (OLYMPEX) are used in conjunction with data from a Nevzorov total water content (TWC) probe and ground-based radar data at S-band to test a novel approach that determines m-D relationships for a variety of environments. A surface of equally realizable a and b coefficients, where $m = aD^b$, in (a, b) phase space is determined using a technique that minimizes the chi-squared difference between both the TWC and radar reflectivity Z derived from the size distributions measured by the OAPs and those directly measured by the TWC probe and radar, accepting as valid all coefficients within a specified tolerance of the minimum chi-squared difference. Because both A and perimeter P can be directly measured by OAPs, coefficients characterizing these relationships are derived using only one bulk parameter constraint derived from the appropriate images. Because terminal velocity parameterizations depend on both A and m, V-D relations can be derived from these self-consistent relations. Using this approach, changes in parameters associated with varying environmental conditions and varying aerosol amounts and compositions can be isolated from changes associated with statistical noise or measurement errors. The applicability of the derived coefficients to a stochastic framework that employs an observationally-constrained dataset to account for coefficient variability within microphysics parameterization schemes is discussed.
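A schematic sketch of the surface idea, with synthetic stand-ins for the probe data: scan (a, b) on a grid, compute the chi-squared misfit of the size-distribution-derived TWC against the "measured" TWC, and retain all pairs within a tolerance of the minimum. Here only a single bulk constraint is used, so the accepted coefficients trace out a curve in (a, b) space rather than a point.

```python
import numpy as np

# Synthetic size distribution: number concentration n(D) in each size bin
D = np.linspace(0.1, 10.0, 40)              # particle maximum dimension, mm
n = 1e3 * np.exp(-0.8 * D)                  # concentration per bin (arbitrary)

a_true, b_true = 0.005, 2.1
twc_meas = np.sum(n * a_true * D**b_true)   # "measured" bulk TWC

a_grid = np.linspace(0.001, 0.01, 80)
b_grid = np.linspace(1.5, 2.7, 80)
chi2 = np.empty((a_grid.size, b_grid.size))
for i, a in enumerate(a_grid):
    for j, b in enumerate(b_grid):
        twc = np.sum(n * a * D**b)          # TWC implied by m = a D^b
        chi2[i, j] = ((twc - twc_meas) / twc_meas) ** 2

# All (a, b) pairs within tolerance of the minimum are equally realizable
tol = 1e-4
ii, jj = np.where(chi2 <= chi2.min() + tol)
print(f"{ii.size} coefficient pairs lie on the accepted surface")
```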
Madsen, Kristoffer H; Ewald, Lars; Siebner, Hartwig R; Thielscher, Axel
2015-01-01
Field calculations for transcranial magnetic stimulation (TMS) are increasingly implemented online in neuronavigation systems and in more realistic offline approaches based on finite-element methods. They are often based on simplified and/or non-validated models of the magnetic vector potential of the TMS coils. Our objective was to develop an approach to reconstruct the magnetic vector potential from automated measurements. We implemented a setup that simultaneously measures the three components of the magnetic field with high spatial resolution. This is complemented by a novel approach to determine the magnetic vector potential via volume integration of the measured field. The integration approach reproduces the vector potential with very good accuracy. The vector potential distribution of a standard figure-of-eight shaped coil determined with our setup corresponds well with that calculated using a model reconstructed from x-ray images. The setup can supply validated models for existing and newly appearing TMS coils. Copyright © 2015 Elsevier Inc. All rights reserved.
3D reconstruction of the magnetic vector potential using model based iterative reconstruction.
Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc
2017-11-01
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability estimation problem (MAP). The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets show that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.
Biodiversity Can Help Prevent Malaria Outbreaks in Tropical Forests
Laporta, Gabriel Zorello; de Prado, Paulo Inácio Knegt Lopez; Kraenkel, Roberto André; Coutinho, Renato Mendes; Sallum, Maria Anice Mureb
2013-01-01
Background: Plasmodium vivax is a widely distributed, neglected parasite that can cause malaria and death in tropical areas. It is associated with an estimated 80–300 million cases of malaria worldwide. Brazilian tropical rain forests encompass host- and vector-rich communities, in which two hypothetical mechanisms could play a role in the dynamics of malaria transmission. The first mechanism is the dilution effect caused by the presence of wild warm-blooded animals, which can act as dead-end hosts to Plasmodium parasites. The second is diffuse mosquito vector competition, in which vector and non-vector mosquito species compete for blood feeding upon a defensive host. Considering that the World Health Organization Malaria Eradication Research Agenda calls for novel strategies to eliminate malaria transmission locally, we used mathematical modeling to assess those two mechanisms in a pristine tropical rain forest, where the primary vector is present but malaria is absent. Methodology/Principal Findings: The Ross–Macdonald model and a biodiversity-oriented model were parameterized using newly collected data and data from the literature. The basic reproduction number (R0) estimated employing the Ross–Macdonald model indicated that malaria cases should occur in the study location. However, no malaria cases have been reported since 1980. In contrast, the biodiversity-oriented model corroborated the absence of malaria transmission. In addition, the diffuse competition mechanism was negatively correlated with the risk of malaria transmission, which suggests a protective effect provided by the forest ecosystem. There is a non-linear, unimodal correlation between the mechanism of dead-end transmission of parasites and the risk of malaria transmission, suggesting a protective effect only under certain circumstances (e.g., a high abundance of wild warm-blooded animals). Conclusions/Significance: To achieve biological conservation and to eliminate Plasmodium parasites in human populations, the World Health Organization Malaria Eradication Research Agenda should take biodiversity issues into consideration. PMID:23556023
Robust support vector regression networks for function approximation with outliers.
Chuang, Chen-Chia; Su, Shun-Feng; Jeng, Jin-Tsong; Hsiao, Chih-Ching
2002-01-01
Support vector regression (SVR) employs the support vector machine (SVM) to tackle problems of function approximation and regression estimation. SVR has been shown to have good robustness properties against noise. When the parameters used in SVR are improperly selected, however, overfitting may still occur, and the selection of the various parameters is not straightforward. Moreover, in SVR, outliers may be taken as support vectors, and such an inclusion of outliers in the support vectors may lead to serious overfitting. In this paper, a novel regression approach, termed the robust support vector regression (RSVR) network, is proposed to enhance the robustness of SVR. In the approach, traditional robust learning approaches are employed to improve the learning performance for any selected parameters. In the simulation results, RSVR always improves the performance of the learned systems, in all cases. Moreover, even when training lasted for a long period, the testing errors did not go up; in other words, the overfitting phenomenon is indeed suppressed.
Global Transport Networks and Infectious Disease Spread
Tatem, A.J.; Rogers, D.J.; Hay, S.I.
2011-01-01
Air, sea and land transport networks continue to expand in reach, speed of travel and volume of passengers and goods carried. Pathogens and their vectors can now move further, faster and in greater numbers than ever before. Three important consequences of global transport network expansion are infectious disease pandemics, vector invasion events and vector-borne pathogen importation. This review briefly examines some of the important historical examples of these disease and vector movements, such as the global influenza pandemics, the devastating Anopheles gambiae invasion of Brazil and the recent increases in imported Plasmodium falciparum malaria cases. We then outline potential approaches for future studies of disease movement, focussing on vector invasion and vector-borne disease importation. Such approaches allow us to explore the potential implications of international air travel, shipping routes and other methods of transport on global pathogen and vector traffic. PMID:16647974
Querying databases of trajectories of differential equations: Data structures for trajectories
NASA Technical Reports Server (NTRS)
Grossman, Robert
1989-01-01
One approach to qualitative reasoning about dynamical systems is to extract qualitative information by searching or making queries on databases containing very large numbers of trajectories. The efficiency of such queries depends crucially upon finding an appropriate data structure for trajectories of dynamical systems. Suppose that a large number of parameterized trajectories $\gamma$ of a dynamical system evolving in $\mathbb{R}^N$ are stored in a database. Let $\eta \subset \mathbb{R}^N$ denote a parameterized path in Euclidean space, and let $\|\cdot\|$ denote a norm on the space of paths. A data structure is defined to represent trajectories of dynamical systems, and an algorithm is sketched which answers queries.
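A minimal sketch of the idea, under the assumption that trajectories are stored as uniformly sampled arrays: represent each trajectory as an (n_steps, N) array indexed by its parameter, and answer a nearest-trajectory query against a query path under a discrete path norm. The storage format and query are illustrative assumptions, not the paper's data structure.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate(x0, steps=100, dt=0.05):
    """Sampled trajectory of a toy linear system x' = A x in R^2."""
    A = np.array([[0.0, 1.0], [-1.0, -0.1]])
    out = np.empty((steps, 2))
    x = np.array(x0, dtype=float)
    for k in range(steps):
        out[k] = x
        x = x + dt * (A @ x)            # forward Euler sampling
    return out

# Database: trajectories indexed by their initial condition (the parameter)
database = {tuple(x0): simulate(x0) for x0 in rng.uniform(-1, 1, size=(50, 2))}

def nearest(query, db):
    """Parameter of the stored trajectory closest to `query` in the
    discrete sup norm over sample points."""
    return min(db, key=lambda p: np.abs(db[p] - query).max())

q = simulate([0.3, -0.2])
print("closest stored trajectory starts at:", nearest(q, database))
```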
Dynamic Forms. Part 1: Functions
NASA Technical Reports Server (NTRS)
Meyer, George; Smith, G. Allan
1993-01-01
The formalism of dynamic forms is developed as a means for organizing and systematizing the design of control systems. The formalism allows the designer to easily compute derivatives, to various orders, of the large composite functions that occur in flight-control design. Such functions involve many function-of-a-function calls that may be nested to many levels. The component functions may be multiaxis and nonlinear, and they may include rotation transformations. A dynamic form is defined as a variable together with its time derivatives up to some fixed but arbitrary order. The variable may be a scalar, a vector, a matrix, a direction cosine matrix, Euler angles, or Euler parameters. Algorithms for standard elementary functions and operations on scalar dynamic forms are developed first. Vector and matrix operations and transformations between parameterizations of rotations are developed at the next level in the hierarchy. Commonly occurring algorithms in control-system design, including inversion of pure feedback systems, are developed at the third level. A large-angle, three-axis attitude servo and other examples are included to illustrate the effectiveness of the developed formalism. All algorithms were implemented in FORTRAN code. Practical experience shows that the proposed formalism may significantly improve the productivity of the design and coding process.
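A minimal sketch of a scalar dynamic form, assuming truncation at a fixed order: store a value together with its time derivatives and propagate derivatives through products with the Leibniz rule. The class name and API are illustrative, not the paper's FORTRAN implementation.

```python
import math
import numpy as np

class DynamicForm:
    """A scalar variable together with its time derivatives up to a fixed order."""
    def __init__(self, coeffs):
        self.c = np.asarray(coeffs, dtype=float)   # [x, x', x'', ...]

    def __add__(self, other):
        return DynamicForm(self.c + other.c)       # derivatives add linearly

    def __mul__(self, other):
        # Leibniz rule: (fg)^(k) = sum_j C(k, j) f^(j) g^(k-j)
        n = len(self.c)
        out = np.zeros(n)
        for k in range(n):
            out[k] = sum(math.comb(k, j) * self.c[j] * other.c[k - j]
                         for j in range(k + 1))
        return DynamicForm(out)

# x(t) = t evaluated at t = 2: value 2, x' = 1, x'' = 0
x = DynamicForm([2.0, 1.0, 0.0])
# y = x * x carries value t^2 = 4, y' = 2t = 4, y'' = 2
print((x * x).c)   # [4. 4. 2.]
```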
Xu, Ji; Zhong, Yi; Wang, Shengming; Lu, Yunqing; Wan, Hongdan; Jiang, Jian; Wang, Jin
2015-10-19
Sub-wavelength focusing of cylindrical vector beams (CVBs) has attracted great attention due to its specific physical effects and applications in many areas, and more powerful, flexible and effective ways to modulate the focus transversally and longitudinally are being pursued. In this paper, a cylindrically symmetric lens composed of a negative-index one-dimensional photonic crystal is proposed to this end. By revealing the relationship between the focal length and the shape of the exit surface of the lens, a simple and effective principle for designing the lens structure is presented to realize specific focus modulation. Plano-concave lenses are parameterized to modulate the focal length and the number of focuses. An axicon constructed from a one-dimensional photonic crystal is proposed for the first time to obtain a large depth of focus, and an optical needle focal field with a nearly theoretical-minimum FWHM of 0.362λ is achieved under radially polarized incident light. Because the negative refractive index is almost identical for the TE and TM polarization states, all the modulation methods can be applied to arbitrarily polarized CVBs. This work offers a promising methodology for designing negative-index lenses in related application areas.
NASA Astrophysics Data System (ADS)
Costache, G. N.; Gavat, I.
2004-09-01
With the aggressive growth in the amount of digital data available (text, audio samples, digital photos and digital movies, joined in the multimedia domain), the need for classification, recognition and retrieval of this kind of data has become very important. In this paper, a system structure to handle multimedia data from a recognition perspective is presented. The main processing steps for the multimedia objects of interest are: first, parameterization by analysis, in order to obtain a feature-based description forming the parameter vector; and second, classification, generally with a hierarchical structure, to make the necessary decisions. For audio signals, both speech and music, the derived perceptual features are the mel-cepstral (MFCC) and the perceptual linear predictive (PLP) coefficients. For images, the derived features are the geometric parameters of the speaker's mouth. The hierarchical classifier generally consists of a clustering stage, based on Kohonen Self-Organizing Maps (SOMs), and a final stage based on a powerful classification algorithm called Support Vector Machines (SVMs). The system, in specific variants, is applied with good results to two tasks: the first is bimodal speech recognition, which fuses features obtained from the speech signal with features obtained from the speaker's image; the second is music retrieval from a large music database.
A simple map-based localization strategy using range measurements
NASA Astrophysics Data System (ADS)
Moore, Kevin L.; Kutiyanawala, Aliasgar; Chandrasekharan, Madhumita
2005-05-01
In this paper we present a map-based approach to localization. We consider indoor navigation in known environments based on the idea of a "vector cloud" by observing that any point in a building has an associated vector defining its distance to the key structural components (e.g., walls, ceilings, etc.) of the building in any direction. Given a building blueprint we can derive the "ideal" vector cloud at any point in space. Then, given measurements from sensors on the robot we can compare the measured vector cloud to the possible vector clouds cataloged from the blueprint, thus determining location. We present algorithms for implementing this approach to localization, using the Hamming norm, the 1-norm, and the 2-norm. The effectiveness of the approach is verified by experiments on a 2-D testbed using a mobile robot with a 360° laser range-finder and through simulation analysis of robustness.
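A minimal sketch of the matching step under the three norms, assuming a hypothetical catalog mapping candidate locations to their blueprint-derived range vectors; variable names and the tolerance are invented for illustration.

```python
import numpy as np

def match_location(measured, catalog, norm=2):
    """Compare a measured range 'vector cloud' against per-location clouds
    derived from a blueprint; return the best-matching location key."""
    best, best_cost = None, np.inf
    for loc, expected in catalog.items():
        diff = measured - expected
        if norm == 0:      # Hamming: count ranges differing beyond a tolerance
            cost = np.count_nonzero(np.abs(diff) > 0.05)
        elif norm == 1:    # 1-norm: sum of absolute range differences
            cost = np.abs(diff).sum()
        else:              # 2-norm: Euclidean distance between clouds
            cost = np.sqrt((diff ** 2).sum())
        if cost < best_cost:
            best, best_cost = loc, cost
    return best, best_cost

catalog = {(0, 0): np.array([2.0, 3.5, 2.0, 3.5]),
           (1, 2): np.array([1.0, 4.5, 3.0, 2.5])}
measured = np.array([1.1, 4.4, 2.9, 2.6])
print(match_location(measured, catalog, norm=1))  # -> ((1, 2), ...)
```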
Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.
2017-12-01
The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated with their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show that 1) estimated transition probabilities agree with simulated values and 2) using the SMM with the estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
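The quantity being parameterized is a matrix of transition probabilities between velocity classes. The sketch below builds such a matrix from paired successive velocities, i.e. the conventional Lagrangian route that the paper's BTC-only method is designed to avoid; the class construction, synthetic series, and all names are illustrative.

```python
import numpy as np

def transition_matrix(v_now, v_next, n_classes=4):
    """Empirical SMM transition probabilities between velocity classes,
    built from paired successive velocities. Equiprobable class edges."""
    edges = np.quantile(v_now, np.linspace(0.0, 1.0, n_classes + 1))
    i = np.clip(np.searchsorted(edges, v_now, side="right") - 1, 0, n_classes - 1)
    j = np.clip(np.searchsorted(edges, v_next, side="right") - 1, 0, n_classes - 1)
    T = np.zeros((n_classes, n_classes))
    np.add.at(T, (i, j), 1.0)
    return T / np.maximum(T.sum(axis=1, keepdims=True), 1.0)

rng = np.random.default_rng(0)
v = rng.lognormal(0.0, 1.0, 5001)       # synthetic (iid) velocity series,
print(transition_matrix(v[:-1], v[1:])) # so rows here are near-uniform
```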
NASA Astrophysics Data System (ADS)
Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2017-11-01
Transient hydraulic tomography (THT) is a robust method of aquifer characterization for estimating the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when the head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information are calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.
Explicit Global Simulation of Gravity Waves up to the Lower Thermosphere
NASA Astrophysics Data System (ADS)
Becker, E.
2016-12-01
At least for short-term simulations, middle atmosphere general circulation models (GCMs) can be run with sufficiently high resolution to describe a good part of the gravity wave spectrum explicitly. Nevertheless, the parameterization of unresolved dynamical scales remains an issue, especially when the scales of parameterized gravity waves (GWs) and resolved GWs become comparable. In addition, turbulent diffusion must always be parameterized along with other subgrid-scale dynamics. A practical solution to the combined closure problem for GWs and turbulent diffusion is to dispense with a parameterization of GWs, apply a high spatial resolution, and represent the unresolved scales by a macro-turbulent diffusion scheme that gives rise to wave damping in a self-consistent fashion. This is the approach of a few GCMs that extend from the surface to the lower thermosphere and simulate a realistic GW drag and summer-to-winter-pole residual circulation in the upper mesosphere. In this study we describe a new version of the Kuehlungsborn Mechanistic general Circulation Model (KMCM), which includes explicit (though idealized) computations of radiative transfer and the tropospheric moisture cycle. Particular emphasis is placed on 1) the turbulent diffusion scheme, 2) the attenuation of resolved GWs at critical levels, 3) the generation of GWs in the middle atmosphere from body forces, and 4) GW-tidal interactions (including the energy deposition of GWs and tides).
Building integral projection models: a user's guide.
Rees, Mark; Childs, Dylan Z; Ellner, Stephen P
2014-05-01
In order to understand how changes in individual performance (growth, survival or reproduction) influence population dynamics and evolution, ecologists are increasingly using parameterized mathematical models. For continuously structured populations, where some continuous measure of individual state influences growth, survival or reproduction, integral projection models (IPMs) are commonly used. We provide a detailed description of the steps involved in constructing an IPM, explaining how to: (i) translate your study system into an IPM; (ii) implement your IPM; and (iii) diagnose potential problems with your IPM. We emphasize how the study organism's life cycle, and the timing of censuses, together determine the structure of the IPM kernel and important aspects of the statistical analysis used to parameterize an IPM using data on marked individuals. An IPM based on population studies of Soay sheep is used to illustrate the complete process of constructing, implementing and evaluating an IPM fitted to sample data. We then look at very general approaches to parameterizing an IPM, using a wide range of statistical techniques (e.g. maximum likelihood methods, generalized additive models, nonparametric kernel density estimators). Methods for selecting models for parameterizing IPMs are briefly discussed. We conclude with key recommendations and a brief overview of applications that extend the basic model. The online Supporting Information provides commented R code for all our analyses. © 2014 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
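A minimal sketch of the kernel-construction and implementation steps for a simple IPM, discretized with the midpoint rule; the vital-rate functions and coefficients below are invented for illustration and are not those fitted to the Soay sheep data (the paper's Supporting Information provides the actual R code).

```python
import numpy as np
from scipy.stats import norm

n, L, U = 100, 0.0, 10.0
h = (U - L) / n
z = L + (np.arange(n) + 0.5) * h   # midpoint mesh of individual sizes

# Hypothetical vital rates: survival s(z), growth g(z'|z), fecundity, recruits
surv = 1 / (1 + np.exp(-(-2.0 + 0.8 * z)))                          # s(z)
grow = norm.pdf(z[:, None], loc=1.0 + 0.9 * z[None, :], scale=0.5)  # g(z'|z)
fec = 0.2 * z                                    # offspring per individual
rec = norm.pdf(z[:, None], loc=2.0, scale=0.7)   # offspring size distribution

# Kernel K(z', z) = survival/growth part + reproduction part
K = h * (grow * surv[None, :] + rec * fec[None, :])
lam = np.max(np.real(np.linalg.eigvals(K)))
print(f"asymptotic growth rate lambda = {lam:.3f}")
```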
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Samuel S. P.
2013-09-01
The long-range goal of several past and current projects in our DOE-supported research has been the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data, and the implementation and testing of these parameterizations in global models. The main objective of the present project being reported on here has been to develop and apply advanced statistical techniques, including Bayesian posterior estimates, to diagnose and evaluate features of both observed and simulated clouds. The research carried out under this project has been novel in two important ways. The first is that it is a key step in the development of practical stochastic cloud-radiation parameterizations, a new category of parameterizations that offers great promise for overcoming many shortcomings of conventional schemes. The second is that this work has brought powerful new tools to bear on the problem, because it has been an interdisciplinary collaboration between a meteorologist with long experience in ARM research (Somerville) and a mathematician who is an expert on a class of advanced statistical techniques that are well-suited for diagnosing model cloud simulations using ARM observations (Shen). The motivation and long-term goal underlying this work is the utilization of stochastic radiative transfer theory (Lane-Veron and Somerville, 2004; Lane et al., 2002) to develop a new class of parametric representations of cloud-radiation interactions and closely related processes for atmospheric models. The theoretical advantage of the stochastic approach is that it can accurately calculate the radiative heating rates through a broken cloud layer without requiring an exact description of the cloud geometry.
Expressive map design: OGC SLD/SE++ extension for expressive map styles
NASA Astrophysics Data System (ADS)
Christophe, Sidonie; Duménieu, Bertrand; Masse, Antoine; Hoarau, Charlotte; Ory, Jérémie; Brédif, Mathieu; Lecordix, François; Mellado, Nicolas; Turbet, Jérémie; Loi, Hugo; Hurtut, Thomas; Vanderhaeghe, David; Vergne, Romain; Thollot, Joëlle
2018-05-01
In the context of custom map design, handling more artistic and expressive tools has been identified as a cartographic need, in order to design stylized and expressive maps. Based on previous works on style formalization, an approach for specifying the map style has been proposed and experimented with for particular use cases. A first step deals with the analysis of inspiration sources, in order to extract 'what makes the style of the source', i.e. the salient visual characteristics to be automatically reproduced (textures, spatial arrangements, linear stylization, etc.). In a second step, in order to mimic and generate those visual characteristics, existing and innovative rendering techniques have been implemented in our GIS engine, thus extending its capabilities to generate expressive renderings. Therefore, an extension of the existing cartographic pipeline has been proposed based on the following aspects: 1- extension of the OGC SLD/SE symbolization specifications in order to provide a formalism to specify and reference expressive rendering methods; 2- separation of the specification of each rendering method from its parameterization, stored as metadata. The main contribution has been described in (Christophe et al. 2016). In this paper, we focus firstly on the extension of the cartographic pipeline (SLD++ and metadata) and secondly on map design capabilities, which have been experimented with on various topographic styles: old cartographic styles (Cassini), artistic styles (watercolor, impressionism, Japanese print), hybrid topographic styles (ortho-imagery & vector data) and finally abstract and photo-realist styles for the geovisualization of coastal areas. The genericity and interoperability of our approach are promising and have already been tested for 3D visualization.
Effectiveness of feature and classifier algorithms in character recognition systems
NASA Astrophysics Data System (ADS)
Wilson, Charles L.
1993-04-01
At the first Census Optical Character Recognition Systems Conference, NIST generated accuracy data for a large number of character recognition systems. Most systems were tested on the recognition of isolated digits and upper and lower case alphabetic characters. The recognition experiments were performed on sample sizes of 58,000 digits and 12,000 upper and lower case alphabetic characters. The algorithms used by the 26 conference participants included rule-based methods, image-based methods, statistical methods, and neural networks. The neural network methods included Multi-Layer Perceptrons, Learned Vector Quantization, Neocognitrons, and cascaded neural networks. In this paper 11 different systems are compared using correlations between the answers of different systems, comparing the decrease in error rate as a function of confidence of recognition, and comparing the writer dependence of recognition. This comparison shows that methods that used different algorithms for feature extraction and recognition performed with very high levels of correlation. This is true for neural network systems, hybrid systems, and statistically based systems, and leads to the conclusion that neural networks have not yet demonstrated a clear superiority to more conventional statistical methods. Comparison of these results with the models of Vapnik (for estimation problems), MacKay (for Bayesian statistical models), Moody (for effective parameterization), and Boltzmann models (for information content) demonstrates that as the limits of training data variance are approached, all classifier systems have similar statistical properties. The limiting condition can only be approached for sufficiently rich feature sets because the accuracy limit is controlled by the available information content of the training set, which must pass through the feature extraction process prior to classification.
Measurement and partitioning of evapotranspiration for application to vadose zone studies
USDA-ARS?s Scientific Manuscript database
Partitioning evapotranspiration (ET) into its constituent components, soil evaporation (E) and plant transpiration (T), is important for vadose zone studies because E and T are often parameterized separately. However, partitioning ET is challenging, and many longstanding approaches have significant ...
Predictions of Bedforms in Tidal Inlets and River Mouths
2016-07-31
that community modeling environment. APPROACH Bedforms are ubiquitous in unconsolidated sediments. They act as roughness elements, altering the... flow and creating feedback between the bed and the flow and, in doing so, they are intimately tied to erosion, transport and deposition of sediments... With this approach, grain-scale sediment transport is parameterized with simple rules to drive bedform-scale dynamics. Gallagher (2011) developed a
Word-level recognition of multifont Arabic text using a feature vector matching approach
NASA Astrophysics Data System (ADS)
Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III
1996-03-01
Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan
2012-05-15
Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2x over the version that used default optimizations, but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.
NASA Technical Reports Server (NTRS)
Schwemmer, Geary K.; Miller, David O.
2005-01-01
Clouds have a powerful influence on atmospheric radiative transfer and hence are crucial to understanding and interpreting the exchange of radiation between the Earth's surface, the atmosphere, and space. Because clouds are highly variable in space, time and physical makeup, it is important to be able to observe them in three dimensions (3-D) with sufficient resolution that the data can be used to generate and validate parameterizations of cloud fields at the resolution scale of global climate models (GCMs). Simulations of photon transport in three-dimensionally inhomogeneous cloud fields show that spatial inhomogeneities tend to decrease cloud reflection and absorption and increase direct and diffuse transmission. Therefore it is an important task to characterize cloud spatial structures in three dimensions on the scale of GCM grid elements. In order to validate cloud parameterizations that represent the ensemble, or mean and variance, of cloud properties within a GCM grid element, measurements of the parameters must be obtained on a much finer scale so that the statistics on those measurements are truly representative. High spatial sampling resolution is required, on the order of 1 km or less. Since the radiation fields respond almost instantaneously to changes in the cloud field, and cloud changes occur on scales of seconds or less when viewed at scales of approximately 100 m, the temporal resolution of cloud properties should be measured and characterized on second time scales. GCM time steps are typically on the order of an hour, but in order to obtain sufficient statistical representations of cloud properties in the parameterizations that are used as model inputs, averaged values of cloud properties should be calculated on time scales on the order of 10-100 s. The Holographic Airborne Rotating Lidar Instrument Experiment (HARLIE) provides exceptional temporal (100 ms) and spatial (30 m) resolution measurements of aerosol and cloud backscatter in three dimensions. HARLIE was used in a ground-based configuration in several recent field campaigns. Principal data products include aerosol backscatter profiles, boundary layer heights, entrainment zone thickness, cloud fraction as a function of altitude, and horizontal wind vector profiles based on correlating the motions of clouds and aerosol structures across portions of the scan. Comparisons will be made between various cloud-detecting instruments to develop a baseline performance metric.
Parameterization of wind turbine impacts on hydrodynamics and sediment transport
NASA Astrophysics Data System (ADS)
Rivier, Aurélie; Bennis, Anne-Claire; Pinon, Grégory; Magar, Vanesa; Gross, Markus
2016-10-01
Monopile foundations of offshore wind turbines modify the hydrodynamics and sediment transport at local and regional scales. The aim of this work is to assess these modifications and to parameterize them in a regional model. In the present study, this is achieved through a regional circulation model, coupled with a sediment transport module, using two approaches. One approach is to explicitly model the monopiles in the mesh as dry cells, and the other is to parameterize them by adding a drag force term to the momentum and turbulence equations. Idealised cases are run using hydrodynamic conditions and sediment grain sizes typical of the area located off Courseulles-sur-Mer (Normandy, France), where an offshore wind farm is being planned, to assess the capacity of the model to reproduce the effect of the monopile on the environment. Then, the model is applied to a real configuration of an area including the future offshore wind farm of Courseulles-sur-Mer. Four monopiles are represented in the model using both approaches, and modifications of the hydrodynamics and sediment transport are assessed over a tidal cycle. In relation to local hydrodynamic effects, currents increase at the sides of the monopile and decrease in front of and downstream of it. In relation to sediment transport effects, the results show that resuspension and erosion occur around the monopile where the current speed increases due to the monopile presence, and sediment deposits downstream where the bed shear stress is lower. During the tidal cycle, wakes downstream of one monopile reach the following monopile and modify the velocity magnitude and suspended sediment concentration patterns around the second monopile.
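A minimal sketch of the second (drag-term) approach: a quadratic drag sink added to the momentum equations in cells containing a monopile. The drag coefficient, pile diameter, and grid spacings below are hypothetical placeholders, not values from the study.

```python
import numpy as np

def monopile_drag(u, v, cd=1.0, d=6.0, dx=50.0, dy=50.0):
    """Quadratic drag added to the momentum equations in grid cells that
    contain a monopile of diameter d; dx, dy are the cell dimensions.
    Returns the (fx, fy) sink terms opposing the local flow (u, v)."""
    speed = np.hypot(u, v)
    a = d / (dx * dy)            # frontal width per unit cell area
    fx = -0.5 * cd * a * speed * u
    fy = -0.5 * cd * a * speed * v
    return fx, fy

print(monopile_drag(1.2, -0.4))  # drag sink for a 1.26 m/s current
```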
NASA Astrophysics Data System (ADS)
Park, Kyoung-Duck; Raschke, Markus B.
2018-05-01
Controlling the propagation and polarization vectors in linear and nonlinear optical spectroscopy enables probing the anisotropy of optical responses, providing structural-symmetry-selective contrast in optical imaging. Here we present a novel tilted antenna-tip approach to control the optical vector field by breaking the axial symmetry of the nano-probe in tip-enhanced near-field microscopy. This gives rise to a localized plasmonic antenna effect with significantly enhanced optical field vectors with control of both in-plane and out-of-plane components. We use the resulting vector-field specificity in the symmetry-selective nonlinear optical response of second-harmonic generation (SHG) for a generalized approach to optical nano-crystallography and -imaging. In tip-enhanced SHG imaging of monolayer MoS2 films and single-crystalline ferroelectric YMnO3, we reveal nano-crystallographic details of domain boundaries and domain topology with enhanced sensitivity and nanoscale spatial resolution. The approach is applicable to any anisotropic linear and nonlinear optical response, and provides for optical nano-crystallographic imaging of molecular or quantum materials.
A FORTRAN program for multivariate survival analysis on the personal computer.
Mulder, P G
1988-01-01
In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained by the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.
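A compact sketch of the estimation idea in Python rather than FORTRAN: Newton-Raphson maximum likelihood for a log-linear failure rate lambda_i = exp(x_i'beta) under an exponential (constant-hazard) model, with event indicators delta and follow-up times t. The function name, data, and coefficients are synthetic.

```python
import numpy as np

def fit_loglinear_hazard(X, t, delta, iters=25):
    """Newton-Raphson MLE for lambda_i = exp(X_i @ beta).
    Log-likelihood: sum(delta * X@beta - t * exp(X@beta));
    gradient X'(delta - t*lam), Hessian -X' diag(t*lam) X."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        lam = np.exp(X @ beta)
        grad = X.T @ (delta - t * lam)
        hess = -(X * (t * lam)[:, None]).T @ X
        beta -= np.linalg.solve(hess, grad)   # Newton step
    return beta

rng = np.random.default_rng(2)
X = np.column_stack((np.ones(500), rng.normal(size=500)))
t = rng.exponential(1 / np.exp(X @ np.array([-0.5, 0.8])))
print(fit_loglinear_hazard(X, t, np.ones(500)))  # ~ [-0.5, 0.8]
```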
A Solar Radiation Parameterization for Atmospheric Studies. Volume 15
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J. (Editor)
1999-01-01
The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes absorption by water vapor, O3, O2, CO2, clouds, and aerosols, and scattering by clouds, aerosols, and gases. Depending upon the nature of the absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the Delta-Eddington approximation.
Bayesian parameter estimation for nonlinear modelling of biological pathways.
Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang
2011-01-01
The availability of temporal measurements from biological experiments has significantly promoted research in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred form for representing reaction rates in differential equation frameworks, due to their simple structure and easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of this high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models of biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models of biological pathways using time series data. We used the Runge-Kutta method to transform differential equations into difference equations, assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal-to-noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems. Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models of biological pathways. This method can be further extended to high order systems and thus provides a useful tool for analyzing biological dynamics and extracting information from temporal data.
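A minimal sketch of the MCMC idea applied to a Hill equation: random-walk Metropolis sampling of (Vmax, Km, n) from synthetic saturated measurements. This replaces the paper's Runge-Kutta discretization of a full pathway model with a direct curve fit, purely for illustration; the data, priors, and step sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def hill(s, vmax, km, n):
    return vmax * s**n / (km**n + s**n)

# Synthetic data: true vmax=1.0, km=2.0, n=2, known noise sigma
s = np.linspace(0.1, 10, 40)
y = hill(s, 1.0, 2.0, 2) + rng.normal(0, 0.02, s.size)

def log_post(theta, sigma=0.02):
    vmax, km, n = theta
    if vmax <= 0 or km <= 0 or n <= 0:
        return -np.inf                       # flat prior on positive values
    r = y - hill(s, vmax, km, n)
    return -0.5 * np.sum(r**2) / sigma**2    # Gaussian likelihood

theta, lp = np.array([0.5, 1.0, 1.0]), -np.inf
samples = []
for _ in range(20000):                       # random-walk Metropolis
    prop = theta + rng.normal(0, 0.05, 3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
print(np.mean(samples[5000:], axis=0))       # ~ [1.0, 2.0, 2.0] after burn-in
```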
NASA Astrophysics Data System (ADS)
Raju, P. V. S.; Potty, Jayaraman; Mohanty, U. C.
2011-09-01
Comprehensive sensitivity analyses of the physical parameterization schemes of the Weather Research and Forecasting (WRF-ARW core) model have been carried out for the prediction of the track and intensity of tropical cyclones, taking the example of cyclone Nargis, which formed over the Bay of Bengal and hit Myanmar on 02 May 2008, causing widespread human and economic losses. The model performance is also evaluated with different initial conditions at 12 h intervals, starting from cyclogenesis to near the landfall time. The initial and boundary conditions for all the model simulations are drawn from the global operational analysis and forecast products of the National Center for Environmental Prediction (NCEP-GFS), available to the public at 1° lon/lat resolution. The results of the sensitivity analyses indicate that a combination of the non-local parabolic-type exchange coefficient PBL scheme of Yonsei University (YSU), the deep and shallow convection scheme with a mass flux approach for cumulus parameterization (Kain-Fritsch), and the NCEP operational cloud microphysics scheme with diagnostic mixed phase processes (Ferrier) predicts track and intensity better when compared against the Joint Typhoon Warning Center (JTWC) estimates. Further, the final choice of physical parameterization schemes selected from the above sensitivity experiments is used for model integration with different initial conditions. The results reveal that the cyclone track, intensity and time of landfall are well simulated by the model, with an average intensity error of about 8 hPa, a maximum wind error of 12 m s-1 and a track error of 77 km. The simulations also show that the landfall time error and intensity error decrease with delayed initial conditions, suggesting that the model forecast is more dependable as the cyclone approaches the coast. The distribution and intensity of rainfall are also well simulated by the model and comparable with the TRMM estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2016-11-21
Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative way to calculate the correction vector: the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
Adly, Amr A.; Abd-El-Hafiz, Salwa K.
2012-01-01
Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable for accurate field computation in complex magnetic media. Throughout those computations, the vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators, which substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446
Versatile generation of optical vector fields and vector beams using a non-interferometric approach.
Tripathi, Santosh; Toussaint, Kimani C
2012-05-07
We present a versatile, non-interferometric method for generating vector fields and vector beams which can produce all the states of polarization represented on a higher-order Poincaré sphere. The versatility and non-interferometric nature of this method is expected to enable exploration of various exotic properties of vector fields and vector beams. To illustrate this, we study the propagation properties of some vector fields and find that, in general, propagation alters both their intensity and polarization distribution, and more interestingly, converts some vector fields into vector beams. In the article, we also suggest a modified Jones vector formalism to represent vector fields and vector beams.
Selection and parameterization of cortical neurons for neuroprosthetic control.
Wahnoun, Remy; He, Jiping; Helms Tillery, Stephen I
2006-06-01
When designing neuroprosthetic interfaces for motor function, it is crucial to have a system that can extract reliable information from available neural signals and produce an output suitable for real life applications. Systems designed to date have relied on establishing a relationship between neural discharge patterns in motor cortical areas and limb movement, an approach not suitable for patients who require such implants but who are unable to provide proper motor behavior to initially tune the system. We describe here a method that allows rapid tuning of a population vector-based system for neural control without arm movements. We trained highly motivated primates to observe a 3D center-out task as the computer played it very slowly. Based on only 10-12 s of neuronal activity observed in M1 and PMd, we generated an initial mapping between neural activity and device motion that the animal could successfully use for neuroprosthetic control. Subsequent tunings of the parameters led to improvements in control, but the initial selection of neurons and estimated preferred direction for those cells remained stable throughout the remainder of the day. Using this system, we have observed that the contribution of individual neurons to the overall control of the system is very heterogeneous. We thus derived a novel measure of unit quality and an indexing scheme that allowed us to rate each neuron's contribution to the overall control. In offline tests, we found that fewer than half of the units made positive contributions to the performance. We tested this experimentally by having the animals control the neuroprosthetic system using only the 20 best neurons. We found that performance in this case was better than when the entire set of available neurons was used. Based on these results, we believe that, with careful task design, it is feasible to parameterize control systems without any overt behaviors and that subsequent control system design will be enhanced with cautious unit selection. These improvements can lead to systems demanding lower bandwidth and computational power, and will pave the way for more feasible clinical systems.
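A minimal sketch of the population-vector decoding at the heart of such a system: each unit's preferred direction, weighted by its normalized firing rate, is summed to give the decoded movement direction. The cosine-tuning model and all numbers are synthetic, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(1)

n_units = 40
pref = rng.normal(size=(n_units, 3))              # preferred directions
pref /= np.linalg.norm(pref, axis=1, keepdims=True)

target = np.array([0.0, 0.6, 0.8])                # intended movement direction
rates = 10 + 8 * pref @ target + rng.normal(0, 1, n_units)  # cosine tuning
w = (rates - rates.mean()) / rates.std()          # normalized firing rates

decoded = (w[:, None] * pref).sum(axis=0)         # population vector
decoded /= np.linalg.norm(decoded)
err = np.degrees(np.arccos(np.clip(decoded @ target, -1, 1)))
print(f"angular decoding error: {err:.1f} deg")
```

In this scheme, a unit whose weighted direction consistently pulls the decoded vector away from the target contributes negatively, which is the intuition behind the paper's unit-quality index and its finding that a well-chosen subset can outperform the full population.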
Vectorized Jiles-Atherton hysteresis model
NASA Astrophysics Data System (ADS)
Szymański, Grzegorz; Waszak, Michał
2004-01-01
This paper deals with vector hysteresis modeling. A vector model consisting of individual Jiles-Atherton components placed along the principal axes is proposed. Cross-axis coupling ensures general vector model properties. Minor loops are obtained using a scaling method. The model is intended for efficient finite element method computations defined in terms of the magnetic vector potential. Numerical efficiency is ensured by a differential susceptibility approach.
NASA Astrophysics Data System (ADS)
Dipankar, A.; Stevens, B. B.; Zängl, G.; Pondkule, M.; Brdar, S.
2014-12-01
The effect of clouds on large-scale dynamics is represented in climate models through the parameterization of various processes, of which the parameterizations of shallow and deep convection are particularly uncertain. The atmospheric boundary layer, which controls the coupling to the surface and defines the scale of shallow convection, is typically 1 km in depth. Thus, simulations on an O(100 m) grid largely obviate the need for such parameterizations. By crossing this threshold of O(100 m) grid resolution one can begin thinking of large-eddy simulation (LES), wherein the sub-grid scale parameterizations have a sounder theoretical foundation. Substantial initiatives have been taken internationally to approach this threshold. For example, Miura et al., 2007 and Mirakawa et al., 2014 approach this threshold by doing global simulations with (gradually) decreasing grid resolution, to understand the effect of cloud-resolving scales on the general circulation. Our strategy, on the other hand, is to take a big leap forward by fixing the resolution at O(100 m) and gradually increasing the domain size. We believe that breaking this threshold would greatly help in improving parameterization schemes and reducing the uncertainty in climate predictions. To take this forward, the German Federal Ministry of Education and Research has initiated the HD(CP)2 project, which aims for a limited-area LES at a resolution of O(100 m) using the new unified modeling system ICON (Zängl et al., 2014). In the talk, results from the HD(CP)2 evaluation simulation will be shown, targeting a high-resolution simulation over a small domain around Jülich, Germany. This site was chosen because the high-resolution HD(CP)2 Observational Prototype Experiment took place in this region from 1.04.2013 to 31.05.2013, in order to critically evaluate the model. The nesting capabilities of ICON are used to gradually increase the resolution from the outermost domain, which is forced from the COSMO-DE data, to the innermost and finest-resolution domain centered around Jülich (see Fig. 1, top panel). Furthermore, detailed analyses of the simulation results against the observation data will be presented. A representative figure showing the time series of column-integrated water vapor (IWV) for both model and observation on 24.04.2013 is shown in the bottom panel of Fig. 1.
An efficient approach to ARMA modeling of biological systems with multiple inputs and delays
NASA Technical Reports Server (NTRS)
Perrott, M. H.; Cohen, R. J.
1996-01-01
This paper presents a new approach to AutoRegressive Moving Average (ARMA or ARX) modeling which automatically seeks the best model order to represent investigated linear, time-invariant systems using their input/output data. The algorithm seeks the ARMA parameterization which accounts for variability in the output of the system due to input activity and contains the fewest number of parameters required to do so. The unique characteristics of the proposed system identification algorithm are its simplicity and efficiency in handling systems with delays and multiple inputs. We present results of applying the algorithm to simulated data and experimental biological data. In addition, a technique for assessing the error associated with the impulse responses calculated from estimated ARMA parameterizations is presented. The mapping from ARMA coefficients to impulse response estimates is nonlinear, which complicates any effort to construct confidence bounds for the obtained impulse responses. Here a method for obtaining a linearization of this mapping is derived, which leads to a simple procedure to approximate the confidence bounds.
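A minimal single-input sketch of ARX estimation by least squares for a fixed order and delay; the paper's contribution, automatic order selection, is omitted here. The function name and the demonstration system are invented.

```python
import numpy as np

def fit_arx(y, u, na, nb, delay=0):
    """Least-squares fit of an ARX model
    y[t] = sum_i a_i y[t-i] + sum_j b_j u[t-delay-j] + e[t]."""
    start = max(na, nb + delay)
    rows = []
    for t in range(start, len(y)):
        past_y = y[t - na:t][::-1]                       # y[t-1], ..., y[t-na]
        past_u = u[t - delay - nb + 1:t - delay + 1][::-1]  # u[t-delay], ...
        rows.append(np.concatenate((past_y, past_u)))
    Phi = np.asarray(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
    return theta[:na], theta[na:]

rng = np.random.default_rng(3)
u = rng.normal(size=400)
y = np.zeros(400)
for t in range(2, 400):                                  # known test system
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.9 * u[t-1] + 0.01 * rng.normal()
a, b = fit_arx(y, u, na=2, nb=1, delay=1)
print(a, b)   # ~ [0.6, -0.2] and [0.9]
```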
Loupa, G; Rapsomanikis, S; Trepekli, A; Kourtidis, K
2016-01-15
Energy flux parameterization was carried out for the city of Athens, Greece, using two approaches, the Local-Scale Urban Meteorological Parameterization Scheme (LUMPS) and the Bulk Approach (BA). In situ data are used to validate the algorithms of these schemes and to derive coefficients applicable to the study area. Model results from these corrected algorithms are compared with literature results for coefficients applicable to other cities and their differing construction materials. Asphalt and concrete surfaces, canyons and anthropogenic heat releases were found to be the key characteristics of the city center that sustain the elevated surface and air temperatures under hot, sunny and dry weather during the Mediterranean summer. A relationship between storage heat flux plus anthropogenic energy flux and temperatures (surface and lower atmosphere) is presented, which aids understanding of the interplay between temperatures, anthropogenic energy releases and the city characteristics under Urban Heat Island conditions.
New perspectives in tracing vector-borne interaction networks.
Gómez-Díaz, Elena; Figuerola, Jordi
2010-10-01
Disentangling trophic interaction networks in vector-borne systems has important implications for epidemiological and evolutionary studies. Molecular methods based on bloodmeal typing in vectors have been increasingly used to identify hosts. Although most molecular approaches benefit from good specificity and sensitivity, their temporal resolution is limited by the often rapid digestion of blood, and mixed bloodmeals still remain a challenge for bloodmeal identification in multi-host vector systems. Stable isotope analyses represent a novel complementary tool that can overcome some of these problems. The utility of these methods is discussed using examples from different vector-borne systems, and the extent to which they are complementary and versatile is highlighted. There are excellent opportunities for progress in the study of vector-borne transmission networks resulting from the integration of molecular and stable isotope approaches. Copyright © 2010 Elsevier Ltd. All rights reserved.
Segmentation of discrete vector fields.
Li, Hongyu; Chen, Wenbin; Shen, I-Fan
2006-01-01
In this paper, we propose an approach for 2D discrete vector field segmentation based on the Green function and the normalized cut. The method is inspired by the discrete Hodge decomposition, by which a discrete vector field can be broken down into three simpler components, namely, curl-free, divergence-free, and harmonic components. We show that the Green Function Method (GFM) can be used to approximate the curl-free and divergence-free components to achieve our goal of vector field segmentation. The final segmentation curves, which represent the boundaries of the influence regions of singularities, are obtained from the optimal vector field segmentations. These curves are composed of piecewise smooth contours or streamlines. Our method is applicable to both linear and nonlinear discrete vector fields. Experiments show that the segmentations obtained using our approach essentially agree with human perceptual judgement.
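As a small illustration of the ingredients of such a decomposition, the sketch below computes the discrete divergence and scalar curl of a gridded 2D field, the quantities that vanish for the divergence-free and curl-free components respectively. It is pre-processing only, not the GFM or the normalized cut, and the sample field is invented.

```python
import numpy as np

def divergence_curl(u, v, dx=1.0, dy=1.0):
    """Discrete divergence (du/dx + dv/dy) and scalar curl (dv/dx - du/dy)
    of a 2D vector field sampled on a regular grid."""
    du_dx = np.gradient(u, dx, axis=1)
    dv_dy = np.gradient(v, dy, axis=0)
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dy, axis=0)
    return du_dx + dv_dy, dv_dx - du_dy

# Sample field: a pure rotation (zero divergence, constant curl)
y, x = np.mgrid[-2:2:41j, -2:2:41j]
div, curl = divergence_curl(-y, x, dx=0.1, dy=0.1)
print(div.max(), curl.mean())   # ~0 and ~2
```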
NASA Astrophysics Data System (ADS)
Erazo, Kalil; Nagarajaiah, Satish
2017-06-01
In this paper an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing state augmentation as required by existing online methods. In applications expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown, and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) an increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.
Impacts of Light Use Efficiency and fPAR Parameterization on Gross Primary Production Modeling
NASA Technical Reports Server (NTRS)
Cheng, Yen-Ben; Zhang, Qingyuan; Lyapustin, Alexei I.; Wang, Yujie; Middleton, Elizabeth M.
2014-01-01
This study examines the impact of the parameterization of two variables, light use efficiency (LUE) and the fraction of absorbed photosynthetically active radiation (fPAR or fAPAR), on gross primary production (GPP) modeling. Carbon sequestration by terrestrial plants is a key factor in a comprehensive understanding of the carbon budget at the global scale. In this context, accurate measurements and estimates of GPP will allow us to achieve improved carbon monitoring and to quantitatively assess impacts from climate change and human activities. Spaceborne remote sensing observations can provide a variety of land surface parameterizations for modeling photosynthetic activities at various spatial and temporal scales. This study utilizes a simple GPP model based on the LUE concept and different land surface parameterizations to evaluate the model and monitor GPP. Two maize-soybean rotation fields in Nebraska, USA and the Bartlett Experimental Forest in New Hampshire, USA were selected for study. Tower-based eddy-covariance carbon exchange and PAR measurements were collected from the FLUXNET Synthesis Dataset. For the model parameterization, we utilized different values of LUE and fPAR derived from various algorithms. We adapted the approach and parameters from the MODIS MOD17 Biome Properties Look-Up Table (BPLUT) to derive LUE. We also used a site-specific analytic approach with tower-based Net Ecosystem Exchange (NEE) and PAR to estimate maximum potential LUE (LUEmax) from which to derive LUE. For the fPAR parameter, the MODIS MOD15A2 fPAR product was used. We also utilized fAPARchl, a parameter accounting for the fAPAR of the chlorophyll-containing canopy fraction. fAPARchl was obtained by inversion of a radiative transfer model, which used the MODIS-based reflectances in bands 1-7 produced by the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm. fAPARchl exhibited seasonal dynamics more similar to the flux-tower-based GPP than MOD15A2 fPAR, especially in the spring and fall at the agricultural sites. When using the MODIS MOD17-based parameters to estimate LUE, fAPARchl produced better agreement with GPP (r2 = 0.79-0.91) than MOD15A2 fPAR (r2 = 0.57-0.84). However, underestimations of GPP were also observed, especially for the crop fields. When applying the site-specific LUEmax value to estimate in situ LUE, the magnitude of the estimated GPP was closer to the in situ GPP; this method produced a slight overestimation with the MOD15A2 fPAR at the Bartlett forest. This study highlights the importance of accurate land surface parameterizations for achieving reliable carbon monitoring capabilities from remote sensing information.
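The LUE concept is commonly written GPP = LUE x fPAR x PAR, with LUE itself down-regulated from a biome maximum by environmental scalars. A worked toy example with invented numbers (not values from this study) shows how the two parameterization choices enter the calculation:

```python
# LUE-based GPP sketch: GPP = LUE * fPAR * PAR.
# All values are illustrative placeholders, not the study's parameters.
lue_max = 1.2                  # hypothetical biome maximum LUE, g C / MJ APAR
t_scalar, w_scalar = 0.9, 0.8  # temperature and moisture down-regulation
lue = lue_max * t_scalar * w_scalar

fpar = 0.65                    # from MOD15A2 fPAR or fAPARchl
par = 9.5                      # incident PAR, MJ m-2 d-1

gpp = lue * fpar * par         # g C m-2 d-1
print(f"GPP = {gpp:.2f} g C m-2 d-1")   # GPP = 5.34 g C m-2 d-1
```

Swapping MOD15A2 fPAR for fAPARchl, or a BPLUT-based LUE for a site-calibrated LUEmax, changes only these two inputs, which is why the study's comparisons isolate them.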
NASA Astrophysics Data System (ADS)
Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo
2016-04-01
The effects of the propagation and breaking of atmospheric gravity waves have long been considered crucial for their impact on the circulation, especially in the stratosphere and mesosphere, between heights of 10 and 110 km. These waves, which in the Earth's atmosphere originate from surface orography (OGWs) or from transient (nonorographic) phenomena such as fronts and convective processes (NOGWs), have horizontal wavelengths between 10 and 1000 km, vertical wavelengths of several km, and frequencies spanning from minutes to hours. Orographic and nonorographic GWs must be accounted for in climate models to obtain a realistic simulation of the stratosphere in both hemispheres, since they can have a substantial impact on circulation and temperature, and hence an important role in ozone chemistry for chemistry-climate models. Several types of parameterization are currently employed in models, differing in formulation and in the values assigned to parameters, but the common aim is to quantify the effect of wave breaking on large-scale wind and temperature patterns. In the last decade, both global observations from satellite-borne instruments and the outputs of very high resolution climate models have provided insight into the variability and properties of the gravity wave field, and these results can be used to constrain some of the empirical parameters present in most parameterization schemes. A feature of the NOGW forcing that clearly emerges is its intermittency, linked with the nature of the sources: this property is absent in the majority of models, in which NOGW parameterizations are uncoupled from other atmospheric phenomena, leading to results which display lower variability compared to observations. In this work, we analyze the climate simulated in AMIP runs of the MAECHAM5 model, which uses the Hines NOGW parameterization, with a fine vertical resolution suitable for capturing the effects of wave-mean flow interaction. We compare the results obtained with two versions of the model, the default and a new stochastic version in which the value of the perturbation field at the launching level is not constant and uniform, but extracted at each time-step and grid-point from a given PDF. With this approach we are trying to add further variability to the effects given by the deterministic NOGW parameterization: the impact on the simulated climate will be assessed, focusing on the Quasi-Biennial Oscillation of the equatorial stratosphere (known to be driven in part by gravity waves) and on the variability of the mid-to-high-latitude atmosphere. The different characteristics of the circulation will be compared with recent reanalysis products in order to determine the advantages of the stochastic approach over the traditional deterministic scheme.
Enhancing vector shoreline data using a data fusion approach
NASA Astrophysics Data System (ADS)
Carlotto, Mark; Nebrich, Mark; DeMichele, David
2017-05-01
Vector shoreline (VSL) data is potentially useful in ATR systems that distinguish between objects on land and water. Unfortunately, available data such as the NOAA 1:250,000 World Vector Shoreline and NGA Prototype Global Shoreline data cannot be used by themselves to make a land/water determination because of the manner in which the data are compiled. We describe a data fusion approach for creating labeled VSL data that uses test points from Global 30 Arc-Second Elevation (GTOPO30) data to determine the direction of vector segments, i.e., whether their vertices are in clockwise or counterclockwise order. We show that consistently labeled VSL data can be used to easily determine whether a point is on land or water using a vector cross product test.
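A minimal sketch of the cross product test, assuming the convention that consistently ordered shoreline segments keep land on the left of the direction of travel; the segment and test point are invented.

```python
def point_side(a, b, p):
    """2D cross product (b-a) x (p-a): positive if p lies left of the
    directed segment a->b. With shoreline vertices ordered so that land
    is on the left (an assumed convention), left => land, right => water."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

# Directed shoreline segment running east; land assumed to the north (left)
print("land" if point_side((0, 0), (1, 0), (0.5, 0.2)) > 0 else "water")
```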
NASA Technical Reports Server (NTRS)
Randall, David A.
1990-01-01
A bulk planetary boundary layer (PBL) model was developed with a simple internal vertical structure and a simple second-order closure, designed for use as a PBL parameterization in a large-scale model. The model allows the mean fields to vary with height within the PBL, and so must address the vertical profiles of the turbulent fluxes, going beyond the usual mixed-layer assumption that the fluxes of conservative variables are linear with height. This is accomplished using the same convective mass flux approach that has also been used in cumulus parameterizations. The purpose is to show that such a mass flux model can include, in a single framework, the compensating subsidence concept, downgradient mixing, and well-mixed layers.
New Approaches to Parameterizing Convection
NASA Technical Reports Server (NTRS)
Randall, David A.; Lappen, Cara-Lyn
1999-01-01
Many general circulation models (GCMs) currently use separate schemes for planetary boundary layer (PBL) processes, shallow and deep cumulus (Cu) convection, and stratiform clouds. The conventional distinctions among these processes are somewhat arbitrary. For example, in the stratocumulus-to-cumulus transition region, stratocumulus clouds break up into a combination of shallow cumulus and broken stratocumulus. Shallow cumulus clouds may be considered to reside completely within the PBL, or they may be regarded as starting in the PBL but terminating above it. Deeper cumulus clouds often originate within the PBL but can also originate aloft. To the extent that our models separately parameterize physical processes which interact strongly on small space and time scales, the currently fashionable practice of modularization may be doing more harm than good.
We present results from a study testing the new boundary layer parameterization method, the canopy drag approach (DA) which is designed to explicitly simulate the effects of buildings, street and tree canopies on the dynamic, thermodynamic structure and dispersion fields in urban...
Contrasting Causatives: A Minimalist Approach
ERIC Educational Resources Information Center
Tubino Blanco, Mercedes
2010-01-01
This dissertation explores the mechanisms behind the linguistic expression of causation in English, Hiaki (Uto-Aztecan) and Spanish. Pylkkanen's (2002, 2008) analysis of causatives as dependent on the parameterization of the functional head v[subscript CAUSE] is chosen as a point of departure. The studies conducted in this dissertation confirm…
Okumu, Fredros O; Kiware, Samson S; Moore, Sarah J; Killeen, Gerry F
2013-01-16
Indoor residual insecticide spraying (IRS) and long-lasting insecticide treated nets (LLINs) are commonly used together, even though evidence that such combinations confer greater protection against malaria than either method alone is inconsistent. A deterministic model of mosquito life cycle processes was adapted to allow parameterization with results from experimental hut trials of various combinations of untreated nets or LLINs (Olyset, PermaNet 2.0, Icon Life nets) with IRS (pirimiphos methyl, lambda cyhalothrin, DDT), in a setting where vector populations are dominated by Anopheles arabiensis, so that community-level impact upon malaria transmission at high coverage could be predicted. Intact untreated nets alone provide equivalent personal protection to all three LLINs. Relative to IRS plus untreated nets, community-level protection is slightly higher when Olyset or PermaNet 2.0 nets are added onto IRS with pirimiphos methyl or lambda cyhalothrin, but not DDT, and when Icon Life nets supplement any of the IRS insecticides. Adding IRS onto any net modestly enhances communal protection when pirimiphos methyl is sprayed, while spraying lambda cyhalothrin enhances protection for untreated nets but not LLINs. Addition of DDT reduces communal protection when added to LLINs. Where transmission is mediated primarily by An. arabiensis, adding IRS to high LLIN coverage provides only modest incremental benefit (e.g. when an organophosphate like pirimiphos methyl is used), but can be redundant (e.g. when a pyrethroid like lambda cyhalothrin is used) or even regressive (e.g. when DDT is used for the IRS). Relative to IRS plus untreated nets, supplementing IRS with LLINs will only modestly improve community protection. Beyond the physical protection that intact nets provide, the additional protection against transmission by An. arabiensis conferred by insecticides will be remarkably small, regardless of whether they are delivered as LLINs or IRS. The insecticidal action of LLINs and IRS probably already approaches its absolute limit of potential impact upon this persistent vector, so the personal protection of nets should be enhanced by improving their physical integrity and durability. Combining LLINs and non-pyrethroid IRS in residual transmission systems may nevertheless be justified as a means to manage insecticide resistance and prevent the potential rebound of not only An. arabiensis, but also more potent, vulnerable and historically important species such as Anopheles gambiae and Anopheles funestus.
Rate determination from vector observations
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.
1993-01-01
Vector observations are a common class of attitude data provided by a wide variety of attitude sensors. Attitude determination from vector observations is a well-understood process, and numerous algorithms such as the TRIAD algorithm exist. These algorithms require measurement of the line of sight (LOS) vector to reference objects and knowledge of the LOS directions in some predetermined reference frame. Once attitude is determined, it is a simple matter to synthesize vehicle rate using some form of lead-lag filter and then use it for vehicle stabilization. Many situations arise, however, in which rate knowledge is required but knowledge of the nominal LOS directions is not available. This paper presents two methods for determining spacecraft angular rates from vector observations without a priori knowledge of the vector directions. The first approach uses an extended Kalman filter with a spacecraft dynamic model and a kinematic model representing the motion of the observed LOS vectors. The second approach uses a 'differential' TRIAD algorithm to compute the incremental direction cosine matrix, from which vehicle rate is then derived.
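The two building blocks named in this abstract can be sketched compactly. The following is a minimal NumPy illustration, not the paper's implementation: a standard TRIAD construction and a 'differential' TRIAD step that reads the body rate off the incremental direction cosine matrix under a first-order small-angle approximation (the sign convention assumes inertially fixed LOS directions, so dA ≈ I - [ω×]Δt).

```python
import numpy as np

def triad(v1, v2, w1, w2):
    """TRIAD: rotation matrix A such that A @ w_i ~ v_i for the two vector pairs."""
    t1 = v1 / np.linalg.norm(v1)
    t2 = np.cross(v1, v2); t2 /= np.linalg.norm(t2)
    s1 = w1 / np.linalg.norm(w1)
    s2 = np.cross(w1, w2); s2 /= np.linalg.norm(s2)
    return (np.column_stack([t1, t2, np.cross(t1, t2)])
            @ np.column_stack([s1, s2, np.cross(s1, s2)]).T)

def rate_from_los(b1_k, b2_k, b1_k1, b2_k1, dt):
    """'Differential' TRIAD: treat the time-k LOS vectors as the reference for
    time k+1; the incremental DCM then yields the body rate to first order."""
    dA = triad(b1_k1, b2_k1, b1_k, b2_k)  # incremental direction cosine matrix
    # Small-angle: dA ~ I - [w x] dt, so w is read off the skew-symmetric part.
    return np.array([dA[1, 2] - dA[2, 1],
                     dA[2, 0] - dA[0, 2],
                     dA[0, 1] - dA[1, 0]]) / (2.0 * dt)
```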
NASA Astrophysics Data System (ADS)
Luo, Ya-Zhong; Zhang, Jin; Li, Hai-yang; Tang, Guo-Jin
2010-08-01
In this paper, a new optimization approach combining primer vector theory and evolutionary algorithms for fuel-optimal non-linear impulsive rendezvous is proposed. The optimization approach is designed to seek the optimal number of impulses as well as the optimal impulse vectors. In this optimization approach, adding a midcourse impulse is determined by an interactive method, i.e. observing the primer-magnitude time history. An improved version of simulated annealing is employed to optimize the rendezvous trajectory with a fixed number of impulses. This interactive approach is evaluated by three test cases: coplanar circle-to-circle rendezvous, same-circle rendezvous and non-coplanar rendezvous. The results show that the interactive approach is effective and efficient in fuel-optimal non-linear rendezvous design. It can guarantee solutions that satisfy Lawden's necessary optimality conditions.
Tear fluid proteomics multimarkers for diabetic retinopathy screening
2013-01-01
Background The aim of the project was to develop a novel method for diabetic retinopathy screening based on the examination of tear fluid biomarker changes. In order to evaluate the usability of protein biomarkers for pre-screening purposes, several different approaches were used, including machine learning algorithms. Methods All persons involved in the study had diabetes. Diabetic retinopathy (DR) was diagnosed by capturing 7-field fundus images, evaluated by two independent ophthalmologists. 165 eyes were examined (from 119 patients); 55 were diagnosed healthy and 110 images showed signs of DR. Tear samples were taken from all eyes, and state-of-the-art nano-HPLC coupled ESI-MS/MS mass spectrometry protein identification was performed on all samples. The applicability of protein biomarkers was evaluated by six different optimally parameterized machine learning algorithms: Support Vector Machine, Recursive Partitioning, Random Forest, Naive Bayes, Logistic Regression, and K-Nearest Neighbor. Results Of the six investigated machine learning algorithms, Recursive Partitioning proved to be the most accurate. The performance of the system realizing the above algorithm reached 74% sensitivity and 48% specificity. Conclusions Protein biomarkers selected and classified with machine learning algorithms alone are at present not recommended for screening purposes because of low specificity and sensitivity values. This tool can potentially be used to improve the results of image processing methods as a complementary tool in automatic or semiautomatic systems. PMID:23919537
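For readers who want to reproduce the flavor of this comparison, the sketch below evaluates the same six classifier families with cross-validated sensitivity and specificity using scikit-learn. The synthetic feature matrix is a stand-in for the tear-protein data, which is not reproduced here; only the 55/110 class balance is mimicked.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the proteomic feature matrix: 165 eyes, ~110 DR-positive.
X, y = make_classification(n_samples=165, n_features=30, weights=[1/3, 2/3],
                           random_state=0)

models = {
    "SVM": SVC(),
    "Recursive Partitioning": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "K-Nearest Neighbor": KNeighborsClassifier(),
}
for name, clf in models.items():
    pred = cross_val_predict(clf, X, y, cv=5)
    tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
    tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
    print(f"{name}: sensitivity={tp/(tp+fn):.2f}, specificity={tn/(tn+fp):.2f}")
```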
Li, Xianfeng; Murthy, N. Sanjeeva; Becker, Matthew L.; Latour, Robert A.
2016-01-01
A multiscale modeling approach is presented for the efficient construction of an equilibrated all-atom model of a cross-linked poly(ethylene glycol) (PEG)-based hydrogel using the all-atom polymer consistent force field (PCFF). The final equilibrated all-atom model was built with a systematic simulation toolset consisting of three consecutive parts: (1) building a global cross-linked PEG-chain network at experimentally determined cross-link density using an on-lattice Monte Carlo method based on the bond fluctuation model, (2) recovering the local molecular structure of the network by transitioning from the lattice model to an off-lattice coarse-grained (CG) model parameterized from PCFF, followed by equilibration using high performance molecular dynamics methods, and (3) recovering the atomistic structure of the network by reverse mapping from the equilibrated CG structure, hydrating the structure with explicitly represented water, followed by final equilibration using PCFF parameterization. The developed three-stage modeling approach has application to a wide range of other complex macromolecular hydrogel systems, including the integration of peptide, protein, and/or drug molecules as side-chains within the hydrogel network for the incorporation of bioactivity for tissue engineering, regenerative medicine, and drug delivery applications. PMID:27013229
NASA Astrophysics Data System (ADS)
Sanyal, Tanmoy; Shell, M. Scott
2016-07-01
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
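The core idea, a nonbonded energy that depends on a mean-field local density rather than on pairs alone, can be sketched in a few lines. The smooth indicator function and the polynomial form of F below are illustrative assumptions; in the published framework F is a spline whose coefficients come out of relative entropy minimization against all-atom data.

```python
import numpy as np

def local_densities(pos, box, rc=1.0):
    """rho_i = sum_j w(r_ij): mean-field local density around site i, with a
    smooth cutoff w satisfying w(0)=1 and w(rc)=0."""
    rho = np.zeros(len(pos))
    for i in range(len(pos)):
        dr = pos - pos[i]
        dr -= box * np.round(dr / box)          # minimum-image convention
        r = np.linalg.norm(dr, axis=1)
        x = r[(r > 0) & (r < rc)] / rc
        rho[i] = np.sum(1 - 3 * x**2 + 2 * x**3)
    return rho

def local_density_energy(rho, coeffs):
    """U = sum_i F(rho_i); here F is a polynomial standing in for the spline
    fit by relative-entropy coarse-graining."""
    return np.sum(np.polyval(coeffs, rho))
```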
Pion, Kaon, Proton and Antiproton Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.
2008-01-01
Inclusive pion, kaon, proton, and antiproton production from proton-proton collisions is studied at a variety of proton energies. Various available parameterizations of Lorentz-invariant differential cross sections as a function of transverse momentum and rapidity are compared with experimental data. The Badhwar and Alper parameterizations are moderately satisfactory for charged pion production. The Badhwar parameterization provides the best fit for charged kaon production. For proton production, the Alper parameterization is best, and for antiproton production the Carey parameterization works best. However, no parameterization is able to fully account for all the data.
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
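The variance decomposition described above has a compact closed form. A minimal sketch follows, assuming NumPy arrays of per-parameterization means and variances on a common estimation grid and NLSE-derived posterior model weights; names are illustrative.

```python
import numpy as np

def bma_combine(means, variances, weights):
    """BMA posterior mean and variance over parameterization methods.
    means, variances: (n_models, n_cells); weights: posterior probabilities."""
    w = np.asarray(weights, float); w /= w.sum()
    means, variances = np.asarray(means), np.asarray(variances)
    mean = np.einsum('m,mc->c', w, means)
    within = np.einsum('m,mc->c', w, variances)           # within-parameterization
    between = np.einsum('m,mc->c', w, (means - mean)**2)  # between-parameterization
    return mean, within + between
```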
Rotating electrical machines: Poynting flow
NASA Astrophysics Data System (ADS)
Donaghy-Spargo, C.
2017-09-01
This paper presents a complementary approach to the traditional Lorentz and Faraday approaches that are typically adopted in the classroom when teaching the fundamentals of electrical machines—motors and generators. The approach adopted is based upon the Poynting vector, which illustrates the ‘flow’ of electromagnetic energy. It is shown through simple vector analysis that the energy-flux density flow approach can provide insight into the operation of electrical machines, and it is also shown that the results are in agreement with conventional Maxwell stress-based theory. The advantage of this approach is that it completes the physical picture of the electromechanical energy conversion process; it also serves to maintain student interest in the subject and provides an unconventional application of the Poynting vector during the normal study of electromagnetism.
Pedotransfer functions in Earth system science: challenges and perspectives
NASA Astrophysics Data System (ADS)
Van Looy, K.; Minasny, B.; Nemes, A.; Verhoef, A.; Weihermueller, L.; Vereecken, H.
2017-12-01
We make a strong case for a new generation of pedotransfer functions (PTFs) that is currently being developed in the different disciplines of Earth system science, offering strong perspectives for the improvement of integrated process-based models, from local to global scale applications. PTFs are simple to complex knowledge rules that relate available soil information to soil properties and variables that are needed to parameterize soil processes. To meet the methodological challenges for a successful application in Earth system modeling, we highlight how PTF development needs to go hand in hand with suitable extrapolation and upscaling techniques such that the PTFs correctly capture the spatial heterogeneity of soils. The most actively pursued recent developments are related to parameterizations of solute transport, heat exchange, soil respiration and organic carbon content, root density and vegetation water uptake. We present an outlook and stepwise approach to the development of a comprehensive set of PTFs that can be applied throughout a wide range of disciplines of Earth system science, with emphasis on land surface models. Novel sensing techniques and soil information availability provide a true breakthrough for this, yet further improvements are necessary in three domains: (1) determining unknown relationships and dealing with uncertainty in Earth system modeling; (2) spatially deploying this knowledge, with PTF validation at regional to global scales; and (3) integrating and linking the complex model parameterizations (coupled parameterization). We show that such integration is an achievable goal.
Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo
2017-12-01
The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
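To convey the spirit of the inverse step, the toy sketch below fits a single correlation parameter by matching downstream BTC quantiles under a deliberately simplified correlated step model: with probability r a particle keeps its velocity-class rank, otherwise it resamples, and successive reaches are assumed statistically identical. This is an illustration of the idea only, not the authors' estimator.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def predict_downstream(t1, r, seed=0):
    """One correlated step: with probability r a particle keeps its velocity-
    class rank; otherwise it resamples a rank uniformly. Equal plane spacing
    and statistically identical reaches are assumed."""
    rng = np.random.default_rng(seed)
    ranks = np.argsort(np.argsort(t1)) / len(t1)
    stay = rng.random(len(t1)) < r
    new_ranks = np.where(stay, ranks, rng.random(len(t1)))
    return t1 + np.quantile(t1, new_ranks)   # add the next-step travel time

def fit_correlation(t1, t2):
    """Estimate the correlation parameter by matching downstream BTC quantiles."""
    q = np.linspace(0.05, 0.95, 19)
    target = np.quantile(t2, q)
    misfit = lambda r: np.sum((np.quantile(predict_downstream(t1, r), q) - target)**2)
    return minimize_scalar(misfit, bounds=(0.0, 1.0), method='bounded').x
```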
Chakravarthi, Srikant; Monroy-Sosa, Alejandro; Gonen, Lior; Fukui, Melanie; Rovin, Richard; Kojis, Nathaniel; Lindsay, Mark; Khalili, Sammy; Celix, Juanita; Corsten, Martin; Kassam, Amin B
2018-06-01
Endoscopic endonasal access to the jugular foramen and occipital condyle - the transcondylar-transtubercular approach - is anatomically complex and requires detailed knowledge of the relative position of critical neurovascular structures in order to avoid inadvertent injury and resultant complications. However, access to this region can be confusing, as the orientation and relationships of osseous, vascular, and neural structures differ markedly from those of traditional dorsal approaches. This review aims to provide an organizational construct for a more understandable framework for accessing the transcondylar-transtubercular window. The region can be conceptualized using a three-vector coordinate system: vector 1 represents a dorsal or ventral corridor; vector 2 represents the outer and inner circumferential anatomical limits, in an "onion-skin" fashion, with key osseous, vascular, and neural landmarks organized based on a 360-degree skull base model; and vector 3 represents the final core or target of the surgical corridor. The creation of an organized "global-positioning system" may better guide the surgeon in accessing the far-medial transcondylar-transtubercular region and related pathologies, and help in understanding the surgical limits to the occipital condyle and jugular foramen - the ventral posterolateral corridor - via the endoscopic endonasal approach.
Construction of siRNA/miRNA expression vectors based on a one-step PCR process
Xu, Jun; Zeng, Jie Qiong; Wan, Gang; Hu, Gui Bin; Yan, Hong; Ma, Li Xin
2009-01-01
Background RNA interference (RNAi) has become a powerful means for silencing target gene expression in mammalian cells and is envisioned to be useful in therapeutic approaches to human disease. In recent years, high-throughput, genome-wide screening of siRNA/miRNA libraries has emerged as a desirable approach. Current methods for constructing siRNA/miRNA expression vectors require the synthesis of long oligonucleotides, which is costly and suffers from mutation problems. Results Here we report an ingenious method to solve traditional problems associated with construction of siRNA/miRNA expression vectors. We synthesized shorter primers (< 50 nucleotides) to generate a linear expression structure by PCR. The PCR products were directly transformed into chemically competent E. coli and converted to functional vectors in vivo via homologous recombination. The positive clones could be easily screened under UV light. Using this method we successfully constructed over 500 functional siRNA/miRNA expression vectors. Sequencing of the vectors confirmed a high accuracy rate. Conclusion This novel, convenient, low-cost and highly efficient approach may be useful for high-throughput assays of RNAi libraries. PMID:19490634
Accelerating 4D flow MRI by exploiting vector field divergence regularization.
Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian
2016-01-01
To improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing divergence of the measured flow field. Iterative image reconstruction, in which magnitude and phase are regularized separately in alternating iterations, was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite-difference (FD) method based on the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve the reconstruction accuracy of velocity vector fields. © 2014 Wiley Periodicals, Inc.
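The FD regularizer's divergence term is straightforward to sketch with NumPy. The published method also penalizes curl and embeds the term in an alternating magnitude/phase reconstruction; the fragment below, with illustrative names, shows only the divergence penalty.

```python
import numpy as np

def divergence(vx, vy, vz, d=(1.0, 1.0, 1.0)):
    """Finite-difference divergence of a 3D velocity field on a regular grid."""
    return (np.gradient(vx, d[0], axis=0) +
            np.gradient(vy, d[1], axis=1) +
            np.gradient(vz, d[2], axis=2))

def l1_divergence_penalty(vx, vy, vz, lam=1e-2):
    """ℓ1-norm divergence penalty added to the data-fidelity objective during
    iterative reconstruction of the phase (velocity) images."""
    return lam * np.abs(divergence(vx, vy, vz)).sum()
```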
NASA Astrophysics Data System (ADS)
Gyftakis, Konstantinos N.; Marques Cardoso, Antonio J.; Antonino-Daviu, Jose A.
2017-09-01
The Park's Vector Approach (PVA), together with its variations, has been one of the most widespread diagnostic methods for electrical machines and drives. Regarding broken rotor bar fault diagnosis in induction motors, the common practice is to rely on the width increase of the Park's Vector (PV) ring and then apply some more sophisticated signal processing methods. It is shown in this paper that this method can be unreliable and is strongly dependent on the magnetic pole and rotor slot numbers. To overcome this constraint, the novel Filtered Park's/Extended Park's Vector Approach (FPVA/FEPVA) is introduced. The investigation is carried out with FEM simulations and experimental testing. The simulated and experimental results coincide satisfactorily, and the proposed advanced FPVA method proves desirably reliable.
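For reference, the classical Park's vector underlying all PVA variants is computed from the three stator phase currents with the standard transform below; for a healthy, balanced machine the (id, iq) locus is a circle, and rotor-bar faults thicken it into the ring whose width the diagnosis inspects. A minimal NumPy sketch:

```python
import numpy as np

def parks_vector(ia, ib, ic):
    """Park's vector components from the three phase currents."""
    i_d = np.sqrt(2.0/3.0)*ia - ib/np.sqrt(6.0) - ic/np.sqrt(6.0)
    i_q = ib/np.sqrt(2.0) - ic/np.sqrt(2.0)
    return i_d, i_q

# Balanced supply: the (id, iq) locus traces a circle of constant radius.
t = np.linspace(0.0, 0.2, 2000)
ia = np.cos(2*np.pi*50*t)
ib = np.cos(2*np.pi*50*t - 2*np.pi/3)
ic = np.cos(2*np.pi*50*t + 2*np.pi/3)
i_d, i_q = parks_vector(ia, ib, ic)
```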
Evaluation and intercomparison of five major dry deposition algorithms in North America
Dry deposition of various pollutants needs to be quantified in air quality monitoring networks as well as in chemical transport models. The inferential method is the most commonly used approach in which the dry deposition velocity (Vd) is empirically parameterized as a function o...
USDA-ARS?s Scientific Manuscript database
The complexity of the hydrologic system challenges the development of models. One issue faced during the model development stage is the uncertainty involved in model parameterization. Using a single optimized set of parameters (one snapshot) to represent baseline conditions of the system limits the ...
Anopheles Vectors in Mainland China While Approaching Malaria Elimination.
Zhang, Shaosen; Guo, Shaohua; Feng, Xinyu; Afelt, Aneta; Frutos, Roger; Zhou, Shuisen; Manguin, Sylvie
2017-11-01
China is approaching malaria elimination; however, well-documented information on malaria vectors is still missing, which could hinder the development of appropriate surveillance strategies and WHO certification. This review summarizes the nationwide distribution of malaria vectors, their bionomic characteristics, control measures, and related studies. After several years of effort, the area of distribution of the principal malaria vectors was reduced, in particular for Anopheles lesteri (synonym: An. anthropophagus) and Anopheles dirus s.l., which nearly disappeared from their former endemic regions. Anopheles sinensis is becoming the predominant species in southwestern China. The bionomic characteristics of these species have changed, and resistance to insecticides was reported. There is a need to update surveillance tools and investigate the role of secondary vectors in malaria transmission. Copyright © 2017 Elsevier Ltd. All rights reserved.
Reverse chemical ecology approach for the identification of a mosquito oviposition attractant
USDA-ARS?s Scientific Manuscript database
Pheromones and other semiochemicals play a crucial role in today’s integrated pest and vector management strategies for controlling populations of insects causing losses to agriculture and vectoring diseases to humans. These semiochemicals are typically discovered by bioassay-guided approaches. Here,...
A vectorial semantics approach to personality assessment.
Neuman, Yair; Cohen, Yochai
2014-04-23
Personality assessment and, specifically, the assessment of personality disorders have traditionally been indifferent to computational models. Computational personality is a new field that involves the automatic classification of individuals' personality traits that can be compared against gold-standard labels. In this context, we introduce a new vectorial semantics approach to personality assessment, which involves the construction of vectors representing personality dimensions and disorders, and the automatic measurements of the similarity between these vectors and texts written by human subjects. We evaluated our approach by using a corpus of 2468 essays written by students who were also assessed through the five-factor personality model. To validate our approach, we measured the similarity between the essays and the personality vectors to produce personality disorder scores. These scores and their correspondence with the subjects' classification of the five personality factors reproduce patterns well-documented in the psychological literature. In addition, we show that, based on the personality vectors, we can predict each of the five personality factors with high accuracy.
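The core computation, measuring similarity between personality-dimension vectors and text vectors, can be illustrated in a few lines. The seed lexicons and sample essays below are hypothetical stand-ins for the published personality vectors and the 2468-essay corpus, and a bag-of-words TF-IDF space stands in for the paper's semantic space.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical seed lexicons standing in for the published personality vectors.
dimension_seeds = {
    "neuroticism": "anxious worried tense moody insecure fearful",
    "extraversion": "outgoing sociable talkative energetic assertive lively",
}
essays = ["I often feel tense and worry about small things.",
          "I love meeting people and talking at parties."]

vec = TfidfVectorizer()
X = vec.fit_transform(list(dimension_seeds.values()) + essays)
dim_vecs, essay_vecs = X[:2], X[2:]
scores = cosine_similarity(essay_vecs, dim_vecs)  # essays x dimensions
print(np.round(scores, 2))
```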
ERIC Educational Resources Information Center
Kwon, Oh Hoon
2012-01-01
This dissertation documents a new way of conceptualizing vectors in college mathematics, especially in geometry. First, I will introduce three problems to show the complexity and subtlety of the construct of vectors with the classical vector representations. These highlight the need for a new framework that: (1) differentiates abstraction from a…
Surface water areas significantly impacted 2014 dengue outbreaks in Guangzhou, China
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Huaiyu; Huang, Shanqian
Dengue transmission in urban areas is strongly influenced by a range of biological and environmental factors, yet the key drivers still need further exploration. To better understand mechanisms of environment–mosquito–urban dengue transmission, we propose an empirical model parameterized and cross-validated from a unique dataset including viral gene sequences, vector dynamics and human dengue cases in Guangzhou, China, together with 36 years of urban environmental change maps derived by spatiotemporal satellite image fusion. The dengue epidemics in Guangzhou are highly episodic and were not associated with annual rainfall over time. Our results indicate that urban environmental changes, especially variations in the surface area covered by water in urban areas, can substantially alter the virus population and dengue transmission. The recent severe dengue outbreaks in Guangzhou may be due to the surge in artificial lake construction, which could have increased the force of infection between vector (mainly Aedes albopictus) and host as the urban water area significantly increased. Impacts of urban environmental change on dengue dynamics may not have been thoroughly investigated in past studies, and more work needs to be done to better understand the consequences of urbanization processes in our changing world. - Highlights: • Urban dengue outbreak is associated with water area in Guangzhou, 1978–2014. • Surface water area can alter population size of dengue virus in urban area. • Urban dengue outbreak is not associated with annual rainfall in Guangzhou. • Spatiotemporal satellite image fusion can investigate urban environmental change. • Urban environmental change could induce virus, vector, and dengue epidemic change.
Fan filters, the 3-D Radon transform, and image sequence analysis.
Marzetta, T L
1994-01-01
This paper develops a theory for the application of fan filters to moving objects. In contrast to previous treatments of the subject based on the 3-D Fourier transform, simplicity and insight are achieved by using the 3-D Radon transform. With this point of view, the Radon transform decomposes the image sequence into a set of plane waves that are parameterized by a two-component slowness vector. Fan filtering is equivalent to a multiplication in the Radon transform domain by a slowness response function, followed by an inverse Radon transform. The plane wave representation of a moving object involves only a restricted set of slownesses such that the inner product of the plane wave slowness vector and the moving object velocity vector is equal to one. All of the complexity in the application of fan filters to image sequences results from the velocity-slowness mapping not being one-to-one; therefore, the filter response cannot be independently specified at all velocities. A key contribution of this paper is to elucidate both the power and the limitations of fan filtering in this new application. A potential application of 3-D fan filters is in the detection of moving targets in clutter and noise. For example, an appropriately designed fan filter can reject perfectly all moving objects whose speed, irrespective of heading, is less than a specified cut-off speed, with only minor attenuation of significantly faster objects. A simple geometric construction determines the response of the filter for speeds greater than the cut-off speed.
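Although the paper works in the 3-D Radon (slowness) domain, the ideal cut-off-speed behavior described in the last paragraph is easiest to sketch in the equivalent 3-D Fourier domain: a plane wave moving with velocity v lives on the plane ω + k·v = 0, so all energy from objects slower than v_cut lies where |ω| < v_cut|k|. A minimal NumPy sketch, assuming a regularly sampled image sequence; this is an illustrative Fourier-domain analogue, not the Radon-domain implementation.

```python
import numpy as np

def fan_filter(seq, dt, dx, v_cut):
    """Zero the spectral region |omega| < v_cut * |k|, rejecting every object
    whose speed (irrespective of heading) is below v_cut.
    seq: image sequence of shape (nt, ny, nx)."""
    nt, ny, nx = seq.shape
    F = np.fft.fftn(seq)
    w = 2 * np.pi * np.fft.fftfreq(nt, dt)[:, None, None]
    ky = 2 * np.pi * np.fft.fftfreq(ny, dx)[None, :, None]
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)[None, None, :]
    passband = np.abs(w) >= v_cut * np.sqrt(kx**2 + ky**2)
    return np.real(np.fft.ifftn(F * passband))
```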
NASA Astrophysics Data System (ADS)
Young, Jonathan; Ridgway, Gerard; Leung, Kelvin; Ourselin, Sebastien
2012-02-01
It is well known that hippocampal atrophy is a marker of the onset of Alzheimer's disease (AD), and as a result hippocampal volumetry has been used in a number of studies to provide early diagnosis of AD and predict conversion of mild cognitive impairment patients to AD. However, rates of atrophy are not uniform across the hippocampus, making shape analysis a potentially more accurate biomarker. This study examines the hippocampi from 226 healthy controls, 148 AD patients and 330 MCI patients obtained from T1-weighted structural MRI images from the ADNI database. The hippocampi are anatomically segmented using the MAPS multi-atlas segmentation method, and the resulting binary images are then processed with SPHARM software to decompose their shapes as a weighted sum of spherical harmonic basis functions. The resulting parameterizations are then used as feature vectors in Support Vector Machine (SVM) classification. A wrapper-based feature selection method was used, as this considers the utility of features in discriminating classes in combination, fully exploiting the multivariate nature of the data and optimizing the selected set of features for the type of classifier that is used. The leave-one-out cross-validated accuracy obtained on training data is 88.6% for classifying AD vs controls and 74% for classifying MCI-converters vs MCI-stable with very compact feature sets, showing that this is a highly promising method. There is currently a considerable fall in accuracy on unseen data, indicating that the feature selection is sensitive to the data used; feature ensemble methods may overcome this.
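A wrapper selection plus leave-one-out evaluation of this kind can be sketched with scikit-learn; the synthetic matrix below stands in for the SPHARM coefficient vectors. Note that selecting features on the full data set and then cross-validating, as in this sketch, is optimistically biased, which is consistent with the accuracy drop on unseen data reported above.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for SPHARM coefficient vectors (two diagnostic groups).
X, y = make_classification(n_samples=120, n_features=60, n_informative=8,
                           random_state=1)

svm = SVC(kernel="linear")
selector = SequentialFeatureSelector(svm, n_features_to_select=10, cv=5)
X_sel = selector.fit_transform(X, y)  # wrapper: subsets scored by the SVM itself

acc = cross_val_score(svm, X_sel, y, cv=LeaveOneOut()).mean()
print(f"LOO accuracy on selected features: {acc:.2f}")
```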
NASA Technical Reports Server (NTRS)
Chen, D. W.; Sengupta, S. K.; Welch, R. M.
1989-01-01
This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
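As a concrete reference for why SADH is so much cheaper than GLCM, the sketch below computes Unser-style sum and difference histograms for a horizontal displacement and derives GLCM-equivalent statistics from two 1-D histograms instead of a 2-D co-occurrence matrix. The feature choices are illustrative, not the paper's exact set.

```python
import numpy as np

def sadh_features(img, d=1, levels=256):
    """Sum and Difference Histogram features for horizontal displacement d;
    img must be an integer array with values in [0, levels)."""
    a = img[:, :-d].astype(int)
    b = img[:, d:].astype(int)
    hs = np.bincount((a + b).ravel(), minlength=2 * levels - 1)
    hd = np.bincount((a - b).ravel() + levels - 1, minlength=2 * levels - 1)
    hs = hs / hs.sum(); hd = hd / hd.sum()
    i = np.arange(2 * levels - 1)
    j = i - (levels - 1)                      # centered difference index
    mean = np.sum(i * hs) / 2.0               # texture mean
    contrast = np.sum(j**2 * hd)              # GLCM-equivalent contrast
    homogeneity = np.sum(hd / (1.0 + j**2))   # GLCM-equivalent homogeneity
    return mean, contrast, homogeneity
```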
Campos, Samuel K; Barry, Michael A
2004-11-01
There are extensive efforts to develop cell-targeting adenoviral vectors for gene therapy wherein endogenous cell-binding ligands are ablated and exogenous ligands are introduced by genetic means. Although current approaches can genetically manipulate the capsid genes of adenoviral vectors, these approaches can be time-consuming and require multiple steps to produce a modified viral genome. We present here the use of the bacteriophage lambda Red recombination system as a valuable tool for the easy and rapid construction of capsid-modified adenoviral genomes.
NASA Astrophysics Data System (ADS)
Lim, Yeerang; Jung, Youeyun; Bang, Hyochoong
2018-05-01
This study presents model predictive formation control based on an eccentricity/inclination vector separation strategy. Collision avoidance can be accomplished by using eccentricity/inclination vectors and adding a simple goal-function term to the optimization process. Real-time control is also achievable with a model predictive controller based on a convex formulation. A constraint-tightening approach is addressed as well to improve the robustness of the controller, and simulation results are presented to verify the performance enhancement of the proposed approach.
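The geometric idea behind e/i vector separation can be stated in one function: passive safety requires the relative eccentricity and inclination vectors to stay (anti-)parallel, so radial and cross-track separations never vanish simultaneously. The sketch below, including the cost term in the comment, is an assumption about how such a goal-function term could look, not the paper's formulation.

```python
import numpy as np

def ei_alignment(de, di):
    """Cosine of the angle between the relative eccentricity vector de and the
    relative inclination vector di (2-vectors). |cos| near 1 means radial and
    cross-track separations cannot vanish at the same along-track position."""
    return float(np.dot(de, di) / (np.linalg.norm(de) * np.linalg.norm(di)))

# Hypothetical goal-function term biasing the MPC toward safe geometries:
# cost += w_safety * (1.0 - ei_alignment(de, di)**2)
```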
NASA Astrophysics Data System (ADS)
Lin, Shangfei; Sheng, Jinyu
2017-12-01
Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed for representing the depth-induced wave breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are the representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations have reasonable performance in parameterizing depth-induced wave breaking in shallow waters, but with their own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) has a drawback of underpredicting the SWHs in locally generated wave conditions and overpredicting them in remotely generated wave conditions over flat bottoms. The drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15), but SA15 had relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization with a dependence of the breaker index on the normalized water depth in deep waters similar to SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance, with an average scatter index of ∼8.2%, in comparison with the three best-performing existing parameterizations, whose average scatter indices lie between 9.2% and 13.6%.
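As a concrete anchor for the comparison, the BJ78 fraction of breaking waves Qb solves (1 - Qb)/ln(Qb) = -(Hrms/Hmax)^2 with Hmax = γd. A minimal SciPy root-finding sketch follows; γ = 0.73 is a commonly used default, not the value from the new parameterization, whose slope-dependent breaker index is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

def breaking_fraction(hrms, depth, gamma=0.73):
    """Battjes-Janssen (1978): Qb solves (1 - Qb)/ln(Qb) = -(Hrms/Hmax)^2,
    with Hmax = gamma * depth."""
    b2 = (hrms / (gamma * depth))**2
    if b2 >= 1.0:
        return 1.0                        # saturated surf zone
    if b2 < 2e-3:
        return float(np.exp(-1.0 / b2))   # asymptotic root for tiny Hrms/Hmax
    f = lambda q: (1.0 - q) / np.log(q) + b2
    return brentq(f, 1e-308, 1.0 - 1e-12)
```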
Dislocation dynamics in hexagonal close-packed crystals
Aubry, S.; Rhee, M.; Hommes, G.; ...
2016-04-14
Extensions of the dislocation dynamics methodology necessary to enable accurate simulations of crystal plasticity in hexagonal close-packed (HCP) metals are presented. They concern the introduction of dislocation motion in HCP crystals through linear and non-linear mobility laws, as well as the treatment of composite dislocation physics. Formation, stability, and dissociation of dislocations with large Burgers vectors, defined as composite dislocations, are examined, and a new topological operation is proposed to enable their dissociation. Furthermore, the results of our simulations suggest that composite dislocations are omnipresent and may play important roles both in specific dislocation mechanisms and in bulk crystal plasticity in HCP materials. While fully microscopic, our bulk DD simulations provide a wealth of data that can be used to develop and parameterize constitutive models of crystal plasticity at the mesoscale.
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Katsaros, Kristina B.
1994-01-01
Based on a geometric optics model and the assumption of an isotropic Gaussian surface slope distribution, the component of ocean surface microwave emissivity variation due to large-scale surface roughness is parameterized for the frequencies and approximate viewing angle of the Special Sensor Microwave/Imager. Independent geophysical variables in the parameterization are the effective (microwave frequency dependent) slope variance and the sea surface temperature. Using the same physical model, the change in the effective zenith angle of reflected sky radiation arising from large-scale roughness is also parameterized. Independent geophysical variables in this parameterization are the effective slope variance and the atmospheric optical depth at the frequency in question. Both of the above model-based parameterizations are intended for use in conjunction with empirical parameterizations relating effective slope variance and foam coverage to near-surface wind speed. These empirical parameterizations are the subject of a separate paper.
NASA Astrophysics Data System (ADS)
Bae, Seungbin; Lee, Kisung; Seo, Changwoo; Kim, Jungmin; Joo, Sung-Kwan; Joung, Jinhun
2011-09-01
We developed a high precision position decoding method for a positron emission tomography (PET) detector that consists of a thick slab scintillator coupled with a multichannel photomultiplier tube (PMT). The DETECT2000 simulation package was used to validate light response characteristics for a 48.8 mm×48.8 mm×10 mm slab of lutetium oxyorthosilicate coupled to a 64-channel PMT. The data are then combined to produce light collection histograms. We employed a Gaussian mixture model (GMM) to parameterize the composite light response with multiple Gaussian mixtures. In the training step, the light photons acquired by N PMT channels were used as an N-dimensional feature vector and fed into a GMM training model to generate optimal parameters for M mixtures. In the positioning step, we decoded the spatial locations of incident photons by evaluating a sample feature vector with respect to the trained mixture parameters. The average spatial resolutions after positioning with four mixtures were 1.1 mm full width at half maximum (FWHM) at the corner and 1.0 mm FWHM at the center section. This indicates that the proposed algorithm achieved high performance in both spatial resolution and positioning bias, especially at the corner section of the detector.
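A simplified classification-style variant of the decoding step can be sketched with scikit-learn: fit one GMM per calibration position to the 64-channel light-response vectors, then assign an event to the position with the highest likelihood. The data layout and four-component choice follow the abstract; everything else (function names, diagonal covariances, discrete positions instead of continuous decoding) is an illustrative assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_position_models(features_by_pos, n_mix=4):
    """One GMM (n_mix components) per calibration position, fit to the
    64-channel PMT light-response feature vectors (n_events x 64 arrays)."""
    return {pos: GaussianMixture(n_components=n_mix, covariance_type='diag',
                                 random_state=0).fit(f)
            for pos, f in features_by_pos.items()}

def decode_position(models, sample):
    """Assign an event to the calibration position whose trained mixture gives
    the highest log-likelihood for its feature vector."""
    return max(models, key=lambda p: models[p].score(sample[None, :]))
```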
Kalman Filter for Spinning Spacecraft Attitude Estimation
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Sedlak, Joseph E.
2008-01-01
This paper presents a Kalman filter using a seven-component attitude state vector comprising the angular momentum components in an inertial reference frame, the angular momentum components in the body frame, and a rotation angle. The relatively slow variation of these parameters makes this parameterization advantageous for spinning spacecraft attitude estimation. The filter accounts for the constraint that the magnitude of the angular momentum vector is the same in the inertial and body frames by employing a reduced six-component error state. Four variants of the filter, defined by different choices for the reduced error state, are tested against a quaternion-based filter using simulated data for the THEMIS mission. Three of these variants choose three of the components of the error state to be the infinitesimal attitude error angles, facilitating the computation of measurement sensitivity matrices and causing the usual 3x3 attitude covariance matrix to be a submatrix of the 6x6 covariance of the error state. These variants differ in their choice for the other three components of the error state. The variant employing the infinitesimal attitude error angles and the angular momentum components in an inertial reference frame as the error state shows the best combination of robustness and efficiency in the simulations. Attitude estimation results using THEMIS flight data are also presented.
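The dynamics underlying the slow parameter variation claimed above are those of torque-free rotation: the inertial-frame angular momentum is constant and the body-frame momentum obeys Euler's equation, so |L| is conserved in both frames, which is exactly the constraint the reduced six-component error state enforces. A minimal propagation sketch (explicit Euler step, illustrative only, not the filter's propagator):

```python
import numpy as np

def propagate_momentum_body(L_body, inertia, dt):
    """Torque-free Euler step for the body-frame angular momentum:
    dL/dt = L x omega with omega = I^{-1} L; |L| is conserved, matching the
    constant inertial-frame momentum."""
    omega = np.linalg.solve(inertia, L_body)
    return L_body + dt * np.cross(L_body, omega)
```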
Optimal lattice-structured materials
Messner, Mark C.
2016-07-09
This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.
Parameterizing the Morse Potential for Coarse-Grained Modeling of Blood Plasma
Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan
2014-01-01
Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in Molecular Dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility and characteristic flow dynamics of human blood plasma fluid. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle based multiscale model compare favorably to classic viscous flow solutions such as Counter-Poiseuille and Couette flows. It demonstrates that such coarse grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum based models fail to handle adequately. PMID:24910470
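For reference, the (unmodified) Morse pair potential and its radial derivative are given below; in the paper's CG model the well depth, range parameter, equilibrium distance, and effective particle masses are the quantities tuned by the inverse-problem search until density, viscosity, and compressibility match plasma values. A minimal sketch with illustrative defaults:

```python
import numpy as np

def morse(r, De=1.0, a=1.0, r0=1.0):
    """Morse pair potential U(r) = De * ((1 - exp(-a (r - r0)))^2 - 1)
    and its radial derivative dU/dr (force magnitude = -dU/dr)."""
    e = np.exp(-a * (r - r0))
    U = De * ((1.0 - e)**2 - 1.0)
    dU = 2.0 * De * a * e * (1.0 - e)
    return U, dU
```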
Structural test of the parameterized-backbone method for protein design.
Plecs, Joseph J; Harbury, Pehr B; Kim, Peter S; Alber, Tom
2004-09-03
Designing new protein folds requires a method for simultaneously optimizing the conformation of the backbone and the side-chains. One approach to this problem is the use of a parameterized backbone, which allows the systematic exploration of families of structures. We report the crystal structure of RH3, a right-handed, three-helix coiled coil that was designed using a parameterized backbone and detailed modeling of core packing. This crystal structure was determined using another rationally designed feature, a metal-binding site that permitted experimental phasing of the X-ray data. RH3 adopted the intended fold, which has not been observed previously in biological proteins. Unanticipated structural asymmetry in the trimer was a principal source of variation within the RH3 structure. The sequence of RH3 differs from that of a previously characterized right-handed tetramer, RH4, at only one position in each 11 amino acid sequence repeat. This close similarity indicates that the design method is sensitive to the core packing interactions that specify the protein structure. Comparison of the structures of RH3 and RH4 indicates that both steric overlap and cavity formation provide strong driving forces for oligomer specificity.
Actual and Idealized Crystal Field Parameterizations for the Uranium Ions in UF4
NASA Astrophysics Data System (ADS)
Gajek, Z.; Mulak, J.; Krupa, J. C.
1993-12-01
The crystal field parameters for the actual coordination symmetries of the uranium ions in UF4, C2 and C1, and for their idealizations to D2, C2v, D4, D4d, and the Archimedean antiprism point symmetries are given. They have been calculated by means of both the perturbative ab initio model and the angular overlap model and are referenced to the recent results fitted by Carnall's group. The equivalency of some different sets of parameters has been verified with the standardization procedure. The adequacy of several idealized approaches has been tested by comparison of the corresponding splitting patterns of the 3H4 ground state. Our results support the parameterization given by Carnall. Furthermore, the parameterization of the crystal field potential and the splitting diagram for the symmetryless uranium ion U(C1) are given. Having at our disposal the crystal field splittings for the two kinds of uranium ions in UF4, U(C2) and U(C1), we calculate the model plots of the paramagnetic susceptibility χ(T) and the magnetic entropy associated with the Schottky anomaly ΔS(T) for UF4.
Model-driven harmonic parameterization of the cortical surface: HIP-HOP.
Auzias, G; Lefèvre, J; Le Troter, A; Fischer, C; Perrot, M; Régis, J; Coulon, O
2013-05-01
In the context of inter-subject brain surface matching, we present a parameterization of the cortical surface constrained by a model of cortical organization. The parameterization is defined via a harmonic mapping of each hemisphere surface to a rectangular planar domain that integrates a representation of the model. As opposed to previous landmark-based registration methods, we do not match folds between individuals but instead optimize the fit between cortical sulci and specific iso-coordinate axes in the model. This strategy overcomes some limitations of sulcus-based registration techniques, such as the topological variability of sulcal landmarks across subjects. Experiments on 62 subjects with manually traced sulci are presented and compared with the results of the Freesurfer software. The evaluation involves a measure of the dispersion of sulci with both angular and area distortions. We show that the model-based strategy can lead to a natural, efficient and very fast (less than 5 min per hemisphere) method for defining inter-subject correspondences. We discuss how this approach also reduces the problems inherent to anatomically defined landmarks and opens the way to the investigation of cortical organization through the notion of orientation and alignment of structures across the cortex.
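The harmonic mapping at the heart of this method solves a Laplace equation on the cortical mesh with the rectangle boundary fixed. A minimal sketch with uniform graph-Laplacian weights follows; the published method additionally weights edges and constrains sulci to iso-coordinate axes, which is not reproduced here.

```python
import numpy as np

def harmonic_map(n_vertices, edges, boundary_idx, boundary_uv):
    """Discrete harmonic map: solve L u = 0 for interior vertices with the
    rectangle boundary fixed (uniform graph-Laplacian weights)."""
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, j] = L[j, i] = -1.0
    np.fill_diagonal(L, -L.sum(axis=1))       # diagonal = vertex degree
    boundary_uv = np.asarray(boundary_uv, float)
    free = np.setdiff1d(np.arange(n_vertices), boundary_idx)
    uv = np.zeros((n_vertices, 2))
    uv[boundary_idx] = boundary_uv
    uv[free] = np.linalg.solve(L[np.ix_(free, free)],
                               -L[np.ix_(free, boundary_idx)] @ boundary_uv)
    return uv
```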
Herpes simplex virus type 1-derived recombinant and amplicon vectors.
Fraefel, Cornel; Marconi, Peggy; Epstein, Alberto L
2011-01-01
Herpes simplex virus type 1 (HSV-1) is a human pathogen whose lifestyle is based on a long-term dual interaction with the infected host, being able to establish both lytic and latent infections. The virus genome is a 153 kbp double-stranded DNA molecule encoding more than 80 genes. The interest of HSV-1 as gene transfer vector stems from its ability to infect many different cell types, both quiescent and proliferating cells, the very high packaging capacity of the virus capsid, the outstanding neurotropic adaptations that this virus has evolved, and the fact that it never integrates into the cellular chromosomes, thus avoiding the risk of insertional mutagenesis. Two types of vectors can be derived from HSV-1, recombinant vectors and amplicon vectors, and different methodologies have been developed to prepare large stocks of each type of vector. This chapter summarizes (1) the two approaches most commonly used to prepare recombinant vectors through homologous recombination, either in eukaryotic cells or in bacteria, and (2) the two methodologies currently used to generate helper-free amplicon vectors, either using a bacterial artificial chromosome (BAC)-based approach or a Cre/loxP site-specific recombination strategy.
NASA Astrophysics Data System (ADS)
Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.
2017-12-01
The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters Ks, n, θr and α from time-lapse vertical electrical sounding data collected during a constant inflow infiltration experiment. van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to the hydrological models inferred from the van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of the electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters, and (3) correct for the influence of subsurface temperature fluctuations on the electrical resistivity data during the infiltration experiment. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and water mass recovery.
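For reference, the hydrological forward model's constitutive relations are the standard van Genuchten retention curve and Mualem conductivity; (Ks, n, θr, α) are the parameters sampled by Latin hypercube above. In the sketch below, θs is treated as a fixed known value, an assumption of this illustration.

```python
import numpy as np

def van_genuchten_mualem(h, Ks, n, theta_r, alpha, theta_s=0.43):
    """Water content theta(h) and hydraulic conductivity K(h); the pressure
    head h is negative when unsaturated."""
    m = 1.0 - 1.0 / n
    Se = np.where(h < 0, (1.0 + np.abs(alpha * h)**n) ** (-m), 1.0)
    theta = theta_r + Se * (theta_s - theta_r)
    K = Ks * np.sqrt(Se) * (1.0 - (1.0 - Se**(1.0 / m))**m) ** 2
    return theta, K
```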
Prediction of convective activity using a system of parasitic-nested numerical models
NASA Technical Reports Server (NTRS)
Perkey, D. J.
1976-01-01
A limited area, three dimensional, moist, primitive equation (PE) model is developed to test the sensitivity of quantitative precipitation forecasts to the initial relative humidity distribution. Special emphasis is placed on the squall-line region. To accomplish the desired goal, time dependent lateral boundaries and a general convective parameterization scheme suitable for mid-latitude systems were developed. The sequential plume convective parameterization scheme presented is designed to have the versatility necessary in mid-latitudes and to be applicable for short-range forecasts. The results indicate that the scheme is able to function in the frontally forced squall-line region, in the gently rising altostratus region ahead of the approaching low center, and in the over-riding region ahead of the warm front. Three experiments are discussed.
An RBF-based reparameterization method for constrained texture mapping.
Yu, Hongchuan; Lee, Tong-Yee; Yeh, I-Cheng; Yang, Xiaosong; Li, Wenxi; Zhang, Jian J
2012-07-01
Texture mapping has long been used in computer graphics to enhance the realism of virtual scenes. To match the 3D model feature points with the corresponding pixels in a texture image, however, surface parameterization must satisfy specific positional constraints. Despite numerous research efforts, the construction of a mathematically robust, foldover-free parameterization that is subject to positional constraints continues to be a challenge. In the present paper, this foldover problem is addressed by developing a radial basis function (RBF)-based reparameterization. Given an initial 2D embedding of a 3D surface, the proposed method can reparameterize the 2D embedding into a foldover-free 2D mesh satisfying a set of user-specified constraint points. In addition, this approach is mesh-free. Therefore, generating smooth texture mapping results is possible without extra smoothing optimization.
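The RBF displacement warp at the core of the method can be sketched with SciPy: interpolate the displacements required at the constraint points and apply the resulting smooth field to the whole embedding. The foldover-free guarantee requires the paper's additional machinery and is not provided by this fragment alone; names and the kernel choice are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def rbf_reparameterize(uv, src_pts, dst_pts):
    """Warp a 2D embedding so the constraint points src_pts land on dst_pts:
    interpolate the required displacements smoothly over the whole domain.
    uv: (n, 2) vertex coordinates; src_pts, dst_pts: (m, 2) constraints."""
    warp = RBFInterpolator(src_pts, dst_pts - src_pts,
                           kernel='thin_plate_spline')
    return uv + warp(uv)
```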
A Novel Vaccine Approach for Chagas Disease Using Rare Adenovirus Serotype 48 Vectors
Farrow, Anitra L.; Peng, Binghao J.; Gu, Linlin; Krendelchtchikov, Alexandre; Matthews, Qiana L.
2016-01-01
Due to the increasing number of people afflicted worldwide with Chagas disease and its increasing prevalence in the United States, there is a greater need to develop a safe and effective vaccine for this neglected disease. Adenovirus serotype 5 (Ad5) is the most common adenovirus vector used for gene therapy and vaccine approaches, but its efficacy is limited by preexisting vector immunity in humans resulting from natural infections. Therefore, we have employed the rare serotype adenovirus 48 (Ad48) as an alternative choice for adenovirus/Chagas vaccine therapy. In this study, we modified Ad5 and Ad48 vectors to contain T. cruzi’s amastigote surface protein 2 (ASP-2) in the adenoviral early gene. We also modified Ad5 and Ad48 vectors to utilize the “Antigen Capsid-Incorporation” strategy by adding T. cruzi epitopes to protein IX (pIX). Mice that were immunized with the modified vectors were able to elicit T. cruzi-specific humoral and cellular responses. This study indicates that Ad48-modified vectors perform comparably to, or even better than, Ad5-modified vectors. This study provides novel data demonstrating that Ad48 can be used as a potential adenovirus vaccine vector against Chagas disease. PMID:26978385
NASA Astrophysics Data System (ADS)
Hecht-Nielsen, Robert
1997-04-01
A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
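A replicator network in this sense is an identity-mapping perceptron with three hidden layers whose narrow middle layer is forced to carry the manifold coordinates. The sketch below uses scikit-learn with smooth activations (the original formulation uses a staircase-like middle-layer activation for quantization, which is not reproduced); the noisy helix is an illustrative 1-D manifold in 3-space.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy source: 3-D vectors lying near a 1-D manifold (a noisy helix).
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 4.0 * np.pi, 2000)
X = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
X += 0.01 * rng.normal(size=X.shape)

# Replicator network: train the identity map through three hidden layers; the
# one-unit middle layer must carry (approximately) the natural coordinate.
net = MLPRegressor(hidden_layer_sizes=(32, 1, 32), activation='tanh',
                   max_iter=5000, random_state=0)
net.fit(X, X)
print("reconstruction MSE:", np.mean((net.predict(X) - X) ** 2))
```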
4D Cone-beam CT reconstruction using a motion model based on principal component analysis
Staub, David; Docef, Alen; Brock, Robert S.; Vaman, Constantin; Murphy, Martin J.
2011-01-01
Purpose: To provide a proof of concept validation of a novel 4D cone-beam CT (4DCBCT) reconstruction algorithm and to determine the best methods to train and optimize the algorithm. Methods: The algorithm animates a patient fan-beam CT (FBCT) with a patient specific parametric motion model in order to generate a time series of deformed CTs (the reconstructed 4DCBCT) that track the motion of the patient anatomy on a voxel by voxel scale. The motion model is constrained by requiring that projections cast through the deformed CT time series match the projections of the raw patient 4DCBCT. The motion model uses a basis of eigenvectors that are generated via principal component analysis (PCA) of a training set of displacement vector fields (DVFs) that approximate patient motion. The eigenvectors are weighted by a parameterized function of the patient breathing trace recorded during 4DCBCT. The algorithm is demonstrated and tested via numerical simulation. Results: The algorithm is shown to produce accurate reconstruction results for the most complicated simulated motion, in which voxels move with a pseudo-periodic pattern and relative phase shifts exist between voxels. The tests show that principal component eigenvectors trained on DVFs from a novel 2D/3D registration method give substantially better results than eigenvectors trained on DVFs obtained by conventionally registering 4DCBCT phases reconstructed via filtered backprojection. Conclusions: Proof of concept testing has validated the 4DCBCT reconstruction approach for the types of simulated data considered. In addition, the authors found the 2D/3D registration approach to be our best choice for generating the DVF training set, and the Nelder-Mead simplex algorithm the most robust optimization routine. PMID:22149852
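The motion model itself reduces to a PCA of training DVFs plus a weighted synthesis, which can be sketched directly. Array shapes and function names are illustrative; the fitting of the weights to the breathing trace and the projection-matching objective are not reproduced here.

```python
import numpy as np

def pca_motion_model(dvfs):
    """PCA of training DVFs (each flattened to one row): returns the mean DVF
    and the principal eigen-DVFs (rows of Vt, ordered by singular value)."""
    mean = dvfs.mean(axis=0)
    _, _, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    return mean, Vt

def synthesize_dvf(mean, basis, weights):
    """DVF(t) = mean + sum_i w_i(t) e_i; in the full algorithm the weights are
    a parameterized function of the breathing trace, fit so that projections
    through the deformed CT match the measured CBCT projections."""
    return mean + np.asarray(weights) @ basis[:len(weights)]
```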
Static investigation of two STOL nozzle concepts with pitch thrust-vectoring capability
NASA Technical Reports Server (NTRS)
Mason, M. L.; Burley, J. R., II
1986-01-01
A static investigation of the internal performance of two short take-off and landing (STOL) nozzle concepts with pitch thrust-vectoring capability has been conducted. An axisymmetric nozzle concept and a nonaxisymmetric nozzle concept were tested at dry and afterburning power settings. The axisymmetric concept consisted of a circular approach duct with a convergent-divergent nozzle. Pitch thrust vectoring was accomplished by vectoring the approach duct without changing the nozzle geometry. The nonaxisymmetric concept consisted of a two-dimensional convergent-divergent nozzle. Pitch thrust vectoring was implemented by blocking the nozzle exit and deflecting a door in the lower nozzle flap. The test nozzle pressure ratio was varied up to 10.0, depending on model geometry. Results indicate that both pitch vectoring concepts produced resultant pitch vector angles which were nearly equal to the geometric pitch deflection angles. The axisymmetric nozzle concept had only small thrust losses at the largest pitch deflection angle of 70 deg., but the two-dimensional convergent-divergent nozzle concept had large performance losses at both of the two pitch deflection angles tested, 60 deg. and 70 deg.
Feature Vector Construction Method for IRIS Recognition
NASA Astrophysics Data System (ADS)
Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.
2017-05-01
One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure. The procedure represents the extraction of iris texture information relevant to its subsequent comparison. A thorough investigation of feature vectors obtained from the iris showed that not all the vector elements are equally relevant. There are two characteristics which determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the nature of that instability. This work separates the sources of instability into natural and encoding-induced, which makes it possible to investigate each source of instability independently. Based on this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all prior-art methods in recognition accuracy on both datasets.
Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.
Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen
2014-01-01
Current network intrusion detection systems lack adaptability to the frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used where decision stumps are used as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.
Multimodel Uncertainty Changes in Simulated River Flows Induced by Human Impact Parameterizations
NASA Technical Reports Server (NTRS)
Liu, Xingcai; Tang, Qiuhong; Cui, Huijuan; Mu, Mengfei; Gerten, Dieter; Gosling, Simon; Masaki, Yoshimitsu; Satoh, Yusuke; Wada, Yoshihide
2017-01-01
Human impacts increasingly affect the global hydrological cycle and indeed dominate hydrological changes in some regions. Hydrologists have sought to identify the human-impact-induced hydrological variations by parameterizing anthropogenic water uses in global hydrological models (GHMs). The consequently increased model complexity is likely to introduce additional uncertainty among GHMs. Here, using four GHMs, between-model uncertainties are quantified in terms of the signal-to-noise ratio (SNR) for average river flow during 1971-2000, simulated in two experiments: one with representation of human impacts (VARSOC) and one without (NOSOC). This is the first quantitative investigation of between-model uncertainty resulting from the inclusion of human impact parameterizations. Results show that the between-model uncertainties in terms of SNRs in the VARSOC annual flow are larger (about 2 for the globe, with varied magnitude for different basins) than those in the NOSOC, particularly in most areas of Asia and in areas north of the Mediterranean Sea. The SNR differences are mostly negative (-20 to 5, indicating higher uncertainty) for basin-averaged annual flow. The VARSOC high flow shows slightly lower uncertainties than the NOSOC simulations, with SNR differences mostly ranging from -20 to 20. The uncertainty differences between the two experiments are significantly related to the fraction of irrigated area in the basins. The large additional uncertainties in VARSOC simulations introduced by the inclusion of parameterizations of human impacts underscore the urgent need to improve GHMs through a better understanding of human impacts. Differences in the parameterizations of irrigation, reservoir regulation and water withdrawals are discussed as potential directions of improvement for future GHM development. We also discuss the advantages of statistical approaches to reduce the between-model uncertainties, and the importance of calibration of GHMs, not only for better performance in historical simulations but also for more robust and credible future projections of hydrological changes under a changing environment.
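For concreteness, a minimal reading of the signal-to-noise diagnostic used above, under the assumption that SNR is the multimodel mean flow divided by the between-model standard deviation (the paper's exact definition may differ):

```python
# Between-model SNR sketch for one basin; definition assumed, not quoted.
import numpy as np

def snr(flows):
    """flows: (n_models, n_years) simulated annual river flow for one basin."""
    ensemble_mean = flows.mean()
    between_model_std = flows.mean(axis=1).std(ddof=1)   # spread of model means
    return ensemble_mean / between_model_std

# snr(varsoc) - snr(nosoc) < 0 would indicate that adding human-impact
# parameterizations increased the between-model uncertainty for that basin.
```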
Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution
NASA Astrophysics Data System (ADS)
Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike
2011-04-01
Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks: firstly, the reduction of the dose distribution to a histogram results in the loss of spatial information, and secondly, the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We use a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assess its predictive power using data from the MRC RT01 trial (ISCTRN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a specifically low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse and resulted in AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable and nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had a higher predictive power than models based on standard DVHs, and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.
Rapid Assembly of Customized TALENs into Multiple Delivery Systems
Zhang, Zhengxing; Zhang, Siliang; Huang, Xin; Orwig, Kyle E.; Sheng, Yi
2013-01-01
Transcriptional activator-like effector nucleases (TALENs) have become a powerful tool for genome editing. Here we present an efficient TALEN assembly approach in which TALENs are assembled by direct Golden Gate ligation into Gateway® Entry vectors from a repeat variable di-residue (RVD) plasmid array. We constructed TALEN pairs targeted to mouse Ddx3 subfamily genes, and demonstrated that our modified TALEN assembly approach efficiently generates accurate TALEN moieties that effectively introduce mutations into target genes. We generated “user friendly” TALEN Entry vectors containing TALEN expression cassettes with fluorescent reporter genes that can be efficiently transferred via Gateway (LR) recombination into different delivery systems. We demonstrated that the TALEN Entry vectors can be easily transferred to an adenoviral delivery system to expand application to cells that are difficult to transfect. Since TALENs work in pairs, we also generated a TALEN Entry vector set that combines a TALEN pair into one PiggyBac transposon-based destination vector. The approach described here can also be modified for construction of TALE transcriptional activators, repressors or other functional domains. PMID:24244669
Scalar-vector soliton fiber laser mode-locked by nonlinear polarization rotation.
Wu, Zhichao; Liu, Deming; Fu, Songnian; Li, Lei; Tang, Ming; Zhao, Luming
2016-08-08
We report a passively mode-locked fiber laser based on nonlinear polarization rotation (NPR), in which vector and scalar solitons can co-exist within the laser cavity. The mode-locked pulse evolves as a vector soliton in the strongly birefringent segment and is transformed into a regular scalar soliton after the polarizer within the laser cavity. The existence of solitons in a polarization-dependent cavity comprising a periodic combination of two distinct nonlinear waves is demonstrated for the first time and is likely to be applicable to various other nonlinear systems. For very large local birefringence, our laser approaches the operation regime of vector soliton lasers, while it approaches scalar soliton fiber lasers under the condition of very small birefringence.
Holistic versus monomeric strategies for hydrological modelling of human-modified hydrosystems
NASA Astrophysics Data System (ADS)
Nalbantis, I.; Efstratiadis, A.; Rozos, E.; Kopsiafti, M.; Koutsoyiannis, D.
2011-03-01
The modelling of human-modified basins that are inadequately measured constitutes a challenge for hydrological science. Often, models for such systems are detailed and hydraulics-based for only one part of the system, while for other parts oversimplified models or rough assumptions are used. This is typically a bottom-up approach, which seeks to exploit knowledge of hydrological processes at the micro-scale at some components of the system. Also, it is a monomeric approach in two ways: first, essential interactions among system components may be poorly represented or even omitted; second, differences in the level of detail of process representation can lead to uncontrolled errors. Additionally, the calibration procedure merely accounts for the reproduction of the observed responses using typical fitting criteria. The paper aims to raise some critical issues regarding the entire modelling approach for such hydrosystems. For this, two alternative modelling strategies are examined that reflect two modelling approaches or philosophies: a dominant bottom-up approach, which is also monomeric and, very often, based on output information, and a top-down and holistic approach based on generalized information. Critical options are examined, which codify the differences between the two strategies: the representation of surface, groundwater and water management processes, the schematization and parameterization concepts, and the parameter estimation methodology. The first strategy is based on stand-alone models for surface and groundwater processes and for water management, which are employed sequentially. For each model, a different (detailed or coarse) parameterization is used, which is dictated by the hydrosystem schematization. The second strategy involves model integration for all processes, parsimonious parameterization and hybrid manual-automatic parameter optimization based on multiple objectives. A test case is examined in a hydrosystem in Greece with high complexities, such as extended surface-groundwater interactions, ill-defined boundaries, sinks to the sea and anthropogenic intervention with unmeasured abstractions both from surface water and aquifers. Criteria for comparison are the physical consistency of parameters, the reproduction of runoff hydrographs at multiple sites within the studied basin, the likelihood of uncontrolled model outputs, the required amount of computational effort and the performance within a stochastic simulation setting. Our work allows for investigating the deterioration of model performance in cases where no balanced attention is paid to all components of human-modified hydrosystems and the related information. Also, sources of errors are identified and their combined effect is evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, May Wai San; Ovchinnikov, Mikhail; Wang, Minghuai
Potential ways of parameterizing vertical turbulent fluxes of hydrometeors are examined using a high-resolution cloud-resolving model. The cloud-resolving model uses the Morrison microphysics scheme, which contains prognostic variables for rain, graupel, ice, and snow. A benchmark simulation of a deep convection case with a horizontal grid spacing of 250 m is carried out to evaluate three different ways of parameterizing the turbulent vertical fluxes of hydrometeors: an eddy-diffusion approximation, a quadrant-based decomposition, and a scaling method that accounts for within-quadrant (subplume) correlations. Results show that the down-gradient nature of the eddy-diffusion approximation tends to transport mass away from concentrated regions, whereas the benchmark simulation indicates that the vertical transport tends to move mass from below the level of maximum concentration to levels aloft. Unlike the eddy-diffusion approach, the quadrant-based decomposition is able to capture the signs of the flux gradient but underestimates the magnitudes. The scaling approach is shown to perform the best by accounting for within-quadrant correlations, and improves the results for all hydrometeors except snow. A sensitivity study is performed to examine how vertical transport may affect the microphysics of the hydrometeors. The vertical transport of each hydrometeor type is artificially suppressed in each test. Results from the sensitivity tests show that cloud-droplet-related processes are most sensitive to suppressed rain or graupel transport. In particular, suppressing rain or graupel transport has a strong impact on the production of snow and ice aloft. Lastly, a viable subgrid-scale hydrometeor transport scheme in an assumed probability density function parameterization is discussed.
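As a worked illustration of the first and simplest of the three closures compared above, the sketch below applies a down-gradient (eddy-diffusion) approximation, <w'q'> ≈ -K dq/dz, to a stand-in hydrometeor profile; the diffusivity and profile are invented for illustration and are not taken from the study.

```python
# Down-gradient (eddy-diffusion) flux estimate for a hydrometeor profile.
import numpy as np

z = np.linspace(0.0, 12e3, 121)              # height grid [m]
q_rain = np.exp(-((z - 4e3) / 1.5e3) ** 2)   # stand-in rain mixing-ratio profile
K = 50.0 * np.ones_like(z)                   # eddy diffusivity [m2/s], assumed

flux_eddy = -K * np.gradient(q_rain, z)      # <w'q'> ~ -K dq/dz
# Note: this transports mass away from the q maximum in both directions,
# the behavior the benchmark simulation shows to be wrong for convective
# hydrometeor transport (which moves mass from below the maximum to aloft).
```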
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitagaki, T.; Yuta, H.; Tanaka, S.
1990-09-01
The weak nucleon axial-vector ($F_A$) and vector ($F_V$) form factors are determined from the momentum-transfer-squared ($Q^2$) distributions using 2538 $\mu^- p$ and 1384 $\mu^- \Delta^{++}$ events. The data were obtained from 1,800,000 pictures taken in the BNL 7-foot deuterium-filled bubble chamber exposed to a wide-band neutrino beam with a mean energy $E_\nu = 1.6$ GeV. In the framework of the conventional $V-A$ theory with standard assumptions, the value obtained from the $\mu^- p$ events for the axial-vector mass $M_A$ in the pure dipole parameterization is $1.070^{+0.040}_{-0.045}$ GeV and from the $\mu^- \Delta^{++}$ events is $1.28^{+0.08}_{-0.10}$ GeV. These results are in good agreement with an earlier measurement from this experiment and other recent results. The reaction mechanisms for both processes are compared and found to be very similar. A two-parameter fit for the quasielastic reaction, using dipole forms for $F_V$ and $F_A$, yields $M_A = 0.97^{+0.14}_{-0.11}$ GeV and $M_V = 0.89^{+0.04}_{-0.07}$ GeV, which is in good agreement with the conserved-vector-current value of $M_V = 0.84$ GeV. Possible deviations from the standard assumptions are also discussed.
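For reference, the pure dipole parameterization invoked above has the standard textbook form (not quoted from the paper itself), with $M_A$ and $M_V$ the fitted axial-vector and vector masses:

```latex
F_A(Q^2) = \frac{F_A(0)}{\left(1 + Q^2/M_A^2\right)^2},
\qquad
F_V(Q^2) = \frac{F_V(0)}{\left(1 + Q^2/M_V^2\right)^2}.
```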
Malaria control under unstable dynamics: reactive vs. climate-based strategies.
Baeza, Andres; Bouma, Menno J; Dhiman, Ramesh; Pascual, Mercedes
2014-01-01
In areas of the world where malaria prevails under unstable conditions, attacking the adult vector population through insecticide-based Indoor Residual Spraying (IRS) is the most common method for controlling epidemics. Defined in policy guidance, the use of Annual Parasitic Incidence (API) is an important tool for assessing the effectiveness of control and for planning new interventions. To investigate the consequences that a policy based on API in previous seasons might have on the population dynamics of the disease and on control itself in regions of low and seasonal transmission, we formulate a mathematical malaria model that couples epidemiologic and vector dynamics with IRS intervention. This model is parameterized for a low transmission and semi-arid region in northwest India, where epidemics are driven by high rainfall variability. We show that this type of feedback mechanism in control strategies can generate transient cycles in malaria even in the absence of environmental variability, and that this tendency to cycle can in turn limit the effectiveness of control in the presence of such variability. Specifically, for realistic rainfall conditions and over a range of control intensities, the effectiveness of such 'reactive' intervention is compared to that of an alternative strategy based on rainfall and therefore vector variability. Results show that the efficacy of intervention is strongly influenced by rainfall variability and the type of policy implemented. In particular, under an API 'reactive' policy, high vector populations can coincide more frequently with low control coverage, and in so doing generate large unexpected epidemics and decrease the likelihood of elimination. These results highlight the importance of incorporating information on climate variability, rather than previous incidence, in planning IRS interventions in regions of unstable malaria. These findings are discussed in the more general context of elimination and other low transmission regions such as highlands. Copyright © 2013. Published by Elsevier B.V.
Vector method for strain estimation in phase-sensitive optical coherence elastography
NASA Astrophysics Data System (ADS)
Matveyev, A. L.; Matveev, L. A.; Sovetsky, A. A.; Gelikonov, G. V.; Moiseev, A. A.; Zaitsev, V. Y.
2018-06-01
A noise-tolerant approach to strain estimation in phase-sensitive optical coherence elastography, robust to decorrelation distortions, is discussed. The method is based on evaluation of interframe phase-variation gradient, but its main feature is that the phase is singled out at the very last step of the gradient estimation. All intermediate steps operate with complex-valued optical coherence tomography (OCT) signals represented as vectors in the complex plane (hence, we call this approach the ‘vector’ method). In comparison with such a popular method as least-square fitting of the phase-difference slope over a selected region (even in the improved variant with amplitude weighting for suppressing small-amplitude noisy pixels), the vector approach demonstrates superior tolerance to both additive noise in the receiving system and speckle-decorrelation caused by tissue straining. Another advantage of the vector approach is that it obviates the usual necessity of error-prone phase unwrapping. Here, special attention is paid to modifications of the vector method that make it especially suitable for processing deformations with significant lateral inhomogeneity, which often occur in real situations. The method’s advantages are demonstrated using both simulated and real OCT scans obtained during reshaping of a collagenous tissue sample irradiated by an IR laser beam producing complex spatially inhomogeneous deformations.
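A minimal sketch of the 'vector' idea, assuming numpy/scipy and simplifying heavily relative to the paper (the window size, scan layout, and lag-1 axial product are illustrative choices): all averaging is done on complex values, and the phase is extracted only once, at the very end.

```python
# Vector-method strain sketch: average complex OCT signals, take angle last.
import numpy as np
from scipy.signal import convolve2d

def interframe_strain_phase(a, b, win=8):
    """a, b: complex-valued OCT frames (depth x lateral) before and after
    deformation. Returns the axial gradient of the interframe phase."""
    cross = b * np.conj(a)                        # interframe phase difference
    # Axial lag-1 product encodes the phase-variation gradient as a complex
    # vector; low-amplitude (noisy) pixels naturally carry little weight.
    grad_vec = cross[1:, :] * np.conj(cross[:-1, :])
    kernel = np.ones((win, win)) / win**2
    smoothed = convolve2d(grad_vec, kernel, mode="same")  # vector averaging
    return np.angle(smoothed)                     # single angle, no unwrapping
```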
Bisenius, Sandrine; Mueller, Karsten; Diehl-Schmid, Janine; Fassbender, Klaus; Grimmer, Timo; Jessen, Frank; Kassubek, Jan; Kornhuber, Johannes; Landwehrmeyer, Bernhard; Ludolph, Albert; Schneider, Anja; Anderl-Straub, Sarah; Stuke, Katharina; Danek, Adrian; Otto, Markus; Schroeter, Matthias L
2017-01-01
Primary progressive aphasia (PPA) encompasses the three subtypes nonfluent/agrammatic variant PPA, semantic variant PPA, and logopenic variant PPA, which are characterized by distinct patterns of language difficulties and regional brain atrophy. To validate the potential of structural magnetic resonance imaging data for early individual diagnosis, we used support vector machine classification on grey matter density maps obtained by voxel-based morphometry analysis to discriminate PPA subtypes (44 patients: 16 nonfluent/agrammatic variant PPA, 17 semantic variant PPA, 11 logopenic variant PPA) from 20 healthy controls (matched for sample size, age, and gender) in the cohort of the multi-center study of the German consortium for frontotemporal lobar degeneration. Here, we compared a whole-brain with a meta-analysis-based, disease-specific regions-of-interest approach for support vector machine classification. We also used support vector machine classification to discriminate the three PPA subtypes from each other. Whole-brain support vector machine classification enabled a very high accuracy between 91% and 97% for identifying specific PPA subtypes vs. healthy controls, and 78%/95% for the discrimination between the semantic variant vs. the nonfluent/agrammatic or logopenic PPA variants. Only for the discrimination between the nonfluent/agrammatic and logopenic PPA variants was accuracy low, at 55%. Interestingly, the regions that contributed the most to the support vector machine classification of patients corresponded largely to the regions that were atrophic in these patients as revealed by group comparisons. Although the whole-brain approach also took into account regions that were not covered in the regions-of-interest approach, both approaches showed similar accuracies due to the disease-specificity of the selected networks. In conclusion, support vector machine classification of multi-center structural magnetic resonance imaging data enables prediction of PPA subtypes with a very high accuracy, paving the way for its application in clinical settings.
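A compact sketch of this kind of classification setup, assuming scikit-learn (the study's actual pipeline, feature masking, and validation scheme are not reproduced here); data shapes are placeholders:

```python
# Linear SVM on flattened grey matter density maps, cross-validated.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(36, 5000))    # 16 patients + 20 controls x voxels (stand-in)
y = np.array([1] * 16 + [0] * 20)  # e.g. nonfluent/agrammatic PPA vs. controls

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5).mean()
# A regions-of-interest variant would simply mask the columns of X to
# meta-analysis-derived voxels before fitting the same pipeline.
```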
Gene delivery strategies for the treatment of mucopolysaccharidoses.
Baldo, Guilherme; Giugliani, Roberto; Matte, Ursula
2014-03-01
Mucopolysaccharidosis (MPS) disorders are genetic diseases caused by deficiencies in the lysosomal enzymes responsible for the degradation of glycosaminoglycans. Current treatments are not able to correct all disease symptoms and are not available for all MPS types, which makes gene therapy especially relevant. Multiple gene therapy approaches have been tested for different types of MPS, and our aim in this study is to critically analyze each of them. In this review, we have included the major studies that describe the use of adeno-associated, retroviral, and lentiviral vectors, as well as relevant non-viral approaches for MPS disorders. Some protocols, such as the use of adeno-associated vectors and lentiviral vectors, are approaching the clinic for these disorders and, along with combined approaches, seem to be the future of gene therapy for MPS.
A Hybrid Neuro-Fuzzy Model For Integrating Large Earth-Science Datasets
NASA Astrophysics Data System (ADS)
Porwal, A.; Carranza, J.; Hale, M.
2004-12-01
A GIS-based hybrid neuro-fuzzy approach to integration of large earth-science datasets for mineral prospectivity mapping is described. It implements a Takagi-Sugeno type fuzzy inference system in the framework of a four-layered feed-forward adaptive neural network. Each unique combination of the datasets is considered a feature vector whose components are derived by knowledge-based ordinal encoding of the constituent datasets. A subset of feature vectors with a known output target vector (i.e., unique conditions known to be associated with either a mineralized or a barren location) is used for the training of an adaptive neuro-fuzzy inference system. Training involves iterative adjustment of parameters of the adaptive neuro-fuzzy inference system using a hybrid learning procedure for mapping each training vector to its output target vector with minimum sum of squared error. The trained adaptive neuro-fuzzy inference system is used to process all feature vectors. The output for each feature vector is a value that indicates the extent to which a feature vector belongs to the mineralized class or the barren class. These values are used to generate a prospectivity map. The procedure is demonstrated by an application to regional-scale base metal prospectivity mapping in a study area located in the Aravalli metallogenic province (western India). A comparison of the hybrid neuro-fuzzy approach with pure knowledge-driven fuzzy and pure data-driven neural network approaches indicates that the former offers a superior method for integrating large earth-science datasets for predictive spatial mathematical modelling.
Multi-Level Adaptation in End-User Development of 3D Virtual Chemistry Experiments
ERIC Educational Resources Information Center
Liu, Chang; Zhong, Ying
2014-01-01
Multi-level adaptation in end-user development (EUD) is an effective way to enable non-technical end users such as educators to gradually introduce more functionality with increasing complexity to 3D virtual learning environments developed by themselves using EUD approaches. Parameterization, integration, and extension are three levels of…
The EPA/ORD National Exposure Research Lab's (NERL) UA/SA/PE research program addresses both tactical and strategic needs in direct support of ORD's client base. The design represents an integrated approach in achieving the highest levels of quality assurance in environmental de...
Hillslope threshold response to rainfall: (2) development and use of a macroscale model
Chris B. Graham; Jeffrey J. McDonnell
2010-01-01
Hillslope hydrological response to precipitation is extremely complex and poorly modeled. One possible approach for reducing the complexity of hillslope response and its mathematical parameterization is to look for macroscale hydrological behavior. Hillslope threshold response to storm precipitation is one such macroscale behavior observed at field sites across the...
A New Approach to Attitude Stability and Control for Low Airspeed Vehicles
NASA Technical Reports Server (NTRS)
Lim, K. B.; Shin, Y-Y.; Moerder, D. D.; Cooper, E. G.
2004-01-01
This paper describes an approach for controlling the attitude of statically unstable thrust-levitated vehicles in hover or slow translation. The large thrust vector that characterizes such vehicles can be modulated to provide control forces and moments to the airframe, but such modulation is accompanied by significant unsteady flow effects. These effects are difficult to model, and can compromise the practical value of thrust vectoring in closed-loop attitude stability, even if the thrust vectoring machinery has sufficient bandwidth for stabilization. The stabilization approach described in this paper is based on using internal angular momentum transfer devices for stability, augmented by thrust vectoring for trim and other "outer loop" control functions. The three main components of this approach are: (1) a z-body axis angular momentum bias enhances static attitude stability, reducing the amount of control activity needed for stabilization, (2) optionally, gimbaled reaction wheels provide high-bandwidth control torques for additional stabilization, or agility, and (3) the resulting strongly coupled system dynamics are controlled by a multivariable controller. A flight test vehicle is described, and nonlinear simulation results are provided that demonstrate the efficiency of the approach.
Cai, Meng-Qiang; Wang, Zhou-Xiang; Liang, Juan; Wang, Yan-Kun; Gao, Xu-Zhen; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian
2017-08-01
The scheme for generating vector optical fields should have not only high efficiency but also flexibility, in order to satisfy the requirements of various applications. In general, however, high efficiency and flexibility are not compatible. Here we present and experimentally demonstrate a solution to directly, flexibly, and efficiently generate vector vortex optical fields (VVOFs) with a reflective phase-only liquid crystal spatial light modulator (LC-SLM), based on the optical birefringence of liquid crystal molecules. To generate the VVOFs, this approach needs in principle only a half-wave plate, an LC-SLM, and a quarter-wave plate. The approach has several advantages, including a simple experimental setup, good flexibility, and high efficiency, making it very promising in applications where higher power is needed. The approach has a generation efficiency of 44.0%, much higher than the 1.1% of the common-path interferometric approach.
Data-driven RBE parameterization for helium ion beams
NASA Astrophysics Data System (ADS)
Mairani, A.; Magro, G.; Dokic, I.; Valle, S. M.; Tessonnier, T.; Galm, R.; Ciocca, M.; Parodi, K.; Ferrari, A.; Jäkel, O.; Haberer, T.; Pedroni, P.; Böhlen, T. T.
2016-01-01
Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue-specific parameter $(\alpha/\beta)_{\mathrm{ph}}$ of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the $\mathrm{RBE}_\alpha = \alpha_{\mathrm{He}}/\alpha_{\mathrm{ph}}$ and $R_\beta = \beta_{\mathrm{He}}/\beta_{\mathrm{ph}}$ ratios as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival ($\mathrm{RBE}_{10}$) are compared with the experimental ones. Pearson's correlation coefficients were, respectively, 0.85 and 0.84, confirming the soundness of the introduced approach. Moreover, due to the lack of experimental data at low LET, clonogenic experiments have been performed irradiating the A549 cell line, with $(\alpha/\beta)_{\mathrm{ph}} = 5.4$ Gy, at the entrance of a 56.4 MeV u$^{-1}$ helium beam at the Heidelberg Ion Beam Therapy Center. The proposed parameterization reproduces the measured cell survival within the experimental uncertainties. An RBE formula which depends only on dose, LET and $(\alpha/\beta)_{\mathrm{ph}}$ as input parameters is proposed, allowing a straightforward implementation in a TP system.
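To illustrate how such a parameterization is used, the sketch below does the standard LQ-model bookkeeping for RBE at 10% survival; the ratio values passed in stand in for the paper's fitted LET-dependent expressions, which are not reproduced here, and all numbers are assumptions.

```python
# LQ-model RBE bookkeeping: RBE at survival S is the photon dose divided by
# the ion dose that produce the same S.
import numpy as np

def dose_for_survival(alpha, beta, surv):
    # Solve alpha*D + beta*D^2 = -ln(S) for the positive root D.
    return (-alpha + np.sqrt(alpha**2 - 4.0 * beta * np.log(surv))) / (2.0 * beta)

def rbe10(alpha_ph, beta_ph, rbe_alpha, r_beta):
    """RBE at 10% survival from photon LQ parameters and the LET-dependent
    ratios RBE_alpha = alpha_He/alpha_ph and R_beta = beta_He/beta_ph."""
    alpha_he = rbe_alpha * alpha_ph
    beta_he = r_beta * beta_ph
    d_ph = dose_for_survival(alpha_ph, beta_ph, 0.10)
    d_he = dose_for_survival(alpha_he, beta_he, 0.10)
    return d_ph / d_he

# Example with assumed numbers; (alpha/beta)_ph = 5.4 Gy as for the A549 line.
print(rbe10(alpha_ph=0.54, beta_ph=0.10, rbe_alpha=1.4, r_beta=1.1))
```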
Multisite Evaluation of APEX for Water Quality: I. Best Professional Judgment Parameterization.
Baffaut, Claire; Nelson, Nathan O; Lory, John A; Senaviratne, G M M M Anomaa; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S
2017-11-01
The Agricultural Policy Environmental eXtender (APEX) model is capable of estimating edge-of-field water, nutrient, and sediment transport and is used to assess the environmental impacts of management practices. The current practice is to fully calibrate the model for each site simulation, a task that requires resources and data not always available. The objective of this study was to compare model performance for flow, sediment, and phosphorus transport under two parameterization schemes: a best professional judgment (BPJ) parameterization based on readily available data, and a fully calibrated parameterization based on site-specific soil, weather, event flow, and water quality data. The analysis was conducted using 12 datasets at four locations representing poorly drained soils and row-crop production under different tillage systems. Model performance was based on the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²), and the regression slope between simulated and measured annualized loads across all site years. Although the BPJ model performance for flow was acceptable (NSE = 0.7) at the annual time step, calibration improved it (NSE = 0.9). Acceptable simulation of sediment and total phosphorus transport (NSE = 0.5 and 0.9, respectively) was obtained only after full calibration at each site. Given the unacceptable performance of the BPJ approach, uncalibrated use of APEX for planning or management purposes may be misleading. Model calibration with water quality data prior to using APEX for simulating sediment and total phosphorus loss is essential. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
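For reference, the Nash-Sutcliffe efficiency used as the performance metric above has the standard definition sketched below (numpy assumed):

```python
# Nash-Sutcliffe efficiency: 1 - SSE relative to predicting the observed mean.
import numpy as np

def nse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# NSE = 1 is a perfect fit; NSE <= 0 means the model does no better than
# the observed mean, which is why NSE ~ 0.5 is a usual acceptability floor.
```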
On parameterization of the inverse problem for estimating aquifer properties using tracer data
NASA Astrophysics Data System (ADS)
Kowalsky, M. B.; Finsterle, S.; Williams, K. H.; Murray, C.; Commer, M.; Newcomer, D.; Englert, A.; Steefel, C. I.; Hubbard, S. S.
2012-06-01
In developing a reliable approach for inferring hydrological properties through inverse modeling of tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance, as errors in the model structure are partly compensated for by estimating biased property values during the inversion. These biased estimates, while potentially providing an improved fit to the calibration data, may lead to wrong interpretations and conclusions and reduce the ability of the model to make reliable predictions. We consider the estimation of spatial variations in permeability and several other parameters through inverse modeling of tracer data, specifically synthetic and actual field data associated with the 2007 Winchester experiment from the Department of Energy Rifle site. Characterization is challenging due to the real-world complexities associated with field experiments in such a dynamic groundwater system. Our aim is to highlight and quantify the impact on inversion results of various decisions related to parameterization, such as the positioning of pilot points in a geostatistical parameterization; the handling of up-gradient regions; the inclusion of zonal information derived from geophysical data or core logs; extension from 2-D to 3-D; assumptions regarding the gradient direction, porosity, and the semivariogram function; and deteriorating experimental conditions. This work adds to the relatively limited number of studies that offer guidance on the use of pilot points in complex real-world experiments involving tracer data (as opposed to hydraulic head data).
NASA Astrophysics Data System (ADS)
Cipriani, L.; Fantini, F.; Bertacchi, S.
2014-06-01
Image-based modelling tools based on SfM algorithms have gained great popularity since several software houses provided applications able to achieve 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that the automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve a better quality of textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for the achievement of a high-quality 3D representation, since "apparent colour" is a fundamental aspect in the field of Cultural Heritage documentation. Complex meshes without native parameterization have to be "flattened" or "unwrapped" in the (u,v) parameter space, with the main objective of being mapped with a single image. This result can be obtained by using two different strategies: the former automatic and faster, the latter manual and time-consuming. Reverse modelling applications provide automatic solutions based on splitting the models by means of different algorithms, producing a sort of "atlas" of the original model in the parameter space that is in many instances inadequate and negatively affects the overall quality of representation. Using different solutions in synergy, ranging from semantic-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.
Thermodynamic properties for applications in chemical industry via classical force fields.
Guevara-Carrion, Gabriela; Hasse, Hans; Vrabec, Jadran
2012-01-01
Thermodynamic properties of fluids are of key importance for the chemical industry. Presently, the fluid property models used in process design and optimization are mostly equations of state or $G^E$ models, which are parameterized using experimental data. Molecular modeling and simulation based on classical force fields is a promising alternative route, which in many cases reasonably complements the well-established methods. This chapter gives an introduction to the state of the art in this field regarding molecular models, simulation methods, and tools. Attention is given to the way modeling and simulation on the scale of molecular force fields interact with other scales, which is mainly by parameter inheritance. Parameters for molecular force fields are determined both bottom-up from quantum chemistry and top-down from experimental data. Commonly used functional forms for describing the intra- and intermolecular interactions are presented. Several approaches for ab initio to empirical force field parameterization are discussed. Some transferable force field families, which are frequently used in chemical engineering applications, are described. Furthermore, some examples of force fields that were parameterized for specific molecules are given. Molecular dynamics and Monte Carlo methods for the calculation of transport properties and vapor-liquid equilibria are introduced. Two case studies are presented. First, using liquid ammonia as an example, the capabilities of semi-empirical force fields, parameterized on the basis of quantum chemical information and experimental data, are discussed with respect to thermodynamic properties that are relevant for the chemical industry. Second, the ability of molecular simulation methods to accurately describe vapor-liquid equilibrium properties of binary mixtures containing CO$_2$ is shown.
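As a concrete example of the "commonly used functional forms" mentioned above, the sketch below evaluates a 12-6 Lennard-Jones plus point-charge Coulomb site-site interaction; the parameter values are invented for illustration and do not correspond to any published force field.

```python
# Site-site pair energy: 12-6 Lennard-Jones dispersion/repulsion + Coulomb.
import numpy as np

def pair_energy(r, sigma, eps, qi, qj, ke=138.935458):
    """Energy at separation r [nm]; sigma [nm], eps [kJ/mol], charges in e.
    ke is the Coulomb constant in kJ mol^-1 nm e^-2."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6) + ke * qi * qj / r

r = np.linspace(0.25, 1.2, 5)
print(pair_energy(r, sigma=0.32, eps=0.65, qi=0.4, qj=-0.4))  # toy parameters
```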
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
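A minimal sketch of the coupling described above, assuming numpy: subgrid variability is sampled from an assumed PDF by Monte Carlo, and a nonlinear microphysical process rate is averaged over the samples. The normal PDF and the autoconversion-like rate below are illustrative stand-ins, not the scheme's actual closures.

```python
# Assumed-PDF / Monte Carlo coupling sketch for one grid box and time step.
import numpy as np

rng = np.random.default_rng(0)

def subgrid_process_rate(qc_mean, qc_std, rate, n_samples=1000):
    """Grid-mean rate = E[rate(qc)] under the assumed subgrid PDF of qc."""
    qc = rng.normal(qc_mean, qc_std, n_samples).clip(min=0.0)
    return rate(qc).mean()

autoconv = lambda qc: 1350.0 * qc ** 2.47   # nonlinear stand-in process rate

# Averaging the nonlinear rate over the PDF differs from evaluating it at
# the grid mean -- the basic motivation for the assumed-PDF approach.
print(subgrid_process_rate(1e-4, 5e-5, autoconv), autoconv(1e-4))
```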
NASA Astrophysics Data System (ADS)
Dörr, Dominik; Schirmaier, Fabian J.; Henning, Frank; Kärger, Luise
2017-10-01
Finite Element (FE) forming simulation offers the possibility of a detailed analysis of the deformation behavior of multilayered thermoplastic blanks during forming, considering material behavior and process conditions. Rate-dependent bending behavior is a material characteristic which has so far not been considered in FE forming simulation of pre-impregnated, continuously fiber-reinforced polymers (CFRPs). Therefore, an approach for modeling viscoelastic bending behavior in FE composite forming simulation is presented in this work. The presented approach accounts for the distinctly rate-dependent bending behavior of thermoplastic CFRPs at process conditions. The approach is based on a Voigt-Kelvin (VK) and a generalized Maxwell (GM) approach, implemented within an FE forming simulation framework realized in several user subroutines of the commercially available FE solver Abaqus. The VK and GM approaches, as well as purely elastic bending modeling approaches, are parameterized according to dynamic bending characterization results for a PA6-CF UD-tape. It is found that only the GM approach is capable of representing the bending deformation characteristic for all of the considered bending deformation rates. The parameterized bending modeling approaches are applied to a hemisphere test and to a generic geometry. A comparison of the forming simulation results for the generic geometry with experimental tests shows good agreement between simulation and experiments. Furthermore, the simulation results reveal that a correct modeling of the initial bending stiffness is especially relevant for the prediction of wrinkling behavior, as a similar onset of wrinkles is observed for the GM, the VK, and an elastic approach fitted to the stiffness observed in the dynamic rheometer test for low curvatures. Hence, characterization and modeling of rate-dependent bending behavior is crucial for FE forming simulation of thermoplastic CFRPs.
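A minimal numpy sketch of the generalized Maxwell (GM) idea that the abstract finds necessary for rate dependence: the bending moment is an equilibrium branch plus relaxing branches integrated incrementally, so a faster curvature ramp yields a stiffer transient response. Branch stiffnesses and relaxation times are assumptions for illustration, not the paper's parameterization.

```python
# Generalized Maxwell (Prony series) bending moment, integrated incrementally.
import numpy as np

def gm_moment(curvature, t, E_inf, branches):
    """Equilibrium branch E_inf plus relaxing branches (E_i, tau_i)."""
    h = np.zeros(len(branches))          # internal branch moments
    M = np.empty_like(curvature)
    M[0] = E_inf * curvature[0]
    for n in range(1, len(t)):
        dt = t[n] - t[n - 1]
        dk = curvature[n] - curvature[n - 1]
        for i, (Ei, tau) in enumerate(branches):
            f = np.exp(-dt / tau)
            h[i] = f * h[i] + Ei * f * dk   # simple exponential update
        M[n] = E_inf * curvature[n] + h.sum()
    return M

t = np.linspace(0.0, 1.0, 400)
branches = [(5.0, 0.05), (2.0, 0.5)]     # (stiffness, relaxation time), assumed
M_fast = gm_moment(np.clip(t / 0.1, 0, 1), t, 1.0, branches)
M_slow = gm_moment(np.clip(t / 1.0, 0, 1), t, 1.0, branches)
# The faster curvature ramp produces the larger transient moment: the rate
# dependence that a single elastic stiffness (or one VK branch) cannot
# represent across all deformation rates.
```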
NASA Astrophysics Data System (ADS)
Nunes, João Pedro; Catarina Simões Vieira, Diana; Keizer, Jan Jacob
2017-04-01
Fires impact soil hydrological properties, enhancing soil water repellency and therefore increasing the potential for surface runoff generation and soil erosion. In consequence, the successful application of hydrological models to post-fire conditions requires the appropriate simulation of the effects of soil water repellency on soil hydrology. This work compared three approaches to modelling soil water repellency impacts on soil hydrology in burnt eucalypt and pine forest slopes in central Portugal: 1) Daily approach, simulating repellency as a function of soil moisture and influencing the maximum soil available water holding capacity. It is based on the Thornthwaite-Mather soil water modelling approach, and is parameterized with the soil's wilting point and field capacity, and a parameter relating soil water repellency with water holding capacity. It was tested with soil moisture data from burnt and unburnt hillslopes. This approach was able to simulate post-fire soil moisture patterns, which the model without repellency was unable to do. However, model parameters differed between the burnt and unburnt slopes, indicating that more research is needed to derive standardized parameters from commonly measured soil and vegetation properties. 2) Seasonal approach, pre-determining repellency at the seasonal scale (3 months) in four classes (from none to extreme). It is based on the Morgan-Morgan-Finney (MMF) runoff and erosion model, applied at the seasonal scale, and is parameterized with a parameter relating repellency class with field capacity. It was tested with runoff and erosion data from several experimental plots, and led to important improvements in runoff prediction over an approach with constant field capacity for all seasons (calibrated for repellency effects), but only slight improvements in erosion predictions. In contrast with the daily approach, the parameters could be reproduced between different sites. 3) Constant approach, specifying values for soil water repellency for the three years after the fire, and keeping them constant throughout the year. It is based on a daily Curve Number (CN) approach, was incorporated directly in the Soil and Water Assessment Tool (SWAT) model, and was tested with erosion data from a burnt hillslope. This approach was able to successfully reproduce soil erosion. The results indicate that simplified approaches can be used to adapt existing models for post-fire simulation, taking repellency into account. Taking into account the seasonality of repellency seems more important for simulating surface runoff than erosion, possibly because simulating the larger runoff rates correctly is sufficient for erosion simulation. The constant approach can be applied directly in the parameterization of existing runoff and erosion models for soil loss and sediment yield prediction, while the seasonal approach can readily be developed as a next step, with further work needed to assess whether the approach and associated parameters can be applied in multiple post-fire environments.
Urban Canopy Effects in Regional Climate Simulations - An Inter-Model Comparison
NASA Astrophysics Data System (ADS)
Halenka, T.; Huszar, P.; Belda, M.; Karlicky, J.
2017-12-01
To assess the impact of cities and urban surfaces on climate, a modeling approach is often used, with urban parameterizations included in the land-surface interactions. This is especially important when going to higher resolution, which is a common trend in both operational weather prediction and regional climate modelling. Model descriptions of urban-canopy meteorological effects can, however, differ considerably, depending especially on the underlying surface models and the urban canopy parameterizations, and this represents a source of uncertainty. Assessing this uncertainty is important for the adaptation and mitigation measures often applied in big cities, especially in connection with the climate change perspective, and it is one of the main tasks of the new project OP-PPR Proof of Concept UK. In this study we contribute to the estimation of this uncertainty by performing numerous experiments to assess the urban canopy meteorological forcing over central Europe on climate for the decade 2001-2010, using two regional climate models (RegCM4 and WRF) at 10 km resolution driven by ERA-Interim reanalyses, three surface schemes (BATS and CLM4.5 for RegCM4, and Noah for WRF) and five urban canopy parameterizations: one bulk urban scheme, three single-layer schemes and a multilayer urban scheme. Effects of cities on urban and remote areas were evaluated. There are some differences in the sensitivity of individual canopy model implementations to the UHI effects, depending on season and on the size of the city. The effect of a reduced diurnal temperature range in cities (around 2 °C in the summer mean) is noticeable in all simulations, independent of urban parameterization type and model, due to the well-known warmer summer city nights. For adaptation and mitigation purposes, the distribution of the urban heat island intensity is more important than its average, as it provides information on extreme UHI effects, e.g. during heat waves. We demonstrate that for big central European cities this effect can approach 10 °C, and even for smaller cities these extreme effects can exceed 5 °C.
Yue, Xu; Mickley, Loretta J.; Logan, Jennifer A.; Kaplan, Jed O.
2013-01-01
We estimate future wildfire activity over the western United States during the mid-21st century (2046–2065), based on results from 15 climate models following the A1B scenario. We develop fire prediction models by regressing meteorological variables from the current and previous years together with fire indexes onto observed regional area burned. The regressions explain 0.25–0.60 of the variance in observed annual area burned during 1980–2004, depending on the ecoregion. We also parameterize daily area burned with temperature, precipitation, and relative humidity. This approach explains ~0.5 of the variance in observed area burned over forest ecoregions but shows no predictive capability in the semi-arid regions of Nevada and California. By applying the meteorological fields from 15 climate models to our fire prediction models, we quantify the robustness of our wildfire projections at mid-century. We calculate increases of 24–124% in area burned using regressions and 63–169% with the parameterization. Our projections are most robust in the southwestern desert, where all GCMs predict significant (p<0.05) meteorological changes. For forested ecoregions, more GCMs predict significant increases in future area burned with the parameterization than with the regressions, because the latter approach is sensitive to hydrological variables that show large inter-model variability in the climate projections. The parameterization predicts that the fire season lengthens by 23 days in the warmer and drier climate at mid-century. Using a chemical transport model, we find that wildfire emissions will increase summertime surface organic carbon aerosol over the western United States by 46–70% and black carbon by 20–27% at mid-century, relative to the present day. The pollution is most enhanced during extreme episodes: above the 84th percentile of concentrations, OC increases by ~90% and BC by ~50%, while visibility decreases from 130 km to 100 km in 32 Federal Class 1 areas in Rocky Mountains Forest. PMID:24015109
USDA-ARS?s Scientific Manuscript database
The energy transport in a vegetated (corn) surface layer is examined by solving the vector radiative transfer equation using a numerical iterative approach. This approach allows a higher order that includes the multiple scattering effects. Multiple scattering effects are important when the optical t...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Tanmoy; Shell, M. Scott, E-mail: shell@engineering.ucsb.edu
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
NASA Astrophysics Data System (ADS)
Farquharson, C.; Long, J.; Lu, X.; Lelievre, P. G.
2017-12-01
Real-life geology is complex, and so, even when allowing for the diffusive, low-resolution nature of geophysical electromagnetic methods, we need Earth models that can accurately represent this complexity when modelling and inverting electromagnetic data. This is particularly the case for the scales, detail and conductivity contrasts involved in mineral and hydrocarbon exploration and development, but also for the larger scale of lithospheric studies. Unstructured tetrahedral meshes provide a flexible means of discretizing a general, arbitrary Earth model. This is important when wanting to integrate a geophysical Earth model with a geological Earth model parameterized in terms of surfaces. Finite-element and finite-volume methods can be derived for computing the electric and magnetic fields in a model parameterized using an unstructured tetrahedral mesh. A number of such variants have been proposed and have proven successful. However, the efficiency and accuracy of these methods can be affected by the "quality" of the tetrahedral discretization, that is, how many of the tetrahedral cells in the mesh are long, narrow and pointy. This is particularly the case if one wants to use an iterative technique to solve the resulting linear system of equations. One approach to dealing with this issue is to develop sophisticated model- and mesh-building and manipulation capabilities in order to ensure that any mesh built from geological information is of sufficient quality for the electromagnetic modelling. Another approach is to investigate other methods of synthesizing the electromagnetic fields. One such example is a "meshfree" approach in which the electromagnetic fields are synthesized using a mesh that is distinct from the mesh used to parameterize the Earth model. There are then two meshes, one describing the Earth model and one used for the numerical mathematics of computing the fields. This means that there are no longer any quality requirements on the model mesh, which makes the process of building a geophysical Earth model from a geological model much simpler. In this presentation we will explore the issues that arise when working with realistic Earth models and when synthesizing geophysical electromagnetic data for them. We briefly consider meshfree methods as a possible means of alleviating some of these issues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maurer, K. D.; Bohrer, G.; Kenny, W. T.
Surface roughness parameters, namely the roughness length and displacement height, are an integral input used to model surface fluxes. However, most models assume these parameters to be a fixed property of plant functional type and disregard the governing structural heterogeneity and dynamics. In this study, we use large-eddy simulations to explore, in silico, the effects of canopy-structure characteristics on surface roughness parameters. We performed a virtual experiment to test the sensitivity of resolved surface roughness to four axes of canopy structure: (1) leaf area index, (2) the vertical profile of leaf density, (3) canopy height, and (4) canopy gap fraction. We found roughness parameters to be highly variable, but uncovered positive relationships between displacement height and maximum canopy height, aerodynamic canopy height and maximum canopy height and leaf area index, and eddy-penetration depth and gap fraction. We also found negative relationships between aerodynamic canopy height and gap fraction, as well as between eddy-penetration depth and maximum canopy height and leaf area index. We generalized our model results into a virtual "biometric" parameterization that relates roughness length and displacement height to canopy height, leaf area index, and gap fraction. Using a decade of wind and canopy-structure observations in a site in Michigan, we tested the effectiveness of our model-driven biometric parameterization approach in predicting the friction velocity over heterogeneous and disturbed canopies. We compared the accuracy of these predictions with the friction-velocity predictions obtained from the common simple approximation related to canopy height, the values calculated with large-eddy simulations of the explicit canopy structure as measured by airborne and ground-based lidar, two other parameterization approaches that utilize varying canopy-structure inputs, and the annual and decadal means of the surface roughness parameters at the site from meteorological observations. We found that the classical representation of constant roughness parameters (in space and time) as a fraction of canopy height performed relatively well. Nonetheless, of the approaches we tested, most of the empirical approaches that incorporate seasonal and interannual variation of roughness length and displacement height as a function of the dynamics of canopy structure produced more precise and less biased estimates for friction velocity than models with temporally invariable parameters.
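For orientation, the sketch below shows how such a parameterization would be used: roughness length and displacement height are diagnosed from canopy height, leaf area index, and gap fraction, then fed into the neutral log wind profile to predict friction velocity. The functional forms and coefficients here are invented placeholders, not the fitted "biometric" relations from the study.

    import numpy as np

    KAPPA = 0.4  # von Karman constant

    def roughness_params(h, lai, gap_frac, c_d=0.67, c_z0=0.1):
        """Hypothetical biometric parameterization: displacement height d and
        roughness length z0 from canopy height, LAI and gap fraction. The
        coefficients are illustrative, not the study's fitted values."""
        d = c_d * h * (1.0 - gap_frac) * (1.0 - np.exp(-lai))
        z0 = c_z0 * h * (1.0 - gap_frac)
        return d, z0

    def friction_velocity(u, z, d, z0):
        """Invert the neutral log wind profile u(z) = (u*/kappa) ln((z-d)/z0)."""
        return KAPPA * u / np.log((z - d) / z0)

    d, z0 = roughness_params(h=20.0, lai=4.0, gap_frac=0.1)
    print(friction_velocity(u=3.0, z=34.0, d=d, z0=z0))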
A stochastic parameterization for deep convection using cellular automata
NASA Astrophysics Data System (ADS)
Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.
2012-12-01
Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble-average of the sub-grid convection, and the instantaneous state of the atmosphere in a vertical grid box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, it has been recognized by the community that it is important to introduce stochastic elements to the parameterizations (for instance: Plant and Craig, 2008, Khouider et al. 2010, Frenkel et al. 2011, Bengtsson et al. 2011, but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use two-way interacting cellular automata (CA), as their intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid-boxes, and temporal memory. Thus the CA scheme used in this study contains three interesting components for representation of cumulus convection, which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall-lines. Probabilistic evaluation demonstrates an enhanced spread in large-scale variables in regions where convective activity is large. A two-month extended evaluation of the deterministic behaviour of the scheme indicates a neutral impact on forecast skill. References: Bengtsson, L., H. Körnich, E. Källén, and G. Svensson, 2011: Large-scale dynamical response to sub-grid scale organization provided by cellular automata. Journal of the Atmospheric Sciences, 68, 3132-3144. Frenkel, Y., A. Majda, and B. Khouider, 2011: Using the stochastic multicloud model to improve tropical convective parameterization: A paradigm example. Journal of the Atmospheric Sciences, doi: 10.1175/JAS-D-11-0148.1. Huang, X.-Y., 1988: The organization of moist convection by internal gravity waves. Tellus A, 42, 270-285. Khouider, B., J. Biello, and A. Majda, 2010: A Stochastic Multicloud Model for Tropical Convection. Comm. Math. Sci., 8, 187-216. Palmer, T., 2011: Towards the Probabilistic Earth-System Simulator: A Vision for the Future of Climate and Weather Prediction. Quarterly Journal of the Royal Meteorological Society, 138, 841-861. Plant, R. and G. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65, 87-105.
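As a toy illustration of the three ingredients named above (horizontal communication, memory, stochasticity), here is a minimal cellular automaton sketch in Python. The neighborhood rule and seeding probability are invented for illustration and are not the rules used in ALARO.

    import numpy as np

    rng = np.random.default_rng(1)

    def ca_step(state, conv_active, seed_prob=0.05):
        """One update of a toy convection CA: cells are stochastically seeded
        where the host model diagnoses convection, spread to neighbors
        (lateral communication between grid boxes), and persist or decay
        (temporal memory). Thresholds here are illustrative only."""
        # count the 8 neighbors with periodic wrap-around
        nbrs = sum(np.roll(np.roll(state, i, 0), j, 1)
                   for i in (-1, 0, 1) for j in (-1, 0, 1)
                   if (i, j) != (0, 0))
        born = (nbrs >= 3) & conv_active
        seeded = conv_active & (rng.random(state.shape) < seed_prob)
        survive = (state == 1) & (nbrs >= 2)
        return (born | seeded | survive).astype(int)

    state = np.zeros((40, 40), dtype=int)
    active = np.zeros((40, 40), dtype=bool)
    active[15:25, 15:25] = True            # region of diagnosed deep convection
    for _ in range(20):
        state = ca_step(state, active)
    print(state.sum(), "active CA cells")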
Current Advances and Future Challenges in Adenoviral Vector Biology and Targeting
Campos, Samuel K.; Barry, Michael A.
2008-01-01
Gene delivery vectors based on Adenoviral (Ad) vectors have enormous potential for the treatment of both hereditary and acquired disease. Detailed structural analysis of the Ad virion, combined with functional studies has broadened our knowledge of the structure/function relationships between Ad vectors and host cells/tissues and substantial achievement has been made towards a thorough understanding of the biology of Ad vectors. The widespread use of Ad vectors for clinical gene therapy is compromised by their inherent immunogenicity. The generation of safer and more effective Ad vectors, targeted to the site of disease, has therefore become a great ambition in the field of Ad vector development. This review provides a synopsis of the structure/function relationships between Ad vectors and host systems and summarizes the many innovative approaches towards achieving Ad vector targeting. PMID:17584037
NASA Astrophysics Data System (ADS)
Ullrich, Romy; Hiranuma, Naruki; Hoose, Corinna; Möhler, Ottmar; Niemand, Monika; Steinke, Isabelle; Wagner, Robert
2014-05-01
Developing a new parameterization framework for the heterogeneous ice nucleation of atmospheric aerosol particles. Aerosols of different nature induce microphysical processes of importance for the Earth's atmosphere. They not only directly affect the radiative budget; more importantly, they essentially influence the formation and life cycles of clouds. Hence, aerosols and their ice nucleating ability are a fundamental input for weather and climate models. During the previous years, the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber was used to extensively measure, under nearly realistic conditions, the ice nucleating properties of different aerosols. Numerous experiments were performed with a broad variety of aerosol types and under different freezing conditions. A reanalysis of these experiments offers the opportunity to develop a uniform parameterization framework of ice formation for many atmospherically relevant aerosols over a broad temperature and humidity range. The analysis includes both deposition nucleation and immersion freezing. The aim of this study is to develop this comprehensive parameterization for heterogeneous ice formation mainly by using the ice nucleation active site (INAS) approach. Niemand et al. (2012) already developed a temperature-dependent parameterization of the INAS density for immersion freezing on desert dust particles. In addition to a reanalysis of the ice nucleation behaviour of desert dust (Niemand et al. (2012)), volcanic ash (Steinke et al. (2011)) and organic particles (Wagner et al. (2010, 2011)), this contribution will also show new results for the immersion freezing and deposition nucleation of soot aerosols. The next step will be the implementation of the parameterizations into the COSMO-ART model in order to test and demonstrate the usability of the framework. Hoose, C. and Möhler, O. (2012) Atmos. Chem. Phys. 12, 9817-9854; Niemand, M., Möhler, O., Vogel, B., Hoose, C., Connolly, P., Klein, H., Bingemer, H., DeMott, P.J., Skrotzki, J. and Leisner, T. (2012) J. Atmos. Sci. 69, 3077-3092; Steinke, I., Möhler, O., Kiselev, A., Niemand, M., Saathoff, H., Schnaiter, M., Skrotzki, J., Hoose, C. and Leisner, T. (2011) Atmos. Chem. Phys. 11, 12945-12958; Wagner, R., Möhler, O., Saathoff, H., Schnaiter, M. and Leisner, T. (2010) Atmos. Chem. Phys. 10, 7617-7641; Wagner, R., Möhler, O., Saathoff, H., Schnaiter, M. and Leisner, T. (2011) Atmos. Chem. Phys. 11, 2083-2110
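The INAS approach reduces immersion freezing to a surface-site density n_s(T): the expected frozen fraction follows from n_s times the available aerosol surface area. A minimal sketch follows, using an exponential desert dust fit of the form reported by Niemand et al. (2012); the coefficients are quoted from memory and should be treated as assumptions to verify against the paper.

    import numpy as np

    def inas_density_dust(T_K):
        """INAS density for desert dust, n_s(T) = exp(-0.517*(T-273.15)+8.934)
        in m^-2 (form of the Niemand et al. 2012 fit; coefficients quoted
        from memory, verify before use)."""
        return np.exp(-0.517 * (T_K - 273.15) + 8.934)

    def frozen_fraction(T_K, surface_area_m2):
        """Fraction of particles with at least one active site (Poisson model)."""
        return 1.0 - np.exp(-inas_density_dust(T_K) * surface_area_m2)

    # a 0.5-um-radius dust particle at -25 C
    A = 4 * np.pi * (0.5e-6) ** 2
    print(frozen_fraction(248.15, A))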
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, Kuo-Nan
2016-02-09
Under the support of the aforementioned DOE Grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) the development of an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions, and (2) a stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains for understanding and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and coupled mountain-mountain flux. "Exact" 3D Monte Carlo photon tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer program readily available in climate models. Subsequently, parameterizations of the deviations of 3D from PP results for the five flux components are carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme that has been included in the WRF physics package. Incorporating this 3D parameterization program, we conducted simulations of WRF and CCSM4 to understand and evaluate the mountain/snow effect on snow albedo reduction during seasonal transition and the interannual variability for snowmelt, cloud cover, and precipitation over the Western United States presented in the final report. With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the computation of light absorption and scattering by complex and inhomogeneous particles for application to aggregates and snow grains with external and internal mixing structures. We demonstrated that a small black carbon (BC) particle on the order of 1 μm internally mixed with snow grains could effectively reduce visible snow albedo by as much as 5–10%. Following this work and within the context of DOE support, we have made two key accomplishments presented in the attached final report.
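The core of the 3D parameterization in item (1) is an ordinary multiple linear regression of the 3D-minus-PP flux deviation onto topographic predictors. The sketch below fits such a regression with numpy on synthetic data; the predictor names follow the abstract, while the coefficients and data are fabricated purely to show the mechanics.

    import numpy as np

    # synthetic stand-in for "exact" 3D Monte Carlo minus plane-parallel flux
    # deviations; in the actual study each of the five flux components gets
    # its own regression against topographic predictors
    rng = np.random.default_rng(2)
    n = 500
    elevation = rng.uniform(0, 4000, n)            # m
    cos_sza = rng.uniform(0.2, 1.0, n)             # cosine of solar incidence
    sky_view = rng.uniform(0.5, 1.0, n)            # sky view factor
    terrain_cfg = rng.uniform(0.0, 0.5, n)         # terrain configuration factor
    deviation = (0.002 * elevation - 30 * cos_sza
                 + 40 * (1 - sky_view) + 25 * terrain_cfg
                 + rng.normal(0, 2, n))            # W m-2, synthetic

    X = np.column_stack([np.ones(n), elevation, cos_sza, sky_view, terrain_cfg])
    coef, *_ = np.linalg.lstsq(X, deviation, rcond=None)
    pred = X @ coef
    r = np.corrcoef(pred, deviation)[0, 1]
    print("regression coefficients:", coef.round(3), " correlation:", round(r, 3))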
Pseudotyped Lentiviral Vectors for Retrograde Gene Delivery into Target Brain Regions
Kobayashi, Kenta; Inoue, Ken-ichi; Tanabe, Soshi; Kato, Shigeki; Takada, Masahiko; Kobayashi, Kazuto
2017-01-01
Gene transfer through retrograde axonal transport of viral vectors offers a substantial advantage for analyzing roles of specific neuronal pathways or cell types forming complex neural networks. This genetic approach may also be useful in gene therapy trials by enabling delivery of transgenes into a target brain region distant from the injection site of the vectors. Pseudotyping of a lentiviral vector based on human immunodeficiency virus type 1 (HIV-1) with various fusion envelope glycoproteins composed of different combinations of rabies virus glycoprotein (RV-G) and vesicular stomatitis virus glycoprotein (VSV-G) enhances the efficiency of retrograde gene transfer in both rodent and nonhuman primate brains. The most recently developed lentiviral vector is a pseudotype with fusion glycoprotein type E (FuG-E), which demonstrates highly efficient retrograde gene transfer in the brain. The FuG-E–pseudotyped vector permits powerful experimental strategies for more precisely investigating the mechanisms underlying various brain functions. It also contributes to the development of new gene therapy approaches for neurodegenerative disorders, such as Parkinson’s disease, by delivering genes required for survival and protection into specific neuronal populations. In this review article, we report the properties of the FuG-E–pseudotyped vector, and we describe the application of the vector to neural circuit analysis and the potential use of the FuG-E vector in gene therapy for Parkinson’s disease. PMID:28824385
Production of SV40-derived vectors.
Strayer, David S; Mitchell, Christine; Maier, Dawn A; Nichols, Carmen N
2010-06-01
Recombinant simian virus 40 (rSV40)-derived vectors are particularly useful for gene delivery to bone marrow progenitor cells and their differentiated derivatives, certain types of epithelial cells (e.g., hepatocytes), and central nervous system neurons and microglia. They integrate rapidly into cellular DNA to provide long-term gene expression in vitro and in vivo in both resting and dividing cells. Here we describe a protocol for production and purification of these vectors. These procedures require only packaging cells (e.g., COS-7) and circular vector genome DNA. Amplification involves repeated infection of packaging cells with vector produced by transfection. Cotransfection is not required in any step. Viruses are purified by centrifugation using discontinuous sucrose or cesium chloride (CsCl) gradients and resulting vectors are replication-incompetent and contain no detectable wild-type SV40 revertants. These approaches are simple, give reproducible results, and may be used to generate vectors that are deleted only for large T antigen (Tag), or for all SV40-coding sequences capable of carrying up to 5 kb of foreign DNA. These vectors are best applied to long-term expression of proteins normally encoded by mammalian cells or by viruses that infect mammalian cells, or of untranslated RNAs (e.g., RNA interference). The preparative approaches described facilitate application of these vectors and allow almost any laboratory to exploit their strengths for diverse gene delivery applications.
Bayesian data assimilation provides rapid decision support for vector-borne diseases.
Jewell, Chris P; Brown, Richard G
2015-07-06
Predicting the spread of vector-borne diseases in response to incursions requires knowledge of both host and vector demographics in advance of an outbreak. Although host population data are typically available, for novel disease introductions there is a high chance of the pathogen using a vector for which data are unavailable. This presents a barrier to estimating the parameters of dynamical models representing host-vector-pathogen interaction, and hence limits their ability to provide quantitative risk forecasts. The Theileria orientalis (Ikeda) outbreak in New Zealand cattle demonstrates this problem: even though the vector has received extensive laboratory study, a high degree of uncertainty persists over its national demographic distribution. Addressing this, we develop a Bayesian data assimilation approach whereby indirect observations of vector activity inform a seasonal spatio-temporal risk surface within a stochastic epidemic model. We provide quantitative predictions for the future spread of the epidemic, quantifying uncertainty in the model parameters, case infection times and the disease status of undetected infections. Importantly, we demonstrate how our model learns sequentially as the epidemic unfolds and provide evidence for changing epidemic dynamics through time. Our approach therefore provides a significant advance in rapid decision support for novel vector-borne disease outbreaks. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Vector-transmitted disease vaccines: targeting salivary proteins in transmission (SPIT).
McDowell, Mary Ann
2015-08-01
More than half the population of the world is at risk for morbidity and mortality from vector-transmitted diseases, and emerging vector-transmitted infections are threatening new populations. Rising insecticide resistance and lack of efficacious vaccines highlight the need for novel control measures. One such approach is targeting the vector-host interface by incorporating vector salivary proteins in anti-pathogen vaccines. Debate remains about whether vector saliva exposure exacerbates or protects against more severe clinical manifestations, induces immunity through natural exposure or extends to all vector species and associated pathogens. Nevertheless, exploiting this unique biology holds promise as a viable strategy for the development of vaccines against vector-transmitted diseases. Copyright © 2015 Elsevier Ltd. All rights reserved.
Topology of the Relative Motion: Circular and Eccentric Reference Orbit Cases
NASA Technical Reports Server (NTRS)
Fontdecaba i Baig, Jordi; Metris, Gilles; Exertier, Pierre
2007-01-01
This paper deals with the topology of the relative trajectories in flight formations. The purpose is to study the different types of relative trajectories, their degrees of freedom, and to give an adapted parameterization. The paper also deals with the search for local circular motions. Even if they exist only when the reference orbit is circular, we extrapolate initial conditions to the eccentric reference orbit case. This alternative approach is complementary with traditional approaches in terms of cartesian coordinates or differences of orbital elements.
2012-07-06
... layer affected by ground interference. Using this approach for measurements acquired over the Salinas Valley, we showed that additional range gates... demonstrated the benefits of the two-step approach using measurements acquired over the Salinas Valley in central California. The additional range gates... four hours of data between the surface and 3000 m MSL along a 40 km segment of the Salinas Valley during this day. The airborne lidar measurements...
Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds
NASA Astrophysics Data System (ADS)
Yun, Yuxing; Penner, Joyce E.
2012-04-01
A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux at top of atmosphere and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.
A Nonlinear Interactions Approximation Model for Large-Eddy Simulation
NASA Astrophysics Data System (ADS)
Haliloglu, Mehmet U.; Akhavan, Rayhaneh
2003-11-01
A new approach to LES modelling is proposed based on direct approximation of the nonlinear terms \overline{u_iu_j} in the filtered Navier-Stokes equations, instead of the subgrid-scale stress, τ_ij. The proposed model, which we call the Nonlinear Interactions Approximation (NIA) model, uses graded filters and deconvolution to parameterize the local interactions across the LES cutoff, and a Smagorinsky eddy viscosity term to parameterize the distant interactions. A dynamic procedure is used to determine the unknown eddy viscosity coefficient, rendering the model free of adjustable parameters. The proposed NIA model has been applied to LES of turbulent channel flows at Re_τ ≈ 210 and Re_τ ≈ 570. The results show good agreement with DNS not only for the mean and resolved second-order turbulence statistics but also for the full (resolved plus subgrid) Reynolds stress and turbulence intensities.
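A minimal 1D analogue of the local-interaction part of such a model is sketched below, assuming a simple three-point filter and a truncated van Cittert series for deconvolution; both are stand-ins for the graded filters and deconvolution procedure actually used in the NIA model.

    import numpy as np

    def box_filter(u):
        """Simple three-point discrete filter (stand-in for graded filters)."""
        return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

    def deconvolve(u_bar, order=3):
        """Approximate inverse filter via the truncated van Cittert series
        u* = sum_k (I - G)^k u_bar."""
        u_star = np.zeros_like(u_bar)
        term = u_bar.copy()
        for _ in range(order + 1):
            u_star += term
            term = term - box_filter(term)
        return u_star

    # resolved nonlinear term modeled as the filtered product of deconvolved
    # fields, G(u* u*), capturing local interactions across the LES cutoff
    x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    u = np.sin(x) + 0.3 * np.sin(8 * x)
    u_bar = box_filter(u)
    uu_model = box_filter(deconvolve(u_bar) ** 2)
    uu_exact = box_filter(u ** 2)
    print("max model error:", np.abs(uu_model - uu_exact).max())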
Abad-Franch, Fernando; Valença-Barbosa, Carolina; Sarquis, Otília; Lima, Marli M.
2014-01-01
Background: Vector-borne diseases are major public health concerns worldwide. For many of them, vector control is still key to primary prevention, with control actions planned and evaluated using vector occurrence records. Yet vectors can be difficult to detect, and vector occurrence indices will be biased whenever spurious detection/non-detection records arise during surveys. Here, we investigate the process of Chagas disease vector detection, assessing the performance of the surveillance method used in most control programs – active triatomine-bug searches by trained health agents. Methodology/Principal Findings: Control agents conducted triplicate vector searches in 414 man-made ecotopes of two rural localities. Ecotope-specific ‘detection histories’ (vectors or their traces detected or not in each individual search) were analyzed using ordinary methods that disregard detection failures and multiple detection-state site-occupancy models that accommodate false-negative and false-positive detections. Mean (±SE) vector-search sensitivity was ∼0.283±0.057. Vector-detection odds increased as bug colonies grew denser, and were lower in houses than in most peridomestic structures, particularly woodpiles. False-positive detections (non-vector fecal streaks misidentified as signs of vector presence) occurred with probability ∼0.011±0.008. The model-averaged estimate of infestation (44.5±6.4%) was ∼2.4–3.9 times higher than naïve indices computed assuming perfect detection after single vector searches (11.4–18.8%); about 106–137 infestation foci went undetected during such standard searches. Conclusions/Significance: We illustrate a relatively straightforward approach to addressing vector detection uncertainty under realistic field survey conditions. Standard vector searches had low sensitivity except in certain singular circumstances. Our findings suggest that many infestation foci may go undetected during routine surveys, especially when vector density is low. Undetected foci can cause control failures and induce bias in entomological indices; this may confound disease risk assessment and mislead program managers into flawed decision making. By helping correct bias in naïve indices, the approach we illustrate has potential to critically strengthen vector-borne disease control-surveillance systems. PMID:25233352
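The machinery behind such detection-corrected estimates can be illustrated with a basic single-season site-occupancy likelihood. The sketch below omits the false-positive detection state the authors model, so it is a simplified analogue of their multiple detection-state formulation, and the data are synthetic.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(theta, detections, n_surveys=3):
        """Single-season occupancy likelihood (false positives ignored for
        brevity). detections[i] = number of surveys with vector signs."""
        psi, p = 1 / (1 + np.exp(-theta))            # logit -> probability
        ll = 0.0
        for d in detections:
            if d > 0:                                 # site certainly occupied
                ll += np.log(psi) + d * np.log(p) + (n_surveys - d) * np.log(1 - p)
            else:                                     # occupied-but-missed or empty
                ll += np.log(psi * (1 - p) ** n_surveys + (1 - psi))
        return -ll

    # synthetic detection histories for 100 ecotopes searched 3 times each
    rng = np.random.default_rng(3)
    occupied = rng.random(100) < 0.45
    dets = np.where(occupied, rng.binomial(3, 0.28, 100), 0)
    res = minimize(neg_log_lik, x0=[0.0, 0.0], args=(dets,))
    psi_hat, p_hat = 1 / (1 + np.exp(-res.x))
    print(f"occupancy ~ {psi_hat:.2f}, per-search sensitivity ~ {p_hat:.2f}")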
Evaluation of Warm-Rain Microphysical Parameterizations in Cloudy Boundary Layer Transitions
NASA Astrophysics Data System (ADS)
Nelson, K.; Mechem, D. B.
2014-12-01
Common warm-rain microphysical parameterizations used for marine boundary layer (MBL) clouds are either tuned for specific cloud types (e.g., the Khairoutdinov and Kogan 2000 parameterization, "KK2000") or are altogether ill-posed (Kessler 1969). An ideal microphysical parameterization should be "unified" in the sense of being suitable across MBL cloud regimes that include stratocumulus, cumulus rising into stratocumulus, and shallow trade cumulus. The recent parameterization of Kogan (2013, "K2013") was formulated for shallow cumulus but has been shown in a large-eddy simulation environment to work quite well for stratocumulus as well. We report on our efforts to implement and test this parameterization in a regional forecast model (NRL COAMPS). Results from K2013 and KK2000 are compared with the operational Kessler parameterization for a 5-day period of the VOCALS-REx field campaign, which took place over the southeast Pacific. We focus on both the relative performance of the three parameterizations and also on how they compare to the VOCALS-REx observations from the NOAA R/V Ronald H. Brown, in particular estimates of boundary-layer depth, liquid water path (LWP), cloud base, and area-mean precipitation rate obtained from C-band radar.
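For reference, KK2000 expresses the warm-rain process rates as power laws in cloud water and drop number. A sketch of the commonly quoted forms follows; the exponents and coefficients are reproduced from memory and should be verified against Khairoutdinov and Kogan (2000) before any serious use.

    def kk2000_autoconversion(qc, nc):
        """KK2000-style warm-rain autoconversion rate (dqr/dt in kg kg-1 s-1),
        with qc in kg/kg and Nc in cm-3; coefficients quoted from memory."""
        return 1350.0 * qc ** 2.47 * nc ** (-1.79)

    def kk2000_accretion(qc, qr):
        """Companion KK2000-style accretion rate, 67*(qc*qr)**1.15 (same caveat)."""
        return 67.0 * (qc * qr) ** 1.15

    # typical stratocumulus values: qc = 0.5 g/kg, Nc = 75 cm-3
    print(kk2000_autoconversion(qc=0.5e-3, nc=75.0))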
A Hamiltonian approach to the planar optimization of mid-course corrections
NASA Astrophysics Data System (ADS)
Iorfida, E.; Palmer, P. L.; Roberts, M.
2016-04-01
Lawden's primer vector theory gives a set of necessary conditions that characterize the optimality of a transfer orbit, defined according to the possibility of adding mid-course corrections. In this paper a novel approach is proposed where, through a polar coordinates transformation, the primer vector components decouple. Furthermore, the case when transfer, departure and arrival orbits are coplanar is analyzed using a Hamiltonian approach. This procedure leads to approximate analytic solutions for the in-plane components of the primer vector. Moreover, the solution for the circular transfer case is proven to be Hill's solution. The novel procedure reduces the mathematical and computational complexity of the original case study. It is shown that the primer vector is independent of the semi-major axis of the transfer orbit. The case with a fixed transfer trajectory and variable initial and final thrust impulses is studied. The resulting optimality maps are presented and analyzed; they express the likelihood that a set of trajectories is optimal. Furthermore, we present the requirements that a set of departure and arrival orbits must fulfill to share the same primer vector profile.
Strange resonance poles from Kπ scattering below 1.8 GeV
NASA Astrophysics Data System (ADS)
Pelaez, J. R.; Rodas, A.; Ruiz de Elvira, J.
2017-02-01
In this work we present a determination of the mass, width, and coupling of the resonances that appear in kaon-pion scattering below 1.8 GeV. These are the much-debated scalar κ-meson, nowadays known as K_0^*(800), the scalar K_0^*(1430), the K^*(892) and K_1^*(1410) vectors, the spin-two K_2^*(1430), as well as the spin-three K^*_3(1780). The parameters will be determined from the pole associated with each resonance by means of an analytic continuation of the Kπ scattering amplitudes obtained in a recent and precise data analysis constrained with dispersion relations, which were not well satisfied in previous analyses. This analytic continuation will be performed by means of Padé approximants, thus avoiding a particular model for the pole parameterization. We also pay particular attention to the evaluation of uncertainties.
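The Padé trick can be summarized as follows: from the Taylor coefficients c_k of the amplitude at a point s0 inside the analyticity domain, the [N/1] approximant has a single pole at s0 + c_N/c_{N+1}, which converges to the nearest true pole as N grows. A self-contained sketch on a toy amplitude with a known pole:

    import numpy as np

    def pade_pole(taylor_coeffs, s0, N):
        """Pole estimate from an [N/1] Pade approximant built from the Taylor
        coefficients c_k of the amplitude at expansion point s0: the
        approximant's single pole sits at s0 + c_N / c_{N+1}."""
        c = taylor_coeffs
        return s0 + c[N] / c[N + 1]

    # toy amplitude with a known pole at s_p = 0.7 - 0.3j: f(s) = g/(s_p - s)
    s_p, g, s0 = 0.7 - 0.3j, 1.5, 0.4 + 0.0j
    # Taylor coefficients of f around s0: c_k = g / (s_p - s0)**(k+1)
    c = [g / (s_p - s0) ** (k + 1) for k in range(6)]
    for N in (1, 2, 3, 4):
        print(N, pade_pole(c, s0, N))   # every order recovers s_p exactly here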
Generating Performance Models for Irregular Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav
2017-05-30
Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.
Interaction of 〈1 0 0〉 dislocation loops with dislocations studied by dislocation dynamics in α-iron
NASA Astrophysics Data System (ADS)
Shi, X. J.; Dupuy, L.; Devincre, B.; Terentyev, D.; Vincent, L.
2015-05-01
Interstitial dislocation loops with Burgers vector of 〈1 0 0〉 type are formed in α-iron under neutron or heavy ion irradiation. As the density and size of these loops increase with radiation dose and temperature, these defects are thought to play a key role in hardening and subsequent embrittlement of iron-based steels. The aim of the present work is to study the pinning strength of the loops on mobile dislocations. Prior to running massive Dislocation Dynamics (DD) simulations involving experimentally representative arrays of radiation defects and dislocations, the DD code and its parameterization are validated by comparing the individual loop-dislocation reactions with those obtained from direct atomistic Molecular Dynamics (MD) simulations. Several loop-dislocation reaction mechanisms are successfully reproduced, as are the values of the unpinning stress to detach mobile dislocations from the defects.
Solar physics applications of computer graphics and image processing
NASA Technical Reports Server (NTRS)
Altschuler, M. D.
1985-01-01
Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.
Limited Rank Matrix Learning, discriminative dimension reduction and visualization.
Bunte, Kerstin; Schneider, Petra; Hammer, Barbara; Schleif, Frank-Michael; Villmann, Thomas; Biehl, Michael
2012-02-01
We present an extension of the recently introduced Generalized Matrix Learning Vector Quantization algorithm. In the original scheme, adaptive square matrices of relevance factors parameterize a discriminative distance measure. We extend the scheme to matrices of limited rank corresponding to low-dimensional representations of the data. This allows us to incorporate prior knowledge of the intrinsic dimension and to reduce the number of adaptive parameters efficiently. In particular, for very high-dimensional data, the limitation of the rank can reduce computation time and memory requirements significantly. Furthermore, two- or three-dimensional representations constitute an efficient visualization method for labeled data sets. The identification of a suitable projection is not treated as a pre-processing step but as an integral part of the supervised training. Several real world data sets serve as an illustration and demonstrate the usefulness of the suggested method. Copyright © 2011 Elsevier Ltd. All rights reserved.
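The central object is a distance d(x,w) = (x-w)^T Omega^T Omega (x-w) parameterized by a rectangular matrix Omega whose row count is the chosen rank. A minimal sketch of classification and visualization with such a distance follows; in the actual algorithm Omega and the prototypes are adapted by supervised training, whereas here they are random placeholders.

    import numpy as np

    rng = np.random.default_rng(4)

    def lrm_distance(x, w, omega):
        """Discriminative distance d(x,w) = (x-w)^T Omega^T Omega (x-w),
        with Omega of shape (rank, dim); rank 2 or 3 doubles as a
        low-dimensional embedding for visualization."""
        diff = omega @ (x - w)
        return diff @ diff

    dim, rank = 20, 2
    omega = rng.normal(size=(rank, dim)) / np.sqrt(dim)   # adaptive in training
    x = rng.normal(size=dim)
    prototypes = rng.normal(size=(3, dim))                # e.g. one per class
    dists = [lrm_distance(x, w, omega) for w in prototypes]
    print("assigned class:", int(np.argmin(dists)))
    print("2-D embedding of x:", omega @ x)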
NASA Astrophysics Data System (ADS)
Hu, Xiao-Ming; Zhang, Fuqing; Nielsen-Gammon, John W.
2010-04-01
This study explores the treatment of model error and uncertainties through simultaneous state and parameter estimation (SSPE) with an ensemble Kalman filter (EnKF) in the simulation of a 2006 air pollution event over the greater Houston area during the Second Texas Air Quality Study (TexAQS-II). Two parameters in the atmospheric boundary layer parameterization associated with large model sensitivities are combined with standard prognostic variables in an augmented state vector to be continuously updated through assimilation of wind profiler observations. It is found that forecasts of the atmosphere with EnKF/SSPE are markedly improved over experiments with no state and/or parameter estimation. More specifically, the EnKF/SSPE is shown to help alleviate a near-surface cold bias and to alter the momentum mixing in the boundary layer to produce more realistic wind profiles.
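In outline, SSPE augments the model state with the uncertain parameters so that a single EnKF analysis updates both at once. A compact stochastic-EnKF sketch follows, with dimensions and the observation setup invented for illustration; the real system assimilates wind profiler data into WRF with two boundary layer parameters in the augmented state.

    import numpy as np

    rng = np.random.default_rng(5)

    def enkf_update(ensemble, obs, obs_err, H):
        """Stochastic EnKF analysis on an augmented ensemble whose rows stack
        state variables and uncertain parameters; both are updated by the
        same Kalman gain, the essence of simultaneous state and parameter
        estimation."""
        n_ens = ensemble.shape[1]
        A = ensemble - ensemble.mean(axis=1, keepdims=True)
        HA = H @ A
        P_hh = HA @ HA.T / (n_ens - 1) + obs_err ** 2 * np.eye(len(obs))
        P_xh = A @ HA.T / (n_ens - 1)
        K = P_xh @ np.linalg.inv(P_hh)
        perturbed = obs[:, None] + obs_err * rng.normal(size=(len(obs), n_ens))
        return ensemble + K @ (perturbed - H @ ensemble)

    # augmented vector: 3 wind components (observed) + 2 PBL parameters (not)
    H = np.hstack([np.eye(3), np.zeros((3, 2))])
    ens = np.vstack([rng.normal(5, 2, (3, 40)),        # state: winds
                     rng.uniform(0, 1, (2, 40))])      # parameters
    ens = enkf_update(ens, obs=np.array([6.0, 4.5, 5.5]), obs_err=0.5, H=H)
    print("posterior parameter means:", ens[3:].mean(axis=1).round(3))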
USDA-ARS?s Scientific Manuscript database
Irrigation is a widely used water management practice that is often poorly parameterized in land surface and climate models. Previous studies have addressed this issue via use of irrigation area, applied water inventory data, or soil moisture content. These approaches have a variety of drawbacks i...
USDA-ARS?s Scientific Manuscript database
Biochemical models of leaf photosynthesis, which are essential for understanding the impact of photosynthesis to changing environments, depend on accurate parameterizations. The CO2 photocompensation point can be especially difficult to determine accurately but can be measured from the intersection ...
Slicing cluster mass functions with a Bayesian razor
NASA Astrophysics Data System (ADS)
Sealfon, C. D.
2010-08-01
We apply a Bayesian "razor" to forecast Bayes factors between different parameterizations of the galaxy cluster mass function. To demonstrate this approach, we calculate the minimum size N-body simulation needed for strong evidence favoring a two-parameter mass function over one-parameter mass functions and vice versa, as a function of the minimum cluster mass.
A modified force-restore approach to modeling snow-surface heat fluxes
Charles H. Luce; David G. Tarboton
2001-01-01
Accurate modeling of the energy balance of a snowpack requires good estimates of the snow surface temperature. The snow surface temperature allows a balance between atmospheric heat fluxes and the conductive flux into the snowpack. While the dependency of atmospheric fluxes on surface temperature is reasonably well understood and parameterized, conduction of heat from...
Abstraction Techniques for Parameterized Verification
2006-11-01
approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques. ... model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite
Bengtsson, Niclas E.; Hall, John K.; Odom, Guy L.; Phelps, Michael P.; Andrus, Colin R.; Hawkins, R. David; Hauschka, Stephen D.; Chamberlain, Joel R.; Chamberlain, Jeffrey S.
2017-01-01
Gene replacement therapies utilizing adeno-associated viral (AAV) vectors hold great promise for treating Duchenne muscular dystrophy (DMD). A related approach uses AAV vectors to edit specific regions of the DMD gene using CRISPR/Cas9. Here we develop multiple approaches for editing the mutation in dystrophic mdx4cv mice using single and dual AAV vector delivery of a muscle-specific Cas9 cassette together with single-guide RNA cassettes and, in one approach, a dystrophin homology region to fully correct the mutation. Muscle-restricted Cas9 expression enables direct editing of the mutation, multi-exon deletion or complete gene correction via homologous recombination in myogenic cells. Treated muscles express dystrophin in up to 70% of the myogenic area and increased force generation following intramuscular delivery. Furthermore, systemic administration of the vectors results in widespread expression of dystrophin in both skeletal and cardiac muscles. Our results demonstrate that AAV-mediated muscle-specific gene editing has significant potential for therapy of neuromuscular disorders. PMID:28195574
Approaches to control diseases vectored by ambrosia beetles in avocado and other American Lauraceae
USDA-ARS?s Scientific Manuscript database
Invasive ambrosia beetles and the plant pathogenic fungi they vector represent a significant challenge to North American agriculture, native and landscape trees. Ambrosia beetles encompass a range of insect species and they vector a diverse set of plant pathogenic fungi. Our lab has taken several bi...
Structural Analysis of Biodiversity
Sirovich, Lawrence; Stoeckle, Mark Y.; Zhang, Yu
2010-01-01
Large, recently-available genomic databases cover a wide range of life forms, suggesting opportunity for insights into the genetic structure of biodiversity. In this study we refine our recently-described technique using indicator vectors to analyze and visualize nucleotide sequences. The indicator vector approach generates correlation matrices, dubbed Klee diagrams, which represent a novel way of assembling and viewing large genomic datasets. To explore its potential utility, here we apply the improved algorithm to a collection of almost 17000 DNA barcode sequences covering 12 widely-separated animal taxa, demonstrating that indicator vectors for classification gave correct assignment in all 11000 test cases. Indicator vector analysis revealed discontinuities corresponding to species- and higher-level taxonomic divisions, suggesting an efficient approach to classification of organisms from poorly-studied groups. As compared to standard distance metrics, indicator vectors preserve diagnostic character probabilities, enable automated classification of test sequences, and generate high-information-density single-page displays. These results support application of indicator vectors for comparative analysis of large nucleotide data sets and raise the prospect of gaining insight into broad-scale patterns in the genetic structure of biodiversity. PMID:20195371
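To indicate the flavor of the method, here is a toy indicator-vector construction in Python: each aligned sequence becomes a flattened one-hot 4xL vector, and the matrix of pairwise correlations is the raw material of a Klee diagram. The encoding details are a simplified stand-in for the published algorithm.

    import numpy as np

    BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

    def indicator_vector(seq):
        """One-hot encode an aligned nucleotide sequence into a flat 4*L
        indicator vector (a simplified version of the construction behind
        Klee diagrams)."""
        v = np.zeros((len(seq), 4))
        for i, b in enumerate(seq):
            if b in BASES:                 # gaps/ambiguities stay all-zero
                v[i, BASES[b]] = 1.0
        return v.ravel()

    # toy aligned barcodes from two "taxa"
    seqs = ["ACGTACGT", "ACGTACGA", "TGCATGCA", "TGCATGCC"]
    V = np.array([indicator_vector(s) for s in seqs])
    corr = np.corrcoef(V)                  # Klee-diagram-style matrix
    print(np.round(corr, 2))               # within-taxon blocks correlate highly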
NASA Astrophysics Data System (ADS)
Ferhat, Ipar
With increasing advancement in material science and the computational power of current computers that allows us to analyze high dimensional systems, very light and large structures are being designed and built for aerospace applications. One example is a reflector of a space telescope that is made of membrane structures. These reflectors are light and foldable, which makes shipment easier and cheaper, unlike traditional reflectors made of glass or other heavy materials. However, one of the disadvantages of membranes is that they are very sensitive to external changes, such as thermal load or maneuvering of the space telescope. These effects create vibrations that dramatically affect the performance of the reflector. To overcome vibrations in membranes, in this work, piezoelectric actuators are used to develop distributed controllers for membranes. These actuators generate bending effects to suppress the vibration. The actuators attached to a membrane are relatively thick, which makes the system heterogeneous; thus, an analytical solution cannot be obtained to solve the partial differential equation of the system. Therefore, the Finite Element Method is applied to obtain an approximate solution for the membrane actuator system. Another difficulty that arises with very flexible large structures is the dimension of the discretized system. To obtain an accurate result, the system needs to be discretized using smaller segments, which makes the dimension of the system very high. This issue will persist as long as improving technology allows increasingly complex and large systems to be designed and built. To deal with this difficulty, the analysis of the system and controller development to suppress the vibration are carried out using vector second order form as an alternative to vector first order form. In vector second order form, the number of equations that need to be solved is half the number of equations in vector first order form. Analyzing the system for control characteristics such as stability, controllability and observability is a key step that needs to be carried out before developing a controller. This analysis determines what kind of system is being modeled and the appropriate approach for controller development. Therefore, the accuracy of the system analysis is crucial. The results of the system analysis using vector second order form and vector first order form show the computational advantages of using vector second order form. Using similar concepts, LQR and LQG controllers, which are developed to suppress the vibration, are derived using vector second order form. To develop a controller using vector second order form, two different approaches are used. One is reducing the size of the Algebraic Riccati Equation to half by partitioning the solution matrix. The other approach is using the Hamiltonian method directly in vector second order form. Controllers are developed using both approaches and compared to each other. Some simple solutions for special cases are derived for vector second order form using the reduced Algebraic Riccati Equation. The advantages and drawbacks of both approaches are explained through examples. System analysis and controller applications are carried out for a square membrane system with four actuators. Two different systems with different actuator locations are analyzed. One system has the actuators at the corners of the membrane; the other has the actuators away from the corners.
The structural and control effects of actuator locations are demonstrated with mode shapes and simulations. The results of the controller applications and the comparison of the vector first order form with the vector second order form demonstrate the efficacy of the controllers.
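For contrast with the second-order formulation advocated above, the sketch below shows the standard first-order LQR baseline for a structural system M q'' + D q' + K q = B u, using scipy's continuous-time algebraic Riccati solver. The 2-DOF system and the weighting matrices are toy placeholders, not the membrane model from the work.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # toy 2-DOF membrane-like system: M q'' + D q' + K q = B u
    M = np.eye(2)
    D = 0.02 * np.eye(2)
    K = np.array([[4.0, -1.0], [-1.0, 3.0]])
    B = np.array([[1.0], [0.0]])

    # first-order (state-space) form, x = [q, q']; the second-order approach
    # described above solves a Riccati equation of half this size instead
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A1 = np.block([[np.zeros((n, n)), np.eye(n)],
                   [-Minv @ K, -Minv @ D]])
    B1 = np.vstack([np.zeros((n, 1)), Minv @ B])

    Q = np.eye(2 * n)          # state weight (illustrative)
    R = np.array([[1.0]])      # control weight (illustrative)
    P = solve_continuous_are(A1, B1, Q, R)
    K_lqr = np.linalg.solve(R, B1.T @ P)
    print("LQR gain:", K_lqr.round(3))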
NASA Astrophysics Data System (ADS)
Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.
2014-06-01
A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), the action of internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that the snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusion of BC/dust significantly reduces snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, K. N.; Takano, Y.; He, Cenlin
2014-06-27
A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo reduces more in the case of multiple inclusions of BC/dust compared to that of an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign together with satellite and reanalysis data are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multiple-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and first-order effects dominate compared to the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
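As a schematic of what "first-order effects dominate" means, the sketch below estimates a first-order sensitivity index by binning, i.e., Var[E(Y|X_i)]/Var[Y]. The model and parameter counts are synthetic; the study itself applies formal SA methods to the YSU/MM5 parameters.

    import numpy as np

    rng = np.random.default_rng(6)

    def first_order_variance_fraction(x, y, n_bins=8):
        """Crude first-order sensitivity: variance of the bin-conditional
        mean of the output over total output variance, a binning estimate
        of Var[E(Y|X_i)]/Var[Y]."""
        edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
        weights = np.array([(idx == b).mean() for b in range(n_bins)])
        overall = (weights * cond_means).sum()
        return (weights * (cond_means - overall) ** 2).sum() / y.var()

    # synthetic "model": output dominated by 2 of 5 parameters
    X = rng.uniform(0, 1, size=(2000, 5))
    y = 4 * X[:, 0] + 2 * X[:, 1] ** 2 + 0.1 * X[:, 2:].sum(axis=1)
    for i in range(5):
        print(f"param {i}: S1 ~ {first_order_variance_fraction(X[:, i], y):.2f}")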
2015-06-13
The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor. Christopher Celio, David Patterson, and Krste Asanović, University of California, Berkeley, California 94720. BOOM is a synthesizable, parameterized, superscalar out-of-order RISC-V core designed to serve as the prototypical baseline processor.
Shamir, Reuben R; Dolber, Trygve; Noecker, Angela M; Walter, Benjamin L; McIntyre, Cameron C
2015-01-01
Deep brain stimulation (DBS) of the subthalamic region is an established therapy for advanced Parkinson's disease (PD). However, patients often require time-intensive post-operative management to balance their coupled stimulation and medication treatments. Given the large and complex parameter space associated with this task, we propose that clinical decision support systems (CDSS) based on machine learning algorithms could assist in treatment optimization. Our objective was to develop a proof-of-concept implementation of a CDSS that incorporates patient-specific details on both stimulation and medication. Clinical data from 10 patients, and 89 post-DBS surgery visits, were used to create a prototype CDSS. The system was designed to provide three key functions: (1) information retrieval; (2) visualization of treatment; and (3) recommendation of expected effective stimulation and drug dosages, based on three machine learning methods: support vector machines, Naïve Bayes, and random forest. Measures of medication dosages, time factors, and symptom-specific pre-operative response to levodopa were significantly correlated with post-operative outcomes (P < 0.05), and their effect on outcomes was of similar magnitude to that of DBS. Using those results, the combined machine learning algorithms were able to accurately predict 86% (12/14) of the motor improvement scores at one year after surgery. Using patient-specific details, an appropriately parameterized CDSS could help select theoretically optimal DBS parameter settings and medication dosages that have the potential to improve the clinical management of PD patients. Copyright © 2015 Elsevier Inc. All rights reserved.
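The three learners named above combine naturally in a soft-voting ensemble. A minimal sklearn sketch follows; the features are synthetic stand-ins for the predictors mentioned in the abstract (medication dosage, time factors, pre-operative levodopa response), not the actual clinical data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)

    # synthetic stand-ins for the abstract's predictors
    n = 89
    X = np.column_stack([rng.uniform(200, 1500, n),   # levodopa-equivalent dose
                         rng.uniform(1, 52, n),       # weeks post-surgery
                         rng.uniform(0, 70, n)])      # % pre-op levodopa response
    improved = (0.04 * X[:, 2] + rng.normal(0, 1, n) > 1.0).astype(int)

    clf = VotingClassifier([("svm", SVC(probability=True)),
                            ("nb", GaussianNB()),
                            ("rf", RandomForestClassifier(n_estimators=200))],
                           voting="soft")
    print(cross_val_score(clf, X, improved, cv=5).mean())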
NASA Astrophysics Data System (ADS)
Moses, David A.; Mesgarani, Nima; Leonard, Matthew K.; Chang, Edward F.
2016-10-01
Objective. The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography. Approach. The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model. Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder. Main results. The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system. Significance. These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.
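The decoding step described above is standard Viterbi search over phoneme states. A compact sketch with synthetic inputs follows; in the NSR system the frame likelihoods would come from the LDA model on high gamma features and the transition matrix from the n-gram phonemic language model.

    import numpy as np

    def viterbi(log_likes, log_trans, log_prior):
        """Viterbi decoding: log_likes[t, q] are frame-wise phoneme
        log-likelihoods, log_trans[q, q'] are language-model transition
        log-probabilities, log_prior[q] the initial distribution."""
        T, Q = log_likes.shape
        delta = log_prior + log_likes[0]
        back = np.zeros((T, Q), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_trans           # (from, to)
            back[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + log_likes[t]
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    # toy 3-phoneme problem over 10 frames
    rng = np.random.default_rng(8)
    ll = np.log(rng.dirichlet(np.ones(3), size=10))
    lt = np.log(np.array([[.8, .1, .1], [.1, .8, .1], [.1, .1, .8]]))
    print(viterbi(ll, lt, np.log(np.ones(3) / 3)))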
NASA Astrophysics Data System (ADS)
Radestock, Martin; Rose, Michael; Monner, Hans Peter
2017-04-01
In most aviation applications, a major cost benefit can be achieved by a reduction of the system weight. Moreover, the acoustic properties of the fuselage structure are often not a focus of the primary design process. A final correction of poor acoustic properties is usually done using insulation mats in the chamber between the primary and secondary shell. It is plausible that a more sophisticated material distribution in that area can result in a substantially reduced weight. Topology optimization is a well-known approach to reducing the material of compliant structures. In this paper an adaptation of this method to acoustic problems is investigated. The gap filled with insulation mats is suitably parameterized to achieve different material distributions. To find advantageous configurations, the objective in the underlying topology optimization is chosen to obtain good acoustic pressure patterns in the aircraft cabin. An important task in the optimization is an adequate Finite Element model of the system. This usually cannot be obtained from commercially available programs due to the lack of special sensitivity data with respect to the design parameters. Therefore an appropriate implementation of the algorithm has been done, exploiting the vector and matrix capabilities of the MATLAB environment. Finally, some new aspects of the Finite Element implementation will also be presented, since they are interesting in their own right and can be generalized to efficiently solve other partial differential equations as well.
Integrating vector control across diseases.
Golding, Nick; Wilson, Anne L; Moyes, Catherine L; Cano, Jorge; Pigott, David M; Velayudhan, Raman; Brooker, Simon J; Smith, David L; Hay, Simon I; Lindsay, Steve W
2015-10-01
Vector-borne diseases cause a significant proportion of the overall burden of disease across the globe, accounting for over 10 % of the burden of infectious diseases. Despite the availability of effective interventions for many of these diseases, a lack of resources prevents their effective control. Many existing vector control interventions are known to be effective against multiple diseases, so combining vector control programmes to simultaneously tackle several diseases could offer more cost-effective and therefore sustainable disease reductions. The highly successful cross-disease integration of vaccine and mass drug administration programmes in low-resource settings acts as a precedent for cross-disease vector control. Whilst deliberate implementation of vector control programmes across multiple diseases has yet to be trialled on a large scale, a number of examples of 'accidental' cross-disease vector control suggest the potential of such an approach. Combining contemporary high-resolution global maps of the major vector-borne pathogens enables us to quantify overlap in their distributions and to estimate the populations jointly at risk of multiple diseases. Such an analysis shows that over 80 % of the global population live in regions of the world at risk from one vector-borne disease, and more than half the world's population live in areas where at least two different vector-borne diseases pose a threat to health. Combining information on co-endemicity with an assessment of the overlap of vector control methods effective against these diseases allows us to highlight opportunities for such integration. Malaria, leishmaniasis, lymphatic filariasis, and dengue are prime candidates for combined vector control. All four of these diseases overlap considerably in their distributions and there is a growing body of evidence for the effectiveness of insecticide-treated nets, screens, and curtains for controlling all of their vectors. The real-world effectiveness of cross-disease vector control programmes can only be evaluated by large-scale trials, but there is clear evidence of the potential of such an approach to enable greater overall health benefit using the limited funds available.
Serendipity in dark photon searches
NASA Astrophysics Data System (ADS)
Ilten, Philip; Soreq, Yotam; Williams, Mike; Xue, Wei
2018-06-01
Searches for dark photons provide serendipitous discovery potential for other types of vector particles. We develop a framework for recasting dark photon searches to obtain constraints on more general theories, which includes a data-driven method for determining hadronic decay rates. We demonstrate our approach by deriving constraints on a vector that couples to the B-L current, a leptophobic B boson that couples directly to baryon number and to leptons via B- γ kinetic mixing, and on a vector that mediates a protophobic force. Our approach can easily be generalized to any massive gauge boson with vector couplings to the Standard Model fermions, and software to perform any such recasting is provided at
A simple method for construction of artificial microRNA vector in plant.
Li, Yang; Li, Yang; Zhao, Sunping; Zhong, Sheng; Wang, Zhaohai; Ding, Bo; Li, Yangsheng
2014-10-01
Artificial microRNA (amiRNA) is a powerful tool for silencing genes in many plant species. Here we provide an easy method to construct amiRNA vectors that reinvents the Golden Gate cloning approach and features a novel system called top speed amiRNA construction (TAC). This speedy approach accomplishes one restriction-ligation step in only 5 min, allowing easy and high-throughput vector construction. Three primers are annealed to form a specific adaptor, which is then digested and ligated into our novel vector pTAC. Importantly, this method allows the recombined amiRNA constructs to maintain the precursor of osa-miR528 with the exception of the desired amiRNA/amiRNA* sequences. Using this method, our results showed the expected decrease in expression of the targeted genes in Nicotiana benthamiana and Oryza sativa.
Expressing Transgenes That Exceed the Packaging Capacity of Adeno-Associated Virus Capsids
Chamberlain, Kyle; Riyad, Jalish Mahmud; Weber, Thomas
2016-01-01
Recombinant adeno-associated virus vectors (rAAV) are being explored as gene delivery vehicles for the treatment of various inherited and acquired disorders. rAAVs are attractive vectors for several reasons: wild-type AAVs are nonpathogenic, and rAAVs can trigger long-term transgene expression even in the absence of genome integration—at least in postmitotic tissues. Moreover, rAAVs have a low immunogenic profile, and the various AAV serotypes and variants display broad but distinct tropisms. One limitation of rAAVs is that their genome-packaging capacity is only ∼5 kb. For most applications this is not of major concern because the median human protein size is 375 amino acids. Excluding the ITRs, for a protein of typical length, this allows the incorporation of ∼3.5 kb of DNA for the promoter, polyadenylation sequence, and other regulatory elements into a single AAV vector. Nonetheless, for certain diseases the packaging limit of AAV does not allow the delivery of a full-length therapeutic protein by a single AAV vector. Hence, approaches to overcome this limitation have become an important area of research for AAV gene therapy. Among the most promising approaches to overcome the limitation imposed by the packaging capacity of AAV is the use of dual-vector approaches, whereby a transgene is split across two separate AAV vectors. Coinfection of a cell with these two rAAVs will then—through a variety of mechanisms—result in the transcription of an assembled mRNA that could not be encoded by a single AAV vector because of the DNA packaging limits of AAV. The main purpose of this review is to assess the current literature with respect to dual-AAV-vector design, to highlight the effectiveness of the different methodologies and to briefly discuss future areas of research to improve the efficiency of dual-AAV-vector transduction. PMID:26757051
NASA Technical Reports Server (NTRS)
Beck, Louisa R.; Rodriquez, Mario H.; Dister, Sheri W.; Rodriquez, Americo D.; Rejmankova, Eliska; Ulloa, Armando; Meza, Rosa A.; Roberts, Donald R.; Paris, Jack F.; Spanner, Michael A.
1994-01-01
A landscape approach using remote sensing and Geographic Information System (GIS) technologies was developed to discriminate between villages at high and low risk for malaria transmission, as defined by adult Anopheles albimanus abundance. Satellite data for an area in southern Chiapas, Mexico were digitally processed to generate a map of landscape elements. The GIS processes were used to determine the proportion of mapped landscape elements surrounding 40 villages where An. albimanus data had been collected. The relationships between vector abundance and landscape element proportions were investigated using stepwise discriminant analysis and stepwise linear regression. Both analyses indicated that the most important landscape elements in terms of explaining vector abundance were transitional swamp and unmanaged pasture. Discriminant functions generated for these two elements were able to correctly distinguish between villages with high and low vector abundance, with an overall accuracy of 90%. Regression results found both transitional swamp and unmanaged pasture proportions to be predictive of vector abundance during the mid-to-late wet season. This approach, which integrates remotely sensed data and GIS capabilities to identify villages with high vector-human contact risk, provides a promising tool for malaria surveillance programs that depend on labor-intensive field techniques. This is particularly relevant in areas where the lack of accurate surveillance capabilities may result in no malaria control action when, in fact, directed action is necessary. In general, this landscape approach could be applied to other vector-borne diseases in areas where (1) the landscape elements critical to vector survival are known and (2) these elements can be detected at remote sensing scales.
Implementing a warm cloud microphysics parameterization for convective clouds in NCAR CESM
NASA Astrophysics Data System (ADS)
Shiu, C.; Chen, Y.; Chen, W.; Li, J. F.; Tsai, I.; Chen, J.; Hsu, H.
2013-12-01
Most cumulus convection schemes use simple empirical approaches to convert cloud liquid mass to rain water or cloud ice to snow, e.g., using a constant autoconversion rate and dividing cloud liquid mass into cloud water and ice as a function of air temperature (e.g., the Zhang and McFarlane scheme in the NCAR CAM model). Few studies have tried to use cloud microphysical schemes to better simulate such precipitation processes in the convective schemes of global models (e.g., Lohmann [2008] and Song, Zhang, and Li [2012]). A two-moment warm cloud parameterization (i.e., Chen and Liu [2004]) is implemented into the deep convection scheme of CAM5.2 of the CESM model for treatment of the conversion of cloud liquid water to rain water. Short-term AMIP-type global simulations are conducted to evaluate the possible impacts of the modification of this physical parameterization. Simulated results are further compared to observational results from the AMWG diagnostic package and CloudSAT data sets. Several sensitivity tests regarding changes in cloud top droplet concentration (as a rough test of aerosol indirect effects) and changes in detrained cloud size of convective cloud ice are also carried out to understand their possible impacts on the cloud and precipitation simulations.
Enhanced representation of soil NO emissions in the ...
Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12 km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.
NASA Astrophysics Data System (ADS)
Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur
2015-03-01
Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies with a specific aim at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity for a semi-distributed (subcatchments hereafter called elements) and distributed (1 × 1 km2 grid) setup. We evaluated representation of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods up to 0.84 and 0.86 respectively, and similarly for the log-transformed streamflow up to 0.85 and 0.90. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or subgrid heterogeneities provided simulation performance equivalent to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from denser precipitation stations than what is required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in the identification of parameterizations based only on calibration to catchment-integrated streamflow observations, and (iii) there is a potential preference for the simple and parsimonious parameterizations in operational forecasting, contingent on their equivalent simulation performance for the available input data. In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.
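For reference, the Nash-Sutcliffe efficiency quoted above can be computed in a few lines; applying it to log-transformed flows, as in the study, emphasizes low-flow performance. A minimal sketch with invented flow values:

```python
# Nash-Sutcliffe efficiency (NSE): 1 for a perfect simulation, 0 when the
# simulation is no better than the observed mean.
import numpy as np

def nse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 2.0, 4.0, 3.0, 2.5])   # observed hourly streamflow
sim = np.array([1.1, 1.8, 3.7, 3.2, 2.4])   # simulated hourly streamflow
print(nse(sim, obs), nse(np.log(sim), np.log(obs)))
```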
NASA Astrophysics Data System (ADS)
Guo, Yamin; Cheng, Jie; Liang, Shunlin
2018-02-01
Surface downward longwave radiation (SDLR) is a key variable for calculating the earth's surface radiation budget. In this study, we evaluated seven widely used clear-sky parameterization methods using ground measurements collected from 71 globally distributed fluxnet sites. The Bayesian model averaging (BMA) method was also introduced to obtain a multi-model ensemble estimate. As a whole, the parameterization method of Carmona et al. (2014) performs the best, with an average BIAS, RMSE, and R2 of -0.11 W/m2, 20.35 W/m2, and 0.92, respectively, followed by the parameterization methods of Idso (1981), Prata (Q J R Meteorol Soc 122:1127-1151, 1996), Brunt (Q J R Meteorol Soc 58:389-420, 1932), and Brutsaert (Water Resour Res 11:742-744, 1975). The accuracy of the BMA is close to that of the parameterization method of Carmona et al. (2014) and comparable to that of the parameterization method of Idso (1981). The advantage of the BMA is that it achieves balanced results compared to the integrated single parameterization methods. To fully assess the performance of the parameterization methods, the effects of climate type, land cover, and surface elevation were also investigated. The five parameterization methods and the BMA all failed over land with a tropical climate, where water vapor is high, and had poor results over forest, wetland, and ice. These methods achieved better results over desert, bare land, cropland, and grass, and had acceptable accuracies for sites at different elevations, except for the parameterization method of Carmona et al. (2014) over high-elevation sites. Thus, a method that can be successfully applied everywhere does not exist.
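A sketch of one such clear-sky parameterization and a BMA-style combination. The Brunt-type formula below uses the classic textbook coefficients (0.52 and 0.065, vapor pressure in hPa), which are not necessarily those calibrated in the study, and the BMA weights are illustrative placeholders:

```python
# Brunt-type clear-sky SDLR estimate plus a weighted ensemble average in the
# spirit of Bayesian model averaging.
import numpy as np

SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m-2 K-4

def sdlr_brunt(t_air_k, e_hpa, a=0.52, b=0.065):
    emissivity = a + b * np.sqrt(e_hpa)      # effective atmospheric emissivity
    return emissivity * SIGMA * t_air_k ** 4

def sdlr_bma(estimates, weights):
    w = np.asarray(weights) / np.sum(weights)
    return np.dot(w, estimates)              # weighted multi-model estimate

members = [sdlr_brunt(288.0, 12.0), 310.0, 305.0]   # W m-2; others stubbed
print(sdlr_bma(members, [0.4, 0.35, 0.25]))
```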
Progress in malaria vector control.
Pant, C P; Rishikesh, N; Bang, Y H; Smith, A
1981-01-01
Malaria control, except in tropical Africa, will probably continue to be based to a large extent on the use of insecticides for many years. However, the development of resistance to insecticides in the vectors has caused serious difficulties and it is necessary to change the strategy of insecticide use to maximize their efficacy. A thorough knowledge of the ecology and behaviour of each vector species is required before the control strategy can be adapted to different epidemiological situations. The behavioural differences between sibling species have been recognized for several years, but study of this problem has recently been simplified by improved means of identification that involve chromosomal banding patterns and electrophoretic analysis. Behavioural differences have also been associated with certain chromosomal rearrangements. New records of insecticide resistance among anophelines continue to appear and the impact of this on antimalaria operations has been seriously felt in Central America (multi-resistance in Anopheles albimanus), Turkey (A. sacharovi), India and several Asian countries (A. culicifacies and A. stephensi), and some other countries. Work continues on the screening and testing of newer insecticides that can be used as alternatives, but DDT, malathion, temephos, fenitrothion, and propoxur continue to be used as the main insecticides in many malaria control projects. The search for simpler and innovative approaches to insecticide application also continues. Biological control of vectors is receiving increased attention, as it could become an important component of integrated vector control strategies, and most progress has been made with the spore-forming bacterium, serotype H-14 of Bacillus thuringiensis. Larvivorous fish such as Gambusia spp. and Poecilia spp. continue to be used in some programmes. Application of environmental management measures, such as source reduction, source elimination, flushing of drainage and irrigation channels, and intermittent irrigation have been re-examined and currently a great deal of interest is being shown in these approaches. There has been limited interest in the genetic control of mosquitos and the phenomenon of refractoriness in some strains of the disease vectors, with the idea of replacing the vector species with the refractory strain. More research is needed before this approach can become a practical tool. It is apparent that in future a more integrated approach will have to be used for vector control within the context of antimalaria programmes. Training of staff, research, and cooperation at all levels will be an essential requirement for this approach.
An Overview of Numerical Weather Prediction on Various Scales
NASA Astrophysics Data System (ADS)
Bao, J.-W.
2009-04-01
The increasing public need for detailed weather forecasts, along with the advances in computer technology, has motivated many research institutes and national weather forecasting centers to develop and run global as well as regional numerical weather prediction (NWP) models at high resolutions (i.e., with horizontal resolutions of ~10 km or higher for global models and 1 km or higher for regional models, and with ~60 vertical levels or more). The need for running NWP models at high horizontal and vertical resolutions requires the implementation of a non-hydrostatic dynamical core with a choice of horizontal grid configurations and vertical coordinates that are appropriate for high resolutions. Development of advanced numerics will also be needed for high-resolution global and regional models, in particular when the models are applied to transport problems and air quality applications. In addition to the challenges in numerics, the NWP community is also facing the challenge of developing physics parameterizations that are well suited for high-resolution NWP models. For example, when NWP models are run at resolutions of ~5 km or higher, the use of much more detailed microphysics parameterizations than those currently used in NWP models will become important. Another example is that regional NWP models at ~1 km or higher only partially resolve the convective energy-containing eddies in the lower troposphere. Parameterizations to account for the subgrid diffusion associated with unresolved turbulence still need to be developed. Further, physically sound parameterizations for air-sea interaction will be a critical component for tropical NWP models, particularly for hurricane prediction models. In this review presentation, the above issues will be elaborated on and approaches to address them will be discussed.
A physically-based approach of treating dust-water cloud interactions in climate models
NASA Astrophysics Data System (ADS)
Kumar, P.; Karydis, V.; Barahona, D.; Sokolik, I. N.; Nenes, A.
2011-12-01
All aerosol-cloud-climate assessment studies to date assume that the ability of dust (and other insoluble species) to act as cloud condensation nuclei (CCN) is determined solely by their dry size and amount of soluble material. Recent evidence, however, clearly shows that dust can act as efficient CCN (even if lacking appreciable amounts of soluble material) through adsorption of water vapor onto the surface of the particle. This "inherent" CCN activity is augmented as the dust accumulates soluble material through atmospheric aging. A comprehensive treatment of dust-cloud interactions therefore requires including both of these sources of CCN activity in atmospheric models. This study presents a "unified" theory of CCN activity that considers both effects of adsorption and solute. The theory is corroborated and constrained with experiments of CCN activity of mineral aerosols generated from clays, calcite, quartz, dry lake beds and desert soil samples from Northern Africa, East Asia/China, and North America. The unified activation theory is then included within the mechanistic droplet activation parameterization of Kumar et al. (2009) (including the giant CCN correction of Barahona et al., 2010), for a comprehensive treatment of dust impacts on global CCN and cloud droplet number. The parameterization is demonstrated with the NASA Global Modeling Initiative (GMI) Chemical Transport Model using wind fields computed with the Goddard Institute for Space Studies (GISS) general circulation model. References Barahona, D. et al. (2010) Comprehensively Accounting for the Effect of Giant CCN in Cloud Activation Parameterizations, Atmos.Chem.Phys., 10, 2467-2473 Kumar, P., I.N. Sokolik, and A. Nenes (2009), Parameterization of cloud droplet formation for global and regional models: including adsorption activation from insoluble CCN, Atmos.Chem.Phys., 9, 2517- 2532
Aerosol hygroscopic growth parameterization based on a solute specific coefficient
NASA Astrophysics Data System (ADS)
Metzger, S.; Steil, B.; Xu, L.; Penner, J. E.; Lelieveld, J.
2011-09-01
Water is a main component of atmospheric aerosols and its amount depends on the particle chemical composition. We introduce a new parameterization for the aerosol hygroscopic growth factor (HGF), based on an empirical relation between water activity (aw) and solute molality (μs) through a single solute specific coefficient νi. Three main advantages are: (1) wide applicability, (2) simplicity and (3) analytical nature. (1) Our approach considers the Kelvin effect and covers ideal solutions at large relative humidity (RH), including CCN activation, as well as concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). (2) A single νi coefficient suffices to parameterize the HGF for a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. (3) In contrast to previous methods, our analytical aw parameterization depends not only on a linear correction factor for the solute molality; instead, νi also appears in the exponent, in the form x · a^x. According to our findings, νi can be assumed constant for the entire aw range (0-1). Thus, the νi based method is computationally efficient. In this work we focus on single solute solutions, where νi is pre-determined with the bisection method from our analytical equations using RHD measurements and the saturation molality μs^sat. The computed aerosol HGF and supersaturation (Köhler-theory) compare well with the results of the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4 relevant for CCN modeling and calibration studies. The equations introduced here provide the basis of our revised gas-liquid-solid partitioning model, i.e. version 4 of the EQuilibrium Simplified Aerosol Model (EQSAM4), described in a companion paper.
NASA Astrophysics Data System (ADS)
Breen, S. J.; Lochbuehler, T.; Detwiler, R. L.; Linde, N.
2013-12-01
Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic ERT inversion approaches, probabilistic inversion provides not only a single saturation model but a full posterior probability density function for each model parameter. Furthermore, the uncertainty inherent in the underlying petrophysics (e.g., Archie's Law) can be incorporated in a straightforward manner. In this study, the data are from bench-scale ERT experiments conducted during gas injection into a quasi-2D (1 cm thick), translucent, brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. We estimate saturation fields by Markov chain Monte Carlo sampling with the MT-DREAM(ZS) algorithm and compare them quantitatively to independent saturation measurements from a light transmission technique, as well as results from deterministic inversions. Different model parameterizations are evaluated in terms of the recovered saturation fields and petrophysical parameters. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values and gradients in structural elements defined by a Gaussian bell of arbitrary shape and location. Synthetic tests reveal that a priori knowledge about the expected geologic structures (as in parameterization (3)) markedly improves the parameter estimates. The number of degrees of freedom thus strongly affects the inversion results. In an additional step, we explore the effects of assuming that the total volume of injected gas is known a priori and that no gas has migrated away from the monitored region.
Carbonaro Sarracino, Denise; Tarantal, Alice F; Lee, C Chang I; Martinez, Michele; Jin, Xiangyang; Wang, Xiaoyan; Hardee, Cinnamon L; Geiger, Sabine; Kahl, Christoph A; Kohn, Donald B
2014-10-01
Systemic delivery of a lentiviral vector carrying a therapeutic gene represents a new treatment for monogenic disease. Previously, we have shown that transfer of the adenosine deaminase (ADA) cDNA in vivo rescues the lethal phenotype and reconstitutes immune function in ADA-deficient mice. In order to translate this approach to ADA-deficient severe combined immune deficiency patients, neonatal ADA-deficient mice and newborn rhesus monkeys were treated with species-matched and mismatched vectors and pseudotypes. We compared gene delivery by the HIV-1-based vector to murine γ-retroviral vectors pseudotyped with vesicular stomatitis virus-glycoprotein or murine retroviral envelopes in ADA-deficient mice. The vesicular stomatitis virus-glycoprotein pseudotyped lentiviral vectors had the highest titer and resulted in the highest vector copy number in multiple tissues, particularly liver and lung. In monkeys, HIV-1 or simian immunodeficiency virus vectors resulted in similar biodistribution in most tissues including bone marrow, spleen, liver, and lung. Simian immunodeficiency virus pseudotyped with the gibbon ape leukemia virus envelope produced 10- to 30-fold lower titers than the vesicular stomatitis virus-glycoprotein pseudotype, but had a similar tissue biodistribution and similar copy number in blood cells. The relative copy numbers achieved in mice and monkeys were similar when adjusted to the administered dose per kg. These results suggest that this approach can be scaled-up to clinical levels for treatment of ADA-deficient severe combined immune deficiency subjects with suboptimal hematopoietic stem cell transplantation options.
Alternatives to the stochastic "noise vector" approach
NASA Astrophysics Data System (ADS)
de Forcrand, Philippe; Jäger, Benjamin
2018-03-01
Several important observables, like the quark condensate and the Taylor coefficients of the expansion of the QCD pressure with respect to the chemical potential, are based on the trace of the inverse Dirac operator and of its powers. Such traces are traditionally estimated with "noise vectors" sandwiching the operator. We explore alternative approaches based on polynomial approximations of the inverse Dirac operator.
Sentence alignment using feed forward neural network.
Fattah, Mohamed Abdel; Ren, Fuji; Kuroiwa, Shingo
2006-12-01
Parallel corpora have become an essential resource for work in multilingual natural language processing. However, sentence-aligned parallel corpora are more efficient than non-aligned parallel corpora for cross-language information retrieval and machine translation applications. In this paper, we present a new approach to align sentences in bilingual parallel corpora based on a feed forward neural network classifier. A feature parameter vector is extracted from the text pair under consideration. This vector contains text features such as length, punctuation score, and cognate score values. A set of manually prepared training data was used to train the feed forward neural network. Another set of data was used for testing. Using this new approach, we achieved an error reduction of 60% over the length-based approach when applied to English-Arabic parallel documents. Moreover, this new approach is valid for any language pair and is quite flexible, since the feature parameter vector may contain more, fewer, or different features than those we used in our system, such as a lexical match feature.
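A toy version of the feature-vector-plus-classifier idea, with naive stand-ins for the length, punctuation, and cognate features and scikit-learn's MLP as the feed forward network:

```python
# Each candidate sentence pair is reduced to a small feature vector, then a
# feed-forward network classifies it as aligned or not. The feature
# extractors below are simplistic placeholders, not those of the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

def features(src, tgt):
    length_ratio = len(src) / max(len(tgt), 1)
    punct = lambda s: sum(c in ".,;:!?" for c in s)
    punct_score = abs(punct(src) - punct(tgt))
    cognates = len(set(src.lower().split()) & set(tgt.lower().split()))
    return [length_ratio, punct_score, cognates]

pairs = [("Hello world.", "Hello monde."),
         ("A cat.", "Completely unrelated text here.")]
X = np.array([features(s, t) for s, t in pairs])
y = np.array([1, 0])                    # 1 = aligned sentence pair
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.predict(X))
```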
Link-Based Similarity Measures Using Reachability Vectors
Yoon, Seok-Ho; Kim, Ji-Soo; Ryu, Minsoo; Choi, Ho-Jin
2014-01-01
We present a novel approach for computing link-based similarities among objects accurately by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach for computing link based similarities that does not suffer from these problems. In the proposed approach each target object is represented by a vector with one element for each object in the given data, and the value of each element denotes the weight for the corresponding object. As for this weight value, we propose to utilize the probability of reaching from the target object to the specific object, computed using the “Random Walk with Restart” strategy. Then, we define the similarity between two objects as the cosine similarity of the two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods in comparison with existing link-based measures, qualitatively and quantitatively, with respect to two kinds of data sets, scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures. PMID:24701188
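A compact sketch of the proposed measure: reachability vectors computed by Random Walk with Restart on a toy column-stochastic link matrix, then compared with cosine similarity. The restart probability and graph are assumptions:

```python
# Reachability vector via Random Walk with Restart (RWR), then cosine
# similarity between two objects' vectors.
import numpy as np

def rwr_vector(A, seed, c=0.15, tol=1e-10):
    """A: column-stochastic transition matrix; c: restart probability."""
    n = A.shape[0]
    e = np.zeros(n); e[seed] = 1.0
    r = e.copy()
    while True:
        r_new = (1 - c) * A @ r + c * e     # walk step plus restart
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

A = np.array([[0, .5, .5, 0], [.5, 0, 0, .5],
              [.5, 0, 0, .5], [0, .5, .5, 0]])   # toy 4-object link graph
vecs = [rwr_vector(A, i) for i in range(4)]
print(cosine(vecs[0], vecs[3]))                   # similarity of objects 0, 3
```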
NASA Astrophysics Data System (ADS)
Irazoqui Apecechea, Maialen; Verlaan, Martin; Zijl, Firmijn; Le Coz, Camille; Kernkamp, Herman
2017-06-01
We assessed the impact of the self-attraction and loading (SAL) effect in a regional 2D barotropic tidal model. SAL is a term of acknowledged and well-understood importance for global models, but it is usually omitted in boundary-forced regional models, for which its implementation is non-trivial due to its non-local nature. In order to understand the impact of the lack of SAL effects at a regional scale, we have forced a regional model of the Northwest European Continental Shelf and the North Sea (continental shelf model (CSM)) with the SAL potential field derived from a global model (GTSM), in the form of a pressure field. Impacts have been studied in an uncalibrated setup and with only tidal forcing activated, in order to isolate effects. Additionally, the usually adopted simple SAL parameterization, in which the SAL contribution to the total tide is parameterized as a percentage of the barotropic pressure gradient (typically chosen as 10%), is also implemented and compared to the results obtained with a full SAL computation. A significant impact on M2 representation is observed in the English Channel, Irish Sea and the west (UK East coast) and south (Belgian and Dutch coast) of the North Sea, with an impact of up to 20 cm in vector difference terms. The impact of SAL translates into a consistent reduction of M2 amplitudes and propagation speeds throughout the domain. Results using the beta approximation, with an optimal domain-wide constant value of 1.5%, show a comparable impact on phases but an opposite effect on amplitudes, which increase everywhere. In relative terms, both implementations lead to a reduction of the tidal representation error in comparison with the reference run without SAL, with the full SAL approach showing further improved results. Although the overprediction of tidal amplitudes and propagation speeds in the reference run might have additional sources, like the lack of additional dissipative processes and non-considered bottom friction settings, results show an overall significant impact, most remarkable in tidal phases. After showing evidence of the SAL impact in regional models, the question arises of how to include this physical process in them in an efficient way, since SAL is a non-local effect and depends on the instantaneous water levels over the whole ocean, which is non-trivial to implement.
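The beta approximation mentioned above amounts to scaling the barotropic pressure-gradient term by (1 - beta). A toy illustration on a 1-D transect, with arbitrary water levels and the study's optimal beta of 1.5%:

```python
# Beta approximation for SAL: the pressure-gradient acceleration in the
# momentum equation is reduced by a constant fraction beta. Grid spacing and
# elevation field are arbitrary stand-ins; other momentum terms are omitted.
import numpy as np

g, beta, dx = 9.81, 0.015, 1000.0
eta = np.sin(np.linspace(0, 2 * np.pi, 101))   # water level along a transect

# du/dt = -g * (1 - beta) * d(eta)/dx + ...
detadx = np.gradient(eta, dx)
accel_no_sal = -g * detadx
accel_beta_sal = -g * (1.0 - beta) * detadx
print(accel_no_sal[:3], accel_beta_sal[:3])
```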
NASA Technical Reports Server (NTRS)
Chao, Winston C.
2015-01-01
The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.
Ruff, Kiersten M.; Harmon, Tyler S.; Pappu, Rohit V.
2015-01-01
We report the development and deployment of a coarse-graining method that is well suited for computer simulations of aggregation and phase separation of protein sequences with block-copolymeric architectures. Our algorithm, named CAMELOT for Coarse-grained simulations Aided by MachinE Learning Optimization and Training, leverages information from converged all atom simulations that is used to determine a suitable resolution and parameterize the coarse-grained model. To parameterize a system-specific coarse-grained model, we use a combination of Boltzmann inversion, non-linear regression, and a Gaussian process Bayesian optimization approach. The accuracy of the coarse-grained model is demonstrated through direct comparisons to results from all atom simulations. We demonstrate the utility of our coarse-graining approach using the block-copolymeric sequence from the exon 1 encoded sequence of the huntingtin protein. This sequence comprises 17 residues from the N-terminal end of huntingtin (N17) followed by a polyglutamine (polyQ) tract. Simulations based on the CAMELOT approach are used to show that the adsorption and unfolding of the wild type N17 and its sequence variants on the surface of polyQ tracts engender a patchy colloid like architecture that promotes the formation of linear aggregates. These results provide a plausible explanation for experimental observations, which show that N17 accelerates the formation of linear aggregates in block-copolymeric N17-polyQ sequences. The CAMELOT approach is versatile and is generalizable for simulating the aggregation and phase behavior of a range of block-copolymeric protein sequences. PMID:26723608
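One ingredient of the parameterization pipeline, Boltzmann inversion, can be sketched briefly: a coarse-grained potential of mean force is obtained from the sampled distribution of a degree of freedom via U(x) = -kT ln P(x). The samples below are synthetic stand-ins for all-atom observations:

```python
# Boltzmann inversion of a histogrammed degree of freedom (e.g., a
# pseudo-bond length) into a coarse-grained potential.
import numpy as np

kT = 0.593                                   # kcal/mol at ~298 K
samples = np.random.default_rng(1).normal(3.8, 0.2, 50000)  # synthetic data
hist, edges = np.histogram(samples, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0                              # avoid log(0) in empty bins
U = -kT * np.log(hist[mask])                 # potential of mean force
U -= U.min()                                 # shift minimum to zero
print(centers[mask][np.argmin(U)])           # location of the minimum (~3.8)
```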
Vector-model-supported approach in prostate plan optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung
Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration number without compromising the plan quality.
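A minimal sketch of the retrieval step at the core of the vector-model approach: a new case's feature vector is matched against the reference database and the most similar case's planning parameters seed the optimization. The feature dimensionality and cosine metric here are assumptions, not the published implementation:

```python
# Retrieve the most similar reference case by cosine similarity of
# DICOM-derived feature vectors; its stored plan seeds the new optimization.
import numpy as np

database = np.random.rand(100, 12)          # 100 reference cases x 12 features
plans = [f"plan_{i}" for i in range(100)]   # stored planning parameters (stubs)

def retrieve(case, db):
    sims = db @ case / (np.linalg.norm(db, axis=1) * np.linalg.norm(case))
    return int(np.argmax(sims))

new_case = np.random.rand(12)
print(plans[retrieve(new_case, database)])  # reference plan to start from
```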
Spaceborne Autonomous and Ground Based Relative Orbit Control for the TerraSAR-X/TanDEM-X Formation
NASA Technical Reports Server (NTRS)
Ardaens, J. S.; D'Amico, S.; Kazeminejad, B.; Montenbruck, O.; Gill, E.
2007-01-01
TerraSAR-X (TSX) and TanDEM-X (TDX) are two advanced synthetic aperture radar (SAR) satellites flying in formation. SAR interferometry allows a high resolution imaging of the Earth by processing SAR images obtained from two slightly different orbits. TSX operates as a repeat-pass interferometer in the first phase of its lifetime and will be supplemented after two years by TDX in order to produce digital elevation models (DEM) with unprecedented accuracy. Such a flying formation indeed makes possible simultaneous interferometric data acquisition characterized by highly flexible baselines, with variations ranging from a few hundred meters to several kilometers [1]. TSX was successfully launched on the 15th of June, 2007. TDX is expected to be launched on the 31st of May, 2009. A safe and robust maintenance of the formation is based on the concept of relative eccentricity/inclination (e/i) vector separation, whose efficiency has already been demonstrated during the Gravity Recovery and Climate Experiment (GRACE) [2]. Here, the satellite relative motion is parameterized by means of relative orbit elements and the key idea is to align the relative eccentricity and inclination vectors to minimize the hazard of a collision. Previous studies have already shown the pertinence of this concept and have described the way of controlling the formation using an impulsive deterministic control law [3]. Despite the completely different relative orbit control requirements, the same approach can be applied to the TSX/TDX formation. The task of TDX is to maintain the close formation configuration by actively controlling its relative motion with respect to TSX, the leader of the formation. TDX must replicate the absolute orbit keeping maneuvers executed by TSX and also compensate the natural deviation of the relative e/i vectors. In fact the relative orbital elements of the formation tend to drift because of the secular non-keplerian perturbations acting on both satellites. The goal of the ground segment is thus to regularly correct this configuration by performing small orbit correction maneuvers on TDX. The ground station contacts are limited due to the geographic position of the station and the costs for contact time. Only with a polar ground station is contact possible every orbit for LEO satellites. TSX and TDX use only the Weilheim ground station (in the southern part of Germany) during routine operations. This station allows two scheduled contacts per day for the nominal orbit configuration, meaning that the satellite conditions can be checked with an interval of 12 hours. While this limitation is usually not critical for single satellite operations, the visibility constraints drive the achievable orbit control accuracy for a LEO formation if a ground based approach is chosen. Along-track position uncertainties and maneuver execution errors affect the relative motion and can be compensated only after a ground station contact.
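The relative e/i vector concept can be sketched from its standard definitions (relative eccentricity vector from eccentricity and argument of perigee differences, relative inclination vector from inclination and ascending-node differences); the element values below are invented, not actual TSX/TDX values:

```python
# Relative eccentricity/inclination vectors for a chief/deputy pair:
# de = e_d*(cos w_d, sin w_d) - e_c*(cos w_c, sin w_c)
# di ~ (i_d - i_c, (RAAN_d - RAAN_c) * sin i_c)
import numpy as np

def ecc_vector(e, argp):
    return e * np.array([np.cos(argp), np.sin(argp)])

def rel_e_i(ec, wc, ic, raanc, ed, wd, idep, raand):
    de = ecc_vector(ed, wd) - ecc_vector(ec, wc)
    di = np.array([idep - ic, (raand - raanc) * np.sin(ic)])
    return de, di

# chief (TSX-like) and deputy (TDX-like), angles in radians
de, di = rel_e_i(0.0011, 1.57, np.deg2rad(97.44), 0.0,
                 0.0012, 1.58, np.deg2rad(97.45), 1e-4)
# collision hazard is minimized when de and di are (anti-)parallel:
cosang = de @ di / (np.linalg.norm(de) * np.linalg.norm(di))
print(de, di, cosang)
```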
Evaluation of scale-aware subgrid mesoscale eddy models in a global eddy-rich model
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank
2017-07-01
Two parameterizations for horizontal mixing of momentum and tracers by subgrid mesoscale eddies are implemented in a high-resolution global ocean model. These parameterizations build on the techniques of large eddy simulation (LES). The theory underlying one parameterization (2D Leith, due to Leith, 1996) is that of enstrophy cascades in two-dimensional turbulence, while the other (QG Leith) is designed for potential enstrophy cascades in quasi-geostrophic turbulence. Simulations using each of these parameterizations are compared with a control simulation using standard biharmonic horizontal mixing. Simulations using the 2D Leith and QG Leith parameterizations are more realistic than those using biharmonic mixing. In particular, the 2D Leith and QG Leith simulations have more energy in resolved mesoscale eddies, have a spectral slope more consistent with turbulence theory (an inertial enstrophy or potential enstrophy cascade), have bottom drag and vertical viscosity as the primary sinks of energy instead of lateral friction, and have isoneutral parameterized mesoscale tracer transport. The parameterization choice also affects mass transports, but the impact varies regionally in magnitude and sign.
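The 2D Leith scheme has the standard form nu = (c Δx)^3 |∇ζ|, with ζ the relative vorticity (the QG variant substitutes quasi-geostrophic potential vorticity). A sketch on a synthetic vorticity field, with an assumed tuning constant c:

```python
# 2D Leith eddy viscosity: nu = (c * dx)**3 * |grad(zeta)|.
import numpy as np

def leith_viscosity(zeta, dx, c=1.0):
    dzdy, dzdx = np.gradient(zeta, dx)            # vorticity gradient
    return (c * dx) ** 3 * np.sqrt(dzdx ** 2 + dzdy ** 2)

dx = 1.0e4                                        # 10 km grid spacing
x = np.linspace(0, 2 * np.pi, 64)
zeta = 1e-5 * np.outer(np.sin(x), np.cos(x))      # toy vorticity field, s^-1
nu = leith_viscosity(zeta, dx)
print(nu.max())                                   # order-of-magnitude check
```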
Tomkins, Melissa; Kliot, Adi; Marée, Athanasius Fm; Hogenhout, Saskia A
2018-03-13
Members of the Candidatus genus Phytoplasma are small bacterial pathogens that hijack their plant hosts via the secretion of virulence proteins (effectors) leading to a fascinating array of plant phenotypes, such as witch's brooms (stem proliferations) and phyllody (retrograde development of flowers into vegetative tissues). Phytoplasma depend on insect vectors for transmission, and interestingly, these insect vectors were found to be (in)directly attracted to plants with these phenotypes. Therefore, phytoplasma effectors appear to reprogram plant development and defence to lure insect vectors, similarly to social engineering malware, which employs tricks to lure people to infected computers and webpages. A multi-layered mechanistic modelling approach will enable a better understanding of how phytoplasma effector-mediated modulations of plant host development and insect vector behaviour contribute to phytoplasma spread, and ultimately to predict the long reach of phytoplasma effector genes. Copyright © 2018. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Kotchi, Serge Olivier; Brazeau, Stephanie; Ludwig, Antoinette; Aube, Guy; Berthiaume, Philippe
2016-08-01
Environmental determinants (EVDs) have been identified as key determinants of health (DoH) for the emergence and re-emergence of several vector-borne diseases. Maintaining ongoing acquisition of data related to EVDs at local scale and for large regions constitutes a significant challenge. Earth observation (EO) satellites offer a framework to overcome this challenge. However, EO image analysis methods commonly used to estimate EVDs are time and resource consuming. Moreover, variations of microclimatic conditions combined with high landscape heterogeneity limit the effectiveness of climatic variables derived from EO. In this study, we describe DoH and EVDs, the impacts of EVDs on vector-borne diseases in the context of global environmental change, and the need to characterize EVDs of vector-borne diseases at local scale and its challenges; finally, we propose an approach based on EO images to estimate, at local scale, indicators pertaining to EVDs of vector-borne diseases.
Pellecer, Mariele J.; Dorn, Patricia L.; Bustamante, Dulce M.; Rodas, Antonieta; Monroy, M. Carlota
2013-01-01
A novel method using vector blood meal sources to assess the impact of control efforts on the risk of transmission of Chagas disease was tested in the village of El Tule, Jutiapa, Guatemala. Control used Ecohealth interventions, where villagers ameliorated the factors identified as most important for transmission. First, after an initial insecticide application, house walls were plastered. Later, bedroom floors were improved and domestic animals were moved outdoors. Only vector blood meal sources revealed the success of the first interventions: human blood meals declined from 38% to 3% after insecticide application and wall plastering. Following all interventions both vector blood meal sources and entomological indices revealed the reduction in transmission risk. These results indicate that vector blood meals may reveal effects of control efforts early on, effects that may not be apparent using traditional entomological indices, and provide further support for the Ecohealth approach to Chagas control in Guatemala. PMID:23382165
Local gene transfection in the cochlea (Review).
Xia, Li; Yin, Shankai
2013-07-01
There is much interest in the potential application of vector-induced gene therapeutic approaches to several forms of hearing disorders due to the poor efficacy of existing treatments. The cochlea is an ideal site for local gene transfection due to its anatomical encapsulation and fluid flow within its ducts. However, this requires the development of novel technologies in materials science and microbial supply vectors for target gene delivery. This review focuses on the introduction of various viral and non-viral vectors as well as injection approaches for transfecting cochlear cells in vivo. Finally, the outlook for local gene therapy is discussed. Therapeutic approaches using local gene transfection may provide a means of cochlear cell and tissue protection and treatment in cases of exogenous hearing loss and endogenous disorders.
NASA Astrophysics Data System (ADS)
Nelson, R. R.; O'Dell, C.
2017-12-01
The primary goal of OCO-2 is to use hyperspectral measurements of reflected near-infrared sunlight to retrieve the column-averaged dry-air mole fraction of carbon dioxide (XCO2) with high accuracy. This is only possible for measurements of scenes nearly free of optically thick clouds and aerosols. As some cloud or aerosol contamination will always be present, the OCO-2 retrieval algorithm includes clouds and aerosols as retrieved properties in its state vector. Information content analyses demonstrate that there are only 2-6 pieces of information about aerosols in the OCO-2 radiances. However, the upcoming OCO-2 algorithm (B8) attempts to retrieve 9 aerosol parameters; this over-fitting can hinder convergence and produce multiple solutions. In this work, we develop a simplified cloud and aerosol parameterization that intelligently reduces the number of retrieved parameters to 5 by only retrieving information about two aerosol layers: a lower tropospheric layer and an upper tropospheric / stratospheric layer. We retrieve the optical depth of each layer and the height of the lower tropospheric layer. Each of these layers contains a mixture of fine and coarse mode aerosol. In comparisons between OCO-2 XCO2 estimates and validation sources including TCCON, this scheme performs about as well as the more complicated OCO-2 retrieval algorithm, but has the potential benefits of more interpretable aerosol results, faster convergence, less nonlinearity, and greater throughput. We also investigate the dependence of our results on the optical properties of the fine and coarse mode aerosol types, such as their effective radii and the environmental relative humidity.
NASA Astrophysics Data System (ADS)
Firl, G. J.; Randall, D. A.
2013-12-01
The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variance of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments in addition to the skewnesses of θil and qt are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been included in THOR to drive existing microphysical and radiation parameterizations with samples drawn from the trivariate PDF. THOR has been tested in a single-column model framework using standardized test cases spanning a range of large-scale conditions conducive to both shallow cumulus and stratocumulus clouds and the transition between the two states. The results were compared to published LES intercomparison results using the same cases, and the gross characteristics of both cloudiness and boundary layer turbulence produced by THOR were within the range of results from the respective LES ensembles. In addition, THOR was used in a single-column model framework to study low cloud feedbacks in the northeastern Pacific Ocean. Using initialization and forcings developed as part of the CGILS project, THOR was run at 8 points along a cross-section from the trade-wind cumulus region east of Hawaii to the coastal stratocumulus region off the coast of California for both the control climate and a climate perturbed by +2K SST. A neutral to weakly positive cloud feedback of 0-4 W m-2 K-1 was simulated along the cross-section. The physical mechanisms responsible appeared to be increased boundary layer entrainment and stratocumulus decoupling leading to reduced maximum cloud cover and liquid water path.
2013-01-01
Background Indoor residual insecticide spraying (IRS) and long-lasting insecticide treated nets (LLINs) are commonly used together even though evidence that such combinations confer greater protection against malaria than either method alone is inconsistent. Methods A deterministic model of mosquito life cycle processes was adapted to allow parameterization with results from experimental hut trials of various combinations of untreated nets or LLINs (Olyset®, PermaNet 2.0®, Icon Life® nets) with IRS (pirimiphos methyl, lambda cyhalothrin, DDT), in a setting where vector populations are dominated by Anopheles arabiensis, so that community level impact upon malaria transmission at high coverage could be predicted. Results Intact untreated nets alone provide equivalent personal protection to all three LLINs. Relative to IRS plus untreated nets, community level protection is slightly higher when Olyset® or PermaNet 2.0® nets are added onto IRS with pirimiphos methyl or lambda cyhalothrin but not DDT, and when Icon Life® nets supplement any of the IRS insecticides. Adding IRS onto any net modestly enhances communal protection when pirimiphos methyl is sprayed, while spraying lambda cyhalothrin enhances protection for untreated nets but not LLINs. Addition of DDT reduces communal protection when added to LLINs. Conclusions Where transmission is mediated primarily by An. arabiensis, adding IRS to high LLIN coverage provides only modest incremental benefit (e.g. when an organophosphate like pirimiphos methyl is used), but can be redundant (e.g. when a pyrethroid like lambda cyhalothrin is used) or even regressive (e.g. when DDT is used for the IRS). Relative to IRS plus untreated nets, supplementing IRS with LLINs will only modestly improve community protection. Beyond the physical protection that intact nets provide, additional protection against transmission by An. arabiensis conferred by insecticides will be remarkably small, regardless of whether they are delivered as LLINs or IRS. The insecticidal action of LLINs and IRS probably already approaches their absolute limit of potential impact upon this persistent vector so personal protection of nets should be enhanced by improving the physical integrity and durability. Combining LLINs and non-pyrethroid IRS in residual transmission systems may nevertheless be justified as a means to manage insecticide resistance and prevent potential rebound of not only An. arabiensis, but also more potent, vulnerable and historically important species such as Anopheles gambiae and Anopheles funestus. PMID:23324456
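The intuition behind the modest incremental benefit can be seen in a deliberately oversimplified calculation: if a net protects a person from a host-seeking mosquito with probability p_net and IRS kills it indoors with probability p_irs, and the two act independently, the combined per-attempt protection is 1 - (1 - p_net)(1 - p_irs). The full model tracks the mosquito life cycle and coverage levels; this sketch only shows the independence arithmetic with made-up probabilities:

```python
# Toy combined-protection calculation under an independence assumption.
def combined_protection(p_net, p_irs):
    return 1.0 - (1.0 - p_net) * (1.0 - p_irs)

print(combined_protection(0.7, 0.3))  # 0.79: modest increment over net alone
```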
NASA Astrophysics Data System (ADS)
Gao, Wei; Li, Xiang-ru
2017-07-01
Multi-task learning analyzes multiple tasks jointly, so as to exploit the correlations among them and thereby improve the accuracy of the results. Methods of this kind have been widely applied to machine learning, pattern recognition, computer vision, and other related fields. This paper investigates the application of multi-task learning to estimating the stellar atmospheric parameters, including the effective temperature (Teff), surface gravitational acceleration (lg g), and chemical abundance ([Fe/H]). Firstly, the spectral features of the three stellar atmospheric parameters are extracted by using the multi-task sparse group Lasso algorithm; then a support vector machine is used to estimate the atmospheric physical parameters. The proposed scheme is evaluated on both the Sloan stellar spectra and the theoretical spectra computed from the Kurucz's New Opacity Distribution Function (NEWODF) model. The mean absolute errors (MAEs) on the Sloan spectra are: 0.0064 for lg (Teff /K), 0.1622 for lg (g/(cm · s-2)), and 0.1221 dex for [Fe/H]; the MAEs on the synthetic spectra are 0.0006 for lg (Teff /K), 0.0098 for lg (g/(cm · s-2)), and 0.0082 dex for [Fe/H]. Experimental results show that the proposed scheme has a rather high accuracy for the estimation of stellar atmospheric parameters.
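A sketch of the two-stage scheme under stated assumptions: scikit-learn's MultiTaskLasso stands in for the multi-task sparse group Lasso, the spectra are synthetic, and one support vector regressor is fit per parameter on the jointly selected pixels:

```python
# Stage 1: joint sparse feature selection across the three parameters.
# Stage 2: a support vector regressor per parameter on the selected pixels.
import numpy as np
from sklearn.linear_model import MultiTaskLasso
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))           # 200 spectra x 500 pixels
W = rng.standard_normal((5, 3))               # make 5 pixels truly informative
Y = X[:, :5] @ W + 0.1 * rng.standard_normal((200, 3))  # [lg Teff, lg g, Fe/H]

sel = MultiTaskLasso(alpha=0.1).fit(X, Y)
pixels = np.flatnonzero(np.abs(sel.coef_).sum(axis=0))  # shared feature set
print(len(pixels), "pixels selected")

models = [SVR().fit(X[:, pixels], Y[:, k]) for k in range(3)]
print(models[0].predict(X[:2, pixels]))       # predicted first parameter
```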
Stochastic Parametrization for the Impact of Neglected Variability Patterns
NASA Astrophysics Data System (ADS)
Kaiser, Olga; Hien, Steffen; Achatz, Ulrich; Horenko, Illia
2017-04-01
An efficient description of gravity wave variability and the related spontaneous emission processes requires an empirical stochastic closure for the impact of neglected variability patterns (subgrid scales, or SGS). In particular, we focus on the analysis of IGW emission within a tangent linear model, which requires a stochastic SGS parameterization to account for the self-interaction of the ageostrophic flow components. For this purpose, we identify the best SGS model in terms of exactness and simplicity by deploying a wide range of data-driven model classes, including standard stationary regression models, autoregressive models and artificial neural networks, as well as the family of nonstationary models such as the FEM-BV-VARX model class (finite-element-based vector autoregressive time series analysis with bounded variation of the model parameters). The models are used to investigate the main characteristics of the underlying dynamics and to explore the significant spatial and temporal neighbourhood dependencies. The best SGS model in terms of exactness and simplicity is obtained for the nonstationary FEM-BV-VARX setting, which identifies only the direct spatial and temporal neighbourhood as significant and thereby drastically reduces the amount of information required for the optimal SGS. Additionally, the models are characterized by sets of vector- and matrix-valued parameters that must be inferred from big data sets provided by simulations, a task that cannot be solved without high-performance computing (HPC) facilities.
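As a baseline for the model classes listed above, a plain stationary VAR fit on synthetic stand-in data is sketched below using statsmodels; the nonstationary, regime-switching aspect of FEM-BV-VARX is not reproduced here.

```python
# Fit a stationary VAR(1) as a simple data-driven SGS model; the estimated
# propagator should recover the matrix A used to generate the series.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n, k = 2000, 3                       # time steps, "SGS" components
A = np.array([[0.7, 0.1, 0.0],
              [0.0, 0.6, 0.2],
              [0.1, 0.0, 0.5]])
x = np.zeros((n, k))
for t in range(1, n):
    x[t] = x[t - 1] @ A.T + 0.1 * rng.normal(size=k)

res = VAR(x).fit(maxlags=1)          # direct temporal neighbourhood only
print(res.coefs[0])                  # compare with A
```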
Severson, David W.; Behura, Susanta K.
2016-01-01
Dengue (DENV), yellow fever, chikungunya, and Zika virus transmission to humans by a mosquito host is confounded by both intrinsic and extrinsic variables. Besides virulence factors of the individual arboviruses, the likelihood of virus transmission is subject to variability in the genome of the primary mosquito vector, Aedes aegypti. The "vectorial capacity" of A. aegypti varies depending upon its density, biting rate, and survival rate, as well as its intrinsic ability to acquire, host and transmit a given arbovirus. This intrinsic ability is known as "vector competence". Based on whole-transcriptome analysis, several genes and pathways have been predicted to be associated with a susceptible or refractory response in A. aegypti to DENV infection. However, the functional genomics of vector competence in A. aegypti is not well understood, primarily due to a lack of integrative approaches in genomic or transcriptomic studies. In this review, we focus on the present status of genomics studies of DENV vector competence in A. aegypti, as limited information is available relative to the other arboviruses. We propose future areas of research needed to facilitate the integration of vector and virus genomics and environmental factors, to work towards a better understanding of vector competence and vectorial capacity in natural conditions. PMID:27809220
NASA Astrophysics Data System (ADS)
Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo
2014-05-01
Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterization and solution predictions are. These issues are not addressed in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e. for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identification of this cross-over is important because it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows complete posterior probability density functions of the desired kinematic source parameters to be mapped, enabling us to rigorously assess the uncertainties in earthquake source inversions.
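A minimal sketch of the sampling machinery (random-walk Metropolis-Hastings with a flat prior box) is given below, standing in for QUESO's MCMC; the two-parameter forward model and noise level are synthetic assumptions, not the teleseismic forward solver.

```python
# Random-walk Metropolis-Hastings for a toy two-parameter inverse problem.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 100)

def forward(theta):
    # Hypothetical forward model producing a "waveform" from two parameters.
    return theta[0] * t + theta[1] * t**2

truth = np.array([1.5, 3.0])
data = forward(truth) + 0.1 * rng.normal(size=t.size)

def log_post(theta):
    if not (0.0 < theta[0] < 10.0 and 0.0 < theta[1] < 10.0):  # flat prior box
        return -np.inf
    r = data - forward(theta)
    return -0.5 * np.sum(r**2) / 0.1**2                         # Gaussian noise

theta, lp = np.array([1.0, 1.0]), log_post([1.0, 1.0])
samples = []
for _ in range(20000):
    prop = theta + 0.05 * rng.normal(size=2)                    # proposal step
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:                    # accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
print(np.mean(samples[5000:], axis=0))                          # posterior mean
```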
Covariantized vector Galileons
NASA Astrophysics Data System (ADS)
Hull, Matthew; Koyama, Kazuya; Tasinato, Gianmassimo
2016-03-01
Vector Galileons are ghost-free systems containing higher derivative interactions of vector fields. They break the vector gauge symmetry, and the dynamics of the longitudinal vector polarizations acquire a Galileon symmetry in an appropriate decoupling limit in Minkowski space. Using an Arnowitt-Deser-Misner approach, we carefully reconsider the coupling with gravity of vector Galileons, with the aim of studying the necessary conditions to avoid the propagation of ghosts. We develop arguments that put on a more solid footing the results previously obtained in the literature. Moreover, working in analogy with the scalar counterpart, we find indications for the existence of a "beyond Horndeski" theory involving vector degrees of freedom that avoids the propagation of ghosts thanks to secondary constraints. In addition, we analyze a Higgs mechanism for generating vector Galileons through spontaneous symmetry breaking, and we present its consistent covariantization.
A general science-based framework for dynamical spatio-temporal models
Wikle, C.K.; Hooten, M.B.
2010-01-01
Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal with this issue to some extent by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been on the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case. We then develop a general nonlinear spatio-temporal framework that we call general quadratic nonlinearity and demonstrate that it accommodates many different classes of science-based parameterizations as special cases. The model is presented in a hierarchical Bayesian framework and is illustrated with examples from ecology and oceanography. © 2010 Sociedad de Estadística e Investigación Operativa.
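A minimal sketch of the simplest case discussed above, a linear, first-order Markovian dynamical spatio-temporal model on a 1-D grid, is given below; parameterizing the propagator as a diffusion-like tridiagonal matrix (two scalars instead of n² free entries) is an illustrative choice, not the chapter's specific construction.

```python
# Simulate Y_t = M Y_{t-1} + eta_t with a science-motivated, low-dimensional
# parameterization of the propagator M.
import numpy as np

n, T = 50, 200
alpha, beta = 0.8, 0.09                     # retention and mixing parameters
M = (np.diag(np.full(n, alpha))
     + np.diag(np.full(n - 1, beta), 1)
     + np.diag(np.full(n - 1, beta), -1))   # diffusion-like dynamics

rng = np.random.default_rng(9)
Y = np.zeros((T, n))
for t in range(1, T):
    Y[t] = M @ Y[t - 1] + rng.normal(scale=0.1, size=n)

# The parameterization reduces estimation from n*n entries to two scalars.
print(Y[-1].round(2))
```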
NASA Astrophysics Data System (ADS)
Serbin, S.; Walker, A. P.; Wu, J.; Ely, K.; Rogers, A.; Wolfe, B.
2017-12-01
Tropical forests play a key role in regulating the global carbon (C), water, and energy cycles and stores, and influence climate through the exchanges of mass and energy with the atmosphere. However, projected changes in temperature and precipitation patterns are expected to impact the tropics and the strength of the tropical C sink, likely resulting in significant climate feedbacks. Moreover, the impact of stronger, longer, and more extensive droughts is not well understood. Critical for the accurate modeling of the tropical C and water cycles in Earth System Models (ESMs) is the representation of the coupled photosynthetic and stomatal conductance processes and how these processes are impacted by environmental and other drivers. The parameterization and representation of these processes is likewise an important consideration for ESM projections. We use a novel model framework, the Multi-Assumption Architecture and Testbed (MAAT), together with the open-source bioinformatics toolbox, the Predictive Ecosystem Analyzer (PEcAn), to explore the impact of multiple mechanistic hypotheses of coupled photosynthesis and stomatal conductance, as well as the additional uncertainty related to model parameterization. Our goal was to better understand how model choice and parameterization influence diurnal and seasonal modeling of leaf-level photosynthesis and stomatal conductance. We focused on the 2016 ENSO period; starting in February, monthly measurements of diurnal photosynthesis and conductance were made on 7-9 dominant species at the two Smithsonian canopy crane sites. This benchmark dataset was used to test different representations of stomatal conductance and photosynthetic parameterizations with the MAAT model, running within PEcAn. The MAAT model allows for the easy selection of competing hypotheses to test different photosynthetic modeling approaches, while PEcAn provides the ability to explore the uncertainties introduced through parameterization. We found that the choice of stomatal conductance representation can play a large role in model-data mismatch, and that observational constraints can be used to reduce simulated model spread but can also result in large model disagreements with measurements. These results will be used to help inform the modeling of photosynthesis in tropical systems for the larger ESM community.
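A hedged sketch of one common coupled photosynthesis/stomatal-conductance scheme (a Farquhar-type RuBisCO-limited rate closed with a Ball-Berry conductance, solved by fixed-point iteration) is given below; all parameter values are illustrative assumptions, not those used by MAAT or PEcAn.

```python
# Couple A(ci) with gs = g0 + g1*A*hs/cs and ci = cs - 1.6*A/gs, iterating
# to a consistent leaf state. Units: A in umol m-2 s-1, gs in mol m-2 s-1,
# ci and cs in umol/mol.
def coupled_leaf(cs=400.0, hs=0.7, vcmax=60.0, kco=700.0, gamma=40.0,
                 rd=1.0, g0=0.01, g1=9.0, iters=50):
    ci = 0.7 * cs                                   # initial guess
    for _ in range(iters):
        an = vcmax * (ci - gamma) / (ci + kco) - rd  # net assimilation
        gs = g0 + g1 * an * hs / cs                  # Ball-Berry conductance
        ci = cs - 1.6 * an / gs                      # Fickian CO2 diffusion
    return an, gs, ci

an, gs, ci = coupled_leaf()
print(f"A={an:.1f} umol m-2 s-1, gs={gs:.3f} mol m-2 s-1, ci={ci:.0f} umol/mol")
```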
Objective calibration of regional climate models
NASA Astrophysics Data System (ADS)
Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.
2012-12-01
Climate models are subject to high parametric uncertainty induced by poorly confined parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often blur model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, owing to computational constraints arising from the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation, and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations is needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model that has undergone expert tuning, the calibration yields similar optimal model configurations but leads to an additional reduction of the model error. The performance range captured is much wider than that sampled with the expert-tuned ensemble, and the presented methodology is effective and objective. We argue that objective calibration is an attractive tool and could become standard procedure after introducing new model implementations, or after a spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving the parameterization packages of global climate models.
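A minimal sketch of a quadratic metamodel in the spirit of Neelin et al. (2010) is shown below: a model-error metric is approximated as a quadratic function of the parameter vector and fitted from a small number of simulations. The "simulator" here is a synthetic stand-in.

```python
# Fit y ~ a + b.p + p^T C p by least squares on a small design, then use the
# cheap surrogate in place of the full model.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(3)

def simulator(p):                     # hypothetical expensive model-error metric
    return 1.0 + p @ np.array([0.5, -0.3]) \
               + p @ np.array([[0.2, 0.1], [0.1, 0.4]]) @ p

def design_matrix(P):
    cols = [np.ones(len(P))] + [P[:, i] for i in range(P.shape[1])]
    cols += [P[:, i] * P[:, j] for i, j in
             combinations_with_replacement(range(P.shape[1]), 2)]
    return np.column_stack(cols)

P = rng.uniform(-1, 1, size=(30, 2))            # 30 training simulations
y = np.array([simulator(p) for p in P])
coef, *_ = np.linalg.lstsq(design_matrix(P), y, rcond=None)

Ptest = rng.uniform(-1, 1, size=(5, 2))
print(design_matrix(Ptest) @ coef - np.array([simulator(p) for p in Ptest]))
```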
NASA Astrophysics Data System (ADS)
Anderson, Ray; Skaggs, Todd; Alfieri, Joseph; Kustas, William; Wang, Dong; Ayars, James
2016-04-01
Partitioned land surface fluxes (e.g. evaporation, transpiration, photosynthesis, and ecosystem respiration) are needed as input, calibration, and validation data for numerous hydrological and land surface models. However, one of the most commonly used techniques for measuring land surface fluxes, eddy covariance (EC), directly measures only net, combined water and carbon fluxes (evapotranspiration and net ecosystem exchange/productivity). Analysis of the correlation structure of high-frequency EC time series (hereafter flux partitioning, or FP) has been proposed to directly partition net EC fluxes into their constituent components, using leaf-level water use efficiency (WUE) data to separate stomatal and non-stomatal transport processes. FP has significant logistical and spatial representativeness advantages over other partitioning approaches (e.g. isotopic fluxes, sap flow, microlysimeters), but the performance of the FP algorithm relies on the accuracy of the intercellular CO2 (ci) concentration used to parameterize WUE for each flux averaging interval. In this study, we tested several parameterizations of ci as a function of atmospheric CO2 (ca), including (1) a constant ci/ca ratio for C3 and C4 photosynthetic pathway plants, (2) species-specific ci/ca-vapor pressure deficit (VPD) relationships (quadratic and linear), and (3) generalized C3 and C4 photosynthetic pathway ci/ca-VPD relationships. We tested these ci parameterizations at three agricultural EC towers operating since 2011 in C4 and C3 crops (sugarcane - Saccharum officinarum L. and peach - Prunus persica), and validated against sap-flow sensors installed at the peach site. The peach results show that the FP algorithm driven by the species-specific parameterizations converged significantly more often (~20% more frequently) than with the constant ci/ca ratio or the generic C3-VPD relationship. The FP algorithm parameterizations with a generic VPD relationship also yielded slightly higher transpiration (a 5 W m-2 difference) than the constant ci/ca ratio. However, photosynthesis and respiration fluxes over sugarcane were ~15% lower with a VPD-ci/ca relationship than with a constant ci/ca ratio. The results illustrate the importance of combining leaf-level physiological observations with EC to improve the performance of the FP algorithm.
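A hedged sketch of the WUE parameterizations being compared is given below: by Fick's law, leaf-level water-use efficiency follows roughly as (ca - ci)/(1.6 VPD), so the choice of ci/ca model sets the WUE handed to the FP algorithm. The coefficients below are illustrative assumptions, not the fitted species-specific values.

```python
# Compare a constant ci/ca ratio against a linear ci/ca-VPD relationship.
def wue(ca, vpd, scheme="constant_c3", a=0.9, b=-0.05):
    if scheme == "constant_c3":
        ratio = 0.7                    # assumed fixed ci/ca for C3 plants
    elif scheme == "constant_c4":
        ratio = 0.4                    # assumed fixed ci/ca for C4 plants
    elif scheme == "linear_vpd":
        ratio = a + b * vpd            # species-specific ci/ca-VPD relation
    ci = ratio * ca
    return (ca - ci) / (1.6 * vpd)     # approx. umol CO2 per mmol H2O units

print(wue(400.0, 1.5, "constant_c3"), wue(400.0, 1.5, "linear_vpd"))
```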
Summary and Findings of the ARL Dynamic Failure Forum
2016-09-29
short beam shear, quasi-static indentation, depth of penetration, and V50 limit velocity. Experimental technique suggestions for improvement included... the state of the art in experimental, theoretical, and computational studies of dynamic failure. The forum also focused on identifying technologies and approaches... Army-specific problems. Experimental exploration of material behavior and an improved ability to parameterize material models is essential to improving...
Geometry modeling and grid generation using 3D NURBS control volume
NASA Technical Reports Server (NTRS)
Yu, Tzu-Yi; Soni, Bharat K.; Shih, Ming-Hsin
1995-01-01
The algorithms for volume grid generation using NURBS geometric representation are presented. The parameterization algorithm is enhanced to yield a desired physical distribution on the curve, surface and volume. This approach bridges the gap between CAD surface/volume definition and surface/volume grid generation. Computational examples associated with practical configurations have shown the utilization of these algorithms.
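A hedged sketch of the re-parameterization idea is given below: recover a parameter value for each desired arc-length fraction so that grid points follow a prescribed physical distribution along a curve. A polyline stand-in is used instead of a true NURBS evaluator.

```python
# Redistribute points along a sampled curve to uniform physical spacing,
# using cumulative chord length as an approximation to arc length.
import numpy as np

def redistribute(points, n_out):
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative chord length
    s_target = np.linspace(0.0, s[-1], n_out)          # uniform spacing
    out = np.empty((n_out, points.shape[1]))
    for d in range(points.shape[1]):
        out[:, d] = np.interp(s_target, s, points[:, d])
    return out

t = np.linspace(0.0, 1.0, 50) ** 3                     # badly clustered samples
curve = np.column_stack([t, np.sin(np.pi * t)])
print(redistribute(curve, 10))                         # evenly spaced grid points
```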
Alignment dynamics of diffusive scalar gradient in a two-dimensional model flow
NASA Astrophysics Data System (ADS)
Gonzalez, M.
2018-04-01
The Lagrangian two-dimensional approach of scalar gradient kinematics is revisited accounting for molecular diffusion. Numerical simulations are performed in an analytic, parameterized model flow, which enables considering different regimes of scalar gradient dynamics. Attention is especially focused on the influence of molecular diffusion on Lagrangian statistical orientations and on the dynamics of scalar gradient alignment.
2012-09-30
...semiannual oscillation (SAO) and quasi-biennial oscillation (QBO) of stratospheric equatorial winds in long-term (10-year) nature runs. The ability of these new schemes... to generate and maintain tropical SAO and QBO circulations in Navy models for the first time is an important breakthrough, since these circulations...
Michael J. Falkowski; Andrew T. Hudak; Nicholas L. Crookston; Paul E. Gessler; Edward H. Uebler; Alistair M. S. Smith
2010-01-01
Sustainable forest management requires timely, detailed forest inventory data across large areas, which is difficult to obtain via traditional forest inventory techniques. This study evaluated k-nearest neighbor imputation models incorporating LiDAR data to predict tree-level inventory data (individual tree height, diameter at breast height, and...
NASA Astrophysics Data System (ADS)
Lai, Changliang; Wang, Junbiao; Liu, Chuang
2014-10-01
Six typical composite grid cylindrical shells are constructed by superimposing three basic types of ribs. The buckling behavior and structural efficiency of these shells are then analyzed under axial compression, pure bending, torsion, and transverse bending using finite element (FE) models. The FE models are created by a parametric FE modeling approach that defines FE models with the original naturally twisted geometry and orients the cross-sections of beam elements exactly; the approach is parameterized and coded in the Patran Command Language (PCL). Demonstrations of the FE modeling indicate that the program enables efficient generation of FE models and facilitates parametric studies and design of grid shells. Using the program, the effects of helical angles on the buckling behavior of the six typical grid cylindrical shells are determined. The results of these studies indicate that the triangle grid and rotated triangle grid cylindrical shells are more efficient than the others under axial compression and pure bending, whereas under torsion and transverse bending the hexagon grid cylindrical shell is most efficient. Additionally, buckling mode shapes are compared, providing an understanding of composite grid cylindrical shells that is useful in the preliminary design of such structures.
Ferentinos, Konstantinos P
2005-09-01
Two neural network (NN) applications in the field of biological engineering are developed, designed and parameterized by an evolutionary method based on genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect, or 'weak specification', representation was used to encode NN topologies and training parameters into the genes of the genetic algorithm (GA). This approach requires some a priori knowledge of the network topology demands of the specific application, so that the infinite search space of the problem is limited to a reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA also encoded the types of activation functions in both hidden and output nodes of the NN and the type of minimization algorithm used by the backpropagation algorithm for training the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach usually used for these tasks.
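A hedged sketch of the 'weak specification' encoding idea is given below: each gene fixes a topology (one or two hidden layers), activation choices, and a training option, and a GA-style loop keeps the better half of the population. The fitness function is a deterministic stand-in for actual NN training and validation error, and the gene fields are illustrative, not the paper's exact encoding.

```python
# Evolve NN "genes" (topology + activation + optimizer choices) by
# selection and regeneration; crossover/mutation operators are omitted.
import random

random.seed(8)

def random_gene():
    layers = random.choice([1, 2])                       # 1 or 2 hidden layers
    return {
        "hidden": [random.choice([4, 8, 16]) for _ in range(layers)],
        "act": random.choice(["tanh", "sigmoid"]),
        "out_act": random.choice(["linear", "sigmoid"]),
        "optimizer": random.choice(["sgd", "cg", "lm"]),
    }

def fitness(gene):
    # Placeholder for validation error after training the encoded network.
    return abs(sum(gene["hidden"]) - 20) / 20.0 \
        + (0.1 if gene["act"] == "sigmoid" else 0.0)

pop = [random_gene() for _ in range(20)]
for _ in range(10):
    pop.sort(key=fitness)
    pop = pop[:10] + [random_gene() for _ in range(10)]  # keep the better half
print(pop[0])
```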
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; van Leeuwen, P. J.
2017-12-01
Model uncertainty quantification remains one of the central challenges of effective data assimilation (DA) in complex, partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid-scale processes. Such approaches generally require some knowledge of the true sub-grid-scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid-scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by the available observations. A distinctive feature is that these realisations are binned conditional on the previous model state during the minimization process, allowing complex error structures to be recovered. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model, with both small and large time-scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and are shown to provide improved analyses and forecasts.
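A minimal two-scale Lorenz '96 integrator of the kind used as a testbed above is sketched below (plain Euler stepping for brevity); the sub-grid coupling term (hc/b)ΣY is what a stochastic SGS parameterization would have to emulate. Parameter values are standard illustrative choices.

```python
# Two-scale Lorenz '96: K slow variables X coupled to K*J fast variables Y.
import numpy as np

K, J = 8, 32
F, h, c, b = 20.0, 1.0, 10.0, 10.0

def tendencies(X, Y):
    sgs = h * c / b * Y.reshape(K, J).sum(axis=1)   # sub-grid coupling term
    dX = (np.roll(X, -1) - np.roll(X, 2)) * np.roll(X, 1) - X + F - sgs
    dY = (-c * b * (np.roll(Y, -2) - np.roll(Y, 1)) * np.roll(Y, -1)
          - c * Y + h * c / b * np.repeat(X, J))
    return dX, dY

rng = np.random.default_rng(4)
X, Y = rng.normal(size=K), 0.1 * rng.normal(size=K * J)
dt = 0.001
for _ in range(10000):
    dX, dY = tendencies(X, Y)
    X, Y = X + dt * dX, Y + dt * dY
print(X)                                            # slow state after spin-up
```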
Ocean-Forced Ice-Shelf Thinning in a Synchronously Coupled Ice-Ocean Model
NASA Astrophysics Data System (ADS)
Jordan, James R.; Holland, Paul R.; Goldberg, Dan; Snow, Kate; Arthern, Robert; Campin, Jean-Michel; Heimbach, Patrick; Jenkins, Adrian
2018-02-01
The first fully synchronous, coupled ice shelf-ocean model with a fixed grounding line and imposed upstream ice velocity has been developed using the MITgcm (Massachusetts Institute of Technology general circulation model). Unlike previous, asynchronous approaches to coupled modeling, our approach fully conserves heat, salt, and mass. Synchronous coupling is achieved by continuously updating the ice-shelf thickness on the ocean time step. By simulating an idealized, warm-water ice shelf we show how raising the pycnocline leads to a reduction in both ice-shelf mass and back stress, and hence buttressing. Coupled runs show the formation of a western boundary channel in the ice-shelf base, caused by Coriolis-enhanced flow increasing melting along that boundary. Eastern boundary ice thickening is also observed. This is not the case when using a simple depth-dependent parameterized melt rate, as the ice shelf then has relatively thinner sides and a thicker central "bulge" for a given ice-shelf mass. Ice-shelf geometry arising from the parameterized melt rate tends to underestimate back stress (and therefore buttressing) for a given ice-shelf mass, owing to a thinner ice shelf at the boundaries when compared with coupled model simulations.
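For orientation, a hedged sketch of the simple depth-dependent class of melt parameterization used for comparison is given below: melt scales with thermal driving, the difference between ambient temperature and the local pressure-dependent freezing point. The freezing-point coefficients are standard linearizations; the exchange velocity gamma is an illustrative assumption, not the paper's value.

```python
# Depth-dependent parameterized melt from thermal driving against the
# linearized in situ freezing point Tf(S, z).
def melt_rate(temp_ocean, salinity, depth, gamma=1e-5):
    a, b, c = -0.0573, 0.0832, 7.61e-4       # freezing-point linearization
    t_freeze = a * salinity + b - c * depth  # depth positive down, in metres
    cw, lf = 3974.0, 3.34e5                  # seawater heat capacity, latent heat
    return gamma * (cw / lf) * (temp_ocean - t_freeze)   # m of ice per second

print(melt_rate(temp_ocean=0.5, salinity=34.5, depth=500.0) * 3.15e7, "m/yr")
```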
A projected decrease in lightning under climate change
NASA Astrophysics Data System (ADS)
Finney, Declan L.; Doherty, Ruth M.; Wild, Oliver; Stevenson, David S.; MacKenzie, Ian A.; Blyth, Alan M.
2018-03-01
Lightning strongly influences atmospheric chemistry [1-3] and impacts the frequency of natural wildfires [4]. Most previous studies project an increase in global lightning with climate change over the coming century [1,5-7], but these typically use parameterizations of lightning that neglect cloud ice fluxes, a component generally considered to be fundamental to thunderstorm charging [8]. As such, the response of lightning to climate change is uncertain. Here, we compare lightning projections for 2100 using two parameterizations: the widely used cloud-top height (CTH) approach [9], and a new upward cloud ice flux (IFLUX) approach [10] that overcomes previous limitations. In contrast to the previously reported global increase in lightning based on CTH, we find a 15% decrease in total lightning flash rate with IFLUX in 2100 under a strong global warming scenario. Differences are largest in the tropics, where most lightning occurs, with implications for the estimation of future changes in tropospheric ozone and methane, as well as differences in their radiative forcings. These results suggest that lightning schemes more closely related to cloud ice and microphysical processes are needed to robustly estimate future changes in lightning and atmospheric composition.
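A hedged sketch of the cloud-top height scheme the text contrasts with the ice-flux approach is given below, in the Price and Rind (1992) power-law style; the coefficients are those commonly quoted and should be verified against the original before use.

```python
# Flash rate as a power of convective cloud-top height (km).
def flash_rate_cth(h_km, land=True):
    if land:
        return 3.44e-5 * h_km ** 4.9    # flashes per minute, continental
    return 6.4e-4 * h_km ** 1.73        # flashes per minute, marine

print(flash_rate_cth(12.0), flash_rate_cth(12.0, land=False))
```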
A New Canopy Integration Factor
NASA Astrophysics Data System (ADS)
Badgley, G.; Anderegg, L. D. L.; Baker, I. T.; Berry, J. A.
2017-12-01
Ecosystem modelers have long debated how best to represent within-canopy heterogeneity. Can one big leaf represent the full range of canopy physiological responses? Or do you need two leaves - sun and shade - to get things right? Is it sufficient to treat the canopy as a diffuse medium? Or would it be better to explicitly represent separate canopy layers? These are open questions that have been the subject of an enormous amount of research and scrutiny. Yet regardless of how the canopy is represented, each model must grapple with correctly parameterizing its canopy in a way that properly translates leaf-level processes to the canopy and ecosystem scale. We present a new approach for integrating whole-canopy biochemistry by combining remote sensing with ecological theory. Using the Simple Biosphere model (SiB), we redefined how SiB scales photosynthetic processes from leaf to canopy as a function of satellite-derived measurements of solar-induced chlorophyll fluorescence (SIF). Across multiple long-term study sites, our approach improves the accuracy of daily modeled photosynthesis by as much as 25 percent. We share additional insights on how SIF might be more directly integrated into photosynthesis models, and present ideas for harnessing SIF to parameterize canopy biochemical variables more accurately.
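A hedged sketch of a conventional big-leaf canopy integration is given below: leaf-level capacity is assumed to decay exponentially through the canopy, giving the scaling factor (1 - exp(-k LAI))/k. The SIF-based factor in the work above would replace this fixed profile; the SIF rescaling shown is purely hypothetical, not SiB's actual formulation.

```python
# Big-leaf canopy integration of a leaf-level capacity, plus a hypothetical
# SIF-based adjustment for illustration only.
import numpy as np

def canopy_scale(vmax_top, lai, k=0.5):
    """Integrate vmax_top * exp(-k * L) over cumulative leaf area L."""
    return vmax_top * (1.0 - np.exp(-k * lai)) / k

def canopy_scale_sif(vmax_top, lai, sif, sif_ref=1.0):
    # Hypothetical: rescale integrated capacity by observed/reference SIF.
    return canopy_scale(vmax_top, lai) * sif / sif_ref

print(canopy_scale(60.0, 4.0), canopy_scale_sif(60.0, 4.0, sif=0.8))
```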
Multiscale Cloud System Modeling
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell W.
2009-01-01
The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.
NASA Astrophysics Data System (ADS)
Thomas, Stephanie Margarete; Beierkuhnlein, Carl
2013-05-01
The occurrence of ectotherm disease vectors outside of their previous distribution areas and the emergence of vector-borne diseases are increasingly observed at the global scale, and are accompanied by a growing number of studies investigating the vast range of determining factors and their causal links. Consequently, a broad span of scientific disciplines is involved in tackling these complex phenomena. First, we evaluate the citation behaviour of the relevant scientific literature in order to answer the question "do scientists consider results of other disciplines to extend their expertise?" We then highlight emerging tools and concepts useful for risk assessment. Correlative models (regression-based, machine-learning and profile techniques), mechanistic models (basic reproduction number R0) and methods of spatial regression, interaction and interpolation are described. We discuss further steps towards multidisciplinary approaches regarding new tools and emerging concepts to combine existing approaches, such as Bayesian geostatistical modelling, mechanistic models that avoid the need for parameter fitting, joint correlative and mechanistic models, multi-criteria decision analysis and geographic profiling. We take into consideration the quality both of occurrence data for vector, host and disease cases, and of the predictor variables, as both determine the accuracy of risk area identification. Finally, we underline the importance of multidisciplinary research approaches. Even if the establishment of communication networks between scientific disciplines and the sharing of specific methods is time consuming, it promises new insights for the surveillance and control of vector-borne diseases worldwide.
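The basic reproduction number mentioned above is often written in the classical Ross-Macdonald form, sketched below; the parameter values are illustrative, not taken from any study cited here.

```python
# Ross-Macdonald R0 = m a^2 b c p^n / (r * -ln p).
import math

def r0(m, a, b, c, p, n, r):
    """
    m: vector-to-host ratio          a: bites per vector per day
    b: vector-to-host transmission   c: host-to-vector transmission
    p: daily vector survival         n: extrinsic incubation period (days)
    r: host recovery rate (1/day)
    """
    return (m * a**2 * b * c * p**n) / (r * -math.log(p))

print(r0(m=10, a=0.3, b=0.5, c=0.5, p=0.9, n=10, r=1 / 14))
```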
Diagnosing the impact of alternative calibration strategies on coupled hydrologic models
NASA Astrophysics Data System (ADS)
Smith, T. J.; Perera, C.; Corrigan, C.
2017-12-01
Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and of society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models are imperative. While extensive attention has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity / variability of parameterizations and their impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness and fidelity.
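A minimal sketch of the kind of simple coupled snowmelt-runoff model whose calibration is at issue is given below: a degree-day factor and base temperature govern melt, and a linear reservoir converts melt plus rain into runoff. All parameter values and forcing data are illustrative assumptions.

```python
# Degree-day snowmelt plus a single linear-reservoir runoff store.
import numpy as np

def simulate(temp, precip, ddf=3.0, t_base=0.0, k=0.1):
    swe, storage, runoff = 0.0, 0.0, []
    for t, p in zip(temp, precip):
        snow = p if t <= t_base else 0.0
        rain = p - snow
        melt = min(swe, ddf * max(t - t_base, 0.0))   # mm/day
        swe += snow - melt
        storage += melt + rain
        q = k * storage                                # linear reservoir outflow
        storage -= q
        runoff.append(q)
    return np.array(runoff)

rng = np.random.default_rng(5)
print(simulate(rng.normal(2, 5, 30), rng.gamma(1, 2, 30))[:5])
```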
A frequentist approach to computer model calibration
Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.
2016-05-05
The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. The practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
The eddy transport of nonconserved trace species derived from satellite data
NASA Technical Reports Server (NTRS)
Smith, Anne K.; Lyjak, Lawrence V.; Gille, John C.
1988-01-01
Using the approach of the Garcia and Solomon (1983) model and data obtained by the LIMS instrument on Nimbus 7, the chemical eddy transport matrix for planetary waves was calculated, and the chemical eddy contribution to the components of the matrix obtained from the LIMS satellite observations was computed using specified photochemical damping time scales. The dominant components of the transport matrices for several winter months were obtained for ozone, nitric acid, and quasi-geostrophic potential vorticity (PV), and the parameterized transports of these were compared with the 'exact' transports computed directly from the eddy LIMS data. The results indicate that the chemical eddy effect can account for most of the observed ozone transport in early winter, decreasing to less than half in late winter. The agreement between the parameterized and observed nitric acid and PV transports was not as good. Reasons for this are discussed.
Production of non viral DNA vectors.
Schleef, Martin; Blaesen, Markus; Schmeer, Marco; Baier, Ruth; Marie, Corinne; Dickson, George; Scherman, Daniel
2010-12-01
After some decades of research, development and first clinical approaches to using DNA vectors in gene therapy, cell therapy and DNA vaccination, the requirements for the pharmaceutical manufacturing of gene vectors have risen significantly, step by step. The expression level and specificity of non-viral DNA vectors have also been significantly improved, following the success of viral vectors. The strict separation of "viral" and "non-viral" gene transfer is a historic border between scientists, and we will show that both fields together enable the next step towards successful prevention and therapy. Here we summarize the features of producing and modifying these non-viral gene vectors to ensure the quality required to modify cells and to treat humans and animals.
An innovative ecohealth intervention for Chagas disease vector control in Yucatan, Mexico.
Waleckx, Etienne; Camara-Mejia, Javier; Ramirez-Sierra, Maria Jesus; Cruz-Chan, Vladimir; Rosado-Vallado, Miguel; Vazquez-Narvaez, Santos; Najera-Vazquez, Rosario; Gourbière, Sébastien; Dumonteil, Eric
2015-02-01
Non-domiciliated (intrusive) triatomine vectors remain a challenge for the sustainability of Chagas disease vector control as these triatomines are able to transiently (re-)infest houses. One of the best-characterized examples is Triatoma dimidiata from the Yucatan peninsula, Mexico, where adult insects seasonally infest houses between March and July. We focused our study on three rural villages in the state of Yucatan, Mexico, in which we performed a situation analysis as a first step before the implementation of an ecohealth (ecosystem approach to health) vector control intervention. The identification of the key determinants affecting the transient invasion of human dwellings by T. dimidiata was performed by exploring associations between bug presence and qualitative and quantitative variables describing the ecological, biological and social context of the communities. We then used a participatory action research approach for implementation and evaluation of a control strategy based on window insect screens to reduce house infestation by T. dimidiata. This ecohealth approach may represent a valuable alternative to vertically-organized insecticide spraying. Further evaluation may confirm that it is sustainable and provides effective control (in the sense of limiting infestation of human dwellings and vector/human contacts) of intrusive triatomines in the region. © The author 2015. The World Health Organization has granted Oxford University Press permission for the reproduction of this article.
Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations
Liu, Gang; Liu, Yangang; Endo, Satoshi
2013-02-01
Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and three U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.
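A hedged sketch of the bulk-aerodynamic form that such surface-flux parameterizations share is given below: fluxes are exchange coefficients times wind speed times surface-air gradients. Only neutral coefficients from roughness lengths are shown; the stability corrections that the evaluation above shows to matter are omitted, and all inputs are illustrative.

```python
# Neutral bulk formulas for momentum, sensible heat, and latent heat fluxes.
import math

KAPPA = 0.4  # von Karman constant

def bulk_fluxes(u, ts, ta, qs, qa, z=10.0, z0m=0.01, z0h=0.001,
                rho=1.2, cp=1004.0, lv=2.5e6):
    cd = (KAPPA / math.log(z / z0m)) ** 2                  # drag coefficient
    ch = KAPPA**2 / (math.log(z / z0m) * math.log(z / z0h))
    tau = rho * cd * u**2                                  # momentum flux
    h = rho * cp * ch * u * (ts - ta)                      # sensible heat
    le = rho * lv * ch * u * (qs - qa)                     # latent heat
    return tau, h, le

print(bulk_fluxes(u=5.0, ts=300.0, ta=295.0, qs=0.015, qa=0.010))
```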
Flare Prediction Using Photospheric and Coronal Image Data
NASA Astrophysics Data System (ADS)
Jonas, E.; Shankar, V.; Bobra, M.; Recht, B.
2016-12-01
We attempt to forecast M- and X-class solar flares using a machine-learning algorithm and five years of image data from both the Helioseismic and Magnetic Imager (HMI) and Atmospheric Imaging Assembly (AIA) instruments aboard the Solar Dynamics Observatory. HMI is the first instrument to continuously map the full-disk photospheric vector magnetic field from space (Schou et al., 2012). The AIA instrument maps the transition region and corona using various ultraviolet wavelengths (Lemen et al., 2012). HMI and AIA data are taken nearly simultaneously, providing an opportunity to study the entire solar atmosphere at a rapid cadence. Most flare forecasting efforts described in the literature use some parameterization of solar data, typically of the photospheric magnetic field within active regions. These numbers are considered to capture the information in any given image relevant to predicting solar flares. In our approach, we use HMI and AIA images of solar active regions and a deep convolutional kernel network to predict solar flares. This is effectively a series of shallow-but-wide random convolutional neural networks stacked and then trained with a large-scale block-weighted least squares solver. This algorithm automatically determines which patterns in the image data are most correlated with flaring activity and then uses these patterns to predict solar flares. Using the recently developed KeystoneML machine learning framework, we construct a pipeline to process millions of images in a few hours on commodity cloud computing infrastructure. This is the first time vector magnetic field images have been combined with coronal imagery to forecast solar flares. It is also the first time such a large dataset of solar images, some 8.5 terabytes of images that together capture over 3000 active regions, has been used to forecast solar flares. We evaluate our method using various flare prediction windows defined in the literature (e.g. Ahmed et al., 2013) and a novel per-hour time series we have constructed, which more closely mimics the demands of an operational solar flare prediction system. We estimate the performance of our algorithm using the True Skill Statistic (TSS; Bloomfield et al., 2012). We find that our algorithm gives a high TSS score and predictive abilities.
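The True Skill Statistic used above is computed from a 2x2 contingency table of forecasts versus observed flares, as sketched below; the counts are illustrative.

```python
# TSS = hit rate - false alarm rate = TP/(TP+FN) - FP/(FP+TN).
def tss(tp, fn, fp, tn):
    return tp / (tp + fn) - fp / (fp + tn)

print(tss(tp=40, fn=10, fp=100, tn=850))   # illustrative counts -> ~0.69
```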
1990-10-01
type of approach for finding a dense displacement vector field has a time complexity that allows a real-time implementation when an appropriate control... hardly vector fields as they appear in stereo or motion. The reason for this is the fact that local displacement vector field (DVF) estimates have... objects' motion, but that the quantitative optical flow is not a reliable measure of the real motion [VP87, SU87]. This applies even more to the...
Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...
2017-09-14
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.
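A hedged sketch of the sub-grid idea is given below: a coarse cell is split into topographically derived tiles (e.g. a permafrost-dominated tile and a permafrost-free tile), fluxes are computed per tile with tile-specific parameters, and the cell value is the area-weighted sum. The tile fractions and flux values are illustrative, not taken from the CPCRW study.

```python
# Area-weighted aggregation of sub-grid tile fluxes to the coarse cell.
def cell_et(fractions, tile_et):
    assert abs(sum(fractions) - 1.0) < 1e-9   # fractions must sum to one
    return sum(f * et for f, et in zip(fractions, tile_et))

# e.g. 40% permafrost-dominated tile (low ET), 60% permafrost-free tile
print(cell_et([0.4, 0.6], [1.1, 2.8]))        # mm/day, area-weighted
```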
ERIC Educational Resources Information Center
Chen, Chau-Kuang
2010-01-01
Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…
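A minimal sketch of the mechanics named above (feed-forward of inputs, back-propagation of errors, and gradient adjustment of connection weights) is shown below for a one-hidden-layer network on synthetic data; it is illustrative only, not the study's models.

```python
# One-hidden-layer network trained by plain gradient descent backprop.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 4))
y = (X[:, 0] - X[:, 1] > 0).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(4, 8))      # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))      # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1)                      # feed-forward of inputs
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)      # back-propagation of errors
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out / len(X)         # adjustment of connection weights
    W1 -= 0.5 * X.T @ d_h / len(X)

print(np.mean((out > 0.5) == (y > 0.5)))     # training accuracy
```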
Variable Speed CMG Control of a Dual-Spin Stabilized Unconventional VTOL Air Vehicle
NASA Technical Reports Server (NTRS)
Lim, Kyong B.; Moerder, Daniel D.; Shin, J-Y.
2004-01-01
This paper describes an approach based on using both bias momentum and multiple control moment gyros for controlling the attitude of statically unstable thrust-levitated vehicles in hover or slow translation. The stabilization approach described in this paper uses these internal angular momentum transfer devices for stability, augmented by thrust vectoring for trim and other outer loop control functions, including CMG stabilization/ desaturation under persistent external disturbances. Simulation results show the feasibility of (1) improved vehicle performance beyond bias momentum assisted vector thrusting control, and (2) using control moment gyros to significantly reduce the external torque required from the vector thrusting machinery.
Minimal entropy probability paths between genome families.
Ahlbrandt, Calvin; Benson, Gary; Casey, William
2004-05-01
We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in the transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 <= t <= 1, with p(t) a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s, and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: Newton's method is iterated on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc-length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors. These methods motivate the definition of an elementary distance function that is easier and faster to calculate, works on non-rich vectors, involves neither variational theory nor differential equations, and is a better approximation of the minimal entropy path distance than the distance ||b - a||_2. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
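A hedged numerical sketch related to the definition above: the entropy-weighted length of the straight-line path between two probability vectors, which gives an upper bound on the minimal-entropy path distance (the true distance requires optimizing over paths, e.g. via the two-point boundary-value approach the authors describe).

```python
# Integrate H(p(t)) |p'(t)| along the straight-line path from a to b.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def straight_line_bound(a, b, steps=1000):
    ts = np.linspace(0.0, 1.0, steps + 1)
    speed = np.linalg.norm(b - a)                 # |p'(t)| is constant here
    hs = np.array([entropy((1 - t) * a + t * b) for t in ts])
    return np.trapz(hs, ts) * speed               # path integral of H

a = np.array([0.7, 0.1, 0.1, 0.1])                # e.g. frequencies of A, C, G, T
b = np.array([0.25, 0.25, 0.25, 0.25])
print(straight_line_bound(a, b), np.linalg.norm(b - a))
```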
Extensions and applications of a second-order landsurface parameterization
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1983-01-01
Extensions and applications of a second-order land surface parameterization, proposed by Andreou and Eagleson, are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested using the model. A sensitivity analysis with respect to the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also included.
NASA Technical Reports Server (NTRS)
Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.
2000-01-01
The objective of this investigation was to study the role of shallow convection in the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study, and the latter two have been improved significantly to extend their capabilities.
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
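A hedged sketch of the reduced-order tuner idea is given below: with more health parameters than sensors, a tuner vector of sensor-matched dimension is estimated and mapped back to health-parameter space. Taking the tuner basis from the leading right singular vectors of the sensitivity matrix is an illustrative selection, not the optimal routine derived in the paper.

```python
# Underdetermined health-parameter estimation via a reduced-order tuner.
import numpy as np

rng = np.random.default_rng(7)
n_h, n_y = 10, 4                        # 10 health parameters, 4 sensors
H = rng.normal(size=(n_y, n_h))         # sensor sensitivity to health params
h_true = rng.normal(size=n_h)
y = H @ h_true + 0.01 * rng.normal(size=n_y)

V = np.linalg.svd(H)[2][:n_y].T         # tuner basis (n_h x n_y), illustrative
q_hat = np.linalg.lstsq(H @ V, y, rcond=None)[0]   # estimate tuners
h_hat = V @ q_hat                        # map back to health-parameter space
print(np.linalg.norm(h_hat - h_true))    # residual from unobservable directions
```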
USDA-ARS's Scientific Manuscript database
Diaphorina citri is a major pest of citrus because it transmits the bacterium that causes Huanglongbing (HLB) (a.k.a. citrus greening). One approach to disease management is vector management using insecticides. However, knowledge of vector mortality alone is not sufficient if the vector has had tim...
A force vector and surface orientation sensor for intelligent grasping
NASA Technical Reports Server (NTRS)
Mcglasson, W. D.; Lorenz, R. D.; Duffie, N. A.; Gale, K. L.
1991-01-01
The paper discusses a force vector and surface orientation sensor suitable for intelligent grasping. The use of a novel four degree-of-freedom force vector robotic fingertip sensor allows efficient, real time intelligent grasping operations. The basis of sensing for intelligent grasping operations is presented and experimental results demonstrate the accuracy and ease of implementation of this approach.
A Simple Parameterization of 3 x 3 Magic Squares
ERIC Educational Resources Information Center
Trenkler, Gotz; Schmidt, Karsten; Trenkler, Dietrich
2012-01-01
In this article a new parameterization of magic squares of order three is presented. This parameterization permits an easy computation of their inverses, eigenvalues, eigenvectors and adjoints. Some attention is paid to the Luoshu, one of the oldest magic squares.
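For illustration, one classical three-parameter description of 3x3 magic squares (the Lucas form, with magic constant 3c) is sketched below; whether it coincides with the article's parameterization is not verified here.

```python
# Every square of this form has all rows, columns, and diagonals summing to 3c.
import numpy as np

def magic3(c, a, b):
    return np.array([[c + a,     c - a - b, c + b],
                     [c - a + b, c,         c + a - b],
                     [c - b,     c + a + b, c - a]])

M = magic3(5, 1, 3)   # a reflection of the Luoshu arises for (c, a, b) = (5, 1, 3)
print(M)
print(M.sum(axis=0), M.sum(axis=1), np.trace(M), np.trace(M[::-1]))
```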
Eisen, Lars; Lozano-Fuentes, Saul
2009-01-01
The aims of this review paper are to 1) provide an overview of how mapping and spatial and space-time modeling approaches have been used to date to visualize and analyze mosquito vector and epidemiologic data for dengue; and 2) discuss the potential for these approaches to be included as routine activities in operational vector and dengue control programs. Geographical information system (GIS) software is becoming more user-friendly and is now complemented by free mapping software that provides access to satellite imagery and basic feature-making tools and can generate static maps as well as dynamic time-series maps. Our challenge is now to move beyond the research arena by transferring mapping and GIS technologies and spatial statistical analysis techniques in user-friendly packages to operational vector and dengue control programs. This will enable control programs to, for example, generate risk maps for exposure to dengue virus, develop Priority Area Classifications for vector control, and explore socioeconomic associations with dengue risk. PMID:19399163
Thermal noise model of antiferromagnetic dynamics: A macroscopic approach
NASA Astrophysics Data System (ADS)
Li, Xilai; Semenov, Yuriy; Kim, Ki Wook
In the search for post-silicon technologies, antiferromagnetic (AFM) spintronics is receiving widespread attention. Due to its faster dynamics compared with its ferromagnetic counterpart, AFM enables ultra-fast magnetization switching and THz oscillations. A crucial factor that affects the stability of antiferromagnetic dynamics is thermal fluctuation, which is rarely considered in AFM research. Here, we derive from theory both stochastic dynamic equations for the macroscopic AFM Neel vector (L-vector) and the corresponding Fokker-Planck equation for the L-vector distribution function. For the dynamic equation approach, thermal noise is modeled by a stochastic fluctuating magnetic field that affects the AFM dynamics. The field is correlated within the correlation time, and its amplitude is derived from energy dissipation theory. For the distribution function approach, the inertial behavior of AFM dynamics forces consideration of the generalized space, including both coordinates and velocities. Finally, applying the proposed thermal noise model, we analyze a particular case of L-vector reversal of AFM nanoparticles by voltage-controlled perpendicular magnetic anisotropy (PMA) with a tailored pulse width. This work was supported, in part, by SRC/NRI SWAN.
NASA Astrophysics Data System (ADS)
Guo, X.; Yang, K.; Yang, W.; Li, S.; Long, Z.
2011-12-01
We present a field investigation over a melting valley glacier on the Tibetan Plateau. One particular aspect is that three melt phases are distinguished during the glacier's ablation season, which enables us to compare results over snow, bare-ice, and hummocky surfaces [with aerodynamic roughness lengths (z0M) varying on the order of 10^-4 to 10^-2 m]. We address two issues of common concern in the study of glacio-meteorology and micrometeorology. First, we study turbulent energy flux estimation through a critical evaluation of three parameterizations of the scalar roughness lengths (z0T for temperature and z0q for humidity), viz. key factors for the accurate estimation of sensible heat and latent heat fluxes using the bulk aerodynamic method. The first approach (Andreas 1987, Boundary-Layer Meteorol 38:159-184) is based on surface-renewal models and has been very widely applied in glaciated areas; the second (Yang et al. 2002, Q J Roy Meteorol Soc 128:2073-2087) has never received application over an ice/snow surface, despite its validity in arid regions; the third approach (Smeets and van den Broeke 2008, Boundary-Layer Meteorol 128:339-355) is proposed for use specifically over rough ice defined as z0M > 10^-3 m or so. This empirical z0M threshold value is deemed of general relevance to glaciated areas (e.g. ice sheet/cap and valley/outlet glaciers), above which the first approach gives underestimated z0T and z0q. The first and the third approaches tend to underestimate and overestimate turbulent heat/moisture exchange, respectively (relative errors often > 30%). Overall, the second approach produces fairly low errors in energy flux estimates; it thus emerges as a practically useful choice to parameterize z0T and z0q over an ice/snow surface. Our evaluation of z0T and z0q parameterizations hopefully serves as a useful source of reference for physically based modeling of land-ice surface energy budget and mass balance. Second, we explore how scalar turbulence behaves in glacier winds, based on the turbulent fluctuations of temperature (T'), and of water vapor (q') and CO2 (c') concentrations. This dataset is advantageous for analyses of turbulent scalar similarity, because the source/sink distribution of scalars is uniform over an ice/snow surface. New pieces of knowledge are: (1) T' and q' can be highly correlated, even when sensible heat and latent heat fluxes are in opposite directions - the same direction of scalar fluxes is not a necessary condition for high scalar correlation. (2) The vertical transport efficiency of T' is always higher than that of q' - the Bowen ratio (|β| > 1) is one factor underlying the T'-to-q' transport efficiency in stable conditions as well. (3) We provide confirmatory evidence for Detto and Katul's (Boundary-Layer Meteorol 122:205-216) original argument: density-effect correction to q' and c' is necessary for eddy-covariance analyses of turbulence structure.
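To make the role of the scalar roughness lengths concrete, the sketch below assembles a neutral-limit bulk sensible-heat flux with a surface-renewal-type z0T in the spirit of Andreas (1987). The polynomial coefficients are the commonly cited rough-regime values, reproduced here from secondary sources and worth verifying against the original paper; stability corrections are omitted, so treat this strictly as a schematic.

```python
# A hedged sketch of the bulk aerodynamic method with a surface-renewal-type
# scalar roughness: ln(z0T/z0M) is a polynomial in ln(R*), where
# R* = u* z0M / nu is the roughness Reynolds number. Coefficients below are
# the commonly cited rough-regime (R* > 2.5) values for temperature; verify
# before real use. Stability corrections are omitted for brevity.
import numpy as np

NU = 1.3e-5       # kinematic viscosity of air near 0 degC [m2/s] (approx.)
KAPPA = 0.4       # von Karman constant

def z0_scalar(u_star, z0M, b=(0.317, -0.565, -0.183)):
    r_star = u_star * z0M / NU
    ln_ratio = b[0] + b[1] * np.log(r_star) + b[2] * np.log(r_star) ** 2
    return z0M * np.exp(ln_ratio)

def sensible_heat(u_star, z0M, T_air, T_sfc, z_obs=2.0, rho_cp=1.3e3):
    """Neutral-limit bulk sensible heat flux [W/m2]."""
    z0T = z0_scalar(u_star, z0M)
    ch = KAPPA / np.log(z_obs / z0T)          # neutral transfer coefficient
    return rho_cp * u_star * ch * (T_sfc - T_air)

# Hummocky ice, z0M ~ 1e-2 m (upper end of the observed range):
print(sensible_heat(u_star=0.3, z0M=1e-2, T_air=2.0, T_sfc=0.0))
```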
NASA Astrophysics Data System (ADS)
Berloff, P. S.
2016-12-01
This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical, wind-driven double-gyre model, which is solved both with explicitly resolved vigorous eddy field and in the non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization focuses on the effect of the stochastic part of the eddy forcing that backscatters and induces eastward jet extension of the western boundary currents and its adjacent recirculation zones. The parameterization locally approximates transient eddy flux divergence by spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced eddy forcing exerted on the large-scale flow. We find that spatial pattern and amplitude of each footprint strongly depend on the underlying large-scale flow, and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. Thus, the assumed ensemble of plunger solutions can be viewed as a simple model for the cumulative effect of the stochastic eddy forcing. The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.
Carbonaro Sarracino, Denise; Tarantal, Alice F; Lee, C Chang I.; Martinez, Michele; Jin, Xiangyang; Wang, Xiaoyan; Hardee, Cinnamon L; Geiger, Sabine; Kahl, Christoph A; Kohn, Donald B
2014-01-01
Systemic delivery of a lentiviral vector carrying a therapeutic gene represents a new treatment for monogenic disease. Previously, we have shown that transfer of the adenosine deaminase (ADA) cDNA in vivo rescues the lethal phenotype and reconstitutes immune function in ADA-deficient mice. In order to translate this approach to ADA-deficient severe combined immune deficiency patients, neonatal ADA-deficient mice and newborn rhesus monkeys were treated with species-matched and mismatched vectors and pseudotypes. We compared gene delivery by the HIV-1-based vector to murine γ-retroviral vectors pseudotyped with vesicular stomatitis virus-glycoprotein or murine retroviral envelopes in ADA-deficient mice. The vesicular stomatitis virus-glycoprotein pseudotyped lentiviral vectors had the highest titer and resulted in the highest vector copy number in multiple tissues, particularly liver and lung. In monkeys, HIV-1 or simian immunodeficiency virus vectors resulted in similar biodistribution in most tissues including bone marrow, spleen, liver, and lung. Simian immunodeficiency virus pseudotyped with the gibbon ape leukemia virus envelope produced 10- to 30-fold lower titers than the vesicular stomatitis virus-glycoprotein pseudotype, but had a similar tissue biodistribution and similar copy number in blood cells. The relative copy numbers achieved in mice and monkeys were similar when adjusted to the administered dose per kg. These results suggest that this approach can be scaled-up to clinical levels for treatment of ADA-deficient severe combined immune deficiency subjects with suboptimal hematopoietic stem cell transplantation options. PMID:24925206
NASA Astrophysics Data System (ADS)
Vorobyov, E. I.
2010-01-01
We study numerically the applicability of the effective-viscosity approach for simulating the effect of gravitational instability (GI) in disks of young stellar objects with different disk-to-star mass ratios ξ. We adopt two α-parameterizations for the effective viscosity based on Lin and Pringle [Lin, D.N.C., Pringle, J.E., 1990. ApJ 358, 515] and Kratter et al. [Kratter, K.M., Matzner, Ch.D., Krumholz, M.R., 2008. ApJ 681, 375] and compare the resultant disk structure, disk and stellar masses, and mass accretion rates with those obtained directly from numerical simulations of self-gravitating disks around low-mass (M∗ ∼ 1.0 M⊙) protostars. We find that the effective viscosity can, in principle, simulate the effect of GI in stellar systems with ξ ≲ 0.2-0.3, thus corroborating a similar conclusion by Lodato and Rice [Lodato, G., Rice, W.K.M., 2004. MNRAS 351, 630] that was based on a different α-parameterization. In particular, Kratter et al.'s α-parameterization has proven superior to that of Lin and Pringle, because the success of the latter depends crucially on the proper choice of the α-parameter. However, the α-parameterization generally fails in stellar systems with ξ ≳ 0.3, particularly in the Class 0 and Class I phases of stellar evolution, yielding too small stellar masses and too large disk-to-star mass ratios. In addition, the time-averaged mass accretion rates onto the star are underestimated in the early disk evolution and greatly overestimated in the late evolution. The failure of the α-parameterization in the case of large ξ is caused by the growing strength of low-order spiral modes in massive disks. Only in the late Class II phase, when the magnitude of spiral modes diminishes and the mode-to-mode interaction ensues, may the effective viscosity be used to simulate the effect of GI in stellar systems with ξ ≳ 0.3. A simple modification of the effective viscosity that takes into account disk fragmentation can somewhat improve the performance of α-models in the case of large ξ and even approximately reproduce the mass accretion burst phenomenon, the latter being a signature of the early gravitationally unstable stage of stellar evolution [Vorobyov, E.I., Basu, S., 2006. ApJ 650, 956]. However, further numerical experiments are needed to explore this issue.
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Yao, Mao-Sung
1990-01-01
A number of perpetual January simulations are carried out with a two-dimensional zonally averaged model employing various parameterizations of the eddy fluxes of heat (potential temperature) and moisture. The parameterizations are evaluated by comparing these results with the eddy fluxes calculated in a parallel simulation using a three-dimensional general circulation model with zonally symmetric forcing. The three-dimensional model's performance in turn is evaluated by comparing its results using realistic (nonsymmetric) boundary conditions with observations. Branscome's parameterization of the meridional eddy flux of heat and Leovy's parameterization of the meridional eddy flux of moisture simulate the seasonal and latitudinal variations of these fluxes reasonably well, while somewhat underestimating their magnitudes. New parameterizations of the vertical eddy fluxes are developed that take into account the enhancement of the eddy mixing slope in a growing baroclinic wave due to condensation, and also the effect of eddy fluctuations in relative humidity. The new parameterizations, when tested in the two-dimensional model, simulate the seasonal, latitudinal, and vertical variations of the vertical eddy fluxes quite well, when compared with the three-dimensional model, and only underestimate the magnitude of the fluxes by 10 to 20 percent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Heng; Gustafson, Jr., William I.; Hagos, Samson M.
2015-04-18
In this study, to better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, we use a diagnostic framework to examine the resolution dependence of subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km².
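The coarsening step itself is simple block averaging; a minimal sketch (with a synthetic field standing in for CRM output) is:

```python
# A minimal sketch of the coarse-graining used in such diagnostic frameworks:
# block-average a cloud-resolving-model field onto subdomains that then serve
# as "grid-scale" input to the convection parameterization. Field and sizes
# are synthetic stand-ins.
import numpy as np

def coarsen(field, block):
    """Block-average a 2-D field over non-overlapping block x block tiles."""
    ny, nx = field.shape
    assert ny % block == 0 and nx % block == 0
    return field.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

crm = np.random.default_rng(1).normal(size=(256, 256))   # stand-in CRM field
for block in (8, 32, 128, 256):
    coarse = coarsen(crm, block)
    print(block, coarse.shape, coarse.std())   # variability shrinks with size
```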
A preference-ordered discrete-gaming approach to air-combat analysis
NASA Technical Reports Server (NTRS)
Kelley, H. J.; Lefton, L.
1978-01-01
An approach to one-on-one air-combat analysis is described which employs discrete gaming of a parameterized model featuring choice between several closed-loop control policies. A preference-ordering formulation due to Falco is applied to rational choice between outcomes: win, loss, mutual capture, purposeful disengagement, draw. Approximate optimization is provided by an active-cell scheme similar to Falco's obtained by a 'backing up' process similar to that of Kopp. The approach is designed primarily for short-duration duels between craft with large-envelope weaponry. Some illustrative computations are presented for an example modeled using constant-speed vehicles and very rough estimation of energy shifts.
Mitsakakis, Konstantinos; Hin, Sebastian; Müller, Pie; Wipf, Nadja; Thomsen, Edward; Coleman, Michael; Zengerle, Roland; Vontas, John; Mavridis, Konstantinos
2018-02-03
Monitoring malaria prevalence in humans, as well as vector populations, for the presence of Plasmodium, is an integral component of effective malaria control, and eventually, elimination. In the field of human diagnostics, a major challenge is the ability to define, precisely, the causative agent of fever, thereby differentiating among several candidate (also non-malaria) febrile diseases. This requires genetic-based pathogen identification and multiplexed analysis, which, in combination, are hardly provided by the current gold standard diagnostic tools. In the field of vectors, an essential component of control programs is the detection of Plasmodium species within their mosquito vectors, particularly in the salivary glands, where the infective sporozoites reside. In addition, the identification of species composition and insecticide resistance alleles within vector populations is a primary task in routine monitoring activities, aiming to support control efforts. In this context, the use of converging diagnostics is highly desirable for providing comprehensive information, including differential fever diagnosis in humans, and mosquito species composition, infection status, and resistance to insecticides of vectors. Nevertheless, the two fields of human diagnostics and vector control are rarely combined, both at the diagnostic and at the data management end, resulting in fragmented data and mis- or non-communication between various stakeholders. To this end, molecular technologies, their integration in automated platforms, and the co-assessment of data from multiple diagnostic sources through information and communication technologies are possible pathways towards a unified human vector approach.
Parameterizing by the Number of Numbers
NASA Astrophysics Data System (ADS)
Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.
The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
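As a toy illustration of the parameter, Subset Sum over a multiset with k distinct values reduces to an integer feasibility problem in k bounded variables. The brute-force search below conveys the idea only; the paper's actual fixed-parameter tractability result goes through Integer Linear Programming Feasibility rather than enumeration.

```python
# Subset Sum over a multiset with k distinct values v_i and multiplicities m_i
# reduces to finding integers 0 <= x_i <= m_i with sum x_i * v_i = t. The
# exhaustive search below depends on k and the multiplicities only, not on how
# the multiset is listed; it is a toy, not the Lenstra-style ILP argument.
from itertools import product

def subset_sum_by_distinct(values, mults, target):
    ranges = [range(m + 1) for m in mults]
    for x in product(*ranges):
        if sum(xi * vi for xi, vi in zip(x, values)) == target:
            return x          # multiplicity of each distinct value to pick
    return None

# Multiset with 300 elements but only k = 3 distinct values:
print(subset_sum_by_distinct([5, 7, 11], [100, 100, 100], 731))
```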
Parameterized Cross Sections for Pion Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.
2000-01-01
An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and knowledge of the output particle spectrum is required when given the input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data are reviewed and compared with a wide variety of different cross section parameterizations. Therefore, parameterizations of neutral and charged pion cross sections are provided that give a very accurate description of the experimental data. Lorentz invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.
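In practice such parameterizations are produced by fitting coefficients of a chosen functional form to measured cross sections; a generic least-squares sketch (with a placeholder form and placeholder data, not the paper's fits) is:

```python
# A hedged sketch of how cross-section parameterizations are typically built:
# choose a functional form and fit its coefficients to data with least
# squares. Both the form and the "data" below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def sigma_model(E_lab, a, b, c):
    """Illustrative total-cross-section form [mb] vs. lab energy [GeV]."""
    return a * np.log(E_lab) + b / np.sqrt(E_lab) + c

E = np.array([2.0, 5.0, 10.0, 30.0, 100.0])          # placeholder energies
sigma = np.array([8.0, 15.0, 20.0, 26.0, 33.0])      # placeholder data
popt, pcov = curve_fit(sigma_model, E, sigma)
print("fitted coefficients:", popt)
print("model at 50 GeV:", sigma_model(50.0, *popt))
```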
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent; Gettelman, Andrew; Morrison, Hugh
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.
An alternative subspace approach to EEG dipole source localization
NASA Astrophysics Data System (ADS)
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-01
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
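A schematic numpy rendering of the FINES idea follows: estimate the noise-only subspace from the sample covariance, pick the few noise-subspace directions closest in principal angle to a region's array manifold, and use the projection onto that small set as the localizer. Geometry, dimensions, and the regional manifold are synthetic assumptions.

```python
# Schematic FINES sketch: instead of projecting onto the whole estimated
# noise-only subspace (classic MUSIC), keep only the noise-subspace vectors
# closest, in principal angle, to a region-of-interest array manifold.
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_sources = 16, 2
A_true = rng.normal(size=(n_sensors, n_sources))           # true gain vectors
S = rng.normal(size=(n_sources, 500))                      # source waveforms
X = A_true @ S + 0.1 * rng.normal(size=(n_sensors, 500))   # sensor data

# Noise-only subspace from the sample covariance.
w, U = np.linalg.eigh(X @ X.T / 500)
U_noise = U[:, : n_sensors - n_sources]   # eigenvectors of smallest eigenvalues

# Orthonormal basis for a region-of-interest manifold (random stand-in).
A_region = np.linalg.qr(rng.normal(size=(n_sensors, 6)))[0]

# FINES set: noise-subspace directions with the smallest principal angles to
# the region manifold, read off the SVD of the cross-projection.
_, s, Vt = np.linalg.svd(A_region.T @ U_noise)
fines = U_noise @ Vt[:2].T                # the 2 closest directions

def localizer(a, basis):
    """Small values of ||basis^T a|| indicate a likely source direction."""
    a = a / np.linalg.norm(a)
    return np.linalg.norm(basis.T @ a)

print(localizer(A_true[:, 0], fines), localizer(rng.normal(size=n_sensors), fines))
```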
Assessment of different virus-mediated approaches for retinal gene therapy of Usher 1B.
Lopes, Vanda S; Diemer, Tanja; Williams, David S
2014-01-01
Usher syndrome type 1B, which is characterized by congenital deafness and progressive retinal degeneration, is caused by the loss of the function of MYO7A. Prevention of the retinal degeneration should be possible by delivering functional MYO7A to retinal cells. Although this approach has been used successfully in clinical trials for Leber congenital amaurosis (LCA2), it remains a challenge for Usher 1B because of the large size of the MYO7A cDNA. Different viral vectors have been tested for use in MYO7A gene therapy. Here, we review approaches with lentiviruses, which can accommodate larger genes, as well as attempts to use adeno-associated virus (AAV), which has a smaller packaging capacity. In conclusion, both types of viral vector appear to be effective. Despite concerns about the ability of lentiviruses to access the photoreceptor cells, a phenotype of the photoreceptors of Myo7a-mutant mice can be corrected. And although MYO7A cDNA is significantly larger than the nominal carrying capacity of AAV, AAV-MYO7A in single vectors also corrected Myo7a-mutant phenotypes in photoreceptor and RPE cells. Interestingly, however, a dual AAV vector approach was found to be much less effective.
Coherent states for the relativistic harmonic oscillator
NASA Technical Reports Server (NTRS)
Aldaya, Victor; Guerrero, J.
1995-01-01
Recently we have obtained, on the basis of a group approach to quantization, a Bargmann-Fock-like realization of the Relativistic Harmonic Oscillator as well as a generalized Bargmann transform relating Fock wave functions and a set of relativistic Hermite polynomials. Nevertheless, the relativistic creation and annihilation operators satisfy typical relativistic commutation relations of the Lie product [z, z†] ≈ Energy (an SL(2,R) algebra). Here we find higher-order polarization operators on the SL(2,R) group, providing canonical creation and annihilation operators satisfying the Lie product [a, a†] = 1, the eigenstates of which are 'true' coherent states.
Ice-nucleating particle emissions from photochemically aged diesel and biodiesel exhaust
NASA Astrophysics Data System (ADS)
Schill, G. P.; Jathar, S. H.; Kodros, J. K.; Levin, E. J. T.; Galang, A. M.; Friedman, B.; Link, M. F.; Farmer, D. K.; Pierce, J. R.; Kreidenweis, S. M.; DeMott, P. J.
2016-05-01
Immersion-mode ice-nucleating particle (INP) concentrations from an off-road diesel engine were measured using a continuous-flow diffusion chamber at -30°C. Both petrodiesel and biodiesel were utilized, and the exhaust was aged up to 1.5 photochemically equivalent days using an oxidative flow reactor. We found that aged and unaged diesel exhaust of both fuels is not likely to contribute to atmospheric INP concentrations at mixed-phase cloud conditions. To explore this further, a new limit-of-detection parameterization for ice nucleation on diesel exhaust was developed. Using a global chemical transport model, potential black carbon INP (INP_BC) concentrations were determined using a current literature INP_BC parameterization and the limit-of-detection parameterization. Model outputs indicate that the current literature parameterization likely overemphasizes INP_BC concentrations, especially in the Northern Hemisphere. These results highlight the need to integrate new INP_BC parameterizations into global climate models, as generalized INP_BC parameterizations are not valid for diesel exhaust.
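Schematically, INP_BC parameterizations of this kind map a temperature-dependent ice-active site density onto the available black-carbon surface area; the sketch below uses placeholder exponential fits (not the published or limit-of-detection coefficients) purely to show how two parameterizations would be swapped inside a model.

```python
# Illustrative INP_BC scheme: an ice-active surface-site density n_s(T) [m-2]
# times the BC surface area per unit volume of air gives an INP number
# concentration. Both n_s fits below are placeholders, not published values.
import numpy as np

def n_inp(T_celsius, bc_area_m2_per_m3, a, b):
    """INP concentration [m-3] from an exponential n_s(T) = exp(a*(-T) + b)."""
    n_s = np.exp(a * (-T_celsius) + b)
    return bc_area_m2_per_m3 * n_s

T = -30.0
area = 1e-6            # placeholder BC surface area [m2 per m3 of air]
print("literature-style fit:  ", n_inp(T, area, a=0.5, b=-5.0))
print("limit-of-detection fit:", n_inp(T, area, a=0.3, b=-8.0))
```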
Radiative flux and forcing parameterization error in aerosol-free clear skies
Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...
2015-07-03
This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m², while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.
Krupetsky, Anna; Parveen, Zahida; Marusich, Elena; Goodrich, Adrienne; Dornburg, Ralph
2003-05-01
The method of delivering a therapeutic gene into a patient is still one of the major obstacles towards successful human gene therapy. Here we describe a novel gene delivery approach using TheraCyte immunoisolation devices. Retroviral vector producing cells, derived from the avian retrovirus spleen necrosis virus, SNV, were encapsulated in TheraCyte devices and tested for the release of retroviral vectors. In vitro experiments show that such devices release infectious retroviral vectors into the tissue culture medium for up to 4 months. When such devices were implanted subcutaneously in SCID mice, infectious virus was released into the blood stream. There, the vectors were transported to and infected tumors, which had been induced by subcutaneous injection of tissue culture cells. Thus, this novel concept of a continuous, long-term gene delivery may constitute an attractive approach for future in vivo human gene therapy.
X-31 quasi-tailless flight demonstration
NASA Technical Reports Server (NTRS)
Huber, Peter; Schellenger, Harvey G.
1994-01-01
The primary objective of the quasi-tailless flight demonstration is to demonstrate the feasibility of using thrust vectoring for directional control of an unstable aircraft. By using this low-cost, low-risk approach it is possible to get information about required thrust vector control power and deflection rates from an in-flight experiment, as well as insight into low-power thrust vectoring issues. The quasi-tailless flight demonstration series with the X-31 began in March 1994. The demonstration flight condition was Mach 1.2 at 37,500 feet. A series of basic flying-quality maneuvers, doublets, bank-to-bank rolls, and wind-up turns were performed with a simulated 100% vertical tail reduction. Flight tests and supporting simulation demonstrated that the quasi-tailless approach is effective in representing the reduced stability of tailless configurations. The flights also demonstrated that thrust vectoring could be effectively used to stabilize a directionally unstable configuration and provide control power for maneuver coordination.
NASA Astrophysics Data System (ADS)
Kelly, R. E. J.; Saberi, N.; Li, Q.
2017-12-01
With moderate to high spatial resolution (<1 km) regional to global snow water equivalent (SWE) observation approaches yet to be fully scoped and developed, the long-term satellite passive microwave record remains an important tool for cryosphere-climate diagnostics. A new satellite microwave remote sensing approach is described for estimating snow depth (SD) and snow water equivalent (SWE). The algorithm, called the Satellite-based Microwave Snow Algorithm (SMSA), uses Advanced Microwave Scanning Radiometer - 2 (AMSR2) observations aboard the Global Change Observation Mission - Water mission launched by the Japan Aerospace Exploration Agency in 2012. The approach is unique since it leverages observed brightness temperatures (Tb) with static ancillary data to parameterize a physically-based retrieval without requiring parameter constraints from in situ snow depth observations or historical snow depth climatology. After screening snow from non-snow surface targets (water bodies [including freeze/thaw state], rainfall, high altitude plateau regions [e.g. Tibetan plateau]), moderate and shallow snow depths are estimated by minimizing the difference between Dense Media Radiative Transfer model estimates (Tsang et al., 2000; Picard et al., 2011) and AMSR2 Tb observations to retrieve SWE and SD. Parameterization of the model combines a parsimonious snow grain size and density approach originally developed by Kelly et al. (2003). Evaluation of the SMSA performance is achieved using in situ snow depth data from a variety of standard and experiment data sources. Results presented from winter seasons 2012-13 to 2016-17 illustrate the improved performance of the new approach in comparison with the baseline AMSR2 algorithm estimates and approach the performance of the model assimilation-based approach of GlobSnow. Given the variation in estimation power of SWE by different land surface/climate models and selected satellite-derived passive microwave approaches, SMSA provides SWE estimates that are independent of real or near real-time in situ and model data.
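The retrieval step can be sketched as a one-dimensional least-squares match between forward-modeled and observed brightness temperatures; in the sketch below a linear toy function stands in for the dense-media radiative transfer model, and the bulk snow density is an assumed constant.

```python
# A hedged sketch of the SMSA retrieval step: pick the snow depth (and, in the
# full algorithm, grain size) whose forward-modeled brightness temperatures
# best match the AMSR2 observations. The linear "forward model" is a
# placeholder for the DMRT model; observations and density are synthetic.
import numpy as np
from scipy.optimize import minimize_scalar

def forward_tb(depth_m):
    """Placeholder for DMRT: Tb [K] at two channels vs. snow depth."""
    return np.array([260.0 - 40.0 * depth_m, 255.0 - 70.0 * depth_m])

tb_obs = np.array([245.0, 230.0])            # synthetic AMSR2 observations

res = minimize_scalar(lambda d: np.sum((forward_tb(d) - tb_obs) ** 2),
                      bounds=(0.0, 3.0), method="bounded")
depth = res.x
swe_mm = depth * 240.0   # SWE = depth x bulk density (assumed 240 kg/m3)
print(f"retrieved depth {depth:.2f} m, SWE {swe_mm:.0f} mm")
```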
Recombinase-Mediated Cassette Exchange Using Adenoviral Vectors.
Kolb, Andreas F; Knowles, Christopher; Pultinevicius, Patrikas; Harbottle, Jennifer A; Petrie, Linda; Robinson, Claire; Sorrell, David A
2017-01-01
Site-specific recombinases are important tools for the modification of mammalian genomes. In conjunction with viral vectors, they can be utilized to mediate site-specific gene insertions in animals and in cell lines which are difficult to transfect. Here we describe a method for the generation and analysis of an adenovirus vector supporting a recombinase-mediated cassette exchange reaction and discuss the advantages and limitations of this approach.
Approaches for Language Identification in Mismatched Environments
2016-09-08
Different i-vector systems are considered, which differ in their feature extraction mechanism. The first, which we refer to as the standard i-vector, or ... both conversational telephone speech and narrowband broadcast speech. Multiple experiments are conducted to assess the performance of the system in ... bottleneck features using i-vectors. The proposed system results in a 30% improvement over the baseline result.
Boost OCR accuracy using iVector based system combination approach
NASA Astrophysics Data System (ADS)
Peng, Xujun; Cao, Huaigu; Natarajan, Prem
2015-01-01
Optical character recognition (OCR) is a challenging task because most existing preprocessing approaches are sensitive to writing style, writing material, noise and image resolution. Thus, a single recognition system cannot address all factors of real document images. In this paper, we describe an approach to combine diverse recognition systems by using iVector based features, a method newly developed in the field of speaker verification. Prior to system combination, document images are preprocessed and text line images are extracted with different approaches for each system; an iVector is derived from a high-dimensional supervector of each text line and used to predict OCR accuracy. We merge hypotheses from multiple recognition systems according to the overlap ratio and the predicted OCR score of the text line images. We present evaluation results on an Arabic document database where the proposed method is compared against the single best OCR system using the word error rate (WER) metric.
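A much-simplified version of the merging rule reads as follows, with synthetic line boxes, hypotheses, and predicted scores, and a plain 1-D intersection-over-union standing in for the overlap ratio:

```python
# Toy hypothesis merging: for text-line images that overlap across systems,
# keep the hypothesis whose predicted OCR score is highest. All data are
# synthetic stand-ins for the iVector-based score predictor's output.
def overlap_ratio(a, b):
    top = max(a[0], b[0]); bot = min(a[1], b[1])
    inter = max(0, bot - top)
    return inter / ((a[1] - a[0]) + (b[1] - b[0]) - inter)

# (vertical extent, hypothesis text, predicted accuracy) per system
sys_a = [((0, 20), "rnodel based", 0.71), ((25, 45), "approach", 0.93)]
sys_b = [((1, 21), "model based", 0.88), ((26, 44), "approch", 0.60)]

merged = []
for box_a, text_a, score_a in sys_a:
    best = (text_a, score_a)
    for box_b, text_b, score_b in sys_b:
        if overlap_ratio(box_a, box_b) > 0.5 and score_b > best[1]:
            best = (text_b, score_b)
    merged.append(best[0])
print(merged)   # -> ['model based', 'approach']
```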
Cross-entropy embedding of high-dimensional data using the neural gas model.
Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi
2005-01-01
A cross-entropy approach to mapping high-dimensional data into a low-dimensional space embedding is presented. The method allows the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, to be projected simultaneously into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q_m.
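A toy reconstruction of the cost being minimized is sketched below: target probabilities come from Neural-Gas-style rank orderings, output probabilities from 2-D distances, and a generic quasi-Newton optimizer replaces the paper's Newton-Raphson step. The NG training itself is skipped (codebooks are a random subset of the data).

```python
# Toy cross-entropy embedding: minimize the cross-entropy between NG-style
# neighborhood probabilities and distance-based output probabilities, jointly
# over 2-D positions of data points and codebooks. Data are synthetic.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 5))                  # input data
W = X[rng.choice(30, 6, replace=False)]       # stand-in "codebook vectors"

ranks = np.argsort(np.argsort(cdist(X, W), axis=1), axis=1)
P = np.exp(-ranks / 2.0)                      # NG neighborhood, lambda = 2
P /= P.sum(axis=1, keepdims=True)

def cost(z):
    zx, zw = z[:60].reshape(30, 2), z[60:].reshape(6, 2)
    Q = np.exp(-cdist(zx, zw))                # output probabilities from 2-D distances
    Q /= Q.sum(axis=1, keepdims=True)
    return -np.sum(P * np.log(Q + 1e-12))

res = minimize(cost, rng.normal(size=72) * 0.1, method="L-BFGS-B")
print("final cross-entropy:", res.fun)
```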
Characterization of GM events by insert knowledge adapted re-sequencing approaches
Yang, Litao; Wang, Congmao; Holst-Jensen, Arne; Morisset, Dany; Lin, Yongjun; Zhang, Dabing
2013-01-01
Detection methods and data from molecular characterization of genetically modified (GM) events are needed by stakeholders such as public risk assessors and regulators. Generally, the molecular characteristics of GM events are incompletely revealed by current approaches, which are biased towards detecting transformation-vector-derived sequences. GM events are classified based on available knowledge of the sequences of vectors and inserts (insert knowledge). Herein we present three insert knowledge-adapted approaches for the characterization of GM events (TT51-1 and T1c-19 rice as examples) based on paired-end re-sequencing, with the advantages of comprehensiveness, accuracy, and automation. The comprehensive molecular characteristics of the two rice events were revealed, with additional unintended insertions, compared with the results from PCR and Southern blotting. Comprehensive transgene characterization of TT51-1 and T1c-19 is shown to be independent of a priori knowledge of the insert and vector sequences employing the developed approaches. This provides an opportunity to identify and characterize unknown GM events as well. PMID:24088728
Quantifying Groundwater Model Uncertainty
NASA Astrophysics Data System (ADS)
Hill, M. C.; Poeter, E.; Foglia, L.
2007-12-01
Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.
NASA Astrophysics Data System (ADS)
Maurer, K. D.; Bohrer, G.; Kenny, W. T.; Ivanov, V. Y.
2015-04-01
Surface roughness parameters, namely the roughness length and displacement height, are an integral input used to model surface fluxes. However, most models assume these parameters to be a fixed property of plant functional type and disregard the governing structural heterogeneity and dynamics. In this study, we use large-eddy simulations to explore, in silico, the effects of canopy-structure characteristics on surface roughness parameters. We performed a virtual experiment to test the sensitivity of resolved surface roughness to four axes of canopy structure: (1) leaf area index, (2) the vertical profile of leaf density, (3) canopy height, and (4) canopy gap fraction. We found roughness parameters to be highly variable, but uncovered positive relationships between displacement height and maximum canopy height, aerodynamic canopy height and maximum canopy height and leaf area index, and eddy-penetration depth and gap fraction. We also found negative relationships between aerodynamic canopy height and gap fraction, as well as between eddy-penetration depth and maximum canopy height and leaf area index. We generalized our model results into a virtual "biometric" parameterization that relates roughness length and displacement height to canopy height, leaf area index, and gap fraction. Using a decade of wind and canopy-structure observations in a site in Michigan, we tested the effectiveness of our model-driven biometric parameterization approach in predicting the friction velocity over heterogeneous and disturbed canopies. We compared the accuracy of these predictions with the friction-velocity predictions obtained from the common simple approximation related to canopy height, the values calculated with large-eddy simulations of the explicit canopy structure as measured by airborne and ground-based lidar, two other parameterization approaches that utilize varying canopy-structure inputs, and the annual and decadal means of the surface roughness parameters at the site from meteorological observations. We found that the classical representation of constant roughness parameters (in space and time) as a fraction of canopy height performed relatively well. Nonetheless, of the approaches we tested, most of the empirical approaches that incorporate seasonal and interannual variation of roughness length and displacement height as a function of the dynamics of canopy structure produced more precise and less biased estimates for friction velocity than models with temporally invariable parameters.
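The baseline the study tests against is easy to state in code: displacement height and roughness length as fixed fractions of canopy height inside the neutral log law. The structure-aware variant below is a placeholder illustration of the "biometric" idea, not the fitted parameterization from the paper.

```python
# Friction velocity from the neutral log law, with roughness parameters from
# (a) the classical fixed fractions of canopy height (textbook values) and
# (b) a placeholder structure-aware adjustment using LAI and gap fraction.
import numpy as np

KAPPA = 0.4   # von Karman constant

def friction_velocity(U, z, h, lai=None, gap=None):
    if lai is None:                      # classical fixed fractions of height
        d, z0 = 0.67 * h, 0.10 * h
    else:                                # placeholder structure-aware variant
        d = h * (0.5 + 0.3 * np.tanh(lai)) * (1.0 - 0.5 * gap)
        z0 = 0.3 * (h - d)
    return KAPPA * U / np.log((z - d) / z0)

# 22 m canopy, wind 4 m/s measured at 34 m:
print(friction_velocity(4.0, 34.0, 22.0))                    # classical
print(friction_velocity(4.0, 34.0, 22.0, lai=4.0, gap=0.2))  # structure-aware
```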
H² regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
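The root of the problem is easy to see numerically: at a collapsed edge the Jacobian of the parameterization degenerates. The sketch below uses a plain bilinear patch (rather than NURBS) with two coincident corners as a minimal stand-in.

```python
# Minimal illustration of a singular parameterization: map the unit square to
# a triangle by collapsing one edge of a bilinear patch. The Jacobian
# determinant vanishes at the collapsed edge, which is what can break the
# regularity of the composed test functions.
import numpy as np

# Bilinear patch with corners P00, P10, P01, P11; P10 == P11 collapses an edge.
P00, P10, P01, P11 = (np.array(p) for p in [(0, 0), (1, 0), (0, 1), (1, 0)])

def jacobian_det(u, v):
    dFu = (1 - v) * (P10 - P00) + v * (P11 - P01)
    dFv = (1 - u) * (P01 - P00) + u * (P11 - P10)
    return dFu[0] * dFv[1] - dFu[1] * dFv[0]

for u in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(f"u = {u:4}: det J = {jacobian_det(u, 0.5):.4f}")   # -> 0 as u -> 1
```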
Parameterization Interactions in Global Aquaplanet Simulations
NASA Astrophysics Data System (ADS)
Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João.
2018-02-01
Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics and the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.
Brain Surface Conformal Parameterization Using Riemann Surface Structure
Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung
2011-01-01
In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336
Increasing the Efficacy of Oncolytic Adenovirus Vectors
Toth, Karoly; Wold, William S. M.
2010-01-01
Oncolytic adenovirus (Ad) vectors present a new modality to treat cancer. These vectors attack tumors via replicating in and killing cancer cells. Upon completion of the vector replication cycle, the infected tumor cell lyses and releases progeny virions that are capable of infecting neighboring tumor cells. Repeated cycles of vector replication and cell lysis can destroy the tumor. Numerous Ad vectors have been generated and tested, some of them reaching human clinical trials. In 2005, the first oncolytic Ad was approved for the treatment of head-and-neck cancer by the Chinese FDA. Oncolytic Ads have been proven to be safe, with no serious adverse effects reported even when high doses of the vector were injected intravenously. The vectors demonstrated modest anti-tumor effect when applied as a single agent; their efficacy improved when they were combined with another modality. The efficacy of oncolytic Ads can be improved using various approaches, including vector design, delivery techniques, and ancillary treatment, which will be discussed in this review. PMID:21994711
Gürtler, Ricardo E; Yadon, Zaida E
2015-02-01
This article provides an overview of three research projects which designed and implemented innovative interventions for Chagas disease vector control in Bolivia, Guatemala and Mexico. The research initiative was based on sound principles of community-based ecosystem management (ecohealth), integrated vector management, and interdisciplinary analysis. The initial situational analysis achieved a better understanding of ecological, biological and social determinants of domestic infestation. The key factors identified included: housing quality; type of peridomestic habitats; presence and abundance of domestic dogs, chickens and synanthropic rodents; proximity to public lights; location in the periphery of the village. In Bolivia, plastering of mud walls with appropriate local materials and regular cleaning of beds and of clothes next to the walls, substantially decreased domestic infestation and abundance of the insect vector Triatoma infestans. The Guatemalan project revealed close links between house infestation by rodents and Triatoma dimidiata, and vector infection with Trypanosoma cruzi. A novel community-operated rodent control program significantly reduced rodent infestation and bug infection. In Mexico, large-scale implementation of window screens translated into promising reductions in domestic infestation. A multi-pronged approach including community mobilisation and empowerment, intersectoral cooperation and adhesion to integrated vector management principles may be the key to sustainable vector and disease control in the affected regions. © World Health Organization 2015. The World Health Organization has granted Oxford University Press permission for the reproduction of this article.
"Analytical" vector-functions I
NASA Astrophysics Data System (ADS)
Todorov, Vladimir Todorov
2017-12-01
In this note we try to give a new (or different) approach to the investigation of analytical vector functions. More precisely a notion of a power xn; n ∈ ℕ+ of a vector x ∈ ℝ3 is introduced which allows to define an "analytical" function f : ℝ3 → ℝ3. Let furthermore f (ξ )= ∑n =0 ∞ anξn be an analytical function of the real variable ξ. Here we replace the power ξn of the number ξ with the power of a vector x ∈ ℝ3 to obtain a vector "power series" f (x )= ∑n =0 ∞ anxn . We research some properties of the vector series as well as some applications of this idea. Note that an "analytical" vector function does not depend of any basis, which may be used in research into some problems in physics.
Solar and chemical reaction-induced heating in the terrestrial mesosphere and lower thermosphere
NASA Technical Reports Server (NTRS)
Mlynczak, Martin G.
1992-01-01
Airglow and chemical processes in the terrestrial mesosphere and lower thermosphere are reviewed, and initial parameterizations of the processes applicable to multidimensional models are presented. The basic processes by which absorbed solar energy participates in middle atmosphere energetics for absorption events in which photolysis occurs are illustrated. An approach that permits the heating processes to be incorporated in numerical models is presented.
Image Processing for Planetary Limb/Terminator Extraction
NASA Technical Reports Server (NTRS)
Udomkesmalee, S.; Zhu, D. Q.; Chu, C. -C.
1995-01-01
A novel image segmentation technique for extracting limb and terminator of planetary bodies is proposed. Conventional edge-based histogramming approaches are used to trace object boundaries. The limb and terminator bifurcation is achieved by locating the harmonized segment in the two equations representing the 2-D parameterized boundary curve. Real planetary images from Voyager 1 and 2 served as representative test cases to verify the proposed methodology.
NASA Technical Reports Server (NTRS)
Barber, Peter W.; Demerdash, Nabeel A. O.; Wang, R.; Hurysz, B.; Luo, Z.
1991-01-01
The goal is to analyze the potential effects of electromagnetic interference (EMI) originating from power system processing and transmission components for Space Station Freedom. The approach consists of four steps: (1) develop analytical tools (models and computer programs); (2) conduct parameterization studies; (3) predict the global space station EMI environment; and (4) provide a basis for modification of EMI standards.
Optical Characterization of Deep-Space Object Rotation States
2014-09-01
surface bi-directional reflectance distribution function (BRDF), and then estimate the asteroid's shape via a best-fit parameterized model. This hybrid ... approach can be used because asteroid BRDFs are relatively well studied, but their shapes are generally unknown [17]. Asteroid shape models range ... can be accomplished using a shape-dependent method that employs a model of the shape and reflectance characteristics of the object. Our analysis
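The excerpt above is fragmentary, so the following Python sketch only gestures at the stated idea of estimating a parameterized shape model by best fit to brightness measurements. predict_brightness is a placeholder forward model, not the report's BRDF-based one, and the parameter vector is an assumption.

    import numpy as np
    from scipy.optimize import least_squares

    def predict_brightness(shape_params, times):
        # Placeholder forward model: a real one would combine an assumed
        # BRDF with a parameterized shape (e.g., triaxial ellipsoid axes
        # plus spin state). Here a two-harmonic lightcurve stands in.
        a, b, c, period, phase = shape_params
        return (a + b * np.cos(2 * np.pi * times / period + phase)
                  + c * np.cos(4 * np.pi * times / period + phase))

    def fit_shape(times, observed, x0):
        # Least-squares best fit of the parameterized model to the data.
        res = least_squares(lambda p: predict_brightness(p, times) - observed,
                            x0)
        return res.x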
NASA Astrophysics Data System (ADS)
Leckler, F.; Hanafin, J. A.; Ardhuin, F.; Filipot, J.; Anguelova, M. D.; Moat, B. I.; Yelland, M.; Prytherch, J.
2012-12-01
Whitecaps are the main sink of wave energy. Although the exact processes are still unknown, it is clear that they play a significant role in momentum exchange between atmosphere and ocean, and also influence gas and aerosol exchange. Recently, modeling of whitecap properties was implemented in the spectral wave model WAVEWATCH III®. This modeling takes place in the context of the Oceanflux Greenhouse Gas project, to provide a climatology of breaking waves for gas transfer studies. We present here a validation study for two different wave breaking parameterizations implemented in the spectral wave model WAVEWATCH III®. The parameterizations use different approaches related to the steepness of the carrying waves to estimate breaking wave probabilities. That of Ardhuin et al. (2010) is based on the hypothesis that breaking probabilities become significant when the saturation spectrum exceeds a threshold, and includes a modification to allow for greater breaking in the mean wave direction, to agree with observations. It also includes suppression of shorter waves by longer breaking waves. In the second (Filipot and Ardhuin, 2012), breaking probabilities are defined at different scales using wave steepness, and the breaking wave height distribution is then integrated over all scales. We also propose an adaptation of the latter to make it self-consistent. The breaking probabilities parameterized by Filipot and Ardhuin (2012) are much larger for dominant waves than those from the other parameterization, and show better agreement with statistics of breaking crest lengths measured during the FAIRS experiment. This stronger breaking also has an impact on the shorter waves due to the parameterization of short-wave damping associated with large breakers, and results in a different distribution of the breaking crest lengths. Converted to whitecap coverage using Reul and Chapron (2003), both parameterizations agree reasonably well with commonly used empirical fits of whitecap coverage against wind speed (Monahan and Woolf, 1989) and with the global whitecap coverage of Anguelova and Webster (2006), derived from space-borne radiometry. This is mainly because the weaker breaking of larger waves in the parameterization of Filipot and Ardhuin (2012) is compensated for by the intense breaking of smaller waves in that of Ardhuin et al. (2010). Comparison with in situ data collected during research ship cruises in the North and South Atlantic (SEASAW, DOGEE and WAGES) and the Norwegian Sea (HiWASE) between 2006 and 2011 also shows good agreement. However, as large-scale breakers produce a thicker foam layer, modeled mean foam thickness clearly depends on the scale of the breakers. Foam thickness is thus a more interesting parameter for calibrating and validating breaking wave parameterizations, as the differences in scale can be determined. With this in mind, we present the initial results of a validation using an estimation of mean foam thickness from multiple radiometric bands of the SMOS and AMSR-E satellites.
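As a schematic of the saturation-threshold idea attributed above to Ardhuin et al. (2010), a minimal Python sketch follows. The threshold value and the quadratic exceedance form are illustrative assumptions; the operational WAVEWATCH III source term is considerably more involved (directionality, cumulative damping of short waves).

    import numpy as np

    def breaking_probability(B, B_r=9e-4):
        # Breaking is negligible below the saturation threshold B_r and
        # grows with the exceedance of sqrt(B) above sqrt(B_r); the
        # quadratic dependence here is a stand-in for the real source term.
        exceed = np.maximum(np.sqrt(B) - np.sqrt(B_r), 0.0)
        return (exceed / np.sqrt(B_r)) ** 2

    # Example: saturation spectrum values below, at, and above threshold
    print(breaking_probability(np.array([5e-4, 9e-4, 2e-3])))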
Impact of APEX model parameterization strategy on estimated benefit of conservation practices
USDA-ARS?s Scientific Manuscript database
Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on claypan soils were developed with two objectives: (1) evaluate model performance of three parameterization strategies on a validation watershed; and (2) compare predictions of water quality benefi...
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somerville, R.C.J.; Iacobellis, S.F.
2005-03-18
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.
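A minimal sketch of the kind of SCM-versus-observation diagnostic described above, assuming two already-aligned time series of downwelling surface shortwave radiation; the function and variable names are illustrative, not part of the ARM toolchain.

    import numpy as np

    def compare_series(model_swdn, obs_swdn):
        # Bias and RMSE of modeled vs. observed downwelling surface
        # shortwave, ignoring missing values in either series.
        model_swdn = np.asarray(model_swdn, dtype=float)
        obs_swdn = np.asarray(obs_swdn, dtype=float)
        valid = np.isfinite(model_swdn) & np.isfinite(obs_swdn)
        diff = model_swdn[valid] - obs_swdn[valid]
        return {"bias": diff.mean(), "rmse": np.sqrt((diff ** 2).mean())}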
NASA Technical Reports Server (NTRS)
Steffen, K.; Schweiger, A.; Maslanik, J.; Key, J.; Weaver, R.; Barry, R.
1990-01-01
The application of multi-spectral satellite data to estimate polar surface energy fluxes is addressed. To what accuracy and over which geographic areas large-scale energy budgets can be estimated are investigated based upon a combination of available remote sensing and climatological data sets. The general approach was to: (1) formulate parameterization schemes for the appropriate sea ice energy budget terms based upon the remotely sensed and/or in-situ data sets; (2) conduct sensitivity analyses using as input both natural variability (observed data in regional case studies) and theoretical variability based upon energy flux model concepts; (3) assess the applicability of these parameterization schemes to both regional and basin-wide energy balance estimates using remote sensing data sets; and (4) assemble multi-spectral, multi-sensor data sets for at least two regions of the Arctic Basin and possibly one region of the Antarctic. The type of data needed for a basin-wide assessment is described, and the temporal coverage of these data sets is determined by data availability and by the needs of the parameterization schemes. The subjects covered are as follows: (1) heat flux calculations from SSM/I and LANDSAT data in the Bering Sea; (2) energy flux estimation using passive microwave data; (3) fetch and stability sensitivity estimates of turbulent heat flux; and (4) a surface temperature algorithm.
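Parameterization schemes for turbulent fluxes of the sort itemized above typically build on bulk-aerodynamic formulas. The Python sketch below implements the textbook sensible-heat bulk formula H = rho * c_p * C_H * U * (T_s - T_a) with typical constant values; it is not the report's scheme, in which the transfer coefficient would additionally depend on fetch and stability.

    # Typical near-surface values; a real scheme would make C_H
    # stability- and fetch-dependent rather than constant.
    RHO_AIR = 1.3      # air density, kg m^-3
    CP_AIR = 1004.0    # specific heat of air, J kg^-1 K^-1
    C_H = 1.3e-3       # bulk transfer coefficient for heat

    def sensible_heat_flux(wind_speed, t_surface, t_air):
        """Upward sensible heat flux (W m^-2) from wind speed (m/s)
        and surface/air temperatures (K)."""
        return RHO_AIR * CP_AIR * C_H * wind_speed * (t_surface - t_air)

    # Example: open water at 271 K under 265 K air with an 8 m/s wind
    print(sensible_heat_flux(8.0, 271.0, 265.0))  # roughly 81 W m^-2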
Longwave Radiative Flux Calculations in the TOVS Pathfinder Path A Data Set
NASA Technical Reports Server (NTRS)
Mehta, Amita; Susskind, Joel
1999-01-01
A radiative transfer model developed to calculate outgoing longwave radiation (OLR) and downwelling longwave surface flux (DSF) from the Television and Infrared Operational Satellite (TIROS) Operational Vertical Sounder (TOVS) Pathfinder Path A retrieval products is described. The model covers the spectral range of 2 to 2800 cm⁻¹ in 14 medium-width spectral bands. For each band, transmittances are parameterized as a function of temperature, water vapor, and ozone profiles. The form of the band transmittance parameterization is a modified version of the approach we use to model channel transmittances for the High Resolution Infrared Sounder 2 (HIRS2) instrument. We separately derive an effective zenith angle for each spectral band such that the band-averaged radiance calculated at that angle best approximates the directionally integrated radiance for that band. We develop the transmittance parameterization at these band-dependent effective zenith angles to incorporate the directional integration of radiances required in the calculations of OLR and DSF. The model calculations of OLR and DSF are accurate and differ by less than 1% from our line-by-line calculations. Also, the model results are within 1% of other line-by-line calculations provided by the Intercomparison of Radiation Codes in Climate Models (ICRCCM) project for clear-sky and cloudy conditions. The model is currently used to calculate global, multiyear (1985-1998) OLR and DSF from the TOVS Pathfinder Path A retrievals.
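The following Python sketch shows only the band-summation structure described above, assuming a hypothetical linear regression for each band transmittance and caller-supplied Planck band functions; the actual Path A parameterization, its predictors, and its fitted coefficients are not reproduced here.

    import numpy as np

    def band_transmittance(coeffs, t_profile, q_profile, o3_profile):
        # Stand-in regression: linear in layer-mean/integrated predictors.
        # The real parameterization is a more elaborate function of the
        # full temperature, water vapor, and ozone profiles.
        x = np.array([1.0, t_profile.mean(), q_profile.sum(), o3_profile.sum()])
        return float(np.clip(coeffs @ x, 0.0, 1.0))

    def olr(bands, t_profile, q_profile, o3_profile, t_surface):
        # Sum over bands: surface emission attenuated by the band
        # transmittance plus a crude atmospheric emission term at an
        # effective (here, mean-profile) temperature.
        total = 0.0
        for band in bands:
            tau = band_transmittance(band["coeffs"], t_profile,
                                     q_profile, o3_profile)
            total += (tau * band["planck"](t_surface)
                      + (1.0 - tau) * band["planck"](t_profile.mean()))
        return total

Each entry of bands would carry the regression coefficients and a Planck-weighted emission function for one of the 14 bands, evaluated at that band's effective zenith angle.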
NASA Astrophysics Data System (ADS)
Mazoyer, M.; Roehrig, R.; Nuissier, O.; Duffourg, F.; Somot, S.
2017-12-01
Most regional climate system models (RCSMs) face difficulties in representing a reasonable precipitation probability density function in the Mediterranean area, and especially over land. Small amounts of rain are too frequent, preventing any realistic representation of droughts or heat waves, while the intensity of heavy precipitating events is underestimated and not well located by most state-of-the-art RCSMs using parameterized convection (resolution from 10 to 50 km). Convective parameterization is a key point for the representation of such events and, recently, the new physics implemented in the CNRM-RCSM has been shown to remarkably improve it, even at a 50-km scale. The present study seeks to further analyse the representation of heavy precipitating events by this new version of CNRM-RCSM using a process-oriented approach. We focus on one particular event in the south-east of France, over the Cévennes. Two hindcast experiments with the CNRM-RCSM (12 and 50 km) are performed and compared with a simulation based on the convection-permitting model Meso-NH, which uses a very similar setup to the CNRM-RCSM hindcasts. The role of small-scale features of the regional topography, and its interaction with the impinging large-scale flow, in triggering the convective event are investigated. This study provides guidance for the ongoing implementation and use of a specific parameterization dedicated to accounting for subgrid-scale orography in the triggering and closure conditions of the CNRM-RCSM convection scheme.
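Purely as a sketch of what a subgrid-orography term in a triggering condition can look like (an assumption for illustration, not the CNRM-RCSM scheme), the following Python function boosts the resolved ascent by a term that grows with subgrid terrain variability and the impinging low-level wind before testing a trigger threshold.

    def convection_triggered(w_large_scale, sigma_orog, wind_speed,
                             k_orog=0.01, w_crit=0.1):
        # Hypothetical subgrid-orography boost (m/s): proportional to the
        # standard deviation of subgrid terrain height (m) and the
        # low-level wind speed (m/s) impinging on it.
        w_orog = k_orog * sigma_orog * wind_speed / 1000.0
        # Convection is triggered if the boosted ascent exceeds a
        # critical vertical velocity (m/s).
        return (w_large_scale + w_orog) > w_crit

    # Example: weak resolved ascent over rough terrain in strong flow
    print(convection_triggered(0.02, 300.0, 15.0))  # True with these values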