Sample records for parameterization method based

  1. Controllers, observers, and applications thereof

    NASA Technical Reports Server (NTRS)

    Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)

    2011-01-01

    Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
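
    For orientation, the "parameterization" that simplifies ADRC tuning in this line of work is commonly the bandwidth parameterization from the ADRC literature: the observer gains are chosen so that all ESO poles sit at -wo, and the controller gains follow from a closed-loop bandwidth wc. The sketch below applies that standard linear-ADRC recipe to a toy second-order plant; the plant model, gain values, and disturbance are illustrative assumptions, not the patented algorithms.

    ```python
    import numpy as np

    # Sketch of linear ADRC with bandwidth parameterization for a
    # second-order plant y'' = f(y, y', d) + b0*u. Two tuning knobs remain:
    # controller bandwidth wc and observer bandwidth wo. The plant, gains,
    # and disturbance below are illustrative assumptions.

    b0, wc, wo = 1.0, 4.0, 20.0
    kp, kd = wc**2, 2*wc                      # PD gains from wc
    l1, l2, l3 = 3*wo, 3*wo**2, wo**3         # ESO gains: all poles at -wo

    dt, T, r = 1e-3, 5.0, 1.0                 # step, horizon, setpoint
    y = yd = 0.0                              # plant state
    z1 = z2 = z3 = 0.0                        # ESO states: y, y', total disturbance

    for k in range(int(T/dt)):
        u = (kp*(r - z1) - kd*z2 - z3) / b0   # disturbance-rejecting control law
        e = y - z1                            # extended state observer (Euler step)
        z1 += dt*(z2 + l1*e)
        z2 += dt*(z3 + l2*e + b0*u)
        z3 += dt*(l3*e)
        # "true" plant with an unknown internal dynamics + external disturbance
        f = -3.0*y - 2.0*yd + np.sin(2*np.pi*0.5*k*dt)
        y, yd = y + dt*yd, yd + dt*(f + b0*u)

    print(f"final output y = {y:.4f} (setpoint {r})")
    ```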

  2. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    DOE PAGES

    Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...

    2017-09-14

    Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than in the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin compared with the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in Alaskan sub-arctic watersheds.

  3. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.

    Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than in the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin compared with the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in Alaskan sub-arctic watersheds.

  4. Parameterizing Coefficients of a POD-Based Dynamical System

    NASA Technical Reports Server (NTRS)

    Kalb, Virginia L.

    2010-01-01

    A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: a procedure that includes direct numerical simulation, followed by POD, followed by Galerkin projection onto a dynamical system, has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven inadequate for successful prediction of flows. A key part of constructing a dynamical system that accurately represents the temporal evolution of the flow dynamics over a range of Reynolds numbers is understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not satisfactory even when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. Parameter-continuation software can be used on the parameterized dynamical system to derive a bifurcation diagram that accurately predicts the temporal flow behavior.
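
    The abstract describes a reduced-order model of the general Galerkin form da_k/dt = c_k(Re) + Σ_m L_km(Re) a_m + Σ_mn Q_kmn(Re) a_m a_n, with coefficients that must vary with Reynolds number. The sketch below integrates a two-mode system of this type near a Hopf bifurcation; the coefficient forms and numbers are hypothetical stand-ins for values a real parameterization would supply.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Minimal POD/Galerkin-style reduced-order model with Re-dependent
    # coefficients: a Stuart-Landau (Hopf) oscillator whose growth rate
    # crosses zero at an assumed critical Reynolds number of 40.

    def rhs(t, a, Re):
        mu = (Re - 40.0) / 40.0        # Re-dependent linear coefficient (assumed form)
        L = np.array([[mu, -1.0], [1.0, mu]])
        return L @ a - (a @ a) * a     # cubic term stands in for modal energy transfer

    for Re in (30.0, 60.0, 100.0):
        t_eval = np.linspace(150, 200, 500)          # sample the long-time behavior
        sol = solve_ivp(rhs, (0, 200), [1e-3, 0.0], args=(Re,),
                        t_eval=t_eval, rtol=1e-8)
        amp = np.abs(sol.y[0]).max()
        print(f"Re={Re:5.1f}  long-time amplitude ~ {amp:.3f}")
    ```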

  5. Parameterization of light absorption by components of seawater in optically complex coastal waters of the Crimea Peninsula (Black Sea).

    PubMed

    Dmitriev, Egor V; Khomenko, Georges; Chami, Malik; Sokolov, Anton A; Churilova, Tatyana Y; Korotaev, Gennady K

    2009-03-01

    The absorption of sunlight by oceanic constituents significantly contributes to the spectral distribution of the water-leaving radiance. Here it is shown that current parameterizations of absorption coefficients do not apply to the optically complex waters of the Crimea Peninsula. Based on in situ measurements, parameterizations of phytoplankton, nonalgal, and total particulate absorption coefficients are proposed. Their performance is evaluated using a log-log regression combined with a low-pass filter and the nonlinear least-squares method. Statistical significance of the estimated parameters is verified using the bootstrap method. The parameterizations are relevant for chlorophyll a concentrations ranging from 0.45 up to 2 mg/m³.

  6. H² regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  7. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is obtained that satisfies both robustness and decoupling performance requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  8. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem, we introduce a Bayesian model averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty within individual parameterization methods (the within-parameterization variance) and the uncertainty from using different parameterization methods (the between-parameterization variance). Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis of the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), in which the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
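
    As a worked illustration of the BMA combination step described above: each parameterization method k contributes an estimate with its own (within-method) variance, weights come from an NLSE-style misfit, and the BMA variance splits into within- and between-parameterization parts. A minimal sketch with made-up numbers, not the ABP data:

    ```python
    import numpy as np

    # Bayesian model averaging over parameterization methods. Each method k
    # supplies an estimated field mean_k and variance var_k (within-method
    # uncertainty); weights come from a normalized least-squares-style
    # goodness of fit. All numbers here are illustrative.

    means = np.array([[2.1, 1.8, 2.4],      # method 1: ln K estimates at 3 points
                      [2.3, 1.7, 2.2],      # method 2
                      [1.9, 2.0, 2.5]])     # method 3
    var_within = np.array([[0.10, 0.12, 0.09],
                           [0.08, 0.15, 0.11],
                           [0.12, 0.10, 0.10]])
    sse = np.array([4.2, 3.6, 5.1])         # misfit of each method (NLSE-style)

    w = np.exp(-0.5*(sse - sse.min()))      # ~ posterior model probability
    w /= w.sum()

    bma_mean = w @ means
    between = w @ (means - bma_mean)**2     # between-parameterization variance
    within = w @ var_within                 # within-parameterization variance
    bma_var = within + between

    print("weights:", np.round(w, 3))
    print("BMA mean:", np.round(bma_mean, 3))
    print("BMA var (within + between):", np.round(bma_var, 3))
    ```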

  9. An improvement of quantum parametric methods by using SGSA parameterization technique and new elementary parametric functionals

    NASA Astrophysics Data System (ADS)

    Sánchez, M.; Oldenhof, M.; Freitez, J. A.; Mundim, K. C.; Ruette, F.

    A systematic improvement of parametric quantum methods (PQMs) is performed by considering (a) a new application of the parameterization procedure to PQMs and (b) novel parametric functionals based on properties of elementary parametric functionals (EPFs) [Ruette et al., Int J Quantum Chem 2008, 108, 1831]. Parameterization was carried out using the simplified generalized simulated annealing (SGSA) method in the CATIVIC program. This code has been parallelized, and a comparison with MOPAC/2007 (PM6) and MINDO/SR was performed for a set of molecules with C-C, C-H, and H-H bonds. Results showed better accuracy than MINDO/SR and MOPAC/2007 for a selected trial set of molecules.

  10. Method of sound synthesis

    DOEpatents

    Miner, Nadine E.; Caudell, Thomas P.

    2004-06-08

    A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.

  11. Electronegativity Equalization Method: Parameterization and Validation for Large Sets of Organic, Organohalogen and Organometal Molecules

    PubMed Central

    Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav

    2007-01-01

    The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for the quality of the parameters. We have improved the EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that were not yet parameterized, specifically Br, I, Fe and Zn. We have also performed crossover validation of all obtained parameters using all training sets that included the relevant elements and confirmed that the calculated parameters provide accurate charges.
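
    For context, the EEM charge calculation itself reduces to one linear solve: the effective electronegativities χ_i = A_i + B_i q_i + κ Σ_{j≠i} q_j/R_ij are equalized to a common value under a total-charge constraint. A minimal sketch, with placeholder parameters rather than any published EEM set:

    ```python
    import numpy as np

    # EEM sketch: solve  A_i + B_i*q_i + kappa*sum_{j!=i} q_j/R_ij = chi_bar
    # for the charges q and the equalized electronegativity chi_bar, with
    # sum_i q_i = Q_total. Parameters below are placeholders.

    def eem_charges(coords, A, B, kappa=0.44, Q_total=0.0):
        n = len(A)
        R = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        M = np.zeros((n + 1, n + 1))
        rhs = np.zeros(n + 1)
        for i in range(n):
            M[i, :n] = kappa / np.where(R[i] > 0, R[i], np.inf)  # off-diagonals
            M[i, i] = B[i]                                       # hardness term
            M[i, n] = -1.0                                       # -chi_bar
            rhs[i] = -A[i]
        M[n, :n] = 1.0                                           # charge constraint
        rhs[n] = Q_total
        sol = np.linalg.solve(M, rhs)
        return sol[:n], sol[n]        # charges, equalized electronegativity

    # toy 3-atom example with hypothetical parameters, coordinates in angstroms
    coords = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.0, 1.1, 0.0]])
    A = np.array([8.5, 5.1, 5.1])     # electronegativity-like parameters
    B = np.array([11.0, 9.0, 9.0])    # hardness-like parameters
    q, chi = eem_charges(coords, A, B)
    print("charges:", np.round(q, 4), " chi_bar:", round(chi, 4))
    ```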

  12. A scheme for parameterizing ice cloud water content in general circulation models

    NASA Technical Reports Server (NTRS)

    Heymsfield, Andrew J.; Donner, Leo J.

    1989-01-01

    A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.

  13. Parameterization of Shortwave Cloud Optical Properties for a Mixture of Ice Particle Habits for use in Atmospheric Models

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Based on single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the effective particle size of a mixture of ice habits, the ice water amount, and the spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Unlike many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of each individual ice habit.

  14. Comprehensive assessment of parameterization methods for estimating clear-sky surface downward longwave radiation

    NASA Astrophysics Data System (ADS)

    Guo, Yamin; Cheng, Jie; Liang, Shunlin

    2018-02-01

    Surface downward longwave radiation (SDLR) is a key variable for calculating the earth's surface radiation budget. In this study, we evaluated seven widely used clear-sky parameterization methods using ground measurements collected from 71 globally distributed fluxnet sites. The Bayesian model averaging (BMA) method was also introduced to obtain a multi-model ensemble estimate. As a whole, the parameterization method of Carmona et al. (2014) performs the best, with an average bias, RMSE, and R² of -0.11 W/m², 20.35 W/m², and 0.92, respectively, followed by the parameterization methods of Idso (1981), Prata (Q J R Meteorol Soc 122:1127-1151, 1996), Brunt (Q J R Meteorol Soc 58:389-420, 1932), and Brutsaert (Water Resour Res 11:742-744, 1975). The accuracy of the BMA is close to that of the parameterization method of Carmona et al. (2014) and comparable to that of the parameterization method of Idso (1981). The advantage of the BMA is that it achieves balanced results compared to the integrated single parameterization methods. To fully assess the performance of the parameterization methods, the effects of climate type, land cover, and surface elevation were also investigated. The five parameterization methods and the BMA all failed over land with a tropical climate type, with high water vapor, and had poor results over forest, wetland, and ice. These methods achieved better results over desert, bare land, cropland, and grass, and had acceptable accuracies for sites at different elevations, except for the parameterization method of Carmona et al. (2014) over high-elevation sites. Thus, a method that can be successfully applied everywhere does not exist.
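
    Several of the evaluated schemes are one-line effective-emissivity formulas, L↓ = ε_clr σ T⁴, with ε_clr built from screen-level air temperature and vapor pressure. A sketch of three of them, using commonly cited textbook coefficients (the study evaluates calibrated variants, whose coefficients may differ):

    ```python
    import numpy as np

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m-2 K-4

    # Clear-sky downward longwave: L = eps_clr * sigma * T**4, with the
    # effective emissivity from air temperature T (K) and water vapor
    # pressure e (hPa). Coefficients are commonly cited default values.

    def eps_brunt(e, a=0.52, b=0.065):          # Brunt (1932)
        return a + b*np.sqrt(e)

    def eps_brutsaert(e, T):                    # Brutsaert (1975)
        return 1.24*(e/T)**(1.0/7.0)

    def eps_idso(e, T):                         # Idso (1981)
        return 0.70 + 5.95e-5*e*np.exp(1500.0/T)

    T, e = 288.0, 12.0                          # example: 15 C, 12 hPa vapor pressure
    for name, eps in [("Brunt", eps_brunt(e)),
                      ("Brutsaert", eps_brutsaert(e, T)),
                      ("Idso", eps_idso(e, T))]:
        print(f"{name:10s} eps={eps:.3f}  SDLR={eps*SIGMA*T**4:7.1f} W m-2")
    ```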

  15. Surge of Bering Glacier and Bagley Ice Field: Parameterization of surge characteristics based on automated analysis of crevasse image data and laser altimeter data

    NASA Astrophysics Data System (ADS)

    Stachura, M.; Herzfeld, U. C.; McDonald, B.; Weltman, A.; Hale, G.; Trantow, T.

    2012-12-01

    The dynamical processes that occur during the surge of a large, complex glacier system are far from being understood. The aim of this paper is to derive a parameterization of surge characteristics that captures the principal processes and can serve as the basis for a dynamic surge model. Innovative mathematical methods are introduced that facilitate derivation of such a parameterization from remote-sensing observations. The methods include automated geostatistical characterization and connectionist-geostatistical classification of dynamic provinces and deformation states, using the vehicle of crevasse patterns. These methods are applied to analyze satellite and airborne image and laser altimeter data collected during the current surge of Bering Glacier and Bagley Ice Field, Alaska.

  16. Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations

    PubMed Central

    Nigh, Gordon

    2015-01-01

    Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and the western USA. As this species becomes increasingly targeted for harvesting, better height-growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and the modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem-analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations tested were indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criterion and the estimated variance. Model parameterization had more influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use, as the best-fitting methods have the most barriers to application in terms of data and software requirements. PMID:25853472
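
    To make the model-function side concrete: the Chapman-Richards height-age curve named above is H(t) = a(1 - e^(-bt))^c, and a basic single-curve fit (without the site-index or random-effects structure) looks like the sketch below; the data are synthetic, not the 84 stem-analysis trees.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fit the Chapman-Richards height-age function H(t) = a*(1 - exp(-b*t))**c
    # to (synthetic) stem-analysis data.

    def chapman_richards(t, a, b, c):
        return a*(1.0 - np.exp(-b*t))**c

    rng = np.random.default_rng(0)
    age = np.linspace(5, 120, 24)                       # years
    height = chapman_richards(age, 32.0, 0.03, 1.4) \
             + rng.normal(0, 0.5, age.size)             # meters, with noise

    popt, pcov = curve_fit(chapman_richards, age, height, p0=(30.0, 0.05, 1.0))
    resid = height - chapman_richards(age, *popt)
    print("a, b, c =", np.round(popt, 4))
    print("residual variance =", round(resid.var(ddof=3), 4))
    ```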

  17. High-Fidelity Geometric Modeling and Mesh Generation for Mechanics Characterization of Polycrystalline Materials

    DTIC Science & Technology

    2014-10-26

    From the parameterization results, we extract adaptive and anisotropic T-meshes for subsequent T-spline surface construction. Finally, a gradient flow field-based method [7, 12] is used to generate adaptive and anisotropic quadrilateral meshes, which can serve as the control mesh for high-order T-spline surface construction.

  18. Improving Calculation Accuracies of Accumulation-Mode Fractions Based on Spectral of Aerosol Optical Depths

    NASA Astrophysics Data System (ADS)

    Ying, Zhang; Zhengqiang, Li; Yan, Wang

    2014-03-01

    Anthropogenic aerosols released into the atmosphere scatter and absorb incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic aerosol optical depth (AOD) calculations are therefore important in climate change research. Accumulation-mode fractions (AMFs), an anthropogenic aerosol parameter defined as the fraction of AOD contributed by particles with diameters smaller than 1 μm relative to all particles, can be calculated by an AOD spectral deconvolution algorithm, and the anthropogenic AODs are then obtained using the AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All data are derived from the AErosol RObotic NETwork (AERONET) website. The parameterization method improves the accuracy of the AMFs compared with the constant-truncation-radius method. We find good agreement using the parameterization method, with a squared correlation coefficient of 0.96 and a mean AMF deviation of 0.028. The parameterization method also effectively corrects the AMF underestimation in winter. It is suggested that variations of the Ångström index in the coarse mode have significant impacts on AMF inversions.
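
    The published spectral deconvolution separates fine- and coarse-mode AOD from spectral curvature; as a simplified illustration of the same idea, the sketch below least-squares fits a two-component Ångström model τ(λ) = β_f λ^(-α_f) + β_c λ^(-α_c) to multi-wavelength AOD and reports the fine-mode (accumulation-mode) fraction at 500 nm. Synthetic data, not the operational algorithm.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Two-mode Angstrom fit: total AOD as fine + coarse power laws.
    # Bounds keep the fine mode steeper (alpha_f > 1) than the coarse mode.

    def two_mode(lam_um, beta_f, alpha_f, beta_c, alpha_c):
        return beta_f*lam_um**(-alpha_f) + beta_c*lam_um**(-alpha_c)

    lam = np.array([0.34, 0.38, 0.44, 0.50, 0.675, 0.87, 1.02])  # microns
    tau = two_mode(lam, 0.12, 1.9, 0.08, 0.3)                    # synthetic truth

    p, _ = curve_fit(two_mode, lam, tau, p0=(0.1, 1.5, 0.1, 0.5),
                     bounds=([0, 1.0, 0, 0.0], [2, 3.0, 2, 1.0]))
    amf = p[0]*0.5**(-p[1]) / two_mode(0.5, *p)    # fine-mode fraction at 500 nm
    print("fitted params:", np.round(p, 3), " AMF(500 nm) = %.3f" % amf)
    ```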

  19. Brain Surface Conformal Parameterization Using Riemann Surface Structure

    PubMed Central

    Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung

    2011-01-01

    In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336

  20. Electronegativity equalization method: parameterization and validation for organic molecules using the Merz-Kollman-Singh charge distribution scheme.

    PubMed

    Jirousková, Zuzana; Vareková, Radka Svobodová; Vanek, Jakub; Koca, Jaroslav

    2009-05-01

    The electronegativity equalization method (EEM) was developed by Mortier et al. as a semiempirical method based on density-functional theory. After parameterization, in which the EEM parameters A_i and B_i and the adjusting factor kappa are obtained, this approach can be used for calculation of the average electronegativity and the charge distribution in a molecule. The aim of this work is to perform the EEM parameterization using the Merz-Kollman-Singh (MK) charge distribution scheme obtained from B3LYP/6-31G* and HF/6-31G* calculations. To achieve this goal, we selected a set of 380 organic molecules from the Cambridge Structural Database (CSD) and used the methodology that was recently successfully applied to EEM parameterization for calculating HF/STO-3G Mulliken charges on large sets of molecules. In the case of B3LYP/6-31G* MK charges, we have improved the EEM parameters for already parameterized elements, specifically C, H, N, O, and F. Moreover, we have also developed EEM parameters for S, Br, Cl, and Zn, which had not yet been parameterized for this level of theory and basis set. In the case of HF/6-31G* MK charges, we have developed EEM parameters for C, H, N, O, S, Br, Cl, F, and Zn, none of which had been parameterized for this level of theory and basis set so far. The obtained EEM parameters were verified by a previously developed validation procedure and used for charge calculation on a different set of 116 organic molecules from the CSD. The calculated EEM charges are in very good agreement with the quantum mechanically obtained ab initio charges.

  1. Actinide electronic structure and atomic forces

    NASA Astrophysics Data System (ADS)

    Albers, R. C.; Rudin, Sven P.; Trinkle, Dallas R.; Jones, M. D.

    2000-07-01

    We have developed a new method [1] of fitting tight-binding parameterizations based on functional forms developed at the Naval Research Laboratory [2]. We have applied these methods to actinide metals and report our success using them (see below). The fitting procedure uses first-principles local-density-approximation (LDA) linear augmented plane-wave (LAPW) band-structure techniques [3] to first calculate an electronic band structure and total energy for the fcc, bcc, and simple cubic crystal structures of the actinide of interest. The tight-binding parameterization is then chosen to fit the detailed energy eigenvalues of the bands along symmetry directions, and the symmetry of the parameterization is constrained to agree with the correct symmetry of the LDA band structure at each eigenvalue and k-vector that is fit. By fitting to a range of different volumes and the three different crystal structures, we find that the resulting parameterization is robust and appears to accurately calculate other crystal structures and properties of interest.
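
    As a toy version of such a fit (one s-band on a 1-D chain instead of multi-orbital Slater-Koster forms over several structures), the sketch below least-squares fits an on-site energy and a hopping parameter to reference eigenvalues standing in for an LDA band:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Toy tight-binding fit: 1-D single-s-band chain, E(k) = eps + 2*t*cos(k*a),
    # fitted to "reference" eigenvalues sampled along the zone.

    a = 1.0
    k = np.linspace(0, np.pi/a, 40)
    E_ref = 0.3 + 2*(-1.1)*np.cos(k*a)       # pretend LDA band (eps=0.3, t=-1.1)

    def resid(p):
        eps, t = p
        return (eps + 2*t*np.cos(k*a)) - E_ref

    fit = least_squares(resid, x0=(0.0, -1.0))
    print("fitted eps, t =", np.round(fit.x, 4), " cost =", fit.cost)
    ```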

  2. Developing Parametric Models for the Assembly of Machine Fixtures for Virtual Multiaxial CNC Machining Centers

    NASA Astrophysics Data System (ADS)

    Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.

    2018-01-01

    This paper dwells upon a variance parameterization method. Variance (dimensional) parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated into a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for machine tooling used in manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method significantly reduces tooling design time when the geometric parameters of a part change. The method can also reduce the time needed for design and engineering preproduction, in particular the development of control programs for CNC equipment and coordinate-measuring machines, and can automate the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.

  3. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    NASA Astrophysics Data System (ADS)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including virtual reality (VR), multimedia, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.

  4. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.

  5. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    NASA Astrophysics Data System (ADS)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail

    2011-01-01

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
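
    The core of the approach is that any set of angles in (0, π) filled into the spherical (Cholesky-based) form yields a valid, internally consistent correlation matrix. A minimal sketch of that construction, with fixed illustrative angles in place of the paper's cSigma rule:

    ```python
    import numpy as np

    # Spherical parameterization of a correlation matrix (Pinheiro & Bates
    # 1996): each row of the Cholesky factor L is a unit vector written in
    # spherical coordinates, so C = L @ L.T is positive definite with unit
    # diagonal for any angles in (0, pi).

    def corr_from_angles(theta):
        # theta: (n, n) array; only theta[i, :i] is used
        n = theta.shape[0]
        L = np.zeros((n, n))
        L[0, 0] = 1.0
        for i in range(1, n):
            s = 1.0                               # running product of sines
            for j in range(i):
                L[i, j] = np.cos(theta[i, j]) * s
                s *= np.sin(theta[i, j])
            L[i, i] = s
        return L @ L.T

    n = 4
    theta = np.full((n, n), np.pi/3)              # illustrative: all angles 60 deg
    C = corr_from_angles(theta)
    print(np.round(C, 3))
    print("positive semidefinite:", np.all(np.linalg.eigvalsh(C) >= 0))
    ```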

  6. A novel non-uniform control vector parameterization approach with time grid refinement for flight level tracking optimal control problems.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua

    2018-02-01

    A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform-refinement CVP method, while the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum-time-cost problem are tested as illustrations, with the uniform-refinement control vector parameterization method adopted as the comparative baseline. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved.
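
    The basic CVP idea is to make the control piecewise-constant on a time grid and optimize the segment values. A minimal uniform-grid sketch on a toy double-integrator tracking problem (the model, weights, and bounds are assumptions; the paper's contribution, the HHT-driven non-uniform grid refinement, is not reproduced here):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    # Control vector parameterization: u(t) is piecewise-constant on N
    # segments; the N segment values are the optimization variables.

    T, N = 10.0, 10
    edges = np.linspace(0.0, T, N + 1)

    def simulate(u_params):
        def rhs(t, x):
            i = min(np.searchsorted(edges, t, 'right') - 1, N - 1)
            return [x[1], u_params[i]]            # h' = v, v' = u
        return solve_ivp(rhs, (0, T), [0.0, 0.0], max_step=0.05)

    def cost(u_params, target=100.0):
        sol = simulate(u_params)
        h, v = sol.y[0, -1], sol.y[1, -1]
        effort = np.sum(u_params**2) * (T / N)
        return (h - target)**2 + 10.0*v**2 + 0.1*effort

    res = minimize(cost, x0=np.zeros(N), method="SLSQP",
                   bounds=[(-5.0, 5.0)]*N)
    print("optimal segment controls:", np.round(res.x, 2))
    print("cost:", round(res.fun, 3))
    ```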

  7. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen

    2011-08-16

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.

  8. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; ...

    2014-06-11

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
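
    A small demonstration of why the PDF-plus-Monte-Carlo coupling matters: microphysical process rates are nonlinear in the hydrometeor mixing ratios, so the grid-mean rate is not the rate at the grid mean. The sketch below samples an assumed lognormal cloud-water PDF and drives an illustrative autoconversion power law (the actual scheme predicts a richer multivariate PDF):

    ```python
    import numpy as np

    # Monte Carlo sampling of an assumed subgrid PDF. Because the process
    # rate is nonlinear (~ q**2.47), averaging the rate over the PDF gives
    # a different (larger) result than evaluating it at the grid mean.

    rng = np.random.default_rng(1)
    q_mean, rel_std = 0.5e-3, 0.8              # grid-mean q (kg/kg), relative std

    sigma = np.sqrt(np.log(1 + rel_std**2))    # lognormal parameters matching
    mu = np.log(q_mean) - 0.5*sigma**2         # the specified mean and std
    q = rng.lognormal(mu, sigma, size=200_000) # subgrid samples

    rate = lambda q: 1350.0 * q**2.47          # illustrative autoconversion law
    print("rate at grid mean :", rate(q_mean))
    print("grid-mean rate    :", rate(q).mean())
    print(f"ratio (PDF-aware / mean-only): {rate(q).mean()/rate(q_mean):.2f}x")
    ```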

  9. Parameterizing deep convection using the assumed probability density function method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storer, R. L.; Griffin, B. M.; Höft, J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  10. Parameterizing deep convection using the assumed probability density function method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storer, R. L.; Griffin, B. M.; Hoft, Jan

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  11. A Thermal Infrared Radiation Parameterization for Atmospheric Studies

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)

    2001-01-01

    This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN database, the parameterization includes the absorption due to the major gaseous absorbers (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed using either the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of high-spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
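
    The k-distribution step mentioned above replaces a band's rapidly varying line spectrum with a few (k_i, w_i) quadrature pairs, so the band transmission becomes a short sum of exponentials in the absorber amount u: T(u) = Σ_i w_i e^(-k_i u). A sketch with made-up pairs (real tables are derived from HITRAN-based line-by-line computations):

    ```python
    import numpy as np

    # k-distribution band transmission: weighted sum of exponentials.
    # The (k_i, w_i) pairs below are illustrative placeholders.

    k = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0])   # absorption coefficients
    w = np.array([0.40, 0.30, 0.18, 0.09, 0.03])  # quadrature weights (sum to 1)

    def band_transmission(u):
        return np.sum(w * np.exp(-k * u))

    for u in (0.1, 1.0, 10.0):                    # absorber amounts
        print(f"u={u:5.1f}  T_band={band_transmission(u):.4f}")
    ```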

  12. Parameterizing the Transport Pathways for Cell Invasion in Complex Scaffold Architectures

    PubMed Central

    Ashworth, Jennifer C.; Mehr, Marco; Buxton, Paul G.; Best, Serena M.

    2016-01-01

    Interconnecting pathways through porous tissue engineering scaffolds play a vital role in determining nutrient supply, cell invasion, and tissue ingrowth. However, the global use of the term “interconnectivity” often fails to describe the transport characteristics of these pathways, giving no clear indication of their potential to support tissue synthesis. This article uses new experimental data to provide a critical analysis of reported methods for the description of scaffold transport pathways, ranging from qualitative image analysis to thorough structural parameterization using X-ray Micro-Computed Tomography. In the collagen scaffolds tested in this study, it was found that the proportion of pore space perceived to be accessible dramatically changed depending on the chosen method of analysis. Measurements of % interconnectivity as defined in this manner varied as a function of direction and connection size, and also showed a dependence on measurement length scale. As an alternative, a method for transport pathway parameterization was investigated, using percolation theory to calculate the diameter of the largest sphere that can travel to infinite distance through a scaffold in a specified direction. As proof of principle, this approach was used to investigate the invasion behavior of primary fibroblasts in response to independent changes in pore wall alignment and pore space accessibility, parameterized using the percolation diameter. The result was that both properties played a distinct role in determining fibroblast invasion efficiency. This example therefore demonstrates the potential of the percolation diameter as a method of transport pathway parameterization, to provide key structural criteria for application-based scaffold design. PMID:26888449
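
    A voxel-based version of the percolation-diameter idea is straightforward to sketch: a sphere of radius r fits wherever the Euclidean distance to the solid phase exceeds r, and percolation in a direction means those voxels connect the two opposite faces; a binary search over r then gives the largest percolating diameter. The geometry below is a smoothed random field standing in for Micro-CT data:

    ```python
    import numpy as np
    from scipy import ndimage

    def percolation_diameter(pore, axis=0, tol=0.25):
        # pore: boolean 3-D array, True = pore space
        dt = ndimage.distance_transform_edt(pore)

        def percolates(r):
            labels, _ = ndimage.label(dt >= r)        # where a radius-r sphere fits
            front = np.take(labels, 0, axis=axis)     # entry face
            back = np.take(labels, -1, axis=axis)     # exit face
            return np.intersect1d(front[front > 0], back[back > 0]).size > 0

        lo, hi = 0.0, float(dt.max())                 # binary search over radius
        while hi - lo > tol:
            mid = 0.5*(lo + hi)
            lo, hi = (mid, hi) if percolates(mid) else (lo, mid)
        return 2.0*lo                                 # diameter, in voxel units

    rng = np.random.default_rng(2)
    pore = ndimage.gaussian_filter(rng.random((60, 60, 60)), 2) > 0.5
    print("porosity:", round(pore.mean(), 3))
    print("percolation diameter (voxels):", round(percolation_diameter(pore), 2))
    ```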

  13. Remote Sensing of Soil Moisture: A Comparison of Optical and Thermal Methods

    NASA Astrophysics Data System (ADS)

    Foroughi, H.; Naseri, A. A.; Boroomandnasab, S.; Sadeghi, M.; Jones, S. B.; Tuller, M.; Babaeian, E.

    2017-12-01

    Recent technological advances in satellite and airborne remote sensing have provided new means for large-scale soil moisture monitoring. Traditional methods for soil moisture retrieval require both thermal and optical RS observations. In this study we compared the traditional trapezoid model, parameterized based on the land surface temperature - normalized difference vegetation index (LST-NDVI) space, with the recently developed optical trapezoid model OPTRAM, parameterized based on the shortwave-infrared transformed reflectance (STR)-NDVI space, for an extensive sugarcane field located in southwestern Iran. Twelve Landsat-8 satellite images were acquired during the sugarcane growth season (April to October 2016). Reference in situ soil moisture data were obtained at 22 locations at different depths via core sampling and oven-drying. The obtained results indicate that the thermal/optical and optical prediction methods are comparable, both with volumetric moisture content estimation errors of about 0.04 cm³ cm⁻³. However, the OPTRAM model is more efficient because it does not require thermal data and can be parameterized once for a given location, because unlike the LST-soil moisture relationship, the reflectance-soil moisture relationship does not vary significantly with environmental variables (e.g., air temperature, wind speed, etc.).
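
    For reference, the OPTRAM retrieval itself is compact: transform the SWIR reflectance to STR = (1 - R_SWIR)² / (2 R_SWIR), then normalize between NDVI-dependent dry and wet edges. A sketch with placeholder edge coefficients (in practice these are fitted to the scene's STR-NDVI pixel cloud):

    ```python
    import numpy as np

    # OPTRAM-style normalized moisture index from SWIR reflectance and NDVI.
    # Edge coefficients (i_d, s_d, i_w, s_w) are scene-specific placeholders.

    def optram_moisture(r_swir, ndvi, i_d, s_d, i_w, s_w):
        STR = (1.0 - r_swir)**2 / (2.0*r_swir)   # transformed reflectance
        str_dry = i_d + s_d*ndvi                 # dry edge (low STR)
        str_wet = i_w + s_w*ndvi                 # wet edge (high STR)
        return np.clip((STR - str_dry) / (str_wet - str_dry), 0.0, 1.0)

    r_swir = np.array([0.30, 0.18, 0.10])        # example SWIR reflectances
    ndvi = np.array([0.25, 0.55, 0.80])
    W = optram_moisture(r_swir, ndvi, i_d=0.4, s_d=1.0, i_w=3.0, s_w=5.0)
    print("normalized moisture W:", np.round(W, 3))
    ```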

  14. Constructing IGA-suitable planar parameterization from complex CAD boundary by domain partition and global/local optimization

    NASA Astrophysics Data System (ADS)

    Xu, Gang; Li, Ming; Mourrain, Bernard; Rabczuk, Timon; Xu, Jinlan; Bordas, Stéphane P. A.

    2018-01-01

    In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain by a simple boundary, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations including Bézier extraction and subdivision are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information generation of quadrilateral decomposition, the optimal placement of interior Bézier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch-partition with high quality. Finally, after the imposition of C1/G1-continuity constraints on the interface of neighboring Bézier patches with respect to each quad in the quadrangulation, the high-quality Bézier patch parameterization is obtained by a C1-constrained local optimization method to achieve uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples which are compared to results obtained by the skeleton-based parameterization approach.

  15. Parameterization of plume chemistry into large-scale atmospheric models: Application to aircraft NOx emissions

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.

    2009-10-01

    A method is presented to parameterize the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources into large-scale models. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, and operates via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact of aircraft NOx emissions on atmospheric ozone are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the North Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization to transport emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during plume dissipation. Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be introduced in large-scale models, such as ship exhausts, provided that the plume life cycle, the type of emissions, and the major reactions involved in the nonlinear chemical systems can be determined with sufficient accuracy.

  16. Intercomparison Project on Parameterizations of Large-Scale Dynamics for Simulations of Tropical Convection

    NASA Astrophysics Data System (ADS)

    Sobel, A. H.; Wang, S.; Bellon, G.; Sessions, S. L.; Woolnough, S.

    2013-12-01

    Parameterizations of large-scale dynamics have been developed in the past decade for studying the interaction between tropical convection and large-scale dynamics, based on our physical understanding of the tropical atmosphere. A principal advantage of these methods is that they offer a pathway to attack the key question of what controls large-scale variations of tropical deep convection. These methods have been used with both single column models (SCMs) and cloud-resolving models (CRMs) to study the interaction of deep convection with several kinds of environmental forcings. While much has been learned from these efforts, different groups' efforts are somewhat hard to compare. Different models, different versions of the large-scale parameterization methods, and experimental designs that differ in other ways are used. It is not obvious which choices are consequential to the scientific conclusions drawn and which are not. The methods have matured to the point that there is value in an intercomparison project. In this context, the Global Atmospheric Systems Study - Weak Temperature Gradient (GASS-WTG) project was proposed at the Pan-GASS meeting in September 2012. The weak temperature gradient approximation is one method to parameterize large-scale dynamics, and is used in the project name for historical reasons and simplicity, but another method, the damped gravity wave (DGW) method, will also be used in the project. The goal of the GASS-WTG project is to develop community understanding of the parameterization methods currently in use. Their strengths, weaknesses, and functionality in models with different physics and numerics will be explored in detail, and their utility to improve our understanding of tropical weather and climate phenomena will be further evaluated. This presentation will introduce the intercomparison project, including background, goals, and overview of the proposed experimental design. Interested groups will be invited to join (it will not be too late), and preliminary results will be presented.
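
    To make the coupling concrete: the WTG approximation in its simplest form diagnoses a large-scale vertical velocity that relaxes the simulated column's potential temperature toward a reference profile, w_WTG(z) = (θ(z) - θ_ref(z)) / (τ ∂θ_ref/∂z); the associated adiabatic tendency then feeds back on the column. A minimal sketch with idealized profiles (actual intercomparison setups treat the boundary layer separately, and the DGW method differs):

    ```python
    import numpy as np

    # Diagnose a WTG large-scale vertical velocity from a warm anomaly.
    # Profiles and the relaxation time scale are idealized assumptions.

    z = np.linspace(0, 15e3, 31)                         # height (m)
    theta_ref = 300.0 + 4.0e-3*z                         # reference profile (K)
    theta = theta_ref + 1.0*np.exp(-((z - 6e3)/2e3)**2)  # 1 K warm anomaly aloft

    tau = 2*3600.0                                       # relaxation time (s)
    dthdz = np.gradient(theta_ref, z)                    # reference stratification
    w_wtg = (theta - theta_ref) / (tau*dthdz)            # WTG vertical velocity

    i = np.argmax(np.abs(w_wtg))
    print(f"peak WTG w: {w_wtg[i]*100:.2f} cm/s at z = {z[i]/1e3:.1f} km")
    ```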

  17. Separation of Intercepted Multi-Radar Signals Based on Parameterized Time-Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Lu, W. L.; Xie, J. W.; Wang, H. M.; Sheng, C.

    2016-09-01

    Modern radars use complex waveforms to obtain high detection performance and low probabilities of interception and identification. Signals intercepted from multiple radars overlap considerably in both the time and frequency domains and are difficult to separate with primary time parameters. Time-frequency analysis (TFA), as a key signal-processing tool, can provide better insight into the signal than conventional methods. In particular, among the various types of TFA, parameterized time-frequency analysis (PTFA) has shown great potential to investigate the time-frequency features of such non-stationary signals. In this paper, we propose a procedure for PTFA to separate overlapped radar signals; it includes five steps: initiation, parameterized time-frequency analysis, demodulating the signal of interest, adaptive filtering and recovering the signal. The effectiveness of the method was verified with simulated data and an intercepted radar signal received in a microwave laboratory. The results show that the proposed method has good performance and has potential in electronic reconnaissance applications, such as electronic intelligence, electronic warfare support measures, and radar warning.

  18. Model-driven harmonic parameterization of the cortical surface: HIP-HOP.

    PubMed

    Auzias, G; Lefèvre, J; Le Troter, A; Fischer, C; Perrot, M; Régis, J; Coulon, O

    2013-05-01

    In the context of inter-subject brain surface matching, we present a parameterization of the cortical surface constrained by a model of cortical organization. The parameterization is defined via a harmonic mapping of each hemisphere surface to a rectangular planar domain that integrates a representation of the model. As opposed to previous landmark-based registration methods, we do not match folds between individuals but instead optimize the fit between cortical sulci and specific iso-coordinate axes in the model. This strategy overcomes some limitations of sulcus-based registration techniques, such as the topological variability of sulcal landmarks across subjects. Experiments on 62 subjects with manually traced sulci are presented and compared with the results of the FreeSurfer software. The evaluation involves a measure of dispersion of sulci together with angular and area distortions. We show that the model-based strategy can lead to a natural, efficient and very fast (less than 5 min per hemisphere) method for defining inter-subject correspondences. We discuss how this approach also reduces the problems inherent in anatomically defined landmarks and opens the way to the investigation of cortical organization through the notions of orientation and alignment of structures across the cortex.

  19. Aerosol hygroscopic growth parameterization based on a solute specific coefficient

    NASA Astrophysics Data System (ADS)

    Metzger, S.; Steil, B.; Xu, L.; Penner, J. E.; Lelieveld, J.

    2011-09-01

    Water is a main component of atmospheric aerosols and its amount depends on the particle chemical composition. We introduce a new parameterization for the aerosol hygroscopic growth factor (HGF), based on an empirical relation between water activity (aw) and solute molality (μs) through a single solute-specific coefficient νi. Three main advantages are: (1) wide applicability, (2) simplicity and (3) analytical nature. (1) Our approach considers the Kelvin effect and covers ideal solutions at large relative humidity (RH), including CCN activation, as well as concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). (2) A single νi coefficient suffices to parameterize the HGF for a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. (3) In contrast to previous methods, our analytical aw parameterization does not depend only on a linear correction factor for the solute molality; instead, νi also appears in the exponent in the form x · a^x. According to our findings, νi can be assumed constant for the entire aw range (0-1). Thus, the νi-based method is computationally efficient. In this work we focus on single solute solutions, where νi is pre-determined with the bisection method from our analytical equations using RHD measurements and the saturation molality μs^sat. The computed aerosol HGF and supersaturation (Köhler theory) compare well with the results of the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4 relevant for CCN modeling and calibration studies. The equations introduced here provide the basis of our revised gas-liquid-solid partitioning model, i.e. version 4 of the EQuilibrium Simplified Aerosol Model (EQSAM4), described in a companion paper.

  20. A Novel Continuation Power Flow Method Based on Line Voltage Stability Index

    NASA Astrophysics Data System (ADS)

    Zhou, Jianfang; He, Yuqing; He, Hongbin; Jiang, Zhuohan

    2018-01-01

    A novel continuation power flow method based on a line voltage stability index is proposed in this paper. The line voltage stability index is used to select the lines for parameterization, and the selection is continually updated as the load changes. The calculation stages of the continuation power flow are determined by the angle change of the direction vector of the prediction equation. An adaptive step-length control strategy then computes the next prediction direction and step size according to the current calculation stage. The proposed method has a clear physical interpretation and high computing speed, and it accounts for the local character of voltage instability, so it can reveal the weak nodes and weak areas of a power system. Because the PV curves are traced more completely, the method offers advantages for analyzing the voltage stability margin of large-scale power grids.

  1. Trade-Wind Cloudiness and Climate

    NASA Technical Reports Server (NTRS)

    Randall, David A.

    1997-01-01

    Closed Mesoscale Cellular Convection (MCC) consists of mesoscale cloud patches separated by narrow clear regions. Strong radiative cooling occurs at the cloud top. A dry two-dimensional Boussinesq model is used to study the effects of cloud-top cooling on convection. Wide updrafts and narrow downdrafts are used to indicate the asymmetric circulations associated with the mesoscale cloud patches. Based on the numerical results, a conceptual model was constructed to suggest a mechanism for the formation of closed MCC over cool ocean surfaces. A new method to estimate the radiative and evaporative cooling in the entrainment layer of a stratocumulus-topped boundary layer has been developed. The method was applied to a set of Large-Eddy Simulation (LES) results and to a set of tethered-balloon data obtained during FIRE. We developed a stratocumulus-capped marine mixed layer model which includes a parameterization of drizzle based on the use of a predicted Cloud Condensation Nuclei (CCN) number concentration. We have developed, implemented, and tested a very elaborate new stratiform cloudiness parameterization for use in GCMs. Finally, we have developed a new, mechanistic parameterization of the effects of cloud-top cooling on the entrainment rate.

  2. On the usage of classical nucleation theory in quantification of the impact of bacterial INP on weather and climate

    NASA Astrophysics Data System (ADS)

    Sahyoun, Maher; Wex, Heike; Gosewinkel, Ulrich; Šantl-Temkiv, Tina; Nielsen, Niels W.; Finster, Kai; Sørensen, Jens H.; Stratmann, Frank; Korsholm, Ulrik S.

    2016-08-01

    Bacterial ice-nucleating particles (INP) are present in the atmosphere and efficient in heterogeneous ice nucleation at temperatures up to -2 °C in mixed-phase clouds. However, due to their low emission rates, their climatic impact was considered insignificant in previous modeling studies. In view of uncertainties about the actual atmospheric emission rates and concentrations of bacterial INP, it is important to re-investigate the threshold fraction of cloud droplets containing bacterial INP for a pronounced effect on ice nucleation, by using a suitable parameterization that properly describes the ice nucleation process by bacterial INP. Therefore, we compared two heterogeneous ice-nucleation rate parameterizations, denoted CH08 and HOO10 herein, both of which are based on classical nucleation theory and measurements and use similar equations but different parameters, to an empirical parameterization, denoted HAR13 herein, which considers implicitly the number of bacterial INP. All parameterizations were used to calculate the ice nucleation probability offline. HAR13 and HOO10 were implemented and tested in a one-dimensional version of a weather forecast model in two meteorological cases. Ice nucleation probabilities based on HAR13 and CH08 were similar, in spite of their different derivation, and were higher than those based on HOO10. This study shows the importance of the method of parameterization and of the input variable, the number of bacterial INP, for accurately assessing their role in meteorological and climatic processes.

  3. Parameterization of air temperature in high temporal and spatial resolution from a combination of the SEVIRI and MODIS instruments

    NASA Astrophysics Data System (ADS)

    Zakšek, Klemen; Schroedter-Homscheidt, Marion

    Some applications, e.g. from traffic or energy management, require air temperature data in high spatial and temporal resolution at two metres height above the ground (T2m), sometimes in near-real-time. Thus, a parameterization based on boundary layer physical principles was developed that determines the air temperature from remote sensing data (SEVIRI data aboard the MSG and MODIS data aboard the Terra and Aqua satellites). The method consists of two parts. First, a downscaling procedure from the SEVIRI pixel resolution of several kilometres to a one kilometre spatial resolution is performed using a regression analysis between the land surface temperature (LST) and the normalized differential vegetation index (NDVI) acquired by the MODIS instrument. Second, the lapse rate between the LST and T2m is removed using an empirical parameterization that requires albedo, down-welling surface short-wave flux, relief characteristics and NDVI data. The method was successfully tested for Slovenia, the French region Franche-Comté and southern Germany for the period from May to December 2005, indicating that the parameterization is valid for Central Europe. This parameterization results in a root mean square deviation (RMSD) of 2.0 K during the daytime with a bias of -0.01 K and a correlation coefficient of 0.95. This is promising, especially considering the high temporal (30 min) and spatial resolution (1000 m) of the results.
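
    A minimal Python sketch of the first step (the coarse-scale LST-NDVI regression applied on the fine grid); a plain least-squares line is assumed here, while the second step, the empirical lapse-rate removal, additionally needs albedo, short-wave flux and relief data:

        import numpy as np

        def downscale_lst(lst_coarse, ndvi_coarse, ndvi_fine):
            """Sharpen coarse LST to the 1-km NDVI grid with a linear
            LST-NDVI regression fitted at the coarse scale."""
            A = np.vstack([ndvi_coarse.ravel(), np.ones(ndvi_coarse.size)]).T
            slope, intercept = np.linalg.lstsq(A, lst_coarse.ravel(), rcond=None)[0]
            return slope * ndvi_fine + intercept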

  4. Correction of Excessive Precipitation over Steep and High Mountains in a GCM: A Simple Method of Parameterizing the Thermal Effects of Subgrid Topographic Variation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.

    2015-01-01

    The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.

  5. A theory-based parameterization for heterogeneous ice nucleation and implications for the simulation of ice processes in atmospheric models

    NASA Astrophysics Data System (ADS)

    Savre, J.; Ekman, A. M. L.

    2015-05-01

    A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
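
    A minimal Python sketch of the core numerical step described above: averaging classical-nucleation-theory freezing probabilities over a contact-angle PDF with a quasi Monte Carlo (Halton) sample. The lognormal PDF and all numerical values are placeholders, not the paper's fitted parameters:

        import numpy as np
        from scipy.stats import lognorm, qmc

        def form_factor(theta):
            """CNT geometric factor reducing the homogeneous energy barrier
            for a contact angle theta (radians)."""
            c = np.cos(theta)
            return (2.0 + c) * (1.0 - c) ** 2 / 4.0

        def frozen_fraction(dg_kt, j0, area, dt, mu=0.0, sigma=0.3, n=4096):
            """Quasi Monte Carlo average of 1 - exp(-J * A * dt) over a
            lognormal contact-angle PDF."""
            u = qmc.Halton(d=1, scramble=True).random(n).ravel()
            theta = lognorm.ppf(u, s=sigma, scale=np.exp(mu))   # sampled angles
            j = j0 * np.exp(-dg_kt * form_factor(theta))        # nucleation rate
            return np.mean(1.0 - np.exp(-j * area * dt))

    Removing the lowest-θ part of the PDF between time steps would mimic the paper's progressive depletion of the most efficient ice nuclei.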

  6. How to assess the impact of a physical parameterization in simulations of moist convection?

    NASA Astrophysics Data System (ADS)

    Grabowski, Wojciech

    2017-04-01

    A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or large-eddy simulation model) consists of a fluid flow solver combined with the required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation will describe a novel modeling methodology, piggybacking, that allows studying the impact of a physical parameterization on cloud dynamics with confidence. The focus will be on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized deep convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.

  7. An RBF-based reparameterization method for constrained texture mapping.

    PubMed

    Yu, Hongchuan; Lee, Tong-Yee; Yeh, I-Cheng; Yang, Xiaosong; Li, Wenxi; Zhang, Jian J

    2012-07-01

    Texture mapping has long been used in computer graphics to enhance the realism of virtual scenes. However, to match the 3D model feature points with the corresponding pixels in a texture image, surface parameterization must satisfy specific positional constraints. However, despite numerous research efforts, the construction of a mathematically robust, foldover-free parameterization that is subject to positional constraints continues to be a challenge. In the present paper, this foldover problem is addressed by developing radial basis function (RBF)-based reparameterization. Given initial 2D embedding of a 3D surface, the proposed method can reparameterize 2D embedding into a foldover-free 2D mesh, satisfying a set of user-specified constraint points. In addition, this approach is mesh free. Therefore, generating smooth texture mapping results is possible without extra smoothing optimization.
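
    A minimal Python sketch of the mesh-free idea, using SciPy's RBFInterpolator to build a smooth displacement field that carries the constraint points to their targets (the paper's own basis choice and foldover-free guarantees are more involved):

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def rbf_reparameterize(uv, src, dst):
            """Warp an initial 2D embedding uv (n, 2) so that constraint
            points src (k, 2) land on user-specified targets dst (k, 2)."""
            warp = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")
            return uv + warp(uv)  # add the interpolated displacements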

  8. Non-stationary signal analysis based on general parameterized time-frequency transform and its application in the feature extraction of a rotary machine

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Peng, Zhike; Chen, Shiqian; Yang, Yang; Zhang, Wenming

    2018-06-01

    With the development of large rotary machines for faster and more integrated performance, the condition monitoring and fault diagnosis for them are becoming more challenging. Since the time-frequency (TF) pattern of the vibration signal from the rotary machine often contains condition information and fault feature, the methods based on TF analysis have been widely-used to solve these two problems in the industrial community. This article introduces an effective non-stationary signal analysis method based on the general parameterized time-frequency transform (GPTFT). The GPTFT is achieved by inserting a rotation operator and a shift operator in the short-time Fourier transform. This method can produce a high-concentrated TF pattern with a general kernel. A multi-component instantaneous frequency (IF) extraction method is proposed based on it. The estimation for the IF of every component is accomplished by defining a spectrum concentration index (SCI). Moreover, such an IF estimation process is iteratively operated until all the components are extracted. The tests on three simulation examples and a real vibration signal demonstrate the effectiveness and superiority of our method.
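
    A minimal Python sketch of the simplest (linear-kernel) instance of this idea: demodulate with a candidate chirp rate, score the concentration of the resulting spectrogram with an SCI-like measure, and keep the best rate. The window length and the L4/L2 concentration measure are illustrative choices:

        import numpy as np
        from scipy.signal import stft

        def concentration(sig, fs):
            """L4/L2-type concentration of the spectrogram; larger values
            indicate a more concentrated TF pattern."""
            _, _, Z = stft(sig, fs=fs, nperseg=256)
            p = np.abs(Z) ** 2
            return (p ** 2).sum() / (p.sum() ** 2 + 1e-30)

        def best_chirp_rate(sig, fs, rates):
            """Pick the demodulation rate maximizing concentration."""
            t = np.arange(len(sig)) / fs
            scores = [concentration(sig * np.exp(-1j * np.pi * c * t ** 2), fs)
                      for c in rates]
            return rates[int(np.argmax(scores))]

    Iterating such an estimate-demodulate-filter-recover cycle, component by component, mirrors the multi-component IF extraction procedure described above.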

  9. Estimating age at a specified length from the von Bertalanffy growth function

    USGS Publications Warehouse

    Ogle, Derek H.; Isermann, Daniel A.

    2017-01-01

    Estimating the time required (i.e., age) for fish in a population to reach a specific length (e.g., legal harvest length) is useful for understanding population dynamics and simulating the potential effects of length-based harvest regulations. The age at which a population reaches a specific mean length is typically estimated by fitting a von Bertalanffy growth function to length-at-age data and then rearranging the best-fit equation to solve for age at the specified length. This process precludes the use of standard frequentist methods to compute confidence intervals and compare estimates of age at the specified length among populations. We provide a parameterization of the von Bertalanffy growth function that has age at a specified length as a parameter. With this parameterization, age at a specified length is directly estimated, and standard methods can be used to construct confidence intervals and make among-group comparisons for this parameter. We demonstrate use of the new parameterization with two data sets.
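
    The idea is easy to see in the standard form L(t) = Linf * (1 - exp(-K * (t - t0))): substituting t0 = tL + log(1 - L*/Linf)/K makes tL, the age at the specified length L*, a fitted parameter. A minimal Python sketch (function and variable names are ours, and the starting values are illustrative):

        import numpy as np
        from scipy.optimize import curve_fit

        L_SPEC = 250.0  # specified length (e.g., legal harvest length)

        def vbgf_tL(age, Linf, K, tL):
            """Von Bertalanffy growth function reparameterized so that tL,
            the age at which mean length equals L_SPEC, is a parameter."""
            return Linf * (1.0 - (1.0 - L_SPEC / Linf) * np.exp(-K * (age - tL)))

        # popt, pcov = curve_fit(vbgf_tL, ages, lengths, p0=[500.0, 0.3, 3.0])
        # np.sqrt(pcov[2, 2]) is then the standard error of age-at-L_SPEC,
        # so confidence intervals and among-group comparisons are direct.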

  10. The response of the SSM/I to the marine environment. Part 2: A parameterization of the effect of the sea surface slope distribution on emission and reflection

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.; Katsaros, Kristina B.

    1994-01-01

    Based on a geometric optics model and the assumption of an isotropic Gaussian surface slope distribution, the component of ocean surface microwave emissivity variation due to large-scale surface roughness is parameterized for the frequencies and approximate viewing angle of the Special Sensor Microwave/Imager. Independent geophysical variables in the parameterization are the effective (microwave frequency dependent) slope variance and the sea surface temperature. Using the same physical model, the change in the effective zenith angle of reflected sky radiation arising from large-scale roughness is also parameterized. Independent geophysical variables in this parameterization are the effective slope variance and the atmospheric optical depth at the frequency in question. Both of the above model-based parameterizations are intended for use in conjunction with empirical parameterizations relating effective slope variance and foam coverage to near-surface wind speed. These empirical parameterizations are the subject of a separate paper.

  11. A parameterization method and application in breast tomosynthesis dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xinhua; Zhang, Da; Liu, Bob

    2013-09-15

    Purpose: To present a parameterization method based on singular value decomposition (SVD), and to provide analytical parameterization of the mean glandular dose (MGD) conversion factors from eight references for evaluating breast tomosynthesis dose in the Mammography Quality Standards Act (MQSA) protocol and in the UK, European, and IAEA dosimetry protocols. Methods: The MGD conversion factor is usually listed in lookup tables for factors such as beam quality, breast thickness, breast glandularity, and projection angle. The authors analyzed multiple sets of MGD conversion factors from the Hologic Selenia Dimensions quality control manual and seven previous papers. Each data set was parameterized using a one- to three-dimensional polynomial function of 2–16 terms. Variable substitution was used to improve accuracy. A least-squares fit was conducted using the SVD. Results: The differences between the originally tabulated MGD conversion factors and the results computed using the parameterization algorithms were (a) 0.08%–0.18% on average and 1.31% maximum for the Selenia Dimensions quality control manual, (b) 0.09%–0.66% on average and 2.97% maximum for the published data by Dance et al. [Phys. Med. Biol. 35, 1211–1219 (1990); ibid. 45, 3225–3240 (2000); ibid. 54, 4361–4372 (2009); ibid. 56, 453–471 (2011)], (c) 0.74%–0.99% on average and 3.94% maximum for the published data by Sechopoulos et al. [Med. Phys. 34, 221–232 (2007); J. Appl. Clin. Med. Phys. 9, 161–171 (2008)], and (d) 0.66%–1.33% on average and 2.72% maximum for the published data by Feng and Sechopoulos [Radiology 263, 35–42 (2012)], excluding one sample in (d) that does not follow the trends in the published data table. Conclusions: A flexible parameterization method is presented in this paper, and was applied to breast tomosynthesis dosimetry. The resultant data offer easy and accurate computations of MGD conversion factors for evaluating mean glandular breast dose in the MQSA protocol and in the UK, European, and IAEA dosimetry protocols. Microsoft Excel™ spreadsheets are provided for the convenience of readers.
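
    A generic Python sketch of the underlying fit (tabulated values approximated by a low-order polynomial whose coefficients come from an SVD-based least-squares solve); the paper's variable substitutions and exact term selection are not reproduced:

        import numpy as np

        def fit_2d_poly(x, y, f, deg=3):
            """Least-squares fit of tabulated f(x, y) by a 2-D polynomial,
            solved via the SVD pseudo-inverse (tiny singular values could
            be truncated for extra stability)."""
            terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
            A = np.column_stack([x ** i * y ** j for i, j in terms])
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            coef = Vt.T @ ((U.T @ f) / s)
            return terms, coef

        def eval_2d_poly(terms, coef, x, y):
            return sum(c * x ** i * y ** j for (i, j), c in zip(terms, coef))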

  12. Physically-based modeling of drag force caused by natural woody vegetation

    NASA Astrophysics Data System (ADS)

    Järvelä, J.; Aberle, J.

    2014-12-01

    Riparian areas and floodplains are characterized by woody vegetation, which is an essential feature to be accounted for in many hydro-environmental models. For applications including flood protection, river restoration and modelling of sediment processes, there is a need to improve the reliability of flow resistance estimates. Conventional methods such as the use of lumped resistance coefficients or simplistic cylinder-based drag force equations can result in significant errors, as these methods do not adequately address the effect of foliage and reconfiguration of flexible plant parts under flow action. To tackle the problem, physically based methods relying on objective and measurable vegetation properties are advantageous for describing complex vegetation. We have conducted flume and towing tank investigations with living and artificial plants, both in arrays and with isolated plants, providing new insight into advanced parameterization of natural vegetation. The stem, leaf and total areas of the trees were confirmed to be suitable characteristic dimensions for estimating flow resistance. Consequently, we propose the use of the leaf area index and leaf-to-stem-area ratio to achieve better drag force estimates. Novel remote sensing techniques including laser scanning have become available for effective collection of the required data. The benefits of the proposed parameterization have been clearly demonstrated in our newest experimental studies, but it remains to be investigated to what extent the parameter values are species-specific and how they depend on local habitat conditions. The purpose of this contribution is to summarize developments in the estimation of vegetative drag force based on physically based approaches, as the latest research results are somewhat dispersed. In particular, concerning woody vegetation we seek to discuss three issues: 1) parameterization of reconfiguration with the Vogel exponent; 2) the advantage of parameterizing plants with the leaf area index and leaf-to-stem-area ratio; and 3) the effect of plant scale (size from twigs to mature trees). To analyze these issues we use experimental data from the authors' research teams as well as from other researchers. The results are expected to be useful for the design of future experimental campaigns and developing drag force models.

  13. Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology

    NASA Astrophysics Data System (ADS)

    Jin, Z.; Azzari, G.; Lobell, D. B.

    2016-12-01

    Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach to use dense time series of Landsat 7 and 8 acquisitions to calibrate various parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM while significantly reducing its uncertainty.

  14. Triple collocation based merging of satellite soil moisture retrievals

    USDA-ARS?s Scientific Manuscript database

    We propose a method for merging soil moisture retrievals from space borne active and passive microwave instruments based on weighted averaging taking into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...
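
    A minimal Python sketch of the merging scheme described above: triple-collocation error variances from three collocated, commonly scaled soil moisture series, then inverse-error-variance weights for the merge. It assumes zero-mean, mutually uncorrelated errors; with short records the variance estimates can go negative and need guarding:

        import numpy as np

        def tc_merge(x, y, z):
            """Merge three collocated series by weighted averaging with
            triple-collocation error variances."""
            xa, ya, za = x - x.mean(), y - y.mean(), z - z.mean()
            ex2 = np.mean((xa - ya) * (xa - za))  # error variance of x
            ey2 = np.mean((ya - xa) * (ya - za))  # error variance of y
            ez2 = np.mean((za - xa) * (za - ya))  # error variance of z
            w = 1.0 / np.array([ex2, ey2, ez2])
            w /= w.sum()
            return w[0] * x + w[1] * y + w[2] * z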

  15. Building integral projection models: a user's guide

    PubMed Central

    Rees, Mark; Childs, Dylan Z; Ellner, Stephen P; Coulson, Tim

    2014-01-01

    In order to understand how changes in individual performance (growth, survival or reproduction) influence population dynamics and evolution, ecologists are increasingly using parameterized mathematical models. For continuously structured populations, where some continuous measure of individual state influences growth, survival or reproduction, integral projection models (IPMs) are commonly used. We provide a detailed description of the steps involved in constructing an IPM, explaining how to: (i) translate your study system into an IPM; (ii) implement your IPM; and (iii) diagnose potential problems with your IPM. We emphasize how the study organism's life cycle, and the timing of censuses, together determine the structure of the IPM kernel and important aspects of the statistical analysis used to parameterize an IPM using data on marked individuals. An IPM based on population studies of Soay sheep is used to illustrate the complete process of constructing, implementing and evaluating an IPM fitted to sample data. We then look at very general approaches to parameterizing an IPM, using a wide range of statistical techniques (e.g. maximum likelihood methods, generalized additive models, nonparametric kernel density estimators). Methods for selecting models for parameterizing IPMs are briefly discussed. We conclude with key recommendations and a brief overview of applications that extend the basic model. The online Supporting Information provides commented R code for all our analyses. PMID:24219157
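
    The authors' Supporting Information provides commented R code; purely for orientation, a midpoint-rule discretization of a basic IPM kernel K(z', z) = s(z) G(z'|z) + F(z', z) looks like this in Python (the vital-rate functions are user-supplied, and mass lost outside the size grid, "eviction", is ignored here):

        import numpy as np
        from scipy.stats import norm

        def ipm_kernel(z, surv, g_mu, g_sd, fec, off_mu, off_sd):
            """Midpoint-rule IPM kernel on a size grid z (n,); columns are
            current size, rows are size at the next census."""
            h = z[1] - z[0]
            G = norm.pdf(z[:, None], loc=g_mu(z)[None, :], scale=g_sd) * h
            P = G * surv(z)[None, :]                      # survival-growth part
            F = norm.pdf(z, loc=off_mu, scale=off_sd)[:, None] \
                * fec(z)[None, :] * h                     # reproduction part
            return P + F

        # The population vector iterates as n_next = K @ n, and the dominant
        # eigenvalue of K approximates the long-run growth rate lambda.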

  16. A Physical Parameterization of Snow Albedo for Use in Climate Models.

    NASA Astrophysics Data System (ADS)

    Marshall, Susan Elaine

    The albedo of a natural snowcover is highly variable, ranging from 90 percent for clean, new snow to 30 percent for old, dirty snow. This range in albedo represents a difference in surface energy absorption of 10 to 70 percent of incident solar radiation. Most general circulation models (GCMs) fail to calculate the surface snow albedo accurately, yet the results of these models are sensitive to the assumed value of the snow albedo. This study replaces the current simple empirical parameterizations of snow albedo with a physically-based parameterization which is accurate (within ±3% of theoretical estimates) yet efficient to compute. The parameterization is designed as a FORTRAN subroutine (called SNOALB) which can be easily implemented into model code. The subroutine requires less than 0.02 seconds of computer time (CRAY X-MP) per call and adds only one new parameter to the model calculations, the snow grain size. The snow grain size can be calculated according to one of the two methods offered in this thesis. All other input variables to the subroutine are available from a climate model. The subroutine calculates a visible, near-infrared and solar (0.2-5 μm) snow albedo and offers a choice of two wavelengths (0.7 and 0.9 μm) at which the solar spectrum is separated into the visible and near-infrared components. The parameterization is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, version 1 (CCM1), and the results of a five-year, seasonal cycle, fixed hydrology experiment are compared to the current model snow albedo parameterization. The results show the SNOALB albedos to be comparable to the old CCM1 snow albedos for current climate conditions, with generally higher visible and lower near-infrared snow albedos using the new subroutine. However, this parameterization offers greater predictability for climate change experiments outside the range of current snow conditions because it is physically based and not tuned to current empirical results.

  17. Automation of a Linear Accelerator Dosimetric Quality Assurance Program

    NASA Astrophysics Data System (ADS)

    Lebron Gonzalez, Sharon H.

    According to the American Society for Radiation Oncology, two-thirds of all cancer patients will receive radiation therapy during their illness, with the majority of treatments delivered by a linear accelerator (linac). Therefore, quality assurance (QA) procedures must be enforced in order to deliver treatments with a machine in proper working condition. The overall goal of this project is to automate the linac's dosimetric QA procedures by analyzing and accomplishing various tasks. First, the photon beam dosimetry (i.e., total scatter correction factor, infinite percentage depth dose (PDD), and profiles) was parameterized. Parameterization consists of defining the parameters necessary for the specification of a dosimetric quantity model, creating a data set that is portable and easy to implement for different applications, including: beam modeling data input into a treatment planning system (TPS), comparison of measured and TPS-modelled data, QA of a linac's beam characteristics, the establishment of a standard data set for comparison with other data, etc. Second, this parameterization model was used to develop a universal method to determine the radiation field size of flattened (FF), flattening-filter-free (FFF), and wedged beams, which we termed the parameterized gradient method (PGM). Third, the parameterized model was also used to develop a profile-based method for assessing the beam quality of FF and FFF photon beams using an ionization chamber array. The PDD and PDD change were also predicted from the measured profile. Lastly, methods were created to automate the multileaf collimator (MLC) calibration and QA procedures, as well as the acquisition of the parameters included in monthly and annual photon dosimetric QA. A two-field technique was used for the calculation of the MLC leaf relative offsets using an electronic portal imaging device (EPID). A step-and-shoot technique was used to accurately acquire the radiation field size, flatness, symmetry, output, and beam quality specifiers in a single delivery to an ionization chamber array for FF and FFF beams.

  18. Thermodynamic properties for applications in chemical industry via classical force fields.

    PubMed

    Guevara-Carrion, Gabriela; Hasse, Hans; Vrabec, Jadran

    2012-01-01

    Thermodynamic properties of fluids are of key importance for the chemical industry. Presently, the fluid property models used in process design and optimization are mostly equations of state or G^E models, which are parameterized using experimental data. Molecular modeling and simulation based on classical force fields is a promising alternative route, which in many cases reasonably complements the well established methods. This chapter gives an introduction to the state of the art in this field regarding molecular models, simulation methods, and tools. Attention is given to the way modeling and simulation on the scale of molecular force fields interact with other scales, which is mainly by parameter inheritance. Parameters for molecular force fields are determined both bottom-up from quantum chemistry and top-down from experimental data. Commonly used functional forms for describing the intra- and intermolecular interactions are presented. Several approaches to force field parameterization, ranging from ab initio to empirical, are discussed. Some transferable force field families, which are frequently used in chemical engineering applications, are described. Furthermore, some examples of force fields that were parameterized for specific molecules are given. Molecular dynamics and Monte Carlo methods for the calculation of transport properties and vapor-liquid equilibria are introduced. Two case studies are presented. First, using liquid ammonia as an example, the capabilities of semi-empirical force fields, parameterized on the basis of quantum chemical information and experimental data, are discussed with respect to thermodynamic properties that are relevant for the chemical industry. Second, the ability of molecular simulation methods to accurately describe vapor-liquid equilibrium properties of binary mixtures containing CO2 is shown.

  19. Electron Impact Ionization: A New Parameterization for 100 eV to 1 MeV Electrons

    NASA Technical Reports Server (NTRS)

    Fang, Xiaohua; Randall, Cora E.; Lummerzheim, Dirk; Solomon, Stanley C.; Mills, Michael J.; Marsh, Daniel; Jackman, Charles H.; Wang, Wenbin; Lu, Gang

    2008-01-01

    Low, medium and high energy electrons can penetrate to the thermosphere (90-400 km; 55-240 miles) and mesosphere (50-90 km; 30-55 miles). These precipitating electrons ionize that region of the atmosphere, creating positively charged atoms and molecules and knocking off other negatively charged electrons. The precipitating electrons also create nitrogen-containing compounds along with other constituents. Since the electron precipitation amounts change within minutes, it is necessary to have a rapid method of computing the ionization and production of nitrogen-containing compounds for inclusion in computationally-demanding global models. A new methodology has been developed, which has parameterized a more detailed model computation of the ionizing impact of precipitating electrons over the very large range of 100 eV up to 1,000,000 eV. This new parameterization method is more accurate than a previous parameterization scheme, when compared with the more detailed model computation. Global models at the National Center for Atmospheric Research will use this new parameterization method in the near future.

  20. Building a Relationship between Elements of Product Form Features and Vocabulary Assessment Models

    ERIC Educational Resources Information Center

    Lo, Chi-Hung

    2016-01-01

    Based on the characteristic feature parameterization and the superiority evaluation method (SEM) in extension engineering, a product-shape design method was proposed in this study. The first step of this method is to decompose the basic feature components of a product. After that, the morphological chart method is used to segregate the ideas so as…

  1. Parameterization of Cloud Optical Properties for a Mixture of Ice Particles for use in Atmospheric Models

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Based on the single-scattering optical properties that are pre-computed using an improved geometric optics method, the bulk mass absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as functions of the mean effective particle size of a mixture of ice habits. The parameterization has been applied to compute fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. Compared to the parameterization for a single habit of hexagonal columns, the solar heating of clouds computed with the parameterization for a mixture of habits is smaller due to a smaller co-single-scattering albedo, whereas the net downward fluxes at the TOA and surface are larger due to a larger asymmetry factor. The maximum difference in the cloud heating rate is approximately 0.2°C per day, which occurs in clouds with an optical thickness greater than 3 and solar zenith angles less than 45 degrees. The flux difference is less than 10 W per square meter for optical thicknesses ranging from 0.6 to 10 and the entire range of solar zenith angles. The maximum flux difference is approximately 3%, which occurs around an optical thickness of 1 and at high solar zenith angles.

  2. Non-perturbational surface-wave inversion: A Dix-type relation for surface waves

    USGS Publications Warehouse

    Haney, Matt; Tsai, Victor C.

    2015-01-01

    We extend the approach underlying the well-known Dix equation in reflection seismology to surface waves. Within the context of surface wave inversion, the Dix-type relation we derive for surface waves allows accurate depth profiles of shear-wave velocity to be constructed directly from phase velocity data, in contrast to perturbational methods. The depth profiles can subsequently be used as an initial model for nonlinear inversion. We provide examples of the Dix-type relation for under-parameterized and over-parameterized cases. In the under-parameterized case, we use the theory to estimate crustal thickness, crustal shear-wave velocity, and mantle shear-wave velocity across the Western U.S. from phase velocity maps measured at 8-, 20-, and 40-s periods. By adopting a thin-layer formalism and an over-parameterized model, we show how a regularized inversion based on the Dix-type relation yields smooth depth profiles of shear-wave velocity. In the process, we quantitatively demonstrate the depth sensitivity of surface-wave phase velocity as a function of frequency and the accuracy of the Dix-type relation. We apply the over-parameterized approach to a near-surface data set within the frequency band from 5 to 40 Hz and find overall agreement between the inverted model and the result of full nonlinear inversion.

  3. Building integral projection models: a user's guide.

    PubMed

    Rees, Mark; Childs, Dylan Z; Ellner, Stephen P

    2014-05-01

    In order to understand how changes in individual performance (growth, survival or reproduction) influence population dynamics and evolution, ecologists are increasingly using parameterized mathematical models. For continuously structured populations, where some continuous measure of individual state influences growth, survival or reproduction, integral projection models (IPMs) are commonly used. We provide a detailed description of the steps involved in constructing an IPM, explaining how to: (i) translate your study system into an IPM; (ii) implement your IPM; and (iii) diagnose potential problems with your IPM. We emphasize how the study organism's life cycle, and the timing of censuses, together determine the structure of the IPM kernel and important aspects of the statistical analysis used to parameterize an IPM using data on marked individuals. An IPM based on population studies of Soay sheep is used to illustrate the complete process of constructing, implementing and evaluating an IPM fitted to sample data. We then look at very general approaches to parameterizing an IPM, using a wide range of statistical techniques (e.g. maximum likelihood methods, generalized additive models, nonparametric kernel density estimators). Methods for selecting models for parameterizing IPMs are briefly discussed. We conclude with key recommendations and a brief overview of applications that extend the basic model. The online Supporting Information provides commented R code for all our analyses. © 2014 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.

  4. A method to analyze molecular tagging velocimetry data using the Hough transform.

    PubMed

    Sanchez-Gonzalez, R; McManamen, B; Bowersox, R D W; North, S W

    2015-10-01

    The development of a method to analyze molecular tagging velocimetry data based on the Hough transform is presented. This method, based on line fitting, parameterizes the grid lines "written" into a flowfield. Initial proof-of-principle illustration of this method was performed to obtain two-component velocity measurements in the wake of a cylinder in a Mach 4.6 flow, using a data set derived from computational fluid dynamics simulations. The Hough transform is attractive for molecular tagging velocimetry applications since it is capable of discriminating spurious features that can have a biasing effect in the fitting process. Assessment of the precision and accuracy of the method was also performed to show the dependence on analysis window size and signal-to-noise levels. The accuracy of this Hough transform-based method to quantify intersection displacements was determined to be comparable to cross-correlation methods. The employed line parameterization avoids the assumption of linearity in the vicinity of each intersection, which is important in the limit of drastic grid deformations resulting from large velocity gradients common in high-speed flow applications. This Hough transform method has the potential to enable the direct and spatially accurate measurement of local vorticity, which is important in applications involving turbulent flowfields. Finally, two-component velocity determinations using the Hough transform from experimentally obtained images are presented, demonstrating the feasibility of the proposed analysis method.
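
    A generic Python sketch of the voting step at the heart of any Hough line detector (peaks of the accumulator give the line parameters rho = x cos(theta) + y sin(theta) that the method fits to the tagged grid lines; the threshold and bin counts are illustrative):

        import numpy as np

        def hough_lines(img, thresh, n_theta=180, n_rho=400):
            """Accumulate votes from bright pixels over (rho, theta) bins."""
            ys, xs = np.nonzero(img > thresh)
            thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
            diag = np.hypot(*img.shape)                  # max possible |rho|
            rho = np.outer(xs, np.cos(thetas)) + np.outer(ys, np.sin(thetas))
            idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
            acc = np.zeros((n_rho, n_theta), dtype=int)
            for j in range(n_theta):
                np.add.at(acc[:, j], idx[:, j], 1)       # vote per angle bin
            return acc, thetas, np.linspace(-diag, diag, n_rho)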

  5. Methods of testing parameterizations: Vertical ocean mixing

    NASA Technical Reports Server (NTRS)

    Tziperman, Eli

    1992-01-01

    The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for the vertical mixing in the ocean is of scales a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.

  6. Classification of mathematics deficiency using shape and scale analysis of 3D brain structures

    NASA Astrophysics Data System (ADS)

    Kurtek, Sebastian; Klassen, Eric; Gore, John C.; Ding, Zhaohua; Srivastava, Anuj

    2011-03-01

    We investigate the use of a recent technique for shape analysis of brain substructures in identifying learning disabilities in third-grade children. This Riemannian technique provides a quantification of differences in shapes of parameterized surfaces, using a distance that is invariant to rigid motions and re-parameterizations. Additionally, it provides an optimal registration across surfaces for improved matching and comparisons. We utilize an efficient gradient based method to obtain the optimal re-parameterizations of surfaces. In this study we consider 20 different substructures in the human brain and correlate the differences in their shapes with abnormalities manifested in deficiency of mathematical skills in 106 subjects. The selection of these structures is motivated in part by the past links between their shapes and cognitive skills, albeit in broader contexts. We have studied the use of both individual substructures and multiple structures jointly for disease classification. Using a leave-one-out nearest neighbor classifier, we obtained a 62.3% classification rate based on the shape of the left hippocampus. The use of multiple structures resulted in an improved classification rate of 71.4%.

  7. Comparative study of transient hydraulic tomography with varying parameterizations and zonations: Laboratory sandbox investigation

    NASA Astrophysics Data System (ADS)

    Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.

    2017-11-01

    Transient hydraulic tomography (THT) is a robust method of aquifer characterization to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly-parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when the head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information are calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.

  8. Evaluating the importance of characterizing soil structure and horizons in parameterizing a hydrologic process model

    USGS Publications Warehouse

    Mirus, Benjamin B.

    2015-01-01

    Incorporating the influence of soil structure and horizons into parameterizations of distributed surface water/groundwater models remains a challenge. Often, only a single soil unit is employed, and soil-hydraulic properties are assigned based on textural classification, without evaluating the potential impact of these simplifications. This study uses a distributed physics-based model to assess the influence of soil horizons and structure on effective parameterization. This paper tests the viability of two established and widely used hydrogeologic methods for simulating runoff and variably saturated flow through layered soils: (1) accounting for vertical heterogeneity by combining hydrostratigraphic units with contrasting hydraulic properties into homogeneous, anisotropic units and (2) use of established pedotransfer functions based on soil texture alone to estimate water retention and conductivity, without accounting for the influence of pedon structures and hysteresis. The viability of this latter method for capturing the seasonal transition from runoff-dominated to evapotranspiration-dominated regimes is also tested here. For cases tested here, event-based simulations using simplified vertical heterogeneity did not capture the state-dependent anisotropy and complex combinations of runoff generation mechanisms resulting from permeability contrasts in layered hillslopes with complex topography. Continuous simulations using pedotransfer functions that do not account for the influence of soil structure and hysteresis generally over-predicted runoff, leading to propagation of substantial water balance errors. Analysis suggests that identifying a dominant hydropedological unit provides the most acceptable simplification of subsurface layering and that modified pedotransfer functions with steeper soil-water retention curves might adequately capture the influence of soil structure and hysteresis on hydrologic response in headwater catchments.

  9. Reduction by invariants and projection of linear representations of Lie algebras applied to the construction of nonlinear realizations

    NASA Astrophysics Data System (ADS)

    Campoamor-Stursberg, R.

    2018-03-01

    A procedure for the construction of nonlinear realizations of Lie algebras in the context of Vessiot-Guldberg-Lie algebras of first-order systems of ordinary differential equations (ODEs) is proposed. The method is based on the reduction of invariants and projection of lowest-dimensional (irreducible) representations of Lie algebras. Applications to the description of parameterized first-order systems of ODEs related by contraction of Lie algebras are given. In particular, the kinematical Lie algebras in (2 + 1)- and (3 + 1)-dimensions are realized simultaneously as Vessiot-Guldberg-Lie algebras of parameterized nonlinear systems in R3 and R4, respectively.

  10. A Universal Ts-VI Triangle Method for the Continuous Retrieval of Evaporative Fraction From MODIS Products

    NASA Astrophysics Data System (ADS)

    Zhu, Wenbin; Jia, Shaofeng; Lv, Aifeng

    2017-10-01

    The triangle method based on the spatial relationship between remotely sensed land surface temperature (Ts) and vegetation index (VI) has been widely used for estimates of the evaporative fraction (EF). In the present study, a universal triangle method is proposed by transforming the Ts-VI feature space from the regional scale to the pixel scale. The retrieval of EF is only related to the boundary conditions at the pixel scale, regardless of the Ts-VI configuration over the spatial domain. The boundary conditions of each pixel are composed of the theoretical dry edge, determined by the surface energy balance principle, and the wet edge, determined by the average air temperature of open water. The universal triangle method was validated using the EF observations collected by the Energy Balance Bowen Ratio systems in the Southern Great Plains of the United States of America (USA). Two parameterization schemes of EF were used to demonstrate their applicability with Terra Moderate Resolution Imaging Spectroradiometer (MODIS) products over the whole year 2004. The results of this study show that the accuracy produced by both of these two parameterization schemes is comparable to that produced by the traditional triangle method, although the universal triangle method seems specifically suited to the parameterization scheme proposed in our previous research. The independence of the universal triangle method from the Ts-VI feature space makes it possible to conduct continuous monitoring of evapotranspiration and soil moisture, a capability that the traditional triangle method does not possess.
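
    A minimal Python sketch of the pixel-scale interpolation implied above: EF is scaled between a theoretical dry edge (surface energy balance with zero evaporation) and a wet edge (open-water air temperature). A simple linear scheme is assumed here; the paper's two parameterization schemes are more elaborate:

        import numpy as np

        def evaporative_fraction(ts, t_dry, t_wet, ef_max=1.0):
            """EF = ef_max at the wet edge, 0 at the dry edge, clamped to
            the physical range in between."""
            ef = ef_max * (t_dry - ts) / (t_dry - t_wet)
            return np.clip(ef, 0.0, ef_max)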

  11. Parameterization of clear-sky surface irradiance and its implications for estimation of aerosol direct radiative effect and aerosol optical depth

    PubMed Central

    Xia, Xiangao

    2015-01-01

    Aerosols impact clear-sky surface irradiance through the effects of scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and clear-sky irradiance have been established to describe the aerosol direct radiative effect on irradiance (ADRE). However, considerable uncertainties remain associated with ADRE due to incorrect estimation of the irradiance in the absence of aerosols (τa = 0). Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w) and the cosine of the solar zenith angle (μ) on clear-sky irradiance are thoroughly considered, leading to an effective parameterization of clear-sky irradiance as a nonlinear function of these three quantities. The parameterization is proven able to estimate clear-sky irradiance with a mean bias error of 0.32 W m−2, which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from irradiance, or vice versa, show that the root-mean-square errors were 0.08 and 10.0 W m−2, respectively. Therefore, this study establishes a straightforward method to derive clear-sky irradiance from τa, or to estimate τa from irradiance measurements if water vapor measurements are available. PMID:26395310

  12. The multifacet graphically contracted function method. II. A general procedure for the parameterization of orthogonal matrices and its application to arc factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, Ron; Brozell, Scott R.; Gidofalvi, Gergely

    2014-08-14

    Practical algorithms are presented for the parameterization of orthogonal matrices Q ∈ R^(m×n) in terms of the minimal number of essential parameters (φ). Both square (n = m) and rectangular (n < m) situations are examined. Two separate kinds of parameterizations are considered: one in which the individual columns of Q are distinct, and the other in which only Span(Q) is significant. The latter is relevant to chemical applications such as the representation of the arc factors in the multifacet graphically contracted function method and the representation of orbital coefficients in SCF and DFT methods. The parameterizations are represented formally using products of elementary Householder reflector matrices. Standard mathematical libraries, such as LAPACK, may be used to perform the basic low-level factorization, reduction, and other algebraic operations. Some care must be taken with the choice of phase factors in order to ensure stability and continuity. The transformation of gradient arrays between the Q and φ parameterizations is also considered. Operation counts for all factorizations and transformations are determined. Numerical results are presented which demonstrate the robustness, stability, and accuracy of these algorithms.
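
    A compact numpy sketch of the underlying idea (not the authors' algorithm): building a Q with orthonormal columns from exactly mn − n(n+1)/2 free parameters, via a product of Householder reflectors with a fixed unit-pivot convention:

      import numpy as np

      def q_from_parameters(phi, m, n):
          # Reflector k is fixed to have v[k] = 1 and zeros above, so its
          # free entries are the m-k-1 components below the pivot; the
          # total is m*n - n*(n+1)/2 essential parameters.
          phi = np.asarray(phi, dtype=float)
          assert phi.size == m * n - n * (n + 1) // 2
          Q = np.eye(m)[:, :n]
          pos = 0
          for k in reversed(range(n)):
              v = np.zeros(m)
              v[k] = 1.0
              v[k + 1:] = phi[pos:pos + m - k - 1]
              pos += m - k - 1
              Q -= 2.0 * np.outer(v, v @ Q) / (v @ v)  # apply H_k = I - 2vv^T/v^Tv
          return Q

      Q = q_from_parameters(np.random.default_rng(1).normal(size=5), m=4, n=2)
      print(np.allclose(Q.T @ Q, np.eye(2)))  # True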

  13. Automated finite element modeling of the lumbar spine: Using a statistical shape model to generate a virtual population of models.

    PubMed

    Campbell, J Q; Petrella, A J

    2016-09-06

    Population-based modeling of the lumbar spine has the potential to be a powerful clinical tool. However, developing a fully parameterized model of the lumbar spine with accurate geometry has remained a challenge. The current study used automated methods for landmark identification to create a statistical shape model of the lumbar spine. The shape model was evaluated using compactness, generalization ability, and specificity. The primary shape modes were analyzed visually, quantitatively, and biomechanically. The biomechanical analysis was performed by using the statistical shape model with an automated method for finite element model generation to create a fully parameterized finite element model of the lumbar spine. Functional finite element models of the mean shape and the extreme shapes (±3 standard deviations) of all 17 shape modes were created demonstrating the robust nature of the methods. This study represents an advancement in finite element modeling of the lumbar spine and will allow population-based modeling in the future. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Principal axes estimation using the vibration modes of physics-based deformable models.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2008-06-01

    This paper addresses the issue of accurate, computationally efficient, and fully automated estimation of 2-D object orientation and scaling factor. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features, which are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both the orientation and scaling estimations.

  15. Evaluation of different parameterizations of the spatial heterogeneity of subsurface storage capacity for hourly runoff simulation in boreal mountainous watershed

    NASA Astrophysics Data System (ADS)

    Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur

    2015-03-01

    Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, studies aimed specifically at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity, for a semi-distributed (subcatchments, hereafter called elements) and a distributed (1 × 1 km2 grid) setup. We evaluated representations of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods of up to 0.84 and 0.86 respectively, and similarly up to 0.85 and 0.90 for the log-transformed streamflow. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or subgrid heterogeneity provided simulation performance equivalent to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from denser precipitation stations than are needed for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in identifying parameterizations based only on calibration to catchment-integrated streamflow observations, and (iii) there is a potential preference for the simple and parsimonious parameterizations in operational forecasting, contingent on their equivalent simulation performance for the available input data. In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.

  16. The parameterization of microchannel-plate-based detection systems

    NASA Astrophysics Data System (ADS)

    Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Barrie, Alexander C.; Chornay, Dennis J.; MacDonald, Elizabeth A.; Holland, Matthew P.; Giles, Barbara L.; Pollock, Craig J.

    2016-10-01

    The most common instrument for low-energy plasmas consists of a top-hat electrostatic analyzer (ESA) geometry coupled with a microchannel-plate-based (MCP-based) detection system. While the electrostatic optics for such sensors are readily simulated and parameterized during the laboratory calibration process, the detection system is often less well characterized. Here we develop a comprehensive mathematical description of particle detection systems. As a function of instrument azimuthal angle, we parameterize (1) particle scattering within the ESA and at the surface of the MCP, (2) the probability distribution of MCP gain for an incident particle, (3) electron charge cloud spreading between the MCP and anode board, and (4) capacitive coupling between adjacent discrete anodes. Using the Dual Electron Spectrometers on the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission as an example, we demonstrate a method for extracting these fundamental detection system parameters from laboratory calibration. We further show that parameters that will evolve in flight, namely, MCP gain, can be determined through application of this model to specifically tailored in-flight calibration activities. This methodology provides a robust characterization of sensor suite performance throughout mission lifetime. The model developed in this work is not only applicable to existing sensors but also can be used as an analytical design tool for future particle instrumentation.
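
    As an illustration of item (2), MCP pulse-height distributions are often modeled empirically with a gamma (Polya) distribution; the sketch below (hypothetical parameter values, not the MMS calibration model) shows how a gain model translates into a counting efficiency versus electronics threshold:

      import numpy as np

      def counted_fraction(mean_gain, rel_width, threshold, n=200_000, seed=0):
          # Draw per-event MCP gains from a gamma (Polya) distribution with
          # the given mean and relative width, and return the fraction of
          # events whose charge pulse clears the electronics threshold.
          rng = np.random.default_rng(seed)
          k = 1.0 / rel_width**2             # gamma shape parameter
          gains = rng.gamma(k, mean_gain / k, size=n)
          return float(np.mean(gains > threshold))

      # As the gain degrades in flight, the counted fraction drops, which
      # is what an in-flight threshold scan is designed to detect:
      for mean_gain in (2e6, 1e6, 5e5):
          print(mean_gain, counted_fraction(mean_gain, rel_width=0.7, threshold=2e5))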

  17. Flow Charts: Visualization of Vector Fields on Arbitrary Surfaces

    PubMed Central

    Li, Guo-Shi; Tricoche, Xavier; Weiskopf, Daniel; Hansen, Charles

    2009-01-01

    We introduce a novel flow visualization method called Flow Charts, which uses a texture atlas approach for the visualization of flows defined over curved surfaces. In this scheme, the surface and its associated flow are segmented into overlapping patches, which are then parameterized and packed in the texture domain. This scheme allows accurate particle advection across multiple charts in the texture domain, providing a flexible framework that supports various flow visualization techniques. The use of surface parameterization enables flow visualization techniques requiring the global view of the surface over long time spans, such as Unsteady Flow LIC (UFLIC), particle-based Unsteady Flow Advection Convolution (UFAC), or dye advection. It also prevents visual artifacts normally associated with view-dependent methods. Represented as textures, Flow Charts can be naturally integrated into hardware accelerated flow visualization techniques for interactive performance. PMID:18599918

  18. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2006-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
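
    The cost argument can be made concrete with a toy linear example (illustrative only; the paper treats nonlinear mesh movement within a discrete adjoint framework). With mesh motion governed by K x = B d and an objective L = gᵀx, one adjoint solve replaces a linearization per design variable:

      import numpy as np

      rng = np.random.default_rng(2)
      n, n_design = 50, 8
      K = np.eye(n) + 0.01 * rng.normal(size=(n, n))  # mesh-movement operator
      B = rng.normal(size=(n, n_design))              # surface motion per design variable
      g = rng.normal(size=n)                          # dL/dx, objective gradient

      # Forward route: one linear solve per design variable.
      dx_dd = np.linalg.solve(K, B)                   # n_design solves
      grad_forward = dx_dd.T @ g

      # Adjoint route: a single solve with K^T, then a matrix-vector
      # product scaling with the number of design variables.
      lam = np.linalg.solve(K.T, g)
      grad_adjoint = B.T @ lam

      print(np.allclose(grad_forward, grad_adjoint))  # True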

  19. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2005-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.

  20. Merged data models for multi-parameterized querying: Spectral data base meets GIS-based map archive

    NASA Astrophysics Data System (ADS)

    Naß, A.; D'Amore, M.; Helbert, J.

    2017-09-01

    Current and upcoming planetary missions deliver a huge amount of heterogeneous data (remote sensing data, in-situ data, and derived products). In this contribution we present how such different data bases can be managed and merged to enable multi-parameterized querying within a constant spatial context.

  1. Improved Overpressure Recording and Modeling for Near-Surface Explosion Forensics

    NASA Astrophysics Data System (ADS)

    Kim, K.; Schnurr, J.; Garces, M. A.; Rodgers, A. J.

    2017-12-01

    The accurate recording and analysis of air-blast acoustic waveforms is a key component of the forensic analysis of explosive events. Smartphone apps can enhance traditional technologies by providing scalable, cost-effective ubiquitous sensor solutions for monitoring blasts, undeclared activities, and inaccessible facilities. During a series of near-surface chemical high explosive tests, iPhone 6's running the RedVox infrasound recorder app were co-located with high-fidelity Hyperion overpressure sensors, allowing for direct comparison of the resolution and frequency content of the devices. Data from the traditional sensors is used to characterize blast signatures and to determine relative iPhone microphone amplitude and phase responses. A Wiener filter based source deconvolution method is applied, using a parameterized source function estimated from traditional overpressure sensor data, to estimate system responses. In addition, progress on a new parameterized air-blast model is presented. The model is based on the analysis of a large set of overpressure waveforms from several surface explosion test series. An appropriate functional form with parameters determined empirically from modern air-blast and acoustic data will allow for better parameterization of signals and the improved characterization of explosive sources.
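
    A minimal numpy sketch of Wiener-filter deconvolution of the kind described (in the study, the source function and sensor responses are estimated from calibration data; everything below is synthetic and the names are illustrative):

      import numpy as np

      def wiener_deconvolve(recorded, impulse_response, noise_power=1e-3):
          # Estimate the source waveform from a recording and the system's
          # impulse response via frequency-domain Wiener filtering.
          n = len(recorded)
          H = np.fft.rfft(impulse_response, n)
          Y = np.fft.rfft(recorded, n)
          W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
          return np.fft.irfft(W * Y, n)

      # Recover a parameterized blast pulse blurred by a sensor response.
      t = np.linspace(0, 1, 1024)
      source = np.exp(-80 * t) * (1 - 160 * t)          # Friedlander-like pulse
      h = np.exp(-40 * t) * np.sin(2 * np.pi * 25 * t)  # hypothetical response
      recorded = np.convolve(source, h)[:1024]
      estimate = wiener_deconvolve(recorded, h, noise_power=1e-6)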

  2. Resolution-dependent behavior of subgrid-scale vertical transport in the Zhang-McFarlane convection parameterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Heng; Gustafson, Jr., William I.; Hagos, Samson M.

    2015-04-18

    With this study, to better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, we use a diagnostic framework to examine the resolution dependence of subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km2.
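
    The coarsening step can be pictured with a simple block average (a sketch, not the study's diagnostic code; it assumes 1 km CRM grid spacing, so a block of 8 cells is an 8 × 8 km2 subdomain):

      import numpy as np

      def coarsen(field, block):
          # Block-average a 2-D CRM field onto subdomains of block x block
          # grid cells, mimicking the grid-scale input supplied to a
          # convection scheme.
          ny, nx = field.shape
          assert ny % block == 0 and nx % block == 0
          return field.reshape(ny // block, block,
                               nx // block, block).mean(axis=(1, 3))

      mse = np.random.default_rng(3).normal(340e3, 2e3, size=(256, 256))  # J/kg
      for block in (8, 32, 128):   # e.g. 8, 32, 128 km subdomains
          print(block, coarsen(mse, block).shape)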

  3. Evaluation of surface layer flux parameterizations using in-situ observations

    NASA Astrophysics Data System (ADS)

    Katz, Jeremy; Zhu, Ping

    2017-09-01

    Appropriate calculation of surface turbulent fluxes between the atmosphere and the underlying ocean/land surface is one of the major challenges in the geosciences. In practice, the surface turbulent fluxes are estimated from the mean surface meteorological variables based on the bulk transfer model combined with Monin-Obukhov Similarity (MOS) theory. Few studies have examined the extent to which such a flux parameterization can be applied to different weather and surface conditions. A novel validation method is developed in this study to evaluate the surface flux parameterization using in-situ observations collected at a coastal station in the Gulf of Mexico. The main findings are: (a) the theoretical predictions using MOS theory do not match well with fluxes computed directly from the observations; (b) the largest spread in exchange coefficients occurs in strongly stable conditions with calm winds; and (c) large turbulent eddies, which depend strongly on the mean flow pattern and surface conditions, tend to break the constant-flux assumption in the surface layer.
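
    For concreteness, the bulk transfer estimates being evaluated have this basic shape (a sketch with constant exchange coefficients; in an MOS-based scheme ch and ce would instead vary with stability):

      import numpy as np

      RHO, CP, LV = 1.2, 1004.0, 2.5e6  # air density, heat capacity, latent heat

      def bulk_fluxes(u, t_sfc, t_air, q_sfc, q_air, ch=1.1e-3, ce=1.2e-3):
          # Bulk estimates of sensible (H) and latent (LE) heat flux from
          # mean wind speed and surface-air temperature/humidity contrasts.
          h = RHO * CP * ch * u * (t_sfc - t_air)    # W m-2
          le = RHO * LV * ce * u * (q_sfc - q_air)   # W m-2
          return h, le

      print(bulk_fluxes(u=8.0, t_sfc=301.0, t_air=299.5,
                        q_sfc=0.022, q_air=0.018))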

  4. Anatomical parameterization for volumetric meshing of the liver

    NASA Astrophysics Data System (ADS)

    Vera, Sergio; González Ballester, Miguel A.; Gil, Debora

    2014-03-01

    A coordinate system describing the interior of organs is a powerful tool for the systematic localization of injured tissue. If the same coordinate values are assigned to specific anatomical landmarks, the coordinate system allows integration of data across different medical image modalities. Harmonic mappings have been used to produce parametric coordinate systems over the surface of anatomical shapes, given their flexibility to set values at specific locations through boundary conditions. However, most existing implementations in medical imaging are restricted to anatomical surfaces, or provide a depth coordinate whose boundary conditions are given at sites of limited geometric diversity. In this paper we present a method for anatomical volumetric parameterization that extends current harmonic parameterizations to the interior anatomy using information provided by the volume's medial surface. We have applied the methodology to define a common reference system for the liver's shape and functional anatomy. This reference system sets a solid base for creating anatomical models of the patient's liver, and allows comparing livers from several patients in a common frame of reference.
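
    A toy 2-D version of the idea (a sketch under simplifying assumptions, not the paper's method): solving Laplace's equation with the organ surface and the medial surface as Dirichlet boundaries yields a smooth interior depth coordinate:

      import numpy as np

      def harmonic_coordinate(mask, boundary, n_iter=5000):
          # Jacobi iteration for Laplace's equation on a 2-D grid.
          # mask     : True where the coordinate is unknown (interior)
          # boundary : fixed values elsewhere (0 on the surface,
          #            1 on the medial surface)
          u = boundary.copy()
          for _ in range(n_iter):
              avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                            + np.roll(u, 1, 1) + np.roll(u, -1, 1))
              u[mask] = avg[mask]
          return u

      # Toy domain: a band between an outer "surface" (0) and an inner
      # "medial axis" (1); interior pixels receive a harmonic depth value.
      y, x = np.mgrid[-32:32, -32:32]
      r = np.hypot(x, y)
      boundary = np.where(r <= 8, 1.0, 0.0)
      mask = (r > 8) & (r < 28)
      depth = harmonic_coordinate(mask, boundary)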

  5. Uncertainties for two-dimensional models of solar rotation from helioseismic eigenfrequency splitting

    NASA Technical Reports Server (NTRS)

    Genovese, Christopher R.; Stark, Philip B.; Thompson, Michael J.

    1995-01-01

    Observed solar p-mode frequency splittings can be used to estimate angular velocity as a function of position in the solar interior. Formal uncertainties of such estimates depend on the method of estimation (e.g., least-squares), the distribution of errors in the observations, and the parameterization imposed on the angular velocity. We obtain lower bounds on the uncertainties that do not depend on the method of estimation; the bounds depend on an assumed parameterization, but the fact that they are lower bounds for the 'true' uncertainty does not. Ninety-five percent confidence intervals for estimates of the angular velocity from 1986 Big Bear Solar Observatory (BBSO) data, based on a 3659 element tensor-product cubic-spline parameterization, are everywhere wider than 120 nHz, and exceed 60,000 nHz near the core. When compared with estimates of the solar rotation, these bounds reveal that useful inferences based on pointwise estimates of the angular velocity using 1986 BBSO splitting data are not feasible over most of the Sun's volume. The discouraging size of the uncertainties is due principally to the fact that helioseismic measurements are insensitive to changes in the angular velocity at individual points, so estimates of point values based on splittings are extremely uncertain. Functionals that measure distributed 'smooth' properties are, in general, better constrained than estimates of the rotation at a point. For example, the uncertainties in estimated differences of average rotation between adjacent blocks of about 0.001 solar volumes across the base of the convective zone are much smaller, and one of several estimated differences we compute appears significant at the 95% level.

  6. Spatio-temporal Eigenvector Filtering: Application on Bioenergy Crop Impacts

    NASA Astrophysics Data System (ADS)

    Wang, M.; Kamarianakis, Y.; Georgescu, M.

    2017-12-01

    A suite of 10-year ensemble-based simulations was conducted to investigate the hydroclimatic impacts of large-scale deployment of perennial bioenergy crops across the continental United States. Given the large size of the simulated dataset (about 60 TB), traditional hierarchical spatio-temporal statistical modelling cannot be used to evaluate physics parameterizations and biofuel impacts. In this work, we propose a filtering algorithm that accounts for the spatio-temporal autocorrelation structure of the data while avoiding spatial confounding. This method is used to quantify the robustness of simulated hydroclimatic impacts associated with bioenergy crops to alternative physics parameterizations and observational datasets. Results are evaluated against those obtained from three alternative Bayesian spatio-temporal specifications.

  7. A method for coupling a parameterization of the planetary boundary layer with a hydrologic model

    NASA Technical Reports Server (NTRS)

    Lin, J. D.; Sun, Shu Fen

    1986-01-01

    Deardorff's parameterization of the planetary boundary layer is adapted to drive a hydrologic model. The method converts the atmospheric conditions measured at the anemometer height at one site to the mean values in the planetary boundary layer; it then uses the planetary boundary layer parameterization and the hydrologic variables to calculate the fluxes of momentum, heat and moisture at the atmosphere-land interface for a different site. A simplified hydrologic model is used for a simulation study of soil moisture and ground temperature on three different land surface covers. The results indicate that this method can be used to drive a spatially distributed hydrologic model by using observed data available at a meteorological station located on or nearby the site.

  8. Soil erosion model predictions using parent material/soil texture-based parameters compared to using site-specific parameters

    Treesearch

    R. B. Foltz; W. J. Elliot; N. S. Wagenbrenner

    2011-01-01

    Forested areas disturbed by access roads produce large amounts of sediment. One method to predict erosion and, hence, manage forest roads is the use of physically based soil erosion models. A perceived advantage of a physically based model is that it can be parameterized at one location and applied at another location with similar soil texture or geological parent...

  9. The application of depletion curves for parameterization of subgrid variability of snow

    Treesearch

    C. H. Luce; D. G. Tarboton

    2004-01-01

    Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snowcovered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...

  10. Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somerville, R.C.J.; Iacobellis, S.F.

    2005-03-18

    Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.

  11. Highly parameterized model calibration with cloud computing: an example of regional flow model calibration in northeast Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.

    2014-05-01

    Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in the calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865-km2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized with spatially variable hydraulic conductivity fields, as was the areal recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.

  12. Historical and projected carbon balance of mature black spruce ecosystems across north america: The role of carbon-nitrogen interactions

    USGS Publications Warehouse

    Clein, Joy S.; McGuire, A.D.; Zhang, X.; Kicklighter, D.W.; Melillo, J.M.; Wofsy, S.C.; Jarvis, P.G.; Massheder, J.M.

    2002-01-01

    The role of carbon (C) and nitrogen (N) interactions on sequestration of atmospheric CO2 in black spruce ecosystems across North America was evaluated with the Terrestrial Ecosystem Model (TEM) by applying parameterizations of the model in which C-N dynamics were either coupled or uncoupled. First, the performance of the parameterizations, which were developed for the dynamics of black spruce ecosystems at the Bonanza Creek Long-Term Ecological Research site in Alaska, was evaluated by simulating C dynamics at eddy correlation tower sites in the Boreal Ecosystem Atmosphere Study (BOREAS) for black spruce ecosystems in the northern study area (northern site) and the southern study area (southern site) with local climate data. We compared simulated monthly growing season (May to September) estimates of gross primary production (GPP), total ecosystem respiration (RESP), and net ecosystem production (NEP) from 1994 to 1997 to available field-based estimates at both sites. At the northern site, monthly growing season estimates of GPP and RESP for the coupled and uncoupled simulations were highly correlated with the field-based estimates (coupled: R2 = 0.77, 0.88 for GPP and RESP; uncoupled: R2 = 0.67, 0.92 for GPP and RESP). Although the simulated seasonal pattern of NEP generally matched the field-based data, the correlations between field-based and simulated monthly growing season NEP were lower (R2 = 0.40, 0.00 for coupled and uncoupled simulations, respectively) in comparison to the correlations between field-based and simulated GPP and RESP. The annual NEP simulated by the coupled parameterization fell within the uncertainty of field-based estimates in two of three years. On the other hand, annual NEP simulated by the uncoupled parameterization only fell within the field-based uncertainty in one of three years. At the southern site, simulated NEP generally matched field-based NEP estimates, and the correlation between monthly growing season field-based and simulated NEP (R2 = 0.36, 0.20 for coupled and uncoupled simulations, respectively) was similar to the correlations at the northern site. To evaluate the role of N dynamics in the C balance of black spruce ecosystems across North America, we simulated historical and projected C dynamics from 1900 to 2100 with a global-based climatology at 0.5° resolution (latitude × longitude) with both the coupled and uncoupled parameterizations of TEM. From analyses at the northern site, several consistent patterns emerge. There was greater inter-annual variability in net primary production (NPP) simulated by the uncoupled parameterization as compared to the coupled parameterization, which led to substantial differences in inter-annual variability in NEP between the parameterizations. The divergence between NPP and heterotrophic respiration was greater in the uncoupled simulation, resulting in more C sequestration during the projected period. These responses were the result of fundamentally different responses of the coupled and uncoupled parameterizations to changes in CO2 and climate. Across North American black spruce ecosystems, the range of simulated decadal changes in C storage was substantially greater for the uncoupled parameterization than for the coupled parameterization.
Analysis of the spatial variability in decadal responses of C dynamics revealed that C fluxes simulated by the coupled and uncoupled parameterizations have different sensitivities to climate and that the climate sensitivities of the fluxes change over the temporal scope of the simulations. The results of this study suggest that uncertainties can be reduced through (1) factorial studies focused on elucidating the role of C and N interactions in the response of mature black spruce ecosystems to manipulations of atmospheric CO2 and climate, (2) establishment of a network of continuous, long-term measurements of C dynamics across the range of mature black spruce ecosystems in North America, and (3) ancillary measureme

  13. Impacts of Light Use Efficiency and fPAR Parameterization on Gross Primary Production Modeling

    NASA Technical Reports Server (NTRS)

    Cheng, Yen-Ben; Zhang, Qingyuan; Lyapustin, Alexei I.; Wang, Yujie; Middleton, Elizabeth M.

    2014-01-01

    This study examines the impact of the parameterization of two variables, light use efficiency (LUE) and the fraction of absorbed photosynthetically active radiation (fPAR or fAPAR), on gross primary production (GPP) modeling. Carbon sequestration by terrestrial plants is a key factor in a comprehensive understanding of the carbon budget at global scale. In this context, accurate measurements and estimates of GPP will allow us to achieve improved carbon monitoring and to quantitatively assess impacts from climate change and human activities. Spaceborne remote sensing observations can provide a variety of land surface parameterizations for modeling photosynthetic activities at various spatial and temporal scales. This study utilizes a simple GPP model based on the LUE concept and different land surface parameterizations to evaluate the model and monitor GPP. Two maize-soybean rotation fields in Nebraska, USA and the Bartlett Experimental Forest in New Hampshire, USA were selected for study. Tower-based eddy-covariance carbon exchange and PAR measurements were collected from the FLUXNET Synthesis Dataset. For the model parameterization, we utilized different values of LUE and fPAR derived from various algorithms. We adapted the approach and parameters from the MODIS MOD17 Biome Properties Look-Up Table (BPLUT) to derive LUE. We also used a site-specific analytic approach with tower-based Net Ecosystem Exchange (NEE) and PAR to estimate the maximum potential LUE (LUEmax) from which to derive LUE. For the fPAR parameter, the MODIS MOD15A2 fPAR product was used. We also utilized fAPARchl, a parameter accounting for the fAPAR linked to the chlorophyll-containing canopy fraction. fAPARchl was obtained by inversion of a radiative transfer model, which used the MODIS-based reflectances in bands 1-7 produced by the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm. fAPARchl exhibited seasonal dynamics more similar to the flux-tower-based GPP than MOD15A2 fPAR, especially in the spring and fall at the agricultural sites. When using the MODIS MOD17-based parameters to estimate LUE, fAPARchl produced better agreement with GPP (r2 = 0.79-0.91) than MOD15A2 fPAR (r2 = 0.57-0.84). However, underestimations of GPP were also observed, especially for the crop fields. When applying the site-specific LUEmax value to estimate in situ LUE, the magnitude of estimated GPP was closer to in situ GPP; this method produced a slight overestimation with the MOD15A2 fPAR at the Bartlett forest. This study highlights the importance of accurate land surface parameterizations for achieving reliable carbon monitoring capabilities from remote sensing information.
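
    The basic LUE model structure being parameterized can be sketched as follows (in the spirit of MOD17; the numbers and scalar names are illustrative, not values from the study):

      import numpy as np

      def gpp_lue(par, fpar, lue_max, tmin_scalar, vpd_scalar):
          # Light-use-efficiency GPP model: GPP = LUE * fPAR * PAR, with
          # LUE = LUEmax attenuated by temperature and VPD scalars (0-1)
          # as in a biome look-up-table approach.
          # par  : incident photosynthetically active radiation (MJ m-2 d-1)
          # fpar : fraction of PAR absorbed (e.g. MOD15A2 fPAR or fAPARchl)
          lue = lue_max * tmin_scalar * vpd_scalar   # g C per MJ APAR
          return lue * fpar * par                    # g C m-2 d-1

      # Midsummer crop pixel, hypothetical numbers:
      print(gpp_lue(par=11.0, fpar=0.8, lue_max=1.0,
                    tmin_scalar=1.0, vpd_scalar=0.85))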

  14. Evaluation of Planetary Boundary Layer Scheme Sensitivities for the Purpose of Parameter Estimation

    EPA Science Inventory

    Meteorological model errors caused by imperfect parameterizations generally cannot be overcome simply by optimizing initial and boundary conditions. However, advanced data assimilation methods are capable of extracting significant information about parameterization behavior from ...

  15. Assessing the Resolution Adaptability of the Zhang-McFarlane Cumulus Parameterization With Spatial and Temporal Averaging: RESOLUTION ADAPTABILITY OF ZM SCHEME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, Yuxing; Fan, Jiwen; Xiao, Heng

    Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic, since it requires the cumulus scheme to adapt to a higher resolution than it was originally designed for (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolutions higher than 8 km, which is consistent with the results from the CRM simulation. Both the spatial distribution and the time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.
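
    The temporal-averaging idea can be pictured as a trailing running mean of the large-scale CAPE tendency (illustrative only; the window length and implementation details here are assumptions, not the scheme as implemented in WRF):

      import numpy as np

      def running_mean_tendency(cape_series, dt_minutes, window_minutes=60):
          # Trailing running mean of the CAPE tendency damps the grid-scale
          # noise that grows as the model resolution increases.
          tend = np.gradient(cape_series, dt_minutes * 60.0)  # J kg-1 s-1
          w = max(1, int(window_minutes // dt_minutes))
          kernel = np.ones(w) / w
          padded = np.concatenate([np.full(w - 1, tend[0]), tend])
          return np.convolve(padded, kernel, mode="valid")

      cape = (800 + 150 * np.sin(np.linspace(0, 6, 240))
                  + 40 * np.random.default_rng(4).normal(size=240))
      smoothed = running_mean_tendency(cape, dt_minutes=5)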

  16. Cross section parameterizations for cosmic ray nuclei. 1: Single nucleon removal

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Townsend, Lawrence W.

    1992-01-01

    Parameterizations of single nucleon removal from electromagnetic and strong interactions of cosmic rays with nuclei are presented. These parameterizations are based upon the most accurate theoretical calculations available to date. They should be very suitable for use in cosmic ray propagation through interstellar space, the Earth's atmosphere, lunar samples, meteorites, spacecraft walls and lunar and martian habitats.

  17. Comparison of Gravity Wave Temperature Variances from Ray-Based Spectral Parameterization of Convective Gravity Wave Drag with AIRS Observations

    NASA Technical Reports Server (NTRS)

    Choi, Hyun-Joo; Chun, Hye-Yeong; Gong, Jie; Wu, Dong L.

    2012-01-01

    The realism of ray-based spectral parameterization of convective gravity wave drag, which considers the updated moving speed of the convective source and multiple wave propagation directions, is tested against the Atmospheric Infrared Sounder (AIRS) onboard the Aqua satellite. Offline parameterization calculations are performed using the global reanalysis data for January and July 2005, and gravity wave temperature variances (GWTVs) are calculated at z = 2.5 hPa (unfiltered GWTV). AIRS-filtered GWTV, which is directly compared with AIRS, is calculated by applying the AIRS visibility function to the unfiltered GWTV. A comparison between the parameterization calculations and AIRS observations shows that the spatial distribution of the AIRS-filtered GWTV agrees well with that of the AIRS GWTV. However, the magnitude of the AIRS-filtered GWTV is smaller than that of the AIRS GWTV. When an additional cloud top gravity wave momentum flux spectrum with longer horizontal wavelength components that were obtained from the mesoscale simulations is included in the parameterization, both the magnitude and spatial distribution of the AIRS-filtered GWTVs from the parameterization are in good agreement with those of the AIRS GWTVs. The AIRS GWTV can be reproduced reasonably well by the parameterization not only with multiple wave propagation directions but also with two wave propagation directions of 45 degrees (northeast-southwest) and 135 degrees (northwest-southeast), which are optimally chosen for computational efficiency.

  18. Intercomparison of methods of coupling between convection and large-scale circulation: 2. Comparison over nonuniform surface conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.

  19. FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard C. J. Somerville

    2009-02-27

    Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.

  20. Multiscale approach for the construction of equilibrated all-atom models of a poly(ethylene glycol)-based hydrogel

    PubMed Central

    Li, Xianfeng; Murthy, N. Sanjeeva; Becker, Matthew L.; Latour, Robert A.

    2016-01-01

    A multiscale modeling approach is presented for the efficient construction of an equilibrated all-atom model of a cross-linked poly(ethylene glycol) (PEG)-based hydrogel using the all-atom polymer consistent force field (PCFF). The final equilibrated all-atom model was built with a systematic simulation toolset consisting of three consecutive parts: (1) building a global cross-linked PEG-chain network at experimentally determined cross-link density using an on-lattice Monte Carlo method based on the bond fluctuation model, (2) recovering the local molecular structure of the network by transitioning from the lattice model to an off-lattice coarse-grained (CG) model parameterized from PCFF, followed by equilibration using high performance molecular dynamics methods, and (3) recovering the atomistic structure of the network by reverse mapping from the equilibrated CG structure, hydrating the structure with explicitly represented water, followed by final equilibration using PCFF parameterization. The developed three-stage modeling approach has application to a wide range of other complex macromolecular hydrogel systems, including the integration of peptide, protein, and/or drug molecules as side-chains within the hydrogel network for the incorporation of bioactivity for tissue engineering, regenerative medicine, and drug delivery applications. PMID:27013229

  1. Parameterizations for ensemble Kalman inversion

    NASA Astrophysics Data System (ADS)

    Chada, Neil K.; Iglesias, Marco A.; Roininen, Lassi; Stuart, Andrew M.

    2018-05-01

    The use of ensemble methods to solve inverse problems is attractive because it is a derivative-free methodology which is also well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. Choice of the parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric ideas and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
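
    In its basic iterative form the update uses only empirical covariances of the ensemble, which is what makes the method derivative-free; a minimal sketch on a toy linear problem (not the paper's implementation):

      import numpy as np

      def eki_update(ensemble, forward, y, gamma):
          # One ensemble Kalman inversion step: shift each member using
          # empirical covariances, staying in the span of the ensemble.
          G = np.array([forward(u) for u in ensemble])      # (J, n_obs)
          u_mean, g_mean = ensemble.mean(0), G.mean(0)
          du, dg = ensemble - u_mean, G - g_mean
          C_ug = du.T @ dg / len(ensemble)                  # cross-covariance
          C_gg = dg.T @ dg / len(ensemble)
          K = C_ug @ np.linalg.inv(C_gg + gamma)            # Kalman-type gain
          return ensemble + (y - G) @ K.T

      # Toy linear inverse problem y = A u + noise.
      rng = np.random.default_rng(5)
      A = rng.normal(size=(10, 4))
      u_true = rng.normal(size=4)
      y = A @ u_true + 0.01 * rng.normal(size=10)
      ensemble = rng.normal(size=(50, 4))
      for _ in range(20):
          ensemble = eki_update(ensemble, lambda u: A @ u, y, 1e-4 * np.eye(10))
      print(np.round(ensemble.mean(0) - u_true, 2))  # close to zero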

  2. A Bayesian Ensemble Approach for Epidemiological Projections

    PubMed Central

    Lindström, Tom; Tildesley, Michael; Webb, Colleen

    2015-01-01

    Mathematical models are powerful tools for epidemiology and can be used to compare control actions. However, different models and model parameterizations may provide different prediction of outcomes. In other fields of research, ensemble modeling has been used to combine multiple projections. We explore the possibility of applying such methods to epidemiology by adapting Bayesian techniques developed for climate forecasting. We exemplify the implementation with single model ensembles based on different parameterizations of the Warwick model run for the 2001 United Kingdom foot and mouth disease outbreak and compare the efficacy of different control actions. This allows us to investigate the effect that discrepancy among projections based on different modeling assumptions has on the ensemble prediction. A sensitivity analysis showed that the choice of prior can have a pronounced effect on the posterior estimates of quantities of interest, in particular for ensembles with large discrepancy among projections. However, by using a hierarchical extension of the method we show that prior sensitivity can be circumvented. We further extend the method to include a priori beliefs about different modeling assumptions and demonstrate that the effect of this can have different consequences depending on the discrepancy among projections. We propose that the method is a promising analytical tool for ensemble modeling of disease outbreaks. PMID:25927892

  3. Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.

    PubMed

    Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo

    2016-09-01

    In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed as impulses undergoing group velocity dispersion while propagating along a multipath neural connection. Mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs is proposed using chirp models. A Particle Swarm Optimization algorithm is used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. The method implementation in Matlab technical computing language is provided online.
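
    A rough sketch of a polynomial-phase chirp component of the kind described (the windowing choice and all parameter names here are illustrative assumptions; the paper's exact model form is not reproduced):

      import numpy as np

      def chirp_component(t, amplitude, sigma, t0, phase_coeffs):
          # One SEP component: a Gaussian-windowed cosine whose phase is a
          # polynomial in time, so instantaneous frequency varies with time
          # (the signature of group velocity dispersion).
          phase = np.polyval(phase_coeffs, t - t0)
          window = amplitude * np.exp(-0.5 * ((t - t0) / sigma) ** 2)
          return window * np.cos(2 * np.pi * phase)

      t = np.linspace(0, 0.05, 2000)                 # 50 ms epoch
      sep = chirp_component(t, amplitude=2.0, sigma=0.004, t0=0.015,
                            phase_coeffs=[4e5, 900, 0.0])  # frequency rises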

  4. Clustering Tree-structured Data on Manifold

    PubMed Central

    Lu, Na; Miao, Hongyu

    2016-01-01

    Tree-structured data usually contain both topological and geometrical information, and are necessarily considered on a manifold instead of in Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called the Topology-Attribute matrix (T-A matrix), so that the data clustering task can be conducted on a matrix manifold. We incorporate the structure constraints embedded in the data into the non-negative matrix factorization method to determine meta-trees from the T-A matrix, and the signature vector of each single tree can then be extracted by meta-tree decomposition. The meta-tree space turns out to be a cone space, in which we explore the distance metric and implement the clustering algorithm based on concepts like the Fréchet mean. Finally, the T-A matrix based clustering (TAMBAC) framework is evaluated and compared using both simulated data and real retinal images to illustrate its efficiency and accuracy. PMID:26660696

  5. Parameterized Facial Expression Synthesis Based on MPEG-4

    NASA Astrophysics Data System (ADS)

    Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos

    2002-12-01

    In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become attuned to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human-computer interaction, focusing on the analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions compatible with the MPEG-4 standard.

  6. Node-Splitting Generalized Linear Mixed Models for Evaluation of Inconsistency in Network Meta-Analysis.

    PubMed

    Yu-Kang, Tu

    2016-12-01

    Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the effects difference between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  7. Physics-based distributed snow models in the operational arena: Current and future challenges

    NASA Astrophysics Data System (ADS)

    Winstral, A. H.; Jonas, T.; Schirmer, M.; Helbig, N.

    2017-12-01

    The demand for modeling tools robust to climate change and weather extremes, along with coincident increases in computational capabilities, has led to an increase in the use of physics-based snow models in operational applications. Current operational applications include the WSL-SLF's across Switzerland, ASO's in California, and USDA-ARS's in Idaho. While physics-based approaches offer many advantages, there remain limitations and modeling challenges. The most evident limitation remains computation times, which often limit forecasters to a single, deterministic model run. Other limitations are less conspicuous amidst the assumption that these models require little to no calibration because they are founded on physical principles. Yet all energy balance snow models contain parameterizations or simplifications of processes where validation data are scarce or present understanding is limited. At the research-basin scale where many of these models were developed, these modeling elements may prove adequate. However, when applied over large areas, spatially invariant parameterizations of snow albedo, roughness lengths, and atmospheric exchange coefficients, all vital to determining the snowcover energy balance, become problematic. Moreover, as we apply models over larger grid cells, the representation of sub-grid variability such as the snow-covered fraction adds to the challenges. Here, we will demonstrate some of the major sensitivities of distributed energy balance snow models to particular model constructs, show the need for advanced and spatially flexible methods and parameterizations, and prompt the community toward open dialogue and future collaborations to further modeling capabilities.

  8. Evaluation of Warm-Rain Microphysical Parameterizations in Cloudy Boundary Layer Transitions

    NASA Astrophysics Data System (ADS)

    Nelson, K.; Mechem, D. B.

    2014-12-01

    Common warm-rain microphysical parameterizations used for marine boundary layer (MBL) clouds are either tuned for specific cloud types (e.g., the Khairoutdinov and Kogan 2000 parameterization, "KK2000") or are altogether ill-posed (Kessler 1969). An ideal microphysical parameterization should be "unified" in the sense of being suitable across MBL cloud regimes that include stratocumulus, cumulus rising into stratocumulus, and shallow trade cumulus. The recent parameterization of Kogan (2013, "K2013") was formulated for shallow cumulus but has been shown in a large-eddy simulation environment to work quite well for stratocumulus as well. We report on our efforts to implement and test this parameterization into a regional forecast model (NRL COAMPS). Results from K2013 and KK2000 are compared with the operational Kessler parameterization for a 5-day period of the VOCALS-REx field campaign, which took place over the southeast Pacific. We focus on both the relative performance of the three parameterizations and also on how they compare to the VOCALS-REx observations from the NOAA R/V Ronald H. Brown, in particular estimates of boundary-layer depth, liquid water path (LWP), cloud base, and area-mean precipitation rate obtained from C-band radar.
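
    For reference, the KK2000 warm-rain process rates are power laws fitted to drop-spectrum simulations; a sketch of the commonly cited forms (coefficients as usually quoted for qc in kg/kg and Nc in cm⁻³; treat the units and constants as assumptions to verify against the original paper):

      def kk2000_autoconversion(qc, nc):
          # Cloud water to rain: 1350 * qc^2.47 * Nc^-1.79  (kg/kg/s)
          return 1350.0 * qc**2.47 * nc**-1.79

      def kk2000_accretion(qc, qr):
          # Collection of cloud water by rain: 67 * (qc*qr)^1.15  (kg/kg/s)
          return 67.0 * (qc * qr)**1.15

      # Drizzling stratocumulus-like values:
      print(kk2000_autoconversion(qc=5e-4, nc=75.0))
      print(kk2000_accretion(qc=5e-4, qr=1e-5))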

  9. Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.

    PubMed

    Zhang, Yanjun; Tao, Gang; Chen, Mou

    2016-09-01

    This paper presents a new study on the adaptive neural network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical form nonlinear systems whose neural network approximation system models have explicit relative degree structures, which can directly be used to derive parameterized controllers for adaptation, noncanonical form nonlinear systems usually do not have explicit relative degrees, and thus their approximation system models are also in noncanonical forms. It is well-known that the adaptive control of noncanonical form nonlinear systems involves the parameterization of system dynamics. As demonstrated in this paper, it is also the case for noncanonical neural network approximation system models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. This paper then derives the parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical form neural network system models. An illustrative example is presented with the simulation results to demonstrate the control design procedure, and to verify the effectiveness of such a new design method.

  10. Intercomparison of methods of coupling between convection and large-scale circulation: 2. Comparison over nonuniform surface conditions

    DOE PAGES

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...

    2016-03-18

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
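
    For reference, the core of the WTG method is a diagnostic large-scale vertical velocity that relaxes the simulated potential temperature profile toward the reference profile. A minimal sketch, assuming a fixed relaxation timescale tau (a free parameter of such implementations, set here to a hypothetical 3 h):

      import numpy as np

      def wtg_vertical_velocity(theta, theta_ref, z, tau=3.0 * 3600.0):
          # Weak temperature gradient balance: w * dtheta_ref/dz matches
          # (theta - theta_ref) / tau, solved pointwise for w.
          dthdz = np.gradient(theta_ref, z)
          dthdz = np.maximum(dthdz, 1e-4)  # guard against neutral layers
          return (theta - theta_ref) / (tau * dthdz)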

  11. Intercomparison of methods of coupling between convection and large‐scale circulation: 2. Comparison over nonuniform surface conditions

    PubMed Central

    Plant, R. S.; Woolnough, S. J.; Sessions, S.; Herman, M. J.; Sobel, A.; Wang, S.; Kim, D.; Cheng, A.; Bellon, G.; Peyrille, P.; Ferry, F.; Siebesma, P.; van Ulft, L.

    2016-01-01

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large‐scale dynamics in a set of cloud‐resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative‐convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large‐scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column‐relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large‐scale velocity profiles which are smoother and less top‐heavy compared to those produced by the WTG simulations. These large‐scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two‐way feedback between convection and the large‐scale circulation. PMID:27642501

  12. A Review of Element-Based Galerkin Methods for Numerical Weather Prediction

    DTIC Science & Technology

    2015-04-01

    with body forces to model the effects of gravity and the Earth's rotation (i.e., Coriolis force). Although the gravitational force varies with both... more phenomena (e.g., resolving non-hydrostatic effects, incorporating more complex moisture parameterizations), their appetite for High Performance... operation effectively). For instance, the ST-based model NOGAPS, used by the U.S. Navy, could not scale beyond 150 processes at typical resolutions [119

  13. The Grell-Freitas Convective Parameterization: Recent Developments and Applications Within the NASA GEOS Global Model

    NASA Astrophysics Data System (ADS)

    Freitas, S.; Grell, G. A.; Molod, A.

    2017-12-01

    We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization (Grell and Freitas, 2014) is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale- and aerosol-awareness functionalities. Scale dependence for deep convection is implemented either through the method described by Arakawa et al. (2011) or through lateral spreading of the subsidence terms. Aerosol effects are included through the dependence of autoconversion and evaporation on the CCN number concentration. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition among shallow, congestus, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in the model simulation of the diurnal cycle of convection over land. Also, a beta-PDF is now employed to represent the normalized mass flux profile. This opens an additional avenue for applying stochasticity in the scheme.
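
    The beta-PDF representation of the normalized mass flux profile mentioned above can be sketched as follows; the shape parameters here are hypothetical, chosen only to show the construction (varying or randomizing them is one way stochasticity can enter):

      import numpy as np
      from scipy.stats import beta

      def normalized_mass_flux(sigma, a=2.5, b=1.8):
          # sigma: nondimensional height in [0, 1]; profile scaled to peak at 1
          profile = beta.pdf(sigma, a, b)
          return profile / profile.max()

      sigma = np.linspace(0.0, 1.0, 41)
      mf = normalized_mass_flux(sigma)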

  14. Optimization and uncertainty assessment of strongly nonlinear groundwater models with high parameter dimensionality

    NASA Astrophysics Data System (ADS)

    Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun

    2010-10-01

    Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly for global methods, which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Lessons learned from the surrogate model analysis are then transferred to calibrate the full, highly parameterized, CPU-intensive groundwater model and to explore the uncertainty of its predictions. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
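
    A minimal sketch of the null-space Monte Carlo idea, assuming a Jacobian J of observations with respect to parameters evaluated at the calibrated parameter set: random perturbations are projected onto the numerical null space of J, so sampled parameter sets remain near-calibrated while exploring predictive uncertainty.

      import numpy as np

      def null_space_samples(J, p_cal, n_samples, cutoff=1e-6, scale=1.0):
          # J: (n_obs, n_par) Jacobian; p_cal: calibrated parameter vector
          U, s, Vt = np.linalg.svd(J, full_matrices=True)
          rank = int(np.sum(s > cutoff * s.max()))
          V_null = Vt[rank:].T                 # null-space basis, (n_par, k)
          draws = scale * np.random.randn(n_samples, V_null.shape[1])
          return p_cal + draws @ V_null.T      # (n_samples, n_par) samples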

  15. A subdivision-based parametric deformable model for surface extraction and statistical shape modeling of the knee cartilages

    NASA Astrophysics Data System (ADS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien

    2006-03-01

    Subdivision surfaces and parameterization are desirable for many algorithms that are commonly used in medical image analysis. However, extracting an accurate surface and parameterization can be difficult for many anatomical objects of interest, due to noisy segmentations and the inherent variability of the object. The thin cartilages of the knee are an example of this, especially after damage is incurred from injuries or conditions like osteoarthritis. As a result, the cartilages can have different topologies or exist in multiple pieces. In this paper we present a topology-preserving (genus 0) subdivision-based parametric deformable model that is used to extract the surfaces of the patella and tibial cartilages in the knee. These surfaces have minimal thickness in areas without cartilage. The algorithm inherently incorporates several desirable properties, including shape-based interpolation, subdivision remeshing, and parameterization. To illustrate the usefulness of this approach, the surfaces and parameterizations of the patella cartilage are used to generate a 3D statistical shape model.

  16. Modelling heterogeneous ice nucleation on mineral dust and soot with parameterizations based on laboratory experiments

    NASA Astrophysics Data System (ADS)

    Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.

    2016-12-01

    Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice nucleation ability of different aerosol species (e.g., desert dust, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of raindrops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.
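
    The ice nucleation active site (INAS) density approach referred to above predicts the number of ice crystals per particle from the aerosol surface area and a temperature-dependent site density. A minimal sketch with hypothetical exponential-fit constants (not the AIDA-derived values):

      import numpy as np

      def ice_crystals_per_particle(surface_area, T, a=-0.52, b=8.9):
          # surface_area in m^2, T in K; a and b are placeholder fit constants
          T_c = T - 273.15
          n_s = np.exp(a * T_c + b)   # active site density n_s(T) in m^-2
          return surface_area * n_s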

  17. Simulating imaging spectrometer data of a mixed old-growth forest: A parameterization of a 3D radiative transfer model based on airborne and terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Schneider, F. D.; Leiterer, R.; Morsdorf, F.; Gastellu-Etchegorry, J.; Lauret, N.; Pfeifer, N.; Schaepman, M. E.

    2013-12-01

    Remote sensing offers unique potential to study forest ecosystems by providing spatially and temporally distributed information that can be linked with key biophysical and biochemical variables. The estimation of biochemical constituents of leaves from remotely sensed data is of high interest, revealing insight on photosynthetic processes, plant health, plant functional types, and speciation. However, the scaling of observations at the canopy level to the leaf level or vice versa is not trivial due to the structural complexity of forests. Thus, a common solution for scaling spectral information is the use of physically based radiative transfer models. The discrete anisotropic radiative transfer model (DART), being one of the most complete coupled canopy-atmosphere 3D radiative transfer models, was parameterized based on airborne and in-situ measurements. At-sensor radiances were simulated and compared with measurements from an airborne imaging spectrometer. The study was performed on the Laegern site, a temperate mixed forest characterized by steep slopes, a heterogeneous spectral background, and deciduous and coniferous trees at different development stages (dominated by beech trees; 47°28'42.0" N, 8°21'51.8" E, 682 m asl, Switzerland). It is one of the few studies conducted on an old-growth forest. In particular, the 3D modeling of the complex canopy architecture is crucial to model the interaction of photons with the vegetation canopy and its background. Thus, we developed two forest reconstruction approaches: 1) based on a voxel grid, and 2) based on individual tree detection. Both methods are transferable to various forest ecosystems and applicable at scales between plot and landscape. Our results show that the newly developed voxel grid approach is preferable to a parameterization based on individual trees. In comparison to the actual imaging spectrometer data, the simulated images exhibit very similar spatial patterns, whereas absolute radiance values differ somewhat depending on wavelength. We conclude that our proposed method provides a representation of the 3D radiative regime within old-growth forests that is suitable for simulating most spectral and spatial features of imaging spectrometer data. It indicates the potential of simulating future Earth observation missions, such as ESA's Sentinel-2. However, the high spectral variability of leaf optical properties among species has to be addressed in future radiative transfer modeling. The results further reveal that research emphasis has to be put on the accurate parameterization of small-scale structures, such as the clumping of needles into shoots or the distribution of leaf angles.

  18. Influence of the Level Density Parametrization on the Effective GDR Width at High Spins

    NASA Astrophysics Data System (ADS)

    Mazurek, K.; Matejska, M.; Kmiecik, M.; Maj, A.; Dudek, J.

    Parameterizations of the nucleonic level densities are tested by computing the effective GDR strength-functions and GDR widths at high spins. Calculations are based on the thermal shape fluctuation method with the Lublin-Strasbourg Drop (LSD) model. Results for 106Sn, 147Eu, 176W, 194Hg are compared to the experimental data.

  19. Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations

    DOE PAGES

    Liu, Gang; Liu, Yangang; Endo, Satoshi

    2013-02-01

    Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and three U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.
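
    The schemes being evaluated share the standard bulk aerodynamic form; what they parameterize differently is the stability dependence of the exchange coefficients. A minimal sketch of the common core, with illustrative constants:

      RHO = 1.2      # air density (kg m^-3)
      CP = 1004.0    # specific heat of dry air (J kg^-1 K^-1)
      LV = 2.5e6     # latent heat of vaporization (J kg^-1)

      def bulk_fluxes(U, Ts, Ta, qs, qa, cd, ch, ce):
          # cd, ch, ce: stability-dependent exchange coefficients, which are
          # exactly what the evaluated schemes compute differently
          tau = RHO * cd * U**2                 # momentum flux (N m^-2)
          shf = RHO * CP * ch * U * (Ts - Ta)   # sensible heat flux (W m^-2)
          lhf = RHO * LV * ce * U * (qs - qa)   # latent heat flux (W m^-2)
          return tau, shf, lhf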

  20. Shape optimization using a NURBS-based interface-enriched generalized FEM

    DOE PAGES

    Najafi, Ahmad R.; Safdari, Masoud; Tortorelli, Daniel A.; ...

    2016-11-26

    This study presents a gradient-based shape optimization over a fixed mesh using a non-uniform rational B-splines-based interface-enriched generalized finite element method, applicable to multi-material structures. In the proposed method, non-uniform rational B-splines are used to parameterize the design geometry precisely and compactly by a small number of design variables. An analytical shape sensitivity analysis is developed to compute derivatives of the objective and constraint functions with respect to the design variables. Subtle but important new terms involve the sensitivity of shape functions and their spatial derivatives. Finally, verification and illustrative problems are solved to demonstrate the precision and capability of the method.

  1. Stochastic Convection Parameterizations: The Eddy-Diffusivity/Mass-Flux (EDMF) Approach (Invited)

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2013-12-01

    In this presentation it is argued that moist convection parameterizations need to be stochastic in order to be realistic - even in deterministic atmospheric prediction systems. A new unified convection and boundary layer parameterization (EDMF) that optimally combines the Eddy-Diffusivity (ED) approach for smaller-scale boundary layer mixing with the Mass-Flux (MF) approach for larger-scale plumes is discussed. It is argued that for realistic simulations stochastic methods have to be employed in this new unified EDMF. Positive results from the implementation of the EDMF approach in atmospheric models are presented.
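
    The ED and MF contributions combine additively in the turbulent flux, which is the essence of the unified scheme; a one-line sketch of the standard decomposition:

      import numpy as np

      def edmf_flux(phi, phi_up, K, M, z):
          # w'phi' = -K dphi/dz (local eddy mixing) + M (phi_up - phi) (plumes)
          return -K * np.gradient(phi, z) + M * (phi_up - phi)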

  2. A rule based computer aided design system

    NASA Technical Reports Server (NTRS)

    Premack, T.

    1986-01-01

    A Computer Aided Design (CAD) system is presented which supports the iterative process of design, the dimensional continuity between mating parts, and the hierarchical structure of the parts in their assembled configuration. Prolog, an interactive logic programming language, is used to represent and interpret the database. The solid geometry representing the parts is defined in parameterized form using the swept volume method. The system is demonstrated with the design of a spring piston.

  3. Toward computational models of magma genesis and geochemical transport in subduction zones

    NASA Astrophysics Data System (ADS)

    Katz, R.; Spiegelman, M.

    2003-04-01

    The chemistry of material erupted from subduction-related volcanoes records important information about the processes that lead to its formation at depth in the Earth. Self-consistent numerical simulations provide a useful tool for interpreting these data as they can explore the non-linear feedbacks between processes that control the generation and transport of magma. A model capable of addressing such issues should include three critical components: (1) a variable-viscosity solid flow solver with smooth and accurate pressure and velocity fields, (2) a parameterization of mass transfer reactions between the solid and fluid phases, and (3) a consistent fluid flow and reactive transport code. We report on progress on each of these parts. To handle variable-viscosity solid flow in the mantle wedge, we are adapting a Patankar-based FAS multigrid scheme developed by Albers (2000, J. Comp. Phys.). The pressure field in this scheme is the solution to an elliptic equation on a staggered grid. Thus we expect computed pressure fields to have smooth gradient fields suitable for porous flow calculations, unlike those of commonly used penalty-method schemes. Use of a temperature- and strain-rate-dependent mantle rheology has been shown to have important consequences for the pattern of flow and the temperature structure in the wedge. For computing thermal structure we present a novel scheme that is a hybrid of Crank-Nicolson (CN) and Semi-Lagrangian (SL) methods. We have tested the SLCN scheme on advection across a broad range of Peclet numbers and show the results. This scheme is also useful for low-diffusivity chemical transport. We also describe our parameterization of hydrous mantle melting [Katz et al., G3, 2002, in review]. This parameterization is designed to capture the melting behavior of peridotite-water systems over parameter ranges relevant to subduction. The parameterization incorporates data and intuition gained from laboratory experiments and thermodynamic calculations, yet it remains flexible and computationally efficient. Given accurate solid-flow fields, a parameterization of hydrous melting and a method for calculating thermal structure (enforcing energy conservation), the final step is to integrate these components into a consistent framework for reactive flow and chemical transport in deformable porous media. We present preliminary results for reactive flow in 2-D static and upwelling columns and discuss possible mechanical and chemical consequences of open-system reactive melting with application to arcs.
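
    As an illustration of the semi-Lagrangian half of such a hybrid scheme, the sketch below advects a 1D periodic field by tracing characteristics back to departure points and interpolating there; linear interpolation and a constant velocity are used purely for brevity, and the Crank-Nicolson diffusion half would then be applied implicitly.

      import numpy as np

      def semi_lagrangian_step(phi, u, x, dt):
          # Trace each grid node back along the (constant) velocity u and
          # interpolate the field at the periodic departure points.
          dx = x[1] - x[0]
          L = x[-1] - x[0] + dx
          x_dep = (x - u * dt - x[0]) % L + x[0]
          return np.interp(x_dep, x, phi, period=L)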

  4. Entanglement and Berry Phase in a Parameterized Three-Qubit System

    NASA Astrophysics Data System (ADS)

    Shao, Wenyi; Du, Yangyang; Yang, Qi; Wang, Gangcheng; Sun, Chunfang; Xue, Kang

    2017-03-01

    In this paper, we construct a parameterized form of the unitary matrix R̆_123(θ_1, θ_2, φ) through the Yang-Baxterization method. Acting with this matrix on the three-qubit natural basis as a quantum gate, we obtain a set of entangled states whose entanglement value depends on the parameters θ_1 and θ_2. In particular, such entangled states produce a set of maximally entangled Greenberger-Horne-Zeilinger (GHZ) states for θ_1 = θ_2 = π/2. Choosing a suitable Hamiltonian, one can study the evolution of the eigenstates and investigate the resulting Berry phase. It is not difficult to find that the Berry phase for this new three-qubit system is consistent with the solid angle on the Bloch sphere.

  5. mrpy: Renormalized generalized gamma distribution for HMF and galaxy ensemble properties comparisons

    NASA Astrophysics Data System (ADS)

    Murray, Steven G.; Robotham, Aaron S. G.; Power, Chris

    2018-02-01

    mrpy calculates the MRP parameterization of the Halo Mass Function. It computes basic statistics of the truncated generalized gamma distribution (TGGD) with the TGGD class, including mean, mode, variance, skewness, pdf, and cdf. It generates MRP quantities with the MRP class, such as differential number counts and cumulative number counts, and offers various methods for generating normalizations. It can generate the MRP-based halo mass function as a function of physical parameters via the mrp_b13 function, and fit MRP parameters to data in the form of arbitrary curves and in the form of a sample of variates with the SimFit class. mrpy also calculates analytic Hessians and Jacobians at any point, and allows the user to switch between alternative parameterizations of the same form via the reparameterize module.
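
    The MRP form itself is a generalized gamma shape in mass; a minimal sketch of the differential mass function with hypothetical parameter values (not fits produced by mrpy):

      import numpy as np

      def mrp_dndm(m, hs=10**14.5, alpha=-1.9, beta=0.75, norm=1.0):
          # dn/dm proportional to (m/Hs)^alpha * exp(-(m/Hs)^beta)
          x = m / hs
          return norm * x**alpha * np.exp(-(x**beta))

      m = np.logspace(10, 16, 200)   # halo masses (Msun/h), illustrative
      dndm = mrp_dndm(m)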

  6. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    NASA Astrophysics Data System (ADS)

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-10-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.

  7. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: dispersion, induction, and basis set superposition error.

    PubMed

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T; Dannenberg, J J

    2012-10-07

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.

  8. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    PubMed Central

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-01-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states. PMID:23039587

  9. Efficient use of mobile devices for quantification of pressure injury images.

    PubMed

    Garcia-Zapirain, Begonya; Sierra-Sosa, Daniel; Ortiz, David; Isaza-Monsalve, Mariano; Elmaghraby, Adel

    2018-01-01

    Pressure Injuries are chronic wounds that form due to the constriction of soft tissues against bone prominences. To assess these injuries, medical personnel carry out evaluation and diagnosis using visual methods and manual measurements, which can be inaccurate and may cause discomfort to patients. By using segmentation techniques, Pressure Injuries can be extracted from an image and accurately parameterized, leading to a correct diagnosis. In general, these techniques are based on the solution of differential equations, and the numerical methods involved are demanding in terms of computational resources. In previous work, we proposed a technique developed using toroidal parametric equations for image decomposition and segmentation without solving differential equations. In this paper, we present the development of a mobile application useful for the non-contact assessment of Pressure Injuries based on the toroidal decomposition of images. This technique allows us to achieve an accurate segmentation almost 8 times faster than the Active Contours without Edges (ACWE) and Dynamic Contours methods. We describe the techniques and the implementation for Android devices using Python and Kivy. The application allows for the segmentation and parameterization of injuries, the extraction of information relevant to diagnosis, and the tracking of the evolution of patients' injuries.

  10. A volumetric conformal mapping approach for clustering white matter fibers in the brain

    PubMed Central

    Gupta, Vikash; Prasad, Gautam; Thompson, Paul

    2017-01-01

    The human brain may be considered a genus-0 shape, topologically equivalent to a sphere. Various methods have been used in the past to transform the brain surface to that of a sphere using the harmonic energy minimization methods employed for cortical surface matching. However, very few methods have studied volumetric parameterization of the brain using a spherical embedding. Volumetric parameterization is typically used for complicated geometric problems like shape matching, morphing, and isogeometric analysis. Using conformal mapping techniques, we can establish a bijective mapping between the brain and the topologically equivalent sphere. Our hypothesis is that shape analysis problems are simplified when the shape is defined in an intrinsic coordinate system. Our goal is to establish such a coordinate system for the brain. The efficacy of the method is demonstrated with a white matter clustering problem. Initial results show promise for future investigation of this parameterization technique and its application to other problems in computational anatomy, such as registration and segmentation. PMID:29177252

  11. The effect of different methods to compute N on estimates of mixing in stratified flows

    NASA Astrophysics Data System (ADS)

    Fringer, Oliver; Arthur, Robert; Venayagamoorthy, Subhas; Koseff, Jeffrey

    2017-11-01

    The background stratification is typically well defined in idealized numerical models of stratified flows, although it is more difficult to define in observations. This may have important ramifications for estimates of mixing which rely on knowledge of the background stratification against which turbulence must work to mix the density field. Using direct numerical simulation data of breaking internal waves on slopes, we demonstrate a discrepancy in ocean mixing estimates depending on the method in which the background stratification is computed. Two common methods are employed to calculate the buoyancy frequency N, namely a three-dimensionally resorted density field (often used in numerical models) and a locally-resorted vertical density profile (often used in the field). We show that how N is calculated has a significant effect on the flux Richardson number Rf, which is often used to parameterize turbulent mixing, and the turbulence activity number Gi, which leads to errors when estimating the mixing efficiency using Gi-based parameterizations. Supported by ONR Grant N00014-08-1-0904 and LLNL Contract DE-AC52-07NA27344.
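
    The two conventions for the background stratification contrasted above can be made concrete as follows, a sketch assuming uniform cell volumes and z increasing upward; in both cases N^2 = -(g/rho0) d(rho_sorted)/dz, the difference being what gets resorted.

      import numpy as np

      G, RHO0 = 9.81, 1000.0

      def n2_global_resort(rho, z):
          # 'Model' convention: adiabatically resort the full 3D density
          # field (shape (nz, ny, nx)) into one stable background profile.
          nz = z.size
          flat = np.sort(rho.ravel())[::-1]          # densest at the bottom
          rho_star = flat.reshape(nz, -1).mean(axis=1)
          return -(G / RHO0) * np.gradient(rho_star, z)

      def n2_local_resort(rho_column, z):
          # 'Field' convention: resort a single vertical profile (as from a cast).
          rho_star = np.sort(rho_column)[::-1]
          return -(G / RHO0) * np.gradient(rho_star, z)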

  12. Trans-dimensional and hierarchical Bayesian approaches toward rigorous estimation of seismic sources and structures in the Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean

    2016-04-01

    A framework is presented within which we provide rigorous estimations for seismic sources and structures in the Northeast Asia. We use Bayesian inversion methods, which enable statistical estimations of models and their uncertainties based on data information. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be inherently implemented in the Bayesian inversions. Hence reliable estimation of model parameters and their uncertainties is possible, thus avoiding arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data of the North Korean nuclear explosion tests. By the combination of new Bayesian techniques and the structural model, coupled with meaningful uncertainties related to each of the processes, more quantitative monitoring and discrimination of seismic events is possible.

  13. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite-fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem enables improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture the major features of large-earthquake rupture processes, and provide information for more detailed rupture history analysis.
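
    A 1D caricature of the MHS source description: the far-field source time function is a sum of boxcar moment-rate pulses, one per Haskell sub-event (location and directivity are omitted here, and the parameter values are invented for illustration).

      import numpy as np

      def mhs_moment_rate(t, subevents):
          # subevents: list of (onset_s, duration_s, moment_Nm) triples
          rate = np.zeros_like(t)
          for onset, duration, moment in subevents:
              box = (t >= onset) & (t < onset + duration)
              rate[box] += moment / duration
          return rate

      t = np.linspace(0.0, 60.0, 601)
      stf = mhs_moment_rate(t, [(0.0, 10.0, 1e20), (15.0, 20.0, 3e20)])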

  14. An Evaluation of Lightning Flash Rate Parameterizations Based on Observations of Colorado Storms during DC3

    NASA Astrophysics Data System (ADS)

    Basarab, B.; Fuchs, B.; Rutledge, S. A.

    2013-12-01

    Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare to observed flash rates. For the 6 June storm, a preliminary analysis of aircraft observations of storm inflow and outflow is presented in order to place flash rates (and other lightning statistics) in the context of storm chemistry. An approach to a possibly improved LNOx parameterization scheme using different lightning metrics such as flash area will be discussed.
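
    Parameterizations of the kind evaluated here are typically simple power laws in a single storm metric; a sketch of the vertical-velocity variant, with placeholder coefficients of the right general form rather than the values tested against the DC3 observations:

      def flash_rate_from_wmax(w_max, a=5e-6, b=4.5):
          # F = a * w_max**b: flash rate for maximum updraft speed w_max (m/s);
          # a and b here are hypothetical placeholders, not fitted values
          return a * w_max**b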

  15. Towards Linking 3D SAR and Lidar Models with a Spatially Explicit Individual Based Forest Model

    NASA Astrophysics Data System (ADS)

    Osmanoglu, B.; Ranson, J.; Sun, G.; Armstrong, A. H.; Fischer, R.; Huth, A.

    2017-12-01

    In this study, we present a parameterization of the FORMIND individual-based gap model (IBGM) for old-growth Atlantic lowland rainforest in La Selva, Costa Rica, for the purpose of informing multisensor remote sensing techniques for aboveground biomass estimation. The model was successfully parameterized and calibrated for the study site; results show that the simulated forest reproduces the structural complexity of the Costa Rican rainforest based on comparisons with CARBONO inventory plot data. Though the simulated stem numbers (378) slightly underestimated the plot data (418), particularly for canopy-dominant intermediate shade-tolerant trees and shade-tolerant understory trees, overall there was a 9.7% difference. Aboveground biomass (kg/ha) showed a 0.1% difference between the simulated forest and the inventory plot dataset. The Costa Rica FORMIND simulation was then used to parameterize spatially explicit (3D) SAR and lidar backscatter models. The simulated forest stands were used to generate a look-up table (LUT) as a tractable means to estimate aboveground forest biomass for these complex forests. Various combinations of lidar and radar variables were evaluated in the LUT inversion. To test the capability of future data for estimation of forest height and biomass, we considered 1) L- (or P-) band polarimetric data (backscattering coefficients of HH, HV and VV); 2) L-band dual-pol repeat-pass InSAR data (HH/HV backscattering coefficients and coherences, height of scattering phase center at HH and HV using a DEM or surface height from lidar data as reference); 3) P-band polarimetric InSAR data (canopy height from inversion of PolInSAR data, or the coherences and height of scattering phase center at HH, HV and VV); 4) various height indices from waveform lidar data; and 5) surface and canopy-top height from photon-counting lidar data. The methods for parameterizing the remote sensing models with the IBGM and developing look-up tables will be discussed. Results from various remote sensing scenarios will also be presented.

  16. Aerothermodynamic shape optimization of hypersonic blunt bodies

    NASA Astrophysics Data System (ADS)

    Eyi, Sinan; Yumuşak, Mine

    2015-07-01

    The aim of this study is to develop a reliable and efficient design tool that can be used in hypersonic flows. The flow analysis is based on the axisymmetric Euler/Navier-Stokes and finite-rate chemical reaction equations. The equations are coupled simultaneously and solved implicitly using Newton's method. The Jacobian matrix is evaluated analytically. A gradient-based numerical optimization is used. The adjoint method is utilized for sensitivity calculations. The objective of the design is to generate a hypersonic blunt geometry that produces the minimum drag with low aerodynamic heating. Bezier curves are used for geometry parameterization. The performances of the design optimization method are demonstrated for different hypersonic flow conditions.
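
    Bezier parameterization reduces the design geometry to a handful of control points, which become the optimization variables; a minimal sketch using de Casteljau evaluation (the control points below are invented for illustration):

      import numpy as np

      def bezier_point(control_points, t):
          # de Casteljau's algorithm: repeated linear interpolation
          pts = np.asarray(control_points, dtype=float)
          while len(pts) > 1:
              pts = (1.0 - t) * pts[:-1] + t * pts[1:]
          return pts[0]

      # An illustrative blunt-nose profile from four control points
      cps = [(0.0, 0.0), (0.0, 0.5), (0.6, 1.0), (1.5, 1.1)]
      nose = [bezier_point(cps, t) for t in np.linspace(0.0, 1.0, 11)]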

  17. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov-Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ2-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ2-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
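
    For a single realization of the uncertain inputs, the weighted least-squares idea reduces to a weighted least-squares problem over the chosen subspace; a generic sketch, where A, b, the subspace basis Phi, and the weighting operator W are all placeholders rather than quantities defined in the paper:

      import numpy as np

      def weighted_lspg_solve(A, b, Phi, W):
          # minimize || W (b - A Phi y) ||_2 over y, return x = Phi y
          M = W @ A @ Phi
          y, *_ = np.linalg.lstsq(M, W @ b, rcond=None)
          return Phi @ y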

  18. Short-term Time Step Convergence in a Climate Model

    DOE PAGES

    Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...

    2015-02-11

    A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4 in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides a clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
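
    The observed order can be estimated directly from such a test as the log-log slope of the end-of-run RMS difference against step size; a minimal sketch with made-up error values:

      import numpy as np

      def convergence_order(dts, rms_diffs):
          # slope of log(error) vs log(dt); ~1.0 indicates first order
          slope, _ = np.polyfit(np.log(dts), np.log(rms_diffs), 1)
          return slope

      # e.g. halving dt should halve the error for a first-order scheme
      print(convergence_order([1800.0, 900.0, 450.0], [0.2, 0.1, 0.05]))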

  19. A Bayesian Approach for Measurements of Stray Neutrons at Proton Therapy Facilities: Quantifying Neutron Dose Uncertainty.

    PubMed

    Dommert, M; Reginatto, M; Zboril, M; Fiedler, F; Helmbrecht, S; Enghardt, W; Lutz, B

    2017-11-28

    Bonner sphere measurements are typically analyzed using unfolding codes. It is well known that it is difficult to get reliable estimates of uncertainties for standard unfolding procedures. An alternative approach is to analyze the data using Bayesian parameter estimation. This method provides reliable estimates of the uncertainties of neutron spectra, leading to rigorous estimates of the uncertainty of the dose. We extend previous Bayesian approaches and apply the method to stray neutrons in proton therapy environments by introducing a new parameterized model which describes the main features of the expected neutron spectra. The parameterization is based on information that is available from measurements and detailed Monte Carlo simulations. The approach was validated with the results of an experiment using Bonner spheres carried out at the experimental hall of the OncoRay proton therapy facility in Dresden.

  20. A Fast Vector Radiative Transfer Model for Atmospheric and Oceanic Remote Sensing

    NASA Astrophysics Data System (ADS)

    Ding, J.; Yang, P.; King, M. D.; Platnick, S. E.; Meyer, K.

    2017-12-01

    A fast vector radiative transfer model is developed in support of atmospheric and oceanic remote sensing. This model is capable of simulating the Stokes vector observed at the top of the atmosphere (TOA) and the terrestrial surface by considering absorption, scattering, and emission. Gas absorption is parameterized in terms of atmospheric gas concentrations, temperature, and pressure. The parameterization scheme combines a regression method and the correlated-k distribution method, and integrates easily with multiple-scattering computations. The approach is more than four orders of magnitude faster than a line-by-line radiative transfer model, with errors of less than 0.5% in terms of transmissivity. A two-component approach is utilized to solve the vector radiative transfer equation (VRTE). The VRTE solver separates the phase matrices of aerosol and cloud into forward and diffuse parts, and thus the solution is also separated. The forward solution can be expressed by a semi-analytical equation based on the small-angle approximation, and serves as the source of the diffuse part. The diffuse part is solved by the adding-doubling method. The adding-doubling implementation is computationally efficient because the diffuse component needs far fewer spherical-function expansion terms. The simulated Stokes vectors at both the TOA and the surface have accuracy comparable to counterparts based on numerically rigorous methods.

  1. Influence of exposure assessment and parameterization on exposure response. Aspects of epidemiologic cohort analysis using the Libby Amphibole asbestos worker cohort.

    PubMed

    Bateson, Thomas F; Kopylev, Leonid

    2015-01-01

    Recent meta-analyses of occupational epidemiology studies identified two important exposure data quality factors in predicting summary effect measures for asbestos-associated lung cancer mortality risk: sufficiency of job history data and percent coverage of work history by measured exposures. The objective was to evaluate different exposure parameterizations suggested in the asbestos literature using the Libby, MT asbestos worker cohort and to evaluate influences of exposure measurement error caused by historically estimated exposure data on lung cancer risks. Focusing on workers hired after 1959, when job histories were well-known and occupational exposures were predominantly based on measured exposures (85% coverage), we found that cumulative exposure alone, and with allowance of exponential decay, fit lung cancer mortality data similarly. Residence-time-weighted metrics did not fit well. Compared with previous analyses based on the whole cohort of Libby workers hired after 1935, when job histories were less well-known and exposures less frequently measured (47% coverage), our analyses based on higher quality exposure data yielded an effect size as much as 3.6 times higher. Future occupational cohort studies should continue to refine retrospective exposure assessment methods, consider multiple exposure metrics, and explore new methods of maintaining statistical power while minimizing exposure measurement error.
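
    The two best-fitting metrics can be written compactly: plain cumulative exposure, and cumulative exposure with exponential decay of past increments. A sketch, where the half-life is a modeling choice rather than a value from the study:

      import numpy as np

      def cumulative_exposure(x, years, eval_year, half_life=None):
          # x: exposure intensity per period; years: period start years;
          # with half_life set, past increments are exponentially down-weighted
          t = eval_year - np.asarray(years, dtype=float)
          x = np.asarray(x, dtype=float)
          keep = t >= 0
          if half_life is None:
              return x[keep].sum()
          lam = np.log(2.0) / half_life
          return (x[keep] * np.exp(-lam * t[keep])).sum()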

  2. Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Hälg, Roger A.; Baiocco, Giorgio; Lomax, Tony

    2016-08-01

    The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate not only the dose but also the energy spectra, in order to obtain quantities which could be a measure of the biological effectiveness and to test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation-corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was simulated using the GEANT Monte Carlo code. Based on the simulated neutron spectra for three different proton beam energies, a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with satisfying precision up to 85 cm away from the proton pencil beam when compared to results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between the Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factor and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy.

  3. Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE.

    PubMed

    Schneider, Uwe; Hälg, Roger A; Baiocco, Giorgio; Lomax, Tony

    2016-08-21

    The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate not only the dose but also the energy spectra, in order to obtain quantities which could be a measure of the biological effectiveness and to test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation-corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was simulated using the GEANT Monte Carlo code. Based on the simulated neutron spectra for three different proton beam energies, a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with satisfying precision up to 85 cm away from the proton pencil beam when compared to results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between the Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factor and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy.

  4. An image morphing technique based on optimal mass preserving mapping.

    PubMed

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2007-06-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.

  5. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    PubMed Central

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  6. Effective Atomic Number, Mass Attenuation Coefficient Parameterization, and Implications for High-Energy X-Ray Cargo Inspection Systems

    NASA Astrophysics Data System (ADS)

    Langeveld, Willem G. J.

    The most widely used technology for the non-intrusive active inspection of cargo containers and trucks is x-ray radiography at high energies (4-9 MeV). Technologies such as dual-energy imaging, spectroscopy, and statistical waveform analysis can be used to estimate the effective atomic number (Zeff) of the cargo from the x-ray transmission data, because the mass attenuation coefficient depends on energy as well as atomic number Z. The estimated effective atomic number, Zeff, of the cargo then leads to improved detection capability of contraband and threats, including special nuclear materials (SNM) and shielding. In this context, the exact meaning of effective atomic number (for mixtures and compounds) is generally not well-defined. Physics-based parameterizations of the mass attenuation coefficient have been given in the past, but usually for a limited low-energy range. Definitions of Zeff have been based, in part, on such parameterizations. Here, we give an improved parameterization at low energies (20-1000 keV) which leads to a well-defined Zeff. We then extend this parameterization up to energies relevant for cargo inspection (10 MeV), and examine what happens to the Zeff definition at these higher energies.
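
    One common ad hoc definition that such parameterizations refine is the power-law effective atomic number; a sketch (the exponent n is regime dependent, which is exactly why the paper argues for a parameterization-based definition of Zeff):

      def z_effective(fractions, zs, n=2.94):
          # fractions: electron fractions of the constituent elements
          return sum(f * z**n for f, z in zip(fractions, zs)) ** (1.0 / n)

      # Water: 2 of 10 electrons from H, 8 from O
      print(z_effective([0.2, 0.8], [1, 8]))   # about 7.4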

  7. Testing a common ice-ocean parameterization with laboratory experiments

    NASA Astrophysics Data System (ADS)

    McConnochie, C. D.; Kerr, R. C.

    2017-07-01

    Numerical models of ice-ocean interactions typically rely upon a parameterization for the transport of heat and salt to the ice face that has not been satisfactorily validated by observational or experimental data. We compare laboratory experiments of ice-saltwater interactions to a common numerical parameterization and find a significant disagreement in the dependence of the melt rate on the fluid velocity. We suggest a resolution to this disagreement based on a theoretical analysis of the boundary layer next to a vertical heated plate, which results in a threshold fluid velocity of approximately 4 cm/s at driving temperatures between 0.5 and 4°C, above which the form of the parameterization should be valid.
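
    The parameterization under test is, in essence, a melt rate proportional to the fluid speed and the thermal driving. Below is a minimal sketch of that standard scaling, with a typical literature value for the combined transfer (Stanton) coefficient; the coefficient and constants are assumptions, not the paper's fitted numbers.

```python
def melt_rate_m_per_day(u, delta_t, stanton=1.1e-3,
                        rho_w=1027.0, c_w=3974.0, rho_i=917.0, latent=3.34e5):
    """Melt rate ~ (rho_w * c_w / (rho_i * L)) * St * U * (T_w - T_freezing).
    stanton is the combined coefficient Cd^(1/2)*Gamma_T (typical literature
    value, assumed here); u in m/s, delta_t in degC."""
    m_per_s = rho_w * c_w * stanton * u * delta_t / (rho_i * latent)
    return m_per_s * 86400.0

# Velocities around the ~4 cm/s threshold discussed above, at 2 degC driving:
for u in (0.02, 0.04, 0.08):
    print(f"U = {u} m/s -> {melt_rate_m_per_day(u, 2.0):.3f} m/day")
```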

  8. A second-order Budyko-type parameterization of land-surface hydrology

    NASA Technical Reports Server (NTRS)

    Andreou, S. A.; Eagleson, P. S.

    1982-01-01

    A simple, second-order parameterization of the water fluxes at a land surface was developed for use as the appropriate boundary condition in general circulation models of the global atmosphere. The derived parameterization incorporates the strong nonlinearities in the relationship between the near-surface soil moisture and the evaporation, runoff and percolation fluxes. Based on the one-dimensional statistical-dynamical derivation of the annual water balance, it makes the transition to short-term prediction of the moisture fluxes through a Taylor expansion around the average annual soil moisture. A comparison of the suggested parameterization is made with other existing techniques and available measurements. A thermodynamic coupling is applied in order to obtain estimates of the surface ground temperature.
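
    The core of such a second-order scheme is to correct the flux evaluated at the mean soil moisture with a curvature term. Below is a minimal sketch of that idea, using an illustrative power-law evaporation function rather than the paper's actual flux expressions; all coefficients are assumptions.

```python
import numpy as np

def evap(s, e_max=5.0, c=3.0):
    """Illustrative nonlinear evaporation flux (mm/day) vs. soil moisture s in [0, 1]."""
    return e_max * s**c

def evap_second_order(s_mean, s_var, e_max=5.0, c=3.0):
    """Second-order Taylor expansion of the flux around the mean soil moisture:
    E[f(s)] ~= f(s_mean) + 0.5 * f''(s_mean) * Var(s)
    (the first-order term vanishes in expectation)."""
    f = e_max * s_mean**c
    f2 = e_max * c * (c - 1.0) * s_mean**(c - 2.0)   # second derivative
    return f + 0.5 * f2 * s_var

# Compare against a brute-force expectation over a soil-moisture distribution
rng = np.random.default_rng(0)
s = np.clip(rng.normal(0.4, 0.1, 100_000), 0.0, 1.0)
print("second-order:", evap_second_order(s.mean(), s.var()))
print("Monte Carlo :", evap(s).mean())
```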

  9. Development of a new physics-based internal coordinate mechanics force field and its application to protein loop modeling.

    PubMed

    Arnautova, Yelena A; Abagyan, Ruben A; Totrov, Maxim

    2011-02-01

    We report the development of the internal coordinate mechanics force field (ICMFF), a new force field parameterized using a combination of experimental data for crystals of small molecules and quantum mechanics calculations. The main features of ICMFF include: (a) parameterization for the dielectric constant relevant to the condensed state (ε = 2) instead of vacuum, (b) an improved description of hydrogen-bond interactions using duplicate sets of van der Waals parameters for heavy atom-hydrogen interactions, and (c) improved backbone covalent geometry and energetics achieved using novel backbone torsional potentials and inclusion of the bond angles at the C(α) atoms in the internal variable set. The performance of ICMFF was evaluated through loop modeling simulations for 4-13-residue loops. ICMFF was combined with a solvent-accessible surface area solvation model optimized using a large set of loop decoys. Conformational sampling was carried out using the biased-probability Monte Carlo method. Average/median backbone root-mean-square deviations of the lowest-energy conformations from the native structures were 0.25/0.21 Å for four-residue loops, 0.84/0.46 Å for eight-residue loops, and 1.16/0.73 Å for 12-residue loops. To our knowledge, these results are significantly better than or comparable with those reported to date for any loop modeling method that does not take crystal packing into account. Moreover, the accuracy of our method is on par with the best previously reported results obtained considering the crystal environment. We attribute this success to the high accuracy of the new ICM force field achieved by meticulous parameterization, to the optimized solvent model, and to the efficiency of the search method.

  10. Atmospheric parameterization schemes for satellite cloud property retrieval during FIRE IFO 2

    NASA Technical Reports Server (NTRS)

    Titlow, James; Baum, Bryan A.

    1993-01-01

    Satellite cloud retrieval algorithms generally require atmospheric temperature and humidity profiles to determine such cloud properties as pressure and height. For instance, the CO2 slicing technique called the ratio method requires the calculation of theoretical upwelling radiances both at the surface and at a prescribed number (40) of atmospheric levels. This technique has been applied to data from, for example, the High Resolution Infrared Radiometer Sounder (HIRS/2, henceforth HIRS) flown aboard the NOAA series of polar orbiting satellites and the High Resolution Interferometer Sounder (HIS). In this particular study, four NOAA-11 HIRS channels in the 15-μm region are used. The ratio method may be applied to various channel combinations in the 15-μm region to estimate cloud top heights. Presently, the multispectral, multiresolution (MSMR) scheme uses four HIRS channel-combination estimates for mid- to high-level cloud pressure retrieval and Advanced Very High Resolution Radiometer (AVHRR) data for low-level (>700 mb) cloud retrieval. In order to determine theoretical upwelling radiances, atmospheric temperature and water vapor profiles must be provided, as well as profiles of other radiatively important absorbing gases such as CO2, O3, and CH4. The assumed temperature and humidity profiles have a large effect on transmittance and radiance profiles, which in turn are used with HIRS data to calculate cloud pressure, and thus cloud height and temperature. For large-spatial-scale satellite data analysis, atmospheric parameterization schemes for cloud retrieval algorithms are usually based on a gridded product such as that provided by the European Center for Medium Range Weather Forecasting (ECMWF) or the National Meteorological Center (NMC). These global, gridded products prescribe temperature and humidity profiles for a limited number of pressure levels (up to 14) in a vertical atmospheric column. The FIRE IFO 2 experiment provides an opportunity to investigate current atmospheric profile parameterization schemes, compare satellite cloud height results using both gridded products (ECMWF) and high vertical resolution sonde data from the National Weather Service (NWS) and Cross Chain Loran Atmospheric Sounding System (CLASS), and suggest modifications to atmospheric parameterization schemes based on these results.
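
    As a rough illustration of the ratio method's logic, the sketch below matches an observed two-channel radiance ratio against theoretical ratios precomputed at discrete pressure levels. The radiance shapes here are synthetic stand-ins; a real retrieval would compute them with a forward radiative-transfer model driven by the temperature and humidity profiles discussed above.

```python
import numpy as np

# Hypothetical pressure levels (hPa); in practice the clear/overcast radiances
# at each level come from a radiative-transfer model (e.g. ECMWF or sonde input).
p_levels = np.linspace(100.0, 950.0, 40)

def theoretical_ratio(p_cloud, ch1_weight=6.0, ch2_weight=3.0):
    """Illustrative ratio of (clear-sky minus overcast) radiances for a cloud at
    pressure p_cloud, for a CO2-band channel pair; the exponential shapes are
    made up for demonstration."""
    tau1 = np.exp(-(1000.0 - p_cloud) / (100.0 * ch1_weight))
    tau2 = np.exp(-(1000.0 - p_cloud) / (100.0 * ch2_weight))
    return (1.0 - tau1) / (1.0 - tau2)

def retrieve_cloud_pressure(observed_ratio):
    """Pick the level whose theoretical channel ratio best matches the observation."""
    ratios = theoretical_ratio(p_levels)
    return p_levels[np.argmin(np.abs(ratios - observed_ratio))]

print(retrieve_cloud_pressure(observed_ratio=0.8), "hPa")
```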

  11. Development and Testing of Coupled Land-surface, PBL and Shallow/Deep Convective Parameterizations within the MM5

    NASA Technical Reports Server (NTRS)

    Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.

    2000-01-01

    The objective of this investigation was to study the role of shallow convection on the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study and the latter two have been improved significantly to extend their capabilities.

  12. Parameterization of eddy sensible heat transports in a zonally averaged dynamic model of the atmosphere

    NASA Technical Reports Server (NTRS)

    Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean

    1990-01-01

    A Charney-Branscome-based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.

  13. A stochastic parameterization for deep convection using cellular automata

    NASA Astrophysics Data System (ADS)

    Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.

    2012-12-01

    Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept which took form in the early 1970's. In such schemes it is assumed that a unique relationship exists between the ensemble-average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, it has been recognized by the community that it is important to introduce stochastic elements to the parameterizations (for instance: Plant and Craig, 2008, Khouider et al. 2010, Frenkel et al. 2011, Bengtsson et al. 2011, but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid-boxes, and for temporal memory. Thus the CA scheme used in this study contains three interesting components for the representation of cumulus convection which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall-lines. Probabilistic evaluation demonstrates an enhanced spread in large-scale variables in regions where convective activity is large. A two-month extended evaluation of the deterministic behaviour of the scheme indicates a neutral impact on forecast skill. References: Bengtsson, L., H. Körnich, E. Källén, and G. Svensson, 2011: Large-scale dynamical response to sub-grid scale organization provided by cellular automata. Journal of the Atmospheric Sciences, 68, 3132-3144. Frenkel, Y., A. Majda, and B. Khouider, 2011: Using the stochastic multicloud model to improve tropical convective parameterization: A paradigm example. Journal of the Atmospheric Sciences, doi: 10.1175/JAS-D-11-0148.1. Huang, X.-Y., 1988: The organization of moist convection by internal gravity waves. Tellus A, 42, 270-285. Khouider, B., J. Biello, and A. Majda, 2010: A Stochastic Multicloud Model for Tropical Convection. Comm. Math. Sci., 8, 187-216. Palmer, T., 2011: Towards the Probabilistic Earth-System Simulator: A Vision for the Future of Climate and Weather Prediction. Quarterly Journal of the Royal Meteorological Society, 138 (2012), 841-861. Plant, R. and G. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65, 87-105.
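
    A minimal sketch of the cellular-automaton ingredients named above (lateral communication, temporal memory, and stochasticity) might look as follows. The grid size, neighbourhood rule, and probabilities are illustrative assumptions, not the ALARO scheme's actual settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def ca_step(active, birth_prob=0.25, survive_prob=0.65):
    """One update of a stochastic cellular automaton on a 2D grid.

    A cell switches on with probability birth_prob if at least two of its eight
    neighbours are active (lateral communication), and an active cell persists
    with probability survive_prob (temporal memory)."""
    padded = np.pad(active, 1, mode="wrap")
    neighbours = sum(
        padded[1 + di:padded.shape[0] - 1 + di, 1 + dj:padded.shape[1] - 1 + dj]
        for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)
    )
    birth = (neighbours >= 2) & (rng.random(active.shape) < birth_prob)
    survive = active & (rng.random(active.shape) < survive_prob)
    return (birth | survive).astype(int)

# Seed a few "convectively triggered" cells and iterate; clusters self-organize.
grid = (rng.random((60, 60)) < 0.02).astype(int)
for _ in range(50):
    grid = ca_step(grid)
print("active fraction:", grid.mean())
```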

  14. Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.

    NASA Astrophysics Data System (ADS)

    Gubler, S.; Gruber, S.; Purves, R. S.

    2012-06-01

    As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high-quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best-performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity at night. We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as fixed values, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between -2 and 5% and between 7 and 13% across the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse, SDR. Calibration of LDR parameterizations to local conditions strongly reduces MBD and RMSD compared to using the published values of the parameters, resulting in relative MBD and RMSD of less than 5% and 10%, respectively, for the best parameterizations. The best results for estimating cloud transmissivity at night were obtained by linearly interpolating the average of the cloud transmissivity of the four hours of the preceding afternoon and the following morning. Model uncertainty can be caused by different error sources, such as code implementation, errors in input data, and errors in estimated parameters. The influence of the latter two (errors in input data and model parameter uncertainty) on model outputs is determined using Monte Carlo simulation. Model uncertainty is reported as the relative standard deviation σrel of the simulated frequency distributions of the model outputs. An optimistic estimate of the relative uncertainty σrel resulted in 10% for the clear-sky direct, 30% for diffuse, 3% for global SDR, and 3% for the fitted all-sky LDR.
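
    The two deviance metrics and the Monte Carlo uncertainty estimate reported above are straightforward to express in code. The sketch below assumes a toy clear-sky SDR model with made-up transmittance coefficients; only the metric definitions (relative MBD, relative RMSD, σrel) follow the text.

```python
import numpy as np

def mbd_rmsd(simulated, observed):
    """Relative mean bias deviance and root mean squared deviance, in percent."""
    mbd = 100.0 * np.mean(simulated - observed) / np.mean(observed)
    rmsd = 100.0 * np.sqrt(np.mean((simulated - observed) ** 2)) / np.mean(observed)
    return mbd, rmsd

def monte_carlo_sigma_rel(model, param_means, param_sigmas, n=10_000, seed=1):
    """Relative standard deviation of a model output under Gaussian parameter errors."""
    rng = np.random.default_rng(seed)
    params = rng.normal(param_means, param_sigmas, size=(n, len(param_means)))
    outputs = np.array([model(p) for p in params])
    return outputs.std() / outputs.mean()

# Toy clear-sky global SDR model: transmittance from aerosol optical depth and
# precipitable water (coefficients are illustrative, not the paper's).
def toy_sdr(p, s0=1000.0):
    aod, pw = p
    return s0 * np.exp(-0.2 * aod - 0.05 * pw)

print("sigma_rel:", monte_carlo_sigma_rel(toy_sdr, [0.1, 10.0], [0.05, 2.0]))
```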

  15. How we compute N matters to estimates of mixing in stratified flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.

    Most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency N, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of N on turbulence quantities. It is shown that how N is calculated changes not only the flux Richardson number R_f, which is often used to parameterize turbulent mixing, but also the turbulence activity number, or Gibson number Gi, leading to potential errors in estimates of the mixing efficiency using Gi-based parameterizations.

  16. How we compute N matters to estimates of mixing in stratified flows

    DOE PAGES

    Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.; ...

    2017-10-13

    Most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency N, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of N on turbulence quantities. It is shown that how N is calculated changes not only the flux Richardson number R_f, which is often used to parameterize turbulent mixing, but also the turbulence activity number, or Gibson number Gi, leading to potential errors in estimates of the mixing efficiency using Gi-based parameterizations.
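
    The distinction between the two N calculations can be made concrete with a small sketch: one background profile from sorting the full 3D density field, one from sorting each column locally. The synthetic density field and grid below are assumptions for illustration only.

```python
import numpy as np

g, rho0 = 9.81, 1000.0

def n_squared_global_sort(density, z):
    """N^2 from a background state obtained by sorting the full 3D density
    field into a single stable profile (common in numerical models)."""
    sorted_profile = np.sort(density.ravel())[::-1]   # densest at the bottom
    slabs = np.array_split(sorted_profile, z.size)    # re-bin to vertical grid
    profile = np.array([s.mean() for s in slabs])
    return -(g / rho0) * np.gradient(profile, z)

def n_squared_local_sort(density, z):
    """N^2 from sorting each vertical column separately (as in field profiles),
    then averaging horizontally."""
    cols = np.sort(density, axis=-1)[..., ::-1]       # stably sort each column
    profile = cols.mean(axis=tuple(range(density.ndim - 1)))
    return -(g / rho0) * np.gradient(profile, z)

rng = np.random.default_rng(3)
z = np.linspace(0.0, 1.0, 32)                          # height (m), upward
base = 1028.0 - 3.0 * z                                # stable background
density = base + rng.normal(0.0, 0.5, (16, 16, 32))    # overturning "turbulence"
print(n_squared_global_sort(density, z)[:3])
print(n_squared_local_sort(density, z)[:3])
```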

  17. The Great Chilean Tsunamis of 2010, 2014 and 2015 on the Coast and Offshore of Mexico: Comparative Features Based on Open-Ocean Energy Parameterization

    NASA Astrophysics Data System (ADS)

    Rabinovich, A.; Zaytsev, O.; Thomson, R.

    2016-12-01

    The three recent great earthquakes offshore of Chile on 27 February 2010 (Maule, Mw 8.8), 1 April 2014 (Iquique, Mw 8.2) and 16 September 2015 (Illapel, Mw 8.3) generated major trans-oceanic tsunamis that spread throughout the entire Pacific Ocean and were measured by numerous coastal tide gauges and open-ocean DART stations. Statistical and spectral analyses of the tsunami waves from the three events recorded on the Pacific coast of Mexico enabled us to compare the events and to identify coastal "hot spots", regions with maximum tsunami risk. Based on joint spectral analyses of tsunamis and background noise, we have developed a method for reconstructing the "true" tsunami spectra in the deep ocean. The "reconstructed" open-ocean tsunami spectra are in excellent agreement with the actual tsunami spectra evaluated from direct analysis of the DART records offshore of Mexico. We have further used the spectral estimates to parameterize the energy of the three Chilean tsunamis based on the total open-ocean tsunami energy and frequency content of the individual events.

  18. A statistical comparison of cirrus particle size distributions measured using the 2-D stereo probe during the TC4, SPARTICUS, and MACPEX flight campaigns with historical cirrus datasets

    NASA Astrophysics Data System (ADS)

    Schwartz, M. Christian

    2017-08-01

    This paper addresses two straightforward questions. First, how similar are the statistics of cirrus particle size distribution (PSD) datasets collected using the Two-Dimensional Stereo (2D-S) probe to cirrus PSD datasets collected using older Particle Measuring Systems (PMS) 2-D Cloud (2DC) and 2-D Precipitation (2DP) probes? Second, how similar are the datasets when shatter-correcting post-processing is applied to the 2DC datasets? To answer these questions, a database of measured and parameterized cirrus PSDs - constructed from measurements taken during the Small Particles in Cirrus (SPARTICUS); Mid-latitude Airborne Cirrus Properties Experiment (MACPEX); and Tropical Composition, Cloud, and Climate Coupling (TC4) flight campaigns - is used. Bulk cloud quantities are computed from the 2D-S database in three ways: first, directly from the 2D-S data; second, by applying the 2D-S data to ice PSD parameterizations developed using sets of cirrus measurements collected using the older PMS probes; and third, by applying the 2D-S data to a similar parameterization developed using the 2D-S data themselves. This is done so that measurements of the same cloud volumes by parameterized versions of the 2DC and 2D-S can be compared with one another. It is thereby seen - given the same cloud field and given the same assumptions concerning ice crystal cross-sectional area, density, and radar cross section - that the parameterized 2D-S and the parameterized 2DC predict similar distributions of inferred shortwave extinction coefficient, ice water content, and 94 GHz radar reflectivity. However, the parameterization of the 2DC based on uncorrected data predicts a statistically significantly higher number of total ice crystals and a larger ratio of small ice crystals to large ice crystals than does the parameterized 2D-S. The 2DC parameterization based on shatter-corrected data also predicts statistically different numbers of ice crystals than does the parameterized 2D-S, but the comparison between the two is nevertheless more favorable. It is concluded that the older datasets continue to be useful for scientific purposes, with certain caveats, and that continuing field investigations of cirrus with more modern probes is desirable.

  19. An analytically based numerical method for computing view factors in real urban environments

    NASA Astrophysics Data System (ADS)

    Lee, Doo-Il; Woo, Ju-Wan; Lee, Sang-Hyun

    2018-01-01

    A view factor is an important morphological parameter used in parameterizing the in-canyon radiative energy exchange process as well as in characterizing local climate over urban environments. For a realistic representation of the in-canyon radiative processes, a complete set of view factors at the horizontal and vertical surfaces of urban facets is required. Various analytical and numerical methods have been suggested to determine the view factors for urban environments, but most of the methods provide only the sky-view factor at the ground level of a specific location or assume a simplified morphology of complex urban environments. In this study, a numerical method that can determine the sky-view factors (ψ_ga and ψ_wa) and wall-view factors (ψ_gw and ψ_ww) at the horizontal and vertical surfaces is presented for application to real urban morphology; the method is derived from an analytical formulation of the view factor between two blackbody surfaces of arbitrary geometry. The established numerical method is validated against analytical sky-view factor estimates for ideal street canyon geometries, showing good accuracy with errors of less than 0.2%. Using a three-dimensional building database, the numerical method is also demonstrated to be applicable in determining the sky-view factors at the horizontal (roofs and roads) and vertical (walls) surfaces in real urban environments. The results suggest that the analytically based numerical method can be used for the radiative process parameterization of urban numerical models as well as for the characterization of local urban climate.
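
    The analytical kernel underlying such a method is the double-area integral F = (1/A1) ∬ cosθ1 cosθ2 / (π r²) dA1 dA2. The sketch below integrates it numerically for two directly opposed square patches, a configuration with a known analytic value (about 0.20 for unit squares one unit apart); the real method generalizes this to arbitrary urban facet geometry.

```python
import numpy as np

def view_factor_parallel_patches(size=1.0, separation=1.0, n=32):
    """View factor between two directly opposed square patches, by midpoint-rule
    integration of cos(t1)*cos(t2)/(pi*r^2) over both areas.
    For parallel faces, cos(t1) = cos(t2) = separation / r."""
    h = size / n
    xc = (np.arange(n) + 0.5) * h                    # midpoints of sub-cells
    x1, y1, x2, y2 = np.meshgrid(xc, xc, xc, xc, indexing="ij")
    r2 = (x1 - x2) ** 2 + (y1 - y2) ** 2 + separation ** 2
    kernel = separation ** 2 / (np.pi * r2 ** 2)     # cos*cos/(pi*r^2)
    area1 = size ** 2
    return kernel.sum() * h**4 / area1

# Unit squares one unit apart; the analytic result is approximately 0.1998.
print(view_factor_parallel_patches())
```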

  20. Modeling particle nucleation and growth over northern California during the 2010 CARES campaign

    NASA Astrophysics Data System (ADS)

    Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.

    2015-11-01

    Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies. This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the Weather Research and Forecast coupled with chemistry (WRF-Chem) regional model using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4, while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated by up to a factor of 2.5 the total particle number concentration for particle diameters greater than 10 nm. Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapor parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based on only H2SO4. At the CARES urban ground site, peak nucleation rates are predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times as well as lower boundary-layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. We found that newly formed particles could explain up to 20-30 % of predicted cloud condensation nuclei at 0.5 % supersaturation, depending on location and the specific nucleation parameterization. A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20 size bin simulation, while reducing the associated computational cost by ~ 36 %.

  1. Application of new parameterizations of gas transfer velocity and their impact on regional and global marine CO 2 budgets

    NASA Astrophysics Data System (ADS)

    Fangohr, Susanne; Woolf, David K.

    2007-06-01

    One of the dominant sources of uncertainty in the calculation of air-sea flux of carbon dioxide on a global scale originates from the various parameterizations of the gas transfer velocity, k, that are in use. Whilst it is undisputed that most of these parameterizations have shortcomings and neglect processes which influence air-sea gas exchange and do not scale with wind speed alone, there is no general agreement about their relative accuracy. The most widely used parameterizations are based on non-linear functions of wind speed and, to a lesser extent, on sea surface temperature and salinity. Processes such as surface film damping and whitecapping are known to have an effect on air-sea exchange. More recently published parameterizations use friction velocity, sea surface roughness, and significant wave height. These new parameters can account to some extent for processes such as film damping and whitecapping and could potentially explain the spread of wind-speed based transfer velocities published in the literature. We combine some of the principles of two recently published k parameterizations [Glover, D.M., Frew, N.M., McCue, S.J. and Bock, E.J., 2002. A multiyear time series of global gas transfer velocity from the TOPEX dual frequency, normalized radar backscatter algorithm. In: Donelan, M.A., Drennan, W.M., Saltzman, E.S., and Wanninkhof, R. (Eds.), Gas Transfer at Water Surfaces, Geophys. Monograph 127. AGU, Washington, DC, 325-331; Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking. Tellus, 57B: 87-94] to calculate k as the sum of a linear function of total mean square slope of the sea surface and a wave breaking parameter. This separates contributions from direct and bubble-mediated gas transfer as suggested by Woolf [Woolf, D.K., 2005. Parameterization of gas transfer velocities and sea-state dependent wave breaking. Tellus, 57B: 87-94] and allows us to quantify contributions from these two processes independently. We then apply our parameterization to a monthly TOPEX altimeter gridded 1.5° × 1.5° data set and compare our results to transfer velocities calculated using the popular wind-based k parameterizations by Wanninkhof [Wanninkhof, R., 1992. Relationship between wind speed and gas exchange over the ocean. J. Geophys. Res., 97: 7373-7382.] and Wanninkhof and McGillis [Wanninkhof, R. and McGillis, W., 1999. A cubic relationship between air-sea CO2 exchange and wind speed. Geophys. Res. Lett., 26(13): 1889-1892]. We show that despite good agreement of the globally averaged transfer velocities, global and regional fluxes differ by up to 100%. These discrepancies are a result of different spatio-temporal distributions of the processes involved in the parameterizations of k, indicating the importance of wave field parameters and a need for further validation.
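
    In code, the contrast between a wind-only parameterization and the slope-plus-breaking decomposition can be sketched as below. The Wanninkhof (1992) quadratic is as published; the coefficients of the mean-square-slope and whitecap terms, and the slope-wind and whitecap-wind relations, are placeholder assumptions rather than values from the cited papers.

```python
import numpy as np

def k_wanninkhof_1992(u10, schmidt=660.0):
    """Quadratic wind-speed parameterization (cm/h): k = 0.31 u^2 (Sc/660)^-1/2."""
    return 0.31 * u10**2 * (schmidt / 660.0) ** -0.5

def k_slope_plus_breaking(mss, whitecap_frac, a=1.0e3, b=8.0e2):
    """Transfer velocity (cm/h) as a linear function of total mean square slope
    (direct transfer) plus a wave-breaking term (bubble-mediated transfer),
    in the spirit of the hybrid scheme; a and b are placeholder coefficients."""
    return a * mss + b * whitecap_frac

u10 = np.array([5.0, 10.0, 15.0])      # wind speed (m/s)
mss = 0.003 * u10                      # illustrative slope-wind relation
wc = 3.84e-6 * u10**3.41               # Monahan-type whitecap fraction (assumed)
print(k_wanninkhof_1992(u10))
print(k_slope_plus_breaking(mss, wc))
```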

  2. Boundary-layer cumulus over heterogeneous landscapes: A subgrid GCM parameterization. Final report, December 1991--November 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stull, R.B.; Tripoli, G.

    1996-01-08

    The authors developed single-column parameterizations for subgrid boundary-layer cumulus clouds. These give cloud onset time, cloud coverage, and ensemble distributions of cloud-base altitudes, cloud-top altitudes, cloud thickness, and the characteristics of cloudy and clear updrafts. They tested and refined the parameterizations against archived data from Spring and Summer 1994 and 1995 intensive operation periods (IOPs) at the Southern Great Plains (SGP) ARM CART site near Lamont, Oklahoma. The authors also found that: cloud-base altitudes are not uniform over a heterogeneous surface; tops of some cumulus clouds can be below the base-altitudes of other cumulus clouds; there is an overlap region near cloud base where clear and cloudy updrafts exist simultaneously; and the lognormal distribution of cloud sizes scales to the JFD of surface layer air and to the shape of the temperature profile above the boundary layer.

  3. Parametric soil water retention models: a critical evaluation of expressions for the full moisture range

    NASA Astrophysics Data System (ADS)

    Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane

    2018-02-01

    Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization that ideally does not only rely on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
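
    For concreteness, the sketch below implements the most widely used retention/conductivity pair (van Genuchten retention with a Mualem conductivity) and probes the conductivity slope near saturation, the region where a plausibility criterion of this kind bites. Parameter values are illustrative.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water retention: volumetric water content as a function of suction h (> 0)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * h) ** n) ** -m        # effective saturation
    return theta_r + (theta_s - theta_r) * se

def mualem_k(se, k_s, n, tortuosity=0.5):
    """Mualem relative conductivity from effective saturation Se in (0, 1]."""
    m = 1.0 - 1.0 / n
    return k_s * se**tortuosity * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Inspect dK/dSe as Se -> 1: for small n the curve becomes extremely steep near
# saturation, the kind of implausible near-saturation behaviour screened for.
se = np.linspace(0.95, 1.0 - 1e-9, 200)
for n in (1.1, 1.5, 2.5):
    k = mualem_k(se, k_s=1.0, n=n)
    print(f"n={n}: dK/dSe near saturation ~ {np.gradient(k, se)[-1]:.3g}")
```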

  4. Spectral cumulus parameterization based on cloud-resolving model

    NASA Astrophysics Data System (ADS)

    Baba, Yuya

    2018-02-01

    We have developed a spectral cumulus parameterization using a cloud-resolving model. This includes a new parameterization of the entrainment rate, which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation: it reduced the positive bias of precipitation in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were found to derive from the modification of the parameterization of the entrainment rate, i.e., the proposed parameterization suppressed an excessive increase of entrainment, thus suppressing an excessive increase of low-level clouds.

  5. Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs.

    NASA Astrophysics Data System (ADS)

    Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Cheng, A.

    2017-12-01

    A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecasting System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity, and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation, and cloudiness. Unlike other similar methods, comparatively few new prognostic variables need to be introduced, making the technique computationally efficient. In the base version of SHOC this is the SGS turbulent kinetic energy (TKE); in the developmental version it is the SGS TKE plus the variances of total water and moist static energy (MSE). SHOC is now incorporated into a version of GFS that will become part of the NOAA Next Generation Global Prediction System, based around NOAA GFDL's FV3 dynamical core, the NOAA Environmental Modeling System (NEMS) coupled modeling infrastructure software, and a set of novel physical parameterizations. Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary-layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or large-scale condensation/deposition; instead, SHOC provides these quantities. The radiative transfer parameterization uses cloudiness computed by SHOC. An outstanding problem with the implementation of SHOC in the NCEP global models is excessively large high-level tropical cloudiness. Comparison of the moments of the SGS PDF diagnosed by SHOC to the moments calculated in a GigaLES simulation of a tropical deep convection case (GATE) shows that SHOC diagnoses PDFs of total cloud water and MSE that are too narrow in the areas of deep convective detrainment. A subsequent sensitivity study of SHOC's diagnosed cloud fraction (CF) to higher-order input moments of the SGS PDF demonstrated that CF is improved if SHOC is provided with correct variances of total water and MSE. Consequently, SHOC was modified to include two new prognostic equations for the variances of total water and MSE, and coupled with the Chikira-Sugiyama parameterization of deep convection to include the effects of detrainment on the prognostic variances.
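
    The PDF-to-cloud step can be illustrated with the classic single-Gaussian saturation-excess closure, a simplified stand-in for SHOC's joint PDF: cloud fraction and mean liquid water follow analytically from the mean and variance of s = qt - qsat. The numbers below are arbitrary.

```python
import numpy as np
from scipy.special import erf

def gaussian_pdf_cloud(qt_mean, qsat, sigma_s):
    """Diagnose cloud fraction and mean liquid water from an assumed Gaussian
    PDF of the saturation excess s = qt - qsat (single-Gaussian sketch; SHOC
    itself predicts a more general joint PDF including vertical velocity)."""
    q1 = (qt_mean - qsat) / sigma_s
    cloud_fraction = 0.5 * (1.0 + erf(q1 / np.sqrt(2.0)))
    ql_mean = sigma_s * (q1 * cloud_fraction
                         + np.exp(-0.5 * q1**2) / np.sqrt(2.0 * np.pi))
    return cloud_fraction, ql_mean

# Wider PDFs (larger sigma_s) give more cloud in subsaturated air -- the
# sensitivity discussed for deep convective detrainment regions.
for sigma in (1e-4, 5e-4, 1e-3):
    cf, ql = gaussian_pdf_cloud(qt_mean=0.016, qsat=0.017, sigma_s=sigma)
    print(f"sigma_s={sigma:.0e}: CF={cf:.3f}, ql={ql:.2e}")
```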

  6. Stochastic parameterization of shallow cumulus convection estimated from high-resolution model data

    NASA Astrophysics Data System (ADS)

    Dorrestijn, Jesse; Crommelin, Daan T.; Siebesma, A. Pier.; Jonker, Harm J. J.

    2013-02-01

    In this paper, we report on the development of a methodology for stochastic parameterization of convective transport by shallow cumulus convection in weather and climate models. We construct a parameterization based on Large-Eddy Simulation (LES) data. These simulations resolve the turbulent fluxes of heat and moisture and are based on a typical case of non-precipitating shallow cumulus convection above sea in the trade-wind region. Using clustering, we determine a finite number of turbulent flux pairs for heat and moisture that are representative for the pairs of flux profiles observed in these simulations. In the stochastic parameterization scheme proposed here, the convection scheme jumps randomly between these pre-computed pairs of turbulent flux profiles. The transition probabilities are estimated from the LES data, and they are conditioned on the resolved-scale state in the model column. Hence, the stochastic parameterization is formulated as a data-inferred conditional Markov chain (CMC), where each state of the Markov chain corresponds to a pair of turbulent heat and moisture fluxes. The CMC parameterization is designed to emulate, in a statistical sense, the convective behaviour observed in the LES data. The CMC is tested in single-column model (SCM) experiments. The SCM is able to reproduce the ensemble spread of the temperature and humidity that was observed in the LES data. Furthermore, there is a good similarity between time series of the fractions of the discretized fluxes produced by SCM and observed in LES.
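
    A minimal sketch of such a data-inferred conditional Markov chain is shown below: a small library of flux-profile pairs and per-regime transition matrices, from which the scheme jumps randomly each time step. The state count, regimes, matrices, and flux profiles are placeholder assumptions; in the paper they are estimated from the LES data by clustering.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical library of pre-computed (heat flux, moisture flux) profile pairs,
# e.g. cluster centroids from LES; random placeholders here (n_states x 2 x nz).
n_states, nz = 4, 20
flux_pairs = rng.normal(size=(n_states, 2, nz))

# Transition matrices conditioned on a discretized resolved-scale state
# (two illustrative regimes); every row sums to one.
transition = {
    "weakly_unstable": np.array([[0.80, 0.10, 0.05, 0.05],
                                 [0.30, 0.50, 0.10, 0.10],
                                 [0.20, 0.20, 0.50, 0.10],
                                 [0.20, 0.20, 0.20, 0.40]]),
    "strongly_unstable": np.array([[0.40, 0.30, 0.20, 0.10],
                                   [0.10, 0.40, 0.30, 0.20],
                                   [0.05, 0.15, 0.40, 0.40],
                                   [0.05, 0.05, 0.20, 0.70]]),
}

def step(state, resolved_regime):
    """Jump to the next Markov state conditioned on the resolved-scale regime,
    and return the associated turbulent heat/moisture flux profiles."""
    probs = transition[resolved_regime][state]
    new_state = rng.choice(n_states, p=probs)
    heat_flux, moisture_flux = flux_pairs[new_state]
    return new_state, heat_flux, moisture_flux

state = 0
for regime in ["weakly_unstable", "strongly_unstable", "strongly_unstable"]:
    state, wt, wq = step(state, regime)
    print(regime, "-> state", state)
```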

  7. Empirical parameterization of setup, swash, and runup

    USGS Publications Warehouse

    Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.

    2006-01-01

    Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a -17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
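
    The resulting parameterization is compact enough to state in a few lines of code. The sketch below follows the published Stockdon et al. (2006) expressions, including the dissipative limit for Iribarren numbers below 0.3.

```python
import numpy as np

def stockdon_r2(h0, t0, beta_f, g=9.81):
    """Empirical 2% exceedence runup of Stockdon et al. (2006).

    h0: deep-water significant wave height (m), t0: peak period (s),
    beta_f: foreshore beach slope."""
    l0 = g * t0**2 / (2.0 * np.pi)             # deep-water wavelength
    iribarren = beta_f / np.sqrt(h0 / l0)
    if iribarren < 0.3:                        # dissipative, infragravity-dominated
        return 0.043 * np.sqrt(h0 * l0)
    setup = 0.35 * beta_f * np.sqrt(h0 * l0)
    swash = np.sqrt(h0 * l0 * (0.563 * beta_f**2 + 0.004)) / 2.0
    return 1.1 * (setup + swash)

print(f"R2% = {stockdon_r2(h0=2.0, t0=10.0, beta_f=0.08):.2f} m")
```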

  8. Remote Sensing Protocols for Parameterizing an Individual, Tree-Based, Forest Growth and Yield Model

    DTIC Science & Technology

    2014-09-01

    [Indexed record fragments only; no abstract is available. The fragments cite: "Leaf-Off Tree Crowns in Small Footprint, High Sampling Density LIDAR Data from Eastern Deciduous Forests in North America," Remote Sensing of…; William A. 2003, "Crown-Diameter Prediction Models for 87 Species of Stand-Grown Trees in the Eastern United States," Southern Journal of Applied…; report ERDC/CERL TR-14-18, Base Facilities Environmental Quality, Remote Sensing Protocols for Parameterizing an Individual, Tree-Based Forest Growth and Yield Model.]

  9. The Grell-Freitas Convection Parameterization: Recent Developments and Applications Within the NASA GEOS Global Model

    NASA Technical Reports Server (NTRS)

    Freitas, Saulo R.; Grell, Georg; Molod, Andrea; Thompson, Matthew A.

    2017-01-01

    We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, mid, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over the land. Here, we briefly introduce the recent developments, implementation, and preliminary results of this parameterization in the NASA GEOS modeling system.

  10. Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution

    NASA Astrophysics Data System (ADS)

    Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike

    2011-04-01

    Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks: first, the reduction of the dose distribution to a histogram results in the loss of spatial information, and second, the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We used a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assessed its predictive power using data from the MRC RT01 trial (ISRCTN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a specifically low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse and resulted in AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable and nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had a higher predictive power than models based on standard DVHs, and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.

  11. Probabilistic modelling of overflow, surcharge and flooding in urban drainage using the first-order reliability method and parameterization of local rain series.

    PubMed

    Thorndahl, S; Willems, P

    2008-01-01

    Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic rainfall hyetograph with a Gaussian shape, parameterized by rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis of the failure probability estimation, together with a hydrodynamic simulation model that determines the failure conditions for each parameter set. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations, and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, without crucial influence on the modelling accuracy, FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
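
    The Gaussian rainstorm conceptualization can be sketched as below: a Gaussian pulse whose spread follows from the storm depth and peak intensity, rescaled so the integrated depth is conserved. This illustrates only the parameterization step, not the FORM iteration itself, and the example numbers are arbitrary.

```python
import numpy as np

def gaussian_hyetograph(depth_mm, duration_h, peak_mm_per_h, dt_h=0.1):
    """Synthetic rainstorm hyetograph: a Gaussian intensity pulse with the given
    depth, duration and peak intensity. The pulse is centred on the storm,
    truncated to its duration, and rescaled to conserve the total depth."""
    sigma = depth_mm / (peak_mm_per_h * np.sqrt(2.0 * np.pi))
    t = np.arange(0.0, duration_h + dt_h, dt_h)
    intensity = peak_mm_per_h * np.exp(-0.5 * ((t - duration_h / 2.0) / sigma) ** 2)
    intensity *= depth_mm / np.trapz(intensity, t)   # enforce the storm depth
    return t, intensity

t, i = gaussian_hyetograph(depth_mm=25.0, duration_h=6.0, peak_mm_per_h=12.0)
print(f"depth check: {np.trapz(i, t):.1f} mm, peak: {i.max():.1f} mm/h")
```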

  12. A Dynamically Computed Convective Time Scale for the Kain–Fritsch Convective Parameterization Scheme

    EPA Science Inventory

    Many convective parameterization schemes define a convective adjustment time scale τ as the time allowed for dissipation of convective available potential energy (CAPE). The Kain–Fritsch scheme defines τ based on an estimate of the advective time period for deep convection.

  13. Studying ventricular abnormalities in mild cognitive impairment with hyperbolic Ricci flow and tensor-based morphometry.

    PubMed

    Shi, Jie; Stonnington, Cynthia M; Thompson, Paul M; Chen, Kewei; Gutman, Boris; Reschke, Cole; Baxter, Leslie C; Reiman, Eric M; Caselli, Richard J; Wang, Yalin

    2015-01-01

    Mild Cognitive Impairment (MCI) is a transitional stage between normal aging and dementia and people with MCI are at high risk of progression to dementia. MCI is attracting increasing attention, as it offers an opportunity to target the disease process during an early symptomatic stage. Structural magnetic resonance imaging (MRI) measures have been the mainstay of Alzheimer's disease (AD) imaging research, however, ventricular morphometry analysis remains challenging because of its complicated topological structure. Here we describe a novel ventricular morphometry system based on the hyperbolic Ricci flow method and tensor-based morphometry (TBM) statistics. Unlike prior ventricular surface parameterization methods, hyperbolic conformal parameterization is angle-preserving and does not have any singularities. Our system generates a one-to-one diffeomorphic mapping between ventricular surfaces with consistent boundary matching conditions. The TBM statistics encode a great deal of surface deformation information that could be inaccessible or overlooked by other methods. We applied our system to the baseline MRI scans of a set of MCI subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI: 71 MCI converters vs. 62 MCI stable). Although the combined ventricular area and volume features did not differ between the two groups, our fine-grained surface analysis revealed significant differences in the ventricular regions close to the temporal lobe and posterior cingulate, structures that are affected early in AD. Significant correlations were also detected between ventricular morphometry, neuropsychological measures, and a previously described imaging index based on fluorodeoxyglucose positron emission tomography (FDG-PET) scans. This novel ventricular morphometry method may offer a new and more sensitive approach to study preclinical and early symptomatic stage AD.

  14. STUDYING VENTRICULAR ABNORMALITIES IN MILD COGNITIVE IMPAIRMENT WITH HYPERBOLIC RICCI FLOW AND TENSOR-BASED MORPHOMETRY

    PubMed Central

    Shi, Jie; Stonnington, Cynthia M.; Thompson, Paul M.; Chen, Kewei; Gutman, Boris; Reschke, Cole; Baxter, Leslie C.; Reiman, Eric M.; Caselli, Richard J.; Wang, Yalin

    2014-01-01

    Mild Cognitive Impairment (MCI) is a transitional stage between normal aging and dementia and people with MCI are at high risk of progression to dementia. MCI is attracting increasing attention, as it offers an opportunity to target the disease process during an early symptomatic stage. Structural magnetic resonance imaging (MRI) measures have been the mainstay of Alzheimer’s disease (AD) imaging research, however, ventricular morphometry analysis remains challenging because of its complicated topological structure. Here we describe a novel ventricular morphometry system based on the hyperbolic Ricci flow method and tensor-based morphometry (TBM) statistics. Unlike prior ventricular surface parameterization methods, hyperbolic conformal parameterization is angle-preserving and does not have any singularities. Our system generates a one-to-one diffeomorphic mapping between ventricular surfaces with consistent boundary matching conditions. The TBM statistics encode a great deal of surface deformation information that could be inaccessible or overlooked by other methods. We applied our system to the baseline MRI scans of a set of MCI subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI: 71 MCI converters vs. 62 MCI stable). Although the combined ventricular area and volume features did not differ between the two groups, our fine-grained surface analysis revealed significant differences in the ventricular regions close to the temporal lobe and posterior cingulate, structures that are affected early in AD. Significant correlations were also detected between ventricular morphometry, neuropsychological measures, and a previously described imaging index based on fluorodeoxyglucose positron emission tomography (FDG-PET) scans. This novel ventricular morphometry method may offer a new and more sensitive approach to study preclinical and early symptomatic stage AD. PMID:25285374

  15. Prototype MCS Parameterization for Global Climate Models

    NASA Astrophysics Data System (ADS)

    Moncrieff, M. W.

    2017-12-01

    Excellent progress has been made with observational, numerical and theoretical studies of MCS processes, but the parameterization of those processes remains in a dire state and is missing from GCMs. The perceived complexity of the distribution, type, and intensity of organized precipitation systems has arguably discouraged attention and stifled the development of adequate parameterizations. TRMM observations imply links between convective organization and large-scale meteorological features in the tropics and subtropics that are inadequately treated by GCMs. This calls for an improved physical-dynamical treatment of organized convection to enable the next generation of GCMs to reliably address a slew of challenges. The multiscale coherent structure parameterization (MCSP) paradigm is based on the fluid-dynamical concept of coherent structures in turbulent environments. The effects of vertical shear on MCS dynamics, implemented as 2nd-baroclinic convective heating and convective momentum transport, are based on Lagrangian conservation principles, nonlinear dynamical models, and self-similarity. The prototype MCS parameterization, a minimalist proof-of-concept, is applied in the NCAR Community Climate Model, Version 5.5 (CAM 5.5). The MCSP generates convectively coupled tropical waves and large-scale precipitation features, notably in the Indo-Pacific warm-pool and Maritime Continent region, a center of action for weather and climate variability around the globe.

  16. Exploring Several Methods of Groundwater Model Selection

    NASA Astrophysics Data System (ADS)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using the following four approaches: (1) ranking the models by their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) calculating model probability using the GLUE method, (3) evaluating model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) evaluating model weights using the fuzzy multi-criteria decision-making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in the model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting the appropriate groundwater flow models. These methods selected as the best model the one with average complexity (10 parameters) and the best parameter estimation (model 3).
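
    The information-criterion branch of such a comparison is easy to sketch. Below, AIC, AICc, and BIC are computed from least-squares calibration results and converted to Akaike-type model weights; KIC would additionally require the parameter covariance (Fisher information), and the six (SSE, parameter-count) pairs are hypothetical, not the study's values.

```python
import numpy as np

def information_criteria(sse, n_obs, k_params):
    """AIC, AICc and BIC from a least-squares calibration (Gaussian errors)."""
    aic = n_obs * np.log(sse / n_obs) + 2.0 * k_params
    aicc = aic + 2.0 * k_params * (k_params + 1.0) / (n_obs - k_params - 1.0)
    bic = n_obs * np.log(sse / n_obs) + k_params * np.log(n_obs)
    return aic, aicc, bic

def model_weights(criterion_values):
    """Akaike-type weights: w_i = exp(-0.5*delta_i) / sum_j exp(-0.5*delta_j)."""
    delta = criterion_values - np.min(criterion_values)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Six hypothetical models: (SSE, number of parameters), 50 head observations.
models = [(12.0, 6), (9.5, 10), (9.8, 10), (9.0, 13), (9.2, 13), (8.9, 15)]
bics = np.array([information_criteria(sse, 50, k)[2] for sse, k in models])
print(np.round(model_weights(bics), 3))
```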

  17. Impact of climate seasonality on catchment yield: A parameterization for commonly-used water balance formulas

    NASA Astrophysics Data System (ADS)

    de Lavenne, Alban; Andréassian, Vazken

    2018-03-01

    This paper examines the hydrological impact of the seasonality of precipitation and maximum evaporation: seasonality is, after aridity, a second-order determinant of catchment water yield. Based on a data set of 171 French catchments (where aridity ranged between 0.2 and 1.2), we present a parameterization of three commonly used water balance formulas (namely the Turc-Mezentsev, Tixeront-Fu and Oldekop formulas) to account for seasonality effects. We quantify the improvement of seasonality-based parameterization in terms of the reconstitution of both catchment streamflow and water yield. The significant improvement obtained (reduction of RMSE between 9 and 14% depending on the formula) demonstrates the importance of climate seasonality in the determination of long-term catchment water balance.
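
    For reference, the three formulas named above can be written down directly. The sketch below uses fixed shape parameters, whereas the paper's contribution is precisely to parameterize those shapes as functions of climate seasonality.

```python
import numpy as np

def turc_mezentsev(p, pet, n=2.0):
    """Long-term actual evaporation: E = P*PET / (P^n + PET^n)^(1/n)."""
    return p * pet / (p**n + pet**n) ** (1.0 / n)

def tixeront_fu(p, pet, omega=2.6):
    """Fu's form: E/P = 1 + PET/P - (1 + (PET/P)^omega)^(1/omega)."""
    phi = pet / p
    return p * (1.0 + phi - (1.0 + phi**omega) ** (1.0 / omega))

def oldekop(p, pet):
    """Oldekop: E = PET * tanh(P/PET)."""
    return pet * np.tanh(p / pet)

# Catchment yield Q = P - E for a humid and a semi-arid example (mm/yr).
for p, pet in [(1000.0, 600.0), (500.0, 900.0)]:
    print([round(p - f(p, pet)) for f in (turc_mezentsev, tixeront_fu, oldekop)])
```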

  18. Fundamental statistical relationships between monthly and daily meteorological variables: Temporal downscaling of weather based on a global observational dataset

    NASA Astrophysics Data System (ADS)

    Sommer, Philipp; Kaplan, Jed

    2016-04-01

    Accurate modelling of large-scale vegetation dynamics, hydrology, and other environmental processes requires meteorological forcing on daily timescales. While meteorological data with high temporal resolution are becoming increasingly available, simulations of the future or distant past are limited by lack of data and by the poor performance of climate models, e.g., in simulating daily precipitation. To overcome these limitations, monthly summary data can be temporally downscaled to a daily time step using a weather generator. Parameterization of such statistical models has traditionally been based on a limited number of observations. Recent developments in the archiving, distribution, and analysis of "big data" datasets provide new opportunities for the parameterization of a temporal downscaling model that is applicable over a wide range of climates. Here we parameterize a WGEN-type weather generator using more than 50 million individual daily meteorological observations from over 10,000 stations covering all continents, based on the Global Historical Climatology Network (GHCN) and Synoptic Cloud Reports (EECRA) databases. Using the resulting "universal" parameterization and driven by monthly summaries, we downscale mean temperature (minimum and maximum), cloud cover, and total precipitation to daily estimates. We apply a hybrid gamma-generalized Pareto distribution to calculate daily precipitation amounts, which overcomes much of the inability of earlier weather generators to simulate high daily precipitation amounts. Our globally parameterized weather generator has numerous applications, including vegetation and crop modelling for paleoenvironmental studies.
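
    A minimal sketch of the hybrid gamma-generalized Pareto idea for wet-day amounts: draw from a gamma body and re-draw amounts above a high threshold from a generalized Pareto tail. All distribution parameters below are placeholders, not the fitted GHCN/EECRA parameterization.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def daily_precip(n, gshape=0.8, gscale=6.0, thresh=25.0, xi=0.2, pscale=8.0):
          """Wet-day precipitation (mm): gamma body, GPD tail above thresh."""
          x = stats.gamma.rvs(gshape, scale=gscale, size=n, random_state=rng)
          tail = x > thresh
          x[tail] = thresh + stats.genpareto.rvs(
              xi, scale=pscale, size=tail.sum(), random_state=rng)
          return x

      sample = daily_precip(10000)
      print(sample.mean(), sample.max())   # heavier tail than a pure gamma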

  19. Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2015-12-01

    Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flows. Because of this, turbulence and convection in the atmosphere have to be parameterized - i.e., equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently, a variety of models have been developed that attempt to simulate the atmosphere using variable resolution. A key problem, however, is that parameterizations are in general not explicitly aware of the resolution - the scale-awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, which is not only a multi-scale parameterization in itself but is also particularly well suited to dealing with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.
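
    The EDMF decomposition itself is compact: the subgrid vertical flux of a scalar phi is written as an eddy-diffusivity term plus a mass-flux term, w'phi' = -K dphi/dz + M (phi_u - phi_mean). A single-plume sketch on a 1D grid, with illustrative profiles and coefficients:

      import numpy as np

      def edmf_flux(z, phi_mean, phi_updraft, K, M):
          """Subgrid flux: w'phi' = -K dphi/dz + M (phi_u - phi_mean)."""
          dphi_dz = np.gradient(phi_mean, z)
          return -K * dphi_dz + M * (phi_updraft - phi_mean)

      z = np.linspace(0.0, 1500.0, 151)             # height (m)
      theta = 300.0 + 0.003 * z                     # mean potential temperature (K)
      theta_u = theta + 0.5 * np.exp(-z / 800.0)    # assumed warm plume excess (K)
      K = 10.0 * np.ones_like(z)                    # eddy diffusivity (m2/s), assumed
      M = 0.03 * np.ones_like(z)                    # plume mass flux (m/s), assumed
      flux = edmf_flux(z, theta, theta_u, K, M)     # heat flux profile (K m/s)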

  20. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    PubMed

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application to optimal control is illustrated. The orthonormally parameterized input trajectories, initial states, and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to map from the input domain to the output domain. The fuzzy model is employed to formulate an optimal control problem for single-rate as well as multi-rate systems. A simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach captures the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately; the results of operating-trajectory optimization using the proposed model are comparable to those obtained using the exact first-principles model, and comparable to or better than optimization results based on a parameterized data-driven artificial neural network model. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Value of eddy-covariance data for individual-based, forest gap models

    NASA Astrophysics Data System (ADS)

    Roedig, Edna; Cuntz, Matthias; Huth, Andreas

    2014-05-01

    Individual-based forest gap models simulate tree growth and carbon fluxes on large time scales. They are a well-established tool to predict forest dynamics and succession. However, the effect of climatic variables on processes in such individual-based models is uncertain (e.g., the effect of temperature or soil moisture on gross primary production (GPP)). Commonly, functional relationships and parameter values that describe the effect of climate variables on model processes are gathered from various vegetation models of different spatial scales, yet their accuracy and parameter values have not been validated at the specific scales of individual-based forest gap models. In this study, we address this uncertainty by linking eddy-covariance (EC) data and a forest gap model. The forest gap model FORMIND is applied to the Norway spruce monoculture at Wetzstein in Thuringia, Germany, for the years 2003-2008. The original parameterizations of the climatic functions are adapted according to the EC data. The time step of the model is reduced to one day in order to match the high-resolution EC data. The FORMIND model uses functional relationships at the individual level, whereas the EC method measures eco-physiological responses at the ecosystem level; however, we assume that in homogeneous stands such as ours, functional relationships for both methods are comparable. The model is then validated at the spruce forest Waldstein, Germany. Results show that the functional relationships used in the model are similar to those observed with the EC method. The temperature reduction curve is well reflected in the EC data, though parameter values differ from the originally expected values; for example, at the freezing point the observed GPP is 30% higher than predicted by the forest gap model. The response of observed GPP to soil moisture shows that the permanent wilting point is 7 vol-% lower than the value derived from the literature. The light response curve, integrated over the canopy and the forest stand, is underestimated compared to the measured data. The EC method yields a yearly carbon balance of 13 mol(CO2) m-2 for the Wetzstein site. The model with the original parameterization overestimates the yearly carbon balance by nearly 5 mol(CO2) m-2, while the model with an EC-based parameterization fits the measured data very well. The parameter values derived from EC data are applied to the spruce forest Waldstein and clearly improve estimates of the carbon balance.
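
    As an illustration of adapting such climatic functions to EC data, the sketch below fits a rectangular-hyperbola light-response curve, GPP = alpha*I*GPPmax / (alpha*I + GPPmax), to GPP estimates; the functional form is a common choice and the synthetic data are placeholders, since the abstract does not state FORMIND's exact formulation.

      import numpy as np
      from scipy.optimize import curve_fit

      def light_response(I, alpha, gpp_max):
          """Rectangular hyperbola: GPP as a function of light I."""
          return alpha * I * gpp_max / (alpha * I + gpp_max)

      # Synthetic stand-in for EC-derived half-hourly GPP observations
      rng = np.random.default_rng(1)
      I = rng.uniform(0.0, 1500.0, 200)              # PAR (umol m-2 s-1)
      gpp = light_response(I, 0.04, 20.0) + rng.normal(0.0, 1.0, I.size)

      (alpha, gpp_max), _ = curve_fit(light_response, I, gpp, p0=[0.05, 15.0])
      print(alpha, gpp_max)   # parameter values to carry into the gap model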

  2. Newton Algorithms for Analytic Rotation: An Implicit Function Approach

    ERIC Educational Resources Information Center

    Boik, Robert J.

    2008-01-01

    In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…

  3. Parameterization of the Voice Source by Combining Spectral Decay and Amplitude Features of the Glottal Flow.

    ERIC Educational Resources Information Center

    Alku, Paavo; Vilkman, Erkki; Laukkanen, Anne-Maria

    1998-01-01

    A new method is presented for the parameterization of glottal volume velocity waveforms that have been estimated by inverse filtering acoustic speech pressure signals. The new technique combines two features of voice production: the AC value and the spectral decay of the glottal flow. Testing found the new parameter correlates strongly with the…

  4. Parameter Estimation and Sensitivity Analysis of an Urban Surface Energy Balance Parameterization at a Tropical Suburban Site

    NASA Astrophysics Data System (ADS)

    Harshan, S.; Roth, M.; Velasco, E.

    2014-12-01

    Forecasting of urban weather and climate is of great importance as our cities become more populated and considering the combined effects of global warming and local land use changes, which make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso- and global-scale models, urban parameterization schemes are used to represent urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties, and obtaining all of them through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. To address these issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters; thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol' global variance decomposition method. The analysis showed that parameters related to roads, roofs, and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in the model physics.
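
    A screening workflow of this kind can be sketched with the SALib package; the three TEB-style parameter names and bounds below are placeholders, and run_teb is a hypothetical stand-in for a wrapper around an actual TEB/SURFEX run.

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,   # placeholder subset of the TEB input parameters
          "names": ["roof_albedo", "road_thermal_cond", "soil_moisture_init"],
          "bounds": [[0.1, 0.5], [0.5, 2.0], [0.1, 0.4]],
      }

      def run_teb(x):
          """Hypothetical model wrapper returning a scalar performance metric."""
          return x[0] + 2.0 * x[1] ** 2 + np.sin(x[2])   # toy response

      X = saltelli.sample(problem, 1024)       # Sobol' sampling design
      Y = np.array([run_teb(row) for row in X])
      Si = sobol.analyze(problem, Y)           # first-order and total-order indices
      print(Si["S1"], Si["ST"])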

  5. Anisotropic shear dispersion parameterization for ocean eddy transport

    NASA Astrophysics Data System (ADS)

    Reckinger, Scott; Fox-Kemper, Baylor

    2015-11-01

    The effects of mesoscale eddies are universally treated isotropically in global ocean general circulation models. However, observations and simulations demonstrate that the mesoscale processes the parameterization is intended to represent, such as shear dispersion, are typified by strong anisotropy. We extend the Gent-McWilliams/Redi mesoscale eddy parameterization to include anisotropy and test the effects of varying levels of anisotropy in 1-degree Community Earth System Model (CESM) simulations. Anisotropy has many effects on the simulated climate, including a reduction of temperature and salinity biases, a deepening of the Southern Ocean mixed-layer depth, impacts on the meridional overturning circulation and on ocean energy and tracer uptake, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. A process-based parameterization approximating the effects of unresolved shear dispersion is also used to set the strength and direction of the anisotropy. The shear dispersion parameterization agrees with drifter observations in the spatial distribution of diffusivity, and with high-resolution model diagnoses in the distribution of eddy flux orientation.

  6. Optimal lattice-structured materials

    DOE PAGES

    Messner, Mark C.

    2016-07-09

    This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.

  7. Evaluating and Improving Wind Forecasts over South China: The Role of Orographic Parameterization in the GRAPES Model

    NASA Astrophysics Data System (ADS)

    Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia

    2018-06-01

    Unresolved small-scale orographic (SSO) drag is parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drag is represented by adding a sink term to the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as feedbacks to the momentum tendencies at the first model level in the planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain located in the southwest of Guangdong. The verification results show that the surface wind speed bias is much alleviated by adopting the SSOP scheme, in addition to a reduction of the wind bias in the lower troposphere. The targeted verification over Xinyi shows that simulations with the SSOP scheme provide improved wind estimates over the complex terrain in the southwest of Guangdong.

  8. The zonally averaged transport characteristics of the atmosphere as determined by a general circulation model

    NASA Technical Reports Server (NTRS)

    Plumb, R. A.

    1985-01-01

    Two-dimensional modeling has become an established technique for the simulation of the global structure of trace constituents. Such models are simpler to formulate and cheaper to operate than three-dimensional general circulation models, while avoiding some of the gross simplifications of one-dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory and that a properly based parameterization may provide a good basis for atmospheric modeling. The existence of GCM-derived transport coefficients affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flux-gradient parameterization, and thus the basic principle of 2-D modeling.

  9. CloudSat 2C-ICE product update with a new Ze parameterization in lidar-only region.

    PubMed

    Deng, Min; Mace, Gerald G; Wang, Zhien; Berry, Elizabeth

    2015-12-16

    The CloudSat 2C-ICE data product is derived from a synergistic ice cloud retrieval algorithm that takes as input a combination of CloudSat radar reflectivity (Ze) and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation lidar attenuated backscatter profiles. The algorithm uses a variational method for retrieving profiles of visible extinction coefficient, ice water content, and ice particle effective radius in ice or mixed-phase clouds. Because of the nature of the measurements and to maintain consistency in the algorithm numerics, we choose to parameterize (with an appropriately large specification of uncertainty) Ze and lidar attenuated backscatter in the regions of a cirrus layer where only the lidar provides data and where only the radar provides data, respectively. To improve the Ze parameterization in the lidar-only region, the relations among Ze, extinction, and temperature have been more thoroughly investigated using Atmospheric Radiation Measurement long-term millimeter cloud radar and Raman lidar measurements. This Ze parameterization provides a first-order estimate of Ze as a function of extinction and temperature in the lidar-only regions of cirrus layers. The effects of this new parameterization have been evaluated for consistency using radiation closure methods, where the radiative fluxes derived from retrieved cirrus profiles compare favorably with Clouds and the Earth's Radiant Energy System measurements. Results will be made publicly available for the entire CloudSat record (since 2006) in the most recent product release, known as R05.

  10. Dynamic optimization of open-loop input signals for ramp-up current profiles in tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Ren, Zhigang; Xu, Chao; Lin, Qun; Loxton, Ryan; Teo, Kok Lay

    2016-03-01

    Establishing a good current spatial profile in tokamak fusion reactors is crucial to effective steady-state operation. The evolution of the current spatial profile is related to the evolution of the poloidal magnetic flux, which can be modeled in the normalized cylindrical coordinates using a parabolic partial differential equation (PDE) called the magnetic diffusion equation. In this paper, we consider the dynamic optimization problem of attaining the best possible current spatial profile during the ramp-up phase of the tokamak. We first use the Galerkin method to obtain a finite-dimensional ordinary differential equation (ODE) model based on the original magnetic diffusion PDE. Then, we combine the control parameterization method with a novel time-scaling transformation to obtain an approximate optimal parameter selection problem, which can be solved using gradient-based optimization techniques such as sequential quadratic programming (SQP). This control parameterization approach involves approximating the tokamak input signals by piecewise-linear functions whose slopes and break-points are decision variables to be optimized. We show that the gradient of the objective function with respect to the decision variables can be computed by solving an auxiliary dynamic system governing the state sensitivity matrix. Finally, we conclude the paper with simulation results for an example problem based on experimental data from the DIII-D tokamak in San Diego, California.
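
    The control parameterization idea can be sketched on a toy scalar system: approximate the input by a piecewise-linear function of time and let an SQP solver tune it. For brevity the break-points are fixed and gradients come from SLSQP's finite differences, whereas the paper also optimizes break-points via a time-scaling transformation and computes gradients from a sensitivity system; the dynamics, horizon, target, and bounds below are illustrative, not the magnetic diffusion PDE.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import minimize

      t_nodes = np.linspace(0.0, 1.0, 6)        # fixed break-points (assumption)

      def simulate(u_nodes):
          """Integrate x' = -x + u(t) with piecewise-linear input u."""
          u = lambda t: np.interp(t, t_nodes, u_nodes)
          sol = solve_ivp(lambda t, x: -x + u(t), (0.0, 1.0), [0.0], rtol=1e-8)
          return sol.y[0, -1]                   # terminal state

      target = 0.5                              # desired terminal value (toy)
      objective = lambda u_nodes: (simulate(u_nodes) - target) ** 2

      res = minimize(objective, x0=np.zeros(6), method="SLSQP",
                     bounds=[(0.0, 2.0)] * 6)   # actuator limits (assumption)
      print(res.x, simulate(res.x))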

  11. The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures

    NASA Technical Reports Server (NTRS)

    Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)

    2000-01-01

    The microphysical parameterization of clouds and rain cells plays a central role in the atmospheric forward radiative transfer models used to calculate passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified, scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the raindrop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to 55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that, in general, all but two parameterizations produce calculated TB's that fall within the observed clear-air minima and maxima. The exceptions are parameterizations that enhance the scattering characteristics of frozen hydrometeors.

  12. Adaptive Aft Signature Shaping of a Low-Boom Supersonic Aircraft Using Off-Body Pressures

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Li, Wu

    2012-01-01

    The design and optimization of a low-boom supersonic aircraft using state-of-the-art off-body aerodynamics and sonic boom analysis has long been a challenging problem. The focus of this paper is to demonstrate an effective geometry parameterization scheme and a numerical optimization approach for the aft shaping of a low-boom supersonic aircraft using off-body pressure calculations. A gradient-based numerical optimization algorithm that models the objective and constraints as response surface equations is used to drive the aft ground signature toward a ramp shape. The design objective is the minimization of the variation between the ground signature and the target signature, subject to several geometric and signature constraints. The target signature is computed by using a least-squares regression of the aft portion of the ground signature. The parameterization and deformation of the geometry are performed with a NASA in-house shaping tool. The optimization algorithm uses the shaping tool to drive the geometric deformation of a horizontal tail with a parameterization scheme that consists of seven camber design variables and an additional design variable that describes the spanwise location of the midspan section. The demonstration cases show that numerical optimization using state-of-the-art off-body aerodynamic calculations is not only feasible and repeatable but also allows the exploration of complex design spaces for which a knowledge-based design method becomes less effective.

  13. Modeling particle nucleation and growth over northern California during the 2010 CARES campaign

    NASA Astrophysics Data System (ADS)

    Lupascu, A.; Easter, R.; Zaveri, R.; Shrivastava, M.; Pekour, M.; Tomlinson, J.; Yang, Q.; Matsui, H.; Hodzic, A.; Zhang, Q.; Fast, J. D.

    2015-07-01

    Accurate representation of the aerosol lifecycle requires adequate modeling of the particle number concentration and size distribution in addition to their mass, which is often the focus of aerosol modeling studies. This paper compares particle number concentrations and size distributions as predicted by three empirical nucleation parameterizations in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) using 20 discrete size bins ranging from 1 nm to 10 μm. Two of the parameterizations are based on H2SO4, while one is based on both H2SO4 and organic vapors. Budget diagnostic terms for transport, dry deposition, emissions, condensational growth, nucleation, and coagulation of aerosol particles have been added to the model and are used to analyze the differences in how the new particle formation parameterizations influence the evolving aerosol size distribution. The simulations are evaluated using measurements collected at surface sites and from a research aircraft during the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California. While all three parameterizations captured the temporal variation of the size distribution during observed nucleation events as well as the spatial variability in aerosol number, all overestimated the total particle number concentration for particle diameters greater than 10 nm by up to a factor of 2.5. Using the budget diagnostic terms, we demonstrate that the combined H2SO4 and low-volatility organic vapors parameterization leads to a different diurnal variability of new particle formation and growth to larger sizes compared to the parameterizations based on only H2SO4. At the CARES urban ground site, peak nucleation rates were predicted to occur around 12:00 Pacific (local) standard time (PST) for the H2SO4 parameterizations, whereas the highest rates were predicted at 08:00 and 16:00 PST when low-volatility organic gases are included in the parameterization. This can be explained by higher anthropogenic emissions of organic vapors at these times as well as lower boundary layer heights that reduce vertical mixing. The higher nucleation rates in the H2SO4-organic parameterization at these times were largely offset by losses due to coagulation. Despite the different budget terms for ultrafine particles, the 10-40 nm diameter particle number concentrations from all three parameterizations increased from 10:00 to 14:00 PST and then decreased later in the afternoon, consistent with changes in the observed size and number distribution. Differences among the three simulations for the 40-100 nm particle diameter range are mostly associated with the timing of the peak total tendencies that shift the morning increase and afternoon decrease in particle number concentration by up to two hours. We found that newly formed particles could explain up to 20-30% of predicted cloud condensation nuclei at 0.5% supersaturation, depending on location and the specific nucleation parameterization. A sensitivity simulation using 12 discrete size bins ranging from 1 nm to 10 μm diameter gave a reasonable estimate of particle number and size distribution compared to the 20 size bin simulation, while reducing the associated computational cost by ~36%.

  14. NEW FRONTIERS IN DRUGGABILITY

    PubMed Central

    Kozakov, Dima; Hall, David R.; Napoleon, Raeanne L.; Yueh, Christine; Whitty, Adrian; Vajda, Sandor

    2016-01-01

    A powerful early approach to evaluating the druggability of proteins involved determining the hit rate in NMR-based screening of a library of small compounds. Here we show that a computational analog of this method, based on mapping proteins using small molecules as probes, can reliably reproduce druggability results from NMR-based screening, and can provide a more meaningful assessment in cases where the two approaches disagree. We apply the method to a large set of proteins. The results show that, because the method is based on the biophysics of binding rather than on empirical parameterization, meaningful information can be gained about classes of proteins and classes of compounds beyond those resembling validated targets and conventionally druglike ligands. In particular, the method identifies targets that, while not druggable by druglike compounds, may become druggable using compound classes such as macrocycles or other large molecules beyond the rule-of-five limit. PMID:26230724

  15. Exploring Stratocumulus Cloud-Top Entrainment Processes and Parameterizations by Using Doppler Cloud Radar Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albrecht, Bruce; Fang, Ming; Ghate, Virendra

    2016-02-01

    Observations from an upward-pointing Doppler cloud radar are used to examine cloud-top entrainment processes and parameterizations in a non-precipitating continental stratocumulus cloud deck maintained by time-varying surface buoyancy fluxes and cloud-top radiative cooling. Radar and ancillary observations of unbroken, non-precipitating stratocumulus clouds were made at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site near Lamont, Oklahoma, for a 14-hour period starting 0900 Central Standard Time on 25 March 2005. The vertical velocity variance and energy dissipation rate (EDR) terms in a parameterized turbulence kinetic energy (TKE) budget of the entrainment zone are estimated using the radar vertical velocity and spectrum width observations from the upward-pointing millimeter cloud radar (MMCR) operating at the SGP site. Hourly averages of the vertical velocity variance term in the TKE entrainment formulation correlate strongly (r=0.72) with the dissipation rate term in the entrainment zone. However, the ratio of the variance term to the dissipation decreases at night due to decoupling of the boundary layer. When the night-time decoupling is accounted for, the correlation between the variance and the EDR term increases (r=0.92). To obtain bulk coefficients for the entrainment parameterizations derived from the TKE budget, independent estimates of entrainment were obtained from an inversion-height budget using ARM SGP observations of the local time derivative and the horizontal advection of the cloud-top height. The large-scale vertical velocity at the inversion needed for this budget was taken from ECMWF reanalysis. This budget gives a mean entrainment rate for the observing period of 0.76±0.15 cm/s. This mean value is applied to the TKE budget parameterizations to obtain the bulk coefficients needed in these parameterizations. These bulk coefficients are compared with those from previous studies and are used in the parameterizations to give hourly estimates of the entrainment rates from the radar-derived vertical velocity variance and dissipation rates. Hourly entrainment rates were also estimated from a convective velocity (w*) parameterization that depends on the local surface buoyancy fluxes and the calculated radiative flux divergence, using a bulk coefficient obtained from the mean inversion-height budget. The hourly rates from the cloud turbulence estimates and the w* parameterization, which is independent of the radar observations, are compared with the hourly w_e values from the budget. All show rough agreement with each other and capture the entrainment variability associated with substantial changes in the surface flux and radiative divergence at cloud top. Major uncertainties in the hourly estimates from the height budget and w* are discussed. The results indicate a strong potential for making entrainment rate estimates directly from the radar vertical velocity variance and EDR measurements - a technique that has distinct advantages over other methods for estimating entrainment rates. Calculations based on the EDR alone can provide high temporal resolution (for averaging intervals as small as 10 minutes) of the entrainment processes and do not require an estimate of the boundary layer depth, which can be difficult to define when the boundary layer is decoupled.

  16. New Parameterizations for Neutral and Ion-Induced Sulfuric Acid-Water Particle Formation in Nucleation and Kinetic Regimes

    NASA Astrophysics Data System (ADS)

    Määttänen, Anni; Merikanto, Joonas; Henschel, Henning; Duplissy, Jonathan; Makkonen, Risto; Ortega, Ismael K.; Vehkamäki, Hanna

    2018-01-01

    We have developed new parameterizations of electrically neutral homogeneous and ion-induced sulfuric acid-water particle formation for large ranges of environmental conditions, based on an improved model that has been validated against a particle formation rate data set produced by Cosmics Leaving OUtdoor Droplets (CLOUD) experiments at the European Organization for Nuclear Research (CERN). The model uses a thermodynamically consistent version of the Classical Nucleation Theory normalized using quantum chemical data. Unlike the earlier parameterizations for H2SO4-H2O nucleation, the model is applicable to extremely dry conditions where the one-component sulfuric acid limit is approached. Parameterizations are presented for the critical cluster sulfuric acid mole fraction, the critical cluster radius, the total number of molecules in the critical cluster, and the particle formation rate. If the critical cluster contains only one sulfuric acid molecule, a simple formula for kinetic particle formation can be used; this threshold has also been parameterized. The parameterization for electrically neutral particle formation is valid for the following ranges: temperatures 165-400 K, sulfuric acid concentrations 10^4-10^13 cm-3, and relative humidities 0.001-100%. The ion-induced particle formation parameterization is valid for temperatures 195-400 K, sulfuric acid concentrations 10^4-10^16 cm-3, and relative humidities 10^-5-100%. The new parameterizations are thus applicable for the full range of conditions in the Earth's atmosphere relevant for binary sulfuric acid-water particle formation, including both tropospheric and stratospheric conditions. They are also suitable for describing particle formation in the atmosphere of Venus.

  17. Parameterized isoprene and monoterpene emissions from the boreal forest floor: Implementation into a 1D chemistry-transport model and investigation of the influence on atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Mogensen, Ditte; Aaltonen, Hermanni; Aalto, Juho; Bäck, Jaana; Kieloaho, Antti-Jussi; Gierens, Rosa; Smolander, Sampo; Kulmala, Markku; Boy, Michael

    2015-04-01

    Volatile organic compounds (VOCs) are emitted from the biosphere and can act as precursor gases for aerosol particles that affect the climate (e.g., Makkonen et al., ACP, 2012). VOC emissions from needles and leaves have received the most attention, but other parts of the ecosystem can also emit substantial amounts of VOCs. This often-neglected source can be important, e.g., during periods when leaves are absent. Knowledge of both the sources and the drivers of forest floor VOC emissions is currently limited. The sources are thought to be mainly degradation of organic matter (Isidorov and Jdanova, Chemosphere, 2002), living roots (Asensio et al., Soil Biol. Biochem., 2008), and ground vegetation. The drivers are biotic (e.g., microbes) and abiotic (e.g., temperature and moisture). However, the relative importance of the individual sources and drivers is currently poorly understood, and it depends strongly on the tree species occupying the area of interest. The emissions of isoprene and monoterpenes were measured from the boreal forest floor at the SMEAR II station in southern Finland (Hari and Kulmala, Boreal Env. Res., 2005) during the snow-free periods of 2010-2012, using a dynamic method with three automated chambers analyzed by a proton transfer reaction mass spectrometer (Aaltonen et al., Plant Soil, 2013). Using these data, we have developed empirical parameterizations for the emission of isoprene and monoterpenes from the forest floor. These parameterizations depend on abiotic factors; however, since they are based on field measurements, biotic features are implicitly captured. Further, we have used the 1D chemistry-transport model SOSAA (Boy et al., ACP, 2011) to test the seasonal importance of these forest-floor emissions, relative to canopy-crown emissions, for atmospheric reactivity throughout the canopy.
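
    Empirical emission parameterizations of this kind often take the standard exponential temperature form of Guenther et al., E = E_s * exp(beta * (T - T_s)). A sketch with placeholder coefficients; the paper's fitted forest-floor values are not reproduced here.

      import numpy as np

      def monoterpene_emission(T, E_s=1.0, beta=0.09, T_s=303.15):
          """Guenther-type exponential temperature algorithm.

          T    : soil/litter temperature (K)
          E_s  : emission rate at the reference temperature T_s (placeholder)
          beta : temperature sensitivity (1/K); 0.09 is a typical literature value
          """
          return E_s * np.exp(beta * (T - T_s))

      T = np.array([278.15, 288.15, 298.15])    # 5, 15, and 25 deg C
      print(monoterpene_emission(T))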

  18. Gravity Waves Generated by Convection: A New Idealized Model Tool and Direct Validation with Satellite Observations

    NASA Astrophysics Data System (ADS)

    Alexander, M. Joan; Stephan, Claudia

    2015-04-01

    In climate models, gravity waves remain too poorly resolved to be directly modelled. Instead, simplified parameterizations are used to include gravity wave effects on model winds. A few climate models link some of the parameterized waves to convective sources, providing a mechanism for feedback between changes in convection and gravity wave-driven changes in circulation in the tropics and above high-latitude storms. These convective wave parameterizations are based on limited case studies with cloud-resolving models, they are poorly constrained by observational validation, and their tuning parameters have large uncertainties. Our new work distills results from complex, full-physics cloud-resolving model studies to the essential variables for gravity wave generation. We use the Weather Research and Forecasting (WRF) model to study relationships between precipitation, latent heating/cooling, and other cloud properties and the spectrum of gravity wave momentum flux above midlatitude storm systems. Results show the gravity wave spectrum is surprisingly insensitive to the representation of microphysics in WRF. This is good news for the use of these models in gravity wave parameterization development, since microphysical properties are a key uncertainty. We further use the full-physics cloud-resolving model as a tool to directly link observed precipitation variability to gravity wave generation. We show that waves in an idealized model forced with radar-observed precipitation can quantitatively reproduce instantaneous satellite-observed features of the gravity wave field above storms, which is a powerful validation of our understanding of waves generated by convection. The idealized model directly links observations of surface precipitation to observed waves in the stratosphere, and the simplicity of the model permits deep, large-area domains for studies of wave-mean flow interactions. This unique validated model tool permits quantitative studies of gravity wave driving of regional circulation and provides a new method for future development of realistic convective gravity wave parameterizations.

  19. Teaching and communicating dispersion in hydrogeology, with emphasis on the applicability of the Fickian model

    NASA Astrophysics Data System (ADS)

    Kitanidis, P. K.

    2017-08-01

    The process of dispersion in porous media is the effect of combined variability in fluid velocity and concentration at scales smaller than the ones resolved; it contributes to spreading and mixing. It is usually introduced in textbooks and taught in classes through the Fick-Scheidegger parameterization, which is presented as a scientific law of universal validity. This parameterization is based on observations in bench-scale laboratory experiments using homogeneous media. Fickian means that the dispersive flux is proportional to the gradient of the resolved concentration, while the Scheidegger parameterization is a particular way to compute the dispersion coefficients. The unresolved scales are thus associated with the pore-grain geometry that is ignored when the composite pore-grain medium is replaced by a homogeneous continuum. However, the challenge faced in practice is how to account for dispersion in numerical models that discretize the domain into blocks, often cubic meters in size, that contain multiple geologic facies. Although the Fick-Scheidegger parameterization is by far the most commonly used, its validity has been questioned. This work presents a method of teaching dispersion that emphasizes the physical basis of dispersion and highlights the conditions under which a Fickian dispersion model is justified. In particular, we show that Fickian dispersion has a solid physical basis provided that an equilibrium condition is met. The issue of the Scheidegger parameterization is more complex, but it is shown that the approximation that the dispersion coefficients scale linearly with the mean velocity is often reasonable as a practical matter, though not always appropriate. In hydrogeology, the Scheidegger feature of constant dispersivity is generally regarded as a physical law inseparable from the Fickian model, but both perceptions are wrong. We also explain why Fickian dispersion fails under certain conditions, such as dispersion inside and directly upstream of a contaminant source. Other issues discussed are the relevance of column tests and confusion regarding the meaning of the terms 'dispersion' and 'Fickian'.
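
    For reference, the Fickian model and the Scheidegger parameterization discussed above are conventionally written as follows, with c the resolved concentration, D_m an effective molecular diffusion coefficient, and alpha_L, alpha_T the longitudinal and transverse dispersivities:

      \mathbf{J}_{\mathrm{disp}} = -\mathbf{D}\,\nabla c,
      \qquad
      D_L = \alpha_L\,|\bar{v}| + D_m,
      \qquad
      D_T = \alpha_T\,|\bar{v}| + D_m

    The constant-dispersivity assumption enters through alpha_L and alpha_T being treated as medium properties independent of the mean velocity.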

  20. Stellar Atmospheric Parameterization Based on Deep Learning

    NASA Astrophysics Data System (ADS)

    Pan, R. Y.; Li, X. R.

    2016-07-01

    Deep learning is a learning method widely studied in machine learning, pattern recognition, and artificial intelligence. This work investigates the stellar atmospheric parameterization problem by constructing a deep neural network with five layers. The proposed scheme is evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS) and theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for the effective temperature (T_{eff}/K), 0.0058 for lg (T_{eff}/K), 0.1706 for surface gravity (lg (g/(cm\cdot s^{-2}))), and 0.1294 dex for metallicity ([Fe/H]); on the theoretical spectra, the MAEs are 15.34 for T_{eff}/K, 0.0011 for lg (T_{eff}/K), 0.0214 for lg (g/(cm\cdot s^{-2})), and 0.0121 dex for [Fe/H].
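
    A five-layer fully connected network of the kind described can be sketched in PyTorch; the input width (number of spectral fluxes), hidden sizes, and training details below are assumptions, as the abstract does not specify the architecture.

      import torch
      import torch.nn as nn

      n_flux = 3000      # spectral pixels per spectrum (placeholder)

      model = nn.Sequential(                 # five weight layers
          nn.Linear(n_flux, 512), nn.ReLU(),
          nn.Linear(512, 256), nn.ReLU(),
          nn.Linear(256, 64), nn.ReLU(),
          nn.Linear(64, 16), nn.ReLU(),
          nn.Linear(16, 3),                  # Teff, lg g, [Fe/H]
      )

      loss_fn = nn.L1Loss()                  # trains toward low MAE
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)

      x = torch.randn(32, n_flux)            # dummy batch of spectra
      y = torch.randn(32, 3)                 # dummy atmospheric parameters
      opt.zero_grad()
      loss = loss_fn(model(x), y)
      loss.backward()
      opt.step()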

  1. Characterization of image heterogeneity using 2D Minkowski functionals increases the sensitivity of detection of a targeted MRI contrast agent.

    PubMed

    Canuto, Holly C; McLachlan, Charles; Kettunen, Mikko I; Velic, Marko; Krishnan, Anant S; Neves, André A; de Backer, Maaike; Hu, D-E; Hobson, Michael P; Brindle, Kevin M

    2009-05-01

    A targeted Gd(3+)-based contrast agent has been developed that detects tumor cell death by binding to the phosphatidylserine (PS) exposed on the plasma membrane of dying cells. Although this agent has been used to detect tumor cell death in vivo, the differences in signal intensity between treated and untreated tumors were relatively small. As cell death is often spatially heterogeneous within tumors, we investigated whether an image analysis technique that parameterizes heterogeneity could be used to increase the sensitivity of detection of this targeted contrast agent. Two-dimensional (2D) Minkowski functionals (MFs) provided an automated and reliable method for parameterization of image heterogeneity, which does not require prior assumptions about the number of regions or features in the image, and were shown to increase the sensitivity of detection of the contrast agent compared to simple signal intensity analysis. (c) 2009 Wiley-Liss, Inc.
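
    The three 2D Minkowski functionals of a binary (thresholded) image - area, perimeter, and Euler characteristic - can be computed by simple pixel counting. A minimal sketch using 4-connectivity for the Euler characteristic; the paper's full pipeline, including how the MR images are thresholded, is not reproduced here.

      import numpy as np

      def minkowski_2d(img):
          """Area, perimeter, and Euler characteristic of a binary image."""
          img = np.asarray(img, dtype=bool)
          F = int(img.sum())                                  # pixels (faces)
          adj = int(np.logical_and(img[:, 1:], img[:, :-1]).sum()
                    + np.logical_and(img[1:, :], img[:-1, :]).sum())
          perimeter = 4 * F - 2 * adj                         # exposed unit edges
          E = 4 * F - adj                                     # distinct edges
          p = np.pad(img, 1)
          corners = p[:-1, :-1] | p[:-1, 1:] | p[1:, :-1] | p[1:, 1:]
          V = int(corners.sum())                              # distinct vertices
          return F, perimeter, V - E + F                      # chi = V - E + F

      ring = np.ones((3, 3), dtype=bool)
      ring[1, 1] = False                                      # one hole
      print(minkowski_2d(ring))                               # (8, 16, 0)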

  2. Engine performance analysis and optimization of a dual-mode scramjet with varied inlet conditions

    NASA Astrophysics Data System (ADS)

    Tian, Lu; Chen, Li-Hong; Chen, Qiang; Zhong, Feng-Quan; Chang, Xin-Yu

    2016-02-01

    A dual-mode scramjet can operate in a wide range of flight conditions. Higher thrust can be generated by adopting suitable combustion modes. Based on the net thrust, an analysis and preliminary optimal design of a kerosene-fueled parameterized dual-mode scramjet at a crucial flight Mach number of 6 were investigated by using a modified quasi-one-dimensional method and simulated annealing strategy. Engine structure and heat release distributions, affecting the engine thrust, were chosen as analytical parameters for varied inlet conditions (isolator entrance Mach number: 1.5-3.5). Results show that different optimal heat release distributions and structural conditions can be obtained at five different inlet conditions. The highest net thrust of the parameterized dual-mode engine can be achieved by a subsonic combustion mode at an isolator entrance Mach number of 2.5. Additionally, the effects of heat release and scramjet structure on net thrust have been discussed. The present results and the developed analytical method can provide guidance for the design and optimization of high-performance dual-mode scramjets.

  3. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model.

    PubMed

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M; Phifer, Jeremy R; Paluch, Andrew S

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of [Formula: see text] log units (ranking 15 out of 62 entries), the correlation coefficient (R) was [Formula: see text] (ranking 35), and [Formula: see text] of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.

  4. Multiclass Data Segmentation Using Diffuse Interface Methods on Graphs

    DTIC Science & Technology

    2014-01-01

    interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph ... Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph-based energy function that ... over to the discrete graph representation. For general data segmentation, Bresson et al. in [8] present rigorous convergence results for two algorithms

  5. A new fractional snow-covered area parameterization for the Community Land Model and its effect on the surface energy balance

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Lawrence, D. M.

    2011-11-01

    One function of the Community Land Model (CLM4) is the determination of surface albedo in the Community Earth System Model (CESM1). Because the typical spatial scales of CESM1 simulations are large compared to the scales of variability of surface properties such as snow cover and vegetation, unresolved surface heterogeneity is parameterized. Fractional snow-covered area, or snow-covered fraction (SCF), within a CLM4 grid cell is parameterized as a function of grid cell mean snow depth and snow density. This parameterization is based on an analysis of monthly averaged SCF and snow depth that showed a seasonal shift in the snow depth-SCF relationship. In this paper, we show that this shift is an artifact of the monthly sampling and that the current parameterization does not reflect the relationship observed between snow depth and SCF at the daily time scale. We demonstrate that the snow depth analysis used in the original study exhibits a bias toward early melt when compared to satellite-observed SCF. This bias results in a tendency to overestimate SCF as a function of snow depth. Using a more consistent, higher spatial and temporal resolution snow depth analysis reveals a clear hysteresis between snow accumulation and melt seasons. Here, a new SCF parameterization based on snow water equivalent is developed to capture the observed seasonal snow depth-SCF evolution. The effects of the new SCF parameterization on the surface energy budget are described. In CLM4, surface energy fluxes are calculated assuming a uniform snow cover. To more realistically simulate environments having patchy snow cover, we modify the model by computing the surface fluxes separately for snow-free and snow-covered fractions of a grid cell. In this configuration, the form of the parameterized snow depth-SCF relationship is shown to greatly affect the surface energy budget. The direct exposure of the snow-free surfaces to the atmosphere leads to greater heat loss from the ground during autumn and greater heat gain during spring. The net effect is to reduce annual mean soil temperatures by up to 3°C in snow-affected regions.
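
    For context, the CLM4 default being revised here is the Niu-Yang tanh form, f_sno = tanh(h_sno / (2.5 * z0g * (rho_sno/rho_new)^m)). A sketch with commonly quoted constants; the paper's new SWE-based hysteretic parameterization is not reproduced here.

      import numpy as np

      def scf_niu_yang(h_sno, rho_sno, z0g=0.01, rho_new=100.0, m=1.0):
          """CLM4-style snow-covered fraction from snow depth and density.

          h_sno   : grid-cell mean snow depth (m)
          rho_sno : snow density (kg/m3)
          z0g     : ground roughness length (m); 0.01 is the commonly quoted value
          m       : tunable exponent; 1.0 here, though other values appear in use
          """
          return np.tanh(h_sno / (2.5 * z0g * (rho_sno / rho_new) ** m))

      print(scf_niu_yang(h_sno=0.05, rho_sno=150.0))   # shallow, aged snow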

  6. Changes in physiological attributes of ponderosa pine from seedling to mature tree

    Treesearch

    Nancy E. Grulke; William A. Retzlaff

    2001-01-01

    Plant physiological models are generally parameterized from many different sources of data, including chamber experiments and plantations, from seedlings to mature trees. We obtained a comprehensive data set for a natural stand of ponderosa pine (Pinus ponderosa Laws.) and used these data to parameterize the physiologically based model, TREGRO....

  7. High Resolution Electro-Optical Aerosol Phase Function Database PFNDAT2006

    DTIC Science & Technology

    2006-08-01

    snow models use the gamma distribution (equation 12) with m = 0. ... The most widely used analytical parameterization for raindrop size ... Uijlenhoet and Stricker (22), as the result of an analytical derivation based on a theoretical parameterization for the raindrop size distribution ...

  8. Current state of aerosol nucleation parameterizations for air-quality and climate modeling

    NASA Astrophysics Data System (ADS)

    Semeniuk, Kirill; Dastoor, Ashu

    2018-04-01

    Aerosol nucleation parameterization models commonly used in 3-D air quality and climate models have serious limitations. This includes variants based on classical nucleation theory, empirical models, and other formulations. Recent work based on detailed and extensive laboratory measurements and improved quantum chemistry computation has substantially advanced the state of nucleation parameterizations. For inorganic nucleation involving binary and ternary homogeneous nucleation (BHN and THN), including ion effects, these new models should be considered worthwhile replacements for the old models. However, the contribution of organic species to nucleation remains poorly quantified. New particle formation includes a distinct post-nucleation growth regime, which is characterized by a strong Kelvin curvature effect and is thus dependent on the availability of very low volatility organic species or sulfuric acid. There have been advances in the understanding of the multiphase chemistry of biogenic and anthropogenic organic compounds that facilitate overcoming the initial aerosol growth barrier. Implementing the processes that influence new particle formation in 3-D models is challenging, and comprehensive parameterizations are lacking. This review considers the existing models and recent innovations.

  9. Evaluation of different methods to model near-surface turbulent fluxes for a mountain glacier in the Cariboo Mountains, BC, Canada

    NASA Astrophysics Data System (ADS)

    Radić, Valentina; Menounos, Brian; Shea, Joseph; Fitzpatrick, Noel; Tessema, Mekdes A.; Déry, Stephen J.

    2017-12-01

    As part of surface energy balance models used to simulate glacier melting, choosing parameterizations to adequately estimate turbulent heat fluxes is extremely challenging. This study aims to evaluate a set of four aerodynamic bulk methods (labeled as C methods), commonly used to estimate turbulent heat fluxes for a sloped glacier surface, and two less commonly used bulk methods developed from katabatic flow models. The C methods differ in their parameterizations of the bulk exchange coefficient that relates the fluxes to the near-surface measurements of mean wind speed, air temperature, and humidity. The methods' performance in simulating 30 min sensible- and latent-heat fluxes is evaluated against the measured fluxes from an open-path eddy-covariance (OPEC) method. The evaluation is performed at a point scale of a mountain glacier, using one-level meteorological and OPEC observations from multi-day periods in the 2010 and 2012 summer seasons. The analysis of the two independent seasons yielded the same key findings, which include the following: first, the bulk method, with or without the commonly used Monin-Obukhov (M-O) stability functions, overestimates the turbulent heat fluxes over the observational period, mainly due to a substantial overestimation of the friction velocity. This overestimation is most pronounced during the katabatic flow conditions, corroborating the previous findings that the M-O theory works poorly in the presence of a low wind speed maximum. Second, the method based on a katabatic flow model (labeled as the KInt method) outperforms any C method in simulating the friction velocity; however, the C methods outperform the KInt method in simulating the sensible-heat fluxes. Third, the best overall performance is given by a hybrid method, which combines the KInt approach with the C method; i.e., it parameterizes eddy viscosity differently than eddy diffusivity. An error analysis reveals that the uncertainties in the measured meteorological variables and the roughness lengths produce errors in the modeled fluxes that are smaller than the differences between the modeled and observed fluxes. This implies that further advances will require improvement to model theory rather than better measurements of input variables. Further data from different glaciers are needed to investigate any universality of these findings.
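
    The C methods share the bulk-aerodynamic form Q_H = rho * c_p * C_H * u * (T_a - T_s), differing in how the exchange coefficient C_H is built from roughness lengths and stability corrections. A neutral-stability sketch; the roughness lengths and air properties are illustrative, and the M-O stability functions the paper evaluates are omitted.

      import numpy as np

      def bulk_sensible_heat(u, T_a, T_s, zm=2.0, z0m=1e-3, z0t=1e-4,
                             rho=1.1, cp=1005.0, k=0.4):
          """Neutral-stability bulk sensible-heat flux (W/m2), positive downward.

          u, T_a : wind speed (m/s) and air temperature (K) at height zm
          T_s    : surface temperature (K); 273.15 for a melting glacier surface
          z0m/z0t: momentum/temperature roughness lengths (m), illustrative
          """
          C_H = k**2 / (np.log(zm / z0m) * np.log(zm / z0t))   # exchange coefficient
          return rho * cp * C_H * u * (T_a - T_s)

      print(bulk_sensible_heat(u=4.0, T_a=278.15, T_s=273.15))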

  11. R-parametrization and its role in classification of linear multivariable feedback systems

    NASA Technical Reports Server (NTRS)

    Chen, Robert T. N.

    1988-01-01

    A classification of all the compensators that stabilize a given general plant in a linear, time-invariant multi-input, multi-output feedback system is developed. This classification, along with the associated necessary and sufficient conditions for stability of the feedback system, is achieved through the introduction of a new parameterization, referred to as R-parameterization, which is a dual of the familiar Q-parameterization. The classification is made according to the stability of the compensators and the plant by themselves, and the necessary and sufficient conditions are based on the stability of Q and R themselves.
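
    For orientation, a hedged sketch of the familiar Q-parameterization that R-parameterization dualizes (a standard result in our notation, not taken from the paper): in the special case of a stable plant P, the set of all stabilizing compensators is

      $$ K = Q\,(I - P\,Q)^{-1}, \qquad Q \ \text{stable and proper}, $$

    and the closed-loop maps are affine in Q, which is what makes classification in terms of the stability of the parameter itself possible.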

  12. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for carbon cycle studies

    USGS Publications Warehouse

    He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may adequately represent the performance of species-level simulations, while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.

  13. Toward Improved Parameterization of a Meso-Scale Hydrologic Model in a Discontinuous Permafrost, Boreal Forest Ecosystem

    NASA Astrophysics Data System (ADS)

    Endalamaw, A. M.; Bolton, W. R.; Young, J. M.; Morton, D.; Hinzman, L. D.

    2013-12-01

    The sub-arctic environment can be characterized as being located in the zone of discontinuous permafrost. Although the distribution of permafrost is site specific, it dominates many of the hydrologic and ecologic responses and functions, including vegetation distribution, stream flow, soil moisture, and storage processes. In this region, the boundaries that separate the major ecosystem types (deciduous-dominated and coniferous-dominated ecosystems) as well as permafrost (permafrost versus non-permafrost) occur over very short spatial scales. One of the goals of this research project is to improve parameterizations of meso-scale hydrologic models in this environment. Using the Caribou-Poker Creeks Research Watershed (CPCRW) as the test area, simulations of headwater catchments of varying permafrost and vegetation distributions were performed. CPCRW, located approximately 50 km northeast of Fairbanks, Alaska, lies within the zone of discontinuous permafrost and the boreal forest ecosystem. The Variable Infiltration Capacity (VIC) model was selected as the hydrologic model. In CPCRW, permafrost and coniferous vegetation are generally found on north-facing slopes and valley bottoms. Permafrost-free soils and deciduous vegetation are generally found on south-facing slopes. In this study, hydrologic simulations using fine-scale vegetation and soil parameterizations (based upon slope and aspect analysis at a 50 m resolution) were conducted. Simulations were also conducted using downscaled vegetation from the Scenarios Network for Alaska and Arctic Planning (SNAP) (1 km resolution) and soil data sets from the Food and Agriculture Organization (FAO) (approximately 9 km resolution). Preliminary simulation results show that soil and vegetation parameterizations based upon fine-scale slope/aspect analysis increase the R2 values (0.5 to 0.65 in the high permafrost (53%) basin; 0.43 to 0.56 in the low permafrost (2%) basin) relative to parameterizations based on coarse-scale data. These results suggest that fine-resolution parameterizations can be used to improve meso-scale hydrological modeling in this region.

  14. Linear units improve articulation between social and physical constructs: An example from caregiver parameterization for children supported by complex medical technologies

    NASA Astrophysics Data System (ADS)

    Bezruczko, N.; Stanley, T.; Battle, M.; Latty, C.

    2016-11-01

    Despite broad, sweeping pronouncements by international research organizations that the social sciences are being integrated into global research programs, little attention has been directed toward the obstacles blocking productive collaborations. In particular, the social sciences routinely implement nonlinear, ordinal measures, which fundamentally inhibit integration with overarching scientific paradigms. The widely promoted general linear model in contemporary social science methods is largely based on untransformed scores and ratings, which are neither objective nor linear. This issue has historically separated the physical and social sciences, which this report now asserts is unnecessary. In this research, nonlinear, subjective caregiver ratings of confidence to care for children supported by complex medical technologies were transformed to an objective scale defined by logits (N=70). Transparent linear units from this transformation provided foundational insights into the measurement properties of a social-humanistic caregiving construct, which clarified physical and social caregiver implications. Parameterized items and ratings were also subjected to multivariate hierarchical analysis, then decomposed to demonstrate theoretical coherence (R2 > .50), which provided further support for the convergence of mathematical parameterization, physical expectations, and a social-humanistic construct. These results present substantial support for improving the integration of the social sciences with contemporary scientific research programs by emphasizing the construction of common variables with objective, linear units.

  15. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.

    PubMed

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-10-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed conduction velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
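
    A hedged sketch of such an iterative conductivity-tuning loop (illustrative only, not the authors' implementation; simulate_cv is a hypothetical stand-in for a full forward simulation). It exploits the approximate scaling of conduction velocity with the square root of bulk conductivity:

        def tune_conductivity(simulate_cv, sigma0, v_target, tol=1e-3, max_iter=20):
            # Fixed-point iteration: since CV ~ sqrt(sigma) approximately,
            # sigma_new = sigma * (v_target / v_sim)**2 drives v_sim -> v_target.
            sigma = sigma0
            for _ in range(max_iter):
                v_sim = simulate_cv(sigma)              # one forward simulation
                if abs(v_sim - v_target) / v_target < tol:
                    break
                sigma *= (v_target / v_sim) ** 2
            return sigma

        # Toy stand-in model with CV proportional to sqrt(sigma):
        cv_model = lambda sigma: 0.6 * sigma ** 0.5
        print(tune_conductivity(cv_model, sigma0=0.1, v_target=0.5))

    On the toy model the loop converges in a couple of iterations; with a real bidomain solver each iteration costs one simulation, which is why the iteration count matters.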

  16. Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.

    PubMed

    Hack, C Eric

    2006-04-17

    Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate, and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least-squares approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and the associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
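
    A minimal, hedged sketch of the random-walk Metropolis flavor of MCMC described here, with a deliberately toy one-parameter "model" standing in for a real PBTK model (all data values, priors, and names are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy model: predicted internal dose given a clearance-like parameter
        def model(theta, dose):
            return dose / theta

        doses = np.array([1.0, 2.0, 4.0])
        observed = np.array([0.52, 1.05, 1.9])   # invented toy observations
        sigma_obs = 0.1                          # assumed measurement error

        def log_posterior(theta):
            if theta <= 0:
                return -np.inf
            log_prior = -0.5 * np.log(theta) ** 2               # lognormal prior
            resid = observed - model(theta, doses)
            log_like = -0.5 * np.sum((resid / sigma_obs) ** 2)  # Gaussian likelihood
            return log_prior + log_like

        # Random-walk Metropolis: the prior encodes biological knowledge and
        # the data update it into a posterior sample for the parameter.
        theta, lp, samples = 1.0, log_posterior(1.0), []
        for _ in range(5000):
            prop = theta + 0.1 * rng.standard_normal()
            lp_prop = log_posterior(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta)

        post = np.array(samples[1000:])          # discard burn-in
        print(post.mean(), post.std())           # posterior mean and spread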

  17. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Taylor; Guo, Yi; Veers, Paul

    Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.

  18. Relativistic three-dimensional Lippmann-Schwinger cross sections for space radiation applications

    NASA Astrophysics Data System (ADS)

    Werneth, C. M.; Xu, X.; Norman, R. B.; Maung, K. M.

    2017-12-01

    Radiation transport codes require accurate nuclear cross sections to compute particle fluences inside shielding materials. The Tripathi semi-empirical reaction cross section, which includes over 60 parameters tuned to nucleon-nucleus (NA) and nucleus-nucleus (AA) data, has been used in many of the world's best-known transport codes. Although this parameterization fits reaction cross section data well, the predictive capability of any parameterization is questionable when it is used beyond the range of the data to which it was tuned. Using uncertainty analysis, it is shown that a relativistic three-dimensional Lippmann-Schwinger (LS3D) equation model based on Multiple Scattering Theory (MST), which uses five parameterizations (three fundamental parameterizations to nucleon-nucleon (NN) data and two nuclear charge density parameterizations), predicts NA and AA reaction cross sections as well as the Tripathi parameterization does for reactions in which the kinetic energy of the projectile in the laboratory frame (TLab) is greater than 220 MeV/n. The relativistic LS3D model has the additional advantage of being able to predict highly accurate total and elastic cross sections. Consequently, it is recommended that the relativistic LS3D model be used for space radiation applications in which TLab > 220 MeV/n.

  19. Impact of Vegetation Cover Fraction Parameterization schemes on Land Surface Temperature Simulation in the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Lv, M.; Li, C.; Lu, H.; Yang, K.; Chen, Y.

    2017-12-01

    The parameterization of vegetation cover fraction (VCF) is an important component of land surface models. This paper investigates the impacts of three VCF parameterization schemes on land surface temperature (LST) simulation by the Common Land Model (CoLM) in the Tibetan Plateau (TP). The first scheme is a simple land cover (LC) based method (the control run, hereafter CTL); the second is based on remote sensing observation (hereafter RNVCF), in which a multi-year climatological VCF is derived from Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI (Normalized Difference Vegetation Index); the third derives VCF from the leaf area index (LAI) simulated by the land surface model and a clumping index at every model time step (hereafter SMVCF). LST and soil temperature simulated by CoLM under the three VCF parameterization schemes were evaluated against satellite LST observations and in situ soil temperature observations, respectively, over the period 2010 to 2013. The comparison against MODIS Aqua LST indicates that (1) CTL produces large biases in all four seasons in the early afternoon (about 13:30, local solar time), with the mean bias in spring reaching 12.14 K; (2) RNVCF and SMVCF reduce the mean bias significantly, especially in spring, where the reduction is about 6.5 K. Surface soil temperature observed at 5 cm depth from three soil moisture and temperature monitoring networks is also employed to assess the skill of the three VCF schemes. The three networks, crossing the TP from west to east, have different climate and vegetation conditions. In the Ngari network, located in the western TP with an arid climate, there are no obvious differences among the three schemes. In the Naqu network, located in the central TP with a semi-arid climate, CTL shows a severe overestimate (12.1 K), but this overestimation can be reduced by 79% by RNVCF and by 87% by SMVCF. In the third, humid network (Maqu in the eastern TP), CoLM performs similarly to Naqu. However, at both the Naqu and Maqu networks, RNVCF shows significant overestimation in summer, perhaps because RNVCF ignores the growing characteristics of the vegetation (mainly grass) in these two regions. Our results demonstrate that VCF schemes have a significant influence on LSM performance and indicate that it is important to consider vegetation growing characteristics in VCF schemes for different LCs.
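
    For reference, a standard NDVI-based estimate of vegetation cover fraction of the kind used in remote-sensing-driven schemes such as RNVCF (this is the widely used Gutman-Ignatov linear form, not necessarily the paper's exact formulation):

      $$ \mathrm{VCF} = \frac{\mathrm{NDVI} - \mathrm{NDVI}_{\min}}{\mathrm{NDVI}_{\max} - \mathrm{NDVI}_{\min}}, $$

    where NDVI_min and NDVI_max correspond to bare soil and fully vegetated pixels, respectively; a squared variant (Carlson-Ripley) is also common.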

  20. Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-07-01

    Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing as it deals with a distribution of variables at subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherent structures in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. Among the alternatives, the wavelet basis is particularly attractive. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
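
    As a hedged illustration of the segmentally constant special case (our notation, not the review's equations): decomposing a grid box into subdomains i of fractional area sigma_i, each carrying constant in-segment values, gives

      $$ \overline{\rho\, w' \varphi'} \;\approx\; \sum_i \sigma_i\, \rho\, w_i \left( \varphi_i - \bar{\varphi} \right) \;\approx\; M_u \left( \varphi_u - \bar{\varphi} \right), $$

    where the last step holds for a single updraft in a quiescent environment and M_u = rho sigma_u w_u is the convective mass flux; the standard mass-flux convection scheme is thus mode decomposition with a segmentally constant basis.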

  1. Parameterization of single-scattering properties of snow

    NASA Astrophysics Data System (ADS)

    Räisänen, P.; Kokhanovsky, A.; Guyot, G.; Jourdan, O.; Nousiainen, T.

    2015-02-01

    Snow consists of non-spherical grains of various shapes and sizes. Still, in many radiative transfer applications, single-scattering properties of snow have been based on the assumption of spherical grains. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat phase function typical of deformed non-spherical particles, this is still a rather ad-hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ = 0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function P11 as functions of the size parameter and the real and imaginary parts of the refractive index. The parameterizations are analytic and simple to use in radiative transfer models. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons to spheres and distorted Koch fractals.

  2. Parameterization of single-scattering properties of snow

    NASA Astrophysics Data System (ADS)

    Räisänen, P.; Kokhanovsky, A.; Guyot, G.; Jourdan, O.; Nousiainen, T.

    2015-06-01

    Snow consists of non-spherical grains of various shapes and sizes. Still, in many radiative transfer applications, single-scattering properties of snow have been based on the assumption of spherical grains. More recently, second-generation Koch fractals have been employed. While they produce a relatively flat phase function typical of deformed non-spherical particles, this is still a rather ad hoc choice. Here, angular scattering measurements for blowing snow conducted during the CLimate IMpacts of Short-Lived pollutants In the Polar region (CLIMSLIP) campaign at Ny Ålesund, Svalbard, are used to construct a reference phase function for snow. Based on this phase function, an optimized habit combination (OHC) consisting of severely rough (SR) droxtals, aggregates of SR plates and strongly distorted Koch fractals is selected. The single-scattering properties of snow are then computed for the OHC as a function of wavelength λ and snow grain volume-to-projected area equivalent radius rvp. Parameterization equations are developed for λ = 0.199-2.7 μm and rvp = 10-2000 μm, which express the single-scattering co-albedo β, the asymmetry parameter g and the phase function P11 as functions of the size parameter and the real and imaginary parts of the refractive index. The parameterizations are analytic and simple to use in radiative transfer models. Compared to the reference values computed for the OHC, the accuracy of the parameterization is very high for β and g. This is also true for the phase function parameterization, except for strongly absorbing cases (β > 0.3). Finally, we consider snow albedo and reflected radiances for the suggested snow optics parameterization, making comparisons to spheres and distorted Koch fractals.

  3. Parameterized reduced order models from a single mesh using hyper-dual numbers

    NASA Astrophysics Data System (ADS)

    Brake, M. R. W.; Fike, J. A.; Topping, S. D.

    2016-06-01

    In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, could necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is two-fold: the derivatives are accurate to machine precision, and only a single mesh of the system of interest needs to be generated. The theory is applied to a stepped beam system as a proof of concept. The results demonstrate that hyper-dual number multivariate parameterizations of geometric variations, which are largely neglected in the literature, are accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the ability to create a parameterized reduced order model based on a single mesh is expected to dramatically reduce the time needed to analyze multiple realizations of a component's possible geometry.
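
    A hedged, minimal sketch of the hyper-dual idea (not the authors' implementation): carrying two infinitesimal parts and their product through ordinary arithmetic yields first and second derivatives that are exact to machine precision from a single function evaluation, with no step-size error:

        import math

        class HyperDual:
            # x + a*e1 + b*e2 + c*e1*e2, with e1^2 = e2^2 = 0
            def __init__(self, f, f1=0.0, f2=0.0, f12=0.0):
                self.f, self.f1, self.f2, self.f12 = f, f1, f2, f12

            def __mul__(self, o):
                return HyperDual(
                    self.f * o.f,
                    self.f1 * o.f + self.f * o.f1,
                    self.f2 * o.f + self.f * o.f2,
                    self.f12 * o.f + self.f1 * o.f2 + self.f2 * o.f1 + self.f * o.f12)

            def sin(self):
                s, c = math.sin(self.f), math.cos(self.f)
                return HyperDual(s, c * self.f1, c * self.f2,
                                 c * self.f12 - s * self.f1 * self.f2)

        # f(x) = x * sin(x): seed both infinitesimal parts with dx = 1
        x = HyperDual(1.2, f1=1.0, f2=1.0)
        y = x * x.sin()
        print(y.f, y.f1, y.f12)   # value, f'(1.2), f''(1.2), all exact

    Finite differences would need several perturbed geometries (and hence meshes) plus a carefully chosen step size; the hyper-dual evaluation needs only the single nominal mesh, which is the central point above.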

  4. Evaluation of Aerosol-cloud Interaction in the GISS Model E Using ARM Observations

    NASA Technical Reports Server (NTRS)

    DeBoer, G.; Bauer, S. E.; Toto, T.; Menon, Surabi; Vogelmann, A. M.

    2013-01-01

    Observations from the US Department of Energy's Atmospheric Radiation Measurement (ARM) program are used to evaluate the ability of the NASA GISS ModelE global climate model to reproduce observed interactions between aerosols and clouds. Included in the evaluation are comparisons of basic meteorology and aerosol properties, droplet activation, effective radius parameterizations, and surface-based evaluations of aerosol-cloud interactions (ACI). Differences between the simulated and observed ACI are generally large, but these differences may result partially from the vertical distribution of aerosol in the model rather than from the representation of the physical processes governing the interactions between aerosols and clouds. Compared to the observations, ModelE often features elevated droplet concentrations for a given aerosol concentration, indicating that the activation parameterizations used may be too aggressive. Additionally, parameterizations for effective radius commonly used in models were tested using ARM observations, and there was no clearly superior parameterization for the cases reviewed here. This lack of consensus is demonstrated to result in potentially large, statistically significant differences in surface radiative budgets, should one parameterization be chosen over another.

  5. High throughput film dosimetry in homogeneous and heterogeneous media for a small animal irradiator

    PubMed Central

    Wack, L.; Ngwa, W.; Tryggestad, E.; Tsiamas, P.; Berbeco, R.; Ng, S.K.; Hesser, J.

    2013-01-01

    Purpose: We have established a high-throughput Gafchromic film dosimetry protocol for narrow kilovoltage beams in homogeneous and heterogeneous media for small-animal radiotherapy applications. The kV beam characterization is based on extensive Gafchromic film dosimetry data acquired in homogeneous and heterogeneous media. An empirical model is used for parameterization of the depth and off-axis dependence of the measured data. Methods: We have modified previously published methods of film dosimetry to suit the specific tasks of the study. Unlike film protocols used in previous studies, our protocol employs simultaneous multichannel scanning and analysis of up to nine Gafchromic films per scan. Scanner and background corrections were implemented to improve the accuracy of the measurements. Measurements were taken in homogeneous and inhomogeneous phantoms at 220 kVp and a field size of 5 × 5 mm2. The results were compared against Monte Carlo simulations. Results: Dose differences caused by variations in background signal were effectively removed by the corrections applied. Measurements in homogeneous phantoms were used to empirically characterize beam data in homogeneous and heterogeneous media. Film measurements in inhomogeneous phantoms and their empirical parameterization differed by about 2%-3%. The model differed from MC by about 1% (water, lung) to 7% (bone). Good agreement was found between measured and modelled off-axis ratios. Conclusions: EBT2 films are a valuable tool for characterization of narrow kV beams, though care must be taken to eliminate disturbances caused by varying background signals. The usefulness of the empirical beam model in interpretation and parameterization of film data was demonstrated. PMID:23510532

  6. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    PubMed Central

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967
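
    As a hedged illustration of why the clustering parameterization drives the design (standard survey-sampling algebra, not the paper's specific models): with m observations per cluster and intra-cluster correlation rho, the design effect inflates the required sample size,

      $$ \mathrm{DEFF} = 1 + (m - 1)\,\rho, \qquad n_{\mathrm{cluster}} = n_{\mathrm{SRS}} \times \mathrm{DEFF}, $$

    so different assumptions about rho, i.e., different clustering parameterizations, translate directly into substantially different sample sizes.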

  7. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    PubMed

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  8. Polymorphous computing fabric

    DOEpatents

    Wolinski, Christophe Czeslaw [Los Alamos, NM; Gokhale, Maya B [Los Alamos, NM; McCabe, Kevin Peter [Los Alamos, NM

    2011-01-18

    Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

  9. Parameterizing time in electronic health record studies.

    PubMed

    Hripcsak, George; Albers, David J; Perotte, Adler

    2015-07-01

    Fields like nonlinear physics offer methods for analyzing time series, but many methods require that the time series be stationary, with no change in properties over time. Medicine is far from stationary, but the challenge may be ameliorated by reparameterizing time, because clinicians tend to measure patients more frequently when they are ill and their values are more likely to vary. We compared time parameterizations, measuring variability of rate of change and magnitude of change, and looking for homogeneity of bins of temporal separation between pairs of time points. We studied four common laboratory tests drawn from 25 years of electronic health records on 4 million patients. We found that sequence time, that is, simply counting the number of measurements from some start, produced more stationary time series, better explained the variation in values, and had more homogeneous bins than either traditional clock time or a recently proposed intermediate parameterization. Sequence time also produced more accurate predictions in a single Gaussian process model experiment. Of the three parameterizations, sequence time appeared to produce the most stationary series, possibly because clinicians adjust their sampling to the acuity of the patient. Parameterizing by sequence time may be applicable to association and clustering experiments on electronic health record data. A limitation of this study is that the laboratory data were derived from only one institution. Sequence time appears to be an important potential parameterization.
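
    A hedged sketch of the sequence-time reparameterization (column names are illustrative, not from the study): each patient's timestamps are replaced by the cumulative count of measurements since that patient's first draw:

        import pandas as pd

        labs = pd.DataFrame({
            "patient_id": [1, 1, 1, 2, 2],
            "drawn_at": pd.to_datetime(["2020-01-01", "2020-01-02", "2020-03-15",
                                        "2020-02-10", "2020-02-11"]),
            "value": [1.1, 1.3, 0.9, 2.2, 2.0],
        })

        labs = labs.sort_values(["patient_id", "drawn_at"])
        # Sequence time: count measurements from each patient's first draw,
        # discarding the highly non-stationary clock-time gaps between them.
        labs["sequence_time"] = labs.groupby("patient_id").cumcount()
        print(labs)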

  10. Fresh clouds: A parameterized updraft method for calculating cloud densities in one-dimensional models

    NASA Astrophysics Data System (ADS)

    Wong, Michael H.; Atreya, Sushil K.; Kuhn, William R.; Romani, Paul N.; Mihalka, Kristen M.

    2015-01-01

    Models of cloud condensation under thermodynamic equilibrium in planetary atmospheres are useful for several reasons. These equilibrium cloud condensation models (ECCMs) calculate the wet adiabatic lapse rate, determine saturation-limited mixing ratios of condensing species, calculate the stabilizing effect of latent heat release and molecular weight stratification, and locate cloud base levels. Many ECCMs trace their heritage to Lewis (Lewis, J.S. [1969]. Icarus 10, 365-378) and Weidenschilling and Lewis (Weidenschilling, S.J., Lewis, J.S. [1973]. Icarus 20, 465-476). Calculations of atmospheric structure and gas mixing ratios are correct in these models. We resolve errors affecting the cloud density calculation in these models by first calculating a cloud density rate: the change in cloud density with updraft length scale. The updraft length scale parameterizes the strength of the cloud-forming updraft and converts the cloud density rate from the ECCM into a cloud density. The method is validated by comparison with terrestrial cloud data. Our parameterized updraft method gives a first-order prediction of cloud densities in a “fresh” cloud, where condensation is the dominant microphysical process. Older, evolved clouds may be better approximated by another 1-D method, the diffusive-precipitative Ackerman and Marley (Ackerman, A.S., Marley, M.S. [2001]. Astrophys. J. 556, 872-884) model, which represents a steady-state equilibrium between precipitation and condensation of vapor delivered by turbulent diffusion. We re-evaluate observed cloud densities in the Galileo Probe entry site (Ragent, B. et al. [1998]. J. Geophys. Res. 103, 22891-22910), and show that the upper and lower observed clouds at ∼0.5 and ∼3 bars are consistent with weak (cirrus-like) updrafts under conditions of saturated ammonia and water vapor, respectively. The densest observed cloud, near 1.3 bar, requires unexpectedly strong updraft conditions, or higher cloud density rates. The cloud density rate in this layer may be augmented by a composition with non-NH4SH components (possibly including adsorbed NH3).
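
    As we read the method (a hedged paraphrase in our notation): the ECCM supplies the cloud density rate, and multiplying by the parameterized updraft length scale L_u converts it into a cloud density,

      $$ \rho_{\mathrm{cloud}} \;\approx\; \frac{d\rho_{\mathrm{cloud}}}{dL}\, L_u, $$

    so weak (cirrus-like) updrafts correspond to small L_u and tenuous clouds, while vigorous updrafts yield denser clouds.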

  11. Regulation-Structured Dynamic Metabolic Model Provides a Potential Mechanism for Delayed Enzyme Response in Denitrification Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Hyun-Seob; Thomas, Dennis G.; Stegen, James C.

    In a recent study of denitrification dynamics in hyporheic zone sediments, we observed a significant time lag (up to several days) in the enzymatic response to changes in substrate concentration. To explore an underlying mechanism and understand the interactive dynamics between enzymes and nutrients, we developed a trait-based model that associates a community's traits with functional enzymes, instead of the typically used species guilds (or functional guilds). This enzyme-based formulation makes it possible to describe the biogeochemical functions of microbial communities collectively, without directly parameterizing the dynamics of species guilds, and is therefore scalable to complex communities. As a key component of the modeling, we accounted for microbial regulation occurring through transcriptional and translational processes, the dynamics of which were parameterized based on the temporal profiles of enzyme concentrations measured using a new signature-peptide-based method. The simulation results using the resulting model showed several days of time lag in enzymatic responses, as observed in the experiments. Further, the model showed that the delayed enzymatic reactions could be primarily controlled by transcriptional responses and that the dynamics of transcripts and enzymes are closely correlated. The developed model can serve as a useful tool for predicting biogeochemical processes in natural environments, either independently or through integration with hydrologic flow simulators.
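
    A hedged toy version of the transcription-translation cascade invoked here (all rate constants are invented for illustration): substrate availability drives mRNA synthesis, mRNA drives enzyme synthesis, and the two first-order stages in series produce a multi-day lag between a substrate step and the enzyme response:

        import numpy as np

        # Toy two-stage cascade: substrate s -> transcript m -> enzyme E
        alpha, delta = 1.0, 0.5    # transcription rate, mRNA decay [1/day]
        beta, gamma = 1.0, 0.2     # translation rate, enzyme decay [1/day]

        dt, t_end = 0.01, 20.0
        n = int(t_end / dt)
        t = np.linspace(0.0, t_end, n)
        s = (t > 2.0).astype(float)            # substrate step at day 2

        m = np.zeros(n)
        E = np.zeros(n)
        for i in range(n - 1):                 # forward-Euler integration
            m[i + 1] = m[i] + dt * (alpha * s[i] - delta * m[i])
            E[i + 1] = E[i] + dt * (beta * m[i] - gamma * E[i])

        # Time at which the enzyme reaches half its eventual steady state:
        E_ss = alpha * beta / (delta * gamma)
        print(t[np.argmax(E >= 0.5 * E_ss)])   # several days after the step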

  12. Parameterization-based tracking for the P2 experiment

    NASA Astrophysics Data System (ADS)

    Sorokin, Iurii

    2017-08-01

    The P2 experiment in Mainz aims to determine the weak mixing angle θW at low momentum transfer by measuring the parity-violating asymmetry of elastic electron-proton scattering. In order to achieve the intended precision of Δ(sin²θW)/sin²θW = 0.13% within the planned 10 000 hours of running, the experiment has to operate at a rate of 10^11 detected electrons per second. Although it is not required to measure the kinematic parameters of each individual electron, every attempt is made to achieve the highest possible throughput in the track reconstruction chain. In the present work a parameterization-based track reconstruction method is described. It is a variation of track following, in which the results of the computation-heavy steps, namely the propagation of a track to the next detector plane and the fitting, are pre-calculated and expressed in terms of parametric analytic functions. This makes the algorithm extremely fast and well suited for an implementation on an FPGA. The method also implicitly takes into account the actual phase space distribution of the tracks already at the stage of candidate construction. Compared to a simple algorithm that does not use such information, this allows the combinatorial background to be reduced by many orders of magnitude, down to O(1) background candidates per signal track. The method is developed specifically for the P2 experiment in Mainz, and the presented implementation is tightly coupled to the experimental conditions.
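
    A hedged sketch of the precomputation idea (toy stand-ins throughout; the real system fits parametric functions tuned to the P2 spectrometer): an expensive track-propagation step is sampled offline and replaced by a cheap analytic surrogate that can be evaluated at very high rate, e.g., on an FPGA:

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical "full" propagation of a track state (position, slope)
        # at one detector plane to the hit position at the next plane.
        def full_propagation(x, tx):
            return x + 0.5 * tx + 0.05 * tx**3   # stand-in for field tracking

        # Offline: sample the expensive step over the relevant phase space ...
        x, tx = rng.uniform(-1.0, 1.0, size=(2, 1000))
        y = full_propagation(x, tx)

        # ... and fit a low-order polynomial parameterization by least squares.
        A = np.column_stack([np.ones_like(x), x, tx, tx**2, tx**3])
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

        # Online: evaluate the cheap parameterization instead of propagating.
        def fast_propagation(x, tx):
            return coeffs @ np.array([1.0, x, tx, tx**2, tx**3])

        print(full_propagation(0.3, 0.4), fast_propagation(0.3, 0.4))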

  13. Parameterized reduced-order models using hyper-dual numbers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.

  14. Coordinated Parameterization Development and Large-Eddy Simulation for Marine and Arctic Cloud-Topped Boundary Layers

    NASA Technical Reports Server (NTRS)

    Bretherton, Christopher S.

    2002-01-01

    The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two and three dimensional eddy resolving (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type, and thickness as functions of large scale conditions that are predicted by global climate models. The principal achievements of the project were as follows: (1) Development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) Comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall the forecast model did predict most of the major precipitation events and synoptic variability observed over the year of observation of the SHEBA ice camp.

  15. A note on: "A Gaussian-product stochastic Gent-McWilliams parameterization"

    NASA Astrophysics Data System (ADS)

    Jansen, Malte F.

    2017-02-01

    This note builds on a recent article by Grooms (2016), which introduces a new stochastic parameterization for eddy buoyancy fluxes. The closure proposed by Grooms accounts for the fact that eddy fluxes arise as the product of two approximately Gaussian variables, which in turn leads to a distinctly non-Gaussian distribution. The directionality of the stochastic eddy fluxes, however, remains somewhat ad hoc and depends on the reference frame of the chosen coordinate system. This note presents a modification of the approach proposed by Grooms which eliminates this shortcoming. Eddy fluxes are computed based on a stochastic mixing length model, which leads to a frame-invariant formulation. As in the original closure proposed by Grooms, eddy fluxes are proportional to the product of two Gaussian variables, and the parameterization reduces to the Gent and McWilliams parameterization for the mean buoyancy fluxes.
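
    A hedged sketch of the construction (our notation; the note's exact equations may differ): writing the eddy buoyancy flux as a velocity fluctuation acting through the buoyancy anomaly generated by a random mixing-length displacement,

      $$ \mathbf{F}' = \mathbf{u}'\, b', \qquad b' = -\,\boldsymbol{\ell}' \cdot \nabla \bar{b}, $$

    keeps the flux proportional to a product of two (approximately) Gaussian variables, as in Grooms (2016), while tying its direction to the local mean-buoyancy gradient rather than to the coordinate axes, which is what makes the formulation frame invariant.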

  16. Final Technical Report for "Reducing tropical precipitation biases in CESM"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent

    In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we have created a climate model that contains a unified cloud parameterization (“CLUBB”) and a unified microphysics parameterization (“MG2”). In this model, all cloud types (including marine stratocumulus, shallow cumulus, and deep cumulus) are represented with a single equation set. This model improves the representation of convection in the Tropics. The model has been compared with ARM observations. The chief benefit of the project is to provide a climate model that is based on a more theoretically rigorous formulation.

  17. Atmospheric solar heating rate in the water vapor bands

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah

    1986-01-01

    The total absorption of solar radiation by water vapor in clear atmospheres is parameterized as a simple function of the scaled water vapor amount. For applications to cloudy and hazy atmospheres, the flux-weighted k-distribution functions are computed for individual absorption bands and for the total near-infrared region. The parameterization is based upon monochromatic calculations and essentially follows the scaling approximation of Chou and Arking, but the effect of temperature variation with height is taken into account to enhance accuracy. Furthermore, the spectral range is extended to cover the two weak bands centered at 0.72 and 0.82 micron. Comparisons with monochromatic calculations show that the atmospheric heating rate and the surface radiation can be accurately computed from the parameterization. Comparisons are also made with other parameterizations. It is found that the absorption of solar radiation can be computed reasonably well using the Goody band model and the Curtis-Godson approximation.
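
    For reference, the k-distribution device invoked here, in its standard form (our notation): the band-mean transmission for scaled water vapor amount w is approximated by a weighted sum of exponentials,

      $$ \bar{T}(w) = \frac{1}{\Delta\nu} \int_{\Delta\nu} e^{-k_\nu w}\, d\nu \;\approx\; \sum_{i=1}^{N} p_i\, e^{-k_i w}, \qquad \sum_{i} p_i = 1, $$

    where the p_i are the flux-weighted k-distribution weights; this replaces an expensive spectral integration with a short sum while remaining usable in multiple-scattering (cloudy, hazy) radiative transfer.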

  18. Nonnegative definite EAP and ODF estimation via a unified multi-shell HARDI reconstruction.

    PubMed

    Cheng, Jian; Jiang, Tianzi; Deriche, Rachid

    2012-01-01

    In High Angular Resolution Diffusion Imaging (HARDI), the Orientation Distribution Function (ODF) and the Ensemble Average Propagator (EAP) are two important Probability Density Functions (PDFs) which reflect water diffusion and fiber orientations. Spherical Polar Fourier Imaging (SPFI) is a recent model-free multi-shell HARDI method which estimates both the EAP and the ODF from diffusion signals with multiple b values. As physical PDFs, ODFs and EAPs are nonnegative definite in their respective domains S2 and R3. However, existing ODF/EAP estimation methods like SPFI seldom consider this natural constraint. Although some works have considered the nonnegativity constraint on given discrete samples of the ODF/EAP, the estimated ODF/EAP is not guaranteed to be nonnegative definite in the whole continuous domain. A Riemannian framework for ODFs and EAPs has been proposed via the square-root parameterization, based on ODFs and EAPs pre-estimated by other methods like SPFI. However, there has been no work on how to estimate the square root of the ODF/EAP, called the wavefunction, directly from diffusion signals. In this paper, based on the Riemannian framework for ODFs/EAPs and the Spherical Polar Fourier (SPF) basis representation, we propose a unified model-free multi-shell HARDI method, named Square Root Parameterized Estimation (SRPE), to simultaneously estimate both the wavefunction of the EAP and the nonnegative definite ODF and EAP from diffusion signals. Experiments on synthetic and real data showed that SRPE is more robust to noise and has better EAP reconstruction than SPFI, especially for EAP profiles at large radius.
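
    The square-root device at the heart of SRPE is simple to state (our paraphrase): estimating the wavefunction rather than the PDF builds nonnegativity and unit mass in by construction,

      $$ P(\mathbf{x}) = \psi(\mathbf{x})^2 \ge 0, \qquad \int \psi(\mathbf{x})^2\, d\mathbf{x} = 1 \iff \|\psi\|_{L^2} = 1, $$

    so any estimate of psi on the unit sphere in L^2 yields a valid, nonnegative, normalized PDF over the whole continuous domain, not merely at sampled points.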

  19. A linear parameter-varying multiobjective control law design based on youla parametrization for a flexible blended wing body aircraft

    NASA Astrophysics Data System (ADS)

    Demourant, F.; Ferreres, G.

    2013-12-01

    This article presents a methodology for linear parameter-varying (LPV) multiobjective flight control law design for a blended wing body (BWB) aircraft, together with results. The method is a direct design of a parameterized control law (with respect to some measured flight parameters) through a multimodel convex design that optimizes a set of specifications over the full flight domain and different mass cases. The methodology is based on the Youla parameterization, which is very useful since closed-loop specifications are affine with respect to the Youla parameter. The LPV multiobjective design method is detailed and applied to the flexible BWB aircraft example.
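
    The affine property being exploited is the standard Youla result (our notation, not the article's): for fixed plant factorizations, every achievable closed-loop transfer matrix has the form

      $$ T_{zw}(Q) = T_1 + T_2\, Q\, T_3, $$

    which is affine in the Youla parameter Q, so multimodel, multiobjective closed-loop specifications become convex constraints on Q and the full-flight-domain design can be posed as a convex optimization.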

  20. The concentration dependence of the galaxy–halo connection: Modeling assembly bias with abundance matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmann, Benjamin V.; Mao, Yao -Yuan; Becker, Matthew R.

    Empirical methods for connecting galaxies to their dark matter halos have become essential for interpreting measurements of the spatial statistics of galaxies. In this work, we present a novel approach for parameterizing the degree of concentration dependence in the abundance matching method. Furthermore, this new parameterization provides a smooth interpolation between two commonly used matching proxies: the peak halo mass and the peak halo maximal circular velocity. This parameterization controls the amount of dependence of galaxy luminosity on halo concentration at a fixed halo mass. Effectively this interpolation scheme enables abundance matching models to have adjustable assembly bias in the resulting galaxy catalogs. With the new 400 Mpc h^-1 DarkSky Simulation, whose larger volume provides lower sample variance, we further show that low-redshift two-point clustering and satellite fraction measurements from SDSS can already provide a joint constraint on this concentration dependence and the scatter within the abundance matching framework.

  1. The concentration dependence of the galaxy–halo connection: Modeling assembly bias with abundance matching

    DOE PAGES

    Lehmann, Benjamin V.; Mao, Yao -Yuan; Becker, Matthew R.; ...

    2016-12-28

    Empirical methods for connecting galaxies to their dark matter halos have become essential for interpreting measurements of the spatial statistics of galaxies. In this work, we present a novel approach for parameterizing the degree of concentration dependence in the abundance matching method. Furthermore, this new parameterization provides a smooth interpolation between two commonly used matching proxies: the peak halo mass and the peak halo maximal circular velocity. This parameterization controls the amount of dependence of galaxy luminosity on halo concentration at a fixed halo mass. Effectively this interpolation scheme enables abundance matching models to have adjustable assembly bias in the resulting galaxy catalogs. With the new 400 Mpc h^-1 DarkSky Simulation, whose larger volume provides lower sample variance, we further show that low-redshift two-point clustering and satellite fraction measurements from SDSS can already provide a joint constraint on this concentration dependence and the scatter within the abundance matching framework.
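
    A hedged sketch of the interpolating proxy (as we understand Lehmann et al.; the paper's exact definition may differ): a one-parameter family that blends a mass-like velocity with the peak maximal circular velocity, with alpha dialing the concentration dependence:

        def matching_proxy(v_vir, v_max, alpha):
            # alpha = 0 recovers the mass-like proxy (v_vir);
            # alpha = 1 recovers v_max; larger alpha strengthens the
            # concentration (assembly-bias) dependence of the ranking.
            return v_vir * (v_max / v_vir) ** alpha

        # Two halos with equal v_vir but different concentration (v_max/v_vir):
        for alpha in (0.0, 0.6):
            print(alpha,
                  matching_proxy(200.0, 220.0, alpha),
                  matching_proxy(200.0, 260.0, alpha))

    At alpha = 0 the two halos are ranked identically; at alpha > 0 the more concentrated halo is ranked higher and therefore receives a brighter galaxy under abundance matching.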

  2. Acoustic and elastic waveform inversion best practices

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We identify best practices instead using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Beyond these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lame parameters performing well with amplitude-based objective functions. Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because the rotation angle parameters describing the fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing the experimental results are given.
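
    For concreteness, one common form of the regularized waveform misfit compared in such studies (standard notation, not specific to this work):

      $$ \chi(\mathbf{m}) = \tfrac{1}{2} \left\| \mathbf{d}_{\mathrm{obs}} - \mathbf{F}(\mathbf{m}) \right\|_2^2 + \tfrac{\lambda}{2} \left\| \nabla \mathbf{m} \right\|_2^2, $$

    where F is the forward wave-equation operator, m the chosen material parameterization (e.g., wavespeed-like or Lame parameters), and lambda the Tikhonov weight; total variation regularization replaces the second term with a lambda-weighted L1 norm of the model gradient.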

  3. 10 Ways to Improve the Representation of MCSs in Climate Models

    NASA Astrophysics Data System (ADS)

    Schumacher, C.

    2017-12-01

    1. The first way to improve the representation of mesoscale convective systems (MCSs) in global climate models (GCMs) is to recognize that MCSs are important to climate. That may be obvious to most of the people attending this session, but it cannot be taken for granted in the wider community. The fact that MCSs produce a large fraction of global rainfall and that they dramatically impact the atmosphere via transports of heat, moisture, and momentum must be continuously stressed. 2-4. There have traditionally been three approaches to representing MCSs and/or their impacts in GCMs. The first is to focus on improving cumulus parameterizations by implementing things like cold pools that are assumed to better organize convection. The second is to focus on including mesoscale processes in the cumulus parameterization, such as mesoscale vertical motions. The third is to simply buy your way out with higher resolution, using techniques like super-parameterization or global cloud-resolving model runs. All of these approaches have their pros and cons, but none of them satisfactorily solves the MCS climate modeling problem. 5-10. Looking forward, there is active discussion and there are new ideas in the modeling community on how to better represent convective organization in models. A number of these ideas are a dramatic shift from the traditional plume-based cumulus parameterizations of most GCMs, such as implementing mesoscale parameterizations based on their physical impacts (e.g., via heating), on empirical relationships derived from big data/machine learning, or on stochastic approaches. Regardless of the technique employed, smart evaluation processes using observations are paramount for refining and constraining the inevitable tunable parameters in any parameterization.

  4. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations

    PubMed Central

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2014-01-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed conduction velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios. PMID:24729986

  5. Development of Turbulent Biological Closure Parameterizations

    DTIC Science & Technology

    2011-09-30

    LONG-TERM GOAL: The long-term goals of this project are: (1) to develop a theoretical framework to quantify turbulence-induced NPZ (nutrient-phytoplankton-zooplankton) interactions; (2) to apply the theory to develop parameterizations to be used in realistic numerical models of environmental physical-biological coupling. OBJECTIVES: Connect the Goodman and Robinson (2008) statistically based pdf theory to Advection-Diffusion-Reaction (ADR) modeling of NPZ interaction.

  6. A Survey of Shape Parameterization Techniques

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1999-01-01

    This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.

  7. Uniting statistical and individual-based approaches for animal movement modelling.

    PubMed

    Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel

    2014-01-01

    The dynamic nature of their internal states and the environment directly shape animals' spatial behaviours and give rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, due to the practical impossibility of accessing internal states through fieldwork and the inability of current statistical models to produce dynamic outputs. To address these issues, we developed a robust method which combines statistical and individual-based modelling. Using a statistical technique for forward modelling of the IBM has the advantage of being faster to parameterize than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM was validated using both k-fold cross-validation and emergent-pattern validation, and was tested for two different scenarios with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method was able to capture the complexity of the natural system and adequately provide projections on future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems.
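
    The parameterization step can be pictured with a toy kernel. The sketch below only illustrates how a fitted Step Selection Function drives an IBM: each candidate step is scored by exp(beta · covariates) and the next location is drawn with the corresponding probability. The covariates and the beta values are invented for illustration; in the study, the coefficients are estimated from the observed GPS steps.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def choose_step(candidates, beta):
        """Pick one candidate step with probability proportional to the
        Step Selection Function weight exp(beta . covariates).

        candidates: (n_steps, n_covariates) habitat covariates at the
        candidate end points; beta: (n_covariates,) selection coefficients.
        """
        w = np.exp(candidates @ beta)
        return rng.choice(len(w), p=w / w.sum())

    # Illustrative only: two covariates (e.g. forage quality, road proximity)
    beta = np.array([1.2, -0.8])
    candidates = rng.random((10, 2))
    print(choose_step(candidates, beta))
    ```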

  9. Local gravity field modeling using spherical radial basis functions and a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mahbuby, Hany; Safari, Abdolreza; Foroughi, Ismael

    2017-05-01

    Spherical Radial Basis Functions (SRBFs) can express the local gravity field model of the Earth if they are parameterized optimally on or below the Bjerhammar sphere. This parameterization is generally defined as the shape of the base functions, their number, center locations, bandwidths, and scale coefficients. The number/location and bandwidths of the base functions are the most important parameters for accurately representing the gravity field; once they are determined, the scale coefficients can be computed accordingly. In this study, the point-mass kernel, as the simplest shape of SRBFs, is chosen to evaluate the synthesized free-air gravity anomalies over the rough area of Auvergne, and GNSS/leveling points (synthetic height anomalies) are used to validate the results. A two-step automatic approach is proposed to determine the optimum distribution of the base functions. First, the locations of the base functions and their bandwidths are found using the genetic algorithm; second, the conjugate gradient least squares method is employed to estimate the scale coefficients. The proposed methodology shows promising results. On the one hand, when using the genetic algorithm, the base functions do not need to be set on a regular grid and they can move according to the roughness of the topography. In this way, the models meet the desired accuracy with a low number of base functions. On the other hand, the conjugate gradient method removes the bias between the quasigeoid heights derived from the model and from the GNSS/leveling points; this means there is no need for a corrector surface. The numerical test on the area of interest revealed an RMS of 0.48 mGal for the differences between predicted and observed gravity anomalies, and a corresponding RMS of 9 cm for the differences at the GNSS/leveling points.
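
    The second step of the proposed approach reduces to a linear least-squares problem once the centers and bandwidths are fixed. Below is a minimal numpy sketch under simplifying assumptions: the 1/r point-mass kernel is used directly as the design matrix (the real functional relating point masses to gravity anomalies is more involved), and the genetic-algorithm stage is assumed to have already delivered the center positions.

    ```python
    import numpy as np

    def point_mass_kernel(obs, centers):
        """Design matrix A[j, i] = 1 / |obs_j - center_i| (illustrative
        point-mass kernel); obs (n, 3) and centers (m, 3) in Cartesian
        coordinates, centers placed below the Bjerhammar sphere."""
        d = np.linalg.norm(obs[:, None, :] - centers[None, :, :], axis=2)
        return 1.0 / d

    def cg_least_squares(A, y, n_iter=200, tol=1e-10):
        """Conjugate gradients on the normal equations A^T A w = A^T y."""
        AtA, Aty = A.T @ A, A.T @ y
        w = np.zeros(A.shape[1])
        r = Aty - AtA @ w
        p = r.copy()
        for _ in range(n_iter):
            Ap = AtA @ p
            alpha = (r @ r) / (p @ Ap)
            w += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        return w
    ```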

  10. On the usage of classical nucleation theory in predicting the impact of bacteria on weather and climate

    NASA Astrophysics Data System (ADS)

    Sahyoun, Maher; Woetmann Nielsen, Niels; Havskov Sørensen, Jens; Finster, Kai; Bay Gosewinkel Karlson, Ulrich; Šantl-Temkiv, Tina; Smith Korsholm, Ulrik

    2014-05-01

    Bacteria, e.g. Pseudomonas syringae, have previously been found to nucleate ice heterogeneously and efficiently at temperatures close to -2°C in laboratory tests. Therefore, ice nucleation active (INA) bacteria may be involved in the formation of precipitation in mixed-phase clouds and could potentially influence weather and climate. Investigations into the impact of INA bacteria on climate have shown that emissions were too low to significantly impact the climate (Hoose et al., 2010). The goal of this study is to clarify why only a marginal impact on climate was found when INA bacteria were considered, by investigating the usability of ice nucleation rate parameterizations based on classical nucleation theory (CNT). For this purpose, two parameterizations of heterogeneous ice nucleation were compared. Both parameterizations were implemented and tested in a 1-d version of the operational weather model HIRLAM (Lynch et al., 2000; Unden et al., 2002) in two different meteorological cases. The first parameterization is based on CNT and denoted CH08 (Chen et al., 2008); it is a function of temperature and the size of the IN. The second parameterization, denoted HAR13, was derived from nucleation measurements of Snomax(TM) (Hartmann et al., 2013); it is a function of temperature and the number of protein complexes on the outer membranes of the cell. The fraction of cloud droplets containing each type of IN, expressed as a percentage of the cloud droplet population, was varied, and the sensitivity of cloud ice production under each parameterization was compared. In this study, HAR13 produces more cloud ice and precipitation than CH08 when the bacteria fraction increases, whereas in CH08 an increase of the bacteria fraction leads to a decrease in the cloud ice mixing ratio. Ice production using HAR13 was thus found to be much more sensitive to changes in the bacterial fraction than with CH08, which showed no comparable sensitivity. This may explain the marginal impact of INA bacteria in climate models when CH08 was used. The number of cell fragments containing proteins appears to be a more important parameter to consider than the size of the cell when parameterizing the heterogeneous freezing of bacteria.

  11. Finescale parameterizations of energy dissipation in a region of strong internal tides and sheared flow, the Lucky-Strike segment of the Mid-Atlantic Ridge

    NASA Astrophysics Data System (ADS)

    Pasquet, Simon; Bouruet-Aubertot, Pascale; Reverdin, Gilles; Turnherr, Andreas; Laurent, Lou St.

    2016-06-01

    The relevance of finescale parameterizations of the dissipation rate of turbulent kinetic energy is addressed using finescale and microstructure measurements collected in the Lucky Strike segment of the Mid-Atlantic Ridge (MAR). There, high-amplitude internal tides and a strongly sheared mean flow sustain a high level of dissipation rate and turbulent mixing. Two sets of parameterizations are considered: the first (Gregg, 1989; Kunze et al., 2006) were derived to estimate the dissipation rate of turbulent kinetic energy induced by internal wave breaking, while the second aims to estimate dissipation induced by shear instability of a strongly sheared mean flow and is a function of the Richardson number (Kunze et al., 1990; Polzin, 1996). The latter parameterization has low skill in reproducing the observed dissipation rate when shear-unstable events are resolved, presumably because there is no scale separation between the duration of unstable events and the inverse growth rate of unstable billows. In contrast, the Garrett-Munk (GM) based parameterizations were found to be relevant, although slight biases were observed. Part of these biases results from the small value of the upper vertical wavenumber integration limit in the computation of shear variance in the Kunze et al. (2006) parameterization, which does not take into account the internal wave signal at high vertical wavenumbers. We show that significant improvement is obtained when the upper integration limit is set using a signal-to-noise ratio criterion, and that the spatial structure of dissipation rates is reproduced with this parameterization.

  12. Evaluation of Methods to Estimate the Surface Downwelling Longwave Flux during Arctic Winter

    NASA Technical Reports Server (NTRS)

    Chiacchio, Marc; Francis, Jennifer; Stackhouse, Paul, Jr.

    2002-01-01

    Surface longwave radiation fluxes dominate the energy budget of nighttime polar regions, yet little is known about the relative accuracy of existing satellite-based techniques to estimate this parameter. We compare eight methods to estimate the downwelling longwave radiation flux and validate their performance with measurements from two field programs in the Arctic: the Coordinated Eastern Arctic Experiment (CEAREX) conducted in the Barents Sea during the autumn and winter of 1988, and the Lead Experiment performed in the Beaufort Sea in the spring of 1992. Five of the eight methods were developed for satellite-derived quantities, and three are simple parameterizations based on surface observations. All of the algorithms require information about cloud fraction, which is provided from the NASA-NOAA Television and Infrared Observation Satellite (TIROS) Operational Vertical Sounder (TOVS) polar pathfinder dataset (Path-P); some techniques ingest temperature and moisture profiles (also from Path-P); one-half of the methods assume that clouds are opaque and have a constant geometric thickness of 50 hPa, and three include no thickness information whatsoever. With a somewhat limited validation dataset, the following primary conclusions result: (1) all methods exhibit approximately the same correlations with measurements and rms differences, but the biases range from -34 W m-2 (16% of the mean) to nearly 0; (2) the error analysis described here indicates that the assumed 50-hPa cloud thickness is too thin by a factor of 2 on average in polar nighttime conditions; (3) cloud-overlap techniques, which effectively increase mean cloud thickness, significantly improve the results; (4) simple Arctic-specific parameterizations performed poorly, probably because they were developed with surface-observed cloud fractions; and (5) the single algorithm that includes an estimate of cloud thickness exhibits the smallest differences from observations.

  13. Comments on “A Unified Representation of Deep Moist Convection in Numerical Modeling of the Atmosphere. Part I”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man

    2015-06-01

    Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ << 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 - σ in order to unify the parameterization for the full range of model resolutions, so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 - σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.

  14. Global Binary Optimization on Graphs for Classification of High Dimensional Data

    DTIC Science & Technology

    2014-09-01

    Buades et al. in [10] introduce a new non-local means algorithm for image denoising and compare it to some of the best methods. In [28], Grady describes a random walk algorithm for image segmentation using the solution to a Dirichlet problem. Elmoataz et al. present generalizations of the graph Laplacian [19] for image denoising and manifold smoothing. Couprie et al. in [16] propose a parameterized graph-based energy function that unifies

  15. Soil Conservation Service Curve Number method: How to mend a wrong soil moisture accounting procedure?

    NASA Astrophysics Data System (ADS)

    Michel, Claude; Andréassian, Vazken; Perrin, Charles

    2005-02-01

    This paper unveils major inconsistencies in the age-old and yet efficient Soil Conservation Service Curve Number (SCS-CN) procedure. Our findings are based on an analysis of the continuous soil moisture accounting procedure implied by the SCS-CN equation. It is shown that several flaws plague the original SCS-CN procedure, the most important one being a confusion between intrinsic parameter and initial condition. A change of parameterization and a more complete assessment of the initial condition lead to a renewed SCS-CN procedure, while keeping the acknowledged efficiency of the original method.
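
    For reference, the classical event equations behind the SCS-CN procedure (the starting point of the authors' analysis, not their renewed version) fit in a few lines:

    ```python
    def scs_cn_runoff(p_mm, cn, lam=0.2):
        """Classical SCS-CN event runoff [mm].

        p_mm: event rainfall depth [mm]; cn: curve number (0 < cn <= 100);
        lam: initial-abstraction ratio, conventionally 0.2.
        """
        s = 25400.0 / cn - 254.0          # maximum potential retention [mm]
        ia = lam * s                      # initial abstraction [mm]
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    print(scs_cn_runoff(50.0, 75))        # ~9.3 mm for P = 50 mm, CN = 75
    ```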

  16. On the joint inversion of geophysical data for models of the coupled core-mantle system

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1991-01-01

    Joint inversion of magnetic, earth rotation, geoid, and seismic data for a unified model of the coupled core-mantle system is proposed and shown to be possible. A sample objective function is offered and simplified by targeting results from independent inversions and summary travel time residuals instead of original observations. These data are parameterized in terms of a very simple, closed model of the topographically coupled core-mantle system. Minimization of the simplified objective function leads to a nonlinear inverse problem; an iterative method for solution is presented. Parameterization and method are emphasized; numerical results are not presented.

  17. Structural test of the parameterized-backbone method for protein design.

    PubMed

    Plecs, Joseph J; Harbury, Pehr B; Kim, Peter S; Alber, Tom

    2004-09-03

    Designing new protein folds requires a method for simultaneously optimizing the conformation of the backbone and the side-chains. One approach to this problem is the use of a parameterized backbone, which allows the systematic exploration of families of structures. We report the crystal structure of RH3, a right-handed, three-helix coiled coil that was designed using a parameterized backbone and detailed modeling of core packing. This crystal structure was determined using another rationally designed feature, a metal-binding site that permitted experimental phasing of the X-ray data. RH3 adopted the intended fold, which has not been observed previously in biological proteins. Unanticipated structural asymmetry in the trimer was a principal source of variation within the RH3 structure. The sequence of RH3 differs from that of a previously characterized right-handed tetramer, RH4, at only one position in each 11 amino acid sequence repeat. This close similarity indicates that the design method is sensitive to the core packing interactions that specify the protein structure. Comparison of the structures of RH3 and RH4 indicates that both steric overlap and cavity formation provide strong driving forces for oligomer specificity.

  18. Infrared radiation parameterizations for the minor CO2 bands and for several CFC bands in the window region

    NASA Technical Reports Server (NTRS)

    Kratz, David P.; Chou, Ming-Dah; Yan, Michael M.-H.

    1993-01-01

    Fast and accurate parameterizations have been developed for the transmission functions of the CO2 9.4- and 10.4-micron bands, as well as the CFC-11, CFC-12, and CFC-22 bands located in the 8-12-micron region. The parameterizations are based on line-by-line calculations of transmission functions for the CO2 bands and on high spectral resolution laboratory measurements of the absorption coefficients for the CFC bands. Also developed are the parameterizations for the H2O transmission functions for the corresponding spectral bands. Compared to the high-resolution calculations, fluxes at the tropopause computed with the parameterizations are accurate to within 10 percent when overlapping of gas absorptions within a band is taken into account. For individual gas absorption, the accuracy is of order 0-2 percent. The climatic effects of these trace gases have been studied using a zonally averaged multilayer energy balance model, which includes seasonal cycles and a simplified deep ocean. With the trace gas abundances taken to follow the Intergovernmental Panel on Climate Change Low Emissions 'B' scenario, the transient response of the surface temperature is simulated for the period 1900-2060.

  19. Parameterization of planetary wave breaking in the middle atmosphere

    NASA Technical Reports Server (NTRS)

    Garcia, Rolando R.

    1991-01-01

    A parameterization of planetary wave breaking in the middle atmosphere has been developed and tested in a numerical model which includes governing equations for a single wave and the zonal-mean state. The parameterization is based on the assumption that wave breaking represents a steady-state equilibrium between the flux of wave activity and its dissipation by nonlinear processes, and that the latter can be represented as linear damping of the primary wave. With this and the additional assumption that the effect of breaking is to prevent further amplitude growth, the required dissipation rate is readily obtained from the steady-state equation for wave activity; diffusivity coefficients then follow from the dissipation rate. The assumptions made in the derivation are equivalent to those commonly used in parameterizations for gravity wave breaking, but the formulation in terms of wave activity helps highlight the central role of the wave group velocity in determining the dissipation rate. Comparison of model results with nonlinear calculations of wave breaking and with diagnostic determinations of stratospheric diffusion coefficients reveals remarkably good agreement, and suggests that the parameterization could be useful for simulating inexpensively, but realistically, the effects of planetary wave transport.

  20. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model

    NASA Astrophysics Data System (ADS)

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M.; Phifer, Jeremy R.; Paluch, Andrew S.

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of 2.2 ± 0.2 log units (ranking 15 out of 62 entries), the correlation coefficient (R) was 0.6 ± 0.1 (ranking 35), and 72 ± 6 % of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.
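
    The conversion from solvation free energies to the predicted quantity is a one-line thermodynamic relation worth stating explicitly. This is generic, not MOSCED itself; the free-energy inputs would come from the SMD electronic-structure calculations described above.

    ```python
    import math

    R = 8.314462618e-3   # gas constant, kJ mol-1 K-1

    def log10_partition(dg_water, dg_cyclohexane, temp=298.15):
        """log10 P (cyclohexane/water) of the neutral species from its
        solvation free energies [kJ/mol] in the two solvents; the
        distribution coefficient is approximated by this partition
        coefficient, as in the abstract."""
        return (dg_water - dg_cyclohexane) / (R * temp * math.log(10))

    # Illustrative numbers only: a solute solvated 10 kJ/mol more favourably
    # in cyclohexane than in water gives log10 P of about +1.75.
    print(log10_partition(-20.0, -30.0))
    ```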

  1. A Graphical User Interface for Parameterizing Biochemical Models of Photosynthesis and Chlorophyll Fluorescence

    NASA Astrophysics Data System (ADS)

    Kornfeld, A.; Van der Tol, C.; Berry, J. A.

    2015-12-01

    Recent advances in optical remote sensing of photosynthesis offer great promise for estimating gross primary productivity (GPP) at leaf, canopy and even global scale. These methods - including solar-induced chlorophyll fluorescence (SIF) emission, fluorescence spectra, and hyperspectral features such as the red edge and the photochemical reflectance index (PRI) - can be used to greatly enhance the predictive power of global circulation models (GCMs) by providing better constraints on GPP. The way to use measured optical data to parameterize existing models such as SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) is not trivial, however. We have therefore extended a biochemical model to include fluorescence and other parameters in a coupled treatment. To help parameterize the model, we then use nonlinear curve-fitting routines to determine the parameter set that enables model results to best fit leaf-level gas exchange and optical data measurements. To make the tool more accessible to all practitioners, we have further designed a graphical user interface (GUI) front-end that allows researchers to analyze data with a minimum of effort while, at the same time, allowing them to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. Here we discuss the tool and its effectiveness, using recently gathered leaf-level data.
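
    The fitting step behind such a GUI can be illustrated with a deliberately simple stand-in model. The snippet fits a rectangular-hyperbola light-response curve to synthetic gas-exchange data with scipy; the real tool fits a richer coupled photosynthesis-fluorescence model, so treat the model form and all numbers as placeholders.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def light_response(par, a_max, k, r_d):
        # net assimilation vs photosynthetically active radiation (PAR)
        return a_max * par / (par + k) - r_d

    par = np.array([0, 50, 100, 200, 400, 800, 1200, 1600.0])    # umol m-2 s-1
    a_net = np.array([-1.0, 2.1, 4.0, 6.3, 8.1, 9.4, 9.9, 10.1]) # synthetic

    popt, pcov = curve_fit(light_response, par, a_net, p0=[10.0, 200.0, 1.0])
    a_max, k, r_d = popt
    print(f"Amax={a_max:.2f}, K={k:.1f}, Rd={r_d:.2f}")
    ```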

  2. Quality Assessment of the Cobel-Isba Numerical Forecast System of Fog and Low Clouds

    NASA Astrophysics Data System (ADS)

    Bergot, Thierry

    2007-06-01

    Short-term forecasting of fog is a difficult issue which can have a large societal impact. Fog appears in the surface boundary layer and is driven by the interactions between the land surface and the lower layers of the atmosphere. These interactions are still not well parameterized in current operational NWP models, and a new methodology based on local observations, an adaptive assimilation scheme and a local numerical model is tested. The proposed numerical method for forecasting foggy conditions was run for three years at Paris-CdG international airport. This test over a long period allows an in-depth evaluation of the forecast quality. This study demonstrates that detailed 1-D models, including detailed physical parameterizations and high vertical resolution, can reasonably represent the major features of the life cycle of fog (onset, development and dissipation) up to +6 h. The error on the forecast onset and burn-off times is typically 1 h. The major weakness of the methodology is related to the evolution of low clouds (stratus lowering). Even when the occurrence of fog is well forecast, the horizontal visibility is only crudely forecast. Improvements in the microphysical parameterization and in the translation algorithm converting NWP prognostic variables into a corresponding horizontal visibility seem necessary to accurately forecast the visibility.

  3. Assessing uncertainty in published risk estimates using ...

    EPA Pesticide Factsheets

    Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective is to characterize model uncertainty by evaluating estimates across published epidemiologic studies of the same cohort. Methods: This analysis was based on 5 studies analyzing a cohort of 2,357 workers employed from 1950-74 in a chromate production plant in Maryland. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag; however, other latency periods and model covariates such as age and race were also considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric to compare variability within and between model forms. A total of 5 similarly parameterized analyses were considered across model form, and 16 analyses with alternative parameterizations were considered within model form (10 Cox; 6 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients (betas) for 5 similar analyses ranged from 2.47 to 4.33 (mean=2.97, σ²=0.63). Within the 10 Cox models, coefficients ranged from 2.53 to 4.42 (mean=3.29, σ²=0.

  4. Research on Finite Element Model Generating Method of General Gear Based on Parametric Modelling

    NASA Astrophysics Data System (ADS)

    Lei, Yulong; Yan, Bo; Fu, Yao; Chen, Wei; Hou, Liguo

    2017-06-01

    To address the low efficiency and poor mesh quality of gear models in current mainstream finite element software, a universal three-dimensional gear model is established and the rules of element and node arrangement are explored. In this paper, a parameterization-based finite element model generation method for universal gears is proposed. A Visual Basic program is used to carry out the finite element meshing, assign material properties, and set the boundary/load conditions and other pre-processing work. The dynamic meshing analysis of the gears is carried out with the method proposed in this paper and compared with calculated values to verify the correctness of the method. The method greatly reduces the workload of gear finite element pre-processing, improves the quality of the gear mesh, and provides a new idea for FEM pre-processing.

  5. Use of Mass- and Area-Dimensional Power Laws for Determining Precipitation Particle Terminal Velocities.

    NASA Astrophysics Data System (ADS)

    Mitchell, David L.

    1996-06-01

    Based on boundary layer theory and a comparison of empirical power laws relating the Reynolds and Best numbers, it was apparent that the primary variables governing a hydrometeor's terminal velocity were its mass, its area projected to the flow, and its maximum dimension. The dependence of terminal velocities on surface roughness appeared secondary, with surface roughness apparently changing significantly only during phase changes (i.e., ice to liquid). In the theoretical analysis, a new, comprehensive expression for the drag force, which is valid for both inertial and viscous-dominated flow, was derived.A hydrometeor's mass and projected area were simply and accurately represented in terms of its maximum dimension by using dimensional power laws. Hydrometeor terminal velocities were calculated by using mass- and area-dimensional power laws to parameterize the Best number, X. Using a theoretical relationship general for all particle types, the Reynolds number, Re, was then calculated from the Best number. Terminal velocities were calculated from Re.Alternatively, four Re-X power-law expressions were extracted from the theoretical Re-X relationship. These expressions collectively describe the terminal velocities of all ice particle types. These were parameterized using mass- and area-dimensional power laws, yielding four theoretically based power-law expressions predicting fall speeds in terms of ice particle maximum dimension. When parameterized for a given ice particle type, the theoretical fall speed power law can be compared directly with empirical fall speed-dimensional power laws in the literature for the appropriate Re range. This provides a means of comparing theory with observations.Terminal velocities predicted by this method were compared with fall speeds given by empirical fall speed expressions for the same ice particle type, which were curve fits to measured fall speeds. Such comparisons were done for nine types of ice particles. Fall speeds predicted by this method differed from those based on measurements by no more than 20%.The features that distinguish this method of determining fall speeds from others are that it does not represent particles as spheroids, it is general for any ice particle shape and size, it is conceptually and mathematically simple, it appears accurate, and it provides for physical insight. This method also allows fall speeds to be determined from aircraft measurements of ice particle mass and projected area, rather than directly measuring fall speeds. This approach may be useful for ice crystals characterizing cirrus clouds, for which direct fall speed measurements are difficult.
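
    The computational chain described above is short enough to sketch. All coefficients below are illustrative placeholders (the paper derives regime-specific Re-X pairs and particle-type-specific mass/area power laws); only the structure m(D), A(D) -> Best number X -> Reynolds number Re -> fall speed is the point.

    ```python
    G = 9.81          # gravitational acceleration, m s-2
    RHO_AIR = 1.0     # air density, kg m-3
    NU = 1.5e-5       # kinematic viscosity of air, m2 s-1

    def fall_speed(d, alpha, beta, gamma, sigma, a=0.2, b=0.64):
        """Terminal velocity [m s-1] of a particle of maximum dimension d [m]
        with mass m = alpha * d**beta and projected area A = gamma * d**sigma.
        (a, b) define an illustrative Re = a * X**b power law, not the
        paper's fitted values."""
        m = alpha * d ** beta
        area = gamma * d ** sigma
        x = 2.0 * m * G * d ** 2 / (RHO_AIR * area * NU ** 2)  # Best number
        re = a * x ** b                                        # Re(X) power law
        return re * NU / d
    ```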

  6. Shortcomings with Tree-Structured Edge Encodings for Neural Networks

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2004-01-01

    In evolutionary algorithms a common method for encoding neural networks is to use a tree structured assembly procedure for constructing them. Since node operators have difficulties in specifying edge weights and these operators are execution-order dependent, an alternative is to use edge operators. Here we identify three problems with edge operators: in the initialization phase most randomly created genotypes produce an incorrect number of inputs and outputs; variation operators can easily change the number of input/output (I/O) units; and units have a connectivity bias based on their order of creation. Instead of creating I/O nodes as part of the construction process we propose using parameterized operators to connect to preexisting I/O units. Results from experiments show that these parameterized operators greatly improve the probability of creating and maintaining networks with the correct number of I/O units, remove the connectivity bias with I/O units and produce better controllers for a goal-scoring task.

  7. Influence of natural surfactants on short wind waves in the coastal Peruvian waters

    NASA Astrophysics Data System (ADS)

    Kiefhaber, D.; Zappa, C. J.; Jähne, B.

    2015-07-01

    Results from measurements of wave slope statistics during the R/V Meteor M91 cruise in the coastal upwelling regions off the coast of Peru are reported. Wave slope probability distributions were measured with an instrument based on the reflection of light at the water surface and a method very similar to the Cox and Munk (1954b) sun glitter technique. During the cruise, the mean square slope (mss) of the waves was found to be very variable, despite the limited range of encountered wind speeds. The Cox and Munk (1954b) parameterization for clean water is found to overestimate mss, but most measurements fall in the range spanned by their clean water and slick parameterizations. The observed variability of mss is attributed to the wave damping effect of surface films, generated by increased biological production in the upwelling zones. The small footprint and high temporal resolution of the measurement allows for tracking abrupt changes in conditions caused by the often patchy structure of the surface films.

  8. Systematic methods for the design of a class of fuzzy logic controllers

    NASA Astrophysics Data System (ADS)

    Yasin, Saad Yaser

    2002-09-01

    Fuzzy logic control, a relatively new branch of control, can be used effectively whenever conventional control techniques become inapplicable or impractical. Various attempts have been made to create a generalized fuzzy control system and to formulate an analytically based fuzzy control law. In this study, two methods, the left and right parameterization method and the normalized spline-base membership function method, were utilized for formulating analytical fuzzy control laws in important practical control applications. The first model was used to design an idle speed controller, while the second was used to control an inverted pendulum problem. The results of both showed that a fuzzy logic control system based on the developed models could be used effectively to control highly nonlinear and complex systems. This study also investigated the application of fuzzy control in areas not fully utilizing fuzzy logic control. Three important practical applications pertaining to the automotive industry were studied. The first automotive-related application was the idle speed of spark ignition engines, using two fuzzy control methods: (1) left and right parameterization, and (2) fuzzy clustering techniques and experimental data. The simulation and experimental results showed that a fuzzy controller with conventional-controller-like performance could be designed based only on experimental data and intuitive knowledge of the system. In the second application, the automotive cruise control problem, a fuzzy control model was developed using a parameter-adaptive Proportional plus Integral plus Derivative (PID)-type fuzzy logic controller. Results were comparable to those using linearized conventional PID and linear quadratic regulator (LQR) controllers and, in certain cases and conditions, the developed controller outperformed the conventional PID and LQR controllers. The third application involved the air/fuel ratio control problem, using fuzzy clustering techniques, experimental data, and a conversion algorithm to develop a fuzzy-based control algorithm. Results were similar to those obtained by recently published conventional-control-based studies. The influence of the fuzzy inference operators and parameters on the performance and stability of the fuzzy logic controller was studied. Results indicated that the selection of certain parameters, or combinations of parameters, greatly affects the performance and stability of the fuzzy controller. Diagnostic guidelines for tuning or changing certain factors or parameters to improve controller performance were developed based on knowledge gained from conventional control methods and from the experimental and simulation results of this study.

  9. Exploring a new method for the retrieval of urban thermophysical properties using thermal infrared remote sensing and deterministic modeling

    NASA Astrophysics Data System (ADS)

    De Ridder, K.; Bertrand, C.; Casanova, G.; Lefebvre, W.

    2012-09-01

    Increasingly, mesoscale meteorological and climate models are used to predict urban weather and climate. Yet, large uncertainties remain regarding the values of some urban surface properties. In particular, information concerning urban values for thermal roughness length and thermal admittance is scarce. In this paper, we present a method to estimate values for thermal admittance in combination with an optimal scheme for thermal roughness length, based on METEOSAT-8/SEVIRI thermal infrared imagery in conjunction with a deterministic atmospheric model containing a simple urbanized land surface scheme. Given the spatial resolution of the SEVIRI sensor, the resulting parameter values are applicable at scales of the order of 5 km. As a study case, we focused on the city of Paris for 29 June 2006. Land surface temperature was calculated from SEVIRI thermal radiances using a new split-window algorithm specifically designed to handle urban conditions, as described in Appendix A, including a correction for anisotropy effects. Land surface temperature was also calculated in an ensemble of simulations carried out with the ARPS mesoscale atmospheric model, combining different thermal roughness length parameterizations with a range of thermal admittance values. Particular care was taken to spatially match the simulated land surface temperature with the SEVIRI field of view, using the so-called point spread function of the latter. Using Bayesian inference, the best agreement between simulated and observed land surface temperature was obtained for the Zilitinkevich (1970) and Brutsaert (1975) thermal roughness length parameterizations, the latter with the coefficients obtained by Kanda et al. (2007). The retrieved thermal admittance values associated with either thermal roughness parameterization were, respectively, 1843 ± 108 J m-2 s-1/2 K-1 and 1926 ± 115 J m-2 s-1/2 K-1.
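
    The Bayesian selection step can be reduced to a small weighting exercise. The toy function below assumes a Gaussian observation error and a flat prior over a grid of candidate thermal admittance values, with one simulated land surface temperature field per candidate; names and the error model are illustrative, not the authors' exact inference.

    ```python
    import numpy as np

    def posterior_admittance(mu_grid, lst_sim, lst_obs, sigma_obs=1.0):
        """mu_grid: (n,) candidate admittances; lst_sim: (n, n_pixels)
        simulated LST per candidate; lst_obs: (n_pixels,) observed LST.
        Returns the posterior mean and standard deviation of admittance."""
        sq_misfit = np.sum((lst_sim - lst_obs) ** 2, axis=1)
        log_like = -0.5 * sq_misfit / sigma_obs ** 2
        w = np.exp(log_like - log_like.max())   # flat prior, stable exponent
        w /= w.sum()
        mean = np.sum(w * mu_grid)
        std = np.sqrt(np.sum(w * (mu_grid - mean) ** 2))
        return mean, std
    ```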

  10. Prescription of land-surface boundary conditions in GISS GCM 2: A simple method based on high-resolution vegetation data bases

    NASA Technical Reports Server (NTRS)

    Matthews, E.

    1984-01-01

    A simple method was developed for improved prescription of seasonal surface characteristics and parameterization of land-surface processes in climate models. This method, developed for the Goddard Institute for Space Studies General Circulation Model II (GISS GCM II), maintains the spatial variability of fine-resolution land-cover data while restricting to 8 the number of vegetation types handled in the model. This was achieved by: redefining the large number of vegetation classes in the 1 deg x 1 deg resolution Matthews (1983) vegetation data base as percentages of 8 simple types; deriving roughness length, field capacity, masking depth and seasonal, spectral reflectivity for the 8 types; and aggregating these surface features from the 1 deg x 1 deg resolution to coarser model resolutions, e.g., 8 deg latitude x 10 deg longitude or 4 deg latitude x 5 deg longitude.
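
    The aggregation itself is a straightforward area weighting, sketched below with invented type names and parameter values (the paper derives its own 8 types and their properties, and aggregates several parameters, not just albedo):

    ```python
    # Illustrative per-type surface parameter (here: snow-free albedo)
    ALBEDO = {"grass": 0.20, "forest": 0.12, "shrub": 0.17, "bare": 0.30}

    def coarse_cell_albedo(fractions):
        """Area-weighted albedo of a coarse grid cell described as
        fractions of simple vegetation types (fractions sum to 1)."""
        assert abs(sum(fractions.values()) - 1.0) < 1e-6
        return sum(f * ALBEDO[t] for t, f in fractions.items())

    print(coarse_cell_albedo({"grass": 0.5, "forest": 0.3, "bare": 0.2}))
    ```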

  11. Subgrid-scale parameterization and low-frequency variability: a response theory approach

    NASA Astrophysics Data System (ADS)

    Demaeyer, Jonathan; Vannitsem, Stéphane

    2016-04-01

    Weather and climate models are limited in the possible range of resolved spatial and temporal scales. However, due to the huge space- and time-scale ranges involved in Earth system dynamics, the effects of many sub-grid processes must be parameterized. These parameterizations have an impact on forecasts or projections. They could also affect the low-frequency variability present in the system (such as that associated with ENSO or NAO). An important question is therefore what impact stochastic parameterizations have on the low-frequency variability generated by the system and on its model representation. In this context, we consider a stochastic subgrid-scale parameterization based on Ruelle's response theory, proposed in Wouters and Lucarini (2012). We test this approach in the context of a low-order coupled ocean-atmosphere model, detailed in Vannitsem et al. (2015), for which a part of the atmospheric modes is considered as unresolved. A natural separation of the phase space into a slow invariant set and its fast complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, the fluctuation and the long-memory terms. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained. This new approach to scale separation opens new avenues for subgrid-scale parameterizations in multiscale systems used for climate forecasts. References: Vannitsem S, Demaeyer J, De Cruz L, Ghil M. 2015. Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model. Physica D: Nonlinear Phenomena 309: 71-85. Wouters J, Lucarini V. 2012. Disentangling multi-level systems: averaging, correlations and memory. Journal of Statistical Mechanics: Theory and Experiment 2012(03): P03003.

  12. Impact of different parameterization schemes on simulation of mesoscale convective system over south-east India

    NASA Astrophysics Data System (ADS)

    Madhulatha, A.; Rajeevan, M.

    2018-02-01

    The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted by considering various planetary boundary layer, microphysics, and cumulus parameterization schemes. The performance of the different schemes is evaluated by examining boundary layer, reflectivity, and precipitation features of the MCS using ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is able to simulate the reflectivity through a reasonable distribution of different hydrometeors during the various stages of the system; and the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation through a proper representation of the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent-kinetic-energy boundary layer scheme which accounts for strong vertical mixing; THM, a six-class hybrid-moment microphysics scheme which considers number concentration along with the mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme which adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. A numerical simulation carried out using the above combination of schemes is able to capture storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.

  13. SU-F-BRB-16: A Spreadsheet Based Automatic Trajectory GEnerator (SAGE): An Open Source Tool for Automatic Creation of TrueBeam Developer Mode Robotic Trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Etmektzoglou, A; Mishra, P; Svatos, M

    Purpose: To automate creation and delivery of robotic linac trajectories with TrueBeam Developer Mode, an open source spreadsheet-based trajectory generation tool has been developed, tested and made freely available. The computing power inherent in a spreadsheet environment plus additional functions programmed into the tool insulate users from the underlying schema tedium and allow easy calculation, parameterization, graphical visualization, validation and finally automatic generation of Developer Mode XML scripts which are directly loadable on a TrueBeam linac. Methods: The robotic control system platform that allows total coordination of potentially all linac moving axes with beam (continuous, step-and-shoot, or combination thereof) becomes available in TrueBeam Developer Mode. Many complex trajectories are either geometric or can be described in analytical form, making the computational power, graphing and programmability available in a spreadsheet environment an easy and ideal vehicle for automatic trajectory generation. The spreadsheet environment also allows for parameterization of trajectories, thus enabling the creation of entire families of trajectories using only a few variables. Standard spreadsheet functionality has been extended for powerful movie-like dynamic graphic visualization of the gantry, table, MLC, room, lasers, 3D observer placement and beam centerline, all as a function of MU or time, for analysis of the motions before requiring actual linac time. Results: We used the tool to generate and deliver extended SAD "virtual isocenter" trajectories of various shapes such as parameterized circles and ellipses. We also demonstrated use of the tool in generating linac couch motions that simulate respiratory motion using analytical parameterized functions. Conclusion: The SAGE tool is a valuable resource to experiment with families of complex geometric trajectories for a TrueBeam linac. It makes Developer Mode more accessible as a vehicle to quickly translate research ideas into machine readable scripts without programming knowledge. As an open source initiative, it also enables researcher collaboration on future developments. I am a full time employee at Varian Medical Systems, Palo Alto, California.

  14. Low-Power Embedded DSP Core for Communication Systems

    NASA Astrophysics Data System (ADS)

    Tsao, Ya-Lan; Chen, Wei-Hao; Tan, Ming Hsuan; Lin, Maw-Ching; Jou, Shyh-Jye

    2003-12-01

    This paper proposes a parameterized digital signal processor (DSP) core for an embedded digital signal processing system designed to achieve demodulation/synchronization with better performance and flexibility. The features of this DSP core include a parameterized data path, dual MAC unit, subword MAC, and optional function-specific blocks for accelerating communication system modulation operations. This DSP core also has a low-power structure, which includes a gray-code addressing mode, pipeline sharing, and advanced hardware looping. Users can select the parameters and special functional blocks based on the character of their applications and then generate a DSP core. The DSP core has been implemented via a cell-based design method using synthesizable Verilog code with TSMC 0.35 μm SPQM and 0.25 μm 1P5M libraries. The equivalent gate count of the core area without memory is approximately 50 k. Moreover, the maximum operating frequency is 100 MHz for the 0.35 μm version and 140 MHz for the 0.25 μm version.

  15. Recent developments and assessment of a three-dimensional PBL parameterization for improved wind forecasting over complex terrain

    NASA Astrophysics Data System (ADS)

    Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.

    2017-12-01

    At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity. This homogeneity assumption is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated. Applying a one-dimensional PBL parameterization to high-resolution mesoscale simulations in complex terrain could result in significant error. For high-resolution mesoscale simulations of flows in complex terrain, we have therefore developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982). Our implementation in the Weather Research and Forecasting (WRF) model uses a pure algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015-2017. We focus on selected cases when physical phenomena of significance for wind energy applications such as mountain waves, topographic wakes, and gap flows were observed. Our assessment of the 3D PBL parameterization also considers a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area. Both LES domains were discretized using 6000 x 3000 x 200 grid cells in zonal, meridional, and vertical direction, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients. The presentation will highlight the advantages of the 3D PBL scheme in regions of complex terrain.

  16. Simultaneous and synergistic profiling of cloud and drizzle properties using ground-based observations

    NASA Astrophysics Data System (ADS)

    Rusli, Stephanie P.; Donovan, David P.; Russchenberg, Herman W. J.

    2017-12-01

    Despite the importance of radar reflectivity (Z) measurements in the retrieval of liquid water cloud properties, it remains nontrivial to interpret Z due to the possible presence of drizzle droplets within the clouds. So far, there has been no published work that utilizes Z to identify the presence of drizzle above the cloud base in an optimized and a physically consistent manner. In this work, we develop a retrieval technique that exploits the synergy of different remote sensing systems to carry out this task and to subsequently profile the microphysical properties of the cloud and drizzle in a unified framework. This is accomplished by using ground-based measurements of Z, lidar attenuated backscatter below as well as above the cloud base, and microwave brightness temperatures. Fast physical forward models coupled to cloud and drizzle structure parameterization are used in an optimal-estimation-type framework in order to retrieve the best estimate for the cloud and drizzle property profiles. The cloud retrieval is first evaluated using synthetic signals generated from large-eddy simulation (LES) output to verify the forward models used in the retrieval procedure and the vertical parameterization of the liquid water content (LWC). From this exercise it is found that, on average, the cloud properties can be retrieved within 5 % of the mean truth. The full cloud-drizzle retrieval method is then applied to a selected ACCEPT (Analysis of the Composition of Clouds with Extended Polarization Techniques) campaign dataset collected in Cabauw, the Netherlands. An assessment of the retrieval products is performed using three independent methods from the literature; each was specifically developed to retrieve only the cloud properties, the drizzle properties below the cloud base, or the drizzle fraction within the cloud. One-to-one comparisons, taking into account the uncertainties or limitations of each retrieval, show that our results are consistent with what is derived using the three independent methods.
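
    The "optimal-estimation-type framework" has a standard computational core. The sketch below shows one Gauss-Newton iteration in the usual Rodgers form, with the coupled radar/lidar/radiometer forward model F, its Jacobian K, and the prior and measurement covariances as opaque placeholders; it is a generic statement of the method, not this retrieval's code.

    ```python
    import numpy as np

    def oem_step(x_i, x_a, y, F, K, S_a_inv, S_e_inv):
        """One Gauss-Newton update of an optimal-estimation retrieval.

        x_i: current state; x_a: prior state; y: measurement vector;
        F: forward model, K: its Jacobian at x_i; S_a_inv, S_e_inv:
        inverse prior and measurement-error covariance matrices."""
        lhs = K.T @ S_e_inv @ K + S_a_inv
        rhs = K.T @ S_e_inv @ (y - F(x_i) + K @ (x_i - x_a))
        return x_a + np.linalg.solve(lhs, rhs)
    ```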

  17. A parameterization of the heterogeneous hydrolysis of N2O5 for mass-based aerosol models: improvement of particulate nitrate prediction

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Wolke, Ralf; Ran, Liang; Birmili, Wolfram; Spindler, Gerald; Schröder, Wolfram; Su, Hang; Cheng, Yafang; Tegen, Ina; Wiedensohler, Alfred

    2018-01-01

    The heterogeneous hydrolysis of N2O5 on the surface of deliquescent aerosol leads to HNO3 formation and acts as a major sink of NOx in the atmosphere during night-time. The reaction constant of this heterogeneous hydrolysis is determined by temperature (T), relative humidity (RH), aerosol particle composition, and the surface area concentration (S). However, these parameters were not comprehensively considered in the parameterization of the heterogeneous hydrolysis of N2O5 in previous mass-based 3-D aerosol modelling studies. In this investigation, we propose a sophisticated parameterization (NewN2O5) of N2O5 heterogeneous hydrolysis with respect to T, RH, aerosol particle composition, and S based on laboratory experiments. We evaluated closure between NewN2O5 and a state-of-the-art parameterization based on a sectional aerosol treatment. The comparison showed a good linear relationship (R = 0.91) between these two parameterizations. NewN2O5 was incorporated into a 3-D fully online coupled model, COSMO-MUSCAT, with the mass-based aerosol treatment. As a case study, we used the data from the HOPE Melpitz campaign (10-25 September 2013) to validate model performance. Here, we investigated the improvement of nitrate prediction over western and central Europe. The modelled particulate nitrate mass concentrations ([NO3-]) were validated by filter measurements over Germany (Neuglobsow, Schmücke, Zingst, and Melpitz). With corrected NH3 emissions (reduced by 50 %) and the original parameterization of N2O5 heterogeneous hydrolysis, the modelled [NO3-] was significantly overestimated for this period by a factor of 5-19. NewN2O5 reduces this overestimation by ~35 %. In particular, the overestimation factor was reduced to approximately 1.4 in our case study (12, 17-18 and 25 September 2013) when [NO3-] was dominated by local chemical formation. In our case, the suppression by organic coating was negligible over western and central Europe, with an influence on [NO3-] of less than 2 % on average and up to 20 % at its peak. To obtain a significant organic coating effect, N2O5, SOA, and NH3 need to be present when RH is high and T is low. However, those conditions were rarely fulfilled simultaneously over western and central Europe. Hence, the organic coating effect on the reaction probability of N2O5 may not be as significant as expected over western and central Europe.
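
    Whatever form the reaction probability takes, mass-based schemes typically convert it into a first-order N2O5 loss rate via the free-molecular collision expression k = gamma * <v> * S / 4. The sketch below shows that conversion only; the substance of the proposed parameterization is the gamma(T, RH, composition) dependence, which is treated here as an input.

    ```python
    import math

    M_N2O5 = 0.108       # molar mass of N2O5, kg mol-1
    R = 8.314462618      # gas constant, J mol-1 K-1

    def n2o5_loss_rate(gamma, temp, surface_area):
        """First-order N2O5 loss rate k [s-1] from the reaction probability
        gamma [-], temperature temp [K], and aerosol surface area
        concentration surface_area [m2 m-3]."""
        v_mean = math.sqrt(8.0 * R * temp / (math.pi * M_N2O5))  # m s-1
        return 0.25 * gamma * v_mean * surface_area

    # Illustrative values: gamma = 0.02, T = 280 K, S = 1e-4 m2 m-3
    # give k of roughly 1e-4 s-1, i.e. an N2O5 lifetime of a few hours.
    print(n2o5_loss_rate(0.02, 280.0, 1e-4))
    ```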

  18. A Novel Shape Parameterization Approach

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1999-01-01

    This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.

  19. Error in Radar-Derived Soil Moisture due to Roughness Parameterization: An Analysis Based on Synthetical Surface Profiles

    PubMed Central

    Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.

    2009-01-01

    In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top-layer soil moisture content and observed backscatter coefficients, which has mainly been attributed to difficulties in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetic surface profiles, which investigates how errors in roughness parameters are introduced by standard measurement techniques, and how they propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error in the roughness parameterization, and consequently in soil moisture retrieval, are the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements, and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with a C-band configuration is generally less sensitive to inaccuracies in roughness parameterization than retrieval with an L-band configuration. PMID:22399956
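
    The roughness parameters in question are typically the RMS height s and the correlation length l, estimated from detrended profiles. A minimal sketch on a synthetic profile (illustrative statistics only; the paper's IEM error propagation is not reproduced):

    ```python
    # Minimal sketch of the standard roughness parameterization: RMS height s
    # and correlation length l estimated from a (here synthetic) surface
    # profile after linear detrending.
    import numpy as np

    rng = np.random.default_rng(0)
    dx = 0.01                                  # 1 cm horizontal sampling (m)
    z = np.cumsum(rng.normal(0, 0.002, 400))   # synthetic height profile (m)

    # Remove the linear trend, as is done for measured profiles.
    x = np.arange(z.size) * dx
    z = z - np.polyval(np.polyfit(x, z, 1), x)

    s = z.std(ddof=1)                          # RMS height

    # Normalized autocorrelation; l = lag where it first drops below 1/e.
    acf = np.correlate(z, z, mode="full")[z.size - 1:]
    acf /= acf[0]
    l = dx * np.argmax(acf < 1.0 / np.e)

    print(f"RMS height s = {s:.4f} m, correlation length l = {l:.3f} m")
    ```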

  20. Whys and Hows of the Parameterized Interval Analyses: A Guide for the Perplexed

    NASA Astrophysics Data System (ADS)

    Elishakoff, I.

    2013-10-01

    Novel elements of the parameterized interval analysis developed in [1, 2] are emphasized in this response to Professor E.D. Popova, and to others who may be perplexed by the parameterized interval analysis. It is also shown that the overwhelming majority of the comments by Popova [3] are based on a misreading of our paper [1]. Partial responsibility for this misreading can be attributed to the fact that the explanations provided in [1] were laconic; they could have been more extensive in view of the novelty of our approach [1, 2]. It is our duty, therefore, to reiterate in this response the whys and hows of the parameterization of intervals, introduced in [1] to incorporate possibly available information on dependencies between the various intervals describing the problem at hand. This possibility appears to have been discarded by standard interval analysis, which may, as a result, lead to overdesign and to the possible divorce of engineers from the otherwise beautiful interval analysis.
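
    The gist of parameterizing intervals can be shown with the classic dependency example x - x: naive interval arithmetic yields [-2, 2], while a single-parameter form x(t) = c + t*r, t in [-1, 1], keeps the dependency and yields exactly [0, 0]. A minimal sketch, with notation that is illustrative rather than the formalism of [1, 2]:

    ```python
    # Minimal sketch of why parameterizing intervals matters: naive interval
    # arithmetic treats the two occurrences of x in x - x as independent,
    # while a parameterized form x(t) = c + t*r keeps the dependency.
    import numpy as np

    lo, hi = 1.0, 3.0
    c, r = 0.5 * (lo + hi), 0.5 * (hi - lo)

    # Naive interval subtraction: [lo - hi, hi - lo] = [-2, 2] (overestimate).
    naive = (lo - hi, hi - lo)

    # Parameterized: evaluate x(t) - x(t) over a sweep of the one parameter t.
    t = np.linspace(-1.0, 1.0, 1001)
    vals = (c + t * r) - (c + t * r)
    parameterized = (vals.min(), vals.max())   # exactly (0.0, 0.0)

    print(naive, parameterized)
    ```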

  1. Importance of parametrizing constraints in quantum-mechanical variational calculations

    NASA Technical Reports Server (NTRS)

    Chung, Kwong T.; Bhatia, A. K.

    1992-01-01

    In variational calculations of quantum mechanics, constraints are sometimes imposed explicitly on the wave function. These constraints, which are deduced by physical arguments, are often not uniquely defined. In this work, the advantage of parametrizing constraints and letting the variational principle determine the best possible constraint for the problem is pointed out. Examples are carried out to show the surprising effectiveness of the variational method if constraints are parameterized. It is also shown that misleading results may be obtained if a constraint is not parameterized.
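
    A textbook analogue of the point being made: fixing a trial-function property by fiat versus parameterizing it and letting the minimization decide. For hydrogen with trial psi = exp(-a*r) in atomic units, <H>(a) = a**2/2 - a, so an imposed a = 2 gives E = 0 while the variational optimum a = 1 gives the exact E = -0.5 hartree. A minimal sketch (the example is generic, not from this paper):

    ```python
    # Minimal illustration: parameterize a property of the trial wave function
    # instead of fixing it by a physical argument, and let the variational
    # principle choose. Hydrogen, trial psi = exp(-a*r), atomic units.
    from scipy.optimize import minimize_scalar

    def energy(a):
        """<H>(a) = a^2/2 - a for the exponential trial function."""
        return 0.5 * a * a - a

    fixed = energy(2.0)                       # imposed "constraint": 0.0 hartree
    best = minimize_scalar(energy, bounds=(0.1, 5.0), method="bounded")
    print(fixed, best.x, best.fun)            # optimum a = 1, E = -0.5 hartree
    ```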

  2. Practical quality control tools for curves and surfaces

    NASA Technical Reports Server (NTRS)

    Small, Scott G.

    1992-01-01

    Curves and surfaces created by Computer Aided Geometric Design systems in the engineering environment must satisfy two basic quality criteria: the geometric shape must have the desired engineering properties, and the objects must be parameterized in a way which does not cause computational difficulty for geometric processing and engineering analysis. Interactive techniques are described which are in use at Boeing to evaluate the quality of aircraft geometry prior to Computational Fluid Dynamics analysis, including newly developed methods for examining surface parameterization and its effects.
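
    One simple parameterization check of the kind described is the uniformity of the parametric speed |C'(t)|: large variation flags parameterizations likely to cause trouble in downstream processing. A minimal sketch on a cubic Bezier curve (illustrative only, not Boeing's toolset):

    ```python
    # Minimal sketch of a parameterization quality check: how uniform the
    # parametric speed |dC/dt| of a curve is (a ratio of 1 would be ideal).
    import numpy as np

    P = np.array([[0.0, 0.0], [0.1, 1.0], [0.9, 1.0], [1.0, 0.0]])  # control pts

    def bezier_speed(P, t):
        """|C'(t)| for a cubic Bezier with control points P (4, 2)."""
        d = 3.0 * (np.outer((1 - t) ** 2, P[1] - P[0])
                   + np.outer(2 * (1 - t) * t, P[2] - P[1])
                   + np.outer(t ** 2, P[3] - P[2]))
        return np.linalg.norm(d, axis=1)

    t = np.linspace(0.0, 1.0, 201)
    v = bezier_speed(P, t)
    print(f"speed ratio max/min = {v.max() / v.min():.2f}")
    ```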

  3. 3D models mapping optimization through an integrated parameterization approach: cases studies from Ravenna

    NASA Astrophysics Data System (ADS)

    Cipriani, L.; Fantini, F.; Bertacchi, S.

    2014-06-01

    Image-based modelling tools based on SfM algorithms have gained great popularity since several software houses provided applications able to produce 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that the automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve a better quality of textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for the achievement of a high-quality 3D representation, since "apparent colour" is a fundamental aspect in the field of Cultural Heritage documentation. Complex meshes without native parameterization have to be "flattened" or "unwrapped" in the (u,v) parameter space, with the main objective of mapping them with a single image. This result can be obtained by using two different strategies: the former automatic and fast, the latter manual and time-consuming. Reverse modelling applications provide automatic solutions that split the models by means of different algorithms, producing a sort of "atlas" of the original model in the parameter space that is in many instances inadequate and negatively affects the overall quality of representation. Using different solutions in synergy, ranging from semantic-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.

  4. On the Relationship between Observed NLDN Lightning ...

    EPA Pesticide Factsheets

    Lightning-produced nitrogen oxides (NOX = NO + NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past decade, considerable uncertainties still exist in the quantification of lightning NOX production and distribution in the troposphere. It is even more challenging for regional chemistry and transport models to accurately parameterize lightning NOX production and distribution in time and space. The Community Multiscale Air Quality Model (CMAQ) parameterizes lightning NO emissions using local scaling factors adjusted by the convective precipitation rate predicted by the upstream meteorological model; the adjustment is based on observed lightning strikes from the National Lightning Detection Network (NLDN). For this parameterization to be valid, an a priori reasonable relationship between the observed lightning strikes and the modeled convective precipitation rates must exist. In this study, we present an analysis based on observed NLDN lightning strikes and CMAQ model simulations over the continental United States for a time period spanning over a decade. Based on the analysis, a new parameterization scheme for lightning NOX will be proposed and evaluated. The proposed scheme will be beneficial to modeling exercises where the obs

  5. Z-Index Parameterization for Volumetric CT Image Reconstruction via 3-D Dictionary Learning.

    PubMed

    Bai, Ti; Yan, Hao; Jia, Xun; Jiang, Steve; Wang, Ge; Mou, Xuanqin

    2017-12-01

    Despite the rapid development of X-ray cone-beam CT (CBCT), image noise remains a major issue for low-dose CBCT. To suppress noise effectively while retaining structures in low-dose CBCT images, a sparse constraint based on a 3-D dictionary is incorporated into a regularized iterative reconstruction framework, defining the 3-D dictionary learning (3-DDL) method. In addition, by analyzing the sparsity-level curve associated with different regularization parameters, a new adaptive parameter selection strategy is proposed to facilitate the 3-DDL method. To justify the proposed method, we first analyze the distributions of the representation coefficients associated with the 3-D dictionary and the conventional 2-D dictionary to compare their efficiency in representing volumetric images. Then, multiple real-data experiments are conducted for performance validation. Based on these results, we find that: 1) the 3-D dictionary-based sparse coefficients have a roughly three orders of magnitude narrower Laplacian distribution than the 2-D dictionary, suggesting the higher representation efficiency of the 3-D dictionary; 2) the sparsity-level curve demonstrates a clear Z-shape, and is hence referred to as the Z-curve; 3) the parameter associated with the maximum-curvature point of the Z-curve provides a good parameter choice, which can be located adaptively with the proposed Z-index parameterization (ZIP) method; 4) the proposed 3-DDL algorithm equipped with the ZIP method delivers reconstructions with the lowest root mean squared errors and the highest structural similarity index compared with the competing methods; 5) noise performance similar to the regular-dose FDK reconstruction, in terms of the standard deviation metric, is achieved with the proposed method using (1/2)/(1/4)/(1/8) dose-level projections. The contrast-to-noise ratio is improved by ~2.5/3.5 times for two different cases at the (1/8) dose level compared with the low-dose FDK reconstruction. The proposed method is thus expected to reduce the radiation dose by a factor of 8 for CBCT, considering the strongly discriminated low-contrast tissues.

  6. Regularized wave equation migration for imaging and data reconstruction

    NASA Astrophysics Data System (ADS)

    Kaplan, Sam T.

    The reflection seismic experiment results in a measurement (reflection seismic data) of the seismic wavefield. The linear Born approximation to the seismic wavefield leads to a forward modelling operator that we use to approximate reflection seismic data in terms of a scattering potential. We consider approximations to the scattering potential using two methods: the adjoint of the forward modelling operator (migration), and regularized numerical inversion using the forward and adjoint operators. We implement two parameterizations of the forward modelling and migration operators: source-receiver and shot-profile. For both parameterizations, we find the requisite Green's functions using the split-step approximation. We first develop the forward modelling operator, and then find the adjoint (migration) operator by recognizing a Fredholm integral equation of the first kind. The resulting numerical system is generally under-determined, requiring prior information to find a solution. In source-receiver migration, the parameterization of the scattering potential is understood through the migration imaging condition, and this encourages us to apply sparse prior models to the scattering potential. To that end, we use both a Cauchy prior and a mixed Cauchy-Gaussian prior, finding better-resolved estimates of the scattering potential than are given by the adjoint. In shot-profile migration, the parameterization of the scattering potential has its redundancy in multiple active energy sources (i.e. shots). We find that a smallest-model regularized inverse representation of the scattering potential gives a better-resolved picture of the earth than the simpler adjoint representation. The shot-profile parameterization also allows us to introduce a joint inversion to further improve the estimate of the scattering potential, as well as a novel data reconstruction algorithm so that limited data can be interpolated/extrapolated. The linearized operators are expensive, encouraging their parallel implementation; for the source-receiver parameterization this parallelization is non-trivial. Seismic data are typically corrupted by various types of noise. Sparse coding, a method stemming from information theory, can be used to suppress noise prior to migration.

  7. Straddling Interdisciplinary Seams: Working Safely in the Field, Living Dangerously With a Model

    NASA Astrophysics Data System (ADS)

    Light, B.; Roberts, A.

    2016-12-01

    Many excellent proposals for observational work have included language detailing how the proposers will appropriately archive their data and publish their results in peer-reviewed literature so that they may be readily available to the modeling community for parameterization development. While such division of labor may be both practical and inevitable, the assimilation of observational results and the development of observationally based parameterizations of physical processes require care and feeding. Key questions include: (1) Is an existing parameterization accurate, consistent, and general? If not, it may be ripe for additional physics. (2) Do there exist functional working relationships between human modeler and human observationalist? If not, one or more may need to be initiated and cultivated. (3) If empirical observation and model development are a chicken-and-egg problem, how, given our lack of prescience and foreknowledge, can we better design observational science plans to meet the eventual demands of model parameterization? (4) Will the addition of new physics "break" the model? If so, then the addition may be imperative. In the context of these questions, we will make retrospective and forward-looking assessments of a now decade-old numerical parameterization to treat the partitioning of solar energy at the Earth's surface where sea ice is present. While this so-called "Delta-Eddington Albedo Parameterization" is currently employed in the widely used Los Alamos Sea Ice Model (CICE) and appears to be standing the tests of accuracy, consistency, and generality, we will highlight some ideas for its ongoing development and improvement.

  8. Reconstruction of SAXS Profiles from Protein Structures

    PubMed Central

    Putnam, Daniel K.; Lowe, Edward W.

    2013-01-01

    Small angle X-ray scattering (SAXS) is used for low-resolution structural characterization of proteins, often in combination with other experimental techniques. After briefly reviewing the theory of SAXS, we discuss computational methods based on (1) the Debye equation and (2) spherical harmonics to compute intensity profiles from a particular macromolecular structure. Further, we review how these formulas are parameterized for solvent density and hydration-shell adjustment. Finally, we introduce our solution for computing SAXS profiles utilizing GPU acceleration. PMID:24688746
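
    The Debye equation referred to is I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij). A minimal sketch with unit form factors and a toy geometry (the solvent and hydration-shell terms of the reviewed parameterizations are omitted):

    ```python
    # Minimal sketch of the Debye equation for a scattering intensity profile.
    # Unit form factors and a toy 3-atom geometry; purely illustrative.
    import numpy as np

    coords = np.array([[0.0, 0.0, 0.0],
                       [1.5, 0.0, 0.0],
                       [0.0, 2.0, 0.0]])          # Angstroms, illustrative

    def debye_intensity(q, xyz, f=1.0):
        """I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij), with sinc(0) = 1."""
        rij = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
        qr = q * rij
        sinc = np.where(qr > 0, np.sin(qr) / np.where(qr > 0, qr, 1.0), 1.0)
        return (f * f * sinc).sum()

    q_grid = np.linspace(0.01, 0.5, 50)           # inverse Angstroms
    profile = np.array([debye_intensity(q, coords) for q in q_grid])
    ```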

  9. An Integrative Wave Model for the Marginal Ice Zone Based on a Rheological Parameterization

    DTIC Science & Technology

    2015-09-30

    Cited presentation: "Characterizing the behavior of gravity wave propagation into a floating or submerged viscous layer," 2015 AGU Joint Assembly Meeting, May 3–7, 2015. Personnel: the PI and a PhD student. Task 1: Use an analytical method to determine the propagation of waves through a floating viscoelastic mat for a wide ... Task 3: Assemble all existing laboratory and field data of wave propagation in ice covers. Task 4: Determine if all existing ...

  10. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    PubMed

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevance, examinations of simulated data suggest that for sets of orientation data possessing low dispersion, an isotropic distribution, and second and third angle parameters of less than 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
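
    The simplest of the matrix-based alternatives is the chordal ("Euclidean") mean: average the rotation matrices arithmetically, then project back onto SO(3) with an SVD, avoiding Euler-angle averaging entirely. A minimal sketch (the geodesic Riemannian mean discussed in the paper is not shown):

    ```python
    # Minimal sketch of a chordal (Euclidean) rotation average: arithmetic mean
    # of rotation matrices, projected back onto SO(3) via SVD.
    import numpy as np
    from scipy.spatial.transform import Rotation

    def chordal_mean(rotations):
        """rotations: list of 3x3 rotation matrices; returns their SO(3) mean."""
        M = np.mean(rotations, axis=0)
        U, _, Vt = np.linalg.svd(M)
        R = U @ Vt
        if np.linalg.det(R) < 0:                 # enforce a proper rotation
            U[:, -1] *= -1.0
            R = U @ Vt
        return R

    # Example: noisy rotations scattered around a common mean orientation.
    rng = np.random.default_rng(1)
    base = Rotation.from_euler("xyz", [20, 35, 10], degrees=True)
    samples = [(base * Rotation.from_rotvec(rng.normal(0, 0.05, 3))).as_matrix()
               for _ in range(50)]
    print(chordal_mean(samples))
    ```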

  11. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    NASA Astrophysics Data System (ADS)

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-01

    Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I′_P), and velocity-impedance-II (α″, β″ and I′_S). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density profile can be over-estimated, under-estimated or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.

  12. Elastic full-waveform inversion and parameterization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    DOE PAGES

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-03-06

    We report that seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter tradeoff, arising from the covariance between different physical parameters, which increases nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parameterization and acquisition arrangement. An appropriate choice of model parameterization is critical to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parameterizations in isotropic-elastic FWI with a walk-away vertical seismic profile (W-VSP) dataset for unconventional heavy oil reservoir characterization. Six model parameterizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I′_P), and velocity-impedance-II (α″, β″ and I′_S). We begin analyzing the interparameter tradeoff by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. In this paper, we discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter tradeoffs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter tradeoffs for various model parameterizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parameterization, the inverted density profile can be over-estimated, under-estimated or spatially distorted. Among the six cases, only the velocity-density parameterization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. Finally, the heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.

  14. Dynamically consistent parameterization of mesoscale eddies. Part III: Deterministic approach

    NASA Astrophysics Data System (ADS)

    Berloff, Pavel

    2018-07-01

    This work continues the development of dynamically consistent parameterizations for representing mesoscale eddy effects in non-eddy-resolving and eddy-permitting ocean circulation models and focuses on the classical double-gyre problem, in which the main dynamic eddy effects maintain the eastward jet extension of the western boundary currents and its adjacent recirculation zones via the eddy backscatter mechanism. Despite its fundamental importance, this mechanism remains poorly understood; in this paper we first study it and then propose and test a novel parameterization of it. We start by decomposing the reference eddy-resolving flow solution into large-scale and eddy components defined by spatial filtering, rather than by the Reynolds decomposition. Next, we find that the eastward jet and its recirculations are robustly present not only in the large-scale flow itself, but also in the rectified time-mean eddies and in the transient rectified eddy component, which consists of highly anisotropic ribbons of opposite-sign potential vorticity anomalies straddling the instantaneous eastward jet core and responsible for its continuous amplification. The transient rectified component is separated from the flow by a novel remapping method. We hypothesize that the above three components of the eastward jet are ultimately driven by the small-scale transient eddy forcing via the eddy backscatter mechanism, rather than by the mean eddy forcing and large-scale nonlinearities. We verify this hypothesis by progressively turning down the backscatter and observing the induced flow anomalies. The backscatter analysis leads us to the key eddy parameterization hypothesis: in an eddy-permitting model, the at least partially resolved eddy backscatter can be significantly amplified to improve the flow solution. Such amplification is a simple and novel eddy parameterization framework, implemented here in terms of local, deterministic flow roughening controlled by a single parameter. We test the parameterization skill in a hierarchy of non-eddy-resolving and eddy-permitting modifications of the original model and demonstrate that it can indeed be highly efficient for restoring the eastward jet extension and its adjacent recirculation zones. The new deterministic parameterization framework not only combines remarkable simplicity with good performance but is also dynamically transparent; it therefore provides a powerful alternative to the common eddy diffusion and emerging stochastic parameterizations.

  15. Feeding on Multiple Sources: Towards a Universal Parameterization of the Functional Response of a Generalist Predator Allowing for Switching

    PubMed Central

    Morozov, Andrew; Petrovskii, Sergei

    2013-01-01

    Understanding of complex trophic interactions in ecosystems requires correct descriptions of the rate at which predators consume a variety of different prey species. Field and laboratory data on multispecies communities are rarely sufficient and usually cannot provide an unambiguous test for the theory. As a result, the conventional way of constructing a multi-prey functional response is speculative, and often based on assumptions that are difficult to verify. Predator responses allowing for prey selectivity and active switching are thought to be more biologically relevant compared to the standard proportion-based consumption. However, here we argue that the functional responses with switching may not be applicable to communities with a broad spectrum of resource types. We formulate a set of general rules that a biologically sound parameterization of a predator functional response should satisfy, and show that all existing formulations for the multispecies response with prey selectivity and switching fail to do so. Finally, we propose a universal framework for parameterization of a multi-prey functional response by combining patterns of food selectivity and proportion-based feeding. PMID:24086356
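
    A common switching form that such parameterizations generalize is a Holling type II response with attack rates weighted by prey abundance raised to a switching exponent m (m = 1 recovers proportion-based feeding; m > 1 gives active switching). The sketch below shows that generic form only; it is not the universal parameterization proposed in the paper.

    ```python
    # Minimal sketch of a multi-prey functional response with switching:
    # f_i = a_i N_i^m / (1 + sum_j a_j h_j N_j^m).
    import numpy as np

    def consumption(N, a, h, m=2.0):
        """N: prey densities; a: attack rates; h: handling times.
        Returns per-predator consumption rate of each prey type."""
        N, a, h = map(np.asarray, (N, a, h))
        eff = a * N ** (m - 1.0)                 # abundance-weighted attack rate
        denom = 1.0 + np.sum(eff * h * N)
        return eff * N / denom

    # With m = 2, the 10x more abundant prey is consumed ~100x more:
    print(consumption(N=[10.0, 1.0], a=[0.1, 0.1], h=[0.5, 0.5]))
    ```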

  16. Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs

    NASA Astrophysics Data System (ADS)

    Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Pincus, R.

    2016-12-01

    A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecasting System (GFS) general circulation model. The approach, known as Simplified Higher-Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation, and cloudiness. Unlike other similar methods, only one new prognostic variable, turbulent kinetic energy (TKE), needs to be introduced, making the technique computationally efficient. SHOC is now incorporated into a version of GFS, as well as into the next generation of the NCEP global model, the NOAA Environmental Modeling System (NEMS). Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or large-scale condensation/deposition; instead, SHOC provides these variables, and the radiative transfer parameterization uses the cloudiness computed by SHOC. Outstanding problems include high-level tropical cloud fraction being too high in SHOC runs, possibly related to the interaction of SHOC with condensate detrained from deep convection. Future work will consist of evaluating model performance and tuning the physics if necessary, by performing medium-range NWP forecasts with prescribed initial conditions and AMIP-type climate tests with prescribed SSTs. Depending on the results, the model will be tuned or the parameterizations modified. Next, SHOC will be implemented in the NCEP CFS, and tuned and evaluated for climate applications, including seasonal prediction and long coupled climate runs. The impact of the new physics on ENSO, MJO, ISO, monsoon variability, etc. will be examined.

  17. Study of different deposition parameterizations on an atmospheric mesoscale Eulerian air quality model: Madrid case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    San Jose, R.; Cortes, J.; Moreno, J.

    1996-12-31

    The importance of an adequate parameterization of the deposition process for the simulation of three-dimensional pollution fields in a mesoscale context is beyond doubt. An accurate parameterization of the deposition flux is essential for a precise determination of the removal flux and for allowing longer simulation periods of the atmospheric processes. In addition, an accurate deposition pattern allows a much more precise diagnosis of the impact of different pollutants on the different types of terrain present in complex environments such as urban areas and their environs. In this contribution, we have implemented a complex resistance deposition model into an Air Quality System (ANA) applied over a large city, Madrid (Spain). The model domain is 80 x 100 km, which is much larger than the actual urban domain. The ANA model is composed of four modules: a meteorological module, which solves the Navier-Stokes equations numerically and predicts the three-dimensional wind, temperature, and humidity fields at every time step; an emission module, which produces hourly emissions at high spatial resolution (250 x 250 m) using landuse information (for biogenic emissions) from a Landsat-5 satellite image; a photochemical module, based on the CBM-IV mechanism and solved numerically with the SMVGEAR method; and a deposition module based on the resistance approach. The resistance module takes into account the landuse classification, the global solar radiation, the humidity of the terrain, the pH of the terrain, the characteristics of the pollutant, the Leaf Area Index, and the reactivity of the pollutant.
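
    In the resistance approach, the deposition velocity is the inverse of the aerodynamic, quasi-laminar, and surface resistances in series, v_d = 1/(r_a + r_b + r_c). A minimal sketch with illustrative resistance values (the landuse, radiation, and pH dependencies of the ANA module are not reproduced):

    ```python
    # Minimal sketch of the resistance analogy for dry deposition: three
    # resistances in series. Values below are illustrative placeholders.
    def deposition_velocity(r_a, r_b, r_c):
        """v_d = 1 / (r_a + r_b + r_c), resistances in s m-1, v_d in m s-1.
        r_a: aerodynamic, r_b: quasi-laminar sublayer, r_c: surface (canopy)."""
        return 1.0 / (r_a + r_b + r_c)

    # Example: daytime O3 over vegetation (order-of-magnitude values).
    print(deposition_velocity(r_a=30.0, r_b=20.0, r_c=100.0))  # ~0.0067 m s-1
    ```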

  18. An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers

    USGS Publications Warehouse

    Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.

    2016-01-01

    Here we present a new empirical method to estimate the stress-coupling length (SCL) for marine-terminating glaciers using high-resolution observations. We use the empirically determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.

  19. Single-Image Distance Measurement by a Smart Mobile Device.

    PubMed

    Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling

    2017-12-01

    Existing distance measurement methods either require multiple images and special photographing poses or only measure height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. The distance in pixels is then transformed into a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.
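
    The calibration step can be made concrete: two reference distances of known length fix a linear pixel-to-centimeter model, after which any pixel distance in the back-projected image can be converted. A minimal sketch with hypothetical numbers:

    ```python
    # Minimal sketch of a two-point calibration fixing a linear
    # pixel-to-centimeter model d = a*p + b. Numbers are hypothetical.
    import numpy as np

    # Two calibration measurements: (pixel length, true length in cm).
    p_ref = np.array([120.0, 480.0])
    d_ref = np.array([25.0, 100.0])

    a, b = np.polyfit(p_ref, d_ref, 1)     # exact fit through two points

    def pixels_to_cm(p):
        return a * p + b

    print(pixels_to_cm(300.0))             # estimated distance in cm
    ```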

  20. Anisotropic dissipation of the global internal tide from a higher-order multiscale barotropic tidal simulation

    NASA Astrophysics Data System (ADS)

    Salehipour, Hesam; Peltier, W. Richard

    2013-04-01

    The diapycnal mixing induced by the dissipation of internal tides, excited by the interaction of the barotropic tide with bottom topography, has begun to attract increasing attention. The partition of the dissipation of the barotropic tide between that related to the internal tide and that related to bottom friction is also of considerable interest, as this partition has been shown to shift significantly between the modern and Last Glacial Maximum tidal regimes [Griffiths and Peltier, 2008, 2009]. Ocean general circulation models, though clearly unable to explicitly resolve small-scale mixing processes, currently rely on the introduction of an appropriate parameterization of the contribution to such mixing due to dissipation of the internal tide. One widely used parameterization of this kind (currently employed in POP2) is that proposed by Jayne and St. Laurent [GRL 2001], based on topographic roughness. This contrasts with the parameterization of Carrere and Lyard [GRL 2003] and Lyard [Ocean Dynamics, 2006], which also considers the flow direction with respect to the topographic features. Both of these parameterizations require the tuning of parameters to arrive at sensible tidal amplitudes. We have developed an original higher-order barotropic tidal model based on the discontinuous Galerkin finite element method applied on global triangular grids [Salehipour et al., submitted to Ocean Modelling], in which we parameterize the energy conversion to baroclinic tides by introducing an anisotropic internal tide drag [Griffiths and Peltier GRL 2008, Griffiths and Peltier J Climate 2009] that also considers the time-dependent angle of attack of the barotropic tidal flow on abyssal topographic features but requires no tuning parameters. The model is massively parallelized, which enables very high resolution modeling of global barotropic tides as well as the implementation of local grid refinement. In this paper we will present maps of energy dissipation for different tidal constituents using grids with resolutions up to 1/18° in coastal regions as well as in areas with high gradients in the bottom topography. The discontinuous Galerkin formulation provides important energy conservation properties and enables the accurate representation of sharp topographic gradients without smoothing, a feature well matched to the multi-scale problem of the dissipation of the internal tide. We will describe the detailed energy budgets delivered by this model under both modern and Last Glacial Maximum oceanographic conditions, including relative sea level and internal density stratification effects. The results of the simulations will be illustrated with global maps with enhanced resolution for the internal tidal dissipation, which may be exploited in the parameterization of vertical mixing. We will use the reconstructed paleotopography of the ICE-5G model of Peltier [Annu. Rev. Earth Planet Sci. 2004], as well as the more recent refinement (ICE-6G), to compute the characteristics of the LGM tidal regime and will compare these characteristics to those of the modern ocean.

  1. Elastic Properties of Novel Co- and CoNi-Based Superalloys Determined through Bayesian Inference and Resonant Ultrasound Spectroscopy

    NASA Astrophysics Data System (ADS)

    Goodlet, Brent R.; Mills, Leah; Bales, Ben; Charpagne, Marie-Agathe; Murray, Sean P.; Lenthe, William C.; Petzold, Linda; Pollock, Tresa M.

    2018-06-01

    Bayesian inference is employed to precisely evaluate the single-crystal elastic properties of novel γ-γ′ Co- and CoNi-based superalloys from simple and non-destructive resonant ultrasound spectroscopy (RUS) measurements. Nine alloys from three Co-, CoNi-, and Ni-based alloy classes were evaluated in the fully aged condition, with one alloy per class also evaluated in the solution heat-treated condition. Comparisons are made between the elastic properties of the three alloy classes and among the alloys of a single class, with the following trends observed. A monotonic rise in the c_{44} (shear) elastic constant by a total of 12 pct is observed between the three alloy classes as Co is substituted for Ni. Elastic anisotropy (A) is also increased, with a large majority of the nearly 13 pct increase occurring after Co becomes the dominant constituent. Together, the five CoNi alloys, with Co:Ni ratios from 1:1 to 1.5:1, exhibited remarkably similar properties, with an average A 1.8 pct greater than the Ni-based alloy CMSX-4. Custom code demonstrating a substantial advance over previously reported methods for RUS inversion is also reported here for the first time. CmdStan-RUS is built upon the open-source probabilistic programming language Stan and formulates the inverse problem using Bayesian methods. Bayesian posterior distributions are efficiently computed with Hamiltonian Monte Carlo (HMC), while the initial parameterization is randomly generated from weakly informative prior distributions. Remarkably robust convergence behavior is demonstrated across multiple independent HMC chains in spite of initial parameterizations often very far from the actual parameter values. Experimental procedures are substantially simplified by allowing an arbitrary misorientation between the specimen and crystal axes, as elastic properties and misorientation are estimated simultaneously.

  2. Regionalization of subsurface stormflow parameters of hydrologic models: Up-scaling from physically based numerical simulations at hillslope scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Melkamu; Ye, Sheng; Li, Hongyi

    2014-07-19

    Subsurface stormflow is an important component of the rainfall-runoff response, especially in steep forested regions. However, its contribution is poorly represented in the current generation of land surface hydrological models (LSMs) and catchment-scale rainfall-runoff models. The lack of a physical basis in common parameterizations precludes a priori estimation (i.e., without calibration), which is a major drawback for prediction in ungauged basins, or for use in global models. This paper is aimed at deriving physically based parameterizations of the storage-discharge relationship relating to subsurface flow. These parameterizations are derived through a two-step up-scaling procedure: firstly, through simulations with a physically based (Darcian) subsurface flow model for idealized three-dimensional rectangular hillslopes, accounting for within-hillslope random heterogeneity of soil hydraulic properties, and secondly, through subsequent up-scaling to the catchment scale by accounting for between-hillslope and within-catchment heterogeneity of topographic features (e.g., slope). These theoretical simulation results produced parameterizations of the storage-discharge relationship in terms of soil hydraulic properties, topographic slope, and their heterogeneities, which were consistent with the results of previous studies. Yet, regionalization of the resulting storage-discharge relations across 50 actual catchments in the eastern United States, and a comparison of the regionalized results with equivalent empirical results obtained from analysis of observed streamflow recession curves, revealed a systematic inconsistency. It was found that the difference between the theoretical and empirically derived results could be explained, to first order, by climate in the form of a climatic aridity index. This suggests a possible codependence of climate, soils, vegetation, and topographic properties, and implies that subsurface flow parameterizations needed for ungauged locations must account for both the physics of flow in heterogeneous landscapes and the co-dependence of soil and topographic properties with climate, including possibly the mediating role of vegetation.
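
    A common form for such storage-discharge parameterizations is the power law Q = a S^b, which implies the recession relation -dQ/dt = a^(1/b) b Q^(2-1/b), so a and b can be fit from observed recession limbs in log-log space. A minimal sketch on synthetic data (the form is generic, not the paper's derived parameterization):

    ```python
    # Minimal sketch of fitting a power-law storage-discharge relation
    # Q = a * S**b from recession data via -dQ/dt = a**(1/b) * b * Q**(2 - 1/b).
    import numpy as np

    rng = np.random.default_rng(2)
    q = np.logspace(-1, 1, 40)                        # recession discharges
    a_true, b_true = 0.8, 1.5
    dqdt = a_true ** (1 / b_true) * b_true * q ** (2 - 1 / b_true)
    dqdt *= np.exp(rng.normal(0, 0.1, q.size))        # multiplicative noise

    # Fit log(-dQ/dt) = log(c) + p*log(Q), then recover b and a.
    p, logc = np.polyfit(np.log(q), np.log(dqdt), 1)
    b = 1.0 / (2.0 - p)
    a = (np.exp(logc) / b) ** b
    print(f"fitted a = {a:.2f}, b = {b:.2f}")
    ```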

  3. Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application

    NASA Astrophysics Data System (ADS)

    Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni

    2018-06-01

    Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique for dealing with this problem consists in reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.

  4. Seasonal Parameterizations of the Tau-Omega Model Using the ComRAD Ground-Based SMAP Simulator

    NASA Technical Reports Server (NTRS)

    O'Neill, P.; Joseph, A.; Srivastava, P.; Cosh, M.; Lang, R.

    2014-01-01

    NASA's Soil Moisture Active Passive (SMAP) mission is scheduled for launch in November 2014. In the prelaunch time frame, the SMAP team has focused on improving retrieval algorithms for the various SMAP baseline data products. The SMAP passive-only soil moisture product depends on accurate parameterization of the tau-omega model to achieve the required accuracy in soil moisture retrieval. During a field experiment (APEX12) conducted in the summer of 2012 under dry conditions in Maryland, the Combined Radar/Radiometer (ComRAD) truck-based SMAP simulator collected active/passive microwave time series data at the SMAP incident angle of 40 degrees over corn and soybeans throughout the crop growth cycle. A similar experiment was conducted only over corn in 2002 under normal moist conditions. Data from these two experiments will be analyzed and compared to evaluate how changes in vegetation conditions throughout the growing season in both a drought and normal year can affect parameterizations in the tau-omega model for more accurate soil moisture retrieval.
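
    For reference, the zeroth-order tau-omega model being parameterized gives the brightness temperature at polarization p as TB_p = Ts e_p g + Tc (1 - w)(1 - g)(1 + r_p g), with vegetation transmissivity g = exp(-tau / cos(theta)). A minimal sketch with illustrative values (not the ComRAD retrievals):

    ```python
    # Minimal sketch of the zeroth-order tau-omega emission model:
    #   TB_p = Ts*e_p*g + Tc*(1 - w)*(1 - g)*(1 + r_p*g), g = exp(-tau/cos(th)).
    import numpy as np

    def tb_tau_omega(Ts, Tc, e_p, tau, omega, theta_deg=40.0):
        g = np.exp(-tau / np.cos(np.radians(theta_deg)))  # veg. transmissivity
        r_p = 1.0 - e_p                                   # soil reflectivity
        soil = Ts * e_p * g
        canopy = Tc * (1.0 - omega) * (1.0 - g) * (1.0 + r_p * g)
        return soil + canopy

    # Example: 300 K soil, 298 K canopy, moderately moist soil under corn.
    print(tb_tau_omega(Ts=300.0, Tc=298.0, e_p=0.75, tau=0.25, omega=0.05))
    ```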

  5. Local identifiability and sensitivity analysis of neuromuscular blockade and depth of hypnosis models.

    PubMed

    Silva, M M; Lemos, J M; Coito, A; Costa, B A; Wigren, T; Mendonça, T

    2014-01-01

    This paper addresses the local identifiability and sensitivity properties of two classes of Wiener models for neuromuscular blockade and depth of hypnosis, when drug dose profiles like those commonly administered in clinical practice are used as model inputs. Local parameter identifiability was assessed based on the singular value decomposition of the normalized sensitivity matrix. For the given input signal excitation, the results show an over-parameterization of the standard pharmacokinetic/pharmacodynamic models. The same identifiability assessment was performed on recently proposed minimally parameterized parsimonious models for both neuromuscular blockade and depth of hypnosis. The results show that the majority of the model parameters are identifiable from the available input-output data. This indicates that any identification strategy based on the minimally parameterized parsimonious Wiener models for neuromuscular blockade and depth of hypnosis is likely to be more successful than one based on the standard models.
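
    The identifiability test described reduces to an SVD of the normalized sensitivity matrix S[i,j] = (p_j / y_i) dy_i/dp_j: near-zero singular values flag practically unidentifiable parameter directions. A minimal sketch on a toy two-parameter model (not the paper's Wiener PK/PD models):

    ```python
    # Minimal sketch of SVD-based local identifiability analysis: build the
    # normalized sensitivity matrix by finite differences, inspect its
    # singular values. Toy two-parameter exponential model, illustrative only.
    import numpy as np

    def model(p, t):
        k1, k2 = p
        return np.exp(-k1 * t) + np.exp(-k2 * t)

    p0 = np.array([0.5, 0.6])                 # nearly redundant parameters
    t = np.linspace(0.1, 5.0, 50)
    y0 = model(p0, t)

    # Finite-difference, normalized sensitivities S[i, j] = (p_j/y_i) dy_i/dp_j.
    S = np.empty((t.size, p0.size))
    for j in range(p0.size):
        dp = np.zeros_like(p0)
        dp[j] = 1e-6 * p0[j]
        S[:, j] = (model(p0 + dp, t) - y0) / dp[j] * (p0[j] / y0)

    sv = np.linalg.svd(S, compute_uv=False)
    print("singular values:", sv)             # small ratio => poor identifiability
    ```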

  6. Hierarchical atom type definitions and extensible all-atom force fields.

    PubMed

    Jin, Zhao; Yang, Chunwei; Cao, Fenglei; Li, Feng; Jing, Zhifeng; Chen, Long; Shen, Zhe; Xin, Liang; Tong, Sijia; Sun, Huai

    2016-03-15

    The extensibility of a force field is key to solving the missing-parameter problem commonly encountered in force field applications. The extensibility of conventional force fields is traditionally managed in the parameterization procedure, which becomes impractical as the coverage of the force field increases above a threshold. A hierarchical atom-type definition (HAD) scheme is proposed to make atom type definitions extensible, which ensures that force fields developed based on the definitions are extensible. To demonstrate how HAD works and to prepare a foundation for future developments, two general force fields based on the AMBER and DFF functional forms are parameterized for common organic molecules. The force field parameters are derived from the same set of quantum mechanical data and experimental liquid data using an automated parameterization tool, and validated by calculating molecular and liquid properties. The hydration free energies are calculated successfully by introducing a polarization scaling factor to the dispersion term between the solvent and solute molecules.

  7. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  8. Handwriting: Feature Correlation Analysis for Biometric Hashes

    NASA Astrophysics Data System (ADS)

    Vielhauer, Claus; Steinmetz, Ralf

    2004-12-01

    In the application domain of electronic commerce, biometric authentication can provide one possible solution for the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published algorithms of this category, based on behavioral biometrics of handwriting: the biometric hash. Our interest is to investigate to what degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different size. The method presented will allow determination of the effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.

  9. The use and misuse of V(c,max) in Earth System Models.

    PubMed

    Rogers, Alistair

    2014-02-01

    Earth System Models (ESMs) aim to project global change. Central to this aim is the need to accurately model global carbon fluxes. Photosynthetic carbon dioxide assimilation by the terrestrial biosphere is the largest of these fluxes, and in many ESMs is represented by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis. The maximum rate of carboxylation by the enzyme Rubisco, commonly termed Vc,max, is a key parameter in the FvCB model. This study investigated the derivation of the values of Vc,max used to represent different plant functional types (PFTs) in ESMs. Four methods for estimating Vc,max were identified: (1) an empirical or (2) mechanistic relationship was used to relate Vc,max to leaf N content, (3) Vc,max was estimated using an approach based on the optimization of photosynthesis and respiration, or (4) a user-defined Vc,max was calibrated to obtain a target model output. Despite representing the same PFTs, the land model components of ESMs were parameterized with a wide range of values for Vc,max (-46 to +77% of the PFT mean). In many cases, parameterization was based on limited data sets and poorly defined coefficients that were used to adjust model parameters and set PFT-specific values for Vc,max. Examination of the models that linked leaf N mechanistically to Vc,max identified potential changes to fixed parameters that collectively would decrease Vc,max by 31% in C3 plants and 11% in C4 plants. Plant trait data bases are now available that offer an excellent opportunity for models to update PFT-specific parameters used to estimate Vc,max. However, data for parameterizing some PFTs, particularly those in the Tropics and the Arctic, are either highly variable or largely absent.

  10. Polarization-sensitive optical coherence tomography-based imaging, parameterization, and quantification of human cartilage degeneration

    NASA Astrophysics Data System (ADS)

    Brill, Nicolai; Wirtz, Mathias; Merhof, Dorit; Tingart, Markus; Jahr, Holger; Truhn, Daniel; Schmitt, Robert; Nebelung, Sven

    2016-07-01

    Polarization-sensitive optical coherence tomography (PS-OCT) is a light-based, high-resolution, real-time, noninvasive, and nondestructive imaging modality yielding quasimicroscopic cross-sectional images of cartilage. As yet, comprehensive parameterization and quantification of birefringence and tissue properties have not been performed on human cartilage. PS-OCT and algorithm-based image analysis were used to objectively grade human cartilage degeneration in terms of surface irregularity, tissue homogeneity, signal attenuation, as well as birefringence coefficient and band width, height, depth, and number. Degeneration-dependent changes were noted for the former three parameters exclusively, thereby questioning the diagnostic value of PS-OCT in the assessment of human cartilage degeneration.

  11. ULTRASONIC MEASUREMENT OF SEDIMENT RESUSPENSION

    EPA Science Inventory

    Recognizing the need for improved measurement and parameterization of sediment resuspension, this paper presents a review of the major methods now in use for alleviating this need. Special attention is devoted to reviewing methods for obtaining sediment concentration profiles by ...

  12. Review of design optimization methods for turbomachinery aerodynamics

    NASA Astrophysics Data System (ADS)

    Li, Zhihui; Zheng, Xinqian

    2017-08-01

    In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and "greener", but also developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization to solve real-world aerodynamic problems, especially for compressors and turbines. The review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods, (2) stochastic optimization combined with blade parameterization methods and design-of-experiment methods, (3) gradient-based optimization methods for compressors and turbines, and (4) data mining techniques for Pareto fronts. We also present our own insights regarding current research trends and the future of turbomachinery design optimization.

  13. Impact of Snow Grain Shape and Internal Mixing with Black Carbon Aerosol on Snow Optical Properties for use in Climate Models

    NASA Astrophysics Data System (ADS)

    He, C.; Liou, K. N.; Takano, Y.; Yang, P.; Li, Q.; Chen, F.

    2017-12-01

    A set of parameterizations is developed for the spectral single-scattering properties of clean and black carbon (BC)-contaminated snow based on geometric-optics surface-wave (GOS) computations, which explicitly resolve BC-snow internal mixing and various snow grain shapes. GOS calculations show that, compared with nonspherical grains, volume-equivalent snow spheres show up to 20% larger asymmetry factors and hence stronger forward scattering, particularly at wavelengths <1 μm. In contrast, snow grain sizes have a rather small impact on the asymmetry factor at wavelengths <1 μm, whereas size effects are important at longer wavelengths. The snow asymmetry factor is parameterized as a function of effective size, aspect ratio, and shape factor, and shows excellent agreement with GOS calculations. According to GOS calculations, the single-scattering coalbedo of pure snow is predominantly affected by grain size, rather than grain shape, with higher values for larger grains. The snow single-scattering coalbedo is parameterized in terms of the effective size that combines shape and size effects, with an accuracy of >99%. Based on GOS calculations, BC-snow internal mixing enhances the snow single-scattering coalbedo at wavelengths <1 μm, but it does not alter the snow asymmetry factor. The BC-induced enhancement ratio of the snow single-scattering coalbedo, independent of snow grain size and shape, is parameterized as a function of BC concentration with an accuracy of >99%. Overall, in addition to snow grain size, both BC-snow internal mixing and snow grain shape play critical roles in quantifying BC effects on snow optical properties. The present parameterizations can be conveniently applied to snow, land surface, and climate models that include snowpack radiative transfer processes.
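
    The functional structure described above (a size-dependent pure-snow coalbedo multiplied by a BC-driven enhancement ratio that depends only on BC concentration) can be sketched as follows; the functional forms and coefficients are placeholders, not the fitted GOS values.

        import numpy as np

        # Illustrative sketch of the parameterization structure described above.
        # Functional forms and coefficients are placeholders, not the fitted
        # GOS values.

        def coalbedo_pure_snow(r_eff_um, a=1e-6, b=1.0):
            # Larger grains -> higher coalbedo (more absorption), per the text.
            return a * r_eff_um**b

        def bc_enhancement_ratio(c_bc_ppb, c0=1.0, p=0.8):
            # Enhancement grows with BC concentration, independent of grain
            # size and shape, per the text.
            return 1.0 + 0.01 * (c_bc_ppb / c0)**p

        def coalbedo_contaminated(r_eff_um, c_bc_ppb):
            return coalbedo_pure_snow(r_eff_um) * bc_enhancement_ratio(c_bc_ppb)

        print(coalbedo_contaminated(200.0, 50.0))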

  14. A general science-based framework for dynamical spatio-temporal models

    USGS Publications Warehouse

    Wikle, C.K.; Hooten, M.B.

    2010-01-01

    Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal with this issue to some extent by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been on the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case. We then develop a general nonlinear spatio-temporal framework that we call general quadratic nonlinearity and demonstrate that it accommodates many different classes of science-based parameterizations as special cases. The model is presented in a hierarchical Bayesian framework and is illustrated with examples from ecology and oceanography. © 2010 Sociedad de Estadística e Investigación Operativa.
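
    A minimal sketch of the linear, first-order Markovian setting discussed above, in which a science-based (PDE-guided) parameterization collapses the n^2 entries of an unconstrained propagator to a single diffusion parameter:

        import numpy as np

        # Minimal linear, first-order Markovian dynamical spatio-temporal model:
        #   Y_t = M @ Y_{t-1} + eta_t,  eta_t ~ N(0, Q).
        # With n spatial locations, an unconstrained M has n^2 parameters (the
        # curse of dimensionality noted above); a science-based parameterization
        # such as a discretized 1-D diffusion gives a sparse tridiagonal
        # propagator controlled by one parameter.

        rng = np.random.default_rng(0)
        n, T = 50, 100
        alpha = 0.2   # single diffusion parameter
        M = (1 - 2 * alpha) * np.eye(n) + alpha * (np.eye(n, k=1) + np.eye(n, k=-1))

        Y = np.zeros((T, n))
        for t in range(1, T):
            Y[t] = M @ Y[t - 1] + 0.1 * rng.standard_normal(n)
        print(Y[-1, :3])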

  15. Simulation of semi-explicit mechanisms of SOA formation from glyoxal in a 3D model

    NASA Astrophysics Data System (ADS)

    Knote, C. J.; Hodzic, A.; Jimenez, J. L.; Volkamer, R.; Orlando, J. J.; Baidar, S.; Brioude, J. F.; Fast, J. D.; Gentner, D. R.; Goldstein, A. H.; Hayes, P. L.; Knighton, W. B.; Oetjen, H.; Setyan, A.; Stark, H.; Thalman, R. M.; Tyndall, G. S.; Washenfelder, R. A.; Waxman, E.; Zhang, Q.

    2013-12-01

    Formation of secondary organic aerosols (SOA) through multi-phase processing of glyoxal has recently been proposed as a relevant contributor to SOA mass. Glyoxal has both anthropogenic and biogenic sources, and readily partitions into the aqueous phase of cloud droplets and aerosols. Both reversible and irreversible chemistry in the liquid phase has been observed. A recent laboratory study indicates that the presence of salts in the liquid phase strongly enhances the Henry's law constant of glyoxal, allowing for much more effective multi-phase processing. In our work we investigate the contribution of glyoxal to SOA formation on the regional scale. We employ the regional chemistry transport model WRF-Chem with MOZART gas-phase chemistry and MOSAIC aerosols, both of which we extended to improve the description of glyoxal formation in the gas phase and its interactions with aerosols. The detailed description of aerosols in our setup allows us to compare very simple (uptake coefficient) parameterizations of SOA formation from glyoxal, as used in previous modeling studies, with much more detailed descriptions of the various pathways postulated from laboratory studies. Measurements taken during the CARES and CalNex campaigns in California in summer 2010 allowed us to constrain the model, including the major direct precursors of glyoxal. Simulations at convection-permitting resolution over a 2-week period in June 2010 were conducted to assess the effect of the different ways of parameterizing SOA formation from glyoxal and to investigate its regional variability. We find that, depending on the parameterization used, the contribution of glyoxal to SOA is between 1 and 15% in the LA basin during this period, and that simple parameterizations based on uptake coefficients derived from box model studies lead to higher contributions (15%) than parameterizations based on lab experiments (1%). A kinetic limitation found in experiments hinders substantial contribution of volume-based pathways to total SOA formation from glyoxal. Once removed, 5% of total SOA can be formed from glyoxal through these channels. Results from a year-long simulation over the continental US will give a broader picture of the contribution of glyoxal to SOA formation.
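
    The simple uptake-coefficient treatment mentioned above amounts to a first-order loss of gas-phase glyoxal to the aerosol surface, k = gamma * v_mean * S / 4; the sketch below assumes illustrative values for the uptake coefficient gamma and the surface-area density S.

        import numpy as np

        # Sketch of the simple uptake-coefficient treatment of glyoxal SOA
        # formation: first-order loss of gas-phase glyoxal to the aerosol
        # surface, k = gamma * v_mean * S / 4, with v_mean the mean molecular
        # speed and S the aerosol surface-area density. gamma and S below are
        # illustrative, not campaign-derived values.

        R = 8.314          # J mol-1 K-1
        M_GLY = 0.058      # kg mol-1, glyoxal molar mass
        T = 298.0          # K

        v_mean = np.sqrt(8 * R * T / (np.pi * M_GLY))   # m s-1
        gamma = 2.9e-3     # uptake coefficient (placeholder magnitude)
        S = 1e-4           # aerosol surface-area density, m2 per m3 of air

        k = gamma * v_mean * S / 4.0                    # s-1
        print(f"first-order uptake rate: {k:.2e} s-1")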

  16. Shape design sensitivity analysis and optimization of three dimensional elastic solids using geometric modeling and automatic regridding. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Yao, Tse-Min; Choi, Kyung K.

    1987-01-01

    An automatic regridding method and a three-dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three-dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. The automatic regridding method was developed by generating a domain velocity field with the boundary displacement method. Shape design parameterization for three-dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbation of design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.

  17. Parameterizing microphysical effects on variances and covariances of moisture and heat content using a multivariate probability density function: a study with CLUBB (tag MVCS)

    DOE PAGES

    Griffin, Brian M.; Larson, Vincent E.

    2016-11-25

    Microphysical processes, such as the formation, growth, and evaporation of precipitation, interact with variability and covariances (e.g., fluxes) in moisture and heat content. For instance, evaporation of rain may produce cold pools, which in turn may trigger fresh convection and precipitation. These effects are usually omitted or else crudely parameterized at subgrid scales in weather and climate models. A more formal approach is pursued here, based on predictive, horizontally averaged equations for the variances, covariances, and fluxes of moisture and heat content. These higher-order moment equations contain microphysical source terms. The microphysics terms can be integrated analytically, given a suitably simple warm-rain microphysics scheme and an approximate assumption about the multivariate distribution of cloud-related and precipitation-related variables. Performing the integrations provides exact expressions within an idealized context. A large-eddy simulation (LES) of a shallow precipitating cumulus case is performed here, and it indicates that the microphysical effects on (co)variances and fluxes can be large. In some budgets and altitude ranges, they are dominant terms. The analytic expressions for the integrals are implemented in a single-column, higher-order closure model. Interactive single-column simulations agree qualitatively with the LES. The analytic integrations form a parameterization of microphysical effects in their own right, and they also serve as benchmark solutions that can be compared to non-analytic integration methods.
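
    The analytic-integration idea can be illustrated on a toy case: a Kessler-type threshold process averaged over an assumed Gaussian subgrid PDF has the closed form E[max(X - a, 0)] = (mu - a) * Phi(d) + sigma * phi(d) with d = (mu - a) / sigma, which the sketch below (with illustrative values) checks against brute-force sampling.

        import numpy as np
        from scipy.stats import norm

        # Toy version of the analytic integration described above: average a
        # Kessler-type autoconversion k * max(qc - a, 0) over a Gaussian
        # subgrid PDF of qc. All numbers are illustrative.

        mu, sigma, a, k = 8e-4, 3e-4, 5e-4, 1e-3

        d = (mu - a) / sigma
        analytic = k * ((mu - a) * norm.cdf(d) + sigma * norm.pdf(d))

        rng = np.random.default_rng(3)
        qc = rng.normal(mu, sigma, 1_000_000)
        sampled = k * np.maximum(qc - a, 0.0).mean()

        print(analytic, sampled)   # should agree closely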

  18. Adaptively Parameterized Tomography of the Western Hellenic Subduction Zone

    NASA Astrophysics Data System (ADS)

    Hansen, S. E.; Papadopoulos, G. A.

    2017-12-01

    The Hellenic subduction zone (HSZ) is the most seismically active region in Europe and plays a major role in the active tectonics of the eastern Mediterranean. This complicated environment has the potential to generate both large-magnitude (M > 8) earthquakes and tsunamis. Situated above the western end of the HSZ, Greece faces a high risk from these geologic hazards, and characterizing this risk requires a detailed understanding of the geodynamic processes occurring in this area. However, despite previous investigations, the kinematics of the HSZ are still controversial. Regional tomographic studies have yielded important information about the shallow seismic structure of the HSZ, but these models only image down to 150 km depth within small geographic areas. Deeper structure is constrained by global tomographic models but with coarser resolution (approximately 200-300 km). Additionally, current tomographic models focused on the HSZ were generated with regularly spaced gridding, and this type of parameterization often over-emphasizes poorly sampled regions of the model or under-represents small-scale structure. Therefore, we are developing a new, high-resolution image of the mantle structure beneath the western HSZ using an adaptively parameterized seismic tomography approach. By combining multiple regional travel-time datasets in the context of a global model, with adaptable gridding based on the sampling density of high-frequency data, this method generates a composite model of mantle structure that is being used to better characterize geodynamic processes within the HSZ, thereby allowing for improved hazard assessment. Preliminary results will be shown.

  19. Importance of Chemical Composition of Ice Nuclei on the Formation of Arctic Ice Clouds

    NASA Astrophysics Data System (ADS)

    Keita, Setigui Aboubacar; Girard, Eric

    2016-09-01

    Ice clouds play an important role in the Arctic weather and climate system, but interactions between aerosols, clouds, and radiation remain poorly understood. Consequently, it is essential to fully understand their properties and especially their formation process. Extensive measurements from ground-based sites and satellite remote sensing reveal the existence of two Types of Ice Clouds (TICs) in the Arctic during the polar night and early spring. TICs-1 are composed of non-precipitating small (radar-unseen) ice crystals of less than 30 μm in diameter. The second type, TICs-2, are detected by radar and are characterized by a low concentration of large precipitating ice crystals (>30 μm). To explain these differences, we hypothesized that TIC-2 formation is linked to the acidification of aerosols, which inhibits the ice nucleating properties of ice nuclei (IN). As a result, the IN concentration is reduced in these regions, resulting in a lower concentration of larger ice crystals; with the water vapor available for deposition being the same, these crystals reach a larger size. Current weather and climate models cannot simulate these different types of ice clouds. This problem is partly due to the parameterizations implemented for ice nucleation. Over the past 10 years, several parameterizations of homogeneous and heterogeneous ice nucleation on IN of different chemical compositions have been developed. These parameterizations are based on two approaches: stochastic (nucleation is a probabilistic, time-dependent process) and singular (nucleation occurs at fixed conditions of temperature and humidity and is time-independent). The best approach remains unclear. This research aims to better understand the formation process of Arctic TICs using recently developed ice nucleation parameterizations. For this purpose, we have implemented these ice nucleation parameterizations into the Limited Area version of the Global Multiscale Environmental Model (GEM-LAM) and used them to simulate ice clouds observed during the Indirect and Semi-Direct Aerosol Campaign (ISDAC) in Alaska. Simulation results for the TICs-2 observed on April 15th and 25th (acidic cases) and the TICs-1 observed on April 5th (non-acidic cases) are presented. Our results show that the stochastic approach based on classical nucleation theory with the appropriate contact angle is the best approach for simulating the ice clouds investigated in this research, whereas parameterizations of ice nucleation based on the singular approach tend to overestimate the ice crystal concentration in TICs-1 and TICs-2.
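
    The two approaches contrasted above can be summarized in a toy sketch: the singular approach predicts a time-independent frozen fraction from an active-site density ns(T), while the stochastic approach uses a nucleation rate J(T), so the frozen fraction grows with time. The functional forms of ns and J below are placeholders only.

        import numpy as np

        # Contrast of the two ice-nucleation approaches described above, for a
        # population of identical IN with surface area A. Functional forms of
        # ns(T) and J(T) are illustrative placeholders only.

        A = 1e-11   # particle surface area, m2

        def frozen_fraction_singular(T_celsius):
            # Singular: time-independent, set by active-site density ns(T) [m-2].
            ns = 1e9 * np.exp(-0.5 * (T_celsius + 5.0))   # placeholder form
            return 1.0 - np.exp(-ns * A)

        def frozen_fraction_stochastic(T_celsius, t_seconds):
            # Stochastic (CNT-like): time-dependent, set by a rate J(T) [m-2 s-1].
            J = 1e6 * np.exp(-0.7 * (T_celsius + 5.0))    # placeholder form
            return 1.0 - np.exp(-J * A * t_seconds)

        print(frozen_fraction_singular(-20.0))
        print(frozen_fraction_stochastic(-20.0, 600.0))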

  20. Parameterization of spectral baseline directly from short echo time full spectra in 1H-MRS.

    PubMed

    Lee, Hyeong Hun; Kim, Hyeonjin

    2017-09-01

    To investigate the feasibility of parameterizing macromolecule (MM) resonances directly from short echo time (TE) spectra rather than from pre-acquired, T1-weighted, metabolite-nulled spectra in 1H-MRS. Initial line parameters for metabolites and MMs were set for rat brain spectra acquired at 9.4 Tesla based on a priori knowledge. Then, MM line parameters were optimized over several steps with fixed metabolite line parameters. The proposed method was tested by estimating metabolite T1. The results were compared with those obtained with two existing methods. Furthermore, subject-specific, spin density-weighted, MM model spectra were generated according to the MM line parameters from the proposed method for metabolite quantification. The results were compared with those obtained with subject-specific, T1-weighted, metabolite-nulled spectra. The metabolite T1 values were largely in close agreement among the three methods. The spin density-weighted MM resonances from the proposed method were in good agreement with the T1-weighted, metabolite-nulled spectra except for the MM resonance at ∼3.2 ppm. The metabolite concentrations estimated by incorporating these two different spectral baselines were also in good agreement except for several metabolites with resonances at ∼3.2 ppm. MM parameterization directly from short-TE spectra is feasible. Further development of the method may allow for better representation of the spectral baseline with negligible T1-weighting. Magn Reson Med 78:836-847, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  1. Improving and Understanding Climate Models: Scale-Aware Parameterization of Cloud Water Inhomogeneity and Sensitivity of MJO Simulation to Physical Parameters in a Convection Scheme

    NASA Astrophysics Data System (ADS)

    Xie, Xin

    Microphysics and convection parameterizations are two key components of a climate model for simulating realistic climatology and variability of cloud distribution and the cycles of energy and water. When a model has varying grid size, or simulations have to be run at different resolutions, a scale-aware parameterization is desirable so that model parameters do not have to be tuned to a particular grid size. The subgrid variability of cloud hydrometeors is known to impact microphysics processes in climate models and is found to depend strongly on spatial scale. A scale-aware liquid cloud subgrid variability parameterization is derived and implemented in the Community Earth System Model (CESM) in this study using long-term radar-based ground measurements from the Atmospheric Radiation Measurement (ARM) program. When used in the default CESM1 with the finite-volume dynamical core, where a constant liquid inhomogeneity parameter was assumed, the newly developed parameterization reduces the cloud inhomogeneity in high latitudes and increases it in low latitudes. This is due to the smaller grid size in high latitudes and larger grid size in low latitudes in the longitude-latitude grid of CESM, as well as variations in atmospheric stability. Single-column model and general circulation model (GCM) sensitivity experiments show that the new parameterization increases the cloud liquid water path in polar regions and decreases it in low latitudes. The current CESM1 simulation suffers from biases in both the Pacific double-ITCZ precipitation and a weak Madden-Julian oscillation (MJO). Previous studies show that convective parameterization with multiple plumes may have the capability to alleviate such biases in a more uniform and physical way. A multiple-plume mass-flux convective parameterization is used in the Community Atmosphere Model (CAM) to investigate the sensitivity of MJO simulations. We show that the MJO simulation is sensitive to the entrainment rate specification and find that shallow plumes can generate and sustain MJO propagation in the model.

  2. Strong parameterization and coordination encirclements of graph of Penrose tiling vertices

    NASA Astrophysics Data System (ADS)

    Shutov, A. V.; Maleev, A. V.

    2017-07-01

    The coordination encirclements in a graph of Penrose tiling vertices have been investigated based on an analysis of vertex parameters. A strong parameterization of these vertices is developed in the form of a tiling of the parameter set into regions corresponding to different first coordination encirclements of vertices. An algorithm is proposed for constructing tilings of the parameter set that determine the different order-n coordination encirclements in a graph of Penrose tiling vertices.

  3. Stochastic Parameterization: Toward a New View of Weather and Climate Models

    DOE PAGES

    Berner, Judith; Achatz, Ulrich; Batté, Lauriane; ...

    2017-03-31

    The last decade has seen the success of stochastic parameterizations in short-term, medium-range, and seasonal forecasts: operational weather centers now routinely use stochastic parameterization schemes to better represent model inadequacy and to improve the quantification of forecast uncertainty. Developed initially for numerical weather prediction, the inclusion of stochastic parameterizations not only provides better estimates of uncertainty, but is also extremely promising for reducing long-standing climate biases and is relevant for determining the climate response to external forcing. This article highlights recent developments from different research groups showing that the stochastic representation of unresolved processes in the atmosphere, oceans, land surface, and cryosphere of comprehensive weather and climate models (1) gives rise to more reliable probabilistic forecasts of weather and climate and (2) reduces systematic model bias. We make a case that the use of mathematically stringent methods for the derivation of stochastic dynamic equations will lead to substantial improvements in our ability to accurately simulate weather and climate at all scales. Recent work in mathematics, statistical mechanics, and turbulence is reviewed; its relevance for the climate problem is demonstrated; and future research directions are outlined.
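
    One widely used concrete realization of these ideas (not specific to the schemes reviewed here) is to perturb the parameterized tendencies with a smooth multiplicative noise pattern, as in SPPT-style schemes; a minimal sketch with an AR(1) perturbation and an illustrative relaxation tendency:

        import numpy as np

        # Minimal sketch of one widely used stochastic-parameterization idea
        # (SPPT-style multiplicative perturbation of parameterized tendencies);
        # the reviewed schemes differ in detail, and all numbers here are
        # illustrative.

        rng = np.random.default_rng(1)
        n_steps, phi, sigma = 100, 0.95, 0.2   # AR(1) memory and amplitude

        r = 0.0
        state = 280.0   # e.g., a temperature-like model variable
        for _ in range(n_steps):
            r = phi * r + np.sqrt(1 - phi**2) * sigma * rng.standard_normal()
            tendency_param = -0.01 * (state - 285.0)   # placeholder tendency
            state += (1.0 + r) * tendency_param        # perturbed tendency
        print(state)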

  5. Why do different gas flux velocity parameterizations result in such similar flux results in the North Atlantic?

    NASA Astrophysics Data System (ADS)

    Piskozub, Jacek; Wróbel, Iwona

    2016-04-01

    The North Atlantic is a crucial region for both ocean circulation and the carbon cycle. Most of the ocean's deep waters are produced in the basin, making it a large CO2 sink. The region, close to the major oceanographic centres, has been well covered with cruises. This is why we have performed a study of the dependence of net CO2 flux upon the choice of gas transfer velocity (k) parameterization for this very region: the North Atlantic, including the European Arctic seas. The study has been part of the ESA-funded OceanFlux GHG Evolution project and, at the same time, a PhD thesis (of I.W.) funded by the Centre of Polar Studies "POLAR-KNOW" (a project of the Polish Ministry of Science). Early results were presented last year at EGU 2015 as PICO presentation EGU2015-11206-1. We have used FluxEngine, a tool created within an earlier ESA-funded project (OceanFlux Greenhouse Gases), to calculate the North Atlantic and global fluxes with different gas transfer velocity formulas. During the processing of the data, we noticed that the North Atlantic results for different k formulas are more similar (in the sense of relative error) than global ones. This was true both for parameterizations using the same power of wind speed and when comparing wind-squared and wind-cubed parameterizations. This result was interesting because North Atlantic winds are stronger than the global average. Was the similarity of the flux results caused by the fact that the parameterizations were tuned to the North Atlantic, where many of the early cruises measuring CO2 fugacities were performed? A closer look at the parameterizations and their history showed that not all of them were based on North Atlantic data. Some were tuned to the Southern Ocean, with even stronger winds, while some were based on global budgets of 14C. However, we have found two reasons, not reported before in the literature, for North Atlantic fluxes being more similar than global ones across different gas transfer velocity parameterizations. The first is the fact that most of the k functions intersect close to 9 m/s, a typical North Atlantic wind speed. The squared and cubed functions need to intersect in order to have similar global averages: the higher values of cubic functions for strong winds are offset by higher values of squared ones for weak winds. The wind speed at the intersection has to be higher than the global average wind speed because discrepancies between different parameterizations increase with wind speed. The North Atlantic region seems, by chance, to have just the right average wind speeds to make all the parameterizations result in similar annual fluxes. There is, however, a second reason for smaller inter-parameterization discrepancies in the North Atlantic than in many other ocean basins. The North Atlantic CO2 fluxes are downward in every month. In many regions of the world, the direction of the flux changes between winter and summer, with wind speeds much stronger in the cold season. We show, using the actual formulas, that in such a case the differences between the parameterizations partly cancel out, which is not the case when the flux never changes direction. Both mechanisms accidentally make the North Atlantic an area where the choice of k parameterization causes very small uncertainty in annual fluxes. On the other hand, this makes North Atlantic data not very useful for choosing the parameterizations that most closely represent real fluxes.
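
    The intersection argument can be checked with representative quadratic and cubic k660 forms; coefficients of the magnitude below appear in the literature but are treated here as illustrative:

        import numpy as np

        # Checking the intersection argument above with representative
        # quadratic and cubic gas transfer velocity forms (k660 in cm/h,
        # U10 in m/s). Coefficients are of the magnitude used in the
        # literature but are treated here as illustrative.

        def k_quadratic(u10, a=0.251):
            return a * u10**2

        def k_cubic(u10, b=0.0283):
            return b * u10**3

        u_cross = 0.251 / 0.0283   # where a*U^2 == b*U^3
        print(f"intersection at U10 = {u_cross:.1f} m/s")   # ~8.9 m/s

        u = np.linspace(0, 15, 4)
        print(k_quadratic(u) - k_cubic(u))   # sign flips across the intersection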

  6. Use of a flux-based field capacity criterion to identify effective hydraulic parameters of layered soil profiles subjected to synthetic drainage experiments

    NASA Astrophysics Data System (ADS)

    Nasta, Paolo; Romano, Nunzio

    2016-01-01

    This study explores the feasibility of identifying the effective soil hydraulic parameterization of a layered soil profile by using a conventional unsteady drainage experiment leading to field capacity. The flux-based field capacity criterion is attained by subjecting the soil profile to a synthetic drainage process implemented numerically in the Soil-Water-Atmosphere-Plant (SWAP) model. The effective hydraulic parameterization is associated with either aggregated or equivalent parameters, the former determined by geometrical scaling theory and the latter obtained through an inverse modeling approach. Outcomes from both methods depend on information that is sometimes difficult to retrieve at the local scale and rather challenging or virtually impossible at larger scales. Knowledge of topsoil hydraulic properties alone, for example as retrieved by a near-surface field campaign or a data assimilation technique, is often exploited as a proxy to determine the effective soil hydraulic parameterization at the largest spatial scales. Comparisons of the effective soil hydraulic characterizations provided by these three methods are conducted by discussing the implications of their use and accounting for the trade-offs between required input information and model output reliability. To better highlight the epistemic errors associated with the different effective soil hydraulic properties and to provide more practical guidance, the layered soil profiles are then grouped using the FAO textural classes. For the moderately heterogeneous soil profiles available, all three approaches guarantee generally good predictability of the actual field capacity values and provide adequate identification of the effective hydraulic parameters. Conversely, worse performance is encountered for highly variable vertical heterogeneity, especially when resorting to the "topsoil-only" information. In general, the best performance is achieved by the equivalent parameters, which might be considered a reference for comparisons with other techniques. As might be expected, the information content of the soil hydraulic properties pertaining only to the uppermost soil horizon is rather limited and not capable of mapping out the hydrologic behavior of the real vertical soil heterogeneity, since the drainage process is significantly affected by profile layering in almost all cases.

  7. Cloud Microphysics Parameterization in a Shallow Cumulus Cloud Simulated by a Lagrangian Cloud Model

    NASA Astrophysics Data System (ADS)

    Oh, D.; Noh, Y.; Hoffmann, F.; Raasch, S.

    2017-12-01

    The Lagrangian cloud model (LCM) is a fundamentally new approach to cloud simulation, in which the flow field is simulated by large-eddy simulation and droplets are treated as Lagrangian particles undergoing cloud microphysics. The LCM enables us to investigate raindrop formation and to examine the parameterization of cloud microphysics directly by tracking the history of individual Lagrangian droplets. Analyzing the magnitude of raindrop formation and the background physical conditions at the moment each Lagrangian droplet grows from a cloud droplet to a raindrop in a shallow cumulus cloud reveals how, and under which conditions, raindrops are formed. It also provides information on how autoconversion and accretion appear and evolve within a cloud, and how they are affected by various factors such as cloud water mixing ratio, rain water mixing ratio, aerosol concentration, drop size distribution, and dissipation rate. Based on these results, parameterizations of autoconversion and accretion, such as those of Kessler (1969), Tripoli and Cotton (1980), Beheng (1994), and Khairoutdinov and Kogan (2000), are examined, and modifications to improve them are proposed.
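
    Two of the schemes named above have simple closed forms in their commonly cited versions: Kessler (1969) is threshold-linear in cloud water, while Khairoutdinov and Kogan (2000) is a double power law in cloud water and droplet number concentration:

        import numpy as np

        # Two of the autoconversion schemes named above, in their commonly
        # cited forms (units: qc in kg/kg, Nc in cm^-3, rates in kg/kg/s).

        def autoconv_kessler(qc, k=1e-3, qc0=5e-4):
            # Kessler (1969): linear above a cloud-water threshold.
            return k * np.maximum(qc - qc0, 0.0)

        def autoconv_kk2000(qc, nc):
            # Khairoutdinov and Kogan (2000): double power law.
            return 1350.0 * qc**2.47 * nc**-1.79

        print(autoconv_kessler(1e-3))
        print(autoconv_kk2000(1e-3, 100.0))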

  8. The relationship between a deformation-based eddy parameterization and the LANS-α turbulence model

    NASA Astrophysics Data System (ADS)

    Bachman, Scott D.; Anstey, James A.; Zanna, Laure

    2018-06-01

    A recent class of ocean eddy parameterizations proposed by Porta Mana and Zanna (2014) and Anstey and Zanna (2017) modeled the large-scale flow as a non-Newtonian fluid whose subgrid-scale eddy stress is a nonlinear function of the deformation. This idea, while largely new to ocean modeling, has a history in turbulence modeling dating at least back to Rivlin (1957). The new class of parameterizations results in equations that resemble the Lagrangian-averaged Navier-Stokes-α model (LANS-α, e.g., Holm et al., 1998a). In this note we employ basic tensor mathematics to highlight the similarities between these turbulence models using component-free notation. We extend the Anstey and Zanna (2017) parameterization, which was originally presented in 2D, to 3D, and derive variants of this closure that arise when the full non-Newtonian stress tensor is used. Despite the mathematical similarities between the non-Newtonian and LANS-α models, which might provide insight into numerical implementation, the input and dissipation of kinetic energy in these two turbulence models differ.

  9. A new scheme for the parameterization of the turbulent planetary boundary layer in the GLAS fourth order GCM

    NASA Technical Reports Server (NTRS)

    Helfand, H. M.

    1985-01-01

    Methods being used to increase the horizontal and vertical resolution, and to implement more sophisticated parameterization schemes, in general circulation models (GCMs) run on newer, more powerful computers are described. Attention is focused on the NASA Goddard Laboratory for Atmospheric Sciences (GLAS) fourth-order GCM. A new planetary boundary layer (PBL) model has been developed which features explicit resolution of two or more layers. Numerical models are presented for parameterizing the turbulent vertical heat, momentum, and moisture fluxes at the earth's surface and between the layers of the PBL model. An extended Monin-Obukhov similarity scheme is applied to express the relationships between the lowest levels of the GCM and the surface fluxes. On-line weather prediction experiments are to be run to test the effects of the higher resolution thereby obtained on dynamic atmospheric processes.
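
    In the neutral limit, the similarity-based surface-flux calculation reduces to bulk formulas with log-law transfer coefficients; the sketch below shows that limit only, omitting the stability corrections that the extended scheme supplies.

        import numpy as np

        # Neutral-limit sketch of Monin-Obukhov-style surface fluxes: log-law
        # transfer coefficients and bulk formulas. Stability corrections are
        # omitted for brevity, and all numbers are illustrative.

        KAPPA = 0.4     # von Karman constant
        RHO = 1.2       # air density, kg m-3
        CP = 1004.0     # specific heat of air, J kg-1 K-1

        def neutral_coeff(z, z0):
            return (KAPPA / np.log(z / z0))**2

        def surface_fluxes(u, t_air, t_sfc, z=10.0, z0=0.01):
            c = neutral_coeff(z, z0)
            tau = RHO * c * u**2                      # momentum flux, N m-2
            shf = RHO * CP * c * u * (t_sfc - t_air)  # sensible heat flux, W m-2
            return tau, shf

        print(surface_fluxes(8.0, 288.0, 290.0))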

  10. Rapid Parameterization Schemes for Aircraft Shape Optimization

    NASA Technical Reports Server (NTRS)

    Li, Wu

    2012-01-01

    A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.

  11. A linear-RBF multikernel SVM to classify big text corpora.

    PubMed

    Romero, R; Iglesias, E L; Borrajo, L

    2015-01-01

    Support vector machine (SVM) is a powerful technique for classification. However, SVM is not suitable for classification of large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on SVMs and other kernel methods emphasize the need to consider multiple kernels, or parameterizations of kernels, because they provide greater flexibility. This paper presents a multikernel SVM for managing high-dimensional data, providing automatic parameterization at low computational cost and improving on SVMs parameterized by brute-force search. The model consists of spreading the dataset into cohesive term slices (clusters) to construct a defined structure (the multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while training is significantly faster than for several other SVM classifiers.
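
    The core idea of combining a linear and an RBF kernel can be sketched with scikit-learn's precomputed-kernel interface; the paper's cluster-wise term slices and automatic parameterization are not reproduced, and the mixing weight here is an illustrative free parameter.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
        from sklearn.svm import SVC

        # Sketch of a linear+RBF multikernel SVM via a precomputed kernel.
        # The paper's cluster-wise "term slices" and automatic parameterization
        # are not reproduced; the mixing weight w is illustrative.

        X, y = make_classification(n_samples=200, n_features=50, random_state=0)
        X_train, y_train = X[:150], y[:150]
        X_test, y_test = X[150:], y[150:]

        def multi_kernel(A, B, w=0.5, gamma=0.01):
            return w * linear_kernel(A, B) + (1 - w) * rbf_kernel(A, B, gamma=gamma)

        clf = SVC(kernel="precomputed")
        clf.fit(multi_kernel(X_train, X_train), y_train)
        acc = clf.score(multi_kernel(X_test, X_train), y_test)
        print(f"test accuracy: {acc:.2f}")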

  12. A parameterized logarithmic image processing method with Laplacian of Gaussian filtering for lung nodule enhancement in chest radiographs.

    PubMed

    Chen, Sheng; Yao, Liping; Chen, Bao

    2016-11-01

    The enhancement of lung nodules in chest radiographs (CXRs) plays an important role in manual as well as computer-aided detection (CADe) of lung cancer. In this paper, we propose a parameterized logarithmic image processing (PLIP) method combined with Laplacian of Gaussian (LoG) filtering to enhance lung nodules in CXRs. We first apply several LoG filters with varying parameters to an original CXR to enhance the nodule-like structures as well as the edges in the image. We then apply the PLIP model, which can enhance lung nodule images with high contrast and is beneficial for extracting effective features for nodule detection in a CADe scheme. Our method combines the advantages of both the PLIP algorithm and the LoG algorithm, enhancing lung nodules in chest radiographs with high contrast. To test our nodule enhancement method, we evaluated a CADe scheme with relatively high nodule-detection performance on a publicly available database containing 140 nodules in 140 CXRs enhanced by our method. The CADe scheme attained sensitivities of 81 and 70% at an average of 5.0 and 2.0 false positives (FPs) per image, respectively, in a leave-one-out cross-validation test. By contrast, the CADe scheme based on the original images recorded sensitivities of 77 and 63% at 5.0 and 2.0 FPs per image, respectively. We also introduced a measure of enhancement based on entropy evaluation to objectively assess our method. Experimental results show that the proposed method achieves effective enhancement of lung nodules in CXRs for both radiologists and CADe schemes.
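
    The LoG stage is standard and easy to sketch; below, scipy's gaussian_laplace is applied at several scales and combined with a plain logarithmic contrast stretch that stands in for the PLIP model (the exact PLIP operators are not reproduced).

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        # Sketch of the pipeline's two stages. The multi-scale LoG filtering
        # is standard; the final step uses a plain logarithmic contrast
        # stretch as a stand-in -- the actual PLIP operators are not
        # reproduced here.

        def log_enhance(image, sigmas=(2.0, 4.0, 8.0), weight=0.5):
            img = image.astype(float)
            # Multi-scale LoG: negative responses highlight bright blob-like
            # (nodule-like) structures; accumulate across scales.
            blobs = sum(-gaussian_laplace(img, sigma) * sigma**2 for sigma in sigmas)
            enhanced = img + weight * blobs
            # Logarithmic contrast stretch (PLIP stand-in), rescaled to [0, 1].
            enhanced -= enhanced.min()
            stretched = np.log1p(enhanced)
            return stretched / stretched.max()

        chest = np.random.rand(256, 256)   # placeholder for a real CXR
        print(log_enhance(chest).shape)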

  13. Multisite Evaluation of APEX for Water Quality: I. Best Professional Judgment Parameterization.

    PubMed

    Baffaut, Claire; Nelson, Nathan O; Lory, John A; Senaviratne, G M M M Anomaa; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S

    2017-11-01

    The Agricultural Policy Environmental eXtender (APEX) model is capable of estimating edge-of-field water, nutrient, and sediment transport and is used to assess the environmental impacts of management practices. The current practice is to fully calibrate the model for each site simulation, a task that requires resources and data not always available. The objective of this study was to compare model performance for flow, sediment, and phosphorus transport under two parameterization schemes: a best professional judgment (BPJ) parameterization based on readily available data, and a fully calibrated parameterization based on site-specific soil, weather, event flow, and water quality data. The analysis was conducted using 12 datasets at four locations representing poorly drained soils and row-crop production under different tillage systems. Model performance was based on the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²), and the regression slope between simulated and measured annualized loads across all site-years. Although the BPJ model performance for flow was acceptable (NSE = 0.7) at the annual time step, calibration improved it (NSE = 0.9). Acceptable simulation of sediment and total phosphorus transport (NSE = 0.5 and 0.9, respectively) was obtained only after full calibration at each site. Given the unacceptable performance of the BPJ approach, uncalibrated use of APEX for planning or management purposes may be misleading. Model calibration with water quality data prior to using APEX for simulating sediment and total phosphorus loss is essential. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
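
    The NSE used above to score model performance is straightforward to compute:

        import numpy as np

        # Nash-Sutcliffe efficiency, as used above:
        #   NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2).
        # NSE = 1 is a perfect fit; NSE <= 0 means the model predicts no
        # better than the observed mean.

        def nse(sim, obs):
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

        obs = np.array([1.2, 3.4, 2.1, 5.6, 4.3])   # illustrative loads
        sim = np.array([1.0, 3.0, 2.5, 5.0, 4.8])
        print(f"NSE = {nse(sim, obs):.2f}")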

  14. Parameter dimension of turbulence-induced phase errors and its effects on estimation in phase diversity

    NASA Technical Reports Server (NTRS)

    Thelen, Brian J.; Paxman, Richard G.

    1994-01-01

    The method of phase diversity has been used in the context of incoherent imaging to jointly estimate an object that is being imaged and the phase aberrations induced by atmospheric turbulence. The method requires a parametric model for the phase-aberration function. Typically, the parameters are coefficients of a finite set of basis functions. Care must be taken in selecting a parameterization that properly balances accuracy in the representation of the phase-aberration function with stability in the estimates. It is well known that overparameterization can result in unstable estimates; thus a certain amount of model mismatch is often desirable. We derive expressions that quantify the bias and variance in object and aberration estimates as a function of parameter dimension.

  15. A comprehensive parameterization of heterogeneous ice nucleation of dust surrogate: laboratory study with hematite particles and its application to atmospheric models

    NASA Astrophysics Data System (ADS)

    Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.

    2014-12-01

    A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, constrained by identical experimental conditions, is important for accurately simulating ice nucleation processes in cirrus clouds. The ice nucleation active surface-site density (ns) of hematite particles, used as a proxy for atmospheric dust particles, was derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water-subsaturated conditions. These conditions were achieved by continuously changing the temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the two other T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T lower than -60 °C revealed that higher RHice was necessary to maintain a constant ns, whereas T may have played a significant role in ice nucleation at T higher than -50 °C. We implemented the new hematite-derived ns parameterization, which agrees well with previous AIDA measurements of desert dust, into two conceptual cloud models to investigate the sensitivity of simulated cirrus cloud properties to the new parameterization in comparison with existing ice nucleation schemes. Our results show that the new AIDA-based parameterization leads to an order of magnitude higher ice crystal concentrations and to an inhibition of homogeneous nucleation in lower-temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at temperatures below -36 °C can potentially have a stronger influence on cloud properties, such as cloud longevity and initiation, than suggested by previous parameterizations.

  16. A basal stress parameterization for modeling landfast ice

    NASA Astrophysics Data System (ADS)

    Lemieux, Jean-François; Tremblay, L. Bruno; Dupont, Frédéric; Plante, Mathieu; Smith, Gregory C.; Dumont, Dany

    2015-04-01

    Current large-scale sea ice models represent only crudely, or are unable to simulate, the formation, maintenance, and decay of coastal landfast ice. We present a simple landfast ice parameterization representing the effect of grounded ice keels. This parameterization is based on bathymetry data and the mean ice thickness in a grid cell. It is easy to implement and can be used in both two-thickness and multithickness category models. Two free parameters determine the critical thickness required for large ice keels to reach the bottom and the basal stress associated with the weight of the ridge above hydrostatic balance. A sensitivity study demonstrates that the parameter associated with the critical thickness has the largest influence on the simulated landfast ice area. A 6-year (2001-2007) simulation with a 20-km resolution sea ice model was performed. The simulated landfast ice areas for regions off the coast of Siberia and for the Beaufort Sea were calculated and compared with data from the National Ice Center. With optimal parameters, the basal stress parameterization leads to a slightly shorter landfast ice season but overall provides a realistic seasonal cycle of the landfast ice area in the East Siberian, Laptev, and Beaufort Seas. However, in the Kara Sea, where ice arches between islands are key to the stability of the landfast ice, the parameterization consistently underestimates the landfast area.
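
    The two-parameter structure of the scheme can be sketched as follows; this follows the general form described above (a critical grounding thickness set by k1 and a basal stress scaled by k2), with details and values that are illustrative rather than the published expressions.

        import numpy as np

        # Structural sketch of the two-parameter grounded-keel scheme
        # described above: k1 sets the critical mean thickness at which keels
        # reach the seabed, and k2 scales the basal stress from the grounded
        # excess thickness. Forms and values are illustrative, not the
        # published expressions.

        def critical_thickness(depth, ice_conc, k1=8.0):
            # Deeper water or lower concentration -> thicker ice needed to ground.
            return ice_conc * depth / k1

        def basal_stress(h_mean, depth, ice_conc, k1=8.0, k2=15.0, alpha=20.0):
            hc = critical_thickness(depth, ice_conc, k1)
            excess = np.maximum(h_mean - hc, 0.0)
            # Stress grows with grounded excess, damped in loose ice packs.
            return k2 * excess * np.exp(-alpha * (1.0 - ice_conc))

        print(basal_stress(h_mean=2.5, depth=15.0, ice_conc=0.95))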

  17. Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics

    DOE PAGES

    Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul

    2015-03-11

    Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's "Lagrangian ingredients" (the Riemannian metric, the potential-energy function, the dissipation function, and the external force) and subsequently derives reduced-order equations of motion by applying the (forced) Euler-Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.

  19. Methods used to parameterize the spatially-explicit components of a state-and-transition simulation model

    USGS Publications Warehouse

    Sleeter, Rachel; Acevedo, William; Soulard, Christopher E.; Sleeter, Benjamin M.

    2015-01-01

    Spatially explicit state-and-transition simulation models of land use and land cover (LULC) increase our ability to assess regional landscape characteristics and associated carbon dynamics across multiple scenarios. By characterizing appropriate spatial attributes such as forest age and land-use distribution, a state-and-transition model can more effectively simulate the pattern and spread of LULC changes. This manuscript describes the methods and input parameters of the Land Use and Carbon Scenario Simulator (LUCAS), a customized state-and-transition simulation model used to assess the relative impacts of LULC on carbon stocks for the conterminous U.S. The methods and input parameters are spatially explicit and describe initial conditions (strata, state classes, and forest age), spatial multipliers, and carbon stock density. Initial conditions were derived from a harmonization of multi-temporal data characterizing changes in land use as well as land cover. Harmonization combines numerous national-level datasets through a cell-based data fusion process to generate maps of primary LULC categories. Forest age was parameterized using data from the North American Carbon Program and spatially explicit maps showing the locations of past disturbances (i.e., wildfire and harvest). Spatial multipliers were developed to spatially constrain the location of future LULC transitions. Based on distance-decay theory, maps were generated to guide the placement of changes related to forest harvest, agricultural intensification/extensification, and urbanization. We analyze the spatially explicit input parameters with a sensitivity analysis, showing how LUCAS responds to variations in model input. This manuscript uses Mediterranean California as a regional subset to highlight local to regional aspects of land change, demonstrating the utility of LUCAS at many scales and for many applications.
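
    A distance-decay spatial multiplier of the kind described above can be sketched with a distance transform and an exponential decay; the seed patch and decay length are illustrative.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        # Sketch of a distance-decay spatial multiplier of the kind described
        # above: cells near existing urban area get multipliers near 1,
        # decaying with distance. The seed patch and decay length are
        # illustrative.

        urban = np.zeros((100, 100), dtype=bool)
        urban[40:45, 40:45] = True                 # placeholder seed urban patch

        cellsize_km = 1.0
        dist_km = distance_transform_edt(~urban) * cellsize_km
        decay_length_km = 5.0
        multiplier = np.exp(-dist_km / decay_length_km)   # in (0, 1]

        # A state-and-transition simulator would multiply base transition
        # probabilities for urbanization by this surface.
        print(multiplier[42, 60])   # far from the patch -> small multiplier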

  20. Topology Synthesis of Structures Using Parameter Relaxation and Geometric Refinement

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.

    2007-01-01

    Typically, structural topology optimization problems undergo relaxation of certain design parameters to allow the existence of intermediate-variable optimum topologies. Relaxation permits the use of a variety of gradient-based search techniques and has been shown to guarantee the existence of optimal solutions and to eliminate mesh dependencies. This Technical Publication (TP) demonstrates the application of relaxation to a control-point discretization of the design workspace for the structural topology optimization process. The control-point parameterization with subdivision has been offered as an alternative to the traditional discretized finite element design domain. The principle of relaxation demonstrates the increased utility of the control-point parameterization. One of the significant results of the relaxation process offered in this TP is that direct manufacturability of the optimized design is maintained without the need for designer intervention or translation. In addition, it is shown that relaxation of certain parameters may extend the range of problems that can be addressed, e.g., by permitting limited out-of-plane motion to be included in a path generation problem.
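
    Relaxation in its classic power-law (SIMP-style) form replaces binary material indicators with densities in [0, 1] and interpolates stiffness between void and solid; the sketch below shows that interpolation step only (the TP's control-point discretization is not reproduced).

        import numpy as np

        # The relaxation idea in its classic power-law (SIMP-style) form:
        # binary material indicators are relaxed to densities rho in [0, 1],
        # and element stiffness is interpolated as
        #   E(rho) = E_min + rho**p * (E0 - E_min).
        # Intermediate densities become admissible (enabling gradient-based
        # search), while the penalty p > 1 pushes optima back toward 0/1
        # designs. This is not the TP's control-point scheme.

        def simp_stiffness(rho, e0=1.0, e_min=1e-9, p=3.0):
            rho = np.clip(rho, 0.0, 1.0)
            return e_min + rho**p * (e0 - e_min)

        print(simp_stiffness(np.array([0.0, 0.5, 1.0])))  # [~0, 0.125, 1.0]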

  1. Measures of Microbial Biomass for Soil Carbon Decomposition Models

    NASA Astrophysics Data System (ADS)

    Mayes, M. A.; Dabbs, J.; Steinweg, J. M.; Schadt, C. W.; Kluber, L. A.; Wang, G.; Jagadamma, S.

    2014-12-01

    Explicit parameterization of the decomposition of plant inputs and soil organic matter by microbes is becoming more widely accepted in models of varying complexity, ranging from detailed process models to global-scale earth system models. While there are multiple ways to measure microbial biomass, chloroform fumigation-extraction (CFE) is commonly used to parameterize models. However, CFE is labor- and time-intensive, requires toxic chemicals, and provides no specific information about the composition or function of the microbial community. We investigated correlations between measures of CFE, DNA extraction yield, qPCR-based gene copy numbers for Bacteria, Fungi, and Archaea, phospholipid fatty acid analysis, and direct cell counts to determine their potential for use as proxies for microbial biomass. As our ultimate goal is to develop reliable, more informative, and faster methods of predicting microbial biomass for use in models, we also examined basic soil physiochemical characteristics, including texture, organic matter content, and pH, to identify multi-factor predictive correlations with one or more measures of the microbial community. Our work will have application to both microbial ecology studies and the next generation of process and earth system models.

  2. Mapping Global Ocean Surface Albedo from Satellite Observations: Models, Algorithms, and Datasets

    NASA Astrophysics Data System (ADS)

    Li, X.; Fan, X.; Yan, H.; Li, A.; Wang, M.; Qu, Y.

    2018-04-01

    Ocean surface albedo (OSA) is one of the important parameters in the surface radiation budget (SRB). It is usually considered a controlling factor of the heat exchange between the atmosphere and the ocean. The temporal and spatial dynamics of OSA determine the energy absorption of upper-level ocean water and influence oceanic currents, atmospheric circulations, and the transport of material and energy in the hydrosphere. Therefore, various parameterizations and models have been developed for describing the dynamics of OSA. However, it has been demonstrated that the currently available OSA datasets cannot fulfill the requirements of global climate change studies. In this study, we present a literature review on mapping global OSA from satellite observations. The models (parameterizations, the coupled ocean-atmosphere radiative transfer (COART) model, and the three-component ocean water albedo (TCOWA) model), algorithms (the estimation method based on reanalysis data, and the direct-estimation algorithm), and datasets (the cloud, albedo and radiation (CLARA) surface albedo product, the dataset derived with the TCOWA model, and the global land surface satellite (GLASS) phase-2 surface broadband albedo product) for OSA are discussed separately.
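
    One widely cited parameterization of the kind reviewed above relates direct-beam OSA to the cosine of the solar zenith angle alone (a Taylor et al.-style formula); it is shown here as an illustration of the parameterization approach, not as a recommended product.

        import numpy as np

        # A widely cited broadband OSA parameterization of the kind reviewed
        # above, depending only on the cosine of the solar zenith angle
        # (Taylor et al.-style form); shown as an illustration of the
        # approach rather than a recommended product.

        def ocean_albedo_direct(cos_zenith):
            mu = np.clip(cos_zenith, 0.0, 1.0)
            return 0.037 / (1.1 * mu**1.4 + 0.15)

        for theta in (0.0, 30.0, 60.0, 80.0):
            mu = np.cos(np.radians(theta))
            print(f"zenith {theta:4.1f} deg -> albedo {ocean_albedo_direct(mu):.3f}")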

  3. New Technique for Retrieving Liquid Water Path over Land using Satellite Microwave Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deeter, M.N.; Vivekanandan, J.

    2005-03-18

    We present a new methodology for retrieving liquid water path (LWP) over land using satellite microwave observations. As input, the technique exploits the Advanced Microwave Scanning Radiometer for the Earth Observing System (EOS) (AMSR-E) polarization-difference signals at 37 and 89 GHz. Regression analysis performed on model simulations indicates that, over variable atmospheric and surface conditions, the polarization-difference signals can be simply parameterized in terms of the surface emissivity polarization difference (Δε), surface temperature, LWP, and precipitable water vapor (PWV). The resulting polarization-difference parameterization (PDP) enables fast and direct (noniterative) retrievals of LWP with minimal requirements for ancillary data. Single- and dual-channel retrieval methods are described and demonstrated. Data gridding is used to reduce the effects of instrumental noise. The methodology is demonstrated using AMSR-E observations over the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site during a six-day period in November and December 2003. Single- and dual-channel retrieval results mostly agree with ground-based microwave retrievals of LWP to within approximately 0.04 mm.

  4. Two-dimensional angular energy spectrum of electrons accelerated by the ultra-short relativistic laser pulse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borovskiy, A. V.; Galkin, A. L.; Department of Physics of MBF, Pirogov Russian National Research Medical University, 1 Ostrovitianov Street, Moscow 117997

    A new method of calculating the energy spectra of accelerated electrons, based on parameterization by their initial coordinates, is proposed. The energy spectra of electrons accelerated by a Gaussian ultra-short relativistic laser pulse, at a selected angle to the axis of the optical system focusing the laser pulse into a low-density gas, are calculated theoretically. A two-peak structure of the electron energy spectrum is obtained. The reasons for its appearance are discussed, as well as the applicability of other models of the laser field.

  5. Assimilation of MODIS and VIIRS AOD to improve aerosols forecasts with FV3-GOCART

    NASA Astrophysics Data System (ADS)

    Pagowski, M.

    2017-12-01

    In 2016 NOAA chose the FV3 dynamical core as the basis for its future global modeling system. We present an implementation of an aerosol module in the FV3 model and its assimilation framework. The parameterization of aerosols is based on the GOCART scheme. The assimilation methodology relies on hybrid 3D-Var and EnKF methods. Aerosol observations include aerosol optical depth at 550 nm from the VIIRS satellite. Results and an evaluation of the system against independent observations and NASA's MERRA-2 are presented.

  6. The effects of ground hydrology on climate sensitivity to solar constant variations

    NASA Technical Reports Server (NTRS)

    Chou, S. H.; Curran, R. J.; Ohring, G.

    1979-01-01

    The effects of two different evaporation parameterizations on the climate sensitivity to solar constant variations are investigated using a zonally averaged climate model. The model is based on a two-level quasi-geostrophic zonally averaged annual mean model. One of the evaporation parameterizations tested is a nonlinear formulation with the Bowen ratio determined by the predicted vertical temperature and humidity gradients near the earth's surface. The other is a linear formulation in which the Bowen ratio is essentially determined by a prescribed linear coefficient.
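
    To make the contrast concrete, the sketch below computes the Bowen ratio from near-surface gradients (the nonlinear route) and partitions available energy into latent and sensible heat flux; the psychrometric constant and all input values are illustrative assumptions, not the paper's model settings.

        # Psychrometric constant (hPa/K); inputs are illustrative near-surface values.
        GAMMA = 0.66

        def bowen_nonlinear(t_sfc, t_air, e_sfc, e_air):
            """Bowen ratio from predicted temperature and humidity gradients."""
            return GAMMA * (t_sfc - t_air) / (e_sfc - e_air)

        def partition_fluxes(available_energy, bowen):
            """Split available energy (W/m^2) into latent (LE) and sensible (H) flux."""
            le = available_energy / (1.0 + bowen)
            return le, bowen * le

        B = bowen_nonlinear(t_sfc=300.0, t_air=298.0, e_sfc=35.0, e_air=28.0)
        print(partition_fluxes(150.0, B))    # nonlinear formulation
        print(partition_fluxes(150.0, 0.5))  # linear case: prescribed Bowen ratio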

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachan, John

    Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.

  8. Using prediction uncertainty analysis to design hydrologic monitoring networks: Example applications from the Great Lakes water availability pilot project

    USGS Publications Warehouse

    Fienen, Michael N.; Doherty, John E.; Hunt, Randall J.; Reeves, Howard W.

    2010-01-01

    The importance of monitoring networks for resource-management decisions is becoming more recognized, in both theory and application. Quantitative computer models provide a science-based framework to evaluate the efficacy and efficiency of existing and possible future monitoring networks. In the study described herein, two suites of tools were used to evaluate the worth of new data for specific predictions, which in turn can support efficient use of resources needed to construct a monitoring network. The approach evaluates the uncertainty of a model prediction and, by using linear propagation of uncertainty, estimates how much uncertainty could be reduced if the model were calibrated with additional information (increased a priori knowledge of parameter values or new observations). The theoretical underpinnings of the two suites of tools addressing this technique are compared, and their application to a hypothetical model based on a local model inset into the Great Lakes Water Availability Pilot model is described. Results show that meaningful guidance for monitoring network design can be obtained by using the methods explored. The validity of this guidance depends substantially on the parameterization as well; hence, parameterization must be considered not only when designing the parameter-estimation paradigm but also, importantly, when designing the prediction-uncertainty paradigm.
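
    The linear-propagation step can be sketched in a few lines: given a Jacobian of observation sensitivities, prior parameter variances, and a prediction sensitivity vector, the first-order posterior prediction variance is computed with and without a candidate observation. The matrices below are random stand-ins, and this is a generic first-order scheme in the spirit of such data-worth tools, not the code of either suite.

        import numpy as np

        def prediction_variance(J, s, obs_var, prior_var):
            """First-order posterior variance of a prediction with sensitivity s."""
            P = np.linalg.inv(J.T @ np.diag(1.0 / obs_var) @ J
                              + np.diag(1.0 / prior_var))  # posterior parameter cov.
            return s @ P @ s

        rng = np.random.default_rng(0)
        J = rng.normal(size=(5, 3))     # sensitivities of 5 existing observations
        s = np.array([1.0, -0.5, 2.0])  # prediction sensitivity to the 3 parameters
        base = prediction_variance(J, s, np.ones(5), np.full(3, 4.0))

        # Worth of a candidate monitoring point = prediction variance it removes.
        j_new = rng.normal(size=(1, 3))
        added = prediction_variance(np.vstack([J, j_new]), s,
                                    np.ones(6), np.full(3, 4.0))
        print(f"variance reduction from new observation: {base - added:.4f}")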

  9. Aerodynamic Design of Complex Configurations Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2003-01-01

    The objective of this paper is to present the development of an optimization capability for the Cartesian inviscid-flow analysis package of Aftosmis et al. We evaluate and characterize the following modules within the new optimization framework: (1) a component-based geometry parameterization approach using a CAD solid representation and the CAPRI interface; and (2) the use of Cartesian methods in the development of automated optimization tools, including a genetic algorithm and a gradient-based algorithm. The discussion and investigations focus on several real-world problems of the optimization process. We examine the architectural issues associated with the deployment of a CAD-based design approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute nodes. In addition, we study the influence of noise on the performance of optimization techniques, and the overall efficiency of the optimization process for aerodynamic design of complex three-dimensional configurations.

  10. Mixed H(2)/H(sub infinity): Control with output feedback compensators using parameter optimization

    NASA Technical Reports Server (NTRS)

    Schoemig, Ewald; Ly, Uy-Loi

    1992-01-01

    Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H(sub 2)/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H(sub 2)/H-infinity control problem. The approach developed in this research is based on a finite-time cost functional that casts an H-infinity-bound control problem in an H(sub 2)-optimization setting. The goal is to define a time-domain cost function that optimizes the H(sub 2)-norm of a system subject to an H-infinity constraint function.

  11. Mixed H2/H(infinity)-Control with an output-feedback compensator using parameter optimization

    NASA Technical Reports Server (NTRS)

    Schoemig, Ewald; Ly, Uy-Loi

    1992-01-01

    Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H(sub 2)/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H(sub 2)/H-infinity control problem. The approach developed in this research is based on a finite-time cost functional that casts an H-infinity-bound control problem in an H(sub 2)-optimization setting. The goal is to define a time-domain cost function that optimizes the H(sub 2)-norm of a system subject to an H-infinity constraint function.
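
    For reference, the mixed-norm problem treated in these companion reports can be stated abstractly as minimizing the H(sub 2) norm of one closed-loop transfer function while bounding the H-infinity norm of another; a generic statement in our notation (not the authors'):

        \min_{K \ \text{stabilizing}} \; \| T_{z_2 w}(K) \|_2
        \quad \text{subject to} \quad
        \| T_{z_\infty w}(K) \|_\infty < \gamma

    The finite-time cost functional described above embeds the H-infinity constraint in an H(sub 2)-type quadratic objective, so that standard parameter-optimization machinery can be applied to the output-feedback compensator.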

  12. Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.

    PubMed

    Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen

    2014-01-01

    Current network intrusion detection systems lack adaptability to frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used, with decision stumps as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.
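
    The weak-classifier machinery is easiest to exhibit in its batch form; the sketch below trains classical AdaBoost with decision stumps on synthetic data as a stand-in for the paper's online variant (the online version updates the same quantities sample by sample).

        import numpy as np

        def train_stump(X, y, w):
            """Best single-feature threshold under weights w; labels in {-1, +1}."""
            best = (np.inf, 0, 0.0, 1)            # (error, feature, threshold, sign)
            for f in range(X.shape[1]):
                for t in np.unique(X[:, f]):
                    for sign in (1, -1):
                        pred = np.where(X[:, f] <= t, sign, -sign)
                        err = w[pred != y].sum()
                        if err < best[0]:
                            best = (err, f, t, sign)
            return best

        def adaboost(X, y, rounds=10):
            w = np.full(len(y), 1.0 / len(y))
            ensemble = []
            for _ in range(rounds):
                err, f, t, sign = train_stump(X, y, w)
                alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
                pred = np.where(X[:, f] <= t, sign, -sign)
                w *= np.exp(-alpha * y * pred)    # up-weight misclassified samples
                w /= w.sum()
                ensemble.append((alpha, f, t, sign))
            return ensemble

        def predict(ensemble, X):
            return np.sign(sum(a * np.where(X[:, f] <= t, s, -s)
                               for a, f, t, s in ensemble))

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 4))
        y = np.where(X[:, 0] + X[:, 2] > 0, 1, -1)
        print("training accuracy:", (predict(adaboost(X, y), X) == y).mean())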

  13. Pion, Kaon, Proton and Antiproton Production in Proton-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Blattnig, Steve R.

    2008-01-01

    Inclusive pion, kaon, proton, and antiproton production from proton-proton collisions is studied at a variety of proton energies. Various available parameterizations of Lorentz-invariant differential cross sections as a function of transverse momentum and rapidity are compared with experimental data. The Badhwar and Alper parameterizations are moderately satisfactory for charged pion production. The Badhwar parameterization provides the best fit for charged kaon production. For proton production, the Alper parameterization is best, and for antiproton production the Carey parameterization works best. However, no parameterization is able to fully account for all the data.

  14. Parameterization, sensitivity analysis, and inversion: an investigation using groundwater modeling of the surface-mined Tivoli-Guidonia basin (Metropolitan City of Rome, Italy)

    NASA Astrophysics Data System (ADS)

    La Vigna, Francesco; Hill, Mary C.; Rossetto, Rudy; Mazza, Roberto

    2016-09-01

    With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that methods that start with simple models and use computationally frugal model analysis methods remain valuable in any toolbox of model development methods. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy) where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which make the repeated analyses and attendant insights possible. The success of a model development design can be measured by insights attained and demonstrated model accuracy relevant to predictions. Example insights were obtained: (1) A long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) The dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20 % less pumped water, but would require installing newly positioned wells and cooperation between mine owners.

  15. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    PubMed

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
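
    A toy version of the incremental assignment at the heart of such clustering: each arriving trajectory feature vector either joins an existing cluster or opens a new one, with Chinese-restaurant-process-style weights and a spherical Gaussian likelihood. This is a hard-assignment caricature of the idea, not the paper's tDPMM; all constants are invented.

        import numpy as np

        ALPHA, SIGMA = 1.0, 0.5   # concentration parameter, component std (assumed)

        def assign(x, clusters):
            """Assign x to the best cluster, or create a new one (MAP-style)."""
            n_total = sum(len(c) for c in clusters)
            scores = []
            for members in clusters:
                mu = np.mean(members, axis=0)
                lik = np.exp(-np.sum((x - mu) ** 2) / (2 * SIGMA ** 2))
                scores.append(len(members) / (n_total + ALPHA) * lik)
            scores.append(ALPHA / (n_total + ALPHA) * 0.05)  # crude base-measure mass
            k = int(np.argmax(scores))
            if k == len(clusters):
                clusters.append([x])   # new cluster identified online
            else:
                clusters[k].append(x)
            return k

        rng = np.random.default_rng(2)
        stream = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
        clusters = [[stream[0]]]
        for x in stream[1:]:
            assign(x, clusters)
        print("clusters found:", len(clusters))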

  16. Approaches for Subgrid Parameterization: Does Scaling Help?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-04-01

    Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, the scaling law has been slow to enter "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in the atmosphere and in the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of subgrid-scale variables in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line. The mode decomposition approach would also be the best framework for linking the traditional parameterizations with the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum. However, exploiting this knowledge in an operational parameterization is a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation; this problem is already hard enough. Looked at differently, the scaling law is a very concise way of characterizing many subgrid-scale variabilities in systems. We may even argue that the scaling law can provide almost complete subgrid-scale information for constructing a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called "closure" in the parameterization problem and is known to be a tough problem. We should also realize that studies of scaling behavior tend to be statistical, in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes by a scaling law perfectly when the first few leading modes are specified?
    Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say, under an appropriate mode-decomposition procedure. However, RNG is an analytical tool: it is extremely hard to apply to real, complex geophysical systems. It appears that we still have a long way to go before we can begin to exploit the scaling law to construct operational subgrid parameterizations in an effective manner.
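
    Since the POD/EOF basis is central to the mode-decomposition category discussed above, the short example below extracts POD modes from a snapshot matrix via the SVD; the synthetic data and truncation level are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.linspace(0, 2 * np.pi, 200)
        x = np.linspace(0, 1, 64)
        # Two coherent structures plus noise, stored as snapshot columns.
        snapshots = (np.outer(np.sin(2 * np.pi * x), np.cos(3 * t))
                     + 0.3 * np.outer(np.sin(4 * np.pi * x), np.sin(7 * t))
                     + 0.05 * rng.normal(size=(64, 200)))

        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        energy = s ** 2 / np.sum(s ** 2)
        print("variance captured by first two modes:", energy[:2].sum())

        # Low-dimensional reconstruction from the leading modes (high truncation).
        k = 2
        recon = mean + U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]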

  17. Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?

    DOE PAGES

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel; ...

    2016-10-20

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud–aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities. The scale dependence of resolved vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  18. Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud–aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities. The scale dependence of resolved vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  19. Assessing the CAM5 Physics Suite in the WRF-Chem Model: Implementation, Resolution Sensitivity, and a First Evaluation for a Regional Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Po-Lun; Rasch, Philip J.; Fast, Jerome D.

    A suite of physical parameterizations (deep and shallow convection, turbulent boundary layer, aerosols, cloud microphysics, and cloud fraction) from the global climate model Community Atmosphere Model version 5.1 (CAM5) has been implemented in the regional model Weather Research and Forecasting with chemistry (WRF-Chem). A downscaling modeling framework with consistent physics has also been established, in which both global and regional simulations use the same emissions and surface fluxes. The WRF-Chem model with the CAM5 physics suite is run at multiple horizontal resolutions over a domain encompassing the northern Pacific Ocean, northeast Asia, and northwest North America for April 2008, when the ARCTAS, ARCPAC, and ISDAC field campaigns took place. These simulations are evaluated against field campaign measurements, satellite retrievals, and ground-based observations, and are compared with simulations that use a set of common WRF-Chem parameterizations. This manuscript describes the implementation of the CAM5 physics suite in WRF-Chem, provides an overview of the modeling framework and an initial evaluation of the simulated meteorology, clouds, and aerosols, and quantifies the resolution dependence of the cloud and aerosol parameterizations. We demonstrate that some of the CAM5 biases, such as high estimates of cloud susceptibility to aerosols and the underestimation of aerosol concentrations in the Arctic, can be reduced simply by increasing horizontal resolution. We also show that the CAM5 physics suite performs similarly to a set of parameterizations commonly used in WRF-Chem, but produces higher ice and liquid water condensate amounts and near-surface black carbon concentrations. Further evaluations using other mesoscale model parameterizations and other case studies are needed to determine whether one set of parameterizations consistently agrees better with observations.

  20. Perspective: Ab initio force field methods derived from quantum mechanics

    NASA Astrophysics Data System (ADS)

    Xu, Peng; Guidez, Emilie B.; Bertoni, Colleen; Gordon, Mark S.

    2018-03-01

    It is often desirable to accurately and efficiently model the behavior of large molecular systems in the condensed phase (thousands to tens of thousands of atoms) over long time scales (from nanoseconds to milliseconds). In these cases, ab initio methods are difficult due to the increasing computational cost with the number of electrons. A more computationally attractive alternative is to perform the simulations at the atomic level using a parameterized function to model the electronic energy. Many empirical force fields have been developed for this purpose. However, the functions that are used to model interatomic and intermolecular interactions contain many fitted parameters obtained from selected model systems, and such classical force fields cannot properly simulate important electronic effects. Furthermore, while such force fields are computationally affordable, they are not reliable when applied to systems that differ significantly from those used in their parameterization. They also cannot provide the information necessary to analyze the interactions that occur in the system, making the systematic improvement of the functional forms that are used difficult. Ab initio force field methods aim to combine the merits of both types of methods. The ideal ab initio force fields are built on first principles and require no fitted parameters. Ab initio force field methods surveyed in this perspective are based on fragmentation approaches and intermolecular perturbation theory. This perspective summarizes their theoretical foundation, key components in their formulation, and discusses key aspects of these methods such as accuracy and formal computational cost. The ab initio force fields considered here were developed for different targets, and this perspective also aims to provide a balanced presentation of their strengths and shortcomings. Finally, this perspective suggests some future directions for this actively developing area.
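
    As a concrete example of a "parameterized function" with fitted parameters, the classic Lennard-Jones pair potential is sketched below with commonly quoted argon values; this is a generic empirical force-field term for contrast, not one of the ab initio force fields surveyed in the perspective.

        import numpy as np

        # Common argon values: well depth (kJ/mol) and size parameter (Angstrom).
        EPS, SIG = 0.996, 3.405

        def lj_energy(positions):
            """Total pairwise Lennard-Jones energy of a set of atoms."""
            e = 0.0
            n = len(positions)
            for i in range(n):
                for j in range(i + 1, n):
                    r = np.linalg.norm(positions[i] - positions[j])
                    e += 4 * EPS * ((SIG / r) ** 12 - (SIG / r) ** 6)
            return e

        trimer = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [1.9, 3.3, 0.0]])
        print(f"E = {lj_energy(trimer):.3f} kJ/mol")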

  1. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another, along with a truth model, for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results to construct a polynomial curve that better represents the output data. The methods are compared against a parameter sweep and a distribution propagation, where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared based on the time required to evaluate each model, where the meta-model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with reduced simulations using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to exhaustive sampling for the majority of methods.
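
    To give a taste of the sensitivity machinery, the sketch below implements first-order dual numbers, the simpler cousin of the hyper-dual numbers used in the report (hyper-duals carry a second perturbation part so that exact second derivatives are also available); the response function is an arbitrary stand-in, not the Brake-Reuss model.

        # First-order dual number: carries a value and a derivative ("eps") part.
        class Dual:
            def __init__(self, re, eps=0.0):
                self.re, self.eps = re, eps
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.re + o.re, self.eps + o.eps)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)
            __rmul__ = __mul__

        def response(x):       # arbitrary stand-in for a model output
            return x * x * x + 2 * x

        x = Dual(1.5, 1.0)     # seed eps=1 to differentiate with respect to x
        y = response(x)
        print(y.re, y.eps)     # value 6.375 and exact sensitivity 8.75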

  2. An analytical model for subsurface irradiance and remote sensing reflectance in deep and shallow case-2 waters.

    PubMed

    Albert, A; Mobley, C

    2003-11-03

    Subsurface remote sensing signals, represented by the irradiance reflectance and the remote sensing reflectance, were investigated. The present study is based on simulations with the radiative transfer program Hydrolight, using optical properties of Lake Constance (German: Bodensee) based on in-situ measurements of the water constituents and the bottom characteristics. Analytical equations are derived for the irradiance reflectance and remote sensing reflectance for deep and shallow water applications. The inputs of the parameterization are the inherent optical properties of the water: absorption a(lambda) and backscattering bb(lambda). Additionally, the solar zenith angle theta_s, the viewing angle theta_v, and the surface wind speed u are considered. For shallow water applications, the bottom albedo RB and the bottom depth zB are included in the parameterizations. The result is a complete set of analytical equations for the remote sensing signals R and Rrs in deep and shallow waters, with an accuracy better than 4%. In addition, parameterizations of apparent optical properties were derived for the upward and downward diffuse attenuation coefficients Ku and Kd.
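
    The structure of such a parameterization can be sketched as follows: reflectance as a polynomial in the single backscattering albedo omega_b = bb/(a + bb), with a geometric factor and, in shallow water, an attenuated bottom term. All coefficients below are placeholders, not the fitted values of the paper.

        import numpy as np

        # Placeholder polynomial coefficients; the paper fits its own values.
        P = [0.05, 0.10, -0.20, 0.40]

        def r_deep(a, bb, theta_s_deg=30.0):
            """Deep-water reflectance: polynomial in omega_b with a sun-angle factor."""
            wb = bb / (a + bb)
            poly = sum(p * wb ** (i + 1) for i, p in enumerate(P))
            return poly * (1.0 + 0.1 * (1.0 / np.cos(np.radians(theta_s_deg)) - 1.0))

        def r_shallow(a, bb, r_bottom, z_bottom, kd=0.3):
            """Schematic shallow form: water column plus attenuated bottom albedo."""
            damp = np.exp(-2.0 * kd * z_bottom)  # kd should itself depend on a, bb
            return r_deep(a, bb) * (1.0 - damp) + r_bottom * damp

        print(r_deep(a=0.5, bb=0.02))
        print(r_shallow(a=0.5, bb=0.02, r_bottom=0.2, z_bottom=3.0))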

  3. Determination of the 4D-Tropospheric Water Vapor Distribution by GPS for the Assimilation into Numerical Weather Prediction Models

    NASA Astrophysics Data System (ADS)

    Perler, D.; Geiger, A.; Rothacher, M.

    2011-12-01

    Water vapor is involved in many atmospheric processes and is therefore a crucial quantity in numerical weather prediction (NWP). Recent flood events in Switzerland have pointed out several deficiencies in the planning and prediction methods used for flood risk mitigation. Investigations have shown that one of the limiting factors in forecasting such events with NWP models is insufficient knowledge of the water vapor distribution in the atmosphere. Global Navigation Satellite System (GNSS) ground-based tomography is a technique to monitor the 4D distribution of water vapor in the troposphere and has the potential to considerably improve the initial water vapor field used in NWP. We developed a GNSS tomography software called AWATOS-2, which is based on the Kalman filter technique and provides different parameterizations of the tropospheric wet refractivity field (Perler et al., 2010; Perler et al., 2011). The software can be used for the assimilation of different observations, such as GNSS zero-differences, GNSS double-differences, and any kind of point observations (e.g. balloons, aircraft). In this talk, we present the results of a long-term study in which GPS double-difference delays have been processed. The tomographic solutions have been investigated in view of their assimilation into local NWP models. The data set comprises observations from 46 GPS stations collected during 1 year. The core area of the investigation is located in Central Europe. We analyzed the performance of different voxel parameterizations used in the tomographic reconstruction of the troposphere and developed a new bias correction model that minimizes systematic differences. The correction model reduces the root-mean-square error (RMSE) with respect to the NWP model from 4.6 ppm to 3.0 ppm. After bias correction, high-elevation stations still show high RMSEs. In the presentation, we will discuss the treatment of such stations in terms of assimilation into NWP models and will show how sophisticated voxel parameterizations improve the accuracy. Perler, D.; Hurter, F.; Brockmann, E.; Leuenberger, D.; Ruffieux, D.; Geiger, A. and Rothacher, M. (2010). In Proceedings of the 7th Management Committee (MC7) and Working Group (WG) Meeting, Cologne (Germany), 8 pp. Perler, D.; Geiger, A. and Hurter, F. (2011). 4D GPS water vapor tomography: new parameterized approaches. J. Geodesy 85(8), pp. 539-550, DOI 10.1007/s00190-011-0454-2.
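
    The Kalman-filter core of such a tomography system reduces to the standard measurement update, where the observation operator integrates voxel wet refractivity along each slant path. The geometry, weights, and numbers below are invented for illustration and are unrelated to AWATOS-2 internals.

        import numpy as np

        def kalman_update(x, P, z, H, R):
            """Standard KF measurement update: posterior state and covariance."""
            S = H @ P @ H.T + R                  # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
            x_post = x + K @ (z - H @ x)
            P_post = (np.eye(len(x)) - K @ H) @ P
            return x_post, P_post

        n_vox = 8
        x = np.full(n_vox, 40.0)                 # prior wet refractivity (ppm)
        P = np.eye(n_vox) * 25.0
        H = np.array([[0.5] * 4 + [0.0] * 4,     # two rays, each crossing 4 voxels;
                      [0.0] * 4 + [0.5] * 4])    # weights = path length per voxel
        R = np.eye(2) * 0.1
        z = np.array([85.0, 78.0])               # observed slant delays (arbitrary)
        x_new, P_new = kalman_update(x, P, z, H, R)
        print(x_new)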

  4. Optimization of Composite Structures with Curved Fiber Trajectories

    NASA Astrophysics Data System (ADS)

    Lemaire, Etienne; Zein, Samih; Bruyneel, Michael

    2014-06-01

    This paper studies the problem of optimizing composite shells manufactured using Automated Tape Layup (ATL) or Automated Fiber Placement (AFP) processes. The optimization procedure relies on a new approach to generating equidistant fiber trajectories based on the Fast Marching Method. Starting with a (possibly curved) reference fiber direction defined on a (possibly curved) meshed surface, the new method determines the fiber orientations resulting from a uniform-thickness layup. The design variables are the parameters defining the position and the shape of the reference curve, which results in very few design variables. Thanks to this efficient parameterization, numerical applications to maximum-stiffness optimization are presented. The shape of the design space is discussed with regard to local and global optimal solutions.

  5. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
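
    Of the four algorithms listed, Matching Pursuit is the simplest to sketch: greedily project the residual onto the dictionary and subtract the best-matching atom. The random dictionary below stands in for the wavelet-packet dictionary used in the paper.

        import numpy as np

        def matching_pursuit(signal, D, n_atoms=10):
            """D has unit-norm atoms as columns; returns coefficients and residual."""
            residual = signal.copy()
            coeffs = np.zeros(D.shape[1])
            for _ in range(n_atoms):
                corr = D.T @ residual
                k = int(np.argmax(np.abs(corr)))     # most correlated atom
                coeffs[k] += corr[k]
                residual = residual - corr[k] * D[:, k]
            return coeffs, residual

        rng = np.random.default_rng(4)
        D = rng.normal(size=(64, 256))
        D /= np.linalg.norm(D, axis=0)               # normalize atoms
        signal = 2.0 * D[:, 3] - 1.5 * D[:, 100]     # sparse synthetic image vector
        coeffs, res = matching_pursuit(signal, D, n_atoms=5)
        print("residual norm:", np.linalg.norm(res))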

  6. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  7. Alternatives for jet engine control

    NASA Technical Reports Server (NTRS)

    Sain, M. K.

    1981-01-01

    Research centered on basic topics in the modeling and feedback control of nonlinear dynamical systems is reported. Of special interest were the following topics: (1) the role of series descriptions, especially insofar as they relate to questions of scheduling, in the control of gas turbine engines; (2) the use of algebraic tensor theory as a technique for parameterizing such descriptions; (3) the relationship between tensor methodology and other parts of the nonlinear literature; (4) the improvement of interactive methods for parameter selection within a tensor viewpoint; and (5) study of feedback gain representation as a counterpart to these modeling and parameterization ideas.

  8. Sensitivity of Simulated Warm Rain Formation to Collision and Coalescence Efficiencies, Breakup, and Turbulence: Comparison of Two Bin-Resolved Numerical Models

    NASA Technical Reports Server (NTRS)

    Fridlind, Ann; Seifert, Axel; Ackerman, Andrew; Jensen, Eric

    2004-01-01

    Numerical models that resolve cloud particles into discrete mass size distributions on an Eulerian grid provide a uniquely powerful means of studying the closely coupled interaction of aerosols, cloud microphysics, and transport that determines cloud properties and evolution. However, such models require many experimentally derived parameterizations in order to properly represent the complex interactions of droplets within turbulent flow. Many of these parameterizations remain poorly quantified, and the numerical methods for solving the equations for the temporal evolution of the mass size distribution can also vary considerably in terms of efficiency and accuracy. In this work, we compare results from two size-resolved microphysics models that employ various widely used parameterizations and numerical solution methods for several aspects of stochastic collection.

  9. A large-eddy simulation based power estimation capability for wind farms over complex terrain

    NASA Astrophysics Data System (ADS)

    Senocak, I.; Sandusky, M.; Deleon, R.

    2017-12-01

    There has been increasing interest in predicting wind fields over complex terrain at the micro-scale for resource assessment, turbine siting, and power forecasting. These capabilities are made possible by advancements in computational speed from a new generation of computing hardware, numerical methods, and physics modelling. The micro-scale wind prediction model presented in this work is based on the large-eddy simulation paradigm with surface-stress parameterization. The complex terrain is represented using an immersed-boundary method that takes into account the parameterization of the surface stresses. The governing equations of incompressible fluid flow are solved using a projection method with second-order accurate schemes in space and time. We use actuator disk models with rotation to simulate the influence of turbines on the wind field. Data on power production from individual turbines are mostly restricted because of the proprietary nature of the wind energy business; most studies report the percentage drop of power relative to power from the first row. There have been different approaches to predicting power production: some studies simply report the available wind power upstream, some estimate power production using power curves available from turbine manufacturers, and some estimate power as torque multiplied by rotational speed. In the present work, we propose a black-box approach that considers a control volume around a turbine and estimates the power extracted from the turbine based on the conservation of energy principle. We applied our wind power prediction capability to wind farms over flat terrain, such as the wind farm in Mower County, Minnesota and the Horns Rev offshore wind farm in Denmark. The results from these simulations are in good agreement with published data. We also estimate power production from a hypothetical wind farm in a complex terrain region and identify potential zones suitable for wind power production.
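
    The control-volume idea can be caricatured as a net kinetic-energy flux budget across a box around the turbine. The sketch below assumes uniform inflow and outflow profiles and neglects pressure work and mass-flux imbalance, so it is a cartoon of the principle rather than the paper's implementation; all numbers are invented.

        import math

        RHO = 1.225   # air density (kg/m^3)

        def extracted_power(u_in, u_out, area):
            """Net kinetic-energy flux through the control volume (W)."""
            return 0.5 * RHO * area * (u_in ** 3 - u_out ** 3)

        # 8 m/s inflow slowing to 6.5 m/s across a box spanning a 90 m rotor.
        area = math.pi * 45.0 ** 2
        print(f"P ~ {extracted_power(8.0, 6.5, area) / 1e6:.2f} MW")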

  10. TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    DOE PAGES

    Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...

    2015-04-16

    Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators: parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior of the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
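
    The discrete-event core of such an application simulator can be illustrated in a few lines: stages of the algorithm become time-stamped events on a priority queue, and computation is replaced by the passage of simulated time. The stage names and durations below are invented, not TADSim's actual model.

        import heapq

        def run(stage_durations, n_cycles=3):
            """Alternate two abstract stages for n_cycles; return end time and log."""
            t, queue, log = 0.0, [], []
            heapq.heappush(queue, (0.0, 0, "md_block"))     # (time, cycle, stage)
            while queue:
                t, cycle, stage = heapq.heappop(queue)
                log.append((t, cycle, stage))
                if stage == "md_block":
                    heapq.heappush(queue,
                                   (t + stage_durations["md_block"], cycle, "nudge"))
                elif stage == "nudge" and cycle + 1 < n_cycles:
                    heapq.heappush(queue,
                                   (t + stage_durations["nudge"], cycle + 1, "md_block"))
            return t, log

        end, events = run({"md_block": 5.0, "nudge": 0.5})
        print(f"simulated wall time: {end}")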

  11. Obtaining sub-daily new snow density from automated measurements in high mountain regions

    NASA Astrophysics Data System (ADS)

    Helfricht, Kay; Hartl, Lea; Koch, Roland; Marty, Christoph; Olefs, Marc

    2018-05-01

    The density of new snow is operationally monitored by meteorological or hydrological services at daily time intervals, or occasionally measured in local field studies. However, meteorological conditions, and thus the settling of the freshly deposited snow, rapidly alter the new snow density until measurement. Physically based snow models and nowcasting applications make use of hourly weather data to determine the water equivalent of the snowfall and the snow depth. In previous studies, a number of empirical parameterizations were developed to approximate the new snow density from meteorological parameters. These parameterizations are largely based on local in situ measurements of new snow. In this study, a data set of automated snow measurements at four stations located in the European Alps is analysed for several winter seasons. Hourly new snow densities are calculated from the height of new snow and the water equivalent of snowfall. Considering the settling of the new snow and the old snowpack, the average hourly new snow density is 68 kg m-3, with a standard deviation of 9 kg m-3. Seven existing parameterizations for estimating new snow densities were tested against these data, and most calculations overestimate the hourly automated measurements. Two of the tested parameterizations were capable of simulating the low new snow densities observed at sheltered inner-alpine stations. The observed variability in new snow density from the automated measurements could not be described with satisfactory statistical significance by any of the investigated parameterizations. Applying simple linear regressions between new snow density and wet-bulb temperature to the measured data resulted in significant relationships (r2 > 0.5 and p ≤ 0.05) for single periods at individual stations only. Higher new snow densities were calculated for the highest-elevation and most wind-exposed station location. Whereas snow measurements using ultrasonic devices and snow pillows are appropriate for calculating station-mean new snow densities, we recommend instruments with higher accuracy, e.g. optical devices, for more reliable investigations of the variability of new snow densities at sub-daily intervals.
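
    A linear fit of the kind mentioned (new snow density against wet-bulb temperature) takes only a few lines; the station data below are synthetic stand-ins for the automated measurements, not values from the study.

        import numpy as np

        t_wetbulb = np.array([-12.0, -9.5, -7.0, -5.2, -3.1, -1.5, -0.4])  # deg C
        density = np.array([52.0, 58.0, 61.0, 66.0, 72.0, 79.0, 86.0])     # kg m^-3

        slope, intercept = np.polyfit(t_wetbulb, density, 1)
        pred = slope * t_wetbulb + intercept
        ss_res = np.sum((density - pred) ** 2)
        ss_tot = np.sum((density - density.mean()) ** 2)
        print(f"rho_new = {slope:.2f} * T_w + {intercept:.1f}, "
              f"r^2 = {1 - ss_res / ss_tot:.2f}")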

  12. Modeling the MJO rain rates using parameterized large scale dynamics: vertical structure, radiation, and horizontal advection of dry air

    NASA Astrophysics Data System (ADS)

    Wang, S.; Sobel, A. H.; Nie, J.

    2015-12-01

    Two Madden-Julian Oscillation (MJO) events were observed during October and November 2011 in the equatorial Indian Ocean during the DYNAMO field campaign. Precipitation rates and large-scale vertical motion profiles derived from the DYNAMO northern sounding array are simulated in a small-domain cloud-resolving model using parameterized large-scale dynamics. Three parameterizations of large-scale dynamics are employed: the conventional weak temperature gradient (WTG) approximation, vertical-mode-based spectral WTG (SWTG), and damped gravity wave (DGW) coupling. The target temperature profiles and radiative heating rates are taken from a control simulation in which the large-scale vertical motion is imposed (rather than directly from observations), and the model itself is significantly modified from that used in previous work. These methodological changes lead to significant improvement in the results. Simulations using all three methods, with imposed time-dependent radiation and horizontal moisture advection, capture the time variations in precipitation associated with the two MJO events well. The three methods produce significant differences in the large-scale vertical motion profile, however. WTG produces the most top-heavy and noisy profiles, while DGW's is smoother, with a peak at midlevels. SWTG produces a smooth profile, somewhere between WTG and DGW, and in better agreement with observations than either of the others. Numerical experiments without horizontal advection of moisture suggest that this process significantly reduces the precipitation and suppresses the top-heaviness of the large-scale vertical motion during the MJO active phases, while experiments in which the effects of clouds on radiation are disabled indicate that cloud-radiative interaction significantly amplifies the MJO. Experiments using interactive radiation produce poorer agreement with observations than those with imposed time-varying radiative heating. Our results highlight the importance of both horizontal advection of moisture and cloud-radiative feedback to the dynamics of the MJO, as well as to its accurate simulation and prediction in models.
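
    For orientation, the conventional WTG method diagnoses the large-scale vertical velocity from the departure of the simulated temperature profile from the target profile; a simplified statement in our notation (actual implementations differ in detail):

        W_{\mathrm{WTG}}(z) = \frac{\theta(z) - \theta_{\mathrm{target}}(z)}
                                   {\tau \; \partial \theta_{\mathrm{target}} / \partial z}

    where tau is a relaxation time scale. SWTG applies the relaxation mode by vertical mode, while DGW instead couples the temperature anomaly to the vertical velocity through a damped gravity wave equation.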

  13. Assessing the performance of wave breaking parameterizations in shallow waters in spectral wave models

    NASA Astrophysics Data System (ADS)

    Lin, Shangfei; Sheng, Jinyu

    2017-12-01

    Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed to represent the depth-induced wave breaking process in spectral ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations are the representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations perform reasonably well in parameterizing depth-induced wave breaking in shallow waters, but each has its own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) has the drawback of underpredicting SWHs under locally generated wave conditions and overpredicting them under remotely generated wave conditions over flat bottoms. This drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15), but SA15 has relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization in which the breaker index depends on the normalized water depth in deep waters, similar to SA15. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, the new parameterization has the best performance, with an average scatter index of ∼8.2%, compared with the three best-performing existing parameterizations, whose average scatter indices lie between 9.2% and 13.6%.
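
    For reference, the BJ78 framework that most of the tested schemes build on limits random-wave heights by a depth-proportional maximum and determines the fraction of breaking waves Q_b implicitly (standard form, our transcription):

        H_{\max} = \gamma d, \qquad
        \frac{1 - Q_b}{\ln Q_b} = -\left( \frac{H_{\mathrm{rms}}}{H_{\max}} \right)^2

    where gamma is the breaker index and d the local water depth. The parameterizations compared here differ mainly in how gamma and Q_b are specified; SA15 and the new scheme make gamma a function of normalized depth and, in the new scheme, a nonlinear function of local bottom slope.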

  14. Thermodynamic integration based on classical atomistic simulations to determine the Gibbs energy of condensed phases: Calculation of the aluminum-zirconium system

    NASA Astrophysics Data System (ADS)

    Harvey, J.-P.; Gheribi, A. E.; Chartrand, P.

    2012-12-01

    In this work, an in silico procedure to generate a fully coherent set of thermodynamic properties obtained from classical molecular dynamics (MD) and Monte Carlo (MC) simulations is proposed. The procedure is applied to the Al-Zr system because of its importance in the development of high strength Al-Li alloys and of bulk metallic glasses. Cohesive energies of the studied condensed phases of the Al-Zr system (the liquid phase, the fcc solid solution, and various orthorhombic stoichiometric compounds) are calculated using the modified embedded atom model (MEAM) in the second-nearest-neighbor formalism (2NN). The Al-Zr MEAM-2NN potential is parameterized in this work using ab initio and experimental data found in the literature for the AlZr3-L12 structure, while its predictive ability is confirmed for several other solid structures and for the liquid phase. The thermodynamic integration (TI) method is implemented in a general MC algorithm in order to evaluate the absolute Gibbs energy of the liquid and the fcc solutions. The entropy of mixing calculated from the TI method, combined to the enthalpy of mixing and the heat capacity data generated from MD/MC simulations performed in the isobaric-isothermal/canonical (NPT/NVT) ensembles are used to parameterize the Gibbs energy function of all the condensed phases in the Al-rich side of the Al-Zr system in a CALculation of PHAse Diagrams (CALPHAD) approach. The modified quasichemical model in the pair approximation (MQMPA) and the cluster variation method (CVM) in the tetrahedron approximation are used to define the Gibbs energy of the liquid and the fcc solid solution respectively for their entire range of composition. Thermodynamic and structural data generated from our MD/MC simulations are used as input data to parameterize these thermodynamic models. A detailed analysis of the validity and transferability of the Al-Zr MEAM-2NN potential is presented throughout our work by comparing the predicted properties obtained from this formalism with available ab initio and experimental data for both liquid and solid phases.
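
    The thermodynamic integration step reduces to a one-dimensional quadrature of the ensemble average of dU/dlambda over the coupling parameter. The sketch below performs that quadrature with synthetic averages standing in for MC output; the actual study obtains these averages from MEAM-2NN simulations.

        import numpy as np

        lams = np.linspace(0.0, 1.0, 11)
        # Synthetic stand-ins for MC ensemble averages of dU/dlambda (kJ/mol).
        du_dlam = -3.0 + 4.0 * lams - 1.5 * lams ** 2

        # Delta F = integral_0^1 <dU/dlambda> dlambda  (trapezoidal quadrature).
        delta_F = np.sum(0.5 * (du_dlam[1:] + du_dlam[:-1]) * np.diff(lams))
        print(f"Delta F = {delta_F:.3f} kJ/mol")   # analytic value here: -1.500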

  15. The impact of structural uncertainty on cost-effectiveness models for adjuvant endocrine breast cancer treatments: the need for disease-specific model standardization and improved guidance.

    PubMed

    Frederix, Gerardus W J; van Hasselt, Johan G C; Schellens, Jan H M; Hövels, Anke M; Raaijmakers, Jan A M; Huitema, Alwin D R; Severens, Johan L

    2014-01-01

    Structural uncertainty relates to differences in model structure and parameterization. For many published health economic analyses in oncology, substantial differences in model structure exist, leading to differences in analysis outcomes and potentially impacting decision-making processes. The objectives of this analysis were (1) to identify differences in model structure and parameterization for cost-effectiveness analyses (CEAs) comparing tamoxifen and anastrozole for adjuvant breast cancer (ABC) treatment; and (2) to quantify the impact of these differences on analysis outcome metrics. The analysis consisted of four steps: (1) review of the literature to identify eligible CEAs; (2) definition and implementation of a base model structure, which included the core structural components of all identified CEAs; (3) definition and implementation of changes or additions in the base model structure or parameterization; and (4) quantification of the impact of changes in model structure or parameterization on the analysis outcome metrics life-years gained (LYG), incremental costs (IC) and the incremental cost-effectiveness ratio (ICER). Eleven CEAs comparing anastrozole and tamoxifen as ABC treatment were identified. The base model consisted of the following health states: (1) on treatment; (2) off treatment; (3) local recurrence; (4) metastatic disease; (5) death due to breast cancer; and (6) death due to other causes. The base model estimates for anastrozole versus tamoxifen for the LYG, IC and ICER were 0.263 years, €3,647 and €13,868/LYG, respectively. In the published models that were evaluated, differences in model structure included the addition of different recurrence health states and their associated transition rates. Differences in parameterization related to the incidences of recurrence, of local recurrence to metastatic disease, and of metastatic disease to death. The separate impact of these model components on the LYG ranged from 0.207 to 0.356 years, while incremental costs ranged from €3,490 to €3,714 and ICERs ranged from €9,804/LYG to €17,966/LYG. When we re-analyzed the published CEAs in our framework by including their respective model properties, the LYG ranged from 0.207 to 0.383 years, IC ranged from €3,556 to €3,731 and ICERs ranged from €9,683/LYG to €17,570/LYG. Differences in model structure and parameterization lead to substantial differences in analysis outcome metrics. This analysis supports the need for more guidance regarding structural uncertainty and for the use of standardized disease-specific models in health economic analyses of adjuvant endocrine breast cancer therapies. The approach developed in the current analysis could potentially serve as a template for further evaluations of structural uncertainty and the development of disease-specific models.
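
    A minimal Markov cohort model of the kind this review compares, condensed to three states with invented transition probabilities and an assumed cost difference, shows how LYG and an ICER fall out of two cohort traces; none of the numbers below are taken from the reviewed CEAs.

        import numpy as np

        def cohort_life_years(P, cycles=20):
            """Expected life-years: time spent in alive states over the horizon."""
            dist = np.array([1.0, 0.0, 0.0])     # states: on treatment, recurrence, dead
            ly = 0.0
            for _ in range(cycles):
                dist = dist @ P                  # one annual cycle
                ly += dist[:2].sum()             # alive states
            return ly

        P_tam = np.array([[0.90, 0.07, 0.03],    # tamoxifen arm (invented)
                          [0.00, 0.85, 0.15],
                          [0.00, 0.00, 1.00]])
        P_ana = np.array([[0.92, 0.055, 0.025],  # anastrozole arm (invented)
                          [0.00, 0.85, 0.15],
                          [0.00, 0.00, 1.00]])

        lyg = cohort_life_years(P_ana) - cohort_life_years(P_tam)
        incremental_cost = 3600.0                # assumed drug-cost difference (EUR)
        print(f"LYG = {lyg:.3f}, ICER = {incremental_cost / lyg:.0f} EUR/LYG")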

  16. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-05-01

    Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive set of objective evaluation metrics. Different from traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.

  17. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-11-01

    Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive set of objective evaluation metrics. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
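
    A toy version of the three steps, assuming SciPy is available: screen parameter sensitivity one at a time, choose a starting point for the sensitive subset from a coarse grid, then hand the reduced problem to the downhill simplex (Nelder-Mead) method. The skill function is an invented stand-in for the comprehensive evaluation metrics.

        import numpy as np
        from scipy.optimize import minimize

        def skill(params):
            """Stand-in for the evaluation metric (lower is better)."""
            a, b, c = params
            return (a - 1.2) ** 2 + 4 * (b - 0.4) ** 2 + 0.001 * c ** 2  # c barely matters

        x0 = np.array([2.0, 1.0, 5.0])

        # Step 1: one-at-a-time sensitivity screening.
        sens = [abs(skill(x0 + np.eye(3)[i] * 0.1) - skill(x0)) for i in range(3)]
        active = [i for i, s in enumerate(sens) if s > 0.01]  # drop insensitive params

        # Steps 2-3: coarse-grid start, then Nelder-Mead over the sensitive subset.
        def reduced(p):
            x = x0.copy()
            x[active] = p
            return skill(x)

        starts = [np.array([a, b]) for a in (0.0, 1.0, 2.0) for b in (0.0, 0.5, 1.0)]
        best_start = min(starts, key=reduced)
        res = minimize(reduced, best_start, method="Nelder-Mead")
        print("optimum for sensitive parameters:", res.x)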

  18. Bayesian parameter estimation for nonlinear modelling of biological pathways.

    PubMed

    Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang

    2011-01-01

    The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred format for representing the reaction rate in differential equation frameworks, due to their simple structure and their capability for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of their high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. We used the Runge-Kutta method to transform differential equations into difference equations, assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal-to-noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems. Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models for biological pathways. This method can be further extended to higher-order systems and thus provides a useful tool to analyze biological dynamics and extract information from temporal data.
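
    The sketch below illustrates the overall recipe on a toy pathway: an ODE with an embedded Hill term is discretized (a forward Euler step stands in for the Runge-Kutta transformation used in the paper) and two Hill parameters are recovered from noisy synthetic data with a random-walk Metropolis MCMC. Rates, priors and noise levels are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.1, 100
u = 1.0                                        # constant pathway input

def simulate(K, n, k_deg=0.3, x0=0.0):
    """Euler discretization of dx/dt = u^n / (K^n + u^n) - k_deg * x."""
    xs = np.empty(T)
    x = x0
    for t in range(T):
        x = x + dt * (u**n / (K**n + u**n) - k_deg * x)
        xs[t] = x
    return xs

# Synthetic "measurements" from true parameters K = 0.5, n = 2, plus noise.
data = simulate(0.5, 2.0) + rng.normal(0, 0.02, T)

def log_post(theta):
    K, n = theta
    if K <= 0 or n <= 0 or K > 10 or n > 10:   # flat prior on (0, 10]
        return -np.inf
    resid = data - simulate(K, n)
    return -0.5 * np.sum(resid**2) / 0.02**2   # Gaussian likelihood

# Random-walk Metropolis over the two Hill parameters.
theta = np.array([1.0, 1.0])
lp = log_post(theta)
samples = []
for it in range(5000):
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
print("posterior mean (K, n):", np.mean(samples[2500:], axis=0))
```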

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, May Wai San; Ovchinnikov, Mikhail; Wang, Minghuai

    Potential ways of parameterizing vertical turbulent fluxes of hydrometeors are examined using a high-resolution cloud-resolving model. The cloud-resolving model uses the Morrison microphysics scheme, which contains prognostic variables for rain, graupel, ice, and snow. A benchmark simulation of a deep convection case with a horizontal grid spacing of 250 m was carried out to evaluate three different ways of parameterizing the turbulent vertical fluxes of hydrometeors: an eddy-diffusion approximation, a quadrant-based decomposition, and a scaling method that accounts for within-quadrant (subplume) correlations. Results show that the down-gradient nature of the eddy-diffusion approximation tends to transport mass away from concentrated regions, whereas the benchmark simulation indicates that the vertical transport tends to move mass from below the level of the maximum to levels aloft. Unlike the eddy-diffusion approach, the quadrant-based decomposition is able to capture the signs of the flux gradient but underestimates the magnitudes. The scaling approach is shown to perform best by accounting for within-quadrant correlations, and improves the results for all hydrometeors except snow. A sensitivity study is performed to examine how vertical transport may affect the microphysics of the hydrometeors; the vertical transport of each hydrometeor type is artificially suppressed in each test. Results from the sensitivity tests show that cloud-droplet-related processes are most sensitive to suppressed rain or graupel transport. In particular, suppressing rain or graupel transport has a strong impact on the production of snow and ice aloft. Lastly, a viable subgrid-scale hydrometeor transport scheme in an assumed probability density function parameterization is discussed.

  20. Double-dictionary matching pursuit for fault extent evaluation of rolling bearing based on the Lempel-Ziv complexity

    NASA Astrophysics Data System (ADS)

    Cui, Lingli; Gong, Xiangyang; Zhang, Jianyu; Wang, Huaqing

    2016-12-01

    The quantitative diagnosis of rolling bearing fault severity is particularly crucial to making proper maintenance decisions. Targeting the fault features of rolling bearings, a novel double-dictionary matching pursuit (DDMP) for fault extent evaluation of rolling bearings based on the Lempel-Ziv complexity (LZC) index is proposed in this paper. In order to match the features of rolling bearing faults, an impulse time-frequency dictionary and a modulation dictionary are constructed to form the double-dictionary, using parameterized function models. A novel matching pursuit method is then proposed based on this new double-dictionary. Rolling bearing vibration signals with different fault sizes are decomposed and reconstructed by the DDMP. After noise reduction and signal reconstruction, the LZC index is introduced to realize the fault extent evaluation. Applications of this method to experimental fault signals of bearing outer races and inner races with different degrees of injury have shown that the proposed method can effectively realize fault extent evaluation.
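
    A small sketch of the LZC index used here for fault extent evaluation: the signal is binarized about its median and the normalized count of phrases in a simplified LZ76 parsing is reported. The synthetic impulse train standing in for a faulty-bearing signal, and the median thresholding choice, are illustrative assumptions.

```python
import numpy as np

def lz76_complexity(s):
    """Count phrases in a simplified LZ76 parsing of binary string s."""
    n, i, c = len(s), 0, 0
    while i < n:
        k = 1
        # grow the phrase while it has already occurred in the prefix
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

rng = np.random.default_rng(1)
sig = np.sin(np.linspace(0, 50 * np.pi, 4096))        # smooth baseline tone
fault = sig + 0.8 * (rng.random(4096) < 0.02)         # add sparse impulses
for name, x in [("baseline", sig), ("faulty  ", fault)]:
    med = np.median(x)
    b = ''.join('1' if v > med else '0' for v in x)   # median binarization
    c = lz76_complexity(b)
    print(name, "normalized LZC =", round(c * np.log2(len(b)) / len(b), 3))
```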

  1. Dimensionless parameterization of lidar for laser remote sensing of the atmosphere and its application to systems with SiPM and PMT detectors.

    PubMed

    Agishev, Ravil; Comerón, Adolfo; Rodriguez, Alejandro; Sicard, Michaël

    2014-05-20

    In this paper, we show a renewed approach to the generalized methodology for atmospheric lidar assessment, which uses dimensionless parameterization as a core component. It is based on a series of our previous works in which the problem of universal parameterization across many lidar technologies was described and analyzed from different points of view. The modernized dimensionless parameterization concept, applied to relatively new silicon photomultiplier detectors (SiPMs) and traditional photomultiplier (PMT) detectors for remote-sensing instruments, allows the lidar receiver performance to be predicted in the presence of sky background. The renewed approach can be widely used to evaluate a broad range of lidar system capabilities for a variety of lidar remote-sensing applications, as well as to serve as a basis for the selection of appropriate lidar system parameters for a specific application. Such a modernized methodology provides a generalized, uniform, and objective approach for the evaluation of a broad range of lidar types and systems (aerosol, Raman, DIAL) operating on different targets (backscatter or topographic) and under intense sky background conditions. It can be used within the lidar community to compare different lidar instruments.

  2. Modeling the interplay between sea ice formation and the oceanic mixed layer: Limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-02-01

    The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.

  3. Modelling the interplay between sea ice formation and the oceanic mixed layer: limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-04-01

    The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.

  4. Inclusion of Solar Elevation Angle in Land Surface Albedo Parameterization Over Bare Soil Surface.

    PubMed

    Zheng, Zhiyuan; Wei, Zhigang; Wen, Zhiping; Dong, Wenjie; Li, Zhenchao; Wen, Xiaohang; Zhu, Xian; Ji, Dong; Chen, Chen; Yan, Dongdong

    2017-12-01

    Land surface albedo is a significant parameter for maintaining the surface energy balance. Parameterizing bare soil surface albedo is also important for developing land surface process models that accurately reflect the diurnal variation characteristics and the mechanism of solar spectral radiation albedo on bare soil surfaces, and for understanding the relationships between climate factors and spectral radiation albedo. Using a data set of field observations, we conducted experiments to analyze the variation characteristics of land surface solar spectral radiation and the corresponding albedo over a typical Gobi bare soil underlying surface, and to investigate the relationships between the land surface solar spectral radiation albedo, solar elevation angle, and soil moisture. Based on simultaneous measurements of solar elevation angle and soil moisture, we propose a new two-factor parameterization scheme for spectral radiation albedo over bare soil underlying surfaces. The results of numerical simulation experiments show that the new parameterization scheme can depict the diurnal variation characteristics of bare soil surface albedo more accurately than previous schemes. Solar elevation angle is one of the most important factors for parameterizing bare soil surface albedo and must be considered in the parameterization scheme, especially in arid and semiarid areas with low soil moisture content. This study reveals the characteristics and mechanism of the diurnal variation of bare soil surface solar spectral radiation albedo and is helpful for developing land surface process models, weather models, and climate models.
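
    Since the abstract does not give the scheme's functional form, the sketch below shows only how a two-factor bare-soil albedo parameterization might be structured: an exponential dependence on solar elevation angle damped by a soil-moisture factor, both common modeling choices. The function name, coefficients and diurnal elevation proxy are hypothetical.

```python
import numpy as np

def bare_soil_albedo(elev_deg, soil_moisture,
                     a_min=0.18, a_amp=0.14, k=0.05, m_damp=0.5):
    """Hypothetical two-factor form; elev_deg in degrees, soil_moisture in m3/m3."""
    elevation_term = a_min + a_amp * np.exp(-k * elev_deg)   # brighter at low sun
    moisture_term = 1.0 - m_damp * np.clip(soil_moisture / 0.4, 0, 1)  # wetter = darker
    return elevation_term * moisture_term

hours = np.linspace(6, 18, 7)                    # local time of day
elev = 90 * np.sin(np.pi * (hours - 6) / 12)     # crude diurnal elevation proxy
print(np.round(bare_soil_albedo(elev, soil_moisture=0.05), 3))  # dry Gobi-like case
```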

  5. Development of numerical techniques for the estimation, modeling and prediction of thermodynamic and structural properties of metallic systems with strong chemical ordering

    NASA Astrophysics Data System (ADS)

    Harvey, Jean-Philippe

    In this work, the possibility to calculate and evaluate with a high degree of precision the Gibbs energy of complex multiphase equilibria, for which chemical ordering is explicitly and simultaneously considered in the thermodynamic description of solid (short-range order and long-range order) and liquid (short-range order) metallic phases, is studied. The cluster site approximation (CSA) and the cluster variation method (CVM) are implemented in a new minimization technique of the Gibbs energy of multicomponent and multiphase systems to describe the thermodynamic behaviour of metallic solid solutions showing strong chemical ordering. The modified quasichemical model in the pair approximation (MQMPA) is also implemented in the new minimization algorithm presented in this work to describe the thermodynamic behaviour of metallic liquid solutions. The constrained minimization technique implemented in this work consists of a sequential quadratic programming technique based on an exact Newton's method (i.e., the use of exact second derivatives in the determination of the Hessian of the objective function) combined with a line search method to identify a direction of sufficient decrease of the merit function. The implementation of a new algorithm to perform the constrained minimization of the Gibbs energy is justified by the difficulty of identifying, in specific cases, the correct multiphase assemblage of a system where the thermodynamic behaviour of the equilibrium phases is described by one of the previously quoted models using the FactSage software (e.g., solid_CSA+liquid_MQMPA; solid1_CSA+solid2_CSA). After a rigorous validation of the constrained Gibbs energy minimization algorithm using several assessed binary and ternary systems found in the literature, the CVM and the CSA models used to describe the energetic behaviour of metallic solid solutions present in systems with key industrial applications, such as the Cu-Zr and Al-Zr systems, are parameterized using fully consistent thermodynamic and structural data generated from a Monte Carlo (MC) simulator also implemented in the framework of this project. In this MC simulator, the modified embedded atom model in the second-nearest-neighbour formalism (MEAM-2NN) is used to describe the cohesive energy of each studied structure. A new Al-Zr MEAM-2NN interatomic potential needed to evaluate the cohesive energy of the condensed phases of this system is presented in this work. The thermodynamic integration (TI) method implemented in the MC simulator allows the evaluation of the absolute Gibbs energy of the considered solid or liquid structures. The original implementation of the TI method allowed us to evaluate theoretically, for the first time, all the thermodynamic mixing contributions (i.e., mixing enthalpy and mixing entropy contributions) of metallic liquids (Cu-Zr and Al-Zr) and of a solid solution (the face-centered cubic (FCC) Al-Zr solid solution) described by the MEAM-2NN. Thermodynamic and structural data obtained from MC and molecular dynamics simulations are then used to parameterize the CVM for the Al-Zr FCC solid solution and the MQMPA for the Al-Zr and Cu-Zr liquid phases, respectively. The extended thermodynamic study of these systems allows the introduction of a new type of configuration-dependent excess parameters in the definition of the thermodynamic functions of solid solutions described by the CVM or the CSA.
These parameters greatly improve the precision of these thermodynamic models with respect to experimental evidence found in the literature. A new parameterization approach for the MQMPA model of metallic liquid solutions is presented throughout this work. In this new approach, calculated pair fractions obtained from MC/MD simulations are taken into account, as well as configuration-independent volumetric relaxation effects (regular-like excess parameters), in order to parameterize precisely the Gibbs energy function of metallic melts. The generation of a complete set of fully consistent thermodynamic, physical and structural data for solid, liquid, and stoichiometric compounds, and the subsequent parameterization of their respective thermodynamic models, lead to the first description of the complete Al-Zr phase diagram in the composition range [0 ≤ XZr ≤ 5/9] based on theoretical and fully consistent thermodynamic properties. MC and MD simulations are performed for the Al-Zr system to define for the first time the precise thermodynamic behaviour of the amorphous phase over its entire range of composition. Finally, all the thermodynamic models for the liquid phase, the FCC solid solution and the amorphous phase are used to define conditions, based on thermodynamic and volumetric considerations, that favor the amorphization of Al-Zr alloys.

  6. A skeleton family generator via physics-based deformable models.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2009-01-01

    This paper presents a novel approach for object skeleton family extraction. The introduced technique utilizes a 2-D physics-based deformable model that parameterizes the object's shape. The deformation equations are solved using modal analysis and, depending on the model's physical characteristics, a different skeleton is produced each time, thereby generating a family of skeletons. The theoretical properties and the experiments presented demonstrate that the obtained skeletons match hand-labeled skeletons provided by human subjects, even in the presence of significant noise, shape variations, cuts and tears, and have the same topology as the original skeletons. In particular, the proposed approach produces no spurious branches, without the need for any skeleton pruning method.

  7. Trends and uncertainties in budburst projections of Norway spruce in Northern Europe.

    PubMed

    Olsson, Cecilia; Olin, Stefan; Lindström, Johan; Jönsson, Anna Maria

    2017-12-01

    Budburst is regulated by temperature conditions, and a warming climate is associated with earlier budburst. A range of phenology models has been developed to assess climate change effects, and they tend to produce different results. This is mainly caused by different model representations of tree physiology processes, selection of observational data for model parameterization, and selection of climate model data to generate future projections. In this study, we applied (i) Bayesian inference to estimate model parameter values to address uncertainties associated with the selection of observational data, (ii) selection of climate model data representative of a larger dataset, and (iii) ensemble modeling over multiple initial conditions, model classes, model parameterizations, and boundary conditions to generate future projections and uncertainty estimates. The ensemble projection indicated that the budburst of Norway spruce in northern Europe will on average take place 10.2 ± 3.7 days earlier in 2051-2080 than in 1971-2000, given climate conditions corresponding to RCP 8.5. Three provenances were assessed separately (one early and two late), and the projections indicated that the relationship among provenances will persist in a warmer climate. Structurally complex models were more likely to fail to predict budburst for some combinations of site and year than simple models. However, they contributed to the overall picture of current understanding of climate impacts on tree phenology by capturing additional aspects of temperature response, for example, chilling. Model parameterizations based on single sites were more likely to result in model failure than parameterizations based on multiple sites, highlighting that model parameterization is sensitive to initial conditions and may not perform well under other climate conditions, whether the change is due to a shift in space or over time. By addressing a range of uncertainties, this study showed that ensemble modeling provides a more robust impact assessment than a single phenology model run would.

  8. A comparison study of convective and microphysical parameterization schemes associated with lightning occurrence in southeastern Brazil using the WRF model

    NASA Astrophysics Data System (ADS)

    Zepka, G. D.; Pinto, O.

    2010-12-01

    The intent of this study is to identify the combination of convective and microphysical WRF parameterizations that best matches lightning occurrence over southeastern Brazil. Twelve thunderstorm days were simulated with the WRF model using three different convective parameterizations (Kain-Fritsch, Betts-Miller-Janjic and Grell-Devenyi ensemble) and two different microphysical schemes (Purdue-Lin and WSM6). In order to test the combinations of parameterizations at the time of lightning occurrence, a comparison was made between the WRF grid point values of surface-based Convective Available Potential Energy (CAPE), Lifted Index (LI), K-Index (KI) and equivalent potential temperature (theta-e), and the lightning locations near those grid points. Histograms were built to show the ratio of the occurrence of different values of these variables for WRF grid points associated with lightning to all WRF grid points. The first conclusion from this analysis was that the choice of microphysics did not change the results as appreciably as the different convective schemes did. The Betts-Miller-Janjic parameterization generally had the worst skill in relating higher magnitudes of all four variables to lightning occurrence. The differences between the Kain-Fritsch and Grell-Devenyi ensemble schemes were not large. This fact can be attributed to the similar main assumptions used by these schemes, which consider entrainment/detrainment processes along the cloud boundaries. After that, we examined three case studies using the combinations of convective and microphysical options without the Betts-Miller-Janjic scheme. Differently from traditional verification procedures, fields of surface-based CAPE from the WRF 10 km domain were compared to the Eta model, satellite images and lightning data. In general, the more reliable convective scheme was Kain-Fritsch, since it provided a more consistent distribution of the CAPE fields with respect to satellite images and lightning data.

  9. Responses of Mixed-Phase Cloud Condensates and Cloud Radiative Effects to Ice Nucleating Particle Concentrations in NCAR CAM5 and DOE ACME Climate Models

    NASA Astrophysics Data System (ADS)

    Liu, X.; Shi, Y.; Wu, M.; Zhang, K.

    2017-12-01

    Mixed-phase clouds frequently observed in the Arctic and mid-latitude storm tracks have substantial impacts on the surface energy budget, precipitation and climate. In this study, we first implement two empirical parameterizations (Niemand et al. 2012 and DeMott et al. 2015) of heterogeneous ice nucleation for mixed-phase clouds in the NCAR Community Atmosphere Model Version 5 (CAM5) and the DOE Accelerated Climate Model for Energy Version 1 (ACME1). Model-simulated ice nucleating particle (INP) concentrations based on Niemand et al. and DeMott et al. are compared with those from the default ice nucleation parameterization based on classical nucleation theory (CNT) in CAM5 and ACME, and with in situ observations. Significantly higher INP concentrations (by up to a factor of 5) are simulated from Niemand et al. than from DeMott et al. and CNT, especially over the dust source regions, in both CAM5 and ACME. Interestingly, the ACME model simulates higher INP concentrations than CAM5, especially in the polar regions. This is also the case when we nudge the two models' winds and temperature towards the same reanalysis, indicating more efficient transport of aerosols (dust) to the polar regions in ACME. Next, we examine the responses of model-simulated cloud liquid and ice water contents to the different INP concentrations from the three ice nucleation parameterizations (Niemand et al., DeMott et al., and CNT) in CAM5 and ACME. Changes in liquid water path (LWP) reach as much as 20% in the Arctic regions in ACME between the three parameterizations, while the LWP changes are smaller and limited to the Northern Hemispheric mid-latitudes in CAM5. Finally, the impacts on cloud radiative forcing and the dust indirect effects on mixed-phase clouds are quantified with the three ice nucleation parameterizations in CAM5 and ACME.

  10. Physical retrieval of precipitation water contents from Special Sensor Microwave/Imager (SSM/I) data. Part 2: Retrieval method and applications (report version)

    NASA Technical Reports Server (NTRS)

    Olson, William S.

    1990-01-01

    A physical retrieval method for estimating precipitating water distributions and other geophysical parameters based upon measurements from the DMSP-F8 SSM/I is developed. Three unique features of the retrieval method are: (1) sensor antenna patterns are explicitly included to accommodate varying channel resolution; (2) precipitation-brightness temperature relationships are quantified using the cloud ensemble/radiative parameterization; and (3) spatial constraints are imposed on certain background parameters, such as humidity, which vary more slowly in the horizontal than the cloud and precipitation water contents. The general framework of the method will facilitate the incorporation of measurements from the SSM/T and SSM/T-2 sounders and geostationary infrared measurements, as well as information from conventional sources (e.g., radiosondes) or numerical forecast model fields.

  11. From Global to Cloud Resolving Scale: Experiments with a Scale- and Aerosol-Aware Physics Package and Impact on Tracer Transport

    NASA Astrophysics Data System (ADS)

    Grell, G. A.; Freitas, S. R.; Olson, J.; Bela, M.

    2017-12-01

    A summary of the latest cumulus parameterization modeling efforts at NOAA's Earth System Research Laboratory (ESRL) will be presented on both regional and global scales. The physics package includes a scale-aware parameterization of subgrid cloudiness feedback to radiation (coupled PBL, microphysics, radiation, shallow and congestus-type convection), the stochastic Grell-Freitas (GF) scale- and aerosol-aware convective parameterization, and an aerosol-aware microphysics package. GF is based on a stochastic approach originally implemented by Grell and Devenyi (2002) and described in more detail in Grell and Freitas (2014, ACP). It was expanded to include PDFs for vertical mass flux, as well as modifications to improve the diurnal cycle. This physics package will be used on different scales, spanning global to cloud resolving, to look at the impact on scalar transport and numerical weather prediction.

  12. Application of a planetary wave breaking parameterization to stratospheric circulation statistics

    NASA Technical Reports Server (NTRS)

    Randel, William J.; Garcia, Rolando R.

    1994-01-01

    The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data, a planetary wave breaking criterion (based on the ratio of the eddy to zonal-mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified as a wave breaking region from the observed PV gradients; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.

  13. A new parameterization of the UV irradiance altitude dependence for clear-sky conditions and its application in the on-line UV tool over Northern Eurasia

    NASA Astrophysics Data System (ADS)

    Chubarova, Nataly; Zhdanova, Yekaterina; Nezval, Yelena

    2016-09-01

    A new method for calculating the altitude dependence of UV radiation is proposed for different types of biologically active UV radiation (erythemally weighted, vitamin-D-weighted and cataract-weighted types). We show that, for the specified groups of parameters, the altitude UV amplification (AUV) can be represented as a composite of independent UV amplification contributions from different factors over a wide range of their variations, with a mean uncertainty of 1 % and a standard deviation of 3 % compared with exact model simulations using the same input parameters. The parameterization takes into account the altitude dependence of molecular number density, ozone content, aerosol, and the spatial distribution of surface albedo. We also provide generalized altitude dependencies of the parameters for evaluating the AUV. The resulting comparison of the altitude UV effects using the proposed method shows good agreement with accurate 8-stream DISORT model simulations, with a correlation coefficient r > 0.996. A satisfactory agreement was also obtained with experimental UV data in mountain regions. Using this parameterization, we analyzed the role of different geophysical parameters in UV variations with altitude. The decrease in molecular number density, especially at high altitudes, and the increase in surface albedo play the most significant roles in the UV growth. Typical aerosol and ozone altitude UV effects do not exceed 10-20 %. Using the proposed parameterization implemented in the on-line UV tool (http://momsu.ru/uv/) for Northern Eurasia over the PEEX domain, we analyzed the altitude UV increase and its possible effects on human health, considering different skin types and various open body fractions for January and April conditions in the Alpine region.
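
    A minimal sketch of the compositional idea described above, in which the total altitude UV amplification is approximated as a product of independent single-factor amplifications. The four factor functions and their coefficients are placeholders, not the published fits.

```python
def auv_total(dz_km, factors):
    """Multiply independent per-factor amplifications over an ascent of dz_km."""
    amp = 1.0
    for f in factors:
        amp *= f(dz_km)
    return amp

# Placeholder linear amplification factors per kilometre of ascent.
molecular = lambda dz: 1.0 + 0.08 * dz   # Rayleigh / number-density decrease
ozone     = lambda dz: 1.0 + 0.01 * dz   # column-ozone decrease
aerosol   = lambda dz: 1.0 + 0.03 * dz   # aerosol-load decrease
albedo    = lambda dz: 1.0 + 0.05 * dz   # growing snow-cover fraction

print(auv_total(2.0, [molecular, ozone, aerosol, albedo]))  # 2 km ascent
```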

  14. Sensitivity analysis with the regional climate model COSMO-CLM over the CORDEX-MENA domain

    NASA Astrophysics Data System (ADS)

    Bucchignani, E.; Cattaneo, L.; Panitz, H.-J.; Mercogliano, P.

    2016-02-01

    The results of a sensitivity study based on ERA-Interim-driven COSMO-CLM simulations over the Middle East-North Africa (CORDEX-MENA) domain are presented. All simulations were performed at 0.44° spatial resolution. The purpose of this study was to ascertain model performance with respect to changes in physical and tuning parameters, which are mainly related to the surface, convection, radiation and cloud parameterizations. Evaluation was performed for the whole CORDEX-MENA region and six sub-regions, comparing a set of 26 COSMO-CLM runs against a combination of available ground observations, satellite products and reanalysis data to assess temperature, precipitation, cloud cover and mean sea level pressure. The model proved to be very sensitive to changes in physical parameters. The optimized configuration allows COSMO-CLM to improve the simulation of the main climate features of this area. Its main characteristics are a new parameterization of albedo, based on Moderate Resolution Imaging Spectroradiometer data, and a new parameterization of aerosol, based on NASA-GISS AOD distributions. When applying this configuration, Mean Absolute Error values for the considered variables are as follows: about 1.2 °C for temperature, about 15 mm/month for precipitation, about 9 % for total cloud cover, and about 0.6 hPa for mean sea level pressure.

  15. Parameterized examination in econometrics

    NASA Astrophysics Data System (ADS)

    Malinova, Anna; Kyurkchiev, Vesselin; Spasov, Georgi

    2018-01-01

    The paper presents a parameterization of basic types of exam questions in Econometrics. This algorithm is used to automate and facilitate the process of examination, assessment and self-preparation of a large number of students. The proposed parameterization of test questions reduces the time required to author tests and course assignments. It enables tutors to generate a large number of different but equivalent dynamic questions (with dynamic answers) on a certain topic, which are automatically assessed. The presented methods are implemented in DisPeL (Distributed Platform for e-Learning) and provide questions in the areas of filtering and smoothing of time-series data, forecasting, and the building and analysis of single-equation econometric models. Questions also cover elasticity, average and marginal characteristics, product and cost functions, measurement of monopoly power, supply, demand and equilibrium price, consumer and producer surplus, etc. Several approaches are used to enable the required numerical computations in DisPeL: integration of third-party mathematical libraries, development of our own procedures from scratch, and wrapping of our legacy math codes in order to modernize and reuse them.
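
    An illustrative stand-alone mock-up of parameterized question generation in the spirit of this approach: a template with randomized coefficients and an automatically computed answer key. DisPeL's actual template format and APIs are not reproduced here; the elasticity example and all numeric ranges are assumptions.

```python
import random

def elasticity_question(rng):
    """Generate one parameterized question with its answer key."""
    a = rng.randint(20, 60)           # intercept of demand Q = a - b*P
    b = rng.randint(2, 6)             # slope magnitude
    p = rng.randint(2, a // b - 1)    # price chosen so quantity stays positive
    q = a - b * p
    elasticity = -b * p / q           # point price elasticity of demand
    text = (f"Demand is Q = {a} - {b}P. Compute the point price "
            f"elasticity of demand at P = {p}.")
    return text, round(elasticity, 3)

rng = random.Random(42)               # seeded for reproducible question sets
for _ in range(3):
    question, answer = elasticity_question(rng)
    print(question, "| key:", answer)
```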

  16. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years, with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models: pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, the additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.

  17. Investigation of nuclear structure of 30-44S isotopes using spherical and deformed Skyrme-Hartree-Fock method

    NASA Astrophysics Data System (ADS)

    Alzubadi, A. A.

    2015-06-01

    A nuclear many-body system is usually described by a mean field built upon a nucleon-nucleon effective interaction. In this work, we investigate the ground state properties of the sulfur isotopes over a wide range from the line of stability up to the dripline region (30-44S). For this purpose, Hartree-Fock mean field theory in coordinate space with the Skyrme parameterization SkM* has been utilized. In particular, we calculate the charge, neutron, proton, and mass densities, the associated radii, the neutron skin thickness and the binding energy. The charge form factors have also been investigated using the SkM*, SkO, SkE, SLy4 and Skxs15 Skyrme parameterizations, and the results obtained using the theoretical approach are compared with the available experimental data. To investigate the potential energy surface as a function of the quadrupole deformation for the sulfur isotopic chain, Skyrme-Hartree-Fock-Bogoliubov theory has been adopted with the SLy4 parameterization.

  18. Dissecting the accountability of parameterized and parameter-free single-hybrid and double-hybrid functionals for photophysical properties of TADF-based OLEDs

    NASA Astrophysics Data System (ADS)

    Alipour, Mojtaba; Karimi, Niloofar

    2017-06-01

    Organic light emitting diodes (OLEDs) based on thermally activated delayed fluorescence (TADF) emitters are an attractive category of materials that have witnessed booming development in recent years. In the present contribution, we scrutinize the accountability of parameterized and parameter-free single-hybrid (SH) and double-hybrid (DH) functionals through two formalisms, full time-dependent density functional theory (TD-DFT) and the Tamm-Dancoff approximation (TDA), for the estimation of photophysical properties such as absorption energy, emission energy, zero-zero transition energy, and singlet-triplet energy splitting of TADF molecules. According to our detailed analyses of the performance of SHs based on TD-DFT and TDA, the TDA-based parameter-free SH functionals, PBE0 and TPSS0, with one-third exact-like exchange, turned out to be the best performers in comparison to other functionals from various rungs in reproducing the experimental data of the benchmark set. Such affordable SH approximations can thus be employed to predict and design TADF molecules with low singlet-triplet energy gaps for OLED applications. From another perspective, considering that both nonlocal exchange and correlation are essential for a more reliable description of large charge-transfer excited states, the applicability of functionals incorporating these terms, namely parameterized and parameter-free DHs, has also been evaluated. Examining the roles of exact-like exchange, perturbative-like correlation, solvent effects, and other related factors, we find that the parameterized functionals B2π-PLYP and B2GP-PLYP and the parameter-free models PBE-CIDH and PBE-QIDH perform respectably with respect to the others. Lastly, besides recommending reliable computational protocols for the purpose, we hope this study can pave the way toward further development of other SHs and DHs for theoretical explorations in the field of OLED technology.

  19. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy.

    PubMed

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-07

    Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator, and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry; the averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data were stored in the GPU's shared memory. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computation time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and from 0.69 to 1.23 times for photon-only transport.
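
    A CPU-side sketch of the geometry idea described above: a region is bounded by quadric surfaces f(x) = x^T A x + b·x + c = 0, and particle navigation reduces to finding the nearest positive root of a quadratic along the flight direction. This is an illustration of the technique, not the authors' GPU code; the unit-sphere test case is an assumption.

```python
import numpy as np

def quadric(A, b, c):
    """Surface f(x) = x^T A x + b.x + c = 0 (A assumed symmetric)."""
    return {"A": np.asarray(A, float), "b": np.asarray(b, float), "c": float(c)}

def f(q, x):
    return x @ q["A"] @ x + q["b"] @ x + q["c"]

def distance_to_surface(q, x, d):
    """Smallest positive t with f(x + t*d) = 0, or inf if no crossing."""
    a2 = d @ q["A"] @ d                       # quadratic coefficient in t
    a1 = 2 * (x @ q["A"] @ d) + q["b"] @ d    # linear coefficient in t
    a0 = f(q, x)                              # constant term
    if abs(a2) < 1e-12:                       # surface effectively planar here
        return -a0 / a1 if a1 != 0 and -a0 / a1 > 0 else np.inf
    disc = a1 * a1 - 4 * a2 * a0
    if disc < 0:
        return np.inf
    roots = [(-a1 - np.sqrt(disc)) / (2 * a2),
             (-a1 + np.sqrt(disc)) / (2 * a2)]
    pos = [t for t in roots if t > 1e-9]
    return min(pos) if pos else np.inf

# Unit sphere x.x - 1 = 0; particle at the origin moving along +z hits at t = 1.
sphere = quadric(np.eye(3), np.zeros(3), -1.0)
print(distance_to_surface(sphere, np.zeros(3), np.array([0., 0., 1.])))
```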

  20. Alternate methodologies to experimentally investigate shock initiation properties of explosives

    NASA Astrophysics Data System (ADS)

    Svingala, Forrest R.; Lee, Richard J.; Sutherland, Gerrit T.; Benjamin, Richard; Boyle, Vincent; Sickels, William; Thompson, Ronnie; Samuels, Phillip J.; Wrobel, Erik; Cornell, Rodger

    2017-01-01

    Reactive flow models are desired for new explosive formulations early in the development stage. Traditionally, these models are parameterized using carefully controlled 1-D shock experiments, including gas-gun testing with embedded gauges and wedge testing with explosive plane wave lenses (PWL). These experiments are easy to interpret due to their 1-D nature, but are expensive to perform and cannot be performed at all explosive test facilities. This work investigates alternative methods to probe the shock-initiation behavior of new explosives using widely available pentolite gap test donors and simple time-of-arrival diagnostics. These experiments can be performed at low cost at most explosives testing facilities, allowing the experimental data needed to parameterize reactive flow models to be collected much earlier in the development of an explosive formulation. However, the fundamentally 2-D nature of these tests may increase the modeling burden in parameterizing the models and reduce their general applicability. Several variations of the so-called modified gap test were investigated and evaluated for suitability as an alternative to established 1-D gas-gun and PWL techniques. At least partial agreement with 1-D test methods was observed for the explosives tested, and future work is planned to scope the applicability and limitations of these experimental techniques.

  1. Using raindrop size distributions from different types of disdrometer to establish weather radar algorithms

    NASA Astrophysics Data System (ADS)

    Baldini, Luca; Adirosi, Elisa; Roberto, Nicoletta; Vulpiani, Gianfranco; Russo, Fabio; Napolitano, Francesco

    2015-04-01

    Radar precipitation retrieval uses several relationships that parameterize precipitation properties (like rainfall rate, liquid water content and, for radars at attenuated frequencies such as C- and X-band, attenuation) as a function of combinations of radar measurements. The uncertainty in such relations strongly affects the uncertainty of precipitation and attenuation estimates. A commonly used method to derive such relationships is to apply regression methods to precipitation measurements and radar observables simulated from datasets of drop size distributions (DSD) using microphysical and electromagnetic assumptions. DSD datasets are determined either from theoretical considerations (i.e., based on the assumption that the radar always samples raindrops whose sizes follow a gamma distribution) or from experimental measurements collected throughout the years by disdrometers. In principle, long-term disdrometer measurements provide parameterizations more representative of a specific climatology. However, instrumental errors specific to a disdrometer can affect the results. In this study, different weather radar algorithms resulting from DSDs collected by diverse types of disdrometers, namely a 2D video disdrometer, the first and second generations of the OTT Parsivel laser disdrometer, and the Thies Clima laser disdrometer, in the area of Rome (Italy), are presented and discussed to establish to what extent dual-polarization radar algorithms derived from experimental DSD datasets are influenced by the different error structures of the disdrometer types used to collect the data.
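
    The sketch below illustrates the general workflow described here: radar reflectivity Z and rain rate R are computed from drop size distributions, and a power law Z = aR^b is fit in log space. Synthetic exponential (Marshall-Palmer-type) spectra and a simple power-law fall speed stand in for actual disdrometer data; Rayleigh scattering is assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
D = np.linspace(0.1, 6.0, 60)                  # drop diameter bins (mm)
dD = D[1] - D[0]

def radar_moments(N0, Lam):
    """Z and R from an exponential DSD N(D) = N0 * exp(-Lam * D)."""
    N = N0 * np.exp(-Lam * D)                  # m^-3 mm^-1
    Z = np.sum(N * D**6) * dD                  # mm^6 m^-3 (Rayleigh regime)
    v = 3.78 * D**0.67                         # fall speed, m/s (power law)
    R = 6e-4 * np.pi * np.sum(N * D**3 * v) * dD   # rain rate, mm/h
    return Z, R

# 200 synthetic spectra spanning a range of intercepts and slopes.
Zs, Rs = zip(*[radar_moments(8000 * rng.uniform(0.5, 2.0),
                             rng.uniform(1.5, 4.0))
               for _ in range(200)])
b_fit, log_a = np.polyfit(np.log(Rs), np.log(Zs), 1)  # log-log regression
print(f"Z = {np.exp(log_a):.0f} * R^{b_fit:.2f}")
```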

  2. Improving the Process of Adjusting the Parameters of Finite Element Models of Healthy Human Intervertebral Discs by the Multi-Response Surface Method.

    PubMed

    Gómez, Fátima Somovilla; Lorza, Rubén Lostado; Bobadilla, Marina Corral; García, Rubén Escribano

    2017-09-21

    The kinematic behavior of models that are based on the finite element method (FEM) for modeling the human body depends greatly on an accurate estimate of the parameters that define such models. This task is complex, and any small difference between the actual biomaterial model and the simulation model based on FEM can be amplified enormously in the presence of nonlinearities. The current paper attempts to demonstrate how a combination of the FEM and the multi-response surface (MRS) method with desirability functions can be used to obtain the material parameters that are most appropriate for use in defining the behavior of Finite Element (FE) models of the healthy human lumbar intervertebral disc (IVD). The FE model parameters were adjusted on the basis of experimental data from selected standard tests (compression, flexion, extension, shear, lateral bending, and torsion) and were developed as follows: First, three-dimensional parameterized FE models were generated on the basis of the mentioned standard tests. Then, 11 parameters were selected to define the proposed parameterized FE models. For each of the standard tests, regression models were generated using MRS to model the six stiffnesses and nine bulges of the healthy IVD models that were created by changing the parameters of the FE models. The optimal combination of the 11 parameters was based on three different adjustment criteria. The latter, in turn, were based on the combination of stiffnesses and bulges that were obtained from the standard test FE simulations. The first adjustment criterion considered stiffnesses and bulges to be equally important in the adjustment of FE model parameters. The second adjustment criterion considered stiffnesses to be most important, whereas the third considered the bulges to be most important. The proposed adjustment methods were applied to a medium-sized human IVD that corresponded to the L3-L4 lumbar level with standard dimensions of width = 50 mm, depth = 35 mm, and height = 10 mm. Agreement between the kinematic behavior that was obtained with the optimized parameters and that obtained from the literature demonstrated that the proposed method is a powerful tool with which to adjust healthy IVD FE models when there are many parameters, stiffnesses, and bulges to which the models must adjust.
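
    A minimal sketch of the desirability-function step of the MRS approach: each response is mapped to a desirability in [0, 1] relative to its target, and candidate parameter sets are ranked by the geometric mean, so the three adjustment criteria correspond to different weightings. Targets, tolerances and weights are illustrative placeholders, not the paper's values.

```python
import numpy as np

def desirability(value, target, tol, weight=1.0):
    """Target-is-best desirability: 1 at the target, 0 beyond +/- tol."""
    d = max(0.0, 1.0 - abs(value - target) / tol)
    return d ** weight

def overall(values, targets, tols, weights):
    """Geometric mean of individual desirabilities."""
    ds = [desirability(v, t, s, w)
          for v, t, s, w in zip(values, targets, tols, weights)]
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Two candidate parameter sets scored on three responses
# (e.g., compression stiffness, torsion stiffness, posterior bulge).
targets, tols, weights = [900., 7.5, 1.2], [200., 2.0, 0.5], [1, 1, 1]
print(overall([950., 7.0, 1.3], targets, tols, weights))   # close to targets
print(overall([1150., 9.8, 0.6], targets, tols, weights))  # far from targets
```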

  3. Improving the Process of Adjusting the Parameters of Finite Element Models of Healthy Human Intervertebral Discs by the Multi-Response Surface Method

    PubMed Central

    Somovilla Gómez, Fátima

    2017-01-01

    The kinematic behavior of models that are based on the finite element method (FEM) for modeling the human body depends greatly on an accurate estimate of the parameters that define such models. This task is complex, and any small difference between the actual biomaterial model and the simulation model based on FEM can be amplified enormously in the presence of nonlinearities. The current paper attempts to demonstrate how a combination of the FEM and the MRS methods with desirability functions can be used to obtain the material parameters that are most appropriate for use in defining the behavior of Finite Element (FE) models of the healthy human lumbar intervertebral disc (IVD). The FE model parameters were adjusted on the basis of experimental data from selected standard tests (compression, flexion, extension, shear, lateral bending, and torsion) and were developed as follows: First, three-dimensional parameterized FE models were generated on the basis of the mentioned standard tests. Then, 11 parameters were selected to define the proposed parameterized FE models. For each of the standard tests, regression models were generated using MRS to model the six stiffnesses and nine bulges of the healthy IVD models that were created by changing the parameters of the FE models. The optimal combination of the 11 parameters was based on three different adjustment criteria. The latter, in turn, were based on the combination of stiffnesses and bulges that were obtained from the standard test FE simulations. The first adjustment criterion considered stiffnesses and bulges to be equally important in the adjustment of FE model parameters. The second adjustment criterion considered stiffnesses to be most important, whereas the third considered the bulges to be most important. The proposed adjustment methods were applied to a medium-sized human IVD that corresponded to the L3–L4 lumbar level with standard dimensions of width = 50 mm, depth = 35 mm, and height = 10 mm. Agreement between the kinematic behavior that was obtained with the optimized parameters and that obtained from the literature demonstrated that the proposed method is a powerful tool with which to adjust healthy IVD FE models when there are many parameters, stiffnesses, and bulges to which the models must adjust. PMID:28934161

  4. A physiologically based toxicokinetic model for lake trout (Salvelinus namaycush).

    PubMed

    Lien, G J; McKim, J M; Hoffman, A D; Jenson, C T

    2001-01-01

    A physiologically based toxicokinetic (PB-TK) model for fish, incorporating chemical exchange at the gill and accumulation in five tissue compartments, was parameterized and evaluated for lake trout (Salvelinus namaycush). Individual-based model parameterization was used to examine the effect of natural variability in physiological, morphological, and physico-chemical parameters on model predictions. The PB-TK model was used to predict uptake of organic chemicals across the gill and accumulation in blood and tissues in lake trout. To evaluate the accuracy of the model, a total of 13 adult lake trout were exposed to waterborne 1,1,2,2-tetrachloroethane (TCE), pentachloroethane (PCE), and hexachloroethane (HCE), concurrently, for periods of 6, 12, 24 or 48 h. The measured and predicted concentrations of TCE, PCE and HCE in expired water, dorsal aortic blood and tissues were generally within a factor of two, and in most instances much closer. Variability noted in model predictions, based on the individual-based model parameterization used in this study, reproduced variability observed in measured concentrations. The inference is made that parameters influencing variability in measured blood and tissue concentrations of xenobiotics are included and accurately represented in the model. This model contributes to a better understanding of the fundamental processes that regulate the uptake and disposition of xenobiotic chemicals in the lake trout. This information is crucial to developing a better understanding of the dynamic relationships between contaminant exposure and hazard to the lake trout.
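
    A minimal flow-limited sketch of a PB-TK structure of this kind: chemical enters at the gill and distributes to perfused tissue compartments. Only two tissue compartments are shown instead of five, and all physiological values and partition coefficients are illustrative assumptions, not lake trout parameters.

```python
import numpy as np
from scipy.integrate import odeint

Q_g  = 0.05                      # effective gill uptake clearance, L/h (assumed)
Cw   = 1.0                       # water concentration, ug/L (assumed)
Q    = np.array([0.6, 0.2])      # tissue blood flows, L/h (assumed)
V    = np.array([0.3, 0.5])      # tissue volumes, L (assumed)
P    = np.array([4.0, 20.0])     # tissue:blood partition coefficients (assumed)
V_bl = 0.05                      # blood volume, L (assumed)

def deriv(y, t):
    C_bl, C1, C2 = y
    C_t = np.array([C1, C2])
    venous = Q * C_t / P                                  # tissue -> blood return
    dC_bl = (Q_g * (Cw - C_bl) + venous.sum() - (Q * C_bl).sum()) / V_bl
    dC_t = Q * (C_bl - C_t / P) / V                       # flow-limited tissues
    return [dC_bl, dC_t[0], dC_t[1]]

t = np.linspace(0, 48, 200)                               # 48-h waterborne exposure
sol = odeint(deriv, [0.0, 0.0, 0.0], t)
print("blood, tissue1, tissue2 at 48 h:", np.round(sol[-1], 3))
```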

  5. ATLAS - A new Lagrangian transport and mixing model with detailed stratospheric chemistry

    NASA Astrophysics Data System (ADS)

    Wohltmann, I.; Rex, M.; Lehmann, R.

    2009-04-01

    We present a new global Chemical Transport Model (CTM) with full stratospheric chemistry and Lagrangian transport and mixing, called ATLAS. Lagrangian models have some crucial advantages over Eulerian grid-box-based models, such as the absence of numerical diffusion, no limitation of the model time step by the CFL criterion, conservation of mixing ratios by design, and easy parallelization of the code. The transport module is based on a trajectory code developed at the Alfred Wegener Institute. The horizontal and vertical resolution, the vertical coordinate system (pressure, potential temperature, hybrid coordinate) and the time step of the model are flexible, so that the model can be used both for process studies and for long-term runs over several decades. Mixing of the Lagrangian air parcels is parameterized based on the local shear and strain of the flow, with a method similar to that used in the CLaMS model but with some modifications, such as a triangulation that introduces no vertical layers. The stratospheric chemistry module was developed at the institute and includes 49 species and 170 reactions, with a detailed treatment of heterogeneous chemistry on polar stratospheric clouds. We present an overview of the model architecture, the transport and mixing concept, and some validation results. Comparisons of model results with tracer data from flights of the ER-2 aircraft in the stratospheric polar vortex in 1999/2000, which resolve fine tracer filaments, show that excellent agreement with observed tracer structures can be achieved with a suitable mixing parameterization.

  6. A new parameterization of the post-fire snow albedo effect

    NASA Astrophysics Data System (ADS)

    Gleason, K. E.; Nolin, A. W.

    2013-12-01

    Mountain snowpack serves as an important natural reservoir of water: recharging aquifers, sustaining streams, and providing important ecosystem services. Reduced snowpacks and earlier snowmelt have been shown to affect fire size, frequency, and severity in the western United States. In turn, wildfire disturbance affects patterns of snow accumulation and ablation by reducing canopy interception, increasing turbulent fluxes, and modifying the surface radiation balance. Recent work shows that after a high-severity forest fire, approximately 60% more solar radiation reaches the snow surface due to the reduction in canopy density. Also, significant amounts of pyrogenic carbon particles and larger burned woody debris (BWD) are shed from standing charred trees; these concentrate on the snowpack, darken its surface, and reduce snow albedo by 50% during ablation. Although the post-fire forest environment drives a substantial increase in net shortwave radiation at the snowpack surface, driving earlier and more rapid melt, hydrologic models do not explicitly incorporate forest fire disturbance effects on snowpack dynamics. The objective of this study was to parameterize the post-fire snow albedo effect due to BWD deposition on snow, to better represent forest fire disturbance in the modeling of snow-dominated hydrologic regimes. Based on empirical results from winter experiments, in-situ snow monitoring, and remote sensing data from a recent forest fire in the Oregon High Cascades, we characterized the post-fire snow albedo effect and developed a simple parameterization of snowpack albedo decay in the post-fire forest environment. We modified the recession coefficient in the algorithm α = α0 + K exp(-nr), where α is the snowpack albedo, α0 the minimum snowpack albedo (≈0.4), K a constant (≈0.44), n the number of days since the last major snowfall, and r the recession coefficient [Rohrer and Braun, 1994]. Our parameterization quantified BWD deposition and snow albedo decay rates and related these forest disturbance effects to radiative heating and snowmelt rates. We validated our parameterization of the post-fire snow albedo effect at the plot scale using a physically based, spatially distributed snow accumulation and melt model, together with in-situ eddy covariance and snow monitoring data. This research quantified wildfire impacts on snow dynamics in the Oregon High Cascades and provides a new parameterization of post-fire drivers of changes in high-elevation winter water storage.
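
    The decay form quoted above is simple enough to implement directly; the sketch below evaluates α = α0 + K exp(-nr) with the stated α0 ≈ 0.4 and K ≈ 0.44, comparing an unburned recession coefficient with a faster, assumed post-fire value mimicking debris darkening. Both r values are illustrative.

```python
import numpy as np

def snow_albedo(days_since_snowfall, r, alpha0=0.4, K=0.44):
    """alpha = alpha0 + K * exp(-n * r), with n in days."""
    return alpha0 + K * np.exp(-days_since_snowfall * r)

days = np.arange(0, 15)
print("unburned :", np.round(snow_albedo(days, r=0.12), 2))
print("post-fire:", np.round(snow_albedo(days, r=0.35), 2))  # faster decay (assumed)
```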

  7. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape: we decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a stereo camera pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
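
    A sketch of the contour-analysis step: fit a periodic B-spline to a closed 2-D contour and compute the signed curvature along it, taking strong curvature extrema as candidate part boundaries. The five-lobed synthetic contour stands in for an extracted human silhouette; the joint-selection rule is a placeholder.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic closed contour standing in for a segmented silhouette.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r = 1.0 + 0.3 * np.cos(5 * t)                  # five-lobed shape
x, y = r * np.cos(t), r * np.sin(t)

tck, u = splprep([x, y], s=0.0, per=True)      # periodic B-spline fit
dx, dy = splev(u, tck, der=1)                  # first derivatives
ddx, ddy = splev(u, tck, der=2)                # second derivatives
kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5  # signed curvature

# Placeholder rule: report the five strongest curvature extrema as candidates.
idx = np.argsort(np.abs(kappa))[-5:]
print("candidate boundaries at parameter u:", np.round(np.sort(u[idx]), 2))
```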

  8. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    USDA-ARS?s Scientific Manuscript database

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  9. Characterizing Reinforcement Learning Methods through Parameterized Learning Problems

    DTIC Science & Technology

    2011-06-03

    extraneous. The agent could potentially adapt these representational aspects by applying methods from feature selection (Kolter and Ng, 2009; Petrik et al., ...).

  10. What have we learned from HaChi (HAZE IN CHINA) project?

    NASA Astrophysics Data System (ADS)

    Zhao, Chunsheng; Wiedensohler, Alfred

    2016-04-01

    The HaChi (Haze in China) project, a joint research effort between the Chinese NSFC and the German DFG, focuses on investigating aerosol hygroscopic properties in the North China Plain and their relationships to aerosol optics, radiation, cloud physics, and ozone photochemistry. Eastern China has suffered from severe pollution caused by large concentrations of aerosol particles resulting from emissions from fossil fuel and biomass burning, transportation, and other combustion sources. Low visibility events are frequently encountered and are mainly accompanied by haze as a result of either high aerosol loading or strong hygroscopic growth of the aerosol particles. Especially at relative humidities between 90 and 99%, the aerosol particles grow exponentially. The hygroscopic behavior at relative humidities close to 100% is also strongly linked to the particles' ability to grow into cloud droplets at supersaturation. In my talk, I will present an overview of the results to date from a series of intensive and comprehensive field campaigns conducted at the sites of Wuqing and Xianghe, China, between 2009 and 2014. The measurements of ambient aerosol hygroscopic properties at high RH between 90 and 98.5% are reported first. These in situ field measurements of atmospheric aerosol are unique with respect to their high RH range and are especially important for better understanding the widespread anthropogenic haze over the North China Plain. I will then introduce the methods for calculating aerosol hygroscopicity and their parameterization schemes derived from size-segregated chemical composition and light scattering enhancement factor measurements in the North China Plain. A new method was proposed to retrieve the ratio of the externally mixed light absorbing carbon mass to the total mass of light absorbing carbon. A new parameterization scheme of light extinction for low visibilities on hazy days is proposed based on measured visibility, relative humidity, aerosol hygroscopic growth factors, and particle number size distributions. A cloud condensation nuclei (CCN) closure study is conducted with bulk CCN number concentrations and CCN number concentrations calculated from the aerosol number size distribution and size-resolved activation properties. An evaluation of various methods for CCN parameterization is presented based on in situ measurements of aerosol activation properties within the HaChi project. Hygroscopic growth of aerosol particles can significantly affect their single-scattering albedo and consequently alters the aerosol effect on tropospheric photochemistry. Finally, I will introduce results on the relationship between aerosol hygroscopic properties and aerosol radiation, including impacts of aerosol hygroscopic growth on the NO2 photolysis rate coefficient and the estimation of the direct aerosol radiative effect in the North China Plain.
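
    As an indication of the kind of scheme involved, the widely used single-parameter κ representation of hygroscopic growth (Petters and Kreidenweis, 2007) can be sketched as follows; the κ value is a hypothetical placeholder, and this is not the HaChi parameterization itself.

        import numpy as np

        def growth_factor(rh, kappa):
            """Diameter growth factor at relative humidity rh (0-1),
            neglecting the Kelvin term (adequate for large particles)."""
            aw = np.asarray(rh, dtype=float)   # water activity ~ RH
            return (1.0 + kappa * aw / (1.0 - aw)) ** (1.0 / 3.0)

        rh = np.array([0.90, 0.95, 0.985])
        print(growth_factor(rh, kappa=0.3))    # growth steepens sharply as RH -> 1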

  11. Assessment of groundwater quality: a fusion of geochemical and geophysical information via Bayesian neural networks.

    PubMed

    Maiti, Saumen; Erram, V C; Gupta, Gautam; Tiwari, Ram Krishna; Kulkarni, U D; Sangpal, R R

    2013-04-01

    Deplorable quality of groundwater arising from saltwater intrusion, natural leaching, and anthropogenic activities is one of the major concerns for society. Assessment of groundwater quality is, therefore, a primary objective of scientific research. Here, we propose an artificial neural network-based method set in a Bayesian neural network (BNN) framework and employ it to assess groundwater quality. The approach is based on analyzing 36 water samples and inverting up to 85 Schlumberger vertical electrical sounding data sets. We constructed an a priori model by suitably parameterizing geochemical and geophysical data collected from the western part of India. The posterior model (post-inversion) was estimated using the BNN learning procedure and a global hybrid Monte Carlo/Markov chain Monte Carlo optimization scheme. By suitable parameterization of geochemical and geophysical parameters, we simulated 1,500 training samples, of which 50% were used for training and the remaining 50% for validation and testing. We show that the trained model is able to classify validation and test samples with 85% and 80% accuracy, respectively. Based on cross-correlation analysis and the Gibbs diagram of geochemical attributes, the groundwater qualities of the study area were classified into the following three categories: "Very good", "Good", and "Unsuitable". The BNN model-based results suggest that groundwater quality falls mostly in the range of "Good" to "Very good" except for some places near the Arabian Sea. The new modeling results, supported by uncertainty and statistical analyses, provide useful constraints that could be utilized in monitoring and assessment of groundwater quality.

  12. Surface-Constrained Volumetric Brain Registration Using Harmonic Mappings

    PubMed Central

    Joshi, Anand A.; Shattuck, David W.; Thompson, Paul M.; Leahy, Richard M.

    2015-01-01

    In order to compare anatomical and functional brain imaging data across subjects, the images must first be registered to a common coordinate system in which anatomical features are aligned. Intensity-based volume registration methods can align subcortical structures well, but the variability in sulcal folding patterns typically results in misalignment of the cortical surface. Conversely, surface-based registration using sulcal features can produce excellent cortical alignment but the mapping between brains is restricted to the cortical surface. Here we describe a method for volumetric registration that also produces an accurate one-to-one point correspondence between cortical surfaces. This is achieved by first parameterizing and aligning the cortical surfaces using sulcal landmarks. We then use a constrained harmonic mapping to extend this surface correspondence to the entire cortical volume. Finally, this mapping is refined using an intensity-based warp. We demonstrate the utility of the method by applying it to T1-weighted magnetic resonance images (MRI). We evaluate the performance of our proposed method relative to existing methods that use only intensity information; for this comparison we compute the inter-subject alignment of expert-labeled sub-cortical structures after registration. PMID:18092736
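
    As a rough illustration of the extension step, the sketch below relaxes a displacement field toward a discrete harmonic function while holding the surface-derived values fixed; the regular grid, the boundary mask, the wrap-around edge handling of np.roll, and the fixed iteration count are simplifying assumptions, not the authors' implementation.

        import numpy as np

        def harmonic_extension(disp, boundary_mask, n_iter=500):
            """disp: (X, Y, Z, 3) displacement field prescribed on boundary voxels;
            boundary_mask: (X, Y, Z) bool array, True where values are fixed."""
            u = disp.copy()
            for _ in range(n_iter):
                # Jacobi step: each voxel becomes the mean of its 6 neighbors
                avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                       np.roll(u, 1, 1) + np.roll(u, -1, 1) +
                       np.roll(u, 1, 2) + np.roll(u, -1, 2)) / 6.0
                u = np.where(boundary_mask[..., None], disp, avg)  # keep surface fixed
            return u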

  13. A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur

    2009-07-01

    For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (for Eddy Diffusivity/Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, and is specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover and the projection on the non-conservative variables is handled by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme in representing three different boundary layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and conserve a realistic evolution of stratocumulus (EUROCS/FIRE).
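
    A schematic single-updraft calculation, assuming illustrative constants and profiles rather than the published EDMF values; the fractional entrainment is written here as B/w² (which has the required units of 1/m), one plausible reading of the buoyancy-to-vertical-velocity ratio named above, and m0 stands in for the surface mass-flux closure proportional to w*.

        import numpy as np

        def updraft(z, buoyancy, w0=0.5, m0=0.1, c_eps=0.55, c_del=0.5):
            """Integrate updraft kinetic energy w^2 and mass flux upward."""
            w2 = np.full_like(z, w0 ** 2)
            mf = np.full_like(z, m0)              # mass flux, closed near the surface
            for k in range(len(z) - 1):
                dz = z[k + 1] - z[k]
                B = buoyancy[k]
                eps = c_eps * max(B, 0.0) / max(w2[k], 1e-6)   # entrain where buoyant
                dlt = c_del * max(-B, 0.0) / max(w2[k], 1e-6)  # detrain where negatively buoyant
                w2[k + 1] = max(w2[k] + dz * (2.0 * B - 2.0 * eps * w2[k]), 0.0)
                mf[k + 1] = mf[k] * (1.0 + (eps - dlt) * dz)
            return np.sqrt(w2), mf

        z = np.linspace(0.0, 2000.0, 101)
        B = 0.005 * (1.0 - z / 1500.0)            # toy buoyancy, negative above 1500 m
        w, mf = updraft(z, B)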

  14. Topside Electron Density Representations for Middle and High Latitudes: A Topside Parameterization for E-CHAIM Based On the NeQuick

    NASA Astrophysics Data System (ADS)

    Themens, David R.; Jayachandran, P. T.; Bilitza, Dieter; Erickson, Philip J.; Häggström, Ingemar; Lyashenko, Mykhaylo V.; Reid, Benjamin; Varney, Roger H.; Pustovalova, Ljubov

    2018-02-01

    In this study, we present a topside model representation to be used by the Empirical Canadian High Arctic Ionospheric Model (E-CHAIM). In the process, we also present a comprehensive evaluation of the NeQuick's, and by extension the International Reference Ionosphere's, topside electron density model for middle and high latitudes in the Northern Hemisphere. Using data gathered from all available incoherent scatter radars, topside sounders, and Global Navigation Satellite System radio occultation satellites, we show that the current NeQuick parameterization suboptimally represents the shape of the topside electron density profile at these latitudes and performs poorly in the representation of seasonal and solar cycle variations of the topside scale thickness. Despite this, the simple, one-variable NeQuick model is a powerful tool for modeling the topside ionosphere. By refitting the parameters that define the maximum topside scale thickness and the rate of increase of the scale height within the NeQuick topside model function, r and g, respectively, and refitting the model's parameterization of the scale height at the F region peak, H0, we find considerable improvement in the NeQuick's ability to represent the topside shape and behavior. Building on these results, we present a new topside model extension of E-CHAIM based on the revised NeQuick function. Overall, root-mean-square errors in topside electron density are improved over the traditional International Reference Ionosphere/NeQuick topside by 31% for the new NeQuick parameterization and by 36% for the newly proposed E-CHAIM topside.
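
    For orientation, the NeQuick-style topside referenced above can be sketched as a semi-Epstein layer with a height-dependent scale thickness controlled by r, g, and H0; the functional form below follows the standard published NeQuick topside, while the numeric inputs are illustrative rather than refitted values.

        import numpy as np

        def nequick_topside(h, NmF2, hmF2, H0, g=0.125, r=100.0):
            """Electron density above the F2 peak (h, hmF2, H0 in km)."""
            dh = h - hmF2
            H = H0 * (1.0 + r * g * dh / (r * H0 + g * dh))  # scale thickness grows with height
            z = dh / H
            ez = np.exp(z)
            return 4.0 * NmF2 * ez / (1.0 + ez) ** 2          # semi-Epstein layer, = NmF2 at the peak

        h = np.linspace(300.0, 1000.0, 8)
        print(nequick_topside(h, NmF2=1e12, hmF2=300.0, H0=60.0))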

  15. Regional tectonic analysis of Venus equatorial highlands and comparison with Earth-based and Magellan radar images

    NASA Technical Reports Server (NTRS)

    Williams, David R.; Wetherill, George

    1993-01-01

    Research on regional tectonic analysis of Venus equatorial highlands and comparison with Earth-based and Magellan radar images is presented. Over the past two years, the tectonic analysis of Venus has centered on global properties of the planet, in order to understand fundamental aspects of the dynamics of the mantle and lithosphere of Venus. These include studies pertaining to the original constitutive and thermal character of the planet, the evolution of Venus through time, and present-day tectonics. Parameterized convection models of the Earth and Venus were developed. The parameterized convection code was reformulated to model Venus with an initially hydrous mantle to determine how the cold-trap could affect the evolution of the planet.

  16. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    DOE PAGES

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.

  17. Parameterizing the Morse Potential for Coarse-Grained Modeling of Blood Plasma

    PubMed Central

    Zhang, Na; Zhang, Peng; Kang, Wei; Bluestein, Danny; Deng, Yuefan

    2014-01-01

    Multiscale simulations of fluids such as blood represent a major computational challenge of coupling the disparate spatiotemporal scales between molecular and macroscopic transport phenomena characterizing such complex fluids. In this paper, a coarse-grained (CG) particle model is developed for simulating blood flow by modifying the Morse potential, traditionally used in molecular dynamics for modeling vibrating structures. The modified Morse potential is parameterized with effective mass scales for reproducing blood viscous flow properties, including density, pressure, viscosity, compressibility, and characteristic flow dynamics of human blood plasma. The parameterization follows a standard inverse-problem approach in which the optimal micro parameters are systematically searched, by gradually decoupling loosely correlated parameter spaces, to match the macro physical quantities of viscous blood flow. The predictions of this particle-based multiscale model compare favorably to classic viscous flow solutions such as counter-Poiseuille and Couette flows. This demonstrates that such a coarse-grained particle model can be applied to replicate the dynamics of viscous blood flow, with the advantage of bridging the gap between macroscopic flow scales and the cellular scales characterizing blood flow that continuum-based models fail to handle adequately. PMID:24910470
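
    For reference, the unmodified Morse potential has a simple closed form; the sketch below uses arbitrary illustrative parameters rather than the fitted blood-plasma values, with the convention that U vanishes at large separation and equals -De at the equilibrium distance.

        import numpy as np

        def morse(r, De, a, r0):
            """Morse pair potential: zero at infinite separation, -De at r0."""
            e = np.exp(-a * (r - r0))
            return De * (1.0 - e) ** 2 - De

        def morse_force(r, De, a, r0):
            """Radial force -dU/dr (negative = attractive)."""
            e = np.exp(-a * (r - r0))
            return -2.0 * De * a * e * (1.0 - e)

        r = np.linspace(0.8, 3.0, 12)
        print(morse(r, De=1.0, a=2.0, r0=1.0))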

  18. A distributed computing environment with support for constraint-based task scheduling and scientific experimentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.

    1997-04-01

    This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies show that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.

  19. A comprehensive physiologically based pharmacokinetic knowledgebase and web-based interface for rapid model ranking and querying

    EPA Science Inventory

    Published physiologically based pharmacokinetic (PBPK) models from peer-reviewed articles are often well-parameterized, thoroughly-vetted, and can be utilized as excellent resources for the construction of models pertaining to related chemicals. Specifically, chemical-specific pa...

  20. A Density Functional Approach to Polarizable Models: A Kim-Gordon-Response Density Interaction Potential for Molecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabacchi, G; Hutter, J; Mundy, C

    2005-04-07

    A combined linear response-frozen electron density model has been implemented in a molecular dynamics scheme derived from an extended Lagrangian formalism. This approach is based on a partition of the electronic charge distribution into a frozen region described by Kim-Gordon theory, and a response contribution determined by the instantaneous ionic configuration of the system. The method is free from empirical pair-potentials, and the parameterization protocol involves only calculations on properly chosen subsystems. They apply this method to a series of alkali halides in different physical phases and are able to reproduce experimental structural and thermodynamic properties with an accuracy comparable to Kohn-Sham density functional calculations.

  1. Evaluation of Extratropical Cyclone Precipitation in the North Atlantic Basin: An analysis of ERA-Interim, WRF, and two CMIP5 models.

    PubMed

    Booth, James F; Naud, Catherine M; Willison, Jeff

    2018-03-01

    The representation of extratropical cyclone (ETC) precipitation in general circulation models (GCMs) and a weather research and forecasting (WRF) model is analyzed. This work considers the link between ETC precipitation and dynamical strength and tests whether parameterized convection affects this link for ETCs in the North Atlantic Basin. Lagrangian cyclone tracks of ETCs in the ERA-Interim reanalysis (ERAI), the GISS and GFDL CMIP5 models, and WRF at two horizontal resolutions are utilized in a compositing analysis. The 20-km resolution WRF model generates stronger ETCs in terms of surface wind speed and cyclone precipitation. The GCMs and ERAI generate similar composite means and distributions for cyclone precipitation rates, but the GCMs generate weaker cyclone surface winds than ERAI. The amount of cyclone precipitation generated by the convection scheme differs significantly across the datasets, with GISS generating the most, followed by ERAI and then GFDL. The models and reanalysis generate relatively more parameterized convective precipitation when the total cyclone-averaged precipitation is smaller. This is partially due to the contribution of parameterized convective precipitation occurring more often late in the ETC life cycle. For reanalysis and models, precipitation increases with both cyclone moisture and surface wind speed, whether or not the contribution from the parameterized convection scheme is large. This work shows that these different models generate similar total ETC precipitation despite large differences in the parameterized convection, and these differences do not cause unexpected behavior in the sensitivity of ETC precipitation to cyclone moisture or surface wind speed.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud-aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities, and parameterizations which do provide vertical velocities have been subject to limited evaluation against what have until recently been scant observations. Atmospheric observations imply that the distribution of vertical velocities depends on the areas over which the vertical velocities are averaged. Distributions of vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of scale-dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  3. Simulation-based Extraction of Key Material Parameters from Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Alsafi, Huseen; Peninngton, Gray

    Models for the atomic force microscopy (AFM) tip and sample interaction contain numerous material parameters that are often poorly known. This is especially true when dealing with novel material systems or when imaging samples that are exposed to complicated interactions with the local environment. In this work we use Monte Carlo methods to extract sample material parameters from the experimental AFM analysis of a test sample. The parameterized theoretical model that we use is based on the Virtual Environment for Dynamic AFM (VEDA) [1]. The extracted material parameters are then compared with the accepted values for our test sample. Using this procedure, we suggest a method that can be used to successfully determine unknown material properties in novel and complicated material systems. We acknowledge Fisher Endowment Grant support from the Jess and Mildred Fisher College of Science and Mathematics,Towson University.

  4. Precession missile feature extraction using sparse component analysis of radar measurements

    NASA Astrophysics Data System (ADS)

    Liu, Lihua; Du, Xiaoyong; Ghogho, Mounir; Hu, Weidong; McLernon, Des

    2012-12-01

    According to the working mode of the ballistic missile warning radar (BMWR), the radar return from the BMWR is usually sparse. To recognize and identify the warhead, it is necessary to extract the precession frequency and the locations of the scattering centers of the missile. This article first analyzes the radar signal model of the precessing conical missile during flight and develops a sparse dictionary parameterized by the unknown precession frequency. Based on the sparse dictionary, the sparse signal model is then established. A nonlinear least-squares estimation is first applied to roughly extract the precession frequency for the sparse dictionary. Based on the time-segmented radar signal, a sparse component analysis method using the orthogonal matching pursuit algorithm is then proposed to jointly estimate the precession frequency and the scattering centers of the missile. Simulation results illustrate the validity of the proposed method.
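
    The orthogonal matching pursuit step lends itself to a compact sketch; the construction of the precession-parameterized dictionary from the cone signal model is omitted, and D below is any matrix with unit-norm columns (a random stand-in here).

        import numpy as np

        def omp(D, y, n_nonzero):
            """Greedy sparse recovery: solve y ~ D @ x with n_nonzero active atoms."""
            residual = y.copy()
            support = []
            for _ in range(n_nonzero):
                idx = int(np.argmax(np.abs(D.T @ residual)))  # best-correlated atom
                if idx not in support:
                    support.append(idx)
                coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coef           # re-orthogonalized residual
            x = np.zeros(D.shape[1])
            x[support] = coef
            return x

        rng = np.random.default_rng(0)
        D = rng.normal(size=(128, 64)); D /= np.linalg.norm(D, axis=0)
        x_true = np.zeros(64); x_true[[5, 20, 41]] = [1.0, -0.5, 2.0]
        print(np.nonzero(omp(D, D @ x_true, 3))[0])           # recovers atoms 5, 20, 41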

  5. A comprehensive parameterization of heterogeneous ice nucleation of dust surrogate: laboratory study with hematite particles and its application to atmospheric models

    NASA Astrophysics Data System (ADS)

    Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.

    2014-06-01

    A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is critical in order to accurately simulate the ice nucleation processes in cirrus clouds. The surface-scaled ice nucleation efficiencies of hematite particles, expressed as ns, were derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water-subsaturated conditions that were realized by continuously changing temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the two other T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T colder than -60 °C revealed that higher RHice was necessary to maintain constant ns, whereas T may have played a significant role in ice nucleation at T warmer than -50 °C. We implemented the new ns parameterizations into two cloud models to investigate their sensitivity and to compare with existing ice nucleation schemes in simulating cirrus cloud properties. Our results show that the new AIDA-based parameterizations lead to an order of magnitude higher ice crystal concentrations and to inhibition of homogeneous nucleation in colder temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties such as cloud longevity and initiation when compared to previous parameterizations.
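
    In application, a singular ns parameterization enters a model through the frozen fraction of a particle population, f_ice = 1 − exp(−ns·A); the sketch below uses placeholder ns magnitudes and a hypothetical particle size rather than the fitted AIDA expressions.

        import numpy as np

        def frozen_fraction(ns, surface_area):
            """Fraction of particles nucleating ice for ns (m^-2) and area A (m^2)."""
            return 1.0 - np.exp(-ns * surface_area)

        d = 0.5e-6                       # hypothetical hematite diameter [m]
        area = np.pi * d ** 2            # spherical-equivalent surface area
        for ns in (1e9, 1e10, 1e11):     # illustrative ns magnitudes
            print(ns, frozen_fraction(ns, area))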

  6. A Comprehensive Parameterization of Heterogeneous Ice Nucleation of Dust Surrogate: Laboratory Study with Hematite Particles and Its Application to Atmospheric Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiranuma, Naruki; Paukert, Marco; Steinke, Isabelle

    2014-12-10

    A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 °C to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is critical in order to accurately simulate the ice nucleation processes in cirrus clouds. The surface-scaled ice nucleation efficiencies of hematite particles, expressed as ns, were derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water-subsaturated conditions that were realized by continuously changing temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the two other T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T colder than -60 °C revealed that higher RHice was necessary to maintain constant ns, whereas T may have played a significant role in ice nucleation at T warmer than -50 °C. We implemented the new ns parameterizations into two cloud models to investigate their sensitivity and to compare with existing ice nucleation schemes in simulating cirrus cloud properties. Our results show that the new AIDA-based parameterizations lead to an order of magnitude higher ice crystal concentrations and to inhibition of homogeneous nucleation in colder temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties such as cloud longevity and initiation when compared to previous parameterizations.

  7. Importance of Physico-Chemical Properties of Aerosols in the Formation of Arctic Ice Clouds

    NASA Astrophysics Data System (ADS)

    Keita, S. A.; Girard, E.

    2014-12-01

    Ice clouds play an important role in the Arctic weather and climate system, but interactions between aerosols, clouds, and radiation are poorly understood. Consequently, it is essential to fully understand their properties and especially their formation process. Extensive measurements from ground-based sites and satellite remote sensing reveal the existence of two types of ice clouds (TICs) in the Arctic during the polar night and early spring. TIC-1 are composed of non-precipitating, very small (radar-unseen) ice crystals, whereas TIC-2 are detected by both sensors and are characterized by a low concentration of large precipitating ice crystals. It is hypothesized that TIC-2 formation is linked to the acidification of aerosols, which inhibits the ice nucleating properties of ice nuclei (IN). As a result, the IN concentration is reduced in these regions, leading to a smaller concentration of larger ice crystals. Over the past 10 years, several parameterizations of homogeneous and heterogeneous ice nucleation have been developed to reflect the various physical and chemical properties of aerosols. These parameterizations are derived from laboratory studies on aerosols of different chemical compositions. The parameterizations also follow two main approaches: stochastic (nucleation is a probabilistic, time-dependent process) and singular (nucleation occurs at fixed conditions of temperature and humidity and is time-independent). This research aims to better understand the formation process of TICs using newly developed ice nucleation parameterizations. For this purpose, we implement several parameterizations (both approaches) into the Limited Area version of the Global Multiscale Environmental Model (GEM-LAM) and use them to simulate ice clouds observed during the Indirect and Semi-Direct Aerosol Campaign (ISDAC) in Alaska. We use both approaches, but special attention is focused on the new parameterizations of the singular approach. Simulation results for the TIC-2 observed on April 15th and 25th (polluted or acidic cases) and the TIC-1 observed on April 5th (non-polluted case) will be presented.

  8. Multiple Quasi-Equilibria of the ITCZ and the Origin of Monsoon Onset. Part 2: Rotational ITCZ Attractors

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Chen, Baode; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Chao's numerical and theoretical work on multiple quasi-equilibria of the intertropical convergence zone (ITCZ) and the origin of monsoon onset is extended to solve two additional puzzles. One is the highly nonlinear dependence on latitude of the "force" acting on the ITCZ due to Earth's rotation, which makes the multiple quasi-equilibria of the ITCZ and monsoon onset possible. The other is the dramatic difference in such dependence when different cumulus parameterization schemes are used in a model. Such a difference can lead to a switch between a single ITCZ at the equator and a double ITCZ when a different cumulus parameterization scheme is used. Sometimes one of the double ITCZs diminishes and only the other remains, but this can still mean different latitudinal locations for the single ITCZ. A single idea solves both puzzles: the ITCZ is subject to two off-equator attractors, due to Earth's rotation and symmetric with respect to the equator, whose strength and size depend on the cumulus parameterization scheme. The origin of these rotational attractors, explained in Part I, is further discussed. The "force" acting on the ITCZ due to Earth's rotation is the sum of the "forces" of the two attractors. Each attractor exerts on the ITCZ a "force" of simple shape in latitude, but the sum gives a shape that varies strongly with latitude. Also, the strength and the domain of influence of each attractor vary when the cumulus parameterization is changed. This gives rise to the high sensitivity of the "force" shape to the cumulus parameterization. Numerical results from experiments using Goddard's GEOS general circulation model supporting this idea are presented. It is also found that the model results are sensitive to changes outside of the cumulus parameterization. The significance of this study for El Nino forecasting and for tropical forecasting in general is discussed.

  9. Using Machine learning method to estimate Air Temperature from MODIS over Berlin

    NASA Astrophysics Data System (ADS)

    Marzban, F.; Preusker, R.; Sodoudi, S.; Taheri, H.; Allahbakhshi, M.

    2015-12-01

    Land surface temperature (LST) is defined as the temperature of the interface between the Earth's surface and its atmosphere; it is thus a critical variable for understanding land-atmosphere interactions and a key parameter in meteorological and hydrological studies involving energy fluxes. Air temperature (Tair) is one of the most important input variables in spatially distributed hydrological and ecological models. The estimation of near-surface air temperature is useful for a wide range of applications. Some applications, such as traffic or energy management, require Tair data at high spatial and temporal resolution at two meters height above the ground (T2m), sometimes in near-real time. Thus, a parameterization based on boundary layer physical principles was developed to determine air temperature from remote sensing (MODIS) data. Tair is commonly obtained from synoptic measurements at weather stations. However, the derivation of near-surface air temperature from satellite-derived LST is far from straightforward. T2m is not driven directly by the sun but indirectly by LST, so T2m can be parameterized from LST and other variables such as albedo, NDVI, and water vapor. Most previous studies have estimated T2m using simple or advanced statistical approaches, temperature-vegetation-index approaches, and energy-balance approaches, but the main objective of this research is to explore the relationships between T2m and LST in Berlin using artificial intelligence methods, with the aim of identifying key variables and establishing suitable techniques to obtain Tair from satellite products and ground data. Second, we attempted to identify an individual mix of attributes that reveals a particular pattern, to better understand the variation of T2m during day and nighttime over different areas of Berlin. For this purpose, a three-layer feedforward neural network trained with the Levenberg-Marquardt algorithm (LMA) is considered. Considering the different relationships between T2m and LST for different land types enables a better parameterization of the non-linear relation between LST and T2m over Berlin during day and nighttime. The results of the study will be presented and discussed.
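
    A minimal sketch of this regression setup with synthetic stand-ins for the Berlin data; scikit-learn has no Levenberg-Marquardt trainer, so the L-BFGS solver is used as a stand-in, and the hidden-layer size and toy target relation are illustrative choices.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 4))     # columns: LST, albedo, NDVI, water vapor
        t2m = 0.8 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.3, size=1000)  # toy T2m

        model = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs", max_iter=2000)
        model.fit(X[:800], t2m[:800])
        print("held-out R^2:", model.score(X[800:], t2m[800:]))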

  10. EAULIQ: The Next Generation

    NASA Technical Reports Server (NTRS)

    Randall, David A.; Fowler, Laura D.

    1999-01-01

    This report summarizes the design of a new version of the stratiform cloud parameterization called Eauliq; the new version is called Eauliq NG. The key features of Eauliq NG are: (1) a prognostic fractional area covered by stratiform cloudiness, following the approach developed by M. Tiedtke for use in the ECMWF model; (2) separate prognostic thermodynamic variables for the clear and cloudy portions of each grid cell; (3) separate vertical velocities for the clear and cloudy portions of each grid cell, allowing the model to represent some aspects of observed mesoscale circulations; (4) cumulus entrainment from both the clear and cloudy portions of a grid cell, and cumulus detrainment into the cloudy portion only; and (5) the effects of the cumulus-induced subsidence in the cloudy portion of a grid cell on the cloud water and ice there. In this paper we present the mathematical framework of Eauliq NG; a discussion of cumulus effects; a new parameterization of lateral mass exchanges between clear and cloudy regions; and a theory to determine the mesoscale mass circulation, based on the hypothesis that the stratiform clouds remain neutrally buoyant through time and that the mesoscale circulations are the mechanism which makes this possible. An appendix also discusses some time-differencing methods.

  11. Stellar Atmospheric Parameterization Based on Deep Learning

    NASA Astrophysics Data System (ADS)

    Pan, Ru-yang; Li, Xiang-ru

    2017-07-01

    Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, whose node numbers are 3821, 500, 100, 50, and 1, respectively. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model, to automatically estimate three physical parameters: the effective temperature (Teff), the surface gravitational acceleration (lg g), and the metal abundance ([Fe/H]). The results show that the stacked-autoencoder deep neural network achieves good estimation accuracy. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 K for Teff, 0.0058 for lg(Teff/K), 0.1706 for lg(g/(cm·s-2)), and 0.1294 dex for [Fe/H]; on the theoretical spectra, the MAEs are 15.34 K for Teff, 0.0011 for lg(Teff/K), 0.0214 for lg(g/(cm·s-2)), and 0.0121 dex for [Fe/H].
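
    With the quoted layer sizes, the network can be sketched as a plain feedforward regressor; the stacked-autoencoder pretraining used in the paper is omitted here, and the random arrays are placeholders for SDSS flux vectors and catalog labels (one regressor would be trained per parameter).

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        spectra = rng.normal(size=(200, 3821))     # placeholder normalized flux vectors
        teff = rng.uniform(4000.0, 8000.0, 200)    # placeholder Teff labels [K]

        net = MLPRegressor(hidden_layer_sizes=(500, 100, 50),  # 3821-500-100-50-1
                           activation="relu", max_iter=50)
        net.fit(spectra, teff)                     # repeat for lg g and [Fe/H]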

  12. Biological engineering applications of feedforward neural networks designed and parameterized by genetic algorithms.

    PubMed

    Ferentinos, Konstantinos P

    2005-09-01

    Two neural network (NN) applications in the field of biological engineering are developed, designed, and parameterized by an evolutionary method based on genetic algorithms (GAs). The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect, or 'weak specification', representation was used for encoding NN topologies and training parameters into genes of the GA. Some a priori knowledge of the demands on network topology for specific application cases is required by this approach, so that the infinite search space of the problem is limited to a reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA also encoded the type of activation functions in both hidden and output nodes of the NN and the type of minimization algorithm used by the backpropagation algorithm for training the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach usually used for these tasks.

  13. Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone

    NASA Astrophysics Data System (ADS)

    Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.

    2017-12-01

    The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated with their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high-resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse-engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high-resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show that (1) estimated transition probabilities agree with simulated values and (2) using the SMM with the estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs; the simulated BTCs fall within the predicted range. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
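
    The correlated random walk at the core of the SMM is easy to sketch; the transition matrix, class velocities, and step length below are illustrative placeholders for the quantities the method estimates from two measured BTCs.

        import numpy as np

        rng = np.random.default_rng(0)
        v_class = np.array([0.1, 1.0, 10.0])   # representative velocity-class speeds
        P = np.array([[0.7, 0.2, 0.1],         # large diagonal terms = strong correlation
                      [0.2, 0.6, 0.2],
                      [0.1, 0.2, 0.7]])
        dx, n_steps, n_particles = 1.0, 50, 2000

        state = rng.integers(0, 3, size=n_particles)
        arrival = np.zeros(n_particles)
        for _ in range(n_steps):
            arrival += dx / v_class[state]                       # time to cross this step
            state = np.array([rng.choice(3, p=P[s]) for s in state])
        print("median breakthrough time:", np.median(arrival))   # one point on the BTC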

  14. Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

    NASA Astrophysics Data System (ADS)

    Yun, Yuxing; Penner, Joyce E.

    2012-04-01

    A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux at top of atmosphere and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

  15. Adjustment of interaural time difference in head related transfer functions based on listeners' anthropometry and its effect on sound localization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yôiti; Watanabe, Kanji; Iwaya, Yukio; Gyoba, Jiro; Takane, Shouichi

    2005-04-01

    Because head-related transfer functions (HRTFs), which govern subjective sound localization, show strong individuality, sound localization systems based on synthesis of HRTFs require suitable HRTFs for individual listeners. However, it is impractical to obtain HRTFs for all listeners by measurement. Improving sound localization by adjusting non-individualized HRTFs to a specific listener based on that listener's anthropometry might be a practical method. This study first developed a new method to estimate interaural time differences (ITDs) using HRTFs. Then correlations between ITDs and anthropometric parameters were analyzed using the canonical correlation method. Results indicated that parameters relating to head size and to shoulder and ear positions are significant. Consequently, we attempted to express ITDs based on listeners' anthropometric data. In this process, the change of ITDs as a function of azimuth angle was parameterized as a sum of sine functions. The parameters were then analyzed using multiple regression analysis, in which the anthropometric parameters were used as explanatory variables. The predicted, or individualized, ITDs were installed in the non-individualized HRTFs to evaluate sound localization performance. Results showed that individualization of ITDs improved horizontal sound localization.
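
    The sum-of-sines step can be sketched as an ordinary least-squares fit; the "measured" ITD curve and the choice of three sine orders are illustrative assumptions, and in the study the resulting coefficients are what get regressed on anthropometry.

        import numpy as np

        az = np.deg2rad(np.arange(0, 360, 5))                 # azimuth samples
        itd = 7.0e-4 * np.sin(az) + 5.0e-5 * np.sin(3 * az)   # stand-in measured ITDs [s]

        orders = (1, 2, 3)
        A = np.column_stack([np.sin(k * az) for k in orders]) # design matrix of sines
        coef, *_ = np.linalg.lstsq(A, itd, rcond=None)
        print(dict(zip(orders, coef)))                        # per-order sine amplitudes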

  16. Optimization of the Upper Surface of Hypersonic Vehicle Based on CFD Analysis

    NASA Astrophysics Data System (ADS)

    Gao, T. Y.; Cui, K.; Hu, S. C.; Wang, X. P.; Yang, G. W.

    2011-09-01

    For hypersonic vehicles, the demands on aerodynamic performance are intense; it is therefore important to optimize the shape of the vehicle to meet project requirements. Shape optimization is a key technology for improving the performance of a hypersonic vehicle. Starting from an existing vehicle, the upper surface of a simplified hypersonic vehicle was optimized to obtain a shape that meets the project demands. At the cruise condition, the upper surface was parameterized with the B-spline curve method. An incremental parametric method and local mesh reconstruction were applied. The whole flow field was calculated and the aerodynamic performance of the craft was obtained with computational fluid dynamics (CFD). The vehicle shape was then optimized to achieve the maximum lift-to-drag ratio at angles of attack of 3°, 4°, and 5°. The results will provide a reference for practical design.

  17. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES

    Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...

    2015-06-30

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to the number of subcolumns.

  18. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES

    Thayer-Calder, Katherine; Gettelman, A.; Craig, Cheryl; ...

    2015-12-01

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. In conclusion, the new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to the number of subcolumns.
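
    The Monte Carlo interface can be sketched in a few lines: draw subcolumn samples from an assumed subgrid PDF, evaluate a nonlinear process rate on each, and average. The Gaussian PDF, the toy autoconversion power law, and all numbers are illustrative assumptions, not the CAM implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sub = 50                                   # subcolumns per grid box

        qc_mean, qc_std = 2.0e-4, 1.5e-4             # grid-mean cloud liquid and subgrid spread
        qc = np.clip(rng.normal(qc_mean, qc_std, n_sub), 0.0, None)  # subcolumn samples

        def autoconversion(qc):                      # toy nonlinear process rate
            return 1350.0 * qc ** 2.47

        rate_mc = autoconversion(qc).mean()          # Monte Carlo estimate over subcolumns
        rate_gm = autoconversion(qc_mean)            # naive grid-mean estimate
        print(rate_mc / rate_gm)                     # nonlinearity bias the subcolumns capture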

  19. Evaluating Cloud Initialization in a Convection-permitting NWP Model

    NASA Astrophysics Data System (ADS)

    Li, Jia; Chen, Baode

    2015-04-01

    In general, to avoid the "double counting of precipitation" problem, it has been common practice in convection-permitting NWP models to turn off the convective parameterization. However, without any cloud information in the initial conditions, the occurrence of precipitation can be delayed due to the spin-up of the cloud field, i.e., the microphysical variables. In this study, we utilized the complex cloud analysis package from the Advanced Regional Prediction System (ARPS) to adjust the initial states of the model for water substances such as cloud water, cloud ice, and rain water, that is, to initialize the microphysical variables (hydrometeors), mainly based on radar reflectivity observations. Using the Advanced Research WRF (ARW) model, numerical experiments with and without cloud initialization and convective parameterization were carried out at grey-zone resolutions (i.e., 1, 3, and 9 km). The results from the experiments without convective parameterization indicate that model initialization with radar reflectivity can significantly reduce the spin-up time and produce accurate precipitation from the initial time. In addition, it helps to improve the location and intensity of predicted precipitation. At grey-zone resolutions, using the cumulus convective parameterization scheme (without radar data) cannot produce realistic precipitation at early times. Issues related to microphysical parameterization associated with cloud initialization are also discussed.

  20. Vertical structure of mean cross-shore currents across a barred surf zone

    USGS Publications Warehouse

    Haines, John W.; Sallenger, Asbury H.

    1994-01-01

    Mean cross-shore currents observed across a barred surf zone are compared to model predictions. The model is based on a simplified momentum balance with a turbulent boundary layer at the bed. Turbulent exchange is parameterized by an eddy viscosity formulation, with the eddy viscosity Av independent of time and the vertical coordinate. Mean currents result from gradients due to wave breaking and shoaling, and from the presence of a mean setup of the free surface. Descriptions of the wave field are provided by the wave transformation model of Thornton and Guza [1983]. The wave transformation model adequately reproduces the observed wave heights across the surf zone. The mean current model successfully reproduces the observed cross-shore flows. Both observations and predictions show predominantly offshore flow, with onshore flow restricted to a relatively thin surface layer. Successful application of the mean flow model requires an eddy viscosity that varies horizontally across the surf zone. Attempts are made to parameterize this variation, with some success. The data do not discriminate between the alternative parameterizations proposed. The overall variability in eddy viscosity suggested by the model fitting should be resolvable by field measurements of the turbulent stresses. Consistent shortcomings of the parameterizations, and of the overall modeling effort, suggest avenues for further development and data collection.

  1. Retrieving rupture history using waveform inversions in time sequence

    NASA Astrophysics Data System (ADS)

    Yi, L.; Xu, C.; Zhang, X.

    2017-12-01

    The rupture history of large earthquakes is generally reconstructed by waveform inversion using seismological waveform records. In the waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized using the multi-time-window method, e.g., mutually overlapping triangular functions. The forward waveform of each sub-fault is then synthesized by convolving its source time function with its Green's function. According to the superposition principle, the forward waveforms generated across the fault plane are summed, after aligning arrival times, to match the recorded waveforms. The slip history is then retrieved using the waveform inversion method by superposing all forward waveforms for each corresponding seismological record. Besides being separable by sub-fault, these forward waveforms are also superimposed gradually and sequentially in the recorded waveforms. We therefore propose the idea that the rupture model may be separable into sequential rupture times. According to the constrained-waveform-length method emphasized in our previous work, the length of the waveforms used in the inversion is objectively constrained by the rupture velocity and rise time. One essential prior condition is the predetermined fault plane, which limits the duration of the rupture; that is, the waveform inversion is restricted to a pre-set rupture duration. We therefore propose a strategy to invert for the rupture process sequentially, using progressively shifted rupture times as the rupture front expands across the fault plane. We have designed a synthetic inversion to test the feasibility of the method. Our test results show the promise of this idea, which requires further investigation.
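
    The linear multi-time-window parameterization can be sketched as a design-matrix construction: each column is a sub-fault Green's function convolved with one triangle window shifted to that sub-fault's rupture onset, and the slips then follow from (non-negative) least squares. The shapes, timings, and half-width overlap convention below are illustrative assumptions.

        import numpy as np

        def triangle(n, half_width):
            t = np.arange(n)
            return np.maximum(0.0, 1.0 - np.abs(t - half_width) / half_width)

        def design_matrix(greens, rupture_onsets, n_windows, half_width, n_t):
            """greens: list of per-sub-fault Green's functions, each of length n_t."""
            cols = []
            for g, onset in zip(greens, rupture_onsets):
                for w in range(n_windows):
                    stf = np.zeros(n_t)
                    start = onset + w * half_width          # windows overlap by half a width
                    tri = triangle(2 * half_width + 1, half_width)
                    stop = min(start + tri.size, n_t)
                    stf[start:stop] = tri[: stop - start]
                    cols.append(np.convolve(g, stf)[:n_t])  # sub-fault contribution
            return np.column_stack(cols)                    # recorded ~ G @ slips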

  2. A factorial assessment of the sensitivity of the BATS land-surface parameterization scheme. [BATS (Biosphere-Atmosphere Transfer Scheme)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henderson-Sellers, A.

    Land-surface schemes developed for incorporation into global climate models include parameterizations that are not yet fully validated and depend upon the specification of a large (20-50) number of ecological and soil parameters, the values of which are not yet well known. There are two methods of investigating the sensitivity of a land-surface scheme to prescribed values: simple one-at-a-time changes or factorial experiments. Factorial experiments offer information about interactions between parameters and are thus a more powerful tool. Here the results of a suite of factorial experiments are reported. These are designed (i) to illustrate the usefulness of this methodology and (ii) to identify factors important to the performance of complex land-surface schemes. The Biosphere-Atmosphere Transfer Scheme (BATS) is used and its sensitivity is considered (a) to prescribed ecological and soil parameters and (b) to atmospheric forcing used in the off-line tests undertaken. Results indicate that the most important atmospheric forcings are mean monthly temperature and the interaction between mean monthly temperature and total monthly precipitation, although fractional cloudiness and other parameters are also important. The most important ecological parameters are vegetation roughness length, soil porosity, and a factor describing the sensitivity of the stomatal resistance of vegetation to the amount of photosynthetically active solar radiation and, to a lesser extent, soil and vegetation albedos. Two-factor interactions including vegetation roughness length are more important than many of the 23 specified single factors. The results of factorial sensitivity experiments such as these could form the basis for intercomparison of land-surface parameterization schemes and for field experiments and satellite-based observation programs aimed at improving evaluation of important parameters.
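
    A toy two-level factorial design of the kind described, with a stand-in response function playing the role of an off-line BATS run; main effects and a two-factor interaction are estimated as simple contrasts.

        import numpy as np
        from itertools import product

        def run_model(x):                  # hypothetical response with an interaction term
            rough, poros, albedo = x
            return 2.0 * rough + 1.0 * poros + 0.5 * rough * poros - 0.3 * albedo

        designs = np.array(list(product([-1, 1], repeat=3)))     # full 2^3 design
        y = np.array([run_model(x) for x in designs])

        main_effects = designs.T @ y / (len(y) / 2)              # average high-low contrast
        print(dict(zip(["roughness", "porosity", "albedo"], main_effects)))

        ia_12 = (designs[:, 0] * designs[:, 1]) @ y / (len(y) / 2)
        print("roughness x porosity interaction:", ia_12)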

  3. A coupled two-dimensional main chain torsional potential for protein dynamics: generation and implementation.

    PubMed

    Li, Yongxiu; Gao, Ya; Zhang, Xuqiang; Wang, Xingyu; Mou, Lirong; Duan, Lili; He, Xiao; Mei, Ye; Zhang, John Z H

    2013-09-01

    Main chain torsions of alanine dipeptide are parameterized into coupled two-dimensional Fourier expansions based on quantum mechanical (QM) calculations at the M06-2X/aug-cc-pVTZ//HF/6-31G** level. The solvation effect is considered by employing the polarizable continuum model. Utilization of the M06-2X functional leads to a potential energy surface that is comparable to or even better than the MP2 level, but with much less computational demand. Parameterization of the 2D expansions is against the full main chain torsion space instead of just a few low-energy conformations. This procedure is similar to that for the development of the AMBER03 force field, except that a unique weighting factor was assigned to all the grid points. To avoid inconsistency between the quantum mechanical calculations and molecular modeling, the model peptide is further optimized at the molecular mechanics level with the main chain dihedral angles fixed before the conformational energy is calculated at the molecular mechanics level at each grid point, during which the generalized Born model is employed. The difference in solvation models at the quantum mechanics and molecular mechanics levels makes this parameterization procedure less straightforward. All force field parameters other than the main chain torsions are taken from the existing AMBER force field. With these new main chain torsion terms, we have studied the main chain dihedral distributions of ALA dipeptide and pentapeptide in aqueous solution. The results demonstrate that the 2D main chain torsion is effective in delineating the energy variation associated with rotations along main chain dihedrals. This work indicates the necessity of a more accurate description of main chain torsions in the future development of ab initio force fields, and it also raises a challenge to the development of quantum mechanical methods, especially quantum mechanical solvation models.
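
    A minimal sketch of evaluating a coupled two-dimensional Fourier expansion for a main chain torsion energy E(phi, psi); the truncation order and the coefficients below are random placeholders, not the fitted parameters:

    import numpy as np

    # Cross terms in (phi, psi) are what make the expansion "coupled": they
    # capture the correlation between the two main chain dihedrals that
    # separable one-dimensional torsion terms miss.
    order = 4
    rng = np.random.default_rng(1)
    A = rng.standard_normal((order, order)) * 0.5   # cos-cos coefficients (kcal/mol)
    B = rng.standard_normal((order, order)) * 0.5   # cos-sin
    C = rng.standard_normal((order, order)) * 0.5   # sin-cos
    D = rng.standard_normal((order, order)) * 0.5   # sin-sin

    def torsion_energy(phi, psi):
        """Coupled 2D Fourier expansion evaluated at dihedrals phi, psi (radians)."""
        e = 0.0
        for m in range(order):
            for n in range(order):
                e += (A[m, n] * np.cos(m * phi) * np.cos(n * psi)
                      + B[m, n] * np.cos(m * phi) * np.sin(n * psi)
                      + C[m, n] * np.sin(m * phi) * np.cos(n * psi)
                      + D[m, n] * np.sin(m * phi) * np.sin(n * psi))
        return e

    print(torsion_energy(-1.05, 2.62))   # e.g., near the beta region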

  4. Stochastic parameterization for light absorption by internally mixed BC/dust in snow grains for application to climate models

    NASA Astrophysics Data System (ADS)

    Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.

    2014-06-01

    A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that the snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusion of BC/dust significantly reduces snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.

  5. Improving the Predictability of Severe Water Levels along the Coasts of Marginal Seas

    NASA Astrophysics Data System (ADS)

    Ridder, N. N.; de Vries, H.; van den Brink, H.; De Vries, H.

    2016-12-01

    Extreme water levels can lead to catastrophic consequences with severe societal and economic repercussions. Particularly vulnerable are countries that are largely situated below sea level. To support and optimize forecast models, as well as future adaptation efforts, this study assesses the modeled contribution of storm surges and astronomical tides to total water levels under different air-sea momentum transfer parameterizations in a numerical surge model (WAQUA/DCSMv5) of the North Sea. It particularly focuses on the implications for the representation of extreme and rapidly recurring severe water levels over the past decades based on the example of the Netherlands. For this, WAQUA/DCSMv5, which is currently used to forecast coastal water levels in the Netherlands, is forced with ERA Interim reanalysis data. Model results are obtained from two different methodologies to parameterize air-sea momentum transfer. The first calculates the governing wind stress forcing using a drag coefficient derived from the conventional approach of wind speed dependent Charnock constants. The other uses instantaneous wind stress from the parameterization of the quasi-linear theory applied within the ECMWF wave model which is expected to deliver a more realistic forcing. The performance of both methods is tested by validating the model output with observations, paying particular attention to their ability to reproduce rapidly succeeding high water levels and extreme events. In a second step, the common features of and connections between these events are analyzed. The results of this study will allow recommendations for the improvement of water level forecasts within marginal seas and support decisions by policy makers. Furthermore, they will strengthen the general understanding of severe and extreme water levels as a whole and help to extend the currently limited knowledge about clustering events.
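
    For the first methodology, a hedged sketch of the conventional closure: the Charnock relation makes the sea surface roughness depend on the friction velocity, so the neutral 10 m drag coefficient is obtained by fixed-point iteration (the Charnock constant below is a typical literature value, not necessarily the one used in WAQUA/DCSMv5):

    import math

    KAPPA, G, Z = 0.4, 9.81, 10.0    # von Karman constant, gravity (m s-2), reference height (m)

    def drag_coefficient(u10, charnock=0.014, n_iter=20):
        """Neutral 10 m drag coefficient from a wind-speed-dependent Charnock closure."""
        cd = 1.2e-3                               # first guess
        for _ in range(n_iter):
            ustar = math.sqrt(cd) * u10           # friction velocity (m s-1)
            z0 = charnock * ustar**2 / G          # Charnock roughness length (m)
            cd = (KAPPA / math.log(Z / z0))**2
        return cd

    # Wind stress forcing the surge model: tau = rho_air * CD * U10**2
    tau = lambda u10, rho=1.25: rho * drag_coefficient(u10) * u10**2   # N m-2
    print(drag_coefficient(20.0), tau(20.0))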

  6. Stochastic Parameterization for Light Absorption by Internally Mixed BC/dust in Snow Grains for Application to Climate Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liou, K. N.; Takano, Y.; He, Cenlin

    2014-06-27

    A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo reduces more in the case of multiple inclusion of BC/dust compared to that of an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.

  7. Extensively Parameterized Mutation-Selection Models Reliably Capture Site-Specific Selective Constraint.

    PubMed

    Spielman, Stephanie J; Wilke, Claus O

    2016-11-01

    The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development.

  8. Automated Simplification of Full Chemical Mechanisms

    NASA Technical Reports Server (NTRS)

    Norris, A. T.

    1997-01-01

    A code has been developed to automatically simplify full chemical mechanisms. The method employed is based on the Intrinsic Low Dimensional Manifold (ILDM) method of Maas and Pope, a dynamical-systems approach to the simplification of large chemical kinetic mechanisms. By identifying low-dimensional attracting manifolds, the method allows complex full mechanisms to be parameterized by just a few variables, in effect generating reduced chemical mechanisms by an automatic procedure. The resulting mechanisms, however, still retain all the species used in the full mechanism. Full and skeletal mechanisms for various fuels are simplified to a two-dimensional manifold, and the resulting mechanisms are found to compare well with the full mechanisms and show significant improvement over global one-step mechanisms, such as those of Westbrook and Dryer. In addition, by using an ILDM reaction mechanism in a CFD code, a considerable improvement in turn-around time can be achieved.

  9. Measured and parameterized energy fluxes estimated for Atlantic transects of RV Polarstern

    NASA Astrophysics Data System (ADS)

    Bumke, Karl; Macke, Andreas; Kalisch, John; Kleta, Henry

    2013-04-01

    Even today, energy fluxes over the oceans are difficult to assess. For example, the relative paucity of evaporation observations and the uncertainties of currently employed empirical approaches lead to large uncertainties in evaporation products over the ocean (e.g., Large and Yeager, 2009). Within the framework of OCEANET (Macke et al., 2010), we performed such measurements in recent years on Atlantic transects between Bremerhaven (Germany) and Cape Town (South Africa) or Punta Arenas (Chile) on board RV Polarstern. The basic measurements of sensible and latent heat fluxes are inertial-dissipation flux estimates (e.g., Dupuis et al., 1997) and measurements of the bulk variables. Turbulence measurements included a sonic anemometer and an infrared hygrometer, both mounted in the crow's nest. Mean meteorological sensors were those of the ship's operational measurement system. The global radiation and the downward terrestrial radiation were measured on the OCEANET container placed on the monkey island. About 1000 time series of 1 h length were analyzed to derive bulk transfer coefficients for the fluxes of sensible and latent heat. The bulk transfer coefficients were applied to the ship's meteorological data to derive the heat fluxes at the sea surface. The reflected solar radiation was estimated from measured global radiation, and the upward terrestrial radiation was derived from the skin temperature according to the Stefan-Boltzmann law. Parameterized heat fluxes were compared to the widely used COARE parameterization (Fairall et al., 2003); the agreement is excellent. Measured and parameterized heat and radiation fluxes together gave the total energy budget at the air-sea interface. As expected, the mean total flux is positive, but there are also areas where it is negative, indicating an energy loss by the ocean. It could be shown that the variations in the energy budget are mainly due to insolation and evaporation. A comparison between the mean values of measured and parameterized sensible and latent heat fluxes shows that the data are suitable for validating satellite-derived fluxes at the sea surface and reanalysis data. References: Dupuis, H., P. K. Taylor, A. Weill, and K. Katsaros, 1997: Inertial dissipation method applied to derive turbulent fluxes over the ocean during the SOFIA/ASTEX and SEMAPHORE experiments. J. Geophys. Res., 102(C9), 21115-21129. Fairall, C. W., E. F. Bradley, J. E. Hare, A. A. Grachev, and J. B. Edson, 2003: Bulk parameterization of air-sea fluxes: Updates and verification for the COARE algorithm. J. Climate, 16, 571-591. Large, W. G., and S. G. Yeager, 2009: The global climatology of an interannually varying air-sea flux data set. Climate Dynamics, 33, 341-364. Macke, A., J. Kalisch, Y. Zoll, and K. Bumke, 2010: Radiative effects of the cloudy atmosphere from ground and satellite based observations. EPJ Web of Conferences, 9, 83-94.
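
    A hedged sketch of the kind of bulk formulae in which such transfer coefficients are applied, together with the Stefan-Boltzmann law for the upward terrestrial radiation; the coefficient values are typical magnitudes, not the ones fitted from the Polarstern data:

    RHO, CP, LV, SIGMA = 1.25, 1004.0, 2.5e6, 5.67e-8   # air density, cp, latent heat, Stefan-Boltzmann

    def sensible_heat(u, t_sea, t_air, ch=1.1e-3):
        """Bulk sensible heat flux (W m-2) from wind speed and sea-air temperature difference."""
        return RHO * CP * ch * u * (t_sea - t_air)

    def latent_heat(u, q_sea, q_air, ce=1.2e-3):
        """Bulk latent heat flux (W m-2) from wind speed and specific humidity difference."""
        return RHO * LV * ce * u * (q_sea - q_air)

    def upward_longwave(t_skin_kelvin, emissivity=0.98):
        """Upward terrestrial radiation (W m-2) from skin temperature via Stefan-Boltzmann."""
        return emissivity * SIGMA * t_skin_kelvin**4

    print(sensible_heat(8.0, 15.0, 13.5),
          latent_heat(8.0, 0.011, 0.008),
          upward_longwave(288.0))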

  10. Pion and Kaon Lab Frame Differential Cross Sections for Intermediate Energy Nucleus-Nucleus Collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Blattnig, Steve R.

    2008-01-01

    Space radiation transport codes require accurate models for hadron production in intermediate energy nucleus-nucleus collisions. Codes require cross sections to be written in terms of lab frame variables and it is important to be able to verify models against experimental data in the lab frame. Several models are compared to lab frame data. It is found that models based on algebraic parameterizations are unable to describe intermediate energy differential cross section data. However, simple thermal model parameterizations, when appropriately transformed from the center of momentum to the lab frame, are able to account for the data.
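
    A hedged sketch of the frame transformation involved: the invariant cross section E d^3(sigma)/dp^3 takes the same value in every frame, so a thermal center-of-momentum (CM) parameterization can be evaluated at lab-frame kinematics by boosting into the CM frame (the thermal form and its constants A and T are illustrative placeholders):

    import math

    M_PION = 0.1396        # pion mass (GeV)

    def thermal_cm(e_cm, A=1.0, T=0.080):
        """Illustrative thermal fit in the CM frame: invariant cross section ~ A * exp(-E*/T)."""
        return A * math.exp(-e_cm / T)

    def invariant_xsec_lab(p_lab, theta_lab, beta_cm, m=M_PION):
        """Boost lab-frame (E, p_z) into the CM frame along the beam axis, then
        evaluate the CM parameterization; by invariance this is also the lab value."""
        e_lab = math.sqrt(p_lab**2 + m**2)
        pz = p_lab * math.cos(theta_lab)            # transverse momentum is unchanged by the boost
        gamma = 1.0 / math.sqrt(1.0 - beta_cm**2)
        e_cm = gamma * (e_lab - beta_cm * pz)
        return thermal_cm(e_cm)

    print(invariant_xsec_lab(p_lab=0.5, theta_lab=0.3, beta_cm=0.7))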

  11. On the Use of CAD and Cartesian Methods for Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.

    2004-01-01

    The objective of this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and focus on the following issues: 1) a component-based geometry parameterization approach using parametric CAD models and CAPRI, with a novel geometry server that addresses the issue of parallel efficiency while only sparingly consuming CAD resources; and 2) the use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems, including a study of the influence of noise on the optimization methods. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.

  12. The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor

    DTIC Science & Technology

    2015-06-13

    The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor. Christopher Celio, David Patterson, and Krste Asanović, University of California, Berkeley, California 94720. BOOM is a synthesizable, parameterized, superscalar out-of-order RISC-V core designed to serve as the prototypical baseline processor...

  13. Effectively parameterizing dissipative particle dynamics using COSMO-SAC: A partition coefficient study

    NASA Astrophysics Data System (ADS)

    Saathoff, Jonathan

    2018-04-01

    Dissipative Particle Dynamics (DPD) provides a tool for studying phase behavior and interfacial phenomena for complex mixtures and macromolecules. Methods to quickly and automatically parameterize DPD greatly increase its effectiveness. One such method is to map predicted activity coefficients derived from COSMO-SAC onto DPD parameter sets. However, there are serious limitations to the accuracy of this mapping, including the inability of single DPD beads to reproduce asymmetric infinite-dilution activity coefficients, the loss of precision when reusing parameters for different molecular fragments, and the error due to bonding beads together. This report describes these effects in quantitative detail and provides methods that mitigate much of their deleterious impact, including a novel approach to remove the errors caused by bonding DPD beads together. Using these methods, logarithmic hexane/water partition coefficients were calculated for 61 molecules. The root-mean-squared error for these calculations was determined to be 0.14, a very low value, with respect to the final mapping procedure. Cognizance of the above limitations can greatly enhance the predictive power of DPD.
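
    A hedged sketch of one common route from predicted activity coefficients to DPD repulsion parameters, assuming a symmetric Flory-Huggins estimate and the Groot-Warren linear relation at bead density rho = 3; COSMO-SAC itself is not reproduced here, and ln_gamma_inf is treated as a given input:

    import math

    A_II = 25.0          # like-like repulsion at rho = 3 (Groot-Warren choice)
    CHI_PER_DA = 0.286   # chi ~= 0.286 * (a_ij - a_ii) at rho = 3 (from memory; check before use)

    def chi_from_activity(ln_gamma_inf):
        """Symmetric Flory-Huggins estimate: ln(gamma_inf) ~= chi for equal-size beads.
        Note the abstract's caveat: a single chi cannot reproduce strongly
        asymmetric infinite-dilution activity coefficients."""
        return ln_gamma_inf

    def dpd_repulsion(ln_gamma_inf):
        """Unlike-pair repulsion a_ij mapped from an infinite-dilution activity coefficient."""
        return A_II + chi_from_activity(ln_gamma_inf) / CHI_PER_DA

    print(dpd_repulsion(math.log(50.0)))   # e.g., a strongly non-ideal bead pair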

  14. Developing a Physiologically-Based Pharmacokinetic Model Knowledgebase in Support of Provisional Model Construction

    EPA Science Inventory

    Developing physiologically-based pharmacokinetic (PBPK) models for chemicals can be resource-intensive, as neither chemical-specific parameters nor in vivo pharmacokinetic data are easily available for model construction. Previously developed, well-parameterized, and thoroughly-v...

  15. Characterizing the degree of convective clustering using radar reflectivity and its application to evaluating model simulations

    NASA Astrophysics Data System (ADS)

    Cheng, W. Y.; Kim, D.; Rowe, A.; Park, S.

    2017-12-01

    Despite the impact of mesoscale convective organization on the properties of convection (e.g., mixing between updrafts and the environment), parameterizing the degree of convective organization has only recently been attempted in cumulus parameterization schemes (e.g., the Unified Convection Scheme, UNICON). Additionally, challenges remain in determining the degree of convective organization from observations and in comparing it directly with the organization metrics in model simulations. This study addresses the need to objectively quantify the degree of mesoscale convective organization using high-quality S-PolKa radar data from the DYNAMO field campaign. One of the most noticeable aspects of mesoscale convective organization in radar data is the degree of convective clustering, which can be characterized by the number and size distribution of convective echoes and the distances between them. We propose a method of defining contiguous convective echoes (CCEs) using precipitating convective echoes identified by a rain-type classification algorithm, as shown in the sketch below. Two classification algorithms, Steiner et al. (1995) and Powell et al. (2016), are tested and evaluated against high-resolution WRF simulations to determine which method better represents the degree of convective clustering. Our results suggest that the CCEs based on Powell et al.'s algorithm better represent the dynamical properties of the convective updrafts and thus provide the basis of a metric for convective organization. Furthermore, through comparison with the observational data, the WRF simulations driven by the DYNAMO large-scale forcing, applied similarly to UNICON single-column model simulations, will allow us to evaluate the ability of both WRF and UNICON to simulate convective clustering. This evaluation is based on the physical processes that are explicitly represented in WRF and UNICON, including the mechanisms leading to convective clustering and the feedback on the convective properties.
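
    A hedged sketch of how CCEs can be defined once a convective/stratiform mask exists: connected-component labeling groups contiguous convective pixels, after which the echo count, sizes, and nearest-neighbor distances summarize the degree of clustering (the mask below is synthetic; in practice it would come from a rain-type classification such as Steiner et al. (1995) or Powell et al. (2016)):

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(2)
    convective = rng.random((200, 200)) > 0.97            # stand-in convective mask

    labels, n_cce = ndimage.label(convective)             # contiguous convective echoes
    sizes = ndimage.sum(convective, labels, range(1, n_cce + 1))
    centroids = np.array(ndimage.center_of_mass(convective, labels,
                                                range(1, n_cce + 1)))

    # Mean nearest-neighbor distance between echo centroids as a simple clustering metric
    d = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    print(n_cce, sizes.mean(), d.min(axis=1).mean())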

  16. Design Optimization of Vena Cava Filters: An application to dual filtration devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, M A; Wang, S L; Diachin, D P

    Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.

  17. Sea ice-atmosphere interaction: Application of multispectral satellite data in polar surface energy flux estimates

    NASA Technical Reports Server (NTRS)

    Steffen, K.; Schweiger, A.; Maslanik, J.; Key, J.; Weaver, R.; Barry, R.

    1990-01-01

    The application of multi-spectral satellite data to estimate polar surface energy fluxes is addressed. To what accuracy, and over which geographic areas, large-scale energy budgets can be estimated is investigated based upon a combination of available remote sensing and climatological data sets. The general approach was to: (1) formulate parameterization schemes for the appropriate sea ice energy budget terms based upon the remotely sensed and/or in-situ data sets; (2) conduct sensitivity analyses using as input both natural variability (observed data in regional case studies) and theoretical variability based upon energy flux model concepts; (3) assess the applicability of these parameterization schemes to both regional and basin-wide energy balance estimates using remote sensing data sets; and (4) assemble multi-spectral, multi-sensor data sets for at least two regions of the Arctic Basin and possibly one region of the Antarctic. The type of data needed for a basin-wide assessment is described, and the temporal coverage of these data sets is determined by data availability and by the needs defined by the parameterization schemes. The topics are as follows: (1) heat flux calculations from SSM/I and LANDSAT data in the Bering Sea; (2) energy flux estimation using passive microwave data; (3) fetch and stability sensitivity estimates of turbulent heat flux; and (4) a surface temperature algorithm.

  18. Visualization in hydrological and atmospheric modeling and observation

    NASA Astrophysics Data System (ADS)

    Helbig, C.; Rink, K.; Kolditz, O.

    2013-12-01

    In recent years, visualization of geoscientific and climate data has become increasingly important due to challenges such as climate change, flood prediction, or the development of water management schemes for arid and semi-arid regions. Models for simulations based on such data often have a large number of heterogeneous input data sets, ranging from remote sensing data and geometric information (such as GPS data) to sensor data from specific observation sites. Data integration using such information is not straightforward, and a large number of potential problems may occur due to artifacts, inconsistencies between data sets, or errors from incorrectly calibrated or stained measurement devices. Algorithms to automatically detect many such problems are often numerically expensive or difficult to parameterize. In contrast, combined visualization of various data sets is often a surprisingly efficient means for an expert to detect artifacts or inconsistencies as well as to discuss properties of the data. Therefore, the development of general visualization strategies for atmospheric or hydrological data will often support researchers during assessment and preprocessing of the data for model setup. When investigating specific phenomena, visualization is vital for assessing the progress of the ongoing simulation during runtime as well as for evaluating the plausibility of the results. We propose a number of such strategies based on established visualization methods that (i) are applicable to a large range of different types of data sets, (ii) are computationally inexpensive enough to allow application to time-dependent data, and (iii) can be easily parameterized based on the specific focus of the research. Examples include the highlighting of certain aspects of complex data sets using, for example, an application-dependent parameterization of glyphs, iso-surfaces, or streamlines. In addition, we employ basic rendering techniques allowing affine transformations, changes in opacity, and variation of transfer functions. We found that similar strategies can be applied to hydrological and atmospheric data, such as the use of streamlines for visualization of wind or fluid flow, or iso-surfaces as indicators of groundwater recharge levels in the subsurface or levels of humidity in the atmosphere. We applied these strategies to a wide range of hydrological and climate applications such as groundwater flow, distribution of chemicals in water bodies, development of convection cells in the atmosphere, and heat flux at the earth's surface. Results have been evaluated in discussions with experts from hydrogeology and meteorology.

  19. Discovering shared segments on the migration route of the bar-headed goose by time-based plane-sweeping trajectory clustering

    USGS Publications Warehouse

    Luo, Ze; Baoping, Yan; Takekawa, John Y.; Prosser, Diann J.

    2012-01-01

    We propose a new method to help ornithologists and ecologists discover shared segments on the migratory pathway of the bar-headed goose by time-based plane-sweeping trajectory clustering. We present a density-based, time-parameterized line segment clustering algorithm, which extends traditional comparable clustering algorithms in the temporal and spatial dimensions. We present a time-based plane-sweeping trajectory clustering algorithm to reveal the dynamic evolution of spatial-temporal object clusters and discover common motion patterns of bar-headed geese in the process of migration. Experiments are performed on GPS-based satellite telemetry data from bar-headed geese, and the results demonstrate that our algorithms can correctly discover shared segments of the bar-headed goose migratory pathway. We also present findings on the migratory behavior of bar-headed geese determined from this new analytical approach.

  20. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
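
    A sketch of the kind of parameterized runup model being evaluated; the empirical coefficients follow the widely cited Stockdon et al. (2006) form and are quoted from memory, so they should be checked against the original before use:

    import math

    def runup_2pct(h0, t0, beta):
        """2% exceedance runup (m) from deep-water wave height h0 (m),
        peak period t0 (s), and foreshore beach slope beta (dimensionless)."""
        l0 = 9.81 * t0**2 / (2.0 * math.pi)                     # deep-water wavelength (m)
        setup = 0.35 * beta * math.sqrt(h0 * l0)                # wave-induced setup
        swash = math.sqrt(h0 * l0 * (0.563 * beta**2 + 0.004))  # incident + infragravity swash
        return 1.1 * (setup + swash / 2.0)

    print(runup_2pct(h0=3.0, t0=12.0, beta=0.08))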

  1. Solid, liquid, and interfacial properties of TiAl alloys: parameterization of a new modified embedded atom method model

    NASA Astrophysics Data System (ADS)

    Sun, Shoutian; Ramu Ramachandran, Bala; Wick, Collin D.

    2018-02-01

    New interatomic potentials for pure Ti and Al, and binary TiAl were developed utilizing the second nearest neighbour modified embedded-atom method (MEAM) formalism. The potentials were parameterized to reproduce multiple properties spanning bulk solids, solid surfaces, solid/liquid phase changes, and liquid interfacial properties. This was carried out using a newly developed optimization procedure that combined the simple minimization of a fitness function with a genetic algorithm to efficiently span the parameter space. The resulting MEAM potentials gave good agreement with experimental and DFT solid and liquid properties, and reproduced the melting points for Ti, Al, and TiAl. However, the surface tensions from the model consistently underestimated experimental values. Liquid TiAl’s surface was found to be mostly covered with Al atoms, showing that Al has a significant propensity for the liquid/air interface.

  2. Solid, liquid, and interfacial properties of TiAl alloys: parameterization of a new modified embedded atom method model.

    PubMed

    Sun, Shoutian; Ramachandran, Bala Ramu; Wick, Collin D

    2018-02-21

    New interatomic potentials for pure Ti and Al, and binary TiAl were developed utilizing the second nearest neighbour modified embedded-atom method (MEAM) formalism. The potentials were parameterized to reproduce multiple properties spanning bulk solids, solid surfaces, solid/liquid phase changes, and liquid interfacial properties. This was carried out using a newly developed optimization procedure that combined the simple minimization of a fitness function with a genetic algorithm to efficiently span the parameter space. The resulting MEAM potentials gave good agreement with experimental and DFT solid and liquid properties, and reproduced the melting points for Ti, Al, and TiAl. However, the surface tensions from the model consistently underestimated experimental values. Liquid TiAl's surface was found to be mostly covered with Al atoms, showing that Al has a significant propensity for the liquid/air interface.

  3. Interactive robot control system and method of use

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E. (Inventor); Sanders, Adam M. (Inventor); Platt, Robert (Inventor); Reiland, Matthew J. (Inventor); Linn, Douglas Martin (Inventor)

    2012-01-01

    A robotic system includes a robot having joints, actuators, and sensors, and a distributed controller. The controller includes a command-level controller, embedded joint-level controllers each controlling a respective joint, and a joint coordination-level controller coordinating motion of the joints. A central data library (CDL) centralizes all control and feedback data, and a user interface displays the status of each joint, actuator, and sensor using the CDL. A parameterized action sequence has a hierarchy of linked events and allows the control data to be modified in real time. A method of controlling the robot includes transmitting control data through the various levels of the controller, routing all control and feedback data to the CDL, and displaying the status and operation of the robot using the CDL. The parameterized action sequences are generated for execution by the robot, and a hierarchy of linked events is created within the sequence.

  4. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    NASA Astrophysics Data System (ADS)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model intercomparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time-scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.

  5. Archive, Access, and Supply of Scientifically Derived Data: A Data Model for Multi-Parameterized Querying Where Spectral Data Base Meets GIS-Based Mapping Archive

    NASA Astrophysics Data System (ADS)

    Nass, A.; D'Amore, M.; Helbert, J.

    2018-04-01

    Based on recent discussions within information science and management, an archiving structure and reference level for derived and already published data significantly supports the scientific community through a steady growth of knowledge and understanding.

  6. Monitoring with Data Automata

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    2014-01-01

    We present a form of automaton, referred to as data automata, suited for monitoring sequences of data-carrying events, for example emitted by an executing software system. This form of automata allows states to be parameterized with data, forming named records, which are stored in an efficiently indexed data structure, a form of database. This very explicit approach differs from other automaton-based monitoring approaches. Data automata are also characterized by allowing transition conditions to refer to other parameterized states, and by allowing transition sequences. The presented automaton concept is inspired by rule-based systems, especially the Rete algorithm, which is one of the well-established algorithms for executing rule-based systems. We present an optimized external DSL for data automata, as well as a comparable unoptimized internal DSL (API) in the Scala programming language, in order to compare the two solutions. An evaluation compares these two solutions to several other monitoring systems.

  7. Assessment of State-of-the-Art Dust Emission Scheme in GEOS

    NASA Technical Reports Server (NTRS)

    Darmenov, Anton; Liu, Xiaohong; Prigent, Catherine

    2017-01-01

    The GEOS modeling system has been extended with a state-of-the-art parameterization of dust emissions based on the vertical flux formulation described in Kok et al. (2014). The new dust scheme was coupled with the GOCART and MAM aerosol models. In the present study we compare dust emissions, aerosol optical depth (AOD), and radiative fluxes from GEOS experiments with the standard and new dust emissions. AOD from the model experiments is also compared with AERONET and satellite-based data. Based on this comparative analysis we conclude that the new parameterization improves the capability of GEOS to model dust aerosols originating from African sources; however, it leads to an overestimation of dust emissions from Asian and Arabian sources. Further regional tuning of the key parameters controlling the threshold friction velocity may be required in order to achieve a more definitive and uniform improvement in dust modeling skill.

  8. The implementation and validation of improved landsurface hydrology in an atmospheric general circulation model

    NASA Technical Reports Server (NTRS)

    Johnson, Kevin D.; Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    Landsurface hydrological parameterizations are implemented in the NASA Goddard Institute for Space Studies (GISS) General Circulation Model (GCM). These parameterizations are: (1) runoff and evapotranspiration functions that include the effects of subgrid-scale spatial variability and use physically based equations of hydrologic flux at the soil surface, and (2) a realistic soil moisture diffusion scheme for the movement of water in the soil column. A one-dimensional climate model with a complete hydrologic cycle is used to screen the basic sensitivities of the hydrological parameterizations before implementation in the full three-dimensional GCM. Results of the final simulation with the GISS GCM and the new landsurface hydrology indicate that the runoff rate, especially in the tropics, is significantly improved. As a result, the remaining components of the heat and moisture balance show comparable improvements when compared to observations. The validation of model results is carried out from the large global (ocean and landsurface) scale to the zonal, continental, and finally the finer river-basin scales.

  9. Lightning Scaling Laws Revisited

    NASA Technical Reports Server (NTRS)

    Boccippio, D. J.; Arnold, James E. (Technical Monitor)

    2000-01-01

    Scaling laws relating storm electrical generator power (and hence lightning flash rate) to charge transport velocity and storm geometry were originally posed by Vonnegut (1963). These laws were later simplified to yield simple parameterizations for lightning based upon cloud top height, with separate parameterizations derived over land and ocean. It is demonstrated that the most recent ocean parameterization: (1) yields predictions of storm updraft velocity which appear inconsistent with observation, and (2) is formally inconsistent with the theory from which it purports to derive. Revised formulations consistent with Vonnegut's original framework are presented. These demonstrate that Vonnegut's theory is, to first order, consistent with observation. The implications of assuming that flash rate is set by the electrical generator power, rather than the electrical generator current, are examined. The two approaches yield significantly different predictions about the dependence of charge transfer per flash on storm dimensions, which should be empirically testable. The two approaches also differ significantly in their explanation of regional variability in lightning observations.
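
    A hedged sketch of the cloud-top-height flash-rate parameterizations being revisited; the power-law constants follow the commonly cited Price and Rind (1992) land and ocean fits and are quoted from memory, so treat them as illustrative:

    def flash_rate(cloud_top_km, over_ocean=False):
        """Total flash rate (flashes per minute) from cloud-top height H (km);
        separate power laws over land and ocean."""
        if over_ocean:
            return 6.4e-4 * cloud_top_km**1.73
        return 3.44e-5 * cloud_top_km**4.9

    print(flash_rate(12.0), flash_rate(12.0, over_ocean=True))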

  10. Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2000-01-01

    This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminate plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
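
    A minimal sketch of the central idea, parameterizing perturbations of a baseline grid rather than the geometry itself, with simple Gaussian bumps standing in for the soft-object-animation deformers (the function names and the width value are our assumptions):

    import numpy as np

    def deform(points, centers, amplitudes, width=0.2):
        """points: (n, 3) baseline grid; centers, amplitudes: (m, 3) design variables.
        Works for any point cloud, so CFD and finite element grids are treated alike."""
        new_points = points.copy()
        for c, a in zip(centers, amplitudes):
            w = np.exp(-np.sum((points - c) ** 2, axis=1) / width**2)
            new_points += w[:, None] * a        # smooth, grid-topology-free displacement
        return new_points

    # The sensitivity of each point to amplitude a_j is just the weight w_j,
    # which is why analytical derivatives are cheap in this formulation.
    pts = np.random.default_rng(4).random((1000, 3))
    print(deform(pts, centers=np.array([[0.5, 0.5, 0.5]]),
                 amplitudes=np.array([[0.0, 0.0, 0.05]]))[:2])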

  11. Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2000-01-01

    This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.

  12. 77 FR 61604 - Exposure Modeling Public Meeting; Notice of Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-10

    ..., birds, reptiles, and amphibians: Model Parameterization and Knowledge base Development. 4. Standard Operating Procedure for calculating degradation kinetics. 5. Aquatic exposure modeling using field studies...

  13. Parameterization of daily solar global ultraviolet irradiation.

    PubMed

    Feister, U; Jäkel, E; Gericke, K

    2002-09-01

    Daily values of solar global ultraviolet (UV) B and UVA irradiation, as well as erythemal irradiation, have been parameterized so that they can be estimated from pyranometer measurements of daily global and diffuse irradiation and from atmospheric column ozone. Data recorded at the Meteorological Observatory Potsdam (52 degrees N, 107 m asl) in Germany over the period 1997-2000 have been used to derive sets of regression coefficients. Validation of the method against independent data sets of measured UV irradiation shows that the parameterization provides a gain of information for UVB, UVA, and erythemal irradiation relative to their averages. A comparison between parameterized daily UV irradiation and independent values of UV irradiation measured at a mountain station in southern Germany (Meteorological Observatory Hohenpeissenberg at 48 degrees N, 977 m asl) indicates that the parameterization holds even under completely different climatic conditions. On a long-term average (1953-2000), parameterized annual UV irradiation values are 15% and 21% higher for UVA and UVB, respectively, at Hohenpeissenberg than at Potsdam. Daily global and diffuse irradiation measured at 28 weather stations of the Deutscher Wetterdienst German radiation network and grid values of column ozone from the EP-TOMS satellite experiment served as inputs to estimate the spatial distribution of daily and annual values of UV irradiation across Germany. Using daily values of global and diffuse irradiation recorded at Potsdam since 1937, as well as atmospheric column ozone measured at the same site since 1964, estimates of daily and annual UV irradiation have been derived for this site for the period from 1937 through 2000; these include the effects of changes in cloudiness, in aerosols and, at least for the period of ozone measurements from 1964 to 2000, in atmospheric ozone. It is shown that the extremely low ozone values observed mainly after the eruption of Mt. Pinatubo in 1991 substantially enhanced UVB irradiation in the first half of the 1990s. According to the measurements and calculations, the nonlinear long-term changes observed between 1968 and 2000 amount to +4% to +5% for annual global irradiation and UVA irradiation, mainly because of changing cloudiness, and +14% to +15% for UVB and erythemal irradiation, because of both changing cloudiness and decreasing column ozone. At the mountain site, Hohenpeissenberg, measured global irradiation and parameterized UVA irradiation decreased during the same time period by -3% to -4%, probably because of the enhanced occurrence and increasing optical thickness of clouds, whereas UVB and erythemal irradiation derived by the parameterization increased by +3% to +4% because of the combined effect of clouds and decreasing ozone. The parameterizations described here should be applicable to other regions with similar atmospheric and geographic conditions, whereas for regions with significantly different climatic conditions, such as high mountainous areas and arctic or tropical regions, the representativeness of the regression coefficients would have to be verified.
It is emphasized here that parameterizations such as the one described in this article cannot replace measurements of solar UV radiation, but they can use existing measurements of solar global and diffuse radiation, as well as data on atmospheric ozone, to provide estimates of UV irradiation in regions and over time periods for which UV measurements are not available.
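
    As a hedged illustration of what such a regression-based parameterization can look like, the sketch below fits a log-linear model of daily UVB on global irradiation, diffuse fraction, and column ozone using synthetic data; the predictor set and functional form are our assumptions, not the article's actual regression:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 365
    G = rng.uniform(2, 30, n)           # daily global irradiation (MJ m-2), synthetic
    D = rng.uniform(1, 15, n)           # daily diffuse irradiation (MJ m-2), synthetic
    O3 = rng.uniform(250, 450, n)       # column ozone (DU), synthetic
    UVB = 0.01 * G**1.1 * (O3 / 300.0)**-1.2 * np.exp(0.1 * rng.standard_normal(n))

    # Assumed log-linear model: ln(UVB) = c0 + c1 ln(G) + c2 (D/G) + c3 ln(O3)
    X = np.column_stack([np.ones(n), np.log(G), D / G, np.log(O3)])
    coef, *_ = np.linalg.lstsq(X, np.log(UVB), rcond=None)
    print(coef)                         # recovers roughly c1 ~ 1.1, c3 ~ -1.2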

  14. A Solar Radiation Parameterization for Atmospheric Studies. Volume 15

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Suarez, Max J. (Editor)

    1999-01-01

    The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes the absorption by water vapor, O3, O2, CO2, clouds, and aerosols and the scattering by clouds, aerosols, and gases. Depending upon the nature of the absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the delta-Eddington approximation.
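
    A minimal sketch of the k-distribution idea used for the water vapor bands: band transmission is a weighted sum of exponentials over a few representative absorption coefficients (the k and w values below are illustrative, not the CLIRAD-SW tables):

    import numpy as np

    k = np.array([1e-3, 1e-2, 1e-1, 1.0])    # absorption coefficients (m2 kg-1), illustrative
    w = np.array([0.4, 0.3, 0.2, 0.1])       # quadrature weights, sum to 1

    def band_transmission(u):
        """Transmission of absorber amount u (kg m-2) through one spectral band,
        replacing a line-by-line wavenumber integral with a short quadrature sum."""
        return np.sum(w * np.exp(-k * u))

    for u in (1.0, 10.0, 100.0):
        print(u, band_transmission(u))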

  15. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application in engineering optimization for industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is first proposed to improve the optimization efficiency for industrial dynamic processes; it employs costate gradient formulae, and a fast approximate scheme is presented for solving the differential equations in the dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustrations. The results show that the proposed fast approach achieves fine performance: at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes.
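
    A hedged sketch of plain CVP, the baseline being accelerated here: the control is piecewise constant on N intervals, and every objective evaluation re-integrates the state ODE, which is exactly the cost that fast-CVP targets (the dynamics and target below are a toy example, not one of the cited benchmarks):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    N, T = 10, 1.0
    edges = np.linspace(0.0, T, N + 1)        # interval boundaries for u(t)

    def objective(u_params):
        def rhs(t, x):
            # piecewise-constant control: pick the interval containing t
            i = min(np.searchsorted(edges, t, side="right") - 1, N - 1)
            return [-x[0] + u_params[i]]      # toy dynamics: x' = -x + u
        sol = solve_ivp(rhs, (0.0, T), [1.0], max_step=T / (4 * N))
        return (sol.y[0, -1] - 0.5) ** 2      # drive x(T) toward 0.5

    res = minimize(objective, np.zeros(N), method="Nelder-Mead",
                   options={"maxiter": 2000})
    print(res.fun, res.x.round(3))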

  16. Evaluation of scale-aware subgrid mesoscale eddy models in a global eddy-rich model

    NASA Astrophysics Data System (ADS)

    Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank

    2017-07-01

    Two parameterizations for horizontal mixing of momentum and tracers by subgrid mesoscale eddies are implemented in a high-resolution global ocean model. These parameterizations follow the techniques of large eddy simulation (LES). The theory underlying one parameterization (2D Leith, due to Leith, 1996) is that of enstrophy cascades in two-dimensional turbulence, while the other (QG Leith) is designed for potential enstrophy cascades in quasi-geostrophic turbulence. Simulations using each of these parameterizations are compared with a control simulation using standard biharmonic horizontal mixing. Simulations using the 2D Leith and QG Leith parameterizations are more realistic than those using biharmonic mixing. In particular, the 2D Leith and QG Leith simulations have more energy in resolved mesoscale eddies, have a spectral slope more consistent with turbulence theory (an inertial enstrophy or potential enstrophy cascade), have bottom drag and vertical viscosity as the primary sinks of energy instead of lateral friction, and have isoneutral parameterized mesoscale tracer transport. The parameterization choice also affects mass transports, but the impact varies regionally in magnitude and sign.
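
    A hedged sketch of the 2D Leith closure, in which the eddy viscosity scales as the cube of the grid scale times the magnitude of the vorticity gradient; the QG variant replaces the relative vorticity gradient with the potential vorticity gradient (the coefficient and fields below are illustrative):

    import numpy as np

    def leith_viscosity(omega, dx, c=2.0):
        """2D Leith eddy viscosity (m2 s-1): nu = (c * dx / pi)**3 * |grad(omega)|.
        omega: 2D relative vorticity field (s-1); dx: grid spacing (m)."""
        domega_dy, domega_dx = np.gradient(omega, dx)   # axis 0 taken as y here
        grad_mag = np.hypot(domega_dx, domega_dy)
        return (c * dx / np.pi) ** 3 * grad_mag

    rng = np.random.default_rng(3)
    omega = 1e-5 * rng.standard_normal((64, 64))        # stand-in vorticity field
    print(leith_viscosity(omega, dx=1e4).mean())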

  17. Robust curb detection with fusion of 3D-Lidar and camera data.

    PubMed

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-05-21

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), and it is especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method that exploits 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, using multi-scale normal patterns based on the curb's geometric properties, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov chain to model the consistency of curb points, which exploits the continuity of the curb, so that the optimal curb path linking the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter outliers, parameterize the curbs, and assign confidence scores to the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
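
    A hedged sketch of the dynamic-programming step only: per-row curb candidates carry a detection score, a simple Markov smoothness cost penalizes lateral jumps between consecutive rows, and a Viterbi-style recursion recovers the optimal curb path (all inputs are synthetic stand-ins for the normal-pattern detector outputs):

    import numpy as np

    rng = np.random.default_rng(6)
    rows, cands = 50, 8
    score = rng.random((rows, cands))                      # higher = more curb-like
    col = np.sort(rng.integers(0, 200, (rows, cands)), 1)  # candidate image columns

    smooth = 0.05                                          # cost per pixel of lateral jump
    cost = -score[0]                                       # best cost ending at each candidate
    back = np.zeros((rows, cands), dtype=int)
    for r in range(1, rows):
        jump = smooth * np.abs(col[r][None, :] - col[r - 1][:, None])
        total = cost[:, None] + jump                       # indexed [prev_candidate, cur_candidate]
        back[r] = total.argmin(axis=0)
        cost = total.min(axis=0) - score[r]

    path = [int(cost.argmin())]                            # backtrack the optimal curb path
    for r in range(rows - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    path.reverse()
    print(path[:10])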

  18. Decision Support for Environmental Management of Industrial ...

    EPA Pesticide Factsheets

    Non-hazardous solid materials from industrial processes, once regarded as waste and disposed in landfills, offer numerous environmental and economic advantages when put to beneficial uses (BUs). Proper management of these industrial non-hazardous secondary materials (INSM) requires estimates of their probable environmental impacts among disposal as well as BU options. The U.S. Environmental Protection Agency (EPA) has recently approved new analytical methods (EPA Methods 1313–1316) to assess leachability of constituents of potential concern in these materials. These new methods are more realistic for many disposal and BU options than historical methods, such as the toxicity characteristic leaching protocol. Experimental data from these new methods are used to parameterize a chemical fate and transport (F&T) model to simulate long-term environmental releases from flue gas desulfurization gypsum (FGDG) when disposed of in an industrial landfill or beneficially used as an agricultural soil amendment. The F&T model is also coupled with optimization algorithms, the Beneficial Use Decision Support System (BUDSS), under development by EPA to enhance INSM management. The objective of this paper is to demonstrate the methodologies and encourage similar applications to improve environmental management and BUs of INSM through F&T simulation coupled with optimization, using realistic model parameterization.

  19. Effects of a polar stratosphere cloud parameterization on ozone depletion due to stratospheric aircraft in a two-dimensional model

    NASA Technical Reports Server (NTRS)

    Considine, David B.; Douglass, Anne R.; Jackman, Charles H.

    1994-01-01

    A parameterization of Type 1 and Type 2 polar stratospheric cloud (PSC) formation is presented which is appropriate for use in two-dimensional (2-D) photochemical models of the stratosphere. The calculations of PSC frequency of occurrence and surface area density use climatological temperature probability distributions obtained from National Meteorological Center data, avoiding zonal mean temperatures, which are not good predictors of PSC behavior. The parameterization does not attempt to model the microphysics of PSCs. The parameterization predicts changes in PSC formation and heterogeneous processing due to perturbations of stratospheric trace constituents. It is therefore useful in assessing the potential effects of a fleet of stratospheric aircraft (high speed civil transports, or HSCTs) on stratospheric composition. The model-calculated frequency of PSC occurrence agrees well with a climatology based on Stratospheric Aerosol Measurement (SAM) II observations. PSCs are predicted to occur in the tropics; their vertical range is narrow, however, and their impact on model O3 fields is small. When PSC and sulfate aerosol heterogeneous processes are included in the model calculations, the O3 change for 1980-1990 is in substantially better agreement with the total ozone mapping spectrometer (TOMS)-derived O3 trend than otherwise. The overall changes in model O3 response to standard HSCT perturbation scenarios produced by the parameterization are small and tend to decrease the model sensitivity to the HSCT perturbation. However, in the southern hemisphere spring a significant increase in O3 sensitivity to HSCT perturbations is found. At this location and time, increased PSC formation leads to increased levels of active chlorine, which produce the O3 decreases.
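    The key idea, using a temperature probability distribution instead of the zonal mean, can be illustrated in a few lines. The sketch below assumes a Gaussian climatological temperature distribution and an illustrative formation threshold near 195 K; the paper's actual parameterization is built on NMC-derived distributions, not this toy form.

    ```python
    import numpy as np
    from scipy.stats import norm

    def psc_frequency(t_mean, t_std, t_form=195.0):
        """PSC occurrence frequency as the probability that temperature
        falls below a formation threshold t_form (K), given a Gaussian
        climatological temperature distribution."""
        return norm.cdf(t_form, loc=t_mean, scale=t_std)

    # A zonal mean of 200 K with 5 K variability still yields PSCs ~16% of
    # the time, whereas a hard threshold applied to the zonal mean alone
    # would predict none.
    print(psc_frequency(200.0, 5.0))   # ~0.16
    ```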

  20. Sensitivity of Tropical Cyclones to Parameterized Convection in the NASA GEOS5 Model

    NASA Technical Reports Server (NTRS)

    Lim, Young-Kwon; Schubert, Siegfried D.; Reale, Oreste; Lee, Myong-In; Molod, Andrea M.; Suarez, Max J.

    2014-01-01

    The sensitivity of tropical cyclones (TCs) to changes in parameterized convection is investigated to improve the simulation of TCs in the North Atlantic. Specifically, the impact of reducing the influence of the Relaxed Arakawa-Schubert (RAS) scheme-based parameterized convection is explored using the Goddard Earth Observing System version 5 (GEOS-5) model at 0.25° horizontal resolution. The years 2005 and 2006, characterized by very active and inactive hurricane seasons, respectively, are selected for simulation. A reduction in parameterized deep convection results in an increase in TC activity (e.g., TC number and longer life cycles) to more realistic levels compared to the baseline control configuration. The vertical and horizontal structure of the strongest simulated hurricane shows a maximum lower-level (850-950 hPa) wind speed greater than 60 m s-1 and a minimum sea level pressure reaching 940 mb, corresponding to a category 4 hurricane - a category never achieved by the control configuration. The radius of maximum wind of 50 km, the location of the warm core exceeding 10 °C, and the horizontal compactness of the hurricane center are all quite realistic, without negatively affecting the atmospheric mean state. This study reveals that an increase in the threshold of minimum entrainment suppresses parameterized deep convection by entraining more dry air into the typical plume. This leads to cooling and drying in the mid- to upper-troposphere, along with positive latent heat flux and moistening in the lower troposphere. The resulting increase in conditional instability provides an environment that is more conducive to TC vortex development and upward moisture flux convergence by dynamically resolved moist convection, thereby increasing TC activity.

  1. Parameterizing the binding properties of dissolved organic matter with default values skews the prediction of copper solution speciation and ecotoxicity in soil.

    PubMed

    Djae, Tanalou; Bravin, Matthieu N; Garnier, Cédric; Doelsch, Emmanuel

    2017-04-01

    Parameterizing speciation models by setting the percentage of dissolved organic matter (DOM) that is reactive (% r-DOM) toward metal cations at a single 65% default value is very common in predictive ecotoxicology. The authors tested this practice by comparing the free copper activity (pCu2+ = -log10[Cu2+]) measured in 55 soil sample solutions with pCu2+ predicted with the Windermere humic aqueous model (WHAM) parameterized by default. Predictions of Cu toxicity to soil organisms based on measured or predicted pCu2+ were also compared. Default WHAM parameterization substantially skewed the prediction of measured pCu2+ by up to 2.7 pCu2+ units (root mean square residual = 0.75-1.3) and subsequently the prediction of Cu toxicity for microbial functions, invertebrates, and plants by up to 36%, 45%, and 59% (root mean square residuals ≤9%, 11%, and 17%), respectively. Reparameterizing WHAM by optimizing the two DOM binding properties (i.e., % r-DOM and the Cu complexation constant) within a physically realistic value range greatly improved the prediction of measured pCu2+ (root mean square residual = 0.14-0.25). Accordingly, this WHAM parameterization successfully predicted Cu toxicity for microbial functions, invertebrates, and plants (root mean square residuals ≤3.4%, 4.4%, and 5.8%, respectively). Thus, it is essential to account for the real heterogeneity in DOM binding properties for relatively accurate prediction of Cu speciation in soil solution and Cu toxic effects on soil organisms. Environ Toxicol Chem 2017;36:898-905. © 2016 SETAC.
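    The reparameterization step amounts to a two-parameter optimization against measured pCu2+. The sketch below assumes a hypothetical callable wham_pcu() wrapping a WHAM speciation run; the starting values and bounds are illustrative stand-ins for the paper's physically realistic ranges.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fit_dom_binding(pcu_measured, solution_chem, wham_pcu):
        """Optimize the two DOM binding properties (% reactive DOM and a
        shift of the Cu complexation constant) by minimizing the root mean
        square residual against measured pCu2+. `wham_pcu(frac_rdom, dlogk,
        chem)` is a hypothetical wrapper around a WHAM run."""
        def rmsr(params):
            frac_rdom, dlogk = params
            pred = np.array([wham_pcu(frac_rdom, dlogk, c) for c in solution_chem])
            return np.sqrt(np.mean((pred - pcu_measured) ** 2))
        res = minimize(rmsr, x0=[0.65, 0.0],               # start at the default
                       bounds=[(0.2, 1.0), (-1.0, 1.0)],   # illustrative ranges
                       method="L-BFGS-B")
        return res.x, res.fun
    ```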

  2. Are Atmospheric Updrafts a Key to Unlocking Climate Forcing and Sensitivity?

    DOE PAGES

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel; ...

    2016-06-08

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud-aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities, and parameterizations which do provide vertical velocities have been subject to limited evaluation against what have until recently been scant observations. Atmospheric observations imply that the distribution of vertical velocities depends on the areas over which the vertical velocities are averaged. Distributions of vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of scale-dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  3. Operational evapotranspiration mapping using remote sensing and weather datasets: a new parameterization for the SSEB approach

    USGS Publications Warehouse

    Senay, Gabriel B.; Bohms, Stefanie; Singh, Ramesh K.; Gowda, Prasanna H.; Velpuri, Naga Manohar; Alemu, Henok; Verdin, James P.

    2013-01-01

    The increasing availability of multi-scale remotely sensed data and global weather datasets is allowing the estimation of evapotranspiration (ET) at multiple scales. We present a simple but robust method that uses remotely sensed thermal data and model-assimilated weather fields to produce ET for the contiguous United States (CONUS) at monthly and seasonal time scales. The method is based on the Simplified Surface Energy Balance (SSEB) model, which is now parameterized for operational applications and renamed SSEBop. The innovative aspect of SSEBop is that it uses predefined boundary conditions that are unique to each pixel for the "hot" and "cold" reference conditions. The SSEBop model was used to compute ET for 12 years (2000-2011) using the MODIS and Global Data Assimilation System (GDAS) data streams. SSEBop ET results compared reasonably well with monthly eddy covariance ET data, explaining 64% of the observed variability across diverse ecosystems in the CONUS during 2005. Twelve annual ET anomalies (2000-2011) depicted the spatial extent and severity of the commonly known drought years in the CONUS. More research is required to improve the representation of the predefined boundary conditions in complex terrain at small spatial scales. The SSEBop model was found to be a promising approach for water use studies in the CONUS, with a similar opportunity in other parts of the world. The approach can also be applied with other thermal sensors such as Landsat.
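    The per-pixel hot/cold boundary-condition idea reduces to a linear scaling of land surface temperature. The sketch below is a simplified rendering of an SSEB-style ET fraction; the variable names and clipping are assumptions, and the operational constants are defined in the paper, not here.

    ```python
    import numpy as np

    def ssebop_et(lst, t_cold, dt, eto, k=1.0):
        """SSEB-style ET: the ET fraction falls linearly from 1 at the
        per-pixel cold reference (t_cold) to 0 at the hot reference
        (t_cold + dt), then scales the reference ET (eto)."""
        etf = np.clip((t_cold + dt - lst) / dt, 0.0, 1.0)  # ET fraction
        return k * etf * eto
    ```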

  4. Exploring JLA supernova data with improved flux-averaging technique

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Wen, Sixiang; Li, Miao

    2017-03-01

    In this work, we explore the cosmological consequences of the "Joint Light-curve Analysis" (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the criterion of figure of merit (FoM) and considering six dark energy (DE) parameterizations, we search for the FA recipe that gives the tightest DE constraints in the (zcut, Δz) plane, where zcut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying zcut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) the best FA recipe is (zcut = 0.6, Δz = 0.06), which is insensitive to the specific DE parameterization; (2) flux-averaging JLA samples at zcut ≥ 0.4 yields tighter DE constraints than the case without FA; (3) using FA can significantly reduce the redshift evolution of β; (4) the best FA recipe favors a larger fractional matter density Ωm. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
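    A simplified version of the flux-averaging recipe is sketched below: SNe above zcut are binned in redshift with width Δz, their distance moduli converted to fluxes, averaged per bin, and converted back. This ignores the per-SN covariance and the luminosity-distance scaling of the full recipe, so it is illustrative only.

    ```python
    import numpy as np

    def flux_average(z, mu, z_cut=0.6, dz=0.06):
        """Flux-average SNe at z >= z_cut in bins of width dz; SNe below
        z_cut are passed through unchanged."""
        z, mu = np.asarray(z, float), np.asarray(mu, float)
        keep = z < z_cut
        za, mua = list(z[keep]), list(mu[keep])
        zh, fh = z[~keep], 10 ** (-0.4 * mu[~keep])   # modulus -> flux
        edges = np.arange(z_cut, zh.max() + dz, dz)
        for lo, hi in zip(edges[:-1], edges[1:]):
            m = (zh >= lo) & (zh < hi)
            if m.any():
                za.append(zh[m].mean())                    # bin-mean redshift
                mua.append(-2.5 * np.log10(fh[m].mean()))  # mean flux -> modulus
        order = np.argsort(za)
        return np.array(za)[order], np.array(mua)[order]
    ```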

  5. Parameterization and prediction of nanoparticle transport in porous media: A reanalysis using artificial neural network

    NASA Astrophysics Data System (ADS)

    Babakhani, Peyman; Bridge, Jonathan; Doong, Ruey-an; Phenrat, Tanapon

    2017-06-01

    The continuing rapid expansion of industrial and consumer processes based on nanoparticles (NP) necessitates a robust model for delineating their fate and transport in groundwater. An ability to reliably specify the full parameter set for prediction of NP transport using continuum models is crucial. In this paper we report the reanalysis of a data set of 493 published column experiment outcomes together with their continuum modeling results. Experimental properties were parameterized into 20 commonly available factors. These were then used to predict five key continuum model parameters as well as the effluent concentration via artificial neural network (ANN)-based correlations. The partial derivatives (PaD) technique and the Monte Carlo method were used for the analysis of sensitivities and model-produced uncertainties, respectively. The outcomes shed light on several controversial relationships between the parameters; e.g., the attachment rate coefficient Katt was revealed to increase with average pore water velocity. The resulting correlations, despite being developed with a "black-box" technique (ANN), were able to explain the effects of theoretical parameters such as the critical deposition concentration (CDC), even though these parameters were not explicitly considered in the model. Porous media heterogeneity was considered as a parameter for the first time and showed sensitivities higher than those of dispersivity. The model was validated against subsets of the experimental data and compared with current models. The robustness of the correlation matrices was not completely satisfactory, since they failed to predict the experimental breakthrough curves (BTCs) at extreme values of ionic strength.
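    A minimal stand-in for the ANN-based correlations, assuming scikit-learn: a multilayer perceptron mapping the 20 experiment-property factors to the five continuum-model parameters. The architecture, scaling, and split are illustrative assumptions, not the paper's network.

    ```python
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def fit_transport_ann(X, y):
        """X: (n_experiments, 20) column-experiment properties.
        y: (n_experiments, 5) continuum-model parameters (e.g. Katt,
        dispersivity). Returns the fitted model and its held-out R^2."""
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)
        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000,
                         random_state=0))
        model.fit(X_tr, y_tr)
        return model, model.score(X_te, y_te)
    ```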

  6. The applicability of the viscous α-parameterization of gravitational instability in circumstellar disks

    NASA Astrophysics Data System (ADS)

    Vorobyov, E. I.

    2010-01-01

    We study numerically the applicability of the effective-viscosity approach for simulating the effect of gravitational instability (GI) in disks of young stellar objects with different disk-to-star mass ratios ξ. We adopt two α-parameterizations for the effective viscosity based on Lin and Pringle [Lin, D.N.C., Pringle, J.E., 1990. ApJ 358, 515] and Kratter et al. [Kratter, K.M., Matzner, Ch.D., Krumholz, M.R., 2008. ApJ 681, 375] and compare the resultant disk structure, disk and stellar masses, and mass accretion rates with those obtained directly from numerical simulations of self-gravitating disks around low-mass (M∗ ∼ 1.0 M⊙) protostars. We find that the effective viscosity can, in principle, simulate the effect of GI in stellar systems with ξ ≲ 0.2-0.3, thus corroborating a similar conclusion by Lodato and Rice [Lodato, G., Rice, W.K.M., 2004. MNRAS 351, 630] that was based on a different α-parameterization. In particular, Kratter et al.'s α-parameterization has proven superior to Lin and Pringle's, because the success of the latter depends crucially on the proper choice of the α-parameter. However, the α-parameterization generally fails in stellar systems with ξ ≳ 0.3, particularly in the Class 0 and Class I phases of stellar evolution, yielding stellar masses that are too small and disk-to-star mass ratios that are too large. In addition, the time-averaged mass accretion rates onto the star are underestimated in the early disk evolution and greatly overestimated in the late evolution. The failure of the α-parameterization in the case of large ξ is caused by the growing strength of low-order spiral modes in massive disks. Only in the late Class II phase, when the magnitude of spiral modes diminishes and mode-to-mode interaction ensues, may the effective viscosity be used to simulate the effect of GI in stellar systems with ξ ≳ 0.3. A simple modification of the effective viscosity that takes into account disk fragmentation can somewhat improve the performance of α-models in the case of large ξ and even approximately reproduce the mass accretion burst phenomenon, the latter being a signature of the early gravitationally unstable stage of stellar evolution [Vorobyov, E.I., Basu, S., 2006. ApJ 650, 956]. However, further numerical experiments are needed to explore this issue.
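    A schematic GI viscosity in the spirit of Lin and Pringle (1990) switches on where the Toomre parameter Q drops below a critical value. The sketch below (cgs units) uses one commonly quoted functional form; the constants alpha0 and q_crit, and the exact boost function, are illustrative assumptions rather than the paper's parameterization.

    ```python
    import numpy as np

    def gi_alpha_viscosity(c_s, omega, sigma, q_crit=2.0, alpha0=0.01):
        """Effective GI viscosity: zero for Toomre Q >= q_crit, growing as
        Q falls below it. c_s: sound speed, omega: angular frequency,
        sigma: surface density (all cgs)."""
        G = 6.674e-8                              # gravitational constant, cgs
        q = c_s * omega / (np.pi * G * sigma)     # Toomre Q for a gas disk
        boost = np.where(q < q_crit, q_crit**2 / q**2 - 1.0, 0.0)
        return alpha0 * boost * c_s**2 / omega    # nu = alpha * c_s^2 / Omega
    ```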

  7. Renormalization group analysis of turbulence

    NASA Technical Reports Server (NTRS)

    Smith, Leslie M.

    1989-01-01

    The objective is to understand and extend a recent theory of turbulence based on dynamic renormalization group (RNG) techniques. The application of RNG methods to hydrodynamic turbulence was explored most extensively by Yakhot and Orszag (1986). An eddy viscosity was calculated which was consistent with the Kolmogorov inertial range by systematic elimination of the small scales in the flow. Further, the assumed smallness of the nonlinear terms in the redefined equations for the large scales leads to predictions for important flow constants such as the Kolmogorov constant. It is emphasized that no adjustable parameters are needed. The parameterization of the small scales in a self-consistent manner has important implications for sub-grid modeling.

  8. Abstraction Techniques for Parameterized Verification

    DTIC Science & Technology

    2006-11-01

    ... approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques. ... model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite ...

  9. Parameterized CAD techniques implementation for the fatigue behaviour optimization of a service chamber

    NASA Astrophysics Data System (ADS)

    Sánchez, H. T.; Estrems, M.; Franco, P.; Faura, F.

    2009-11-01

    In recent years, the heat exchanger market has been increasingly demanding new products with short cycle times, which means that both the design and manufacturing stages must be drastically shortened. The design stage can be shortened by means of CAD-based parametric design techniques. The methodology presented in this work is based on the optimized control of the geometric parameters of a service chamber of a heat exchanger by means of the Application Programming Interface (API) provided by the SolidWorks CAD package. Using this implementation, a set of different design configurations of the service chamber, made of stainless steel AISI 316, is studied by means of the FE method. As a result of this study, a set of knowledge rules based on the fatigue behaviour is constructed and integrated into the design optimization process.
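    The API-driven workflow can be pictured as a parametric sweep over the driving dimensions. The sketch below uses a hypothetical cad wrapper object; set_dimension, rebuild, and export_geometry are invented names standing in for calls into the SolidWorks API, and run_fatigue_fe is a user-supplied FE fatigue evaluation.

    ```python
    def sweep_chamber_designs(cad, lengths_mm, thicknesses_mm, run_fatigue_fe):
        """Parametric sweep: update the chamber's driving dimensions,
        rebuild the geometry, and collect an FE-based fatigue score per
        configuration. `cad` is a hypothetical CAD-API wrapper."""
        results = []
        for length in lengths_mm:
            for thickness in thicknesses_mm:
                cad.set_dimension("chamber_length", length)   # hypothetical call
                cad.set_dimension("wall_thickness", thickness)  # hypothetical call
                cad.rebuild()
                score = run_fatigue_fe(cad.export_geometry())
                results.append(((length, thickness), score))
        return results
    ```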

  10. 3D surface parameterization using manifold learning for medial shape representation

    NASA Astrophysics Data System (ADS)

    Ward, Aaron D.; Hamarneh, Ghassan

    2007-03-01

    The choice of 3D shape representation for anatomical structures determines the effectiveness with which segmentation, visualization, deformation, and shape statistics are performed. Medial axis-based shape representations have attracted considerable attention due to their inherent ability to encode information about the natural geometry of parts of the anatomy. In this paper, we propose a novel approach, based on nonlinear manifold learning, to the parameterization of medial sheets and object surfaces from the results of skeletonization. For each single-sheet figure in an anatomical structure, we skeletonize the figure and classify its surface points according to whether they lie on the upper or lower surface, based on their relationship to the skeleton points. We then perform nonlinear dimensionality reduction on the skeleton, upper, and lower surface points to find the intrinsic 2D coordinate system of each. We then center a planar mesh over each of the low-dimensional representations of the points, and map the meshes back to 3D using the mappings obtained by manifold learning. Correspondence between mesh vertices, established in their intrinsic 2D coordinate spaces, is used to compute the thickness vectors emanating from the medial sheet. We show results of our algorithm on real brain and musculoskeletal structures extracted from MRI, as well as on an artificial multi-sheet example. The main advantages of this method are its relative simplicity and noniterative nature, and its ability to correctly compute nonintersecting thickness vectors for a medial sheet regardless of the amount of coincident bending and thickness in the object and of the incidence of local concavities and convexities in the object's surface.
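    A minimal sketch of the dimensionality-reduction step, with scikit-learn's Isomap standing in for the manifold learner (the abstract does not name a specific algorithm). In the full method, a planar mesh is then centered over the 2D coordinates and mapped back to 3D via the learned correspondence.

    ```python
    from sklearn.manifold import Isomap

    def parameterize_points(points_3d, n_neighbors=8):
        """Flatten a medial sheet (or an upper/lower surface patch) to its
        intrinsic 2D coordinate system via nonlinear dimensionality
        reduction. points_3d: (n_points, 3) array."""
        uv = Isomap(n_neighbors=n_neighbors,
                    n_components=2).fit_transform(points_3d)
        # A planar mesh centered over uv can be mapped back to 3D by
        # interpolating the learned uv -> xyz correspondence.
        return uv
    ```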

  11. Integration of SRTM and TRMM data into the GIS-based hydrological model for the purpose of flood modelling

    NASA Astrophysics Data System (ADS)

    Akbari, A.; Abu Samah, A.; Othman, F.

    2012-04-01

    Due to land use and climate changes, more severe and frequent floods occur worldwide. Flood simulation, as the first step in flood risk management, can be robustly conducted by integrating GIS, RS, and flood modeling tools. The primary goal of this research is to examine the practical use of public domain satellite data and a GIS-based hydrologic model. Firstly, the database development process is described. GIS tools and techniques were used in the light of relevant literature to build the appropriate database. Watershed delineation and parameterization were carried out using a cartographic DEM derived from digital topography at a scale of 1:25,000 with 30 m cell size and SRTM elevation data at 30 m cell size. The SRTM elevation dataset was evaluated and compared with the cartographic DEM with the assistance of statistical measures such as the correlation coefficient (r), Nash-Sutcliffe efficiency (NSE), percent bias (PBias), and percent of error (PE). According to the NSE index, the SRTM DEM can be used for watershed delineation and parameterization with 87% similarity to the Topo-DEM in complex and underdeveloped terrain. TRMM (V6) data were used as satellite-based hyetographs for rainfall-runoff simulation. The SCS-CN approach was used for losses, and the kinematic routing method was employed for hydrograph transformation through the reaches. It is concluded that TRMM estimates do not give information about storms as adequately as can be drawn from rain gauges. Event-based flood modeling using HEC-HMS proved that the SRTM elevation dataset can obviate the lack of terrain data for hydrologic modeling where appropriate data for terrain modeling and simulation of hydrological processes are unavailable. However, TRMM precipitation estimates failed to explain the behavior of rainfall events and the resultant peak discharge and time of peak.
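    For reference, minimal implementations of two of the cited measures; the NSE definition is standard, and the PBias sign convention shown here (positive for underestimation) is one common choice among several.

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency:
        1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):
        """Percent bias; positive means the simulation underestimates."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)
    ```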

  12. Developing a Physiologically-Based Pharmacokinetic Model Knowledgebase in Support of Provisional Model Construction - poster

    EPA Science Inventory

    Building new physiologically based pharmacokinetic (PBPK) models requires a lot of data, such as chemical-specific parameters and in vivo pharmacokinetic data. Previously developed, well-parameterized, and thoroughly vetted models can be a great resource for supporting the constr...

  13. Parameterizing water quality analysis and simulation program (WASP) for carbon-based nanomaterials

    EPA Science Inventory

    Carbon nanotubes (CNT) and graphenes are among the most popular carbon-based nanomaterials due to their unique electronic, mechanic and structural properties. Exposure modeling of these nanomaterials in the aquatic environment is necessary to predict the fate of these materials. ...

  14. Creating and parameterizing patient-specific deep brain stimulation pathway-activation models using the hyperdirect pathway as an example

    PubMed Central

    Gunalan, Kabilar; Chaturvedi, Ashutosh; Howell, Bryan; Duchin, Yuval; Lempka, Scott F.; Patriat, Remi; Sapiro, Guillermo; Harel, Noam; McIntyre, Cameron C.

    2017-01-01

    Background: Deep brain stimulation (DBS) is an established clinical therapy and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially and important technical details are often missing from published reports. Objective: Provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and predict the response of the hyperdirect pathway to clinical stimulation. Methods: Integration of multiple software tools (e.g. COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution. Results: Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings. Conclusion: Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation. PMID:28441410
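    The strength-duration behavior noted in the Results is classically summarized by the Weiss-Lapicque relation; the generic sketch below illustrates that shape and is not the paper's model-derived curves, whose thresholds come from patient-specific axon simulations.

    ```python
    import numpy as np

    def strength_duration(pulse_width_us, rheobase_ma, chronaxie_us):
        """Weiss-Lapicque strength-duration relation: the threshold current
        I = I_rh * (1 + t_ch / PW) rises steeply at short pulse widths."""
        pw = np.asarray(pulse_width_us, float)
        return rheobase_ma * (1.0 + chronaxie_us / pw)

    # Example: a 0.2 mA rheobase and 150 us chronaxie give a ~0.5 mA
    # threshold at a 100 us pulse width.
    print(strength_duration(100.0, 0.2, 150.0))   # 0.5
    ```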

  15. Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone

    NASA Astrophysics Data System (ADS)

    Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo

    2017-12-01

    The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
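    A sketch of the SMM's forward step, assuming the velocity-class transition matrix is already known (the paper estimates it from two BTCs by inverse modeling): classes are equiprobable bins of first-spacing travel times, and the step-to-step correlation enters through the Markov transitions. The class count and representative-time choice are illustrative.

    ```python
    import numpy as np

    def smm_predict(arrival_times, trans, n_steps, rng=None):
        """Propagate particle travel times over n_steps equal spacings using
        a spatial Markov model; `trans` (rows sum to 1) supplies the
        correlation that distinguishes the SMM from an uncorrelated walk."""
        if rng is None:
            rng = np.random.default_rng(0)
        t1 = np.asarray(arrival_times, dtype=float)
        n_classes = trans.shape[0]
        # Bin first-spacing travel times into equiprobable velocity classes.
        edges = np.quantile(t1, np.linspace(0.0, 1.0, n_classes + 1))
        state = np.clip(np.searchsorted(edges, t1) - 1, 0, n_classes - 1)
        t_rep = np.array([np.median(t1[state == k]) for k in range(n_classes)])
        total = t1.copy()
        for _ in range(n_steps - 1):
            state = np.array([rng.choice(n_classes, p=trans[s]) for s in state])
            total += t_rep[state]          # add a correlated step's travel time
        return total                       # arrival times after n_steps spacings
    ```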

  16. Extensions and applications of a second-order landsurface parameterization

    NASA Technical Reports Server (NTRS)

    Andreou, S. A.; Eagleson, P. S.

    1983-01-01

    Extensions and applications of a second-order land surface parameterization, proposed by Andreou and Eagleson, are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested using the model. A sensitivity analysis of the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also included.

  17. Influence of the vertical mixing parameterization on the modeling results of the Arctic Ocean hydrology

    NASA Astrophysics Data System (ADS)

    Iakshina, D. F.; Golubeva, E. N.

    2017-11-01

    The vertical distribution of hydrological characteristics in the upper ocean layer is mostly formed under the influence of turbulent and convective mixing, which are not resolved in the system of equations for the large-scale ocean. It is therefore necessary to include additional parameterizations of these processes in numerical models. In this paper we carry out a comparative analysis of different vertical mixing parameterizations in simulations of the climatic variability of Arctic water and sea ice circulation. The 3D regional numerical model for the Arctic and North Atlantic developed at the ICMMG SB RAS (Institute of Computational Mathematics and Mathematical Geophysics of the Siberian Branch of the Russian Academy of Science) and the package GOTM (General Ocean Turbulence Model, http://www.gotm.net/) were used as the numerical instruments. NCEP/NCAR reanalysis data were used to determine the surface fluxes related to ice and ocean. The following turbulence closure schemes were used for the vertical mixing parameterizations: 1) an integration scheme based on the Richardson criterion (RI); 2) a second-order TKE scheme with Canuto-A coefficients (CANUTO); 3) a first-order TKE scheme with the coefficients of Schumann and Gerz (TKE-1); 4) the KPP scheme (KPP). In addition, we investigated some important characteristics of the Arctic Ocean state, including the intensity of Atlantic water inflow, the ice cover state, and the freshwater content of the Beaufort Sea.
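    Scheme 1) keys the mixing to the gradient Richardson number. The sketch below uses the well-known Pacanowski and Philander functional form as an illustration; the constants and the exact form of the model's RI scheme may differ.

    ```python
    import numpy as np

    def ri_vertical_viscosity(u, v, rho, dz, nu0=1e-2, nu_b=1e-4,
                              alpha=5.0, n=2):
        """Richardson-number-dependent vertical viscosity: strong mixing at
        low Ri, a background value nu_b at high Ri. u, v, rho are 1D
        profiles on the same levels; values are returned at interfaces."""
        g, rho0 = 9.81, 1025.0
        du, dv = np.diff(u) / dz, np.diff(v) / dz
        shear2 = du**2 + dv**2 + 1e-12             # avoid division by zero
        n2 = -(g / rho0) * np.diff(rho) / dz       # buoyancy frequency squared
        ri = np.maximum(n2 / shear2, 0.0)          # unstable -> treat as Ri = 0
        return nu0 / (1.0 + alpha * ri) ** n + nu_b
    ```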

  18. GHI calculation sensitivity on microphysics, land- and cumulus parameterization in WRF over the Reunion Island

    NASA Astrophysics Data System (ADS)

    De Meij, A.; Vinuesa, J.-F.; Maupas, V.

    2018-05-01

    The sensitivity of calculated global horizontal irradiation (GHI) values in the Weather Research and Forecasting (WRF) model to different microphysics and dynamics schemes is studied. Thirteen sensitivity simulations were performed in which the microphysics, cumulus parameterization schemes, and land surface models were changed. Firstly, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for the Reunion Island for 2014. In general, the model shows the largest bias during the austral summer. This indicates that the model is less accurate in timing the formation and dissipation of clouds during the summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Secondly, the sensitivity of calculated GHI values to changes in the microphysics, cumulus parameterization, and land surface models is evaluated. The sensitivity simulations showed that by changing the microphysics from the Thompson scheme (or the Single-Moment 6-class scheme) to the Morrison double-moment scheme, the relative bias improves from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts both the mass and number concentrations of five hydrometeors, which helps improve the calculation of the density, size, and lifetime of cloud droplets, whereas the single-moment schemes predict only the mass, and for fewer hydrometeors. Changing the cumulus parameterization schemes and land surface models does not have a large impact on the GHI calculations.
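    For concreteness, one common definition of the relative bias quoted above; conventions differ, so treating it as total simulated minus total observed GHI over total observed is an assumption about the paper's metric.

    ```python
    import numpy as np

    def relative_bias_percent(obs, sim):
        """Relative bias of simulated vs observed GHI, in percent."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * (sim.sum() - obs.sum()) / obs.sum()
    ```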

  19. Interpreting activity in H(2)O-H(2)SO(4) binary nucleation.

    PubMed

    Bein, Keith J; Wexler, Anthony S

    2007-09-28

    Sulfuric acid-water nucleation is thought to be a key atmospheric mechanism for forming new condensation nuclei. In earlier literature, measurements of sulfuric acid activity were interpreted as the total (monomer plus hydrate) concentration above solution. Due to recent reinterpretations, most literature values for H(2)SO(4) activity are now thought to represent the number density of monomers. Based on this reinterpretation, the current work uses the most recent models of H(2)O-H(2)SO(4) binary nucleation, along with perturbation analyses, to predict a decrease in critical cluster mole fraction, an increase in critical cluster diameter, and an orders-of-magnitude decrease in nucleation rate. Nucleation rate parameterizations available in the literature, however, give opposite trends. To resolve these discrepancies, nucleation rates were calculated for both interpretations of H(2)SO(4) activity and directly compared to the available parameterizations as well as the perturbation analysis. Results were in excellent agreement with older parameterizations that assumed H(2)SO(4) activity represents the total concentration, and duplicated the predicted trends from the perturbation analysis, but differed by orders of magnitude from more recent parameterizations that assume H(2)SO(4) activity represents only the monomer. Comparison with experimental measurements available in the literature revealed that the calculations of the current work assuming a(a) represents the total concentration are most frequently in agreement with observations.

  20. Parameterization of Mixed Layer and Deep-Ocean Mesoscales Including Nonlinearity

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Cheng, Y.; Dubovikov, M. S.; Howard, A. M.; Leboissetier, A.

    2018-01-01

    In 2011, Chelton et al. carried out a comprehensive census of mesoscales using altimetry data and reached the following conclusions: "essentially all of the observed mesoscale features are nonlinear" and "mesoscales do not move with the mean velocity but with their own drift velocity," which is "the most germane of all the nonlinear metrics." Accounting for these results in a mesoscale parameterization presents conceptual and practical challenges since linear analysis is no longer usable and one needs a model of nonlinearity. A mesoscale parameterization is presented that has the following features: 1) it is based on the solutions of the nonlinear mesoscale dynamical equations, 2) it describes arbitrary tracers, 3) it includes adiabatic (A) and diabatic (D) regimes, 4) the eddy-induced velocity is the sum of a Gent and McWilliams (GM) term plus a new term representing the difference between drift and mean velocities, 5) the new term lowers the transfer of mean potential energy to mesoscales, 6) the isopycnal slopes are not as flat as in the GM case, 7) deep-ocean stratification is enhanced compared to previous parameterizations where being more weakly stratified allowed a large heat uptake that is not observed, 8) the strength of the Deacon cell is reduced. The numerical results are from a stand-alone ocean code with Coordinated Ocean-Ice Reference Experiment I (CORE-I) normal-year forcing.
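    Per item 4) above, the eddy-induced velocity is a GM term plus a drift-minus-mean correction. The sketch below shows only the familiar GM piece (streamfunction equal to a thickness diffusivity times the isopycnal slope); the kappa value is illustrative, and the paper's additional drift term is not modeled here.

    ```python
    import numpy as np

    def gm_eddy_induced_velocity(slope, z, kappa=1000.0):
        """Gent-McWilliams piece of the eddy-induced transport:
        psi = kappa * (isopycnal slope) [m^2/s]; the eddy-induced
        horizontal velocity is v* = -d(psi)/dz [m/s]."""
        psi = kappa * np.asarray(slope, dtype=float)
        vstar = -np.gradient(psi, z)      # vertical derivative on levels z
        return psi, vstar
    ```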
