NASA Technical Reports Server (NTRS)
Saylor, Rick D.; Wolfe, Glenn M.; Meyers, Tilden P.; Hicks, Bruce B.
2014-01-01
The Multilayer Model (MLM) has been used for many years to infer dry deposition fluxes from measured trace species concentrations and standard meteorological measurements for national networks in the U.S., including the U.S. Environmental Protection Agency's Clean Air Status and Trends Network (CASTNet). MLM utilizes a resistance analogy to calculate deposition velocities appropriate for whole vegetative canopies, while employing a multilayer integration to account for vertically varying meteorology, canopy morphology and radiative transfer within the canopy. However, the MLM formulation, as it was originally presented and as it has been subsequently employed, contains a non-physical representation related to the leaf-level quasi-laminar boundary layer resistance that affects the calculation of the total canopy resistance. In this note, the non-physical representation of the canopy resistance as originally formulated in MLM is discussed and a revised, physically consistent, formulation is suggested as a replacement. The revised canopy resistance formulation reduces estimates of HNO3 deposition velocities by as much as 38% during mid-day as compared to values generated by the original formulation. Inferred deposition velocities for SO2 and O3 are not significantly altered by the change in formulation (less than 3%). Inferred deposition loadings of oxidized and total nitrogen from CASTNet data may be reduced by 10-20% and 5-10%, respectively, for the Eastern U.S. when employing the revised formulation of MLM as compared to the original formulation.
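For orientation, the single-layer ("big-leaf") resistance analogy that MLM generalizes can be sketched as below; this is the textbook form, not MLM's multilayer integration, and the resistance symbols are the conventional ones rather than MLM-specific quantities.

```latex
% Big-leaf resistance analogy for the deposition velocity (textbook sketch)
V_d = \frac{1}{R_a + R_b + R_c}, \qquad F = -\,V_d\,C(z_{\mathrm{ref}})
% R_a: aerodynamic resistance, R_b: quasi-laminar boundary-layer resistance,
% R_c: bulk canopy (surface) resistance, C(z_ref): concentration at the reference height.
```

The non-physical behaviour discussed above enters through how the leaf-level analogue of R_b is combined into the total canopy resistance.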
Origin of the sensitivity in modeling the glide behaviour of dislocations
Pei, Zongrui; Stocks, George Malcolm
2018-03-26
The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation is reduced to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.
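The sensitivity at issue can be illustrated with the classical Peierls-Nabarro estimate of the Peierls stress, in which the inputs enter exponentially; this is a textbook expression for an edge dislocation, not the new formulation proposed here.

```latex
% Classical Peierls-Nabarro estimate (textbook form, edge dislocation)
\sigma_P \simeq \frac{2G}{1-\nu}\,\exp\!\left(-\frac{2\pi a}{(1-\nu)\,b}\right)
% G: shear modulus, \nu: Poisson's ratio, a: interplanar spacing, b: Burgers vector.
% Because a, b and \nu sit in the exponent, small changes in these inputs
% produce large changes in the predicted stress.
```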
NASA Astrophysics Data System (ADS)
Park, Sohyun
2018-02-01
We examine the origin of two opposite results for the growth of perturbations in the Deser-Woodard (DW) nonlocal gravity model. One group previously analyzed the model in its original nonlocal form and showed that the growth of structure in the DW model is enhanced compared to general relativity (GR) and thus concluded that the model was ruled out. Recently, however, another group has reanalyzed it by localizing the model and found that the growth in their localized version is suppressed even compared to the one in GR. The question was whether the discrepancy originates from an intrinsic difference between the nonlocal and localized formulations or is due to their different implementations of the subhorizon limit. We show that the nonlocal and local formulations give the same solutions for the linear perturbations as long as the initial conditions are set the same. The different implementations of the subhorizon limit lead to different transient behaviors of some perturbation variables; however, they do not affect the growth of matter perturbations at the sub-horizon scale much. In the meantime, we also report an error in the numerical calculation code of the former group and verify that after fixing the error the nonlocal version also gives the suppressed growth. Finally, we discuss two alternative definitions of the effective gravitational constant taken by the two groups and some open problems.
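For reference, the Deser-Woodard class of models modifies the Einstein-Hilbert action by a free function of the inverse d'Alembertian acting on the Ricci scalar; the sketch below shows the commonly quoted form, up to sign and normalization conventions.

```latex
% Deser-Woodard nonlocal action (schematic form, conventions vary)
S = \frac{1}{16\pi G}\int d^{4}x\,\sqrt{-g}\;R\left[\,1 + f\!\left(\Box^{-1}R\right)\right] + S_{\mathrm{matter}}
% Localization replaces \Box^{-1}R by auxiliary scalar fields obeying \Box\phi = R (and their conjugates),
% which is the origin of the "localized version" referred to above.
```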
Adjacent-Categories Mokken Models for Rater-Mediated Assessments
Wind, Stefanie A.
2016-01-01
Molenaar extended Mokken’s original probabilistic-nonparametric scaling models for use with polytomous data. These polytomous extensions of Mokken’s original scaling procedure have facilitated the use of Mokken scale analysis as an approach to exploring fundamental measurement properties across a variety of domains in which polytomous ratings are used, including rater-mediated educational assessments. Because their underlying item step response functions (i.e., category response functions) are defined using cumulative probabilities, polytomous Mokken models can be classified as cumulative models based on the classifications of polytomous item response theory models proposed by several scholars. In order to permit a closer conceptual alignment with educational performance assessments, this study presents an adjacent-categories variation on the polytomous monotone homogeneity and double monotonicity models. Data from a large-scale rater-mediated writing assessment are used to illustrate the adjacent-categories approach, and results are compared with the original formulations. Major findings suggest that the adjacent-categories models provide additional diagnostic information related to individual raters’ use of rating scale categories that is not observed under the original formulation. Implications are discussed in terms of methods for evaluating rating quality. PMID:29795916
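The distinction driving the study can be stated compactly: cumulative (Mokken-type) item step response functions are defined on the probability of reaching or exceeding a category, whereas adjacent-categories functions condition on the two neighbouring categories only. The sketch below gives the two generic definitions, not the specific nonparametric assumptions of the models.

```latex
% Cumulative item step response function (polytomous Mokken models)
P^{\mathrm{cum}}_{ik}(\theta) = P\left(X_i \ge k \mid \theta\right)
% Adjacent-categories response function (the variation proposed here)
P^{\mathrm{adj}}_{ik}(\theta) = P\left(X_i = k \mid X_i \in \{k-1,\,k\},\ \theta\right)
```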
NASA Astrophysics Data System (ADS)
Krčmár, Roman; Šamaj, Ladislav
2018-01-01
The partition function of the symmetric (zero electric field) eight-vertex model on a square lattice can be formulated either in the original "electric" vertex format or in an equivalent "magnetic" Ising-spin format. In this paper, both electric and magnetic versions of the model are studied numerically by using the corner transfer matrix renormalization-group method which provides reliable data. The emphasis is put on the calculation of four specific critical exponents, related by two scaling relations, and of the central charge. The numerical method is first tested in the magnetic format, the obtained dependencies of critical exponents on the model's parameters agree with Baxter's exact solution, and weak universality is confirmed within the accuracy of the method due to the finite size of the system. In particular, the critical exponents η and δ are constant as required by weak universality. On the other hand, in the electric format, analytic formulas based on the scaling relations are derived for the critical exponents ηe and δe which agree with our numerical data. These exponents depend on the model's parameters which is evidence for the full nonuniversality of the symmetric eight-vertex model in the original electric formulation.
Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A; Smith, David L
2015-08-01
Since the original Ross-Macdonald formulations of vector-borne disease transmission, there has been a broad proliferation of mathematical models of vector-borne disease, but many of these models retain most or all of the simplifying assumptions of the original formulations. Recently, there has been a new expansion of mathematical frameworks that contain explicit representations of the vector life cycle including aquatic stages, multiple vector species, host heterogeneity in biting rate, realistic vector feeding behavior, and spatial heterogeneity. In particular, there are now multiple frameworks for spatially explicit dynamics with movements of vector, host, or both. These frameworks are flexible and powerful, but require additional data to take advantage of these features. For a given question posed, utilizing a range of models with varying complexity and assumptions can provide a deeper understanding of the answers derived from models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
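As context for the original formulations referred to above, the classical Ross-Macdonald reproduction number compresses the simplifying assumptions (homogeneous biting, constant vector density, exponentially distributed mosquito survival) into a single expression; the standard textbook form is sketched below.

```latex
% Classical Ross-Macdonald basic reproduction number (textbook form)
R_0 = \frac{m\,a^{2}\,b\,c\,e^{-g n}}{g\,r}
% m: vectors per host, a: biting rate on humans, b, c: transmission probabilities per bite,
% g: vector mortality rate, n: extrinsic incubation period, r: human recovery rate.
```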
Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)
NASA Astrophysics Data System (ADS)
Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus
2017-07-01
The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect caused by the original training on satellite data. Hence, the new version allows for an implementation into climate models in combination with an existing stratospheric transport scheme. Finally, the model is now formulated on several vertical levels encompassing the vertical range in which polar ozone depletion is observed. The results of the Polar SWIFT model are validated with independent Microwave Limb Sounder (MLS) satellite observations and output from the original detailed chemistry model of ATLAS.
Dynamic subfilter-scale stress model for large-eddy simulations
NASA Astrophysics Data System (ADS)
Rouhi, A.; Piomelli, U.; Geurts, B. J.
2016-08-01
We present a modification of the integral length-scale approximation (ILSA) model originally proposed by Piomelli et al. [Piomelli et al., J. Fluid Mech. 766, 499 (2015), 10.1017/jfm.2015.29] and apply it to plane channel flow and a backward-facing step. In the ILSA models the length scale is expressed in terms of the integral length scale of turbulence and is determined by the flow characteristics, decoupled from the simulation grid. In the original formulation the model coefficient was constant, determined by requiring a desired global contribution of the unresolved subfilter scales (SFSs) to the dissipation rate, known as SFS activity; its value was found by a set of coarse-grid calculations. Here we develop two modifications. We define a measure of SFS activity (based on turbulent stresses), which adds to the robustness of the model, particularly at high Reynolds numbers, and removes the need for the prior coarse-grid calculations: The model coefficient can be computed dynamically and adapt to large-scale unsteadiness. Furthermore, the desired level of SFS activity is now enforced locally (and not integrated over the entire volume, as in the original model), providing better control over model activity and also improving the near-wall behavior of the model. Application of the local ILSA to channel flow and a backward-facing step and comparison with the original ILSA and with the dynamic model of Germano et al. [Germano et al., Phys. Fluids A 3, 1760 (1991), 10.1063/1.857955] show better control over the model contribution in the local ILSA, while the positive properties of the original formulation (including its higher accuracy compared to the dynamic model on coarse grids) are maintained. The backward-facing step also highlights the advantage of the decoupling of the model length scale from the mesh.
Comparison of Models for Ball Bearing Dynamic Capacity and Life
NASA Technical Reports Server (NTRS)
Gupta, Pradeep K.; Oswald, Fred B.; Zaretsky, Erwin V.
2015-01-01
Generalized formulations for dynamic capacity and life of ball bearings, based on the models introduced by Lundberg and Palmgren, and by Zaretsky, have been developed and implemented in the bearing dynamics computer code, ADORE. Unlike the original Lundberg-Palmgren dynamic capacity equation, where the elastic properties are part of the life constant, the generalized formulations permit variation of elastic properties of the interacting materials. The newly updated Lundberg-Palmgren model allows prediction of life as a function of elastic properties. For elastic properties similar to those of AISI 52100 bearing steel, both the original and updated Lundberg-Palmgren models provide identical results. A comparison between the Lundberg-Palmgren and the Zaretsky models shows that at relatively light loads the Zaretsky model predicts a much higher life than the Lundberg-Palmgren model. As the load increases, the Zaretsky model provides a much faster drop-off in life. This is because the Zaretsky model is much more sensitive to load than the Lundberg-Palmgren model. The generalized implementation, where all model parameters can be varied, provides an effective tool for future model validation and enhancement in bearing life prediction capabilities.
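The load sensitivity discussed above is usually summarized by the load-life exponent; a hedged sketch of the familiar relation follows (the ADORE implementation generalizes the constants and the elastic-property dependence).

```latex
% Classical load-life relation for rolling-element bearings (textbook form)
L_{10} = \left(\frac{C}{P}\right)^{p}
% L_10: life at 90% reliability (millions of revolutions), C: dynamic capacity, P: equivalent load.
% Lundberg-Palmgren ball bearings use p = 3; Zaretsky-type models imply a larger effective
% exponent (commonly quoted near p = 4), consistent with the faster drop-off in life noted above.
```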
Dust bands in the asteroid belt
NASA Technical Reports Server (NTRS)
Sykes, Mark V.; Greenberg, Richard; Dermott, Stanley F.; Nicholson, Philip D.; Burns, Joseph A.
1989-01-01
This paper describes the original IRAS observations leading to the discovery of the three dust bands in the asteroid belt and the analysis of data. Special attention is given to an analytical model of the dust band torus and to theories concerning the origin of the dust bands, including the collisional equilibrium (asteroid family), the nonequilibrium (random collision), and the comet hypotheses of dust-band origin. It is noted that neither the equilibrium nor nonequilibrium models, as currently formulated, present a complete picture of the IRAS dust-band observations.
NASA Technical Reports Server (NTRS)
Dilley, Arthur D.; McClinton, Charles R. (Technical Monitor)
2001-01-01
Results from a study to assess the accuracy of turbulent heating and skin friction prediction techniques for hypersonic applications are presented. The study uses the original and a modified Baldwin-Lomax turbulence model with a space marching code. Grid converged turbulent predictions using the wall damping formulation (original model) and local damping formulation (modified model) are compared with experimental data for several flat plates. The wall damping and local damping results are similar for hot wall conditions, but differ significantly for cold walls, i.e., T_w/T_t < 0.3, with the wall damping heating and skin friction 10-30% above the local damping results. Furthermore, the local damping predictions have reasonable or good agreement with the experimental heating data for all cases. The impact of the two formulations on the van Driest damping function and the turbulent eddy viscosity distribution for a cold wall case indicates the importance of including temperature gradient effects. Grid requirements for accurate turbulent heating predictions are also studied. These results indicate that a cell Reynolds number of 1 is required for grid converged heating predictions, but coarser grids with a y+ less than 2 are adequate for design of hypersonic vehicles. Based on the results of this study, it is recommended that the local damping formulation be used with the Baldwin-Lomax and Cebeci-Smith turbulence models in design and analysis of Hyper-X and future hypersonic vehicles.
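The wall-damping versus local-damping distinction enters through the van Driest factor in the inner-layer mixing length of the Baldwin-Lomax and Cebeci-Smith models; the sketch below shows the standard form, with the two damping options differing in whether density and viscosity are taken at the wall or locally (schematic, not the exact coding in the space-marching solver).

```latex
% Inner-layer mixing length with van Driest damping (standard form)
\ell = \kappa\,y\left[1 - \exp\!\left(-\,y^{+}/A^{+}\right)\right], \qquad A^{+} \approx 26
% Wall damping:  y^{+} = y\,\sqrt{\rho_w \tau_w}\,/\,\mu_w   (wall density and viscosity)
% Local damping: y^{+} = y\,\sqrt{\rho(y)\,\tau_w}\,/\,\mu(y) (local values, bringing in the
% temperature-gradient effects that matter on cold walls).
```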
Potential formulation of sleep dynamics
NASA Astrophysics Data System (ADS)
Phillips, A. J. K.; Robinson, P. A.
2009-02-01
A physiologically based model of the mechanisms that control the human sleep-wake cycle is formulated in terms of an equivalent nonconservative mechanical potential. The potential is analytically simplified and reduced to a quartic two-well potential, matching the bifurcation structure of the original model. This yields a dynamics-based model that is analytically simpler and has fewer parameters than the original model, allowing easier fitting to experimental data. This model is first demonstrated to semiquantitatively match the dynamics of the physiologically based model from which it is derived, and is then fitted directly to a set of experimentally derived criteria. These criteria place rigorous constraints on the parameter values, and within these constraints the model is shown to reproduce normal sleep-wake dynamics and recovery from sleep deprivation. Furthermore, this approach enables insights into the dynamics by direct analogies to phenomena in well studied mechanical systems. These include the relation between friction in the mechanical system and the timecourse of neurotransmitter action, and the possible relation between stochastic resonance and napping behavior. The model derived here also serves as a platform for future investigations of sleep-wake phenomena from a dynamical perspective.
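The reduction described here can be pictured as overdamped motion of a state variable in a tilted quartic double well, with the tilt driven by the homeostatic and circadian inputs; the generic form of such a potential is sketched below (illustrative only, not the fitted parameterization of the paper).

```latex
% Generic tilted quartic two-well potential (illustrative form)
V(x) = a\,x^{4} - b\,x^{2} + c(t)\,x, \qquad a, b > 0
% Overdamped dynamics: \gamma\,\dot{x} = -V'(x) + \xi(t)
% The two minima play the roles of wake and sleep; the slowly varying tilt c(t)
% (homeostatic/circadian drive) switches which state is globally stable.
```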
A feedback control model for network flow with multiple pure time delays
NASA Technical Reports Server (NTRS)
Press, J.
1972-01-01
A control model describing a network flow hindered by multiple pure time (or transport) delays is formulated. Feedbacks connect each desired output with a single control sector situated at the origin. The dynamic formulation invokes the use of differential difference equations. This causes the characteristic equation of the model to consist of transcendental functions instead of a common algebraic polynomial. A general graphical criterion is developed to evaluate the stability of such a problem. A digital computer simulation confirms the validity of such criterion. An optimal decision making process with multiple delays is presented.
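The transcendental character of the characteristic equation follows directly from the pure delays; a minimal linear example makes this explicit (a generic sketch, not the specific network-flow model of the report).

```latex
% Scalar linear system with multiple pure time delays (generic sketch)
\dot{x}(t) = a\,x(t) + \sum_{i=1}^{n} b_i\,x(t-\tau_i)
% The trial solution x \propto e^{st} yields the transcendental characteristic equation
s - a - \sum_{i=1}^{n} b_i\,e^{-s\tau_i} = 0,
% whose infinitely many roots must all satisfy Re(s) < 0 for stability,
% motivating a graphical stability criterion.
```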
NASA Astrophysics Data System (ADS)
Sayre, George Anthony
The purpose of this dissertation was to develop the C++ program Emergency Dose to calculate transport of radionuclides through indoor spaces using intermediate fidelity physics that provides improved spatial heterogeneity over well-mixed models such as MELCOR and much lower computation times than CFD codes such as FLUENT. Modified potential flow theory (MPFT), which is an original formulation of potential flow theory with additions of turbulent jet and natural convection approximations, calculates spatially heterogeneous velocity fields that well-mixed models cannot predict. Other original contributions of MPFT are: (1) generation of high fidelity boundary conditions relative to well-mixed-CFD coupling methods (conflation), (2) broadening of potential flow applications to arbitrary indoor spaces previously restricted to specific applications such as exhaust hood studies, and (3) great reduction of computation time relative to CFD codes without total loss of heterogeneity. Additionally, the Lagrangian transport module, which is discussed in Sections 1.3 and 2.4, showcases an ensemble-based formulation thought to be original to interior studies. Velocity and concentration transport benchmarks against analogous formulations in COMSOL produced favorable results, with discrepancies resulting from the tetrahedral meshing used in COMSOL outperforming the Cartesian method used by Emergency Dose. A performance comparison of the concentration transport modules against MELCOR showed that Emergency Dose held advantages over the well-mixed model, especially in scenarios with many interior partitions and varied source positions. A performance comparison of the velocity module against FLUENT showed that viscous drag provided the largest error between Emergency Dose and CFD velocity calculations, but that Emergency Dose's turbulent jets well approximated the corresponding CFD jets. Overall, Emergency Dose was found to provide a viable intermediate solution method for concentration transport with relatively low computation times.
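The baseline on which modified potential flow theory builds is the classical potential-flow statement, in which an incompressible, irrotational velocity field derives from a scalar potential satisfying Laplace's equation; the sketch below gives that baseline only (the turbulent-jet and natural-convection additions described above are not reproduced).

```latex
% Classical potential flow (the baseline extended by MPFT)
\mathbf{u} = \nabla\phi, \qquad \nabla\cdot\mathbf{u} = 0 \;\;\Rightarrow\;\; \nabla^{2}\phi = 0
% Boundary conditions: \partial\phi/\partial n = u_n on inlets and outlets, \partial\phi/\partial n = 0 on walls.
```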
2010-10-01
The regret-based model, however, is mathematically more parsimonious; the original DCA formulation required several mathematical manipulations, obscuring the simplicity of the regret approach.
Using the Global Forest Products Model (GFPM version 2016 with BPMPD)
Joseph Buongiorno; Shushuai Zhu
2016-01-01
 The GFPM is an economic model of global production, consumption and trade of forest products. The original formulation and several applications are described in Buongiorno et al. (2003). However, subsequent versions, including the GFPM 2016 reflect significant changes and extensions. The GFPM 2016 software uses the...
Turbulence Model Predictions of Strongly Curved Flow in a U-Duct
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Gatski, Thomas B.; Morrison, Joseph H.
2000-01-01
The ability of three types of turbulence models to accurately predict the effects of curvature on the flow in a U-duct is studied. An explicit algebraic stress model performs slightly better than one- or two-equation linear eddy viscosity models, although it is necessary to fully account for the variation of the production-to-dissipation-rate ratio in the algebraic stress model formulation. In their original formulations, none of these turbulence models fully captures the suppressed turbulence near the convex wall, whereas a full Reynolds stress model does. Some of the underlying assumptions used in the development of algebraic stress models are investigated and compared with the computed flowfield from the full Reynolds stress model. Through this analysis, the assumption of Reynolds stress anisotropy equilibrium used in the algebraic stress model formulation is found to be incorrect in regions of strong curvature. By accounting for the local variation of the principal axes of the strain rate tensor, the explicit algebraic stress model correctly predicts the suppressed turbulence in the outer part of the boundary layer near the convex wall.
Efficient kinetic method for fluid simulation beyond the Navier-Stokes equation.
Zhang, Raoyang; Shan, Xiaowen; Chen, Hudong
2006-10-01
We present a further theoretical extension to the kinetic-theory-based formulation of the lattice Boltzmann method of Shan [J. Fluid Mech. 550, 413 (2006)]. In addition to the higher-order projection of the equilibrium distribution function and a sufficiently accurate Gauss-Hermite quadrature in the original formulation, a regularization procedure is introduced in this paper. This procedure ensures a consistent order of accuracy control over the nonequilibrium contributions in the Galerkin sense. Using this formulation, we construct a specific lattice Boltzmann model that accurately incorporates up to third-order hydrodynamic moments. Numerical evidence demonstrates that the extended model overcomes some major defects existing in conventionally known lattice Boltzmann models, so that fluid flows at finite Knudsen number Kn can be more quantitatively simulated. Results from force-driven Poiseuille flow simulations predict the Knudsen's minimum and the asymptotic behavior of flow flux at large Kn.
NASA Technical Reports Server (NTRS)
Arnold, Steven M; Bednarcyk, Brett; Aboudi, Jacob
2004-01-01
The High-Fidelity Generalized Method of Cells (HFGMC) micromechanics model has recently been reformulated by Bansal and Pindera (in the context of elastic phases with perfect bonding) to maximize its computational efficiency. This reformulated version of HFGMC has now been extended to include both inelastic phases and imperfect fiber-matrix bonding. The present paper presents an overview of the HFGMC theory in both its original and reformulated forms and a comparison of the results of the two implementations. The objective is to establish the correlation between the two HFGMC formulations and document the improved efficiency offered by the reformulation. The results compare the macro and micro scale predictions of the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) versions of both formulations into the inelastic regime, and, in the case of the discontinuous reinforcement version, with both perfect and weak interfacial bonding. The results demonstrate that identical predictions are obtained using either the original or reformulated implementations of HFGMC aside from small numerical differences in the inelastic regime due to the different implementation schemes used for the inelastic terms present in the two formulations. Finally, a direct comparison of execution times is presented for the original formulation and reformulation code implementations. It is shown that as the discretization employed in representing the composite repeating unit cell becomes increasingly refined (requiring a larger number of sub-volumes), the reformulated implementation becomes significantly (approximately an order of magnitude at best) more computationally efficient in both the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) cases.
NASA Astrophysics Data System (ADS)
Tscharnuter, W. M.
1980-02-01
Modes and model concept of star formation are reviewed, beginning with the theory of Kant (1755), via Newton's exact mathematical formulation of the laws of motion, his recognition of the universal validity of general gravitation, to modern concepts and hypotheses. Axisymmetric and spherically symmetric collapse models are discussed, and the origin of double and multiple star systems is examined.
An enhanced version of a bone-remodelling model based on the continuum damage mechanics theory.
Mengoni, M; Ponthot, J P
2015-01-01
The purpose of this work was to propose an enhancement of Doblaré and García's internal bone remodelling model based on the continuum damage mechanics (CDM) theory. In their paper, they stated that the evolution of the internal variables of the bone microstructure, and its incidence on the modification of the elastic constitutive parameters, may be formulated following the principles of CDM, although no actual damage was considered. The resorption and apposition criteria (similar to the damage criterion) were expressed in terms of a mechanical stimulus. However, the resorption criterion lacks dimensional consistency with the remodelling rate. We propose here an enhancement to this resorption criterion, ensuring dimensional consistency while retaining the physical properties of the original remodelling model. We then analyse the change in the resorption criterion hypersurface in the stress space for a two-dimensional (2D) analysis. We finally apply the new formulation to analyse the structural evolution of a 2D femur. This analysis gives results consistent with the original model but with a faster and more stable convergence rate.
Mathematical Metaphors: Problem Reformulation and Analysis Strategies
NASA Technical Reports Server (NTRS)
Thompson, David E.
2005-01-01
This paper addresses the critical need for the development of intelligent or assisting software tools for the scientist who is working in the initial problem formulation and mathematical model representation stage of research. In particular, examples of that representation in fluid dynamics and instability theory are discussed. The creation of a mathematical model that is ready for application of certain solution strategies requires extensive symbolic manipulation of the original mathematical model. These manipulations can be as simple as term reordering or as complicated as discovery of various symmetry groups embodied in the equations, whereby Bäcklund-type transformations create new determining equations and integrability conditions or create differential Gröbner bases that are then solved in place of the original nonlinear PDEs. Several examples are presented of the kinds of problem formulations and transforms that can be frequently encountered in model representation for fluids problems. The capability of intelligently automating these types of transforms, available prior to actual mathematical solution, is advocated. Physical meaning and assumption-understanding can then be propagated through the mathematical transformations, allowing for explicit strategy development.
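As a small illustration of the kind of symbolic transformation the paper advocates automating, the Python/SymPy sketch below substitutes a travelling-wave ansatz into Burgers' equation and lets the computer algebra system carry out the chain-rule reduction of the PDE to an ODE; this is a toy example written for this note, not a tool referenced in the paper.

```python
import sympy as sp

# Toy example of an automated model transformation: substitute the travelling-wave
# ansatz u(x, t) = f(x - c*t) into Burgers' equation u_t + u*u_x - nu*u_xx = 0
# and let SymPy perform the chain-rule differentiation.
x, t, c, nu = sp.symbols("x t c nu", real=True)
f = sp.Function("f")

u = f(x - c * t)                                  # travelling-wave ansatz
residual = u.diff(t) + u * u.diff(x) - nu * u.diff(x, 2)

print(sp.simplify(residual))
# Every term now depends only on the wave coordinate x - c*t,
# i.e. the original PDE has been reduced to an ODE for f.
```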
Evaluation of candidate working fluid formulations for the electrothermal - chemical wind tunnel
NASA Technical Reports Server (NTRS)
Akyurtlu, Jale F.; Akyurtlu, Ates
1991-01-01
Various candidate chemical formulations are evaluated as a precursor for the working fluid to be used in the electrothermal hypersonic test facility which was under study at the NASA LaRC Hypersonic Propulsion Branch, and the formulations which would most closely satisfy the goals set for the test facility are identified. Out of the four tasks specified in the original proposal, the first two, literature survey and collection of kinetic data, are almost completed. The third task, work on a mathematical model of the ET wind tunnel operation, was started and concentrated on the expansion in the nozzle with finite rate kinetics.
Energy considerations in the Community Atmosphere Model (CAM)
Williamson, David L.; Olson, Jerry G.; Hannay, Cécile; ...
2015-06-30
An error in the energy formulation in the Community Atmosphere Model (CAM) is identified and corrected. Ten year AMIP simulations are compared using the correct and incorrect energy formulations. Statistics of selected primary variables all indicate physically insignificant differences between the simulations, comparable to differences with simulations initialized with rounding sized perturbations. The two simulations are so similar mainly because of an inconsistency in the application of the incorrect energy formulation in the original CAM. CAM used the erroneous energy form to determine the states passed between the parameterizations, but used a form related to the correct formulation for the state passed from the parameterizations to the dynamical core. If the incorrect form is also used to determine the state passed to the dynamical core the simulations are significantly different. In addition, CAM uses the incorrect form for the global energy fixer, but that seems to be less important. The difference of the magnitude of the fixers using the correct and incorrect energy definitions is very small.
NASA Astrophysics Data System (ADS)
Kim, Jongho; Ivanov, Valeriy Y.; Katopodes, Nikolaos D.
2013-09-01
A novel two-dimensional, physically based model of soil erosion and sediment transport coupled to models of hydrological and overland flow processes has been developed. The Hairsine-Rose formulation of erosion and deposition processes is used to account for size-selective sediment transport and differentiate bed material into original and deposited soil layers. The formulation is integrated within the framework of the hydrologic and hydrodynamic model tRIBS-OFM, Triangulated irregular network-based, Real-time Integrated Basin Simulator-Overland Flow Model. The integrated model explicitly couples the hydrodynamic formulation with the advection-dominated transport equations for sediment of multiple particle sizes. To solve the system of equations including both the Saint-Venant and the Hairsine-Rose equations, the finite volume method is employed based on Roe's approximate Riemann solver on an unstructured grid. The formulation yields space-time dynamics of flow, erosion, and sediment transport at fine scale. The integrated model has been successfully verified with analytical solutions and empirical data for two benchmark cases. Sensitivity tests to grid resolution and the number of used particle sizes have been carried out. The model has been validated at the catchment scale for the Lucky Hills watershed located in southeastern Arizona, USA, using 10 events for which catchment-scale streamflow and sediment yield data were available. Since the model is based on physical laws and explicitly uses multiple types of watershed information, satisfactory results were obtained. The spatial output has been analyzed and the driving role of topography in erosion processes has been discussed. It is expected that the integrated formulation of the model has the promise to reduce uncertainties associated with typical parameterizations of flow and erosion processes. A potential for more credible modeling of earth-surface processes is thus anticipated.
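Schematically, the coupled system pairs the two-dimensional shallow-water (Saint-Venant) equations with one advection equation per particle-size class, whose source terms carry the Hairsine-Rose erosion and deposition exchanges; the sketch below shows this structure only, with the momentum equations and the detailed Hairsine-Rose source terms omitted.

```latex
% Structure of the coupled flow-sediment system (schematic)
\frac{\partial h}{\partial t} + \nabla\cdot(h\,\mathbf{u}) = r - i
\qquad \text{(water mass; rainfall } r\text{, infiltration } i\text{)}
\frac{\partial (h\,c_k)}{\partial t} + \nabla\cdot(h\,\mathbf{u}\,c_k) = e_k + r_k - d_k,
\qquad k = 1,\dots,N
% c_k: depth-averaged concentration of size class k; e_k, r_k, d_k: Hairsine-Rose
% rainfall-driven entrainment, flow-driven re-entrainment and deposition for that class.
```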
A BRST formulation for the conic constrained particle
NASA Astrophysics Data System (ADS)
Barbosa, Gabriel D.; Thibes, Ronaldo
2018-04-01
We describe the gauge invariant BRST formulation of a particle constrained to move in a general conic. The model considered constitutes an explicit example of an originally second-class system which can be quantized within the BRST framework. We initially impose the conic constraint by means of a Lagrange multiplier leading to a consistent second-class system which generalizes previous models studied in the literature. After calculating the constraint structure and the corresponding Dirac brackets, we introduce a suitable first-order Lagrangian, the resulting modified system is then shown to be gauge invariant. We proceed to the extended phase space introducing fermionic ghost variables, exhibiting the BRST symmetry transformations and writing the Green’s function generating functional for the BRST quantized model.
Leung, Cassandra Ming Shan; Tong, Zhenbo; Zhou, Qi Tony; Chan, John Gar Yan; Tang, Patricia; Sun, Siping; Yang, Runyu; Chan, Hak-Kim
2016-09-01
The design of a dry powder inhaler device has significant influence on aerosol performance; however, such influence may be different between the drug-only and carrier-based formulations. The present study aims to examine the potential difference on the dispersion between these distinct types of formulations, using Aerolizer(®) as a model inhaler with the original or modified (cross-grid) designs. A coupled CFD-discrete element method analysis was employed to determine the flow characteristics and particle impaction. Micronized salbutamol sulphate as a drug-only formulation and three lactose carrier-based formulations with various drug-to-carrier weight ratios 1:5, 1:10 and 1:100 were used. The in vitro aerosolization performance was assessed by a next-generation impactor operating at 100 L/min. Using the original device, FPFloaded was reduced from 47.5 ± 3.8% for the drug-only formulation to 31.8 ± 0.7%, 32.1 ± 0.7% and 12.9 ± 1.0% for the 1:5, 1:10 and 1:100 formulations, respectively. With the cross-grid design, powder-mouthpiece impaction was increased, which caused not only powder deagglomeration but also significant drug retention (doubling or more) in the mouthpiece, and the net result is a significant decrease in FPFloaded to 36.8 ± 1.2%, 20.9 ± 2.6% and 21.9 ± 1.5% for the drug-only, 1:5 and 1:10 formulations, respectively. In contrast, the FPFloaded of the 1:100 formulation remained the same at 12.1 ± 1.3%, indicating the increased mouthpiece drug retention was compensated by increased drug detachment from carriers caused by increased powder-mouthpiece impaction. In conclusion, this study has elucidated different effects and the mechanism on the aerosolization of varied dry powder inhaler formulations due to the grid design.
NASA Astrophysics Data System (ADS)
Lyu, Dandan; Li, Shaofan
2017-10-01
Crystal defects have microstructure, and this microstructure should be related to the microstructure of the original crystal. Hence each type of crystal may have similar defects due to the same failure mechanism originating from the same microstructure, if they are under the same loading conditions. In this work, we propose a multiscale crystal defect dynamics (MCDD) model that models defects by considering their intrinsic microstructure derived from the microstructure or material genome of the original perfect crystal. The main novelties of the present work are: (1) the discrete exterior calculus and algebraic topology theory are used to construct a scale-up (coarse-grained) dual lattice model for crystal defects, which may represent all possible defect modes inside a crystal; (2) a higher order Cauchy-Born rule (up to the fourth order) is adopted to construct atomistic-informed constitutive relations for various defect process zones, and (3) a hierarchical strain gradient theory based finite element formulation is developed to support a hierarchical multiscale cohesive (process) zone model for various defects in a unified formulation. The efficiency of the MCDD computational algorithm allows us to simulate dynamic defect evolution at large scale while taking into account atomistic interaction. The MCDD model has been validated by comparing the results of MCDD simulations with those of molecular dynamics (MD) in the cases of nanoindentation and uniaxial tension. Numerical simulations have shown that the MCDD model can predict dislocation nucleation induced instability and inelastic deformation, and thus it may provide an alternative solution to study crystal plasticity.
1989-07-21
formulation of physiologically-based pharmacokinetic models. Adult male Sprague-Dawley rats and male beagle dogs will be administered equal doses...experiments in the dog. Physiologically-based pharmacokinetic models will be developed and validated for oral and inhalation exposures to halocarbons...of conducting experiments in dogs. The original physiologic model for the rat will be scaled up to predict halocarbon pharmacokinetics in the dog.
On the Frozen Soil Scheme for High Latitude Regions
NASA Astrophysics Data System (ADS)
Ganji, A.; Sushama, L.
2014-12-01
Regional and global climate model simulated streamflows for high-latitude regions show systematic biases, particularly in the timing and magnitude of spring peak flows. Though these biases could be related to the snow water equivalent and spring temperature biases in models, a good part of these biases is due to the unaccounted effects of non-uniform infiltration capacity of the frozen ground and other related processes. In this paper, the frozen soil scheme in the Canadian Land Surface Scheme (CLASS), which is used in the Canadian regional and global climate models, is modified to include fractional permeable area, supercooled liquid water and a new formulation for hydraulic conductivity. Interflow is also included in the experiments presented in this study to better capture streamflows after the snowmelt season. The impact of these modifications on the regional hydrology, particularly streamflow, is assessed by comparing three simulations, performed with the original and two modified versions of CLASS, driven by atmospheric forcing data from the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data (ERA-Interim), for the 1990-2001 period, over a northeast Canadian domain. The two modified versions of CLASS differ in the soil hydraulic conductivity and matric potential formulations, with one version being based on formulations from a previous study and the other being newly proposed. Results suggest statistically significant decreases in infiltration for the simulation with the new hydraulic conductivity and matric potential formulations and fractional permeable area concept, compared to the original version of CLASS, which is also reflected in the increased spring surface runoff and streamflows in this simulation with modified CLASS, over most of the study domain. The simulated spring peaks and their timing in this simulation are also in better agreement with those observed.
On improving cold region hydrological processes in the Canadian Land Surface Scheme
NASA Astrophysics Data System (ADS)
Ganji, Arman; Sushama, Laxmi; Verseghy, Diana; Harvey, Richard
2017-01-01
Regional and global climate model simulated streamflows for high-latitude regions show systematic biases, particularly in the timing and magnitude of spring peak flows. Though these biases could be related to the snow water equivalent and spring temperature biases in models, a good part of these biases is due to the unaccounted effects of non-uniform infiltration capacity of the frozen ground and other related processes. In this paper, the treatment of frozen water in the Canadian Land Surface Scheme (CLASS), which is used in the Canadian regional and global climate models, is modified to include fractional permeable area, supercooled liquid water and a new formulation for hydraulic conductivity. The impact of these modifications on the regional hydrology, particularly streamflow, is assessed by comparing three simulations performed with the original and two modified versions of CLASS, driven by atmospheric forcing data from the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-Interim) for the 1990-2001 period over a northeast Canadian domain. The two modified versions of CLASS differ in the soil hydraulic conductivity and matric potential formulations, with one version being based on formulations from a previous study and the other being newly proposed. Results suggest statistically significant decreases in infiltration and therefore soil moisture during the snowmelt season for the simulation with the new hydraulic conductivity and matric potential formulations and fractional permeable area concept compared to the original version of CLASS, which is also reflected in the increased spring surface runoff and streamflows in this simulation with modified CLASS over most of the study domain. The simulated spring peaks and their timing in this simulation are also in better agreement with those observed. This study thus demonstrates the importance of treatment of frozen water for realistic simulation of streamflows.
Mathematical modeling of the aerodynamic characteristics in flight dynamics
NASA Technical Reports Server (NTRS)
Tobak, M.; Chapman, G. T.; Schiff, L. B.
1984-01-01
Basic concepts involved in the mathematical modeling of the aerodynamic response of an aircraft to arbitrary maneuvers are reviewed. The original formulation of an aerodynamic response in terms of nonlinear functionals is shown to be compatible with a derivation based on the use of nonlinear functional expansions. Extensions of the analysis through its natural connection with ideas from bifurcation theory are indicated.
Yamamoto, H; Kojima, Y; Okuyama, T; Abasolo, W P; Gril, J
2002-08-01
In this study, a basic model is introduced to describe the biomechanical properties of the wood from the viewpoint of the composite structure of its cell wall. First, the mechanical interaction between the cellulose microfibril (CMF) as a bundle framework and the lignin-hemicellulose as a matrix (MT) skeleton in the secondary wall is formulated based on "the two phase approximation." Thereafter, the origins of (1) tree growth stress, (2) shrinkage or swelling anisotropy of the wood, and (3) moisture dependency of the Young's modulus of wood along the grain were simulated using the newly introduced model. Through the model formulation; (1) the behavior of the cellulose microfibril (CMF) and the matrix substance (MT) during cell wall maturation was estimated; (2) the moisture reactivity of each cell wall constituent was investigated; and (3) a realistic model of the fine composite structure of the matured cell wall was proposed. Thus, it is expected that the fine structure and internal property of each cell wall constituent can be estimated through the analyses of the macroscopic behaviors of wood based on the two phase approximation.
NASA Astrophysics Data System (ADS)
Hayami, Masao; Seino, Junji; Nakai, Hiromi
2018-03-01
This article proposes a gauge-origin independent formalism of the nuclear magnetic shielding constant in the two-component relativistic framework based on the unitary transformation. The proposed scheme introduces the gauge factor and the unitary transformation into the atomic orbitals. The two-component relativistic equation is formulated by block-diagonalizing the Dirac Hamiltonian together with gauge factors. This formulation is available for arbitrary relativistic unitary transformations. Then, the infinite-order Douglas-Kroll-Hess (IODKH) transformation is applied to the present formulation. Next, the analytical derivatives of the IODKH Hamiltonian for the evaluation of the nuclear magnetic shielding constant are derived. Results obtained from the numerical assessments demonstrate that the present formulation removes the gauge-origin dependence completely. Furthermore, the formulation with the IODKH transformation gives results that are close to those in four-component and other two-component relativistic schemes.
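The gauge factor attached to the atomic orbitals is the familiar London (gauge-including atomic orbital) phase; a sketch of that standard nonrelativistic construction is given below, up to unit conventions, while the relativistic IODKH transformation applied to it in the paper is not reproduced.

```latex
% London / gauge-including atomic orbital (standard form, up to unit conventions)
\chi_\mu^{\mathrm{GIAO}}(\mathbf{r};\mathbf{B}) =
  \exp\!\left[-\tfrac{i}{2c}\,(\mathbf{B}\times\mathbf{R}_\mu)\cdot\mathbf{r}\right]\chi_\mu(\mathbf{r})
% R_mu: centre of basis function mu. Attaching this field-dependent phase makes computed
% magnetic properties independent of the arbitrary global gauge origin.
```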
Improvements, testing and development of the ADM-τ sub-grid surface tension model for two-phase LES
NASA Astrophysics Data System (ADS)
Aniszewski, Wojciech
2016-12-01
In this paper, a specific subgrid term occurring in Large Eddy Simulation (LES) of two-phase flows is investigated. This and other subgrid terms are presented; we subsequently elaborate on the existing models for them and re-formulate the ADM-τ model for sub-grid surface tension previously published by these authors. This paper presents a substantial conceptual simplification over the original model version, accompanied by a decrease in its computational cost. At the same time, it addresses the issues the original model version faced, e.g. it introduces non-isotropic applicability criteria based on the resolved interface's principal curvature radii. Additionally, this paper introduces more thorough testing of the ADM-τ, in both simple and complex flows.
Yang, Xiaoxia; Duan, John; Fisher, Jeffrey
2016-01-01
A previously presented physiologically-based pharmacokinetic model for immediate release (IR) methylphenidate (MPH) was extended to characterize the pharmacokinetic behaviors of oral extended release (ER) MPH formulations in adults for the first time. Information on the anatomy and physiology of the gastrointestinal (GI) tract, together with the biopharmaceutical properties of MPH, was integrated into the original model, with model parameters representing hepatic metabolism and intestinal non-specific loss recalibrated against in vitro and in vivo kinetic data sets with IR MPH. A Weibull function was implemented to describe the dissolution of different ER formulations. A variety of mathematical functions can be utilized to account for the engineered release/dissolution technologies to achieve better model performance. The physiological absorption model tracked well the plasma concentration profiles in adults receiving a multilayer-release MPH formulation or Metadate CD, while some degree of discrepancy was observed between predicted and observed plasma concentration profiles for Ritalin LA and Medikinet Retard. A local sensitivity analysis demonstrated that model parameters associated with the GI tract significantly influenced model predicted plasma MPH concentrations, albeit to varying degrees, suggesting the importance of better understanding the GI tract physiology, along with the intestinal non-specific loss of MPH. The model provides a quantitative tool to predict the biphasic plasma time course data for ER MPH, helping elucidate factors responsible for the diverse plasma MPH concentration profiles following oral dosing of different ER formulations. PMID:27723791
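The Weibull dissolution function mentioned above is a standard empirical release profile; a sketch follows, with generic parameter names rather than the values fitted in the study.

```latex
% Weibull dissolution profile (standard empirical form)
\frac{M(t)}{M_\infty} = 1 - \exp\!\left[-\left(\frac{t - t_{\mathrm{lag}}}{\tau}\right)^{\beta}\right],
\qquad t \ge t_{\mathrm{lag}}
% \tau: time-scale parameter; \beta: shape parameter (\beta < 1 steep-then-slow, \beta = 1
% first-order, \beta > 1 sigmoidal release); t_lag: optional lag time.
```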
Piret, Jocelyne; Laforest, Geneviève; Bussières, Martin; Bergeron, Michel G
2008-03-01
The safety of an ethylene oxide/propylene oxide gel formulation containing sodium lauryl sulfate (2%, w/w), that could be a potent candidate as a topical microbicide, has been evaluated. More specifically, the subchronic (26- and 52-week) toxicity of the formulation when applied intravaginally as well as its irritating potential for the rectal, penile, eye, skin and buccal mucosa have been examined in animal models. The results showed that the vaginal administration of the gel formulation containing sodium lauryl sulfate once and twice daily (with doses 12 +/- 2 h apart) for 26 weeks to rats and for 52 weeks to rabbits induced slight to moderate histopathological alterations. When the formulation was applied intrarectally to male and female rabbits once and twice daily (with doses 12 +/- 2 h apart) for 14 days, no macroscopic or microscopic changes were reported. For both vaginal and rectal dosing, no effect was seen on the haematology, coagulation and serum chemistry parameters as well as on the body weight of animals and the relative organ weights. Other sporadic macroscopic and histopathological findings were incidental in origin and of no toxicological significance. The gel formulation containing sodium lauryl sulfate was considered as mildly irritating for the penile mucosa of rabbits, non-irritating for the eye of rabbits, mildly irritating for the skin in a rabbit model and non-irritating for the hamster cheek pouch. It is suggested that the gel formulation containing sodium lauryl sulfate is safe for most tissues that could be exposed to the product under normal use.
Lee, Janice Soo Fern; Sagaon Teyssier, Luis; Dongmo Nguimfack, Boniface; Collins, Intira Jeannie; Lallemant, Marc; Perriens, Joseph; Moatti, Jean-Paul
2016-03-15
The pediatric antiretroviral (ARV) market is poorly described in the literature, resulting in gaps in understanding treatment access. We analyzed the pediatric ARV market from 2004 to 2012 and assessed pricing trends and associated factors. Data on donor-funded procurements of pediatric ARV formulations reported to the Global Price Reporting Mechanism database from 2004 to 2012 were analyzed. Outcomes of interest were the volume and mean price per patient-year of each ARV formulation, based on WHO ARV dosing recommendations for a 10 kg child. Factors associated with the price of formulations were assessed using linear regression; potential predictors included country income classification, geographical region, market segment (originator versus generic ARVs), and number of manufacturers per formulation. All analyses were adjusted for type of formulation (single, dual or triple fixed-dose combinations (FDCs)). Data from 111 countries from 2004 to 2012 were included, with procurement of 33 formulations at a total value of USD 204 million. Use of dual and triple FDC formulations increased substantially over time, but with limited changes in price. Upon multivariate analysis, prices of originator formulations were found to be on average 72% higher than generics (p < 0.001). A 10% increase in procurement volume was associated with a 1% decrease (p < 0.001) in both originator and generic prices. The entry of one additional manufacturer producing a formulation was associated with a decrease in prices of 2% (p < 0.001) and 8% (p < 0.001) for originator and generic formulations, respectively. The mean generic ARV price did not differ by country income level. Prices of originator ARVs were 48% (p < 0.001) and 14% (p < 0.001) higher in upper-middle-income and lower-middle-income countries compared to low-income countries, respectively, with the exception of South Africa, which had lower prices despite being an upper-middle-income country. The donor-funded pediatric ARV market as represented by the GPRM database is small and lacks price competition. It is dominated by generic drugs due to the lower prices offered and the practicality of FDC formulations. This market requires continued donor support, and the current initiatives to protect it are important to ensure market viability, especially if new formulations are to be introduced in the future.
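The reported elasticities (for example, a 10% volume increase associated with a roughly 1% price decrease) are consistent with a regression on log-transformed price and volume; the Python sketch below shows one way such a model could be set up with statsmodels. The column names and file name are hypothetical, not those of the GPRM database.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: price_ppy (price per patient-year), volume, n_manufacturers,
# segment ('originator'/'generic'), income ('LIC'/'LMIC'/'UMIC'), form ('single'/'dual'/'triple').
df = pd.read_csv("gprm_pediatric_arv.csv")  # placeholder file name

model = smf.ols(
    "np.log(price_ppy) ~ np.log(volume) + n_manufacturers"
    " + C(segment) + C(income) + C(form)",
    data=df,
).fit()

print(model.summary())
# In a log-log specification the coefficient on np.log(volume) is an elasticity:
# a value near -0.1 means a 10% increase in volume is associated with a ~1% price decrease.
```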
Rethinking the logistic approach for population dynamics of mutualistic interactions.
García-Algarra, Javier; Galeano, Javier; Pastor, Juan Manuel; Iriondo, José María; Ramasco, José J
2014-12-21
Mutualistic communities have an internal structure that makes them resilient to external perturbations. Recent research has focused on their stability and the topology of the relations between the different organisms to explain the reasons for the system's robustness. Much less attention has been invested in analyzing the system dynamics. The main population models in use are modifications of the r-K formulation of the logistic equation with additional terms to account for the benefits produced by the interspecific interactions. These models have shortcomings, as the so-called r-K formulation diverges under some conditions. In this work, we introduce a model for population dynamics under mutualism that preserves the original logistic formulation. It is mathematically simpler than the widely used type II models, although it shows similar complexity in terms of fixed points and stability of the dynamics. We perform an analytical stability analysis and numerical simulations to study the model behavior in general interaction scenarios, including tests of the resilience of its dynamics under external perturbations. Despite its simplicity, our results indicate that the model dynamics shows an important richness that can be used to gain further insights into the dynamics of mutualistic communities. Copyright © 2014 Elsevier Ltd. All rights reserved.
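To illustrate the kind of behaviour described (a mutualistic benefit that raises the effective growth rate while a density-dependent term keeps the dynamics bounded), the Python sketch below integrates a two-species model of this general type; the functional form is an illustration consistent with the description above, not necessarily the exact equations of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mutualism(t, n, r, b, a, c):
    """Two-species mutualism with logistic-type saturation (generic illustration).

    The interspecific benefit b_i * n_j raises the effective growth rate, while the
    quadratic term (a_i + c_i * b_i * n_j) * n_i**2 grows with the partner's abundance,
    so trajectories remain bounded instead of diverging."""
    n1, n2 = n
    dn1 = (r[0] + b[0] * n2) * n1 - (a[0] + c[0] * b[0] * n2) * n1 ** 2
    dn2 = (r[1] + b[1] * n1) * n2 - (a[1] + c[1] * b[1] * n1) * n2 ** 2
    return [dn1, dn2]

params = (  # r, b, a, c for the two species (illustrative values)
    [0.1, 0.1], [0.02, 0.02], [0.01, 0.01], [0.05, 0.05],
)
sol = solve_ivp(mutualism, (0.0, 200.0), [5.0, 5.0], args=params)
print(sol.y[:, -1])  # both populations settle at a finite mutualistic equilibrium
```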
Some Fundamental Issues of Mathematical Simulation in Biology
NASA Astrophysics Data System (ADS)
Razzhevaikin, V. N.
2018-02-01
Some directions of simulation in biology leading to original formulations of mathematical problems are overviewed. Two of them are discussed in detail: the correct solvability of first-order linear equations with unbounded coefficients and the construction of a reaction-diffusion equation with nonlinear diffusion for a model of genetic wave propagation.
Eulerian formulation of the interacting particle representation model of homogeneous turbulence
Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca
2016-10-21
The Interacting Particle Representation Model (IPRM) of homogeneous turbulence incorporates information about the morphology of turbulent structures within the confines of a one-point model. In the original formulation [Kassinos & Reynolds, Center for Turbulence Research: Annual Research Briefs, 31-51 (1996)], the IPRM was developed in a Lagrangian setting by evolving second moments of velocity conditional on a given gradient vector. In the present work, the IPRM is re-formulated in an Eulerian framework and evolution equations are developed for the marginal PDFs. Eulerian methods avoid the issues associated with statistical estimators used by Lagrangian approaches, such as slow convergence. A specific emphasis of this work is to use the IPRM to examine the long time evolution of homogeneous turbulence. We first describe the derivation of the marginal PDF in spherical coordinates, which reduces the number of independent variables and the cost associated with Eulerian simulations of PDF models. Next, a numerical method based on radial basis functions over a spherical domain is adapted to the IPRM. Finally, results obtained with the new Eulerian solution method are thoroughly analyzed. The sensitivity of the Eulerian simulations to parameters of the numerical scheme, such as the size of the time step and the shape parameter of the radial basis functions, is examined. A comparison between Eulerian and Lagrangian simulations is performed to discern the capabilities of each of the methods. Finally, a linear stability analysis based on the eigenvalues of the discrete differential operators is carried out for both the new Eulerian solution method and the original Lagrangian approach.
Cervera, Miguel; Tesei, Claudia
2017-01-01
In this paper, an energy-equivalent orthotropic d+/d− damage model for cohesive-frictional materials is formulated. Two essential mechanical features are addressed, the damage-induced anisotropy and the microcrack closure-reopening (MCR) effects, in order to provide an enhancement of the original d+/d− model proposed by Faria et al. 1998, while keeping its high algorithmic efficiency unaltered. First, in order to ensure the symmetry and positive definiteness of the secant operator, the new formulation is developed in an energy-equivalence framework. This proves thermodynamic consistency and allows one to describe a fundamental feature of the orthotropic damage models, i.e., the reduction of the Poisson’s ratio throughout the damage process. Secondly, a “multidirectional” damage procedure is presented to extend the MCR capabilities of the original model. The fundamental aspects of this approach, devised for generic cyclic conditions, lie in maintaining only two scalar damage variables in the constitutive law, while preserving memory of the degradation directionality. The enhanced unilateral capabilities are explored with reference to the problem of a panel subjected to in-plane cyclic shear, with or without vertical pre-compression; depending on the ratio between shear and pre-compression, an absent, a partial or a complete stiffness recovery is simulated with the new multidirectional procedure. PMID:28772793
Using data tagging to improve the performance of Kanerva's sparse distributed memory
NASA Technical Reports Server (NTRS)
Rogers, David
1988-01-01
The standard formulation of Kanerva's sparse distributed memory (SDM) involves the selection of a large number of data storage locations, followed by averaging the data contained in those locations to reconstruct the stored data. A variant of this model is discussed, in which the predominant pattern is the focus of reconstruction. First, one architecture is proposed which returns the predominant pattern rather than the average pattern. However, this model will require too much storage for most uses. Next, a hybrid model is proposed, called tagged SDM, which approximates the results of the predominant pattern machine, but is nearly as efficient as Kanerva's original formulation. Finally, some experimental results are shown which confirm that significant improvements in the recall capability of SDM can be achieved using the tagged architecture.
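The sketch below illustrates the baseline SDM write/read cycle this discussion builds on, using a sign (majority) decode of the summed counters; the tagged architecture itself is not reproduced, and the word length, number of hard locations and access radius are small illustrative choices.

```python
import numpy as np
rng = np.random.default_rng(0)

N, M, R = 256, 2000, 115                  # word length, hard locations, access radius
addresses = rng.integers(0, 2, size=(M, N))
counters = np.zeros((M, N), dtype=int)

def activated(addr):
    # hard locations within Hamming distance R of the address
    return np.count_nonzero(addresses != addr, axis=1) <= R

def write(addr, data):
    counters[activated(addr)] += 2 * data - 1    # store bits as +/-1 counts

def read(addr):
    s = counters[activated(addr)].sum(axis=0)
    return (s > 0).astype(int)                   # majority ("predominant") decode

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)                          # autoassociative store
noisy = pattern.copy(); noisy[:20] ^= 1          # flip 20 address bits
print(np.count_nonzero(read(noisy) != pattern))  # typically 0: recall corrects the noise
```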
Character expansion methods for matrix models of dually weighted graphs
NASA Astrophysics Data System (ADS)
Kazakov, Vladimir A.; Staudacher, Matthias; Wynter, Thomas
1996-04-01
We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. Our methods give as a by-product an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating the equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problems of phase transitions from random to flat lattices.
Antovska, Packa; Ugarkovic, Sonja; Petruševski, Gjorgji; Stefanova, Bosilka; Manchevska, Blagica; Petkovska, Rumenka; Makreski, Petre
2017-11-01
This work concerns the development, experimental design and in vitro-in vivo correlation (IVIVC) of a controlled-release matrix formulation: a novel oral controlled delivery system for indapamide hemihydrate was developed, the formulation was optimized by experimental design, and IVIVC was evaluated on a pilot-scale batch as confirmation of a well-established formulation. In vitro dissolution profiles of controlled-release indapamide hemihydrate tablets from four different matrices were evaluated in comparison to the originator's product Natrilix (Servier) to guide further development and optimization of a hydroxyethylcellulose-based matrix controlled-release formulation. A central composite factorial design was applied to optimize the chosen controlled-release tablet formulation. Controlled-release tablets with appropriate physical and technological properties were obtained with matrix and binder concentrations varied in the ranges 20-40 w/w% and 1-3 w/w%, respectively. The experimental design defined the design space for the formulation and was a prerequisite for selecting a particular formulation for transfer to pilot scale and IVIV correlation. The release profile of the optimized formulation fit zero-order kinetics best, described by the Hixson-Crowell erosion-dependent release mechanism. Level A correlation was obtained.
Yassin, Samy; Goodwin, Daniel J; Anderson, Andrew; Sibik, Juraj; Wilson, D Ian; Gladden, Lynn F; Zeitler, J Axel
2015-01-01
Disintegration performance was measured by analysing both water ingress and tablet swelling of pure microcrystalline cellulose (MCC) and of MCC in mixtures with croscarmellose sodium using terahertz pulsed imaging (TPI). Tablets made from pure MCC with porosities of 10% and 15% showed similar swelling and transport kinetics: within the first 15 s, tablets had swollen by up to 33% of their original thickness and water had fully penetrated the tablet following Darcy flow kinetics. In contrast, MCC tablets with a porosity of 5% exhibited much slower transport kinetics, swelling by only 17% of their original thickness and reaching full water penetration after 100 s, dominated by case II transport kinetics. The effect of adding superdisintegrant to the formulation and varying the temperature of the dissolution medium between 20°C and 37°C on the swelling and transport process was quantified. We have demonstrated that TPI can be used to non-invasively analyse the complex disintegration kinetics of formulations that take place on timescales of seconds and is a promising tool to better understand the effect of dosage form microstructure on its performance. By relating immediate-release formulations to mathematical models used to describe controlled release formulations, it becomes possible to use this data for formulation design. © 2015 The Authors. Journal of Pharmaceutical Sciences published by Wiley Periodicals, Inc. and the American Pharmacists Association J Pharm Sci 104:3440–3450, 2015 PMID:26073446
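The two transport regimes named here can be distinguished by how the penetration depth scales with time: roughly proportional to the square root of time for Darcy-type flow versus proportional to time for case II transport. The sketch below fits both scalings to synthetic data; the numbers are illustrative, not the TPI measurements.

```python
import numpy as np

# Synthetic penetration-depth data (mm) vs time (s); illustrative values only.
t = np.array([1, 2, 4, 8, 15, 30, 60, 100], dtype=float)
L = 0.35 * np.sqrt(t) + np.random.default_rng(1).normal(0, 0.02, t.size)

# Darcy/Washburn-type kinetics: L = k * sqrt(t);  case II transport: L = v * t
k = np.sum(L * np.sqrt(t)) / np.sum(t)       # least-squares slope vs sqrt(t)
v = np.sum(L * t) / np.sum(t**2)             # least-squares slope vs t

sse_darcy = np.sum((L - k * np.sqrt(t))**2)
sse_case2 = np.sum((L - v * t)**2)
print(f"Darcy fit SSE {sse_darcy:.4f}  vs  case II fit SSE {sse_case2:.4f}")
```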
The Finite Strain Johnson Cook Plasticity and Damage Constitutive Model in ALEGRA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, Jason James
A finite strain formulation of the Johnson Cook plasticity and damage model and its numerical implementation into the ALEGRA code is presented. The goal of this work is to improve the predictive material failure capability of the Johnson Cook model. The new implementation consists of a coupling of damage and the stored elastic energy as well as the minimum failure strain criterion for spall included in the original model development. This effort establishes the necessary foundation for a thermodynamically consistent and complete continuum solid material model, for which all intensive properties derive from a common energy. The motivation for developing such a model is to improve upon ALEGRA's present combined model framework. Several applications of the new Johnson Cook implementation are presented. Deformation driven loading paths demonstrate the basic features of the new model formulation. Use of the model produces good comparisons with experimental Taylor impact data. Localized deformation leading to fragmentation is produced for expanding ring and exploding cylinder applications.
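For reference, the classic (small-strain) Johnson-Cook flow stress and failure-strain expressions underlying this model are sketched below. The parameter values are illustrative (typical 4340-type steel constants from the open literature), not ALEGRA inputs, and the finite-strain energy coupling described above is not reproduced.

```python
import numpy as np

def jc_flow_stress(eps_p, eps_rate, T, A=792e6, B=510e6, n=0.26,
                   C=0.014, eps0=1.0, m=1.03, T_room=293.0, T_melt=1793.0):
    """Johnson-Cook flow stress (Pa); illustrative 4340-type parameters."""
    T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    return (A + B * eps_p**n) * (1.0 + C * np.log(max(eps_rate / eps0, 1e-12))) \
           * (1.0 - T_star**m)

def jc_failure_strain(sigma_star, eps_rate, T, D=(0.05, 3.44, -2.12, 0.002, 0.61),
                      eps0=1.0, T_room=293.0, T_melt=1793.0):
    """Johnson-Cook damage model: failure strain vs triaxiality, rate, temperature."""
    T_star = (T - T_room) / (T_melt - T_room)
    return (D[0] + D[1] * np.exp(D[2] * sigma_star)) \
           * (1.0 + D[3] * np.log(max(eps_rate / eps0, 1e-12))) \
           * (1.0 + D[4] * T_star)

print(jc_flow_stress(0.1, 1e3, 600.0) / 1e6, "MPa")   # flow stress at 10% strain
print(jc_failure_strain(1.0 / 3.0, 1e3, 600.0))       # failure strain at triaxiality 1/3
```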
The macrodynamics of international migration as a sociocultural diffusion process. Part A: theory.
Diamantides, N D
1992-11-01
"This study formulates a model of the macrodynamics of international migration using a differential equation to capture the push-pull forces that propel it. The model's architecture rests on the functioning of information feedback between settled friends and family at the destination and potential emigrants at the origin." The author tests the model using data on Greek emigration to the United States since 1820 and on total emigration from Cyprus since 1946. excerpt
NASA Astrophysics Data System (ADS)
Norouzzadeh, A.; Ansari, R.; Rouhi, H.
2017-05-01
Differential form of Eringen's nonlocal elasticity theory is widely employed to capture the small-scale effects on the behavior of nanostructures. However, paradoxical results are obtained via the differential nonlocal constitutive relations in some cases such as in the vibration and bending analysis of cantilevers, and recourse must be made to the integral (original) form of Eringen's theory. Motivated by this consideration, a novel nonlocal formulation is developed herein based on the original formulation of Eringen's theory to study the buckling behavior of nanobeams. The governing equations are derived according to the Timoshenko beam theory, and are represented in a suitable vector-matrix form which is applicable to the finite-element analysis. In addition, an isogeometric analysis (IGA) is conducted for the solution of buckling problem. Construction of exact geometry using non-uniform rational B-splines and easy implementation of geometry refinement tools are the main advantages of IGA. A comparison study is performed between the predictions of integral and differential nonlocal models for nanobeams under different kinds of end conditions.
NASA Astrophysics Data System (ADS)
Boughariou, F.; Chouikhi, S.; Kallel, A.; Belgaroui, E.
2015-12-01
In this paper, we present a new theoretical and numerical formulation for the electrical and thermal breakdown phenomena, induced by charge packet dynamics, in low-density polyethylene (LDPE) insulating film under a high applied dc field. The theoretical formulation comprises the equations of bipolar charge transport together with the coupled thermo-electric equation, associated here for the first time with the bipolar transport problem. This coupled equation is solved with a finite-element numerical model. For the first time, all bipolar transport results are obtained under non-uniform temperature distributions in the sample bulk. The principal original results show a sudden, abrupt increase in local temperature associated with a sharp increase in the external and conduction current densities during the steady state. The coupling between these electrical and thermal instabilities physically reflects the local coupling between electrical conduction and the thermal Joule effect. Non-uniform temperature distributions induced by the non-uniform electrical conduction current are also presented at several times. According to our formulation, strong injection current is the principal factor in the electrical and thermal breakdown of the polymer insulating material, as shown in this work. Our formulation is also validated experimentally.
Empathy deficit in antisocial personality disorder: a psychodynamic formulation.
Malancharuvil, Joseph M
2012-09-01
Empathic difficulty is a highly consequential characteristic of antisocial personality structure. The origin, maintenance, and possible resolution of this profound deficit are not very clear. While reconstructing empathic ability is of primary importance in the treatment of antisocial personality, not many proven procedures are in evidence. In this article, the author offers a psychodynamic formulation of the origin, character, and maintenance of the empathic deficiency in antisocial personality. The author discusses some of the treatment implications from this dynamic formulation.
Coupled Hydro-Mechanical Constitutive Model for Vegetated Soils: Validation and Applications
NASA Astrophysics Data System (ADS)
Switala, Barbara Maria; Veenhof, Rick; Wu, Wei; Askarinejad, Amin
2016-04-01
It is well known that the presence of vegetation influences slope stability. However, the quantitative assessment of this contribution remains challenging. It is essential to develop a numerical model that combines mechanical root reinforcement and root water uptake and allows modelling of rainfall-induced landslides on vegetated slopes. Therefore, a novel constitutive formulation is proposed, based on the modified Cam-clay model for unsaturated soils. Mechanical root reinforcement is modelled by introducing a new constitutive parameter, which governs the evolution of the Cam-clay failure surface with the degree of root reinforcement. Evapotranspiration is modelled in terms of the root water uptake, defined as a sink term in the water flow continuity equation. The original concept is extended to different shapes of the root architecture in three dimensions and combined with the mechanical model. The model is implemented in the research finite element code Comes-Geo and in the commercial software Abaqus. The formulation is tested through a series of numerical examples that allow validation of the concept. The direct shear test and the triaxial test are modelled to assess the performance of the mechanical part of the model. To validate the hydrological part of the constitutive formulation, evapotranspiration from a vegetated box is simulated and compared with experimental results. The numerical results exhibit good agreement with the experimental data. The implemented model is capable of reproducing the results of basic geotechnical laboratory tests. Moreover, the constitutive formulation can be used to model rainfall-induced landslides of vegetated slopes, taking into account the most important factors influencing slope stability (root reinforcement and evapotranspiration).
Perturbative tests for a large-N reduced model of N = 4 super Yang-Mills theory
NASA Astrophysics Data System (ADS)
Ishiki, Goro; Shimasaki, Shinji; Tsuchiya, Asato
2011-11-01
We study a non-perturbative formulation of N = 4 super Yang-Mills theory (SYM) on R × S^3 in the planar limit proposed in arXiv:0807.2352. This formulation is based on the large-N reduction, and the theory can be described as a particular large-N limit of the plane wave matrix model (PWMM), which is obtained by dimensionally reducing the original theory over S^3. In this paper, we perform some tests for this proposal. We construct an operator in the PWMM that corresponds to the Wilson loop in SYM in the continuum limit and calculate the vacuum expectation value of the operator for the case of the circular contour. We find that our result indeed agrees with the well-known result first obtained by Erickson, Semenoff and Zarembo. We also compute the beta function at the 1-loop level based on this formulation and see that it is indeed vanishing.
NASA Astrophysics Data System (ADS)
Dobson, B.; Pianosi, F.; Reed, P. M.; Wagener, T.
2017-12-01
In previous work, we have found that water supply companies are typically hesitant to use reservoir operation tools to inform their release decisions. We believe that this is, in part, due to a lack of faith in the fidelity of the optimization exercise with regard to its ability to represent the real world. In an attempt to quantify this, recent literature has studied the impact on performance of uncertainty arising in: forcing (e.g. reservoir inflows), parameters (e.g. parameters for the estimation of evaporation rate) and objectives (e.g. worst first percentile or worst case). We suggest that there is also epistemic uncertainty in the choices made during model creation, for example in the formulation of an evaporation model or in aggregating regional storages. We create 'rival framings' (a methodology originally developed to demonstrate the impact of uncertainty arising from alternate objective formulations), each with different modelling choices, and determine their performance impacts. We identify the Pareto approximate set of policies for several candidate formulations and then make them compete with one another in a large ensemble re-evaluation in each other's modelled spaces. This enables us to distinguish the impacts of different structural changes in the model used to evaluate system performance, in an effort to generalize the validity of the optimized performance expectations.
Peacock, Amy; Degenhardt, Louisa; Hordern, Antonia; Larance, Briony; Cama, Elena; White, Nancy; Kihas, Ivana; Bruno, Raimondo
2015-12-01
In April 2014, a tamper-resistant controlled-release oxycodone formulation was introduced into the Australian market. This study aimed to identify the level and methods of tampering with reformulated oxycodone, the demographic and clinical characteristics of those who reported tampering with reformulated oxycodone, and the perceived attractiveness of original and reformulated oxycodone for misuse (via tampering). A prospective cohort of 522 people who regularly tampered with pharmaceutical opioids and had tampered with the original oxycodone product in their lifetime completed two interviews before (January-March 2014: Wave 1) and after (May-August 2014: Wave 2) the introduction of reformulated oxycodone. Four-fifths (81%) had tampered with the original oxycodone formulation in the month prior to Wave 1; use of and attempted tampering with reformulated oxycodone amongst the sample were comparatively low at Wave 2 (29% and 19%, respectively). Reformulated oxycodone was primarily swallowed (15%), with low levels of recent successful injection (6%), chewing (2%), drinking/dissolving (1%), and smoking (<1%). Participants who tampered with original and reformulated oxycodone were socio-demographically and clinically similar to those who had only tampered with the original formulation, except the former were more likely to report prescribed oxycodone use and stealing pharmaceutical opioids, and less likely to report moderate/severe anxiety. There was significant diversity in the methods for tampering, with attempts predominantly prompted by self-experimentation (rather than informed by word-of-mouth or the internet). Participants rated reformulated oxycodone as more difficult to prepare and inject and less pleasant to use compared to the original formulation. Current findings suggest that the introduction of the tamper-resistant product has been successful at reducing, although not necessarily eliminating, tampering with the controlled-release oxycodone formulation, with lower attractiveness for misuse. Appropriate, effective treatment options must be available with the increasing availability of abuse-deterrent products, given the reduction of oxycodone tampering and use amongst a group with high rates of pharmaceutical opioid dependence. Copyright © 2015 Elsevier B.V. All rights reserved.
An adaptive multi-feature segmentation model for infrared image
NASA Astrophysics Data System (ADS)
Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa
2016-04-01
Active contour models (ACMs) have been extensively applied to image segmentation, but conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, an adaptive weight coefficient is used to modify the level set formulation, which is formed by integrating the MFSPF, built from local statistical features, with a signed pressure function based on global information. Experimental results demonstrate that the proposed method makes up for the inadequacies of the original method and achieves desirable results in segmenting infrared images.
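As background for the signed-pressure-function idea, the sketch below runs a plain global two-region SPF level set update on a synthetic image. The multi-feature SPF and adaptive weights of the proposed model are not reproduced, and all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spf_level_set_step(phi, img, alpha=3.0, sigma=1.0):
    """One iteration of a global two-region signed-pressure-function update."""
    c1 = img[phi > 0].mean()                  # mean intensity inside the contour
    c2 = img[phi <= 0].mean()                 # mean intensity outside
    spf = img - (c1 + c2) / 2.0
    spf /= np.abs(spf).max() + 1e-12
    gy, gx = np.gradient(phi)
    phi = phi + alpha * spf * np.hypot(gx, gy)
    phi = np.where(phi >= 0, 1.0, -1.0)       # selective binary step
    return gaussian_filter(phi, sigma)        # Gaussian regularisation

rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
img += 0.1 * rng.normal(size=img.shape)       # synthetic low-contrast target
phi = -np.ones_like(img); phi[8:56, 8:56] = 1.0
for _ in range(40):
    phi = spf_level_set_step(phi, img)
print(int((phi > 0).sum()))                   # pixels enclosed by the final contour
```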
1983-09-01
which serve as aquifers. The aquifers include, in ascending order, the Patuxent, the Patapsco, the Magothy, and the Aquia Formations. These aquifer...consist typically of sand layers of varying thickness interbedded with clays. The general thickness of the Patuxent, Patapsco, Magothy and Aquia in the...Aquifers. This was accomplished using a digital simulation model originally developed by the USGS for the Magothy Aquifer. The model uses a finite
ERIC Educational Resources Information Center
Madigan, Sheri; Moran, Greg; Schuengel, Carlo; Pederson, David R.; Otten, Roy
2007-01-01
Background: Attachment theory's original formulation was substantially driven by Bowlby's (1969/1982) quest for a meaningful model of the development of psychopathology. Bowlby posited that aberrant experiences of parenting increase the child's risk of psychopathological outcomes, and that these risks are mediated by the quality of the attachment…
Tang, Zhao-qi; Liu, Ying; Shu, Ru-xin; Yang, Kai; Zhao, Long-lian; Zhang, Lu-da; Zhang Ye-hui; Li, Jun-hui
2014-12-01
In this paper, online near-infrared spectra of raw tobacco before redrying and of sheet tobacco after redrying, from 7 different origins, were collected from a sorting and redrying production line dedicated to the "ZHONGHUA" brand. Using a projection model built from the online spectra of tobacco of different origins, together with variance and correlation analysis, we studied how the uniformity and similarity of quality characteristics change before and after redrying, which can provide support for understanding the quality of the tobacco material and cigarette product formulations. The study shows that selecting about 10,000 spectra by equally spaced sampling in time from the huge number of online near-infrared spectra is feasible and representative for modeling. After manual sorting, threshing, and redrying, the uniformity of the near-infrared spectra of tobacco from each origin increased by 10%-35%, so the homogeneity of the tobacco leaf was significantly improved. After redrying, the similarity relationships among origins also changed significantly, decreasing overall; this shows that the quality differences associated with origin become more pronounced, which provides greater scope for formulation design, and it indicates that producing high-quality Chinese cigarettes requires substantial financial and human resources for cured tobacco processing. Traditional chemical analysis takes a great deal of time and effort and makes it difficult to control the entire processing chain; near-infrared spectroscopy, with its rapid, non-destructive advantages, can not only achieve real-time detection and quality control but can also take full advantage of the near-infrared spectral information generated during the production process, making it a very promising online analytical detection technology for many industries, especially the agricultural and food processing industries.
Paukkonen, Heli; Ukkonen, Anni; Szilvay, Geza; Yliperttula, Marjo; Laaksonen, Timo
2017-03-30
The purpose of this study was to construct biopolymer-based oil-in-water emulsion formulations for encapsulation and release of poorly water soluble model compounds naproxen and ibuprofen. Class II hydrophobin protein HFBII from Trichoderma reesei was used as a surfactant to stabilize the oil/water interfaces of the emulsion droplets in the continuous aqueous phase. Nanofibrillated cellulose (NFC) was used as a viscosity modifier to further stabilize the emulsions and encapsulate protein coated oil droplets in NFC fiber network. The potential of both native and oxidized NFC were studied for this purpose. Various emulsion formulations were prepared and the abilities of different formulations to control the drug release rate of naproxen and ibuprofen, used as model compounds, were evaluated. The optimal formulation for sustained drug release consisted of 0.01% of drug, 0.1% HFBII, 0.15% oxidized NFC, 10% soybean oil and 90% water phase. By comparison, the use of native NFC in combination with HFBII resulted in an immediate drug release for both of the compounds. The results indicate that these NFC originated biopolymers are suitable for pharmaceutical emulsion formulations. The native and oxidized NFC grades can be used as emulsion stabilizers in sustained and immediate drug release applications. Furthermore, stabilization of the emulsions was achieved with low concentrations of both HFBII and NFC, which may be an advantage when compared to surfactant concentrations of conventional excipients traditionally used in pharmaceutical emulsion formulations. Copyright © 2017 Elsevier B.V. All rights reserved.
Inverse Optimization: A New Perspective on the Black-Litterman Model.
Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch
2012-12-11
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct "BL"-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new "BL"-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views.
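For context, the sketch below computes the classical Black-Litterman posterior mean returns, i.e. the statistical baseline the paper replaces with inverse optimization. The covariance, weights, views and the delta/tau values are illustrative assumptions, and the paper's MV-IO and RMV-IO estimators are not reproduced.

```python
import numpy as np

def black_litterman_posterior(Sigma, w_mkt, P, q, Omega, delta=2.5, tau=0.05):
    """Classical Black-Litterman posterior mean returns.
    Sigma: asset covariance, w_mkt: market-cap weights,
    P, q, Omega: view matrix, view means and view uncertainty."""
    pi = delta * Sigma @ w_mkt                      # implied equilibrium returns
    A = np.linalg.inv(tau * Sigma)
    B = P.T @ np.linalg.inv(Omega) @ P
    rhs = A @ pi + P.T @ np.linalg.inv(Omega) @ q
    return np.linalg.solve(A + B, rhs)

Sigma = np.array([[0.04, 0.006], [0.006, 0.09]])
w_mkt = np.array([0.6, 0.4])
P = np.array([[1.0, -1.0]])                         # view: asset 1 outperforms asset 2
q = np.array([0.02]); Omega = np.array([[0.001]])
print(black_litterman_posterior(Sigma, w_mkt, P, q, Omega))
```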
NASA Astrophysics Data System (ADS)
Neumann, R. B.; Cardon, Z. G.; Rockwell, F. E.; Teshera-Levye, J.; Zwieniecki, M.; Holbrook, N. M.
2013-12-01
The movement of water from moist to dry soil layers through the root systems of plants, referred to as hydraulic redistribution (HR), occurs throughout the world and is thought to influence carbon and water budgets and ecosystem functioning. The realized hydrologic, biogeochemical, and ecological consequences of HR depend on the amount of redistributed water, while the ability to assess these impacts requires models that correctly capture HR magnitude and timing. Using several soil types and two eco-types of Helianthus annuus L. in split-pot experiments, we examined how well the widely used HR modeling formulation developed by Ryel et al. (2002) could match experimental determination of HR across a range of water potential driving gradients. H. annuus carries out extensive nighttime transpiration, and though over the last decade it has become more widely recognized that nighttime transpiration occurs in multiple species and many ecosystems, the original Ryel et al. (2002) formulation does not include the effect of nighttime transpiration on HR. We developed and added a representation of nighttime transpiration into the formulation, and only then was the model able to capture the dynamics and magnitude of HR we observed as soils dried and nighttime stomatal behavior changed, both influencing HR.
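The sketch below gives a schematic layer-to-layer redistribution flux of the kind the Ryel et al. (2002) structure describes, with a crude switch that reduces HR while night-time transpiration is active. The functional forms, parameter names and values are assumptions for illustration, not the published equation or the authors' added transpiration term.

```python
import numpy as np

def hr_fluxes(psi, root_frac, c_max=0.097, psi50=-1.0, b=3.22, trans_night=0.0):
    """Schematic hydraulic-redistribution fluxes between soil layers:
    flux ~ potential difference x soil-limited conductance x root fractions,
    scaled down while night-time transpiration is active (illustrative only)."""
    psi = np.asarray(psi, dtype=float)
    c = 1.0 / (1.0 + (psi / psi50) ** b)           # drier layers conduct less
    n = len(psi)
    H = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                H[i] += c_max * (psi[j] - psi[i]) * min(c[i], c[j]) \
                        * root_frac[i] * root_frac[j]
    return H * max(0.0, 1.0 - trans_night)         # fluxes sum to zero (conservative)

psi = np.array([-0.2, -1.5, -2.5])                 # MPa, wet to dry layers
roots = np.array([0.5, 0.3, 0.2])
print(hr_fluxes(psi, roots, trans_night=0.0))      # full HR
print(hr_fluxes(psi, roots, trans_night=0.6))      # HR suppressed by night-time flux
```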
An intermittency model for predicting roughness induced transition
NASA Astrophysics Data System (ADS)
Ge, Xuan; Durbin, Paul
2014-11-01
An extended model for roughness-induced transition is proposed based on an intermittency transport equation for RANS modeling formulated in local variables. To predict roughness effects in the fully turbulent boundary layer, published boundary conditions for k and ω are used, which depend on the equivalent sand grain roughness height, and account for the effective displacement of wall distance origin. Similarly in our approach, wall distance in the transition model for smooth surfaces is modified by an effective origin, which depends on roughness. Flat plate test cases are computed to show that the proposed model is able to predict the transition onset in agreement with a data correlation of transition location versus roughness height, Reynolds number, and inlet turbulence intensity. Experimental data for a turbine cascade are compared with the predicted results to validate the applicability of the proposed model. Supported by NSF Award Number 1228195.
Liu, Jing; Hu, Rui; Liu, Jianwei; Zhang, Butian; Wang, Yucheng; Liu, Xin; Law, Wing-Cheung; Liu, Liwei; Ye, Ling; Yong, Ken-Tye
2015-12-01
The toxicity of quantum dots (QDs) has been extensively studied over the past decade. Some common factors that originate the QD toxicity include releasing of heavy metal ions from degraded QDs and the generation of reactive oxygen species on the QD surface. In addition to these factors, we should also carefully examine other potential QD toxicity causes that will play crucial roles in impacting the overall biological system. In this contribution, we have performed cytotoxicity assessment of four types of QD formulations in two different human cancer cell models. The four types of QD formulations, namely, mercaptopropionic acid modified CdSe/CdS/ZnS QDs (CdSe-MPA), PEGylated phospholipid encapsulated CdSe/CdS/ZnS QDs (CdSe-Phos), PEGylated phospholipid encapsulated InP/ZnS QDs (InP-Phos) and Pluronic F127 encapsulated CdTe/ZnS QDs (CdTe-F127), are representatives for the commonly used QD formulations in biomedical applications. Both the core materials and the surface modifications have been taken into consideration as the key factors for the cytotoxicity assessment. Through side-by-side comparison and careful evaluations, we have found that the toxicity of QDs does not solely depend on a single factor in initiating the toxicity in biological system but rather it depends on a combination of elements from the particle formulations. More importantly, our toxicity assessment shows different cytotoxicity trend for all the prepared formulations tested on gastric adenocarcinoma (BGC-823) and neuroblastoma (SH-SY5Y) cell lines. We have further proposed that the cellular uptake of these nanocrystals plays an important role in determining the final faith of the toxicity impact of the formulation. The result here suggests that the toxicity of QDs is rather complex and it cannot be generalized under a few assumptions reported previously. We suggest that one have to evaluate the QD toxicity on a case to case basis and this indicates that standard procedures and comprehensive protocols are urgently needed to be developed and employed for fully assessing and understanding the origins of the toxicity arising from different QD formulations. Copyright © 2015. Published by Elsevier B.V.
Life cycle, individual thrift, and the wealth of nations.
Modigliani, F
1986-11-07
One theory of the determinants of individual and national thrift has come to be known as the life cycle hypothesis of saving. The state of the art on the eve of the formulation of the hypothesis some 30 years ago is reviewed. Then the theoretical foundations of the model in its original formulation and later amendment are set forth, calling attention to various implications, some distinctive to it and some counterintuitive. A number of crucial empirical tests, both at the individual and the aggregate level, are presented as well as some applications of the life cycle hypothesis of saving to current policy issues.
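As a purely illustrative aside, the stylised calculation below reproduces the hypothesis's signature hump-shaped wealth path under perfect consumption smoothing and zero interest; it is not a model from the review, and the horizon and income values are arbitrary.

```python
import numpy as np

# Stylised life-cycle saving path: earn y for T_work years, live T_life years,
# consume a constant amount so lifetime resources are exhausted at death.
T_life, T_work, y = 60, 40, 1.0
c = y * T_work / T_life                        # smoothed consumption
income = np.where(np.arange(T_life) < T_work, y, 0.0)
wealth = np.cumsum(income - c)
print(wealth.max(), wealth[-1])                # hump-shaped wealth, ~0 at death
```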
NASA Astrophysics Data System (ADS)
Stöckl, Stefan; Rotach, Mathias W.; Kljun, Natascha
2018-01-01
We discuss the results of Gibson and Sailor (Boundary-Layer Meteorol 145:399-406, 2012) who suggest several corrections to the mathematical formulation of the Lagrangian particle dispersion model of Rotach et al. (Q J R Meteorol Soc 122:367-389, 1996). While most of the suggested corrections had already been implemented in the 1990s, one suggested correction raises a valid point, but results in a violation of the well-mixed criterion. Here we improve their idea and test the impact on model results using a well-mixed test and a comparison with wind-tunnel experimental data. The new approach results in similar dispersion patterns as the original approach, while the approach suggested by Gibson and Sailor leads to erroneously reduced concentrations near the ground in convective and especially forced convective conditions.
NASA Astrophysics Data System (ADS)
Kruk, D.; Earle, K. A.; Mielczarek, A.; Kubica, A.; Milewska, A.; Moscicki, J.
2011-12-01
A general theory of lineshapes in nuclear quadrupole resonance (NQR), based on the stochastic Liouville equation, is presented. The description is valid for arbitrary motional conditions (particularly beyond the valid range of perturbation approaches) and interaction strengths. It can be applied to the computation of NQR spectra for any spin quantum number and for any applied magnetic field. The treatment presented here is an adaptation of the "Swedish slow motion theory," [T. Nilsson and J. Kowalewski, J. Magn. Reson. 146, 345 (2000), 10.1006/jmre.2000.2125] originally formulated for paramagnetic systems, to NQR spectral analysis. The description is formulated for simple (Brownian) diffusion, free diffusion, and jump diffusion models. The two latter models account for molecular cooperativity effects in dense systems (such as liquids of high viscosity or molecular glasses). The sensitivity of NQR slow motion spectra to the mechanism of the motional processes modulating the nuclear quadrupole interaction is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, David L.; Olson, Jerry G.; Hannay, Cécile
An error in the energy formulation in the Community Atmosphere Model (CAM) is identified and corrected. Ten year AMIP simulations are compared using the correct and incorrect energy formulations. Statistics of selected primary variables all indicate physically insignificant differences between the simulations, comparable to differences with simulations initialized with rounding sized perturbations. The two simulations are so similar mainly because of an inconsistency in the application of the incorrect energy formulation in the original CAM. CAM used the erroneous energy form to determine the states passed between the parameterizations, but used a form related to the correct formulation for the state passed from the parameterizations to the dynamical core. If the incorrect form is also used to determine the state passed to the dynamical core the simulations are significantly different. In addition, CAM uses the incorrect form for the global energy fixer, but that seems to be less important. The difference of the magnitude of the fixers using the correct and incorrect energy definitions is very small.
NASA Astrophysics Data System (ADS)
Zhang, Taiping; Stackhouse, Paul W.; Gupta, Shashi K.; Cox, Stephen J.; Mikovitz, J. Colleen
2017-02-01
Occasionally, a need arises to downscale a time series of data from a coarse temporal resolution to a finer one, a typical example being from monthly means to daily means. For this case, daily means derived as such are used as inputs of climatic or atmospheric models so that the model results may exhibit variance on the daily time scale and retain the monthly mean of the original data set without an abrupt change from the end of one month to the beginning of the next. Different methods have been developed which often need assumptions, free parameters and the solution of simultaneous equations. Here we derive a generalized formulation by means of Fourier transform and inversion so that it can be used to directly compute daily means from a series of an arbitrary number of monthly means. The formulation can be used to transform any coarse temporal resolution to a finer one. From the derived results, the original data can be recovered almost identically. As a real application, we use this method to derive the daily counterpart of the MAC-v1 aerosol climatology that provides monthly mean aerosol properties for 18 shortwave bands and 12 longwave bands for the years from 1860 to 2100. The derived daily means are to be used as inputs of the shortwave and longwave algorithms of the NASA GEWEX SRB project.
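The sketch below shows a simple mean-preserving monthly-to-daily interpolation by iterative smoothing and per-month correction (in the spirit of Rymes and Myers). It is a stand-in for illustration only and is not the Fourier transform-and-inversion formulation derived in the paper.

```python
import numpy as np

def monthly_to_daily(monthly, days_per_month, n_iter=50):
    """Smooth, mean-preserving monthly-to-daily interpolation (iterative smoothing)."""
    daily = np.repeat(monthly, days_per_month).astype(float)
    edges = np.concatenate(([0], np.cumsum(days_per_month)))
    for _ in range(n_iter):
        padded = np.concatenate((daily[:1], daily, daily[-1:]))
        daily = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0   # 3-day running mean
        for m, (a, b) in enumerate(zip(edges[:-1], edges[1:])):
            daily[a:b] += monthly[m] - daily[a:b].mean()          # restore monthly mean
    return daily

monthly = np.array([3.1, 3.4, 4.0, 5.2, 6.0, 6.3])
days = np.array([31, 28, 31, 30, 31, 30])
daily = monthly_to_daily(monthly, days)
starts = np.cumsum(days) - days
print([round(daily[a:a + d].mean(), 3) for a, d in zip(starts, days)])  # matches monthly
```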
Maximum Entropy Principle for Transportation
NASA Astrophysics Data System (ADS)
Bilich, F.; DaSilva, R.
2008-11-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
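For comparison with the dependence formulation described here, the sketch below solves the standard constrained entropy-maximising (doubly constrained gravity) trip distribution by iterative balancing; the zone totals, travel costs and deterrence parameter are illustrative assumptions.

```python
import numpy as np

def entropy_trip_distribution(origins, dests, cost, beta, n_iter=100):
    """Doubly-constrained entropy-maximising trip distribution via iterative balancing.
    This is the standard 'objective with constraints' formulation, shown for contrast."""
    A = np.ones(len(origins)); B = np.ones(len(dests))
    F = np.exp(-beta * cost)                      # deterrence function
    for _ in range(n_iter):
        A = 1.0 / (F @ (B * dests))               # balance origin totals
        B = 1.0 / (F.T @ (A * origins))           # balance destination totals
    return (A * origins)[:, None] * (B * dests)[None, :] * F

O = np.array([100.0, 200.0]); D = np.array([150.0, 150.0])
cost = np.array([[1.0, 3.0], [2.0, 1.0]])
T = entropy_trip_distribution(O, D, cost, beta=0.5)
print(T.round(1), T.sum(axis=1), T.sum(axis=0))   # row/column sums match O and D
```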
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamoorthy, Sriram; Daily, Jeffrey A.; Vishnu, Abhinav
2015-11-01
Global Arrays (GA) is a distributed-memory programming model that allows for shared-memory-style programming combined with one-sided communication, to create a set of tools that combine high performance with ease of use. GA exposes a relatively straightforward programming abstraction, while supporting fully-distributed data structures, locality of reference, and high-performance communication. GA was originally formulated in the early 1990s to provide a communication layer for the Northwest Chemistry (NWChem) suite of chemistry modeling codes that was being developed concurrently.
An implicit dispersive transport algorithm for the US Geological Survey MOC3D solute-transport model
Kipp, K.L.; Konikow, Leonard F.; Hornberger, G.Z.
1998-01-01
This report documents an extension to the U.S. Geological Survey MOC3D transport model that incorporates an implicit-in-time difference approximation for the dispersive transport equation, including source/sink terms. The original MOC3D transport model (Version 1) uses the method of characteristics to solve the transport equation on the basis of the velocity field. The original MOC3D solution algorithm incorporates particle tracking to represent advective processes and an explicit finite-difference formulation to calculate dispersive fluxes. The new implicit procedure eliminates several stability criteria required for the previous explicit formulation. This allows much larger transport time increments to be used in dispersion-dominated problems. The decoupling of advective and dispersive transport in MOC3D, however, is unchanged. With the implicit extension, the MOC3D model is upgraded to Version 2. A description of the numerical method of the implicit dispersion calculation, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. Version 2 of MOC3D was evaluated for the same set of problems used for verification of Version 1. These test results indicate that the implicit calculation of Version 2 matches the accuracy of Version 1, yet is more efficient than the explicit calculation for transport problems that are characterized by a grid Peclet number less than about 1.0.
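The practical benefit of the implicit dispersion step is the removal of the explicit stability limit on the transport time increment. The generic backward-Euler sketch below illustrates this for 1-D dispersion; it is not MOC3D's actual discretisation, and the grid, boundary treatment and coefficient values are assumptions.

```python
import numpy as np

def implicit_dispersion_step(c, D, dx, dt):
    """One backward-Euler (implicit-in-time) step of c_t = D c_xx with fixed
    boundary concentrations. Unconditionally stable, so dt is not limited by
    the explicit criterion dt <= dx**2 / (2*D)."""
    n = len(c)
    r = D * dt / dx**2
    A = np.eye(n) * (1 + 2 * r)
    A += np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1)
    A[0, :], A[-1, :] = 0, 0
    A[0, 0] = A[-1, -1] = 1.0                 # Dirichlet boundaries
    return np.linalg.solve(A, c)

c = np.zeros(101); c[50] = 1.0                # initial concentration pulse
for _ in range(10):
    c = implicit_dispersion_step(c, D=1.0, dx=1.0, dt=5.0)   # dt >> explicit limit
print(c.max(), c.sum())                       # spread pulse, mass ~ conserved
```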
Multi-Stage Convex Relaxation Methods for Machine Learning
2013-03-01
Many problems in machine learning can be naturally formulated as non-convex optimization problems. However, such direct nonconvex formulations have...original nonconvex formulation. We will develop theoretical properties of this method and algorithmic consequences. Related convex and nonconvex machine learning methods will also be investigated.
Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds
NASA Astrophysics Data System (ADS)
Yun, Yuxing; Penner, Joyce E.
2012-04-01
A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux at top of atmosphere and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.
NASA Astrophysics Data System (ADS)
Abd El Baky, Hussien
This research work is devoted to theoretical and numerical studies on the flexural behaviour of FRP-strengthened concrete beams. The objectives of this research are to extend and generalize the results of simple experiments, to recommend new design guidelines based on accurate numerical tools, and to enhance our comprehension of the bond performance of such beams. These numerical tools can be exploited to bridge the existing gaps in the development of analysis and modelling approaches that can predict the behaviour of FRP-strengthened concrete beams. The research effort here begins with the formulation of a concrete model and development of FRP/concrete interface constitutive laws, followed by finite element simulations for beams strengthened in flexure. Finally, a statistical analysis is carried out taking advantage of the aforesaid numerical tools to propose design guidelines. In this dissertation, an alternative incremental formulation of the M4 microplane model is proposed to overcome the computational complexities associated with the original formulation. Through a number of numerical applications, this incremental formulation is shown to be equivalent to the original M4 model. To assess the computational efficiency of the incremental formulation, the "arc-length" numerical technique is also considered and implemented in the original Bazant et al. [2000] M4 formulation. Finally, the M4 microplane concrete model is coded in FORTRAN and implemented as a user-defined subroutine in the commercial software package ADINA, Version 8.4. Then this subroutine is used with the finite element package to analyze various applications involving FRP strengthening. In the first application, a nonlinear micromechanics-based finite element analysis is performed to investigate the interfacial behaviour of FRP/concrete joints subjected to direct shear loadings. The intention of this part is to develop a reliable bond-slip model for the FRP/concrete interface. The bond-slip relation is developed considering the interaction between the interfacial normal and shear stress components along the bonded length. A new approach is proposed to describe the entire tau-s relationship based on three separate models. The first model captures the shear response of an orthotropic FRP laminate. The second model simulates the shear characteristics of an adhesive layer, while the third model represents the shear nonlinearity of a thin layer inside the concrete, referred to as the interfacial layer. The proposed bond-slip model reflects the geometrical and material characteristics of the FRP, concrete, and adhesive layers. Two-dimensional and three-dimensional nonlinear displacement-controlled finite element (FE) models are then developed to investigate the flexural and FRP/concrete interfacial responses of FRP-strengthened reinforced concrete beams. The three-dimensional finite element model is created to accommodate cases of beams having FRP anchorage systems. Discrete interface elements are proposed and used to simulate the FRP/concrete interfacial behaviour before and after cracking. The FE models are capable of simulating the various failure modes, including debonding of the FRP either at the plate end or at intermediate cracks. Particular attention is focused on the effect of crack initiation and propagation on the interfacial behaviour.
This study leads to an accurate and refined interpretation of the plate-end and intermediate crack debonding failure mechanisms for FRP-strengthened beams with and without FRP anchorage systems. Finally, the FE models are used to conduct a parametric study to generalize the findings of the FE analysis. The variables under investigation include two material characteristics; namely, the concrete compressive strength and axial stiffness of the FRP laminates as well as three geometric properties; namely, the steel reinforcement ratio, the beam span length and the beam depth. The parametric study is followed by a statistical analysis for 43 strengthened beams involving the five aforementioned variables. The response surface methodology (RSM) technique is employed to optimize the accuracy of the statistical models while minimizing the numbers of finite element runs. In particular, a face-centred design (FCD) is applied to evaluate the influence of the critical variables on the debonding load and debonding strain limits in the FRP laminates. Based on these statistical models, a nonlinear statistical regression analysis is used to propose design guidelines for the FRP flexural strengthening of reinforced concrete beams. (Abstract shortened by UMI.)
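To make the face-centred design (FCD) concrete, the sketch below generates a generic face-centred central composite design in coded units for five factors. The number of centre replicates and the mapping to the study's actual factor ranges are assumptions, so the run count differs slightly from the 43 beams analysed.

```python
import itertools
import numpy as np

def face_centered_ccd(k, n_center=3):
    """Face-centred central composite design in coded units (-1, 0, +1):
    2**k factorial corners + 2k axial (face-centre) points + centre replicates."""
    corners = np.array(list(itertools.product([-1, 1], repeat=k)), dtype=float)
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i], axial[2 * i + 1, i] = -1.0, 1.0
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

design = face_centered_ccd(k=5)           # five factors, as in the parametric study
print(design.shape)                       # (2**5 + 2*5 + 3, 5) = (45, 5) runs
```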
Neumann, Rebecca B; Cardon, Zoe G; Teshera-Levye, Jennifer; Rockwell, Fulton E; Zwieniecki, Maciej A; Holbrook, N Michele
2014-04-01
The movement of water from moist to dry soil layers through the root systems of plants, referred to as hydraulic redistribution (HR), occurs throughout the world and is thought to influence carbon and water budgets and ecosystem functioning. The realized hydrologic, biogeochemical and ecological consequences of HR depend on the amount of redistributed water, whereas the ability to assess these impacts requires models that correctly capture HR magnitude and timing. Using several soil types and two ecotypes of sunflower (Helianthus annuus L.) in split-pot experiments, we examined how well the widely used HR modelling formulation developed by Ryel et al. matched experimental determination of HR across a range of water potential driving gradients. H. annuus carries out extensive night-time transpiration, and although over the last decade it has become more widely recognized that night-time transpiration occurs in multiple species and many ecosystems, the original Ryel et al. formulation does not include the effect of night-time transpiration on HR. We developed and added a representation of night-time transpiration into the formulation, and only then was the model able to capture the dynamics and magnitude of HR we observed as soils dried and night-time stomatal behaviour changed, both influencing HR. © 2013 John Wiley & Sons Ltd.
Cierniak, Robert; Lorent, Anna
2016-09-01
The main aim of this paper is to investigate the conditioning-related properties of our originally formulated statistical model-based iterative approach to the problem of image reconstruction from projections, and in this manner to demonstrate the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this conception uses a maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the reference algebraic methodology, which is widely explored in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Experience with a vectorized general circulation weather model on Star-100
NASA Technical Reports Server (NTRS)
Soll, D. B.; Habra, N. R.; Russell, G. L.
1977-01-01
A version of an atmospheric general circulation model was vectorized to run on a CDC STAR 100. The numerical model was coded and run in two different vector languages, CDC and LRLTRAN. A factor of 10 speed improvement over an IBM 360/95 was realized. Efficient use of the STAR machine required some redesigning of algorithms and logic. This precludes the application of vectorizing compilers on the original scalar code to achieve the same results. Vector languages permit a more natural and efficient formulation for such numerical codes.
Diamantides, N D
1992-12-01
"This study formulates a model of the macrodynamics of international migration using a differential equation to capture the push-pull forces that propel it. The model's architecture rests on the functioning of information feedback between settled friends and family at the destination and potential emigrants at the origin.... Two specific paradigms of diverse nature serve to demonstrate the model's tenets and pertinence, one being Greek emigration to the United States since 1820, and the other total out-migration from Cyprus since statehood (1946)." excerpt
Multi-Body Analysis of a Tiltrotor Configuration
NASA Technical Reports Server (NTRS)
Ghiringhelli, G. L.; Masarati, P.; Mantegazza, P.; Nixon, M. W.
1997-01-01
The paper describes the aeroelastic analysis of a tiltrotor configuration. The 1/5 scale wind tunnel semispan model of the V-22 tiltrotor aircraft is considered. The analysis is performed by means of a multi-body code, based on an original formulation. The differential equilibrium problem is stated in terms of first order differential equations. The equilibrium equations of every rigid body are written, together with the definitions of the momenta. The bodies are connected by kinematic constraints, applied in the form of Lagrangian multipliers. Deformable components are mainly modelled by means of beam elements, based on an original finite volume formulation. Multi-disciplinary problems can be solved by adding user-defined differential equations. In the presented analysis the equations related to the control of the swash-plate of the model are considered. Advantages of a multi-body aeroelastic code over existing comprehensive rotorcraft codes include the exact modelling of the kinematics of the hub, the detailed modelling of the flexibility of critical hub components, and the possibility to simulate steady flight conditions as well as wind-up and maneuvers. The simulations described in the paper include: 1) the analysis of the aeroelastic stability, with particular regard to the proprotor/pylon instability that is peculiar to tiltrotors, 2) the determination of the dynamic behavior of the system and of the loads due to typical maneuvers, with particular regard to the conversion from helicopter to airplane mode, and 3) the stress evaluation in critical components, such as the pitch links and the conversion downstop spring.
Simulating ground water-lake interactions: Approaches and insights
Hunt, R.J.; Haitjema, H.M.; Krohelski, J.T.; Feinstein, D.T.
2003-01-01
Approaches for modeling lake-ground water interactions have evolved significantly from early simulations that used fixed lake stages specified as constant head to sophisticated LAK packages for MODFLOW. Although model input can be complex, the LAK package capabilities and output are superior to methods that rely on a fixed lake stage and compare well to other simple methods where lake stage can be calculated. Regardless of the approach, guidelines presented here for model grid size, location of three-dimensional flow, and extent of vertical capture can facilitate the construction of appropriately detailed models that simulate important lake-ground water interactions without adding unnecessary complexity. In addition to MODFLOW approaches, lake simulation has been formulated in terms of analytic elements. The analytic element lake package had acceptable agreement with a published LAK1 problem, even though there were differences in the total lake conductance and number of layers used in the two models. The grid size used in the original LAK1 problem, however, violated a grid size guideline presented in this paper. Grid sensitivity analyses demonstrated that an appreciable discrepancy in the distribution of stream and lake flux was related to the large grid size used in the original LAK1 problem. This artifact is expected regardless of MODFLOW LAK package used. When the grid size was reduced, a finite-difference formulation approached the analytic element results. These insights and guidelines can help ensure that the proper lake simulation tool is being selected and applied.
Heat and mass transfer in combustion - Fundamental concepts and analytical techniques
NASA Technical Reports Server (NTRS)
Law, C. K.
1984-01-01
Fundamental combustion phenomena and the associated flame structures in laminar gaseous flows are discussed on physical bases within the framework of the three nondimensional parameters of interest to heat and mass transfer in chemically-reacting flows, namely the Damkoehler number, the Lewis number, and the Arrhenius number, which is the ratio of the reaction activation energy to the characteristic thermal energy. The model problems selected for illustration are droplet combustion, boundary layer combustion, and the propagation, flammability, and stability of premixed flames. Fundamental concepts discussed include the flame structures for large activation energy reactions, the S-curve interpretation of the ignition and extinction states, reaction-induced local-similarity and non-similarity in boundary layer flows, the origin and removal of the cold boundary difficulty in modeling flame propagation, and effects of flame stretch and preferential diffusion on flame extinction and stability. Analytical techniques introduced include the Shvab-Zeldovich formulation, the local Shvab-Zeldovich formulation, the flame-sheet approximation and the associated jump formulation, and large activation energy matched asymptotic analysis. Potentially promising research areas are suggested.
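For orientation, the three nondimensional groups named in the abstract can be written schematically as follows; exact conventions for the characteristic times and the reference temperature vary between texts.

```latex
% Schematic definitions; conventions for the characteristic flow and chemical
% times and the reference temperature differ between references.
\begin{aligned}
\mathrm{Da} &= \frac{\tau_{\mathrm{flow}}}{\tau_{\mathrm{chem}}}
  &&\text{(Damk\"ohler number: flow time over chemical time)}\\
\mathrm{Le} &= \frac{\alpha}{D} = \frac{\lambda}{\rho\, c_p\, D}
  &&\text{(Lewis number: thermal over mass diffusivity)}\\
\mathrm{Ar} &= \frac{E_a}{R\,T_{\mathrm{ref}}}
  &&\text{(activation energy over characteristic thermal energy)}
\end{aligned}
```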
Properties of finite difference models of non-linear conservative oscillators
NASA Technical Reports Server (NTRS)
Mickens, R. E.
1988-01-01
Finite-difference (FD) approaches to the numerical solution of the differential equations describing the motion of a nonlinear conservative oscillator are investigated analytically. A generalized formulation of the Duffing and modified Duffing equations is derived and analyzed using several FD techniques, and it is concluded that, although it is always possible to construct FD models of conservative oscillators which are themselves conservative, caution is required to avoid numerical solutions which do not accurately reflect the properties of the original equation.
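The following minimal sketch illustrates the kind of caution the abstract refers to: for the undamped Duffing oscillator, a naive explicit scheme drifts in energy while a centered scheme conserves it to high accuracy. This is a generic example, not Mickens' specific finite-difference construction.

```python
# Compare energy behaviour of two FD schemes for x'' + x + eps*x**3 = 0.
import numpy as np

def energy(x, v, eps):
    return 0.5 * v**2 + 0.5 * x**2 + 0.25 * eps * x**4

def forward_euler(x0, v0, eps, h, n):
    x, v = x0, v0
    for _ in range(n):
        x, v = x + h * v, v - h * (x + eps * x**3)   # explicit Euler step
    return x, v

def stormer_verlet(x0, v0, eps, h, n):
    x, v = x0, v0
    for _ in range(n):
        v_half = v - 0.5 * h * (x + eps * x**3)      # half kick
        x = x + h * v_half                           # drift
        v = v_half - 0.5 * h * (x + eps * x**3)      # half kick
    return x, v

if __name__ == "__main__":
    x0, v0, eps, h, n = 1.0, 0.0, 0.5, 0.01, 20000
    e0 = energy(x0, v0, eps)
    for name, scheme in [("forward Euler", forward_euler),
                         ("Stormer-Verlet", stormer_verlet)]:
        x, v = scheme(x0, v0, eps, h, n)
        print(f"{name}: relative energy drift = {abs(energy(x, v, eps) - e0) / e0:.2e}")
```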
The first geocenter estimation results using GPS measurements
NASA Technical Reports Server (NTRS)
Malla, R. P.; Wu, S. C.
1990-01-01
The center of mass of the Earth is the natural and unambiguous origin of a geocentric satellite dynamical system. A geocentric reference frame assumes that the origin of its coordinate axes is at the geocenter; it is the frame to which all relevant observations and results can be referred and in which geodynamic theories or models for the dynamic behavior of Earth can be formulated. In practice, however, a kinematically obtained terrestrial reference frame may assume an origin other than the geocenter. A fast and accurate method of determining the origin offset from the geocenter is highly desirable. Global Positioning System (GPS) measurements, because of their abundance and broad distribution, provide a powerful tool to obtain this origin offset in a short period of time. Two effective strategies have been devised. Data from the first Central and South America (Casa Uno) global GPS experiment were studied to demonstrate the ability of recovering the geocenter location with present-day GPS satellites and receivers.
A class of multi-period semi-variance portfolio for petroleum exploration and development
NASA Astrophysics Data System (ADS)
Guo, Qiulin; Li, Jianzhong; Zou, Caineng; Guo, Yujuan; Yan, Wei
2012-10-01
Semi-variance is substituted for variance in Markowitz's portfolio selection model. For dynamic valuation of exploration and development projects, single-period portfolio selection is extended to multiple periods. In this article, an original class of multi-period semi-variance exploration and development portfolio models is formulated. Besides, a hybrid genetic algorithm, which makes use of the position displacement strategy of the particle swarm optimiser as a mutation operation, is applied to solve the multi-period semi-variance model. For this class of portfolio model, numerical results show that the model is effective and feasible.
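A small sketch of the downside-risk (semi-variance) measure that replaces variance in the model described above; the scenario returns and target are invented for demonstration.

```python
# Downside (semi-variance) risk versus ordinary variance for a set of
# hypothetical project return scenarios.
import numpy as np

def semi_variance(returns, target=0.0):
    """Mean squared shortfall of returns below the target."""
    shortfall = np.minimum(returns - target, 0.0)
    return np.mean(shortfall**2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scenario_returns = rng.normal(0.08, 0.15, size=1000)  # hypothetical returns
    print("variance      :", np.var(scenario_returns))
    print("semi-variance :", semi_variance(scenario_returns, target=0.0))
```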
Inverse Optimization: A New Perspective on the Black-Litterman Model
Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch.
2014-01-01
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct “BL”-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new “BL”-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views. PMID:25382873
Maximum entropy principle for transportation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bilich, F.; Da Silva, R.
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining an a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
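For contrast with the unconstrained dependence formulation summarized above, the sketch below implements the standard doubly constrained entropy-maximizing trip-distribution model (Wilson-type), solved by iterative proportional (Furness) balancing; all origins, destinations, and costs are hypothetical.

```python
# Standard constrained entropy-maximizing model: T_ij = A_i O_i B_j D_j exp(-beta c_ij),
# with balancing factors A_i, B_j enforcing the origin and destination totals.
import numpy as np

def entropy_trip_distribution(O, D, c, beta=0.1, iters=100):
    F = np.exp(-beta * c)                # deterrence function
    A = np.ones(len(O))
    B = np.ones(len(D))
    for _ in range(iters):
        A = 1.0 / (F @ (B * D))          # enforce row (origin) totals
        B = 1.0 / (F.T @ (A * O))        # enforce column (destination) totals
    return (A * O)[:, None] * (B * D)[None, :] * F

if __name__ == "__main__":
    O = np.array([100.0, 200.0, 150.0])          # trips produced at each origin
    D = np.array([180.0, 170.0, 100.0])          # trips attracted to each destination
    c = np.array([[1.0, 4.0, 6.0],
                  [4.0, 1.0, 3.0],
                  [6.0, 3.0, 1.0]])              # travel cost matrix
    T = entropy_trip_distribution(O, D, c, beta=0.5)
    print(np.round(T, 1))
    print("row sums:", np.round(T.sum(axis=1), 1), " col sums:", np.round(T.sum(axis=0), 1))
```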
Dynamic rupture modeling with laboratory-derived constitutive relations
Okubo, P.G.
1989-01-01
A laboratory-derived state variable friction constitutive relation is used in the numerical simulation of the dynamic growth of an in-plane or mode II shear crack. According to this formulation, originally presented by J.H. Dieterich, frictional resistance varies with the logarithm of the slip rate and with the logarithm of the frictional state variable as identified by A.L. Ruina. Under conditions of steady sliding, the state variable is inversely proportional to the slip rate. Following suddenly introduced increases in slip rate, the rate and state dependencies combine to produce behavior which resembles slip weakening. When rupture nucleation is artificially forced at fixed rupture velocity, rupture models calculated with the state variable friction in a uniformly distributed initial stress field closely resemble earlier rupture models calculated with a slip weakening fault constitutive relation. Model calculations suggest that dynamic rupture following a state variable friction relation is similar to that following a simpler fault slip weakening law. However, when modeling the full cycle of fault motions, rate-dependent frictional responses included in the state variable formulation are important at low slip rates associated with rupture nucleation.
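A hedged sketch of the Dieterich-Ruina rate-and-state law the abstract refers to, showing the direct strengthening followed by slip-weakening-like evolution after a velocity step; parameter values are generic laboratory-scale numbers, not those of the rupture simulations.

```python
# mu = mu0 + a*ln(V/V*) + b*ln(V* theta / Dc), with the Dieterich "aging" law
# d(theta)/dt = 1 - V*theta/Dc.  At steady state theta = Dc/V (inverse of slip rate).
import numpy as np

a, b, Dc, mu0, Vstar = 0.010, 0.015, 1e-5, 0.6, 1e-6   # illustrative parameters

def friction(V, theta):
    return mu0 + a * np.log(V / Vstar) + b * np.log(Vstar * theta / Dc)

def velocity_step(V1=1e-6, V2=1e-5, t_end=60.0, dt=1e-3):
    theta = Dc / V1                      # start at steady state for V1
    t, V, out = 0.0, V2, []              # step from V1 to V2 at t = 0
    while t < t_end:
        theta += dt * (1.0 - V * theta / Dc)   # aging evolution law
        t += dt
        out.append((t, friction(V, theta)))
    return np.array(out)

if __name__ == "__main__":
    hist = velocity_step()
    print("mu just after the step :", friction(1e-5, Dc / 1e-6))   # direct (strengthening) effect
    print("mu at new steady state :", hist[-1, 1])                 # evolved (weakened) value
```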
A systematic description of shocks in gamma-ray bursts - I. Formulation
NASA Astrophysics Data System (ADS)
Ziaeepour, Houri
2009-07-01
Since the suggestion of relativistic shocks as the origin of gamma-ray bursts (GRBs) in the early 1990s, the mathematical formulation of this process has stayed at a phenomenological level. One of the reasons for the slow development of theoretical works has been the simple power-law behaviour of the afterglows hours or days after the prompt gamma-ray emission. It was believed that they could be explained with these formulations. Nowadays, with the launch of the Swift satellite and the implementation of robotic ground follow-ups, GRBs and their afterglows can be observed at multiple wavelengths from a few tens of seconds after the trigger onwards. These observations have led to the discovery of features unexplainable by the simple formulation of the shocks and emission processes used up to now. Some of these features can be inherent in the nature and activities of the GRBs' central engines, which are not yet well understood. On the other hand, the devil is in the detail, and others may be explained with a more detailed formulation of these phenomena, without the ad hoc addition of new processes. Such a formulation is the goal of this work. We present a consistent formulation of the kinematics and dynamics of the collision between two spherical relativistic shells, their energy dissipation and their coalescence. It can be applied to both internal and external shocks. Notably, we propose two phenomenological models for the evolution of the emitting region during the collision. One of these models is more suitable for the prompt/internal shocks and late external shocks, and the other for the afterglow/external collisions as well as the onset of internal shocks. We calculate a number of observables such as flux, lag between energy bands and hardness ratios. One of our aims has been a formulation complex enough to include the essential processes, but simple enough that the data can be directly compared with the theory to extract the value and evolution of physical quantities. To accomplish this goal, we also suggest a procedure for extracting parameters of the model from data. In a companion paper, we numerically calculate the evolution of some simulated models and compare their features with the properties of the observed GRBs.
Kubo–Greenwood approach to conductivity in dense plasmas with average atom models
Starrett, C. E.
2016-04-13
In this study, a new formulation of the Kubo–Greenwood conductivity for average atom models is given. The new formulation improves upon previous treatments by explicitly including the ionic-structure factor. Calculations based on this new expression lead to much improved agreement with ab initio results for DC conductivity of warm dense hydrogen and beryllium, and for thermal conductivity of hydrogen. We also give and test a slightly modified Ziman–Evans formula for the resistivity that includes a non-free electron density of states, thus removing an ambiguity in the original Ziman–Evans formula. Again, results based on this expression are in good agreement with ab initio simulations for warm dense beryllium and hydrogen. However, for both these expressions, calculations of the electrical conductivity of warm dense aluminum lead to poor agreement at low temperatures compared to ab initio simulations.
Extension of non-linear beam models with deformable cross sections
NASA Astrophysics Data System (ADS)
Sokolov, I.; Krylov, S.; Harari, I.
2015-12-01
Geometrically exact beam theory is extended to allow distortion of the cross section. We present an appropriate set of cross-section basis functions and provide physical insight to the cross-sectional distortion from linear elastostatics. The beam formulation in terms of material (back-rotated) beam internal force resultants and work-conjugate kinematic quantities emerges naturally from the material description of virtual work of constrained finite elasticity. The inclusion of cross-sectional deformation allows straightforward application of three-dimensional constitutive laws in the beam formulation. Beam counterparts of applied loads are expressed in terms of the original three-dimensional data. Special attention is paid to the treatment of the applied stress, keeping in mind applications such as hydrogel actuators under environmental stimuli or devices made of electroactive polymers. Numerical comparisons show the ability of the beam model to reproduce finite elasticity results with good efficiency.
An alternative Biot's displacement formulation for porous materials.
Dazel, Olivier; Brouard, Bruno; Depollier, Claude; Griffiths, Stéphane
2007-06-01
This paper proposes an alternative displacement formulation of Biot's linear model for poroelastic materials. Its advantage is a simplification of the formalism without making any additional assumptions. The main difference between the method proposed in this paper and the original one is the choice of the generalized coordinates. In the present approach, the generalized coordinates are chosen in order to simplify the expression of the strain energy, which is expressed as the sum of two decoupled terms. Hence, new equations of motion are obtained whose elastic forces are decoupled. The simplification of the formalism is extended to Biot and Willis thought experiments, and simpler expressions of the parameters of the three Biot waves are also provided. A rigorous derivation of equivalent and limp models is then proposed. It is finally shown that, for the particular case of sound-absorbing materials, additional simplifications of the formalism can be obtained.
An inverse model for a free-boundary problem with a contact line: Steady case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, Oleg; Protas, Bartosz
2009-07-20
This paper reformulates the two-phase solidification problem (i.e., the Stefan problem) as an inverse problem in which a cost functional is minimized with respect to the position of the interface and subject to PDE constraints. An advantage of this formulation is that it allows for a thermodynamically consistent treatment of the interface conditions in the presence of a contact point involving a third phase. It is argued that such an approach in fact represents a closure model for the original system and some of its key properties are investigated. We describe an efficient iterative solution method for the Stefan problem formulated in this way which uses shape differentiation and adjoint equations to determine the gradient of the cost functional. Performance of the proposed approach is illustrated with sample computations concerning 2D steady solidification phenomena.
NASA Astrophysics Data System (ADS)
Lechner, M.; Hellmich, Ch.; Mang, H. A.
Embedded in a thermochemoplastic material law set up in the framework of thermodynamics, the focus of the work is on the creep characteristics of shotcrete. Short-term creep, with a characteristic duration of several days, turns out to be a fundamental feature for realistic modelling of the structural behaviour of tunnels driven according to the New Austrian Tunnelling Method (NATM). Its origin is a stress-induced water movement within the capillary pores of concrete. This process is related to the accumulation of hydrates, which are initially free of micro-stress. Hence, an incremental formulation for aging viscoelasticity turns out to be a proper tool for modelling this kind of creep. The usefulness of this formulation is tested by re-analyzing a relaxation test with non-constant prescribed strains, showing quantitatively correct results for concrete and qualitatively correct results for shotcrete. The latter results indicate the necessity of classical creep tests for shotcrete.
Peters, N.E.; Freer, J.; Beven, K.
2003-01-01
Preliminary modelling results for a new version of the rainfall-runoff model TOPMODEL, dynamic TOPMODEL, are compared with those of the original TOPMODEL formulation for predicting streamflow at the Panola Mountain Research Watershed, Georgia. Dynamic TOPMODEL uses a kinematic wave routing of subsurface flow, which allows for dynamically variable upslope contributing areas, while retaining the concept of hydrological similarity to increase computational efficiency. Model performance in predicting discharge was assessed for the original TOPMODEL and for one landscape unit (LU) and three LU versions of the dynamic TOPMODEL (a bare rock area, hillslope with regolith <1 m, and a riparian zone with regolith of about 5 m). All simulations used a 30 min time step for each of three water years. Each 1-LU model underpredicted the peak streamflow, and generally overpredicted recession streamflow during wet periods and underpredicted during dry periods. The difference between predicted recession streamflow generally was less for the dynamic TOPMODEL and smallest for the 3-LU model. Bayesian combination of results for different water years within the GLUE methodology left no behavioural original or 1-LU dynamic models and only 168 (of 96 000 sample parameter sets) for the 3-LU model. The efficiency for the streamflow prediction of the best 3-LU model was 0.83 for an individual year, but the results suggest that further improvements could be made. © 2003 John Wiley & Sons, Ltd.
2013-01-01
Background 3,3′-Diindolylmethane (DIM) is known as an agent of natural origin that provides protection against different cancers due to the broad spectrum of its biological activities in vivo. However, this substance has a very poor biodistribution and absorption in animal tissues. This preclinical trial was conducted to evaluate the pharmacokinetics and bioavailability of various DIM formulations in an animal model. Materials and methods The pharmacokinetic parameters of one crystalline DIM formulation and one liquid DIM formulation (oil solution) compared to non-formulated crystalline DIM (control) were tested in 200 rats. The formulations were orally administered to animals by gavage at doses of 200 mg/kg per DIM (crystalline DIM formulation and non-formulated crystalline DIM) and 0.1 mg/kg per DIM (DIM in oil solution). DIM plasma elimination was measured using an HPLC method; after that, the area under the curve (AUC), relative bioavailability, and absolute bioavailability were estimated for the two formulations in relation to non-formulated crystalline DIM. Results and conclusion The highest bioavailability was achieved by administering liquid DIM (oil solution), containing cod liver oil and polysorbate. The level of DIM in rat blood plasma was about fivefold higher, even though a 2,000-fold lower dose was administered compared to the crystalline DIM forms. The novel pharmacological DIM substance with high bioavailability may be considered as a promising targeted antitumor chemopreventive agent. It could be used to prevent breast and ovarian cancer development in patients with heterozygous inherited and sporadic BRCA1 gene mutations. Further preclinical and clinical trials are needed to prove this concept. PMID:24325835
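The dose-corrected relative bioavailability comparison described above follows the standard relation F_rel = (AUC_test/AUC_ref)·(Dose_ref/Dose_test); the sketch below applies it with a trapezoidal AUC and invented concentration-time data.

```python
# Dose-corrected relative bioavailability with trapezoidal AUC.  The sampling
# times and plasma concentrations are invented and are not the study's data.
import numpy as np

def auc_trapezoid(t, c):
    return np.trapz(c, t)

def relative_bioavailability(auc_test, dose_test, auc_ref, dose_ref):
    return (auc_test / auc_ref) * (dose_ref / dose_test)

if __name__ == "__main__":
    t = np.array([0, 0.5, 1, 2, 4, 8, 24])            # hours (hypothetical)
    c_ref = np.array([0, 10, 18, 15, 9, 4, 0.5])       # ng/mL, crystalline DIM (hypothetical)
    c_oil = np.array([0, 30, 60, 48, 30, 12, 1.5])     # ng/mL, oil-solution DIM (hypothetical)
    auc_ref, auc_oil = auc_trapezoid(t, c_ref), auc_trapezoid(t, c_oil)
    f_rel = relative_bioavailability(auc_oil, dose_test=0.1, auc_ref=auc_ref, dose_ref=200.0)
    print(f"AUC (reference) = {auc_ref:.1f}, AUC (oil) = {auc_oil:.1f}, dose-corrected F_rel = {f_rel:.0f}x")
```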
The dual process model of coping with bereavement: a decade on.
Stroebe, Margaret; Schut, Henk
2010-01-01
The Dual Process Model of Coping with Bereavement (DPM; Stroebe & Schut, 1999) is described in this article. The rationale is given as to why this model was deemed necessary and how it was designed to overcome limitations of earlier models of adaptive coping with loss. Although building on earlier theoretical formulations, it contrasts with other models along a number of dimensions which are outlined. In addition to describing the basic parameters of the DPM, theoretical and empirical developments that have taken place since the original publication of the model are summarized. Guidelines for future research are given focusing on principles that should be followed to put the model to stringent empirical test.
Matos, Ely Edison; Campos, Fernanda; Braga, Regina; Palazzi, Daniele
2010-02-01
The amount of information generated by biological research has led to an intensive use of models. Mathematical and computational modeling needs accurate descriptions to share, reuse and simulate models as formulated by their original authors. In this paper, we introduce the Cell Component Ontology (CelO), expressed in OWL-DL. This ontology captures both the structure of a cell model and the properties of functional components. We use this ontology in a Web project (CelOWS) to describe, query and compose CellML models, using semantic web services. It aims to improve reuse and composition of existing components and allow semantic validation of new models.
Random walk in degree space and the time-dependent Watts-Strogatz model
NASA Astrophysics Data System (ADS)
Casa Grande, H. L.; Cotacallapa, M.; Hase, M. O.
2017-01-01
In this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. This scheme maps the problem into a random walk in degree space, and then we choose the paths that are responsible for the dominant contributions. The method is illustrated on the dynamical versions of the Erdős-Rényi and Watts-Strogatz graphs, which were introduced as static models in the original formulation. We have succeeded in obtaining an analytical form for the dynamic Watts-Strogatz model, which is asymptotically exact for some regimes.
On a two-particle bound system on the half-line
NASA Astrophysics Data System (ADS)
Kerner, Joachim; Mühlenbruch, Tobias
2017-10-01
In this paper we provide an extension of the model discussed in [10] describing two singularly interacting particles on the half-line ℝ+. In this model, the particles are interacting only whenever at least one particle is situated at the origin. Stimulated by [11] we then provide a generalisation of this model in order to include additional interactions between the particles leading to a molecular-like state. We give a precise mathematical formulation of the Hamiltonian of the system and perform spectral analysis. In particular, we are interested in the effect of the singular two-particle interactions onto the molecule.
Disorder trapping by rapidly moving phase interface in an undercooled liquid
NASA Astrophysics Data System (ADS)
Galenko, Peter; Danilov, Denis; Nizovtseva, Irina; Reuther, Klemens; Rettenmayr, Markus
2017-08-01
Non-equilibrium phenomena such as the disappearance of solute drag, the origin of solute trapping and the evolution of disorder trapping occur during fast transformations in which metastable phases originate [D.M. Herlach, P.K. Galenko, D. Holland-Moritz, Metastable solids from undercooled melts (Elsevier, Amsterdam, 2007)]. In the present work, a theoretical investigation of disorder trapping by a rapidly moving phase interface is presented. Using a model of fast phase transformations, a system of governing equations for the diffusion of atoms and the evolution of both the long-range order parameter and the phase-field variable is formulated. First numerical solutions are carried out for a congruently melting binary alloy system.
Palti, Elías
2018-01-01
This paper analyzes how Latin American historiography has addressed the issue of "the ideological origins of the revolution of independence," and how the formulation of that topic implies assumptions proper to the tradition of the history of ideas and leads to anachronistic conceptual transpositions. Halperín Donghi's work models a different approach, illuminating how a series of meaningful torsions within traditional languages provided the ideological framework for a result incompatible with those languages. This paradox forces a break with the frameworks of the history of ideas and the set of antinomies intrinsic to them, such as that between "tradition" and "modernity."
Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-07-01
Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing as it deals with a distribution of variables in the subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. Among those, the wavelet basis is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that the subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
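As a concrete example of one mode-decomposition basis mentioned above, the sketch below computes proper orthogonal decomposition (empirical orthogonal function) modes as the left singular vectors of a mean-removed snapshot matrix; the synthetic field stands in for model or observational data.

```python
# POD / EOF modes from a snapshot matrix via SVD.
import numpy as np

def pod_modes(snapshots, n_modes=3):
    """snapshots: (n_points, n_times) data matrix."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = s**2 / np.sum(s**2)          # variance fraction captured by each mode
    return U[:, :n_modes], energy[:n_modes]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.linspace(0, 2 * np.pi, 200)
    t = np.linspace(0, 10, 80)
    field = (np.outer(np.sin(x), np.cos(t)) +
             0.3 * np.outer(np.sin(3 * x), np.sin(2 * t)) +
             0.05 * rng.standard_normal((200, 80)))     # synthetic two-mode field plus noise
    modes, energy = pod_modes(field, n_modes=2)
    print("captured variance fractions:", np.round(energy, 3))
```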
A simple shear limited, single size, time dependent flocculation model
NASA Astrophysics Data System (ADS)
Kuprenas, R.; Tran, D. A.; Strom, K.
2017-12-01
This research focuses on the modeling of flocculation of cohesive sediment due to turbulent shear, specifically investigating the dependency of flocculation on the concentration of cohesive sediment. Flocculation is important in larger sediment transport models, as cohesive particles can create aggregates which are orders of magnitude larger than their unflocculated state. As the settling velocity of each particle is determined by the sediment size, density, and shape, accounting for this aggregation is important in determining where the sediment is deposited. This study provides a new formulation for flocculation of cohesive sediment by modifying the Winterwerp (1998) flocculation model (W98) so that it limits floc size to the Kolmogorov micro length scale. The W98 model is a simple approach that calculates the average floc size as a function of time. Because of its simplicity, the W98 model is ideal for implementing in larger sediment transport models; however, the model tends to overpredict the dependency of the floc size on concentration. It was found that modifying the coefficients within the original model did not allow the model to capture the dependency on concentration. Therefore, a new term was added within the breakup kernel of the W98 formulation. The new formulation results in a single-size, shear-limited, time-dependent flocculation model that effectively captures the dependency of the equilibrium floc size on suspended sediment concentration as well as the time to equilibrium. The overall behavior of the new model was explored and shown to align well with other studies of flocculation. Winterwerp, J. C. (1998). A simple model for turbulence induced flocculation of cohesive sediment. Journal of Hydraulic Research, 36(3):309-326.
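A heavily hedged sketch of a single-size, time-dependent flocculation model of the general type discussed above, with the floc diameter additionally capped at the Kolmogorov microscale as in the modified formulation; the coefficients and exponents are illustrative placeholders, not the calibrated W98 or modified-model values.

```python
# Generic aggregation-minus-breakup relaxation of the mean floc diameter D,
# capped at the Kolmogorov length scale.  All rate coefficients are invented.
import numpy as np

def floc_size_history(c=0.1, G=10.0, nu=1e-6, Dp=5e-6,
                      ka=2.0, kb=1e-2, dt=0.1, t_end=60.0):
    eps = nu * G**2                                    # dissipation estimated from the shear rate
    eta = (nu**3 / eps) ** 0.25                        # Kolmogorov length scale
    D, t, hist = Dp, 0.0, []
    while t < t_end:
        growth = ka * c * G * D                        # aggregation grows with concentration and shear
        breakup = kb * G * D * (D - Dp) / Dp           # breakup grows as flocs exceed the primary size
        D = min(max(D + dt * (growth - breakup), Dp), eta)   # limit floc size to eta
        t += dt
        hist.append((t, D))
    return eta, np.array(hist)

if __name__ == "__main__":
    eta, hist = floc_size_history()
    print(f"Kolmogorov scale = {eta*1e6:.0f} um, equilibrium floc size = {hist[-1, 1]*1e6:.1f} um")
```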
Age- and bite-structured models for vector-borne diseases.
Rock, K S; Wood, D A; Keeling, M J
2015-09-01
The biology and behaviour of biting insects is a vitally important aspect in the spread of vector-borne diseases. This paper aims to determine, through the use of mathematical models, what effect incorporating vector senescence and realistic feeding patterns has on disease. A novel model is developed to enable the effects of age- and bite-structure to be examined in detail. This original PDE framework extends previous age-structured models into a further dimension to give a new insight into the role of vector biting and its interaction with vector mortality and spread of disease. Through the PDE model, the roles of the vector death and bite rates are examined in a way which is impossible under the traditional ODE formulation. It is demonstrated that incorporating more realistic functions for vector biting and mortality in a model may give rise to different dynamics than those seen under a more simple ODE formulation. The numerical results indicate that the efficacy of control methods that increase vector mortality may not be as great as predicted under a standard host-vector model, whereas other controls including treatment of humans may be more effective than previously thought.
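For contrast with the age- and bite-structured PDE framework described above, the following is a minimal sketch of the standard unstructured host-vector ODE model (constant per-capita death and bite rates) that such refinements are compared against; all parameter values are illustrative.

```python
# Ross-Macdonald-type host-vector ODE system integrated with forward Euler.
import numpy as np

def host_vector_step(Ih, Iv, dt, a=0.3, b=0.5, c=0.5, r=0.05, mu=0.1, m=10.0):
    """One Euler step for infected host fraction Ih and infected vector fraction Iv.

    a  : bites per vector per day          b, c : transmission probabilities
    r  : host recovery rate                mu   : vector mortality rate
    m  : vectors per host
    """
    dIh = m * a * b * Iv * (1 - Ih) - r * Ih
    dIv = a * c * Ih * (1 - Iv) - mu * Iv
    return Ih + dt * dIh, Iv + dt * dIv

if __name__ == "__main__":
    Ih, Iv, dt = 0.01, 0.0, 0.1
    for _ in range(int(365 / dt)):          # one year of simulated time
        Ih, Iv = host_vector_step(Ih, Iv, dt)
    print(f"endemic levels after one year: hosts {Ih:.2f}, vectors {Iv:.2f}")
```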
NASA Technical Reports Server (NTRS)
Song, Y. T.
2002-01-01
It is found that two adaptive parametric functions can be introduced into the basic ocean equations for utilizing the optimal or hybrid features of commonly used z-level, terrain-following, isopycnal, and pressure coordinates in numerical ocean models. The two parametric functions are formulated by combining three techniques: the arbitrary vertical coordinate system of Kasahara (1974), the Jacobian pressure gradient formulation of Song (1998), and a newly developed metric factor that permits both compressible (non-Boussinesq) and incompressible (Boussinesq) approximations. Based on the new formulation, an adaptive modeling strategy is proposed and a staggered finite volume method is designed to ensure conservation of important physical properties and numerical accuracy. Implementation of the combined techniques in SCRUM (Song and Haidvogel, 1994) shows that the adaptive modeling strategy can be applied to any existing ocean model without incurring computational expense or altering the original numerical schemes. Such a generalized coordinate model is expected to benefit diverse ocean modelers for easily choosing optimal vertical structures and sharing modeling resources based on a common model platform. Several representative oceanographic problems with different scales and characteristics, such as coastal canyons, basin-scale circulation, and global ocean circulation, are used to demonstrate the model's capability for multiple applications. New results show that the model is capable of simultaneously resolving both Boussinesq and non-Boussinesq, and both small- and large-scale processes well. This talk will focus on its applications of multiple satellite sensing data in eddy-resolving simulations of the Asian marginal seas and the Kuroshio. Attention will be given to how TOPEX/Poseidon SSH, TRMM SST, and GRACE ocean bottom pressure can be correctly represented in a non-Boussinesq model.
Aerodynamic mathematical modeling - basic concepts
NASA Technical Reports Server (NTRS)
Tobak, M.; Schiff, L. B.
1981-01-01
The mathematical modeling of the aerodynamic response of an aircraft to arbitrary maneuvers is reviewed. Bryan's original formulation, linear aerodynamic indicial functions, and superposition are considered. These concepts are extended into the nonlinear regime. The nonlinear generalization yields a form for the aerodynamic response that can be built up from the responses to a limited number of well-defined characteristic motions, reproducible in principle either in wind tunnel experiments or flow field computations. A further generalization leads to a form accommodating the discontinuous and double-valued behavior characteristic of hysteresis in the steady-state aerodynamic response.
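Schematically, the linear indicial-response superposition underlying the formulation reads as follows, with C a generic aerodynamic coefficient and C^ind its indicial response to a unit step in angle of attack; sign and initial-condition conventions vary between treatments.

```latex
% Linear indicial/superposition (Duhamel) form; conventions vary.
C(t) \;=\; C^{\mathrm{ind}}(t)\,\alpha(0)
      \;+\; \int_{0}^{t} C^{\mathrm{ind}}(t-\tau)\,\frac{d\alpha}{d\tau}\, d\tau
```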
NASA Astrophysics Data System (ADS)
Ebrahimi Zade, Amir; Sadegheih, Ahmad; Lotfi, Mohammad Mehdi
2014-07-01
Hubs are centers for collection, rearrangement, and redistribution of commodities in transportation networks. In this paper, non-linear multi-objective formulations for single and multiple allocation hub maximal covering problems, as well as their linearized versions, are proposed. The formulations substantially mitigate the complexity of existing models due to the fewer number of constraints and variables. Also, uncertain shipments are studied in the context of hub maximal covering problems. In many real-world applications, any link on the path from origin to destination may fail to work due to disruption. Therefore, in the proposed bi-objective model, maximizing the safety of the weakest path in the network is considered as the second objective together with the traditional maximum coverage goal. Furthermore, to solve the bi-objective model, a modified version of NSGA-II with a new dynamic immigration operator is developed in which the number of immigrants depends on the results of the other two common NSGA-II operators, i.e., mutation and crossover. Besides validating the proposed models, computational results confirm the better performance of the modified NSGA-II versus the traditional one.
Vibrational properties of nanocrystals from the Debye Scattering Equation
Scardi, P.; Gelisio, L.
2016-02-26
One hundred years after the original formulation by Petrus J.W. Debije (aka Peter Debye), the Debye Scattering Equation (DSE) is still the most accurate expression to model the diffraction pattern from nanoparticle systems. A major limitation of the original form of the DSE is that it refers to a static domain, so that including thermal disorder usually requires rescaling the equation by a Debye-Waller thermal factor. The latter is taken from the traditional diffraction theory developed in Reciprocal Space (RS), which is opposed to the atomistic paradigm of the DSE, usually referred to as the Direct Space (DS) approach. Besides being a hybrid of DS and RS expressions, rescaling the DSE by the Debye-Waller factor is an approximation which completely misses the contribution of Temperature Diffuse Scattering (TDS). The present work proposes a solution to include thermal effects coherently with the atomistic approach of the DSE. Here, a deeper insight into the vibrational dynamics of nanostructured materials can be obtained with few changes with respect to the standard formulation of the DSE, providing information on the correlated displacement of vibrating atoms.
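A minimal implementation of the static DSE referred to above, with identical atoms and no thermal (Debye-Waller/TDS) correction, which is exactly the limitation the paper addresses; the cluster geometry is hypothetical.

```python
# Static Debye Scattering Equation: I(Q) = sum_i sum_j f_i f_j sin(Q r_ij)/(Q r_ij).
import numpy as np

def debye_scattering(positions, Q, f=1.0):
    """positions: (N, 3) atomic coordinates; Q: array of scattering vector magnitudes."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)                  # pairwise distances (N, N)
    Qr = Q[:, None, None] * r[None, :, :]
    sinc = np.where(Qr > 0, np.sin(Qr) / np.where(Qr > 0, Qr, 1.0), 1.0)  # sinc -> 1 at r = 0
    return f**2 * sinc.sum(axis=(1, 2))

if __name__ == "__main__":
    g = np.arange(3) * 2.5                             # hypothetical 3x3x3 cubic cluster, 2.5 A spacing
    positions = np.array([[x, y, z] for x in g for y in g for z in g])
    Q = np.linspace(0.5, 8.0, 5)                       # 1/Angstrom
    print(np.round(debye_scattering(positions, Q), 1))
```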
An efficient algorithm for the generalized Foldy-Lax formulation
NASA Astrophysics Data System (ADS)
Huang, Kai; Li, Peijun; Zhao, Hongkai
2013-02-01
Consider the scattering of a time-harmonic plane wave incident on a two-scale heterogeneous medium, which consists of scatterers that are much smaller than the wavelength and extended scatterers that are comparable to the wavelength. In this work we treat those small scatterers as isotropic point scatterers and use a generalized Foldy-Lax formulation to model wave propagation and capture multiple scattering among point scatterers and extended scatterers. Our formulation is given as a coupled system, which combines the original Foldy-Lax formulation for the point scatterers and the regular boundary integral equation for the extended obstacle scatterers. The existence and uniqueness of the solution for the formulation is established in terms of physical parameters such as the scattering coefficient and the separation distances. Computationally, an efficient physically motivated Gauss-Seidel iterative method is proposed to solve the coupled system, where only a linear system of algebraic equations for the point scatterers or a boundary integral equation for a single extended obstacle scatterer needs to be solved at each iteration step. The convergence of the iterative method is also characterized in terms of physical parameters. Numerical tests for the far-field patterns of scattered fields arising from uniformly or randomly distributed point scatterers and single or multiple extended obstacle scatterers are presented.
NASA Astrophysics Data System (ADS)
Boutillier, J.; Ehrhardt, L.; De Mezzo, S.; Deck, C.; Magnan, P.; Naz, P.; Willinger, R.
2018-03-01
With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, either for building integrity or for personal security, increases in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability, and a new promising formulation is proposed for scaled heights of burst ranging from 24.6 to 172.9 cm/kg^{1/3}.
NASA Astrophysics Data System (ADS)
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
Recovery of a geocentric reference frame using the present-day GPS system
NASA Technical Reports Server (NTRS)
Malla, Rajendra P.; Wu, Sien-Chong
1990-01-01
A geocentric reference frame adopts the center of mass of the earth as the origin of the coordinate axes. The center of mass of the earth is the natural and unambiguous origin of a geocentric satellite dynamical system. But in practice a kinematically obtained terrestrial reference frame may assume an origin other than the geocenter. The establishment of a geocentric reference frame, to which all relevant observations and results can be referred and in which geodynamic theories or models for the dynamic behavior of earth can be formulated, requires the ability to accurately recover a given coordinate frame origin offset from the geocenter. GPS measurements, because of their abundance and broad distribution, provide a powerful tool to obtain this origin offset in a short period of time. Two effective strategies have been devised. Data from the First Central And South America (Casa Uno) geodynamics experiment has been studied, in order to demonstrate the ability of recovering the geocenter location with present day GPS satellites and receivers.
NASA Technical Reports Server (NTRS)
Bellan, J.; Lathouwers, D.
2000-01-01
A novel multiphase flow model is presented for describing the pyrolysis of biomass in a 'bubbling' fluidized bed reactor. The mixture of biomass and sand in a gaseous flow is conceptualized as a particulate phase composed of two classes interacting with the carrier gaseous flow. The solid biomass is composed of three initial species: cellulose, hemicellulose and lignin. From each of these initial species, two new solid species originate during pyrolysis: an 'active' species and a char (the char being common to the three components), thus totaling seven solid-biomass species. The gas phase is composed of the original carrier gas (steam), tar and gas; the latter two species originate from the volumetric pyrolysis reaction. The conservation equations are derived from the Boltzmann equations through ensemble averaging. Stresses in the gaseous phase are the sum of the Newtonian and Reynolds (turbulent) contributions. The particulate phase stresses are the sum of collisional and Reynolds contributions. Heat transfer between phases and heat transfer between classes in the particulate phase are modeled, the latter resulting from collisions between sand and biomass. Closure of the equations must be performed by modeling the Reynolds stresses for both phases. The results of a simplified version (first step) of the model are presented.
Lattice Boltzmann Methods to Address Fundamental Boiling and Two-Phase Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uddin, Rizwan
2012-01-01
This report presents the progress made during the fourth (no cost extension) year of this three-year grant aimed at the development of a consistent Lattice Boltzmann formulation for boiling and two-phase flows. During the first year, a consistent LBM formulation for the simulation of a two-phase water-steam system was developed. Results of initial model validation in a range of thermo-dynamic conditions typical for Boiling Water Reactors (BWRs) were shown. Progress was made on several fronts during the second year. Most important of these included the simulation of the coalescence of two bubbles including the surface tension effects. Work during the third year focused on the development of a new lattice Boltzmann model, called the artificial interface lattice Boltzmann model (AILB model), for the simulation of two-phase dynamics. The model is based on the principle of free energy minimization and invokes the Gibbs-Duhem equation in the formulation of the non-ideal forcing function. This was reported in detail in the last progress report. Part of the efforts during the last (no-cost extension) year were focused on developing a parallel capability for the 2D as well as for the 3D codes developed in this project. This will be reported in the final report. Here we report the work carried out on testing the AILB model for conditions including the thermal effects. A simplified thermal LB model, based on the thermal energy distribution approach, was developed. The simplifications are made after neglecting the viscous heat dissipation and the work done by pressure in the original thermal energy distribution model. Details of the model are presented here, followed by a discussion of the boundary conditions, and then results for some two-phase thermal problems.
On the origin of dual Lax pairs and their r-matrix structure
NASA Astrophysics Data System (ADS)
Avan, Jean; Caudrelier, Vincent
2017-10-01
We establish the algebraic origin of the following observations made previously by the authors and coworkers: (i) A given integrable PDE in 1 + 1 dimensions within the Zakharov-Shabat scheme related to a Lax pair can be cast in two distinct, dual Hamiltonian formulations; (ii) Associated to each formulation is a Poisson bracket and a phase space (which are not compatible in the sense of Magri); (iii) Each matrix in the Lax pair satisfies a linear Poisson algebra à la Sklyanin characterized by the same classical r matrix. We develop the general concept of dual Lax pairs and dual Hamiltonian formulation of an integrable field theory. We elucidate the origin of the common r-matrix structure by tracing it back to a single Lie-Poisson bracket on a suitable coadjoint orbit of the loop algebra sl(2, ℂ) ⊗ ℂ(λ, λ⁻¹). The results are illustrated with the examples of the nonlinear Schrödinger and Gerdjikov-Ivanov hierarchies.
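For reference, the linear ("Sklyanin-type") Poisson algebra mentioned in point (iii) has the standard form below, with L_1 = L ⊗ 1, L_2 = 1 ⊗ L and r_12 the classical r-matrix acting in the tensor product of the two auxiliary spaces.

```latex
% Standard linear Sklyanin bracket for a Lax matrix L(lambda).
\{ L_1(\lambda),\, L_2(\mu) \} \;=\;
\bigl[\, r_{12}(\lambda-\mu),\; L_1(\lambda) + L_2(\mu) \,\bigr]
```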
Mouri, Chika; Mikage, Masayuki
2015-01-01
The original formulation for "Tusujiu," which Japanese people still consume on the morning of January 1st, was created by Hua Tuo, but has not been studied in detail. The book Huatuo Shenyi Bizhuan, found in 1918, describes a concoction, "Biyijiu," that shows great similarity to the current Tusujiu; the ingredients for Biyijiu being rhubarb, atractylodes rhizome, cinnamon bark, platycodon root, zanthoxylum fruit, processed aconite root and smilax rhizome. The procedures for preparing and drinking it are to "pound the ingredients and then put them into a silk bag dyed with madder. During the daytime of the last day of the year, hang the bag in a well to soften the powder. Take the bag out early in the morning of the next day, the first day of the year. Heat the bag in fermented liquor until simmering. Drink the liquid with all family members, doing so while facing east. If one person drinks it, there will be no disease in the family. If the whole family drinks it, there will be no disease in their neighborhood in an area of one square 'li'. In this study, to determine the original formulation for Tusujiu, we examined a number of ancient medical texts from the 3rd to the 13th century that discuss Biyijiu and Tusujiu. As a result, we concluded that "Biyijiu" is likely to be the original formulation developed by Hua Tuo.
Benchmarking optimization software with COPS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, E.D.; More, J.J.
2001-01-08
The COPS test set provides a modest selection of difficult nonlinearly constrained optimization problems from applications in optimal design, fluid dynamics, parameter estimation, and optimal control. In this report we describe version 2.0 of the COPS problems. The formulation and discretization of the original problems have been streamlined and improved. We have also added new problems. The presentation of COPS follows the original report, but the description of the problems has been streamlined. For each problem we discuss the formulation of the problem and the structural data in Table 0.1 on the formulation. The aim of presenting this data is to provide an approximate idea of the size and sparsity of the problem. We also include the results of computational experiments with the LANCELOT, LOQO, MINOS, and SNOPT solvers. These computational experiments differ from the original results in that we have deleted problems that were considered to be too easy. Moreover, in the current version of the computational experiments, each problem is tested with four variations. An important difference between this report and the original report is that the tables that present the computational experiments are generated automatically from the testing script. This is explained in more detail in the report.
On the origin of amplitude reduction mechanism in tapping mode atomic force microscopy
NASA Astrophysics Data System (ADS)
Keyvani, Aliasghar; Sadeghian, Hamed; Goosen, Hans; van Keulen, Fred
2018-04-01
The origin of amplitude reduction in Tapping Mode Atomic Force Microscopy (TM-AFM) is typically attributed to the shift in resonance frequency of the cantilever due to the nonlinear tip-sample interactions. In this paper, we present a different insight into the same problem which, besides explaining the amplitude reduction mechanism, provides a simple reasoning for the relationship between tip-sample interactions and operation parameters (amplitude and frequency). The proposed formulation, which attributes the amplitude reduction to an interference between the tip-sample and dither force, only deals with the linear part of the system; however, it fully agrees with experimental results and numerical solutions of the full nonlinear model of TM-AFM.
Modelling memory colour region for preference colour reproduction
NASA Astrophysics Data System (ADS)
Zeng, Huanzhao; Luo, Ronnier
2010-01-01
Colour preference adjustment is an essential step for colour image enhancement and perceptual gamut mapping. In colour reproduction for pictorial images, properly shifting colours away from their colorimetric originals may produce a more preferred colour reproduction result. Memory colours, as a portion of the colour regions for colour preference adjustment, are especially important for preference colour reproduction. Identifying memory colours or modelling the memory colour region is a basic step in studying preferred memory colour enhancement. In this study, we first created a gamut for each memory colour region, represented as a convex hull, and then used the convex hull to guide the mathematical modelling that formulates the colour region for colour enhancement.
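A hedged sketch of the convex-hull step described above: a hull is built around sample chromaticities labelled as a memory colour and used to test whether new colours fall inside the region; the sample points are synthetic, not the study's data.

```python
# Convex-hull region for a memory colour in a 2D chromaticity plane,
# with a point-in-region test.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def memory_colour_region(samples):
    """Return the convex hull and a point-in-region test for 2D colour samples."""
    hull = ConvexHull(samples)
    tri = Delaunay(samples[hull.vertices])
    return hull, lambda pts: tri.find_simplex(np.atleast_2d(pts)) >= 0

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    skin_ab = rng.normal(loc=[18.0, 16.0], scale=[4.0, 4.0], size=(300, 2))  # hypothetical a*, b* samples
    hull, inside = memory_colour_region(skin_ab)
    print("hull vertices:", len(hull.vertices))
    print("inside region:", inside([[18.0, 15.0], [-30.0, 40.0]]))
```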
Compressible magma/mantle dynamics: 3-D, adaptive simulations in ASPECT
NASA Astrophysics Data System (ADS)
Dannberg, Juliane; Heister, Timo
2016-12-01
Melt generation and migration are an important link between surface processes and the thermal and chemical evolution of the Earth's interior. However, their vastly different timescales make it difficult to study mantle convection and melt migration in a unified framework, especially for 3-D global models. And although experiments suggest an increase in melt volume of up to 20 per cent from the depth of melt generation to the surface, previous computations have neglected the individual compressibilities of the solid and the fluid phase. Here, we describe our extension of the finite element mantle convection code ASPECT that adds melt generation and migration. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. Applying adaptive mesh refinement to this type of problems is particularly advantageous, as the resolution can be increased in areas where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. Together with a high-performance, massively parallel implementation, this allows for high-resolution, 3-D, compressible, global mantle convection simulations coupled with melt migration. We evaluate the functionality and potential of this method using a series of benchmarks and model setups, compare results of the compressible and incompressible formulation, and show the effectiveness of adaptive mesh refinement when applied to melt migration. Our model of magma dynamics provides a framework for modelling processes on different scales and investigating links between processes occurring in the deep mantle and melt generation and migration. This approach could prove particularly useful applied to modelling the generation of komatiites or other melts originating in greater depths. The implementation is available in the Open Source ASPECT repository.
Generation of calibrated tungsten target x-ray spectra: modified TBC model.
Costa, Paulo R; Nersissian, Denise Y; Salvador, Fernanda C; Rio, Patrícia B; Caldas, Linda V E
2007-01-01
In spite of the recent advances in the experimental detection of x-ray spectra, theoretical or semi-empirical approaches for determining realistic x-ray spectra in the range of diagnostic energies are important tools for planning experiments, estimating radiation doses in patients, and formulating radiation shielding models. The TBC model is one of the most useful approaches since it allows for straightforward computer implementation, and it is able to accurately reproduce the spectra generated by tungsten target x-ray tubes. However, as originally presented, the TBC model fails in situations where the determination of x-ray spectra produced by an arbitrary waveform or the calculation of realistic values of air kerma for a specific x-ray system is desired. In the present work, the authors revisited the assumptions used in the original paper published by . They proposed a complementary formulation for taking into account the waveform and the representation of the calculated spectra in a dosimetric quantity. The performance of the proposed model was evaluated by comparing values of air kerma and first and second half value layers from calculated and measured spectra obtained using different voltages and filtrations. For the output, the difference between experimental and calculated data was better than 5.2%. First and second half value layers presented differences of 23.8% and 25.5% in the worst case. The performance of the model in accurately calculating these data was better for lower voltage values. Comparisons were also performed with spectral data measured using a CZT detector. Another test was performed by evaluating the model when considering a waveform distinct from a constant potential. In all cases, the model results can be considered a good representation of the measured data. The results from the modifications to the TBC model introduced in the present work reinforce the value of the TBC model for application to quantitative evaluations in radiation physics.
Adapting Active Shape Models for 3D segmentation of tubular structures in medical images.
de Bruijne, Marleen; van Ginneken, Bram; Viergever, Max A; Niessen, Wiro J
2003-07-01
Active Shape Models (ASM) have proven to be an effective approach for image segmentation. In some applications, however, the linear model of gray level appearance around a contour that is used in ASM is not sufficient for accurate boundary localization. Furthermore, the statistical shape model may be too restricted if the training set is limited. This paper describes modifications to both the shape and the appearance model of the original ASM formulation. Shape model flexibility is increased, for tubular objects, by modeling the axis deformation independent of the cross-sectional deformation, and by adding supplementary cylindrical deformation modes. Furthermore, a novel appearance modeling scheme that effectively deals with a highly varying background is developed. In contrast with the conventional ASM approach, the new appearance model is trained on both boundary and non-boundary points, and the probability that a given point belongs to the boundary is estimated non-parametrically. The methods are evaluated on the complex task of segmenting thrombus in abdominal aortic aneurysms (AAA). Shape approximation errors were successfully reduced using the two shape model extensions. Segmentation using the new appearance model significantly outperformed the original ASM scheme; average volume errors are 5.1% and 45% respectively.
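For orientation, the following generic sketch builds the PCA-based statistical shape model that ASM rests on (mean shape plus deformation modes from aligned landmarks); it is synthetic and does not include the tubular axis/cross-section split or the non-parametric appearance model proposed in the paper.

```python
# PCA statistical shape model: x = mean + P @ b, with modes P from SVD of the
# centered landmark matrix and mode weights b expressed in standard deviations.
import numpy as np

def build_shape_model(shapes, n_modes=2):
    """shapes: (n_samples, n_landmarks*dim) aligned landmark vectors."""
    mean = shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    var = s**2 / (len(shapes) - 1)
    return mean, Vt[:n_modes].T, var[:n_modes]

def synthesize(mean, P, var, b_sigmas):
    b = np.asarray(b_sigmas) * np.sqrt(var)      # mode weights in units of std dev
    return mean + P @ b

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    angles = np.linspace(0, 2 * np.pi, 20, endpoint=False)
    base = np.column_stack([np.cos(angles), np.sin(angles)]).ravel()   # circular reference shape
    shapes = np.array([base * (1 + 0.1 * rng.standard_normal()) +
                       0.02 * rng.standard_normal(base.size) for _ in range(50)])
    mean, P, var = build_shape_model(shapes)
    new_shape = synthesize(mean, P, var, b_sigmas=[2.0, 0.0])
    print("mode variances:", np.round(var, 4), " new shape vector length:", new_shape.size)
```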
McLerran, Larry; Skokov, Vladimir V.
2016-09-19
We modify the McLerran–Venugopalan model to include only a finite number of sources of color charge. In the effective action for such a system of a finite number of sources, there is a point-like interaction and a Coulombic interaction. The point interaction generates the standard fluctuation term in the McLerran–Venugopalan model. The Coulomb interaction generates the charge screening originating from the well-known evolution in x. Such a model may be useful for computing angular harmonics of flow measured in high-energy hadron collisions for small systems. In this study we provide a basic formulation of the problem on a lattice.
Nonequilibrium thermodynamics of the shear-transformation-zone model
NASA Astrophysics Data System (ADS)
Luo, Alan M.; Öttinger, Hans Christian
2014-02-01
The shear-transformation-zone (STZ) model has been applied numerous times to describe the plastic deformation of different types of amorphous systems. We formulate this model within the general equation for nonequilibrium reversible-irreversible coupling (GENERIC) framework, thereby clarifying the thermodynamic structure of the constitutive equations and guaranteeing thermodynamic consistency. We propose natural, physically motivated forms for the building blocks of the GENERIC, which combine to produce a closed set of time evolution equations for the state variables, valid for any choice of free energy. We demonstrate an application of the new GENERIC-based model by choosing a simple form of the free energy. In addition, we present some numerical results and contrast those with the original STZ equations.
Aircraft Pitch Control With Fixed Order LQ Compensators
NASA Technical Reports Server (NTRS)
Green, James; Ashokkumar, C. R.; Homaifar, Abdollah
1997-01-01
This paper considers a given set of fixed order compensators for the aircraft pitch control problem. By augmenting compensator variables to the original state equations of the aircraft, a new dynamic model is obtained and used to seek an LQ controller. While the fixed order compensators can achieve a set of desired poles in a specified region, the LQ formulation provides inherent robustness properties. The time response for ride quality is significantly improved with a set of dynamic compensators.
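The augmentation step described above, in which compensator variables are appended to the aircraft state equations before an LQ gain is computed, can be sketched as follows. The short-period pitch model, the first-order compensator, and the weighting matrices are illustrative placeholders, not the values used in the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative short-period pitch dynamics: x = [alpha, q], u = elevator (placeholder numbers).
A = np.array([[-0.7, 1.0],
              [-5.0, -0.9]])
B = np.array([[0.0],
              [-2.5]])

# A fixed first-order compensator state xc with dynamics xc_dot = ac*xc + bc*u (placeholder).
ac, bc = -2.0, 1.0

# Augment the aircraft states with the compensator state.
Aa = np.block([[A,                np.zeros((2, 1))],
               [np.zeros((1, 2)), np.array([[ac]])]])
Ba = np.vstack([B, [[bc]]])

# LQ design on the augmented model.
Q = np.diag([10.0, 1.0, 0.1])     # weights on alpha, q, and the compensator state
R = np.array([[1.0]])
P = solve_continuous_are(Aa, Ba, Q, R)
K = np.linalg.solve(R, Ba.T @ P)
print("LQ gain on augmented states:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(Aa - Ba @ K))
```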
Chirikjian; Wang
2000-07-01
Partial differential equations (PDE's) for the probability density function (PDF) of the position and orientation of the distal end of a stiff macromolecule relative to its proximal end are derived and solved. The Kratky-Porod wormlike chain, the Yamakawa helical wormlike chain, and the original and revised Marko-Siggia models are examples of stiffness models to which the present formulation is applied. The solution technique uses harmonic analysis on the rotation and motion groups to convert PDE's governing the PDF's of interest into linear algebraic equations which have mathematically elegant solutions.
Lee, Woong Ryeol; Oh, Kyung Taek; Park, So Young; Yoo, Na Young; Ahn, Yong Sik; Lee, Don Haeng; Youn, Yu Seok; Lee, Deok-Keun; Cha, Kyung-Hoi; Lee, Eun Seong
2011-07-01
Herein, we describe magnetic cell levitation models using conventional polymeric microparticles or nanoparticles as a substrate for three-dimensional tumor cell culture. When the magnetic force originating from the ring-shaped magnets overcame the gravitational force, the magnetic field-levitated KB tumor cells adhered to the surface of magnetic iron oxide (Fe(3)O(4))-encapsulated nano/microparticles and formed concentrated clusters of levitated cells, ultimately developing into tumor spheroids. These simple cell culture models may prove useful for the screening of anticancer drugs and their formulations. Copyright © 2011 Elsevier B.V. All rights reserved.
A noise model for the evaluation of defect states in solar cells
Landi, G.; Barone, C.; Mauro, C.; Neitzert, H. C.; Pagano, S.
2016-01-01
A theoretical model, combining trapping/detrapping and recombination mechanisms, is formulated to explain the origin of random current fluctuations in silicon-based solar cells. In this framework, the comparison between dark and photo-induced noise allows the determination of important electronic parameters of the defect states. A detailed analysis of the electric noise, at different temperatures and for different illumination levels, is reported for crystalline silicon-based solar cells, in the pristine form and after artificial degradation with high energy protons. The evolution of the dominating defect properties is studied through noise spectroscopy. PMID:27412097
Pelat, Adrien; Felix, Simon; Pagneux, Vincent
2011-03-01
In modeling the wave propagation within a street canyon, particular attention must be paid to the description of both the multiple reflections of the wave on the building facades and the radiation into the free space above the street. The street canyon being considered as an open waveguide with a discontinuously varying cross-section, a coupled modal-finite element formulation is proposed to solve the three-dimensional wave equation within it. The originally open configuration (the street canyon being open to the sky above) is artificially turned into a closed waveguiding structure by using perfectly matched layers that truncate the infinite sky without introducing numerical reflection. Then the eigenmodes of the resulting waveguide are determined by a finite element computation in the cross-section. The eigensolutions can finally be used in a multimodal formulation of the wave propagation along the canyon, given its geometry and the end conditions at its extremities: initial field condition at the entrance and radiation condition at the output. © 2011 Acoustical Society of America
Generalized cable equation model for myelinated nerve fiber.
Einziger, Pinchas D; Livshitz, Leonid M; Mizrahi, Joseph
2005-10-01
Herein, the well-known cable equation for the nonmyelinated axon model is extended analytically to a myelinated axon formulation. The myelinated membrane conductivity is represented via the Fourier series expansion. The classical cable equation is thereby modified into a linear second order ordinary differential equation with periodic coefficients, known as Hill's equation. The general internal source response, expressed via repeated convolutions, uniformly converges provided that the entire periodic membrane is passive. The solution can be interpreted as an extended source response in an equivalent nonmyelinated axon (i.e., the response is governed by the classical cable equation). The extended source consists of the original source and a novel activation function, replacing the periodic membrane in the myelinated axon model. Hill's equation is explicitly integrated for the specific choice of piecewise constant membrane conductivity profile, thereby resulting in an explicit closed form expression for the transmembrane potential in terms of trigonometric functions. The Floquet's modes are recognized as the nerve fiber activation modes, which are conventionally associated with the nonlinear Hodgkin-Huxley formulation. They can also be incorporated in our linear model, provided that the periodic membrane point-wise passivity constraint is properly modified. Indeed, the modified condition, enforcing the periodic membrane passivity constraint on the average conductivity only, leads, for the first time, to the inclusion of the nerve fiber activation modes in our novel model. The validity of the generalized transmission-line and cable equation models for a myelinated nerve fiber is verified herein through a rigorous Green's function formulation and numerical simulations for the transmembrane potential induced in a three-dimensional myelinated cylindrical cell. It is shown that the dominant pole contribution of the exact modal expansion is the transmembrane potential solution of our generalized model.
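A crude numerical analogue of this model is the steady-state cable equation with a periodic, piecewise-constant membrane conductivity (leaky at the nodes of Ranvier, nearly insulating under the myelin), solved by finite differences. All geometry and parameter values in the sketch below are placeholders chosen only to illustrate the structure of the problem.

```python
import numpy as np

# Steady-state cable equation (1/r_a) V''(x) = g_m(x) V(x) - i_src(x) with a periodic,
# piecewise-constant membrane conductivity g_m(x): leaky nodes, insulating internodes.
L = 10.0e-3            # fiber length in m (placeholder)
n = 2000
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

r_a = 1.0e9            # axial resistance per unit length, Ohm/m (placeholder)
period = 1.0e-3        # node-to-node spacing, m (placeholder)
node_frac = 0.05       # fraction of each period occupied by the node of Ranvier
is_node = (x % period) / period < node_frac
g_m = np.where(is_node, 1.0e-3, 1.0e-6)   # membrane conductance per length, S/m (placeholder)

i_src = np.zeros(n)
i_src[n // 2] = 1.0e-6 / dx               # point current injection at the fiber midpoint

# Finite-difference discretization of V'' - r_a*g_m*V = -r_a*i_src as a tridiagonal system.
main = -2.0 / dx**2 - r_a * g_m
off = np.ones(n - 1) / dx**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
rhs = -r_a * i_src
A[0, :] = 0.0
A[0, 0] = 1.0                             # V = 0 at both ends (placeholder boundary condition)
A[-1, :] = 0.0
A[-1, -1] = 1.0
rhs[0] = rhs[-1] = 0.0

V = np.linalg.solve(A, rhs)
print("peak transmembrane potential (arbitrary units):", V.max())
```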
Suarez-Kurtz, Guilherme; Ribeiro, Frederico Mota; Vicente, Flávio L.; Struchiner, Claudio J.
2001-01-01
Amoxicillin plasma concentrations (n = 1,152) obtained from 48 healthy subjects in two bioequivalence studies were used to develop limited-sampling strategy (LSS) models for estimating the area under the concentration-time curve (AUC), the maximum concentration of drug in plasma (Cmax), and the time interval of concentration above MIC susceptibility breakpoints in plasma (T>MIC). Each subject received 500-mg amoxicillin, as reference and test capsules or suspensions, and plasma concentrations were measured by a validated microbiological assay. Linear regression analysis and a “jack-knife” procedure revealed that three-point LSS models accurately estimated (R2, 0.92; precision, <5.8%) the AUC from 0 h to infinity (AUC0-∞) of amoxicillin for the four formulations tested. Validation tests indicated that a three-point LSS model (1, 2, and 5 h) developed for the reference capsule formulation predicts the following accurately (R2, 0.94 to 0.99): (i) the individual AUC0-∞ for the test capsule formulation in the same subjects, (ii) the individual AUC0-∞ for both reference and test suspensions in 24 other subjects, and (iii) the average AUC0-∞ following single oral doses (250 to 1,000 mg) of various amoxicillin formulations in 11 previously published studies. A linear regression equation was derived, using the same sampling time points of the LSS model for the AUC0-∞, but using different coefficients and intercept, for estimating Cmax. Bioequivalence assessments based on LSS-derived AUC0-∞'s and Cmax's provided results similar to those obtained using the original values for these parameters. Finally, two-point LSS models (R2 = 0.86 to 0.95) were developed for T>MICs of 0.25 or 2.0 μg/ml, which are representative of microorganisms susceptible and resistant to amoxicillin. PMID:11600352
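A limited-sampling strategy of this kind is, at its core, a multiple linear regression of the full-profile AUC on a few concentration time points. The sketch below uses synthetic one-compartment profiles as a stand-in for the study data; only the 1, 2, and 5 h sampling times follow the abstract, everything else is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def profile(t, ka, ke, scale):
    """One-compartment oral absorption model (illustrative surrogate for amoxicillin)."""
    return scale * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

t_dense = np.linspace(0.0, 12.0, 500)     # h, for the "true" AUC by the trapezoid rule
t_lss = np.array([1.0, 2.0, 5.0])         # h, the three LSS sampling times

# Simulate 48 subjects with random pharmacokinetic parameters.
n = 48
ka = rng.normal(1.5, 0.3, n)
ke = rng.normal(0.6, 0.1, n)
scale = rng.normal(8.0, 1.5, n)

conc_lss = np.array([profile(t_lss, a, e, s) for a, e, s in zip(ka, ke, scale)])
auc_true = np.array([np.trapz(profile(t_dense, a, e, s), t_dense)
                     for a, e, s in zip(ka, ke, scale)])

# Fit AUC ~ b0 + b1*C(1h) + b2*C(2h) + b3*C(5h) by ordinary least squares.
X = np.column_stack([np.ones(n), conc_lss])
coef, *_ = np.linalg.lstsq(X, auc_true, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((auc_true - pred) ** 2) / np.sum((auc_true - auc_true.mean()) ** 2)
print("LSS coefficients:", np.round(coef, 3), " R^2 =", round(r2, 3))
```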
NASA Technical Reports Server (NTRS)
Groth, Clinton P. T.; Roe, Philip L.
1998-01-01
Six months of funding was received for the proposed three year research program (funding for the period from March 1, 1997 to August 31, 1997). Although the official starting date for the project was March 1, 1997, no funding for the project was received until July 1997. In the funded research period, considerable progress was made on Phase I of the proposed research program. The initial research efforts concentrated on applying the 10-, 20-, and 35-moment Gaussian-based closures to a series of standard two-dimensional non-reacting single species test flow problems, such as the flat plate, Couette, channel, and rearward-facing step flows, and to some other two-dimensional flows having geometries similar to those encountered in chemical-vapor deposition (CVD) reactors. Eigensystem analyses for these systems for the case of two spatial dimensions were carried out and efficient approximate Riemann solvers have been formulated using these eigenstructures. Formulations to include rotational non-equilibrium effects into the moment closure models for the treatment of polyatomic gases were explored, as the original formulations of the closure models were developed strictly for gases composed of monatomic molecules. The development of a software library and computer code for solving relaxing hyperbolic systems in two spatial dimensions of the type arising from the closure models was also initiated. The software makes use of high-resolution upwind finite-volume schemes, multi-stage point implicit time stepping, and automatic adaptive mesh refinement (AMR) to solve the governing conservation equations for the moment closures. The initial phase of the code development was completed and a numerical investigation of the solutions of the 10-moment closure model for the simple two-dimensional test cases mentioned above was initiated. Predictions of the 10-moment model were compared to available theoretical solutions and the results of direct-simulation Monte Carlo (DSMC) calculations. The first results of this study were presented at a meeting last year.
Observable Emission Features of Black Hole GRMHD Jets on Event Horizon Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pu, Hung-Yi; Wu, Kinwah; Younsi, Ziri
The general-relativistic magnetohydrodynamical (GRMHD) formulation for black hole-powered jets naturally gives rise to a stagnation surface, where inflows and outflows along magnetic field lines that thread the black hole event horizon originate. We derive a conservative formulation for the transport of energetic electrons, which are initially injected at the stagnation surface and subsequently transported along flow streamlines. With this formulation the energy spectra evolution of the electrons along the flow in the presence of radiative and adiabatic cooling is determined. For flows regulated by synchrotron radiative losses and adiabatic cooling, the effective radio emission region is found to be finite, and geometrically it is more extended along the jet central axis. Moreover, the emission from regions adjacent to the stagnation surface is expected to be the most luminous as this is where the freshly injected energetic electrons are concentrated. An observable stagnation surface is thus a strong prediction of the GRMHD jet model with the prescribed non-thermal electron injection. Future millimeter/submillimeter (mm/sub-mm) very-long-baseline interferometric observations of supermassive black hole candidates, such as the one at the center of M87, can verify this GRMHD jet model and its associated non-thermal electron injection mechanism.
DOT National Transportation Integrated Search
2017-01-01
Rate sensitive foams are often used in aircraft seat designs; recently, the formulation of one of the more common types of foam, Confor, was changed. The previous Standard version came in four stiffness levels, which all met aircraft flammability ...
NASA Technical Reports Server (NTRS)
Wetherill, George W.
1989-01-01
Earlier and current concepts relevant to the origin of the asteroid belt are discussed and are considered in the framework of the solar system origin. Numerical and analytical solutions of the dynamical theory of planetesimal accumulation are characterized by bifurcations into runaway and nonrunaway solutions, and it is emphasized that the differences in time scales resulting from runaway and nonrunaway growth can be more important than conventional time scale differences determined by heliocentric distances. It is concluded that, in principle, it is possible to combine new calculations with previous work to formulate a theory of the asteroidal accumulation consistent with the meteoritic record and with work on the formation of terrestrial planets. Problems remaining to be addressed before a mature theory can be formulated are discussed.
NASA Astrophysics Data System (ADS)
Belokurov, S. V.; Rodionova, N. S.; Belokurova, E. V.; Alexeeva, T. V.
2018-05-01
The work presents data on the effect of non-traditional powdered semi-finished products of plant origin (chokeberry, walnut partitions and sea buckthorn berries) on the lifting power of baker's yeast. Various amounts of powdered semi-finished products of plant origin are introduced into the dough directly at the stage of introducing the main components of the formulation, replacing part of the wheat flour. Studies have shown that the addition of small amounts of unconventional powdered plant-based semi-finished products (1-5%) makes it possible to correct the lifting power of baking yeast, which, in consequence, affects the quality indicators of finished products. The paper presents a mathematical model of the change in the lifting power of baker's yeast, depending on the nature and amount of the powdered semi-finished product introduced.
On improving the efficiency of tensor voting.
Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Pizarro, Luis; Burgeth, Bernhard; Weickert, Joachim
2011-11-01
This paper proposes two alternative formulations to reduce the high computational complexity of tensor voting, a robust perceptual grouping technique used to extract salient information from noisy data. The first scheme consists of numerical approximations of the votes, which have been derived from an in-depth analysis of the plate and ball voting processes. The second scheme simplifies the formulation while keeping the same perceptual meaning of the original tensor voting: The stick tensor voting and the stick component of the plate tensor voting must reinforce surfaceness, the plate components of both the plate and ball tensor voting must boost curveness, whereas junctionness must be strengthened by the ball component of the ball tensor voting. Two new parameters have been proposed for the second formulation in order to control the potentially conflicting influence of the stick component of the plate vote and the ball component of the ball vote. Results show that the proposed formulations can be used in applications where efficiency is an issue since they have a complexity of order O(1). Moreover, the second proposed formulation has been shown to be more appropriate than the original tensor voting for estimating saliencies by appropriately setting the two new parameters.
Improving the S-Shape Solar Radiation Estimation Method for Supporting Crop Models
Fodor, Nándor
2012-01-01
In line with the critical comments formulated in relation to the S-shape global solar radiation estimation method, the original formula was improved via a 5-step procedure. The improved method was compared to four-reference methods on a large North-American database. According to the investigated error indicators, the final 7-parameter S-shape method has the same or even better estimation efficiency than the original formula. The improved formula is able to provide radiation estimates with a particularly low error pattern index (PIdoy) which is especially important concerning the usability of the estimated radiation values in crop models. Using site-specific calibration, the radiation estimates of the improved S-shape method caused an average of 2.72 ± 1.02 (α = 0.05) relative error in the calculated biomass. Using only readily available site specific metadata the radiation estimates caused less than 5% relative error in the crop model calculations when they were used for locations in the middle, plain territories of the USA. PMID:22645451
A weakly-compressible Cartesian grid approach for hydrodynamic flows
NASA Astrophysics Data System (ADS)
Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.
2017-11-01
The present article proposes an original strategy for solving hydrodynamic flows. The motivations for this strategy are first developed: it aims at modeling viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows an easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR compatible treatment. The method proposed uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.
Uebbing, Lukas; Klumpp, Lukas; Webster, Gregory K; Löbenberg, Raimar
2017-01-01
Drug product performance testing is an important part of quality-by-design approaches, but this process often lacks the underlying mechanistic understanding of the complex interactions between the disintegration and dissolution processes involved. Whereas a recent draft guideline by the US Food and Drug Administration (FDA) has allowed the replacement of dissolution testing with disintegration testing, the mentioned criteria are not globally accepted. This study provides scientific justification for using disintegration testing rather than dissolution testing as a quality control method for certain immediate release (IR) formulations. A mechanistic approach, which is beyond the current FDA criteria, is presented. Dissolution testing via United States Pharmacopeial Convention Apparatus II at various paddle speeds was performed for immediate and extended release formulations of metronidazole. Dissolution profile fitting via DDSolver and dissolution profile predictions via DDDPlus™ were performed. The results showed that Fickian diffusion and drug particle properties (DPP) were responsible for the dissolution of the IR tablets, and that formulation factors (eg, coning) impacted dissolution only at lower rotation speeds. Dissolution was completely formulation controlled if extended release tablets were tested and DPP were not important. To demonstrate that disintegration is the most important dosage form attribute when dissolution is DPP controlled, disintegration, intrinsic dissolution and dissolution testing were performed in conventional and disintegration impacting media (DIM). Tablet disintegration was affected by DIM and model fitting to the Korsmeyer-Peppas equation showed a growing effect of the formulation in DIM. DDDPlus was able to predict tablet dissolution and the intrinsic dissolution profiles in conventional media and DIM. The study showed that disintegration has to occur before DPP-dependent dissolution can happen. The study suggests that disintegration can be used as a performance test of rapidly disintegrating tablets beyond the FDA criteria. The scientific criterion and justification is that dissolution has to be DPP dependent, originating from active pharmaceutical ingredient characteristics, and formulation factors have to be negligible.
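The Korsmeyer-Peppas fit mentioned above relates the released fraction to time as Mt/Minf = k t^n, with the exponent n indicating whether release is Fickian. A minimal fitting sketch on made-up dissolution data (not the metronidazole measurements) is:

```python
import numpy as np

# Illustrative dissolution data: time (min) and cumulative fraction released.
t = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 45.0])
frac = np.array([0.18, 0.27, 0.34, 0.40, 0.50, 0.62])

# Korsmeyer-Peppas: Mt/Minf = k * t**n  ->  log(frac) = log(k) + n*log(t),
# conventionally fitted on the early portion of the curve (frac <= 0.6).
mask = frac <= 0.6
slope, intercept = np.polyfit(np.log(t[mask]), np.log(frac[mask]), 1)
n_exp, k = slope, np.exp(intercept)
print(f"release exponent n = {n_exp:.2f}, rate constant k = {k:.3f}")
# An exponent near 0.45-0.5 is usually read as Fickian diffusion control.
```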
Linearly Adjustable International Portfolios
NASA Astrophysics Data System (ADS)
Fonseca, R. J.; Kuhn, D.; Rustem, B.
2010-09-01
We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.
Ethical checklist for dental practice.
Rinchuse, D J; Rinchuse, D J; Deluzio, C
1995-01-01
A checklist for the verification of unethical business practices, originally formulated by Drs. Blanchard and Peale, is adapted to dental practice. A scenario is used as a model to demonstrate the applicability of this instrument to dental practice. The instrument asks three questions with regard to an ethical dilemma: 1) Is it legal? 2) Is it fair? 3) How does it make you feel? The paper concludes that the giving of gifts to general dentists by dental specialists for the referral of patients is unethical.
NASA Astrophysics Data System (ADS)
Erturk, A.; Anton, S. R.; Inman, D. J.
2009-03-01
This paper discusses the basic design factors for modifying an original wing spar to a multifunctional load-bearing energy-harvester wing spar. A distributed-parameter electromechanical formulation is given for modeling of a multilayer piezoelectric power generator beam for different combinations of the electrical outputs of piezoceramic layers. In addition to the coupled vibration response and voltage response expressions for a multimorph, strength formulations are given in order to estimate the maximum load input that can be sustained by the cantilevered structure without failure for a given safety factor. Embedding piezoceramics into an original wing spar for power generation tends to reduce the maximum load that can be sustained without failure and increase the total mass due to the brittle nature and large mass densities of typical piezoelectric ceramics. Two case studies are presented for demonstration. The theoretical case study discusses modification of a rectangular wing spar to a 3-layer generator wing spar with a certain restriction on mass addition for fixed dimensions. Power generation and strength analyses are provided using the electromechanical model. The experimental case study considers a 9-layer generator beam with aluminum, piezoceramic, Kapton and epoxy layers and investigates its power generation and load-bearing performances experimentally and analytically. This structure constitutes the main body of the multifunctional self-charging structure concept proposed by the authors. The second part of this work (experiments and storage applications) employs this multi-layer generator along with the thin-film battery layers in order to charge the battery layers using the electrical outputs of the piezoceramic layers.
Analytical Formulation of Equatorial Standing Wave Phenomena: Application to QBO and ENSO
NASA Astrophysics Data System (ADS)
Pukite, P. R.
2016-12-01
Key equatorial climate phenomena such as QBO and ENSO have never been adequately explained as deterministic processes. This is in spite of recent research showing growing evidence of predictable behavior. This study applies the fundamental Laplace tidal equations with simplifying assumptions along the equator — i.e. no Coriolis force and a small angle approximation. To connect the analytical Sturm-Liouville results to observations, a first-order forcing consistent with a seasonally aliased Draconic or nodal lunar period (27.21 d aliased into 2.36 y) is applied. This has a plausible rationale as it ties a latitudinal forcing cycle via a cross-product to the longitudinal terms in the Laplace formulation. The fitted results match the features of QBO both qualitatively and quantitatively; adding second-order terms due to other seasonally aliased lunar periods provides finer detail while remaining consistent with the physical model. Further, running symbolic regression machine learning experiments on the data provided validation of the approach, as it discovered the same analytical form and fitted values as the first principles Laplace model. These results conflict with Lindzen's QBO model, in that his original formulation fell short of making the lunar connection, even though Lindzen himself asserted "it is unlikely that lunar periods could be produced by anything other than the lunar tidal potential". By applying a similar analytical approach to ENSO, we find that the tidal equations need to be replaced with a Mathieu-equation formulation consistent with describing a sloshing process in the thermocline depth. Adapting the hydrodynamic math of sloshing, we find a biennial modulation coupled with angular momentum forcing variations matching the Chandler wobble gives an impressive match over the measured ENSO range of 1880 until the present. Lunar tidal periods and an additional triaxial nutation of 14-year period provide additional fidelity. The caveat is a phase inversion of the biennial mode lasting from 1980 to 1996. The parsimony of these analytical models arises from applying only known cyclic forcing terms to fundamental wave equation formulations. This raises the possibility that both QBO and ENSO can be predicted years in advance, apart from a metastable biennial phase inversion in ENSO.
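The 2.36-year figure quoted for the seasonally aliased Draconic month can be checked with a few lines of arithmetic: aliasing the 27.2122-day nodal period against the annual cycle leaves a slow residual frequency.

```python
# Seasonal aliasing of the Draconic (nodal) lunar month against the annual cycle.
draconic_days = 27.2122
year_days = 365.25

f_lunar = 1.0 / draconic_days          # cycles per day
f_annual = 1.0 / year_days

# Remove the nearest integer number of annual cycles from the lunar frequency.
n = round(f_lunar / f_annual)          # 13 annual harmonics fit inside the lunar frequency
f_alias = abs(f_lunar - n * f_annual)
print(f"aliased period ~ {1.0 / f_alias / year_days:.2f} years")   # close to the 2.36 y cited
```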
Toward Improved Fidelity of Thermal Explosion Simulations
NASA Astrophysics Data System (ADS)
Nichols, A. L.; Becker, R.; Howard, W. M.; Wemhoff, A.
2009-12-01
We will present results of an effort to improve the thermal/chemical/mechanical modeling of HMX-based explosives like LX04 and LX10 for thermal cook-off. The original HMX model and analysis scheme were developed by Yoh et al. for use in the ALE3D modeling framework. The current results were built to remedy the deficiencies of that original model. We concentrated our efforts in four areas. The first area was addition of porosity to the chemical material model framework in ALE3D that is used to model the HMX explosive formulation. This is needed to handle the roughly 2% porosity in solid explosives. The second area was the improvement of the HMX reaction network, which included a reactive phase change model based on work by Henson et al. The third area required adding early decomposition gas species to the CHEETAH material database to develop more accurate equations of state for gaseous intermediates and products. Finally, it was necessary to improve the implicit mechanics module in ALE3D to more naturally handle the long time scales associated with thermal cook-off. The application of the resulting framework to the analysis of the Scaled Thermal Explosion (STEX) experiments will be discussed.
Amelian, Aleksandra; Szekalska, Marta; Ciosek, Patrycja; Basa, Anna; Winnicka, Katarzyna
2017-03-01
Taste of a pharmaceutical formulation is an important parameter for the effectiveness of pharmacotherapy. Cetirizine dihydrochloride (CET) is a second-generation antihistamine that is commonly administered in allergy treatment. CET is characterized by extremely bitter taste and it is a great challenge to successfully mask its taste; therefore the goal of this work was to formulate and characterize the microparticles obtained by the spray drying method with CET and poly(butyl methacrylate-co-(2-dimethylaminoethyl) methacrylate-co-methyl methacrylate) 1:2:1 copolymer (Eudragit E PO) as a barrier coating. Assessment of taste masking by the electronic tongue has revealed that designed formulations created an effective taste masking barrier. Taste masking effect was also confirmed by the in vivo model and the in vitro release profile of CET. Obtained data have shown that microparticles with a drug/polymer ratio (0.5:1) are promising CET carriers with efficient taste masking potential and might be further used in designing orodispersible dosage forms with CET.
The "Biogenetic Law" in zoology: from Ernst Haeckel's formulation to current approaches.
Olsson, Lennart; Levit, Georgy S; Hoßfeld, Uwe
2017-06-01
150 years ago, in 1866, Ernst Haeckel published a book in two volumes called "Generelle Morphologie der Organismen" (General Morphology of Organisms) in which he formulated his biogenetic law, famously stating that ontogeny recapitulates phylogeny. Here we describe Haeckel's original idea and follow its development in the thinking of two scientists inspired by Haeckel, Alexei Sewertzoff and Adolf Naef. Sewertzoff and Naef initially approached the problem of reformulating Haeckel's law in similar ways, and formulated comparable hypotheses at a purely descriptive level. But their theoretical viewpoints were crucially different. While Sewertzoff laid the foundations for a Darwinian evolutionary morphology and is regarded as a forerunner of the Modern Synthesis, Naef was one of the most important figures in 'idealistic morphology', usually seen as a type of anti-Darwinism. Both Naef and Sewertzoff aimed to revise Haeckel's biogenetic law and came to comparable conclusions at the empirical level. We end our review with a brief look at the present situation in which molecular data are used to test the "hour-glass model", which can be seen as a modern version of the biogenetic law.
NASA Technical Reports Server (NTRS)
Florschuetz, L. W.; Su, C. C.
1985-01-01
Spanwise average heat fluxes, resolved in the streamwise direction to one streamwise hole spacing, were measured for two-dimensional arrays of circular air jets impinging on a heat transfer surface parallel to the jet orifice plate. The jet flow, after impingement, was constrained to exit in a single direction along the channel formed by the jet orifice plate and heat transfer surface. The crossflow originated from the jets following impingement and an initial crossflow was present that approached the array through an upstream extension of the channel. The regional average heat fluxes are considered as a function of parameters associated with corresponding individual spanwise rows within the array. A linear superposition model was employed to formulate appropriate governing parameters for the individual row domain. The effects of flow history upstream of an individual row domain are also considered. The results are formulated in terms of individual spanwise row parameters. A corresponding set of streamwise resolved heat transfer characteristics formulated in terms of flow and geometric parameters characterizing the overall arrays is described.
Watari, Hidetoshi; Shigyo, Michiko; Tanabe, Norio; Tohda, Michihisa; Cho, Ki-Ho; Kyung, Park Su; Jung, Woo Sang; Shimada, Yutaka; Shibahara, Naotoshi; Kuboyama, Tomoharu; Tohda, Chihiro
2015-03-01
Traditional medicine is widely used in East Asia, and studies that demonstrate its usefulness have recently become more common. However, formulation-based studies are not globally understood because these studies are country-specific. There are many types of formulations that have been introduced to Japan and Korea from China. Establishing whether a same-origin formulation has equivalent effects in other countries is important for the development of studies that span multiple countries. The present study compared the effects of same-origin traditional medicine used in Japan and Korea in an in vivo experiment. We prepared drugs that had the same origin and the same components. The drugs are called kamikihito (KKT) in Japan and kami-guibi-tang (KGT) in Korea. KKT (500 mg extract/kg/day) and KGT (500 mg extract/kg/day) were administered to ddY mice, and object recognition and location memory tests were performed. KKT and KGT administration yielded equivalent normal memory enhancement effects. 3D-HPLC showed similar, but not identical, patterns of the detected compounds between KKT and KGT. This comparative research approach enables future global clinical studies of traditional medicine to be conducted through the use of the formulations prescribed in each country. Copyright © 2014 John Wiley & Sons, Ltd.
Order of accuracy of QUICK and related convection-diffusion schemes
NASA Technical Reports Server (NTRS)
Leonard, B. P.
1993-01-01
This report attempts to correct some misunderstandings that have appeared in the literature concerning the order of accuracy of the QUICK scheme for steady-state convective modeling. Other related convection-diffusion schemes are also considered. The original one-dimensional QUICK scheme written in terms of nodal-point values of the convected variable (with a 1/8-factor multiplying the 'curvature' term) is indeed a third-order representation of the finite volume formulation of the convection operator average across the control volume, written naturally in flux-difference form. An alternative single-point upwind difference scheme (SPUDS) using node values (with a 1/6-factor) is a third-order representation of the finite difference single-point formulation; this can be written in a pseudo-flux difference form. These are both third-order convection schemes; however, the QUICK finite volume convection operator is 33 percent more accurate than the single-point implementation of SPUDS. Another finite volume scheme, writing convective fluxes in terms of cell-average values, requires a 1/6-factor for third-order accuracy. For completeness, one can also write a single-point formulation of the convective derivative in terms of cell averages, and then express this in pseudo-flux difference form; for third-order accuracy, this requires a curvature factor of 5/24. Diffusion operators are also considered in both single-point and finite volume formulations. Finite volume formulations are found to be significantly more accurate. For example, classical second-order central differencing for the second derivative is exactly twice as accurate in a finite volume formulation as it is in single-point.
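For reference, the two node-based interpolations discussed in the report differ only in the factor multiplying the curvature term: 1/8 for the finite-volume QUICK face value and 1/6 for the single-point SPUDS form. The sketch below simply evaluates both stencils against the exact face value of a smooth profile; the test function is arbitrary.

```python
import numpy as np

def face_value(phi_u, phi_c, phi_d, curv_factor):
    """Upwind-biased face estimate: centered average minus a weighted curvature term."""
    return 0.5 * (phi_c + phi_d) - curv_factor * (phi_u - 2.0 * phi_c + phi_d)

# Smooth test profile on a uniform grid; the flow is taken in the +x direction.
x = np.linspace(0.0, 1.0, 41)
dx = x[1] - x[0]
phi = np.sin(2.0 * np.pi * x)

# Face between nodes i and i+1, with node i-1 as the upstream point.
i = np.arange(1, len(x) - 1)
exact = np.sin(2.0 * np.pi * (x[i] + 0.5 * dx))
quick = face_value(phi[i - 1], phi[i], phi[i + 1], 1.0 / 8.0)   # QUICK curvature factor
spuds = face_value(phi[i - 1], phi[i], phi[i + 1], 1.0 / 6.0)   # SPUDS curvature factor

print("max |error| with 1/8 factor:", np.abs(quick - exact).max())
print("max |error| with 1/6 factor:", np.abs(spuds - exact).max())
```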
Anselmi, Pasquale; Stefanutti, Luca; de Chiusole, Debora; Robusto, Egidio
2017-11-01
The gain-loss model (GaLoM) is a formal model for assessing knowledge and learning. In its original formulation, the GaLoM assumes independence among the skills. Such an assumption is not reasonable in several domains, in which some preliminary knowledge is the foundation for other knowledge. This paper presents an extension of the GaLoM to the case in which the skills are not independent, and the dependence relation among them is described by a well-graded competence space. The probability of mastering skill s at the pretest is conditional on the presence of all skills on which s depends. The probabilities of gaining or losing skill s when moving from pretest to posttest are conditional on the mastery of s at the pretest, and on the presence at the posttest of all skills on which s depends. Two formulations of the model are presented, in which the learning path is allowed to change from pretest to posttest or not. A simulation study shows that models based on the true competence space obtain a better fit than models based on false competence spaces, and are also characterized by a higher assessment accuracy. An empirical application shows that models based on pedagogically sound assumptions about the dependencies among the skills obtain a better fit than models assuming independence among the skills. © 2017 The British Psychological Society.
A constrained robust least squares approach for contaminant release history identification
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.
2006-04-01
Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in the previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solvers. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows for additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, it is found that CRLS gave much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
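As a rough stand-in for CRLS (which additionally accounts for structured uncertainty in the response matrix), a regularized nonnegative least-squares solve already shows how the nonnegativity constraint and a robustness-like penalty stabilize an ill-conditioned source-recovery problem. Everything below is a simplified sketch, not the authors' formulation: the transport kernel, noise level, and regularization weight are placeholders.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Ill-conditioned response matrix mapping a release history s to observed concentrations d.
m, n = 40, 30
times = np.linspace(0.1, 4.0, n)
obs_t = np.linspace(0.5, 6.0, m)
# Smooth transport kernel (placeholder for a flow/transport solver response).
A = np.exp(-((obs_t[:, None] - times[None, :]) ** 2) / 0.5)

s_true = np.zeros(n)
s_true[8:14] = 1.0                                 # a pulse release
d = A @ s_true + 0.01 * rng.standard_normal(m)     # noisy observations

# Plain NNLS vs. NNLS on a regularized (augmented) system [A; lam*I] x ~ [d; 0].
lam = 0.1
A_aug = np.vstack([A, lam * np.eye(n)])
d_aug = np.concatenate([d, np.zeros(n)])

s_plain, _ = nnls(A, d)
s_reg, _ = nnls(A_aug, d_aug)
print("recovery error, plain NNLS      :", np.linalg.norm(s_plain - s_true))
print("recovery error, regularized NNLS:", np.linalg.norm(s_reg - s_true))
```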
Keynesian multiplier versus velocity of money
NASA Astrophysics Data System (ADS)
Wang, Yougui; Xu, Yan; Liu, Li
2010-08-01
In this paper we present the relation between the Keynesian multiplier and the velocity of money circulation in a money exchange model. For this purpose we modify the original exchange model by constructing the interrelation between income and expenditure. The random exchange yields an agent's income, which, along with the amount of money he possesses, determines his expenditure. In this interactive process, both the circulation of money and the Keynesian multiplier effect can be formulated. The equilibrium values of the Keynesian multiplier are demonstrated to be closely related to the velocity of money. Thus the impacts of macroeconomic policies on aggregate income can be understood by concentrating solely on the variations of money circulation.
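A toy exchange economy makes the connection concrete: agents spend a fraction of their holdings each round, the velocity of money is total spending per round divided by the money stock, and the textbook multiplier follows from the marginal propensity to spend. The simulation below is a generic random-exchange sketch, not the authors' exact specification.

```python
import numpy as np

rng = np.random.default_rng(42)

n_agents = 500
money = np.full(n_agents, 100.0)        # initial money holdings
mpc = 0.8                               # marginal propensity to spend out of income
n_rounds = 200
spending_per_round = []

for _ in range(n_rounds):
    income = np.zeros(n_agents)
    # Each agent spends a fraction of its current holdings at a randomly chosen counterparty.
    spend = mpc * money * rng.uniform(0.0, 0.2, n_agents)
    receivers = rng.integers(0, n_agents, n_agents)
    np.add.at(income, receivers, spend)
    money = money - spend + income       # total money stock is conserved
    spending_per_round.append(spend.sum())

velocity = np.mean(spending_per_round[-50:]) / money.sum()    # turnover per round
multiplier = 1.0 / (1.0 - mpc)                                # textbook Keynesian multiplier
print(f"velocity of money per round ~ {velocity:.3f}, Keynesian multiplier = {multiplier:.1f}")
```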
Physical lumping methods for developing linear reduced models for high speed propulsion systems
NASA Technical Reports Server (NTRS)
Immel, S. M.; Hartley, Tom T.; Deabreu-Garcia, J. Alex
1991-01-01
In gasdynamic systems, information travels in one direction for supersonic flow and in both directions for subsonic flow. A shock occurs at the transition from supersonic to subsonic flow. Thus, to simulate these systems, any simulation method implemented for the quasi-one-dimensional Euler equations must have the ability to capture the shock. In this paper, a technique combining both backward and central differencing is presented. The equations are subsequently linearized about an operating point and formulated into a linear state space model. After proper implementation of the boundary conditions, the model order is reduced from 123 to less than 10 using the Schur method of balancing. Simulations comparing frequency and step response of the reduced order model and the original system models are presented.
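The balancing-based reduction referred to above can be illustrated with a square-root balanced-truncation sketch on a small stable test system; the random system below stands in for the 123-state linearized flow model, and this is not the authors' code.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)

# Random stable test system (placeholder for the linearized quasi-1D Euler model).
n, r = 20, 4
A = rng.standard_normal((n, n))
A = A - (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)   # shift to stability
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Controllability and observability Gramians: A P + P A^T + B B^T = 0, A^T Q + Q A + C^T C = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balancing: SVD of the product of Cholesky factors.
Lp = np.linalg.cholesky(P)
Lq = np.linalg.cholesky(Q)
U, s, Vt = np.linalg.svd(Lq.T @ Lp)
T = Lp @ Vt.T @ np.diag(s ** -0.5)          # balancing transformation
Tinv = np.diag(s ** -0.5) @ U.T @ Lq.T

# Truncate to the r states with the largest Hankel singular values.
Ar = (Tinv @ A @ T)[:r, :r]
Br = (Tinv @ B)[:r, :]
Cr = (C @ T)[:, :r]
print("Hankel singular values:", np.round(s[:6], 4))
print("reduced system order:", Ar.shape[0])
```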
Yu, Wenjun; Ma, Mingyue; Chen, Xuemei; Min, Jiayu; Li, Lingru; Zheng, Yanfei; Li, Yingshuai; Wang, Ji; Wang, Qi
2017-01-01
Traditional Chinese medicine (TCM), Japanese-Chinese medicine, and Korean Sasang constitutional medicine have common origins. However, the constitutional medicines of China, Japan, and Korea differ because of the influence of geographical culture, social environment, national practices, and other factors. This paper aimed to compare the constitutional medicines of China, Japan, and Korea in terms of theoretical origin, constitutional classification, constitution and pathogenesis, clinical applications, and the basic studies that have been conducted. The constitutional theories of the three countries are all derived from the Canon of Internal Medicine or Treatise on Febrile and Miscellaneous Diseases of Ancient China. However, the three countries have different constitutional classifications and criteria. Medical sciences in the three countries focus on the clinical applications of constitutional theory. They all agree that different constitutions are governed by different pathogenic laws that guide the treatment of diseases; thus, patients with different constitutions are treated differently. The three countries also differ in terms of drug formulations and medication. Japanese medicine is prescribed based only on constitution. Korean medicine is based on treatment, in which drugs cannot be mixed. TCM synthesizes constitution differentiation, disease differentiation, and syndrome differentiation into a single treatment model, guided by the principle of treating disease according to the three categories of etiologic factors, with the constitution reflecting the individualized, precision character of treatment. In conclusion, the constitutional medicines of China, Japan, and Korea have the same theoretical origin, but differ in constitutional classification, clinical application of constitutional theory to the treatment of diseases, drug formulations and medication.
Guédon, Yann; d'Aubenton-Carafa, Yves; Thermes, Claude
2006-03-01
The most commonly used models for analysing local dependencies in DNA sequences are (high-order) Markov chains. Incorporating knowledge relative to the possible grouping of the nucleotides makes it possible to define dedicated sub-classes of Markov chains. The problem of formulating lumpability hypotheses for a Markov chain is therefore addressed. In the classical approach to lumpability, this problem can be formulated as the determination of an appropriate state space (smaller than the original state space) such that the lumped chain defined on this state space retains the Markov property. We propose a different perspective on lumpability where the state space is fixed and the partitioning of this state space is represented by a one-to-many probabilistic function within a two-level stochastic process. Three nested classes of lumped processes can be defined in this way as sub-classes of first-order Markov chains. These lumped processes enable parsimonious reparameterizations of Markov chains that help to reveal relevant partitions of the state space. Characterizations of the lumped processes on the original transition probability matrix are derived. Different model selection methods relying either on hypothesis testing or on penalized log-likelihood criteria are presented as well as extensions to lumped processes constructed from high-order Markov chains. The relevance of the proposed approach to lumpability is illustrated by the analysis of DNA sequences. In particular, the use of lumped processes makes it possible to highlight differences between intronic sequences and gene untranslated region sequences.
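Strong lumpability in the classical sense can be checked directly on a transition matrix: within each block of the partition, every state must carry the same total probability into every block. A small sketch of that check, using the purine/pyrimidine grouping of the DNA alphabet purely as an illustration:

```python
import numpy as np

def lump(P, partition, tol=1e-10):
    """Return the lumped transition matrix if the partition is strongly lumpable, else None."""
    blocks = [np.array(b) for b in partition]
    k = len(blocks)
    # Total transition probability from every original state into each block.
    into_block = np.column_stack([P[:, b].sum(axis=1) for b in blocks])
    P_lumped = np.zeros((k, k))
    for i, bi in enumerate(blocks):
        rows = into_block[bi]                 # these rows must be identical for lumpability
        if not np.allclose(rows, rows[0], atol=tol):
            return None
        P_lumped[i] = rows[0]
    return P_lumped

# Transition matrix over {A, C, G, T}; the partition groups purines {A, G} and pyrimidines {C, T}.
P = np.array([[0.4, 0.1, 0.3, 0.2],
              [0.2, 0.3, 0.1, 0.4],
              [0.4, 0.2, 0.3, 0.1],
              [0.2, 0.4, 0.1, 0.3]])
print(lump(P, [[0, 2], [1, 3]]))
```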
NASA Technical Reports Server (NTRS)
Pindera, Marek-Jerzy; Bednarcyk, Brett A.
1997-01-01
An efficient implementation of the generalized method of cells micromechanics model is presented that allows analysis of periodic unidirectional composites characterized by repeating unit cells containing thousands of subcells. The original formulation, given in terms of Hill's strain concentration matrices that relate average subcell strains to the macroscopic strains, is reformulated in terms of the interfacial subcell tractions as the basic unknowns. This is accomplished by expressing the displacement continuity equations in terms of the stresses and then imposing the traction continuity conditions directly. The result is a mixed formulation wherein the unknown interfacial subcell traction components are related to the macroscopic strain components. Because the stress field throughout the repeating unit cell is piece-wise uniform, the imposition of traction continuity conditions directly in the displacement continuity equations, expressed in terms of stresses, substantially reduces the number of unknown subcell traction (and stress) components, and thus the size of the system of equations that must be solved. Further reduction in the size of the system of continuity equations is obtained by separating the normal and shear traction equations in those instances where the individual subcells are, at most, orthotropic. The reformulated version facilitates detailed analysis of the impact of the fiber cross-section geometry and arrangement on the response of multi-phased unidirectional composites with and without evolving damage. Comparison of execution times obtained with the original and reformulated versions of the generalized method of cells demonstrates the new version's efficiency.
NASA Technical Reports Server (NTRS)
Wu, Jie; Yu, Sheng-Tao; Jiang, Bo-nan
1996-01-01
In this paper a numerical procedure for simulating two-fluid flows is presented. This procedure is based on the Volume of Fluid (VOF) method proposed by Hirt and Nichols and the continuum surface force (CSF) model developed by Brackbill, et al. In the VOF method fluids of different properties are identified through the use of a continuous field variable (color function). The color function assigns a unique constant (color) to each fluid. The interfaces between different fluids are distinct due to sharp gradients of the color function. The evolution of the interfaces is captured by solving the convective equation of the color function. The CSF model is used as a means to treat surface tension effect at the interfaces. Here a modified version of the CSF model, proposed by Jacqmin, is used to calculate the tension force. In the modified version, the force term is obtained by calculating the divergence of a stress tensor defined by the gradient of the color function. In its analytical form, this stress formulation is equivalent to the original CSF model. Numerically, however, the use of the stress formulation has some advantages over the original CSF model, as it bypasses the difficulty in approximating the curvatures of the interfaces. The least-squares finite element method (LSFEM) is used to discretize the governing equation systems. The LSFEM has proven to be effective in solving incompressible Navier-Stokes equations and pure convection equations, making it an ideal candidate for the present applications. The LSFEM handles all the equations in a unified manner without any additional special treatment such as upwinding or artificial dissipation. Various benchmark tests have been carried out for both two-dimensional planar and axisymmetric flows, including a dam breaking, oscillating and stationary bubbles and a conical liquid sheet in a pressure swirl atomizer.
Generalized formulation of free energy and application to photosynthesis
NASA Astrophysics Data System (ADS)
Zhang, Hwe Ik; Choi, M. Y.
2018-03-01
The origin of free energy on the earth is solar radiation. However, the amount of free energy it contains has seldom been investigated, because the free energy concept was believed to be inappropriate for a system of photons. Instead, the origin of free energy has been sought in the process of photosynthesis, imposing a limit of conversion given by the Carnot efficiency. Here we present a general formulation, capable of not only assessing accurately the available amount of free energy in the photon gas but also explaining the primary photosynthetic process more succinctly. In this formulation, the problem of "photosynthetic conversion of the internal energy of photons into the free energy of chlorophyll" is replaced by simple "free energy transduction" between the photons and chlorophyll. An analytic expression for the photosynthetic efficiency is derived and shown to deviate from the Carnot efficiency. Some predictions verifiable possibly by observation are also suggested.
A level set method for multiple sclerosis lesion segmentation.
Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming
2018-06-01
In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment MS lesions, the normal tissue region (including GM and WM), the CSF, and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability to precisely locate object boundaries of the original level set method, which simultaneously performs segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
Determinants of generic drug substitution in Switzerland.
Decollogny, Anne; Eggli, Yves; Halfon, Patricia; Lufkin, Thomas M
2011-01-26
Since generic drugs have the same therapeutic effect as the original formulation but at generally lower costs, their use should be more heavily promoted. However, a considerable number of barriers to their wider use have been observed in many countries. The present study examines the influence of patients, physicians and certain characteristics of the generics' market on generic substitution in Switzerland. We used reimbursement claims' data submitted to a large health insurer by insured individuals living in one of Switzerland's three linguistic regions during 2003. All dispensed drugs studied here were substitutable. The outcome (use of a generic or not) was modelled by logistic regression, adjusted for patients' characteristics (gender, age, treatment complexity, substitution groups) and with several variables describing reimbursement incentives (deductible, co-payments) and the generics' market (prices, packaging, co-branded original, number of available generics, etc.). The overall generics' substitution rate for 173,212 dispensed prescriptions was 31%, though this varied considerably across cantons. Poor health status (older patients, complex treatments) was associated with lower generic use. Higher rates were associated with higher out-of-pocket costs, greater price differences between the original and the generic, and with the number of generics on the market, while reformulation and repackaging were associated with lower rates. The substitution rate was 13% lower among hospital physicians. The adoption of the prescribing practices of the canton with the highest substitution rate would increase substitution in other cantons to as much as 26%. Patient health status explained a part of the reluctance to substitute an original formulation by a generic. Economic incentives were efficient, but with a moderate global effect. The huge interregional differences indicated that prescribing behaviours and beliefs are probably the main determinant of generic substitution.
Schilling, Kristian; Krause, Frank
2015-01-01
Monoclonal antibodies represent the most important group of protein-based biopharmaceuticals. During formulation, manufacturing, or storage, antibodies may suffer post-translational modifications altering their physical and chemical properties. Such induced conformational changes may lead to the formation of aggregates, which can not only reduce their efficiency but also be immunogenic. Therefore, it is essential to monitor the amount of size variants to ensure consistency and quality of pharmaceutical antibodies. In many cases, antibodies are formulated at very high concentrations > 50 g/L, mostly along with high amounts of sugar-based excipients. As a consequence, all routine aggregation analysis methods, such as size-exclusion chromatography, cannot monitor the size distribution at those original conditions, but only after dilution and usually under completely different solvent conditions. In contrast, sedimentation velocity (SV) allows samples to be analyzed directly in the product formulation, with both limited sample-matrix interactions and minimal dilution. One prerequisite for the analysis of highly concentrated samples is the detection of steep concentration gradients with sufficient resolution: Commercially available ultracentrifuges are not able to resolve such steep interference profiles. With the development of our Advanced Interference Detection Array (AIDA), it has become possible to register interferograms of solutions as highly concentrated as 150 g/L. The other major difficulty encountered at high protein concentrations is the pronounced non-ideal sedimentation behavior resulting from repulsive intermolecular interactions, for which a comprehensive theoretical modelling has not yet been achieved. Here, we report the first SV analysis of highly concentrated antibodies up to 147 g/L employing the unique AIDA ultracentrifuge. By developing a consistent experimental design and data fit approach, we were able to provide a reliable estimation of the minimum content of soluble aggregates in the original formulations of two antibodies. Limitations of the procedure are discussed.
Free-boundary PIES Calculations
NASA Astrophysics Data System (ADS)
Monticello, D. A.; Reiman, A. H.; Arndt, S. C.; Merkel, P. K.
1998-11-01
A new formulation of the free-boundary problem for general three-dimensional configurations has been developed for the PIES code (Reiman, A. H., Greenside, H. S., Comput. Phys. Commun. 43, 1986). The new formulation is more flexible and is faster than the original formulation described in Merkel et al. (Merkel, P., Johnson, J. L., Monticello, D. A., et al., Proceedings of the Fourteenth International Conference on Plasma Physics and Controlled Nuclear Fusion Research, paper IAEA-CN-60/D-P-II-10, 1994). These advantages will be described and first results of the application of this new algorithm to W7-X and NCSX (National Compact Stellarator Experiment) configurations will be presented.
Learning in neural networks based on a generalized fluctuation theorem
NASA Astrophysics Data System (ADS)
Hayakawa, Takashi; Aoyagi, Toshio
2015-11-01
Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. Within the general context of models of neural systems bidirectionally interacting with environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universal laws describing the fluctuations they exhibit and the information they possess have recently been discovered. These laws are termed fluctuation theorems. In the present study, we formulate a theory of learning in neural networks bidirectionally interacting with environments based on the principle of information maximization. Our formulation begins with the introduction of a generalized fluctuation theorem, employing an interpretation appropriate for the present application, which differs from the original thermodynamic interpretation. We analytically and numerically demonstrate that the learning mechanism presented in our theory allows neural networks to efficiently explore their environments and optimally encode information about them.
NASA Astrophysics Data System (ADS)
Germain, Norbert; Besson, Jacques; Feyel, Frédéric
2007-07-01
Simulating damage and failure of laminate composite structures often fails when using the standard finite element procedure. The difficulties arise from an uncontrolled mesh dependence caused by damage localization and an increase in computational costs. One of the solutions to the first problem, widely used to predict the failure of metallic materials, consists of using non-local damage constitutive equations. The second difficulty can then be solved using specific finite element formulations, such as shell elements, which decrease the number of degrees of freedom. The main contribution of this paper consists of extending these techniques to layered materials such as polymer matrix composites. An extension of the non-local implicit gradient formulation, accounting for anisotropy and stratification, and an original layered shell element, based on a new partition of unity, are proposed. Finally, the efficiency of the resulting numerical scheme is studied by comparing simulations with experimental results.
Kamble, Bhagyashree; Talreja, Seema; Gupta, Ankur; Patil, Dada; Pathak, Deepa; Moothedath, Ismail; Duraiswamy, Basavan
2013-08-01
To develop and characterize Gymnema sylvestre extract-loaded niosomes using nonionic surfactants, and to evaluate their antihyperglycemic efficacy in comparison with the parent extract. Nonionic surfactant-based G. sylvestre extract-loaded niosomes were prepared using the thin-film hydration method. The optimized formulation was screened for entrapment efficiency of the constituents, as well as other parameters such as release kinetics, vesicle size, zeta-potential and stability studies. The parent extract and optimized niosomal formulation were evaluated for their antihyperglycemic potential in an alloxan-induced diabetic animal model. Niosomes prepared using Span™ 40 (SD Fine Chemicals Ltd, Mumbai, India) provided sterically stable vesicles 229.5 nm in size with zeta-potential and entrapment efficiency of 150.86 mV and 85.3 ± 4.5%, respectively. The surface morphology of vesicles was confirmed to be spherical by scanning electron microscopy studies. An in vitro release study demonstrated 77.4% phytoconstituent release within 24 h. The niosome formulation demonstrated significant blood glucose level reduction in an oral glucose tolerance test, and increased antihyperglycemic activity compared with the parent extract in an alloxan-induced diabetic model. This study reveals the merits of G. sylvestre extract-loaded niosomes, and justifies the potential of niosomes for improving the efficacy of G. sylvestre extract as an antidiabetic. Original submitted 30 March 2012; Revised submitted 29 August 2012; Published online 24 December 2012.
Improved two-equation k-omega turbulence models for aerodynamic flows
NASA Technical Reports Server (NTRS)
Menter, Florian R.
1992-01-01
Two new versions of the k-omega two-equation turbulence model will be presented. The new Baseline (BSL) model is designed to give results similar to those of the original k-omega model of Wilcox, but without its strong dependency on arbitrary freestream values. The BSL model is identical to the Wilcox model in the inner 50 percent of the boundary-layer but changes gradually to the high Reynolds number Jones-Launder k-epsilon model (in a k-omega formulation) towards the boundary-layer edge. The new model is also virtually identical to the Jones-Launder model for free shear layers. The second version of the model is called the Shear-Stress Transport (SST) model. It is based on the BSL model, but has the additional ability to account for the transport of the principal shear stress in adverse pressure gradient boundary-layers. The model is based on Bradshaw's assumption that the principal shear stress is proportional to the turbulent kinetic energy, which is introduced into the definition of the eddy-viscosity. Both models are tested for a large number of different flowfields. The results of the BSL model are similar to those of the original k-omega model, but without the undesirable freestream dependency. The predictions of the SST model are also independent of the freestream values and show excellent agreement with experimental data for adverse pressure gradient boundary-layer flows.
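The SST idea summarized above, that the principal shear stress should scale with the turbulent kinetic energy, is usually implemented as a limiter on the eddy viscosity. The following is a hedged sketch of that relation; the symbols and the constant a1 come from the widely circulated form of the model rather than being quoted from this report, so treat them as assumptions:

\[
\nu_t = \frac{a_1\,k}{\max\!\left(a_1\,\omega,\;\Omega\,F_2\right)}, \qquad a_1 \approx 0.31,
\]

where k is the turbulent kinetic energy, ω the specific dissipation rate, Ω the vorticity (or strain-rate) magnitude, and F_2 a blending function that is near one inside boundary layers and near zero in free shear layers. In an adverse pressure gradient the max(...) keeps the shear stress proportional to k, as Bradshaw's assumption requires, instead of letting it grow with the strain rate.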
NP-hardness of the cluster minimization problem revisited
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2005-10-01
The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.
Taki, Moeko; Tagami, Tatsuaki; Fukushige, Kaori; Ozeki, Tetsuya
2016-09-10
A unique two-solution mixing-type spray nozzle is useful for producing nanocomposite particles (microparticles containing drug nanoparticles) in one step. The nanocomposite particles can prevent nanoparticle aggregation. Curcumin, which has many reported pharmacological effects, was entrapped in mannitol microparticles using a spray dryer coupled with a two-solution mixing-type spray nozzle to prepare "curcumin nanocomposite particles", and the application of these particles for inhalation formulations was investigated. Spray drying conditions (flow rate, concentration and inlet temperature) affected the size of both the resulting curcumin nanocomposite particles and the curcumin nanoparticles in the nanocomposite particles. The aerosol performance of the curcumin nanocomposite particles changed depending on the spray drying conditions, and several conditions provided better deposition compared with the original curcumin powder. The curcumin nanocomposite particles showed an improved dissolution profile of curcumin compared with the original powder. Furthermore, the curcumin nanocomposite particles showed a higher cytotoxic effect than the original curcumin powder towards three cancer cell lines. Curcumin nanocomposite particles containing curcumin nanoparticles show promise as an inhalation formulation for treating lung-related diseases including cancer. Copyright © 2016. Published by Elsevier B.V.
Double multiple streamtube model with recent improvements
NASA Astrophysics Data System (ADS)
Paraschivoiu, I.; Delclaux, F.
1983-06-01
The objective of the present paper is to show the new capabilities of the double multiple streamtube (DMS) model for predicting the aerodynamic loads and performance of the Darrieus vertical-axis turbine. The original DMS model has been improved (DMSV model) by considering the variation in the upwind and downwind induced velocities as a function of the azimuthal angle for each streamtube. A comparison is made of the rotor performance for several blade geometries (parabola, catenary, troposkien, and Sandia shape). A new formulation is given for an approximate troposkien shape by considering the effect of the gravitational field. The effects of three NACA symmetrical profiles, 0012, 0015 and 0018, on the aerodynamic performance of the turbine are shown. Finally, a semiempirical dynamic-stall model has been incorporated and a better approximation obtained for modeling the local aerodynamic forces and performance for a Darrieus rotor.
A hydrostatic stress-dependent anisotropic model of viscoplasticity
NASA Technical Reports Server (NTRS)
Robinson, D. N.; Tao, Q.; Verrilli, M. J.
1994-01-01
A hydrostatic stress-dependent, anisotropic model of viscoplasticity is formulated as an extension of Bodner's model. This represents a further extension of the isotropic Bodner model over that made to anisotropy by Robinson and MitiKavuma. Account is made of the inelastic deformation that can occur in metallic composites under hydrostatic stress. A procedure for determining the material parameters is identified that is virtually identical to the established characterization procedure for the original Bodner model. Characterization can be achieved using longitudinal/transverse tensile and shear tests and hydrostatic stress tests; alternatively, four off-axis tensile tests can be used. Conditions for a yield stress minimum under off-axis tension are discussed. The model is applied to a W/Cu composite; characterization is made using off-axis tensile data generated at NASA Lewis Research Center (LeRC).
Wang, Wen-chuan; Chau, Kwok-wing; Qiu, Lin; Chen, Yang-bo
2015-05-01
Hydrological time series forecasting is one of the most important applications in modern hydrology, especially for effective reservoir management. In this research, an artificial neural network (ANN) model coupled with the ensemble empirical mode decomposition (EEMD) is presented for forecasting medium and long-term runoff time series. First, the original runoff time series is decomposed into a finite and often small number of intrinsic mode functions (IMFs) and a residual series using the EEMD technique to attain deeper insight into the data characteristics. Then all IMF components and the residue are each predicted through appropriate ANN models. Finally, the forecasted results of the modeled IMFs and residual series are summed to formulate an ensemble forecast for the original annual runoff series. Two annual reservoir runoff time series from Biuliuhe and Mopanshan in China are investigated using the developed model based on four performance evaluation measures (RMSE, MAPE, R and NSEC). The results obtained in this work indicate that EEMD can effectively enhance forecasting accuracy and the proposed EEMD-ANN model can attain significant improvement over the ANN approach in medium and long-term runoff time series forecasting. Copyright © 2015 Elsevier Inc. All rights reserved.
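The decompose-forecast-recombine workflow described above can be sketched in a few lines of Python. The sketch assumes the PyEMD package for EEMD and scikit-learn's MLPRegressor as the ANN; the lag length, network size, and placeholder data are illustrative choices, not those of the paper.

# Sketch of the EEMD-ANN workflow: decompose the series, forecast each
# component with its own ANN, and sum the component forecasts.
import numpy as np
from PyEMD import EEMD                      # assumed EEMD implementation
from sklearn.neural_network import MLPRegressor

def lagged_matrix(series, n_lags):
    """Build (X, y) pairs where each X row holds the previous n_lags values."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

def eemd_ann_forecast(runoff, n_lags=3):
    components = EEMD().eemd(runoff)        # IMFs (the last row acts as the residue/trend)
    forecast = 0.0
    for comp in components:
        X, y = lagged_matrix(comp, n_lags)
        ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        ann.fit(X, y)
        # one-step-ahead forecast of this component from its last n_lags values
        forecast += ann.predict(comp[-n_lags:].reshape(1, -1))[0]
    return forecast                          # ensemble forecast of the original series

annual_runoff = np.random.default_rng(0).random(60)   # placeholder annual series
print(eemd_ann_forecast(annual_runoff))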
A computational algorithm for spacecraft control and momentum management
NASA Technical Reports Server (NTRS)
Dzielski, John; Bergmann, Edward; Paradiso, Joseph
1990-01-01
Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem using feedback linearization to transform to an equivalent problem involving a linear dynamical constraint and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.
Morphodynamic Modeling Using The SToRM Computational System
NASA Astrophysics Data System (ADS)
Simoes, F.
2016-12-01
The framework of the work presented here is the open source SToRM (System for Transport and River Modeling) eco-hydraulics modeling system, which is one of the models released with the iRIC hydraulic modeling graphical software package (http://i-ric.org/). SToRM has been applied to the simulation of various complex environmental problems, including natural waterways, steep channels with regime transition, and rapidly varying flood flows with wetting and drying fronts. In its previous version, however, the channel bed was treated as static and the ability to simulate sediment transport rates or bed deformation was not included. The work presented here reports SToRM's newly developed extensions, which expand the system's capability to calculate morphological changes in alluvial river systems. The sediment transport module of SToRM has been developed based on the general recognition that meaningful advances depend on physically solid formulations and robust and accurate numerical solution methods. The basic concepts of mass and momentum conservation are used, where the feedback mechanisms between the flow of water, the sediment in transport, and the bed changes are directly incorporated in the governing equations used in the mathematical model. This is accomplished via a non-capacity transport formulation based on the work of Cao et al. [Z. Cao et al., "Non-capacity or capacity model for fluvial sediment transport," Water Management, 165(WM4):193-211, 2012], where the governing equations are augmented with source/sink terms due to water-sediment interaction. The same unsteady, shock-capturing numerical schemes originally used in SToRM were adapted to the new physics, using a control volume formulation over unstructured computational grids. The presentation will include a brief overview of these methodologies, and the results of applying the model to a number of relevant physical test cases with a movable bed, where computational results are compared to experimental data.
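The non-capacity formulation referenced above couples the transported sediment and the bed through erosion and deposition source terms; written schematically (this is the generic Cao-type structure, not necessarily the exact terms implemented in SToRM):

\[
\frac{\partial (h c)}{\partial t} + \nabla\cdot\left(h c\,\mathbf{u}\right) = E - D,
\qquad
(1 - p)\,\frac{\partial z_b}{\partial t} = D - E,
\]

where h is the flow depth, c the depth-averaged volumetric sediment concentration, u the depth-averaged velocity, E and D the entrainment and deposition fluxes, p the bed porosity, and z_b the bed elevation. Because E and D need not balance locally, the transport is not forced to its capacity value, and the same exchange terms appear as source/sink contributions in the water mass and momentum equations.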
Chaos in an Eulerian Based Model of Sickle Cell Blood Flow
NASA Astrophysics Data System (ADS)
Apori, Akwasi; Harris, Wesley
2001-11-01
A novel Eulerian model describing the manifestation of sickle cell blood flow in the capillaries has been formulated to study the apparently chaotic onset of sickle cell crises. This Eulerian model was based on extending previous models of sickle cell blood flow, which were limited due to their Lagrangian formulation. Oxygen concentration, red blood cell velocity, cell stiffness, and plasma viscosity were modeled as system state variables. The governing equations of the system were expressed in canonical form. The non-linear coupling of velocity-viscosity and viscosity-stiffness proved to be the origin of chaos in the system. The system was solved with respect to a control parameter representing the unique rheology of the sickle cell erythrocytes. Results of chaos tests proved positive for various ranges of the control parameter. The results included continuous patterns found in the Poincaré section, spectral broadening of the Fourier power spectrum, and positive Lyapunov exponent values. The onset of chaos predicted by this sickle cell flow model as the control parameter was varied appeared to coincide with the change from a healthy state to a crisis state in a sickle cell patient. This finding, that sickle cell crises may be caused by the well-understood transition of a solution from a steady state to chaos, could point to new ways of preventing and treating crises and should be validated in clinical trials.
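One of the chaos diagnostics cited above, a positive largest Lyapunov exponent, measures the mean exponential rate at which nearby trajectories diverge. The snippet below illustrates the estimation idea on a stand-in one-dimensional map (the logistic map), not on the sickle cell flow model itself:

# Estimate the largest Lyapunov exponent of the logistic map x -> r*x*(1-x)
# by averaging log|f'(x)| along an orbit (a stand-in system for illustration).
import math

r, x = 4.0, 0.2
total, n = 0.0, 100_000
for _ in range(n):
    total += math.log(abs(r * (1.0 - 2.0 * x)))   # log of the local stretching rate |f'(x)|
    x = r * x * (1.0 - x)                          # iterate the map
print(total / n)   # approaches ln 2 ≈ 0.693 > 0, the signature of chaos at r = 4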
Micromechanical modelling of polyethylene
NASA Astrophysics Data System (ADS)
Alvarado Contreras, Jose Andres
2008-10-01
The increasing use of polyethylene in diverse applications motivates the need for understanding how its molecular properties relate to the overall behaviour of the material. Although microstructure and mechanical properties of polymers have been the subject of several studies, the irreversible microstructural rearrangements occurring at large deformations are not completely understood. The purpose of this thesis is to describe how the concepts of Continuum Damage Mechanics can be applied to modelling of polyethylene materials under different loading conditions. The first part of the thesis consists of the theoretical formulation and numerical implementation of a three-dimensional micromechanical model for crystalline polyethylene. Based on the theory of shear slip on crystallographic planes, the proposed model is expressed in the framework of viscoplasticity coupled with degradation at large deformations. Earlier models aid in the interpretation of the mechanical behaviour of crystalline polyethylene under different loading conditions; however, they cannot predict the microstructural damage caused by deformation. The model, originally due to Parks and Ahzi (1990), was further developed in light of the concept of Continuum Damage Mechanics to consider the original microstructure, the particular irreversible rearrangements, and the deformation mechanisms. Damage mechanics has been a matter of intensive research by many authors, yet it has not been introduced to the micromodelling of semicrystalline polymeric materials such as polyethylene. Regarding the material representation, the microstructure is simplified as an aggregate of randomly oriented and perfectly bonded crystals. To simulate large deformations, the new constitutive model attempts to take into account the existence of intracrystalline microcracks. The second part of the work presents the theoretical formulation and numerical implementation of a three-dimensional constitutive model for the mechanical behaviour of semicrystalline polyethylene. The model proposed herein attempts to describe the deformation and degradation process in semicrystalline polyethylene following the approach of damage mechanics. Structural degradation, an important phenomenon at large deformations, has not received sufficient attention in the literature. The modifications to the constitutive equations consist essentially of introducing the concept of Continuum Damage Mechanics to describe the rupture of the intermolecular (van der Waals) bonds that hold crystals as coherent structures. In order to model the mechanical behaviour, the material morphology is simplified as a collection of inclusions comprising the crystalline and amorphous phases with their characteristic average volume fractions. In the spatial arrangement, each inclusion consists of crystalline material lying in a thin lamella attached to an amorphous layer. To consider microstructural damage, two different approaches are analyzed. The first approach assumes damage occurs only in the crystalline phase, i.e., degradation of the amorphous phase is ignored. The second approach considers the effect of damage on the mechanical behaviour of both the amorphous and crystalline phases. To illustrate the proposed constitutive formulations, the models were used to predict the responses of crystalline and semicrystalline polyethylene under uniaxial tension and simple shear. The numerical simulations were compared with experimental data previously obtained by Bartczak et al.
(1994), G'Sell and Jonas (1981), G'Sell et al. (1983), Hillmansen et al. (2000), and Li et al. (2001). Our model's predictions show consistently good agreement with the experimental results and a significant improvement with respect to the ones obtained by Parks and Ahzi (1990), Schoenfeld et al. (1995), Yang and Chen (2001), Lee et al. (1993b), Lee et al. (1993a), and Nikolov et al. (2006). The newly proposed formulations demonstrate that these types of constitutive models based on Continuum Damage Mechanics are appropriate for predicting large deformations and failure in polyethylene materials.
Further improvements of a new model for turbulent convection in stars
NASA Technical Reports Server (NTRS)
Canuto, V. M.; Mazzitelli, I.
1992-01-01
The effects of including a variable molecular weight and of using the newest opacities of Rogers and Iglesias (1991) as inputs to a recent model by Canuto and Mazzitelli (1991) for stellar turbulent convection are studied. Solar evolutionary tracks are used to conclude that the original model for turbulence with mixing length Lambda = z, Giuli's variable Q unequal to 1 and the new opacities yields a fit to solar T(eff) within 0.5 percent. A formulation of Lambda is proposed that extends the purely nonlocal Lambda = z expression to include local effects. A new expression for Lambda is obtained which generalizes both the mixing length theory (MLT) phenomenological expression for Lambda as well as the model Lambda = z. It is argued that the MLT should now be abandoned.
Study of the Bellman equation in a production model with unstable demand
NASA Astrophysics Data System (ADS)
Obrosova, N. K.; Shananin, A. A.
2014-09-01
A production model with allowance for a working capital deficit and a restricted maximum possible sales volume is proposed and analyzed. The study is motivated by the need to analyze well-known problems in the functioning of low-competitiveness macroeconomic structures. The original formulation of the problem is an infinite-horizon optimal control problem. As a result, the model is formalized in the form of a Bellman equation. It is proved that the corresponding Bellman operator is a contraction and has a unique fixed point in the chosen class of functions. A closed-form solution of the Bellman equation is found using the method of steps. The influence of the credit interest rate on the assessment of the firm's market value is analyzed by applying the developed model.
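Because the Bellman operator of the model is a contraction, its unique fixed point can be approached by successive approximations. The generic sketch below illustrates that mechanism on a discretized state space; the reward and transition rule are placeholders, not the production model of the paper.

# Generic value iteration: repeatedly apply a contracting Bellman operator
# until its fixed point is reached (placeholder reward/transition data).
import numpy as np

n_states, n_actions, beta = 50, 10, 0.95      # beta < 1 makes the operator a contraction
reward = np.random.default_rng(1).random((n_states, n_actions))
next_state = np.random.default_rng(2).integers(0, n_states, size=(n_states, n_actions))

V = np.zeros(n_states)
for _ in range(10_000):
    Q = reward + beta * V[next_state]         # Bellman operator applied to the current V
    V_new = Q.max(axis=1)
    converged = np.max(np.abs(V_new - V)) < 1e-10
    V = V_new
    if converged:                             # the contraction property guarantees convergence
        break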
Numerical Error Estimation with UQ
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Korn, Peter; Marotzke, Jochem
2014-05-01
Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists of extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted
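The "weighted sum of local model errors" mentioned above mirrors the Dual Weighted Residual structure, which in schematic form (notation assumed here rather than quoted from the references) reads

\[
J(u) - J(u_h) \;\approx\; \sum_{K} \omega_K\, r_K ,
\]

where J is the diagnostic of interest, u and u_h the continuous and discretized solutions, r_K the local residual associated with cell (or degree of freedom) K, and ω_K a weight obtained from the adjoint (dual) problem belonging to J. In the Goal Error Ensemble method the r_K are not evaluated deterministically but treated as realizations of the random process described above.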
Multiconfigurational quantum propagation with trajectory-guided generalized coherent states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigolo, Adriano, E-mail: agrigolo@ifi.unicamp.br; Aguiar, Marcus A. M. de, E-mail: aguiar@ifi.unicamp.br; Viscondi, Thiago F., E-mail: viscondi@if.usp.br
2016-03-07
A generalized version of the coupled coherent states method for coherent states of arbitrary Lie groups is developed. In contrast to the original formulation, which is restricted to frozen-Gaussian basis sets, the extended method is suitable for propagating quantum states of systems featuring diversified physical properties, such as spin degrees of freedom or particle indistinguishability. The approach is illustrated with simple models for interacting bosons trapped in double- and triple-well potentials, most adequately described in terms of SU(2) and SU(3) bosonic coherent states, respectively.
Kast, Stefan M
2004-03-08
An argument brought forward by Sholl and Fichthorn against the stochastic collision-based constant temperature algorithm for molecular dynamics simulations developed by Kast et al. is refuted. It is demonstrated that the large temperature fluctuations noted by Sholl and Fichthorn are due to improperly chosen initial conditions within their formulation of the algorithm. With the original form or by suitable initialization of their variant no deficient behavior is observed.
Advances in cognitive theory and therapy: the generic cognitive model.
Beck, Aaron T; Haigh, Emily A P
2014-01-01
For over 50 years, Beck's cognitive model has provided an evidence-based way to conceptualize and treat psychological disorders. The generic cognitive model represents a set of common principles that can be applied across the spectrum of psychological disorders. The updated theoretical model provides a framework for addressing significant questions regarding the phenomenology of disorders not explained in previous iterations of the original model. New additions to the theory include continuity of adaptive and maladaptive function, dual information processing, energizing of schemas, and attentional focus. The model includes a theory of modes, an organization of schemas relevant to expectancies, self-evaluations, rules, and memories. A description of the new theoretical model is followed by a presentation of the corresponding applied model, which provides a template for conceptualizing a specific disorder and formulating a case. The focus on beliefs differentiates disorders and provides a target for treatment. A variety of interventions are described.
NASA Astrophysics Data System (ADS)
Zhang, Kun; Ma, Jinzhu; Zhu, Gaofeng; Ma, Ting; Han, Tuo; Feng, Li Li
2017-01-01
Global and regional estimates of daily evapotranspiration are essential to our understanding of the hydrologic cycle and climate change. In this study, we selected the radiation-based Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model and assessed it at a daily time scale by using 44 flux towers. These towers are distributed across a wide range of ecosystems: croplands, deciduous broadleaf forest, evergreen broadleaf forest, evergreen needleleaf forest, grasslands, mixed forests, savannas, and shrublands. A regional land surface evapotranspiration model with a relatively simple structure, the PT-JPL model largely uses ecophysiologically based formulations and parameters to relate potential evapotranspiration to actual evapotranspiration. The results using the original model indicate that the model always overestimates evapotranspiration in arid regions. This likely results from the misrepresentation of water limitation and energy partitioning in the model. By analyzing physiological processes and determining the sensitive parameters, we identified a series of parameter sets that can increase model performance. The model with optimized parameters showed better performance (R2 = 0.2-0.87; Nash-Sutcliffe efficiency (NSE) = 0.1-0.87) at each site than the original model (R2 = 0.19-0.87; NSE = -12.14 to 0.85). The results of the optimization indicated that the parameter β (water control of soil evaporation) was much lower in arid regions than in relatively humid regions. Furthermore, the optimized value of parameter m1 (plant control of canopy transpiration) was mostly between 1 and 1.3, slightly lower than the original value. Also, the optimized parameter Topt correlated well with the actual environmental temperature at each site. We suggest that using optimized parameters with the PT-JPL model could provide an efficient way to improve the model performance.
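For orientation, the Priestley-Taylor core on which the optimized parameters act is the standard expression for potential evaporation; the constraint functions themselves are only indicated schematically here, and their published forms are not reproduced:

\[
\lambda E_{\mathrm{pot}} = \alpha\,\frac{\Delta}{\Delta + \gamma}\,\left(R_n - G\right), \qquad \alpha \approx 1.26,
\]

where Δ is the slope of the saturation vapour pressure curve, γ the psychrometric constant, R_n the net radiation, and G the soil heat flux. PT-JPL then scales components of λE_pot (soil, canopy, interception) by multiplicative ecophysiological constraints whose parameters include the soil-water parameter β and the plant parameter m1 discussed above, which is why tuning those parameters directly changes how much of the potential rate is realized as actual evapotranspiration.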
Mean Flow Augmented Acoustics in Rocket Systems
NASA Technical Reports Server (NTRS)
Fischbach, Sean
2014-01-01
Combustion instability in solid rocket motors and liquid engines has long been a subject of concern. Many rockets display violent fluctuations in pressure, velocity, and temperature originating from the complex interactions between the combustion process and gas dynamics. Recent advances in energy based modeling of combustion instabilities require accurate determination of acoustic frequencies and mode shapes. Of particular interest are the acoustic mean flow interactions within the converging section of a rocket nozzle, where gradients of pressure, density, and velocity become large. The expulsion of unsteady energy through the nozzle of a rocket is identified as the predominant source of acoustic damping for most rocket systems. Recently, an approach to address nozzle damping with mean flow effects was implemented by French [1]. This new approach extends the work originated by Sigman and Zinn [2] by solving the acoustic velocity potential equation (AVPE) formulated by perturbing the Euler equations [3]. The present study aims to implement the French model within the COMSOL Multiphysics framework and to analyze one of the author's presented test cases.
Toward Improved Fidelity of Thermal Explosion Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, A L; Becker, R; Howard, W M
2009-07-17
We will present results of an effort to improve the thermal/chemical/mechanical modeling of HMX-based explosives like LX04 and LX10 for thermal cook-off. The original HMX model and analysis scheme were developed by Yoh et al. for use in the ALE3D modeling framework. The current results were built to remedy the deficiencies of that original model. We concentrated our efforts in four areas. The first area was the addition of porosity to the chemical material model framework in ALE3D that is used to model the HMX explosive formulation. This is needed to handle the roughly 2% porosity in solid explosives. The second area was the improvement of the HMX reaction network, which included a reactive phase-change model based on work by Henson et al. The third area required adding early decomposition gas species to the CHEETAH material database to develop more accurate equations of state for gaseous intermediates and products. Finally, it was necessary to improve the implicit mechanics module in ALE3D to more naturally handle the long time scales associated with thermal cook-off. The application of the resulting framework to the analysis of the Scaled Thermal Explosion (STEX) experiments will be discussed.
NASA Astrophysics Data System (ADS)
Guerrero, José Luis Morales; Vidal, Manuel Cánovas; Nicolás, José Andrés Moreno; López, Francisco Alhama
2018-05-01
New additional conditions required for the uniqueness of 2D elastostatic problems formulated in terms of potential functions for the derived Papkovich-Neuber representations are studied. Two cases are considered, each of them formulated by the scalar potential function plus one of the rectangular non-zero components of the vector potential function. For these formulations, in addition to the original (physical) boundary conditions, two new additional conditions are required. In addition, for the complete Papkovich-Neuber formulation, expressed by the scalar potential plus two components of the vector potential, the additional conditions established previously for the three-dimensional case in a z-convex domain can be applied. To show the usefulness of these new conditions in a numerical scheme, two applications are solved numerically by the network method for the three cases of potential formulations.
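For reference, the representation underlying these formulations expresses the displacement field through one harmonic scalar and one harmonic vector potential; sign and normalization conventions vary between texts, so the form below is only one common convention:

\[
2\mu\,\mathbf{u} = 4(1-\nu)\,\boldsymbol{\psi} - \nabla\!\left(\phi + \mathbf{r}\cdot\boldsymbol{\psi}\right),
\qquad \nabla^2\phi = 0, \quad \nabla^2\boldsymbol{\psi} = \mathbf{0},
\]

where μ is the shear modulus, ν Poisson's ratio, and r the position vector. The 2D cases studied above retain φ plus a single non-zero rectangular component of ψ, and it is precisely this reduction that makes the extra uniqueness conditions necessary.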
Nieto-Orellana, Alejandro; Coghlan, David; Rothery, Malcolm; Falcone, Franco H; Bosquillon, Cynthia; Childerhouse, Nick; Mantovani, Giuseppe; Stolnik, Snow
2018-04-05
Pulmonary delivery of protein therapeutics has considerable clinical potential for treating both local and systemic diseases. However, poor protein conformational stability, immunogenicity and protein degradation by proteolytic enzymes in the lung are major challenges to overcome for the development of effective therapeutics. To address these, a family of structurally related copolymers comprising polyethylene glycol, mPEG2k, and poly(glutamic acid) with linear A-B (mPEG2k-lin-GA) and miktoarm A-B3 (mPEG2k-mik-(GA)3) macromolecular architectures was investigated as potential protein stabilisers. These copolymers form non-covalent nanocomplexes with a model protein (lysozyme) which can be formulated into dry powders by spray-drying using common aerosol excipients (mannitol, trehalose and leucine). Powder formulations with excellent aerodynamic properties (fine particle fraction of up to 68%) were obtained with particle size (D50) in the 2.5 µm range, low moisture content (<5%), and high glass transition temperatures, i.e. formulation attributes all suitable for inhalation application. In aqueous medium, dry powders rapidly disintegrated into the original polymer-protein nanocomplexes, which provided protection towards proteolytic degradation. Taken together, the present study shows that dry powders based on (mPEG2k-polyGA)-protein nanocomplexes show potential as an inhalation delivery system. Copyright © 2018 Elsevier B.V. All rights reserved.
A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction
NASA Astrophysics Data System (ADS)
Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur
2009-07-01
For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (for Eddy Diffusivity/Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, and is specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover and the projection on the non-conservative variables is processed by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme in representing three different boundary-layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and preserve a realistic evolution of stratocumulus (EUROCS/FIRE).
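A schematic of the dry-updraft entrainment/detrainment closure described above, with the proportionality constants omitted and the exact power of the updraft velocity treated as an assumption (descriptions of this scheme commonly use the squared updraft velocity in the denominator):

\[
\epsilon \;\propto\; \max\!\left(0,\; \frac{B}{w_u^{2}}\right), \qquad
\delta \;\propto\; \max\!\left(0,\; -\frac{B}{w_u^{2}}\right),
\]

where B is the updraft buoyancy and w_u its vertical velocity, so that positively buoyant, slowly rising air entrains strongly while negatively buoyant air detrains.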
Exploring the Role of Intrinsic Nodal Activation on the Spread of Influence in Complex Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Visweswara Sathanur, Arun; Halappanavar, Mahantesh; Shi, Yi
In many complex networked systems such as online social networks, at any given time, activity originates at certain nodes and subsequently spreads on the network through influence. To model the spread of influence in such a scenario, we consider the problem of identification of influential entities in a complex network when nodal activation can happen through two different mechanisms. The first mode of activation is due to mechanisms intrinsic to the node. The second mechanism is through the influence of connected neighbors. In this work, we present a simple probabilistic formulation that models such self-evolving systems where information diffusion occurs primarily because of the intrinsic activity of users and the spread of activity occurs due to influence. We provide an algorithm to mine for the influential seeds in such a scenario by modifying the well-known influence maximization framework with the independent cascade diffusion model. We present small motivating examples to provide an intuitive understanding of the effect of including the intrinsic activation mechanism. We sketch a proof of the submodularity of the influence function under the new formulation and demonstrate the same with larger graphs. We then show, by means of additional experiments on a real-world Twitter dataset, how the formulation can be applied to real-world social media datasets. Finally, we derive a computationally efficient centrality metric that takes into account both mechanisms of activation and provides an accurate as well as computationally efficient alternative approach to the problem of identifying influencers under intrinsic activation.
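The modified influence-maximization framework described above can be sketched as greedy seed selection over Monte Carlo simulations of an independent cascade in which nodes may also self-activate. The graph, probabilities and seed budget below are illustrative assumptions, not values from the work.

# Greedy influence maximization with an intrinsic (self-)activation mechanism
# added to the independent cascade model; all parameters are illustrative.
import random
import networkx as nx

def simulate_spread(G, seeds, p_edge=0.1, p_intrinsic=0.01, trials=100):
    total = 0
    for _ in range(trials):
        # seeds plus nodes that self-activate through the intrinsic mechanism
        active = set(seeds) | {v for v in G if random.random() < p_intrinsic}
        frontier = set(active)
        while frontier:
            new = set()
            for u in frontier:                      # each newly active node gets one
                for v in G.successors(u):           # chance to influence each neighbor
                    if v not in active and random.random() < p_edge:
                        new.add(v)
            active |= new
            frontier = new
        total += len(active)
    return total / trials                           # Monte Carlo estimate of expected spread

def greedy_seeds(G, k):
    """Greedy selection; the sketched submodularity argument justifies its guarantee."""
    seeds = []
    for _ in range(k):
        best = max((v for v in G.nodes if v not in seeds),
                   key=lambda v: simulate_spread(G, seeds + [v]))
        seeds.append(best)
    return seeds

G = nx.gnp_random_graph(50, 0.08, seed=1, directed=True)
print(greedy_seeds(G, 3))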
NASA Technical Reports Server (NTRS)
Dean, W. G.
1982-01-01
During prelaunch procedures at Kennedy Space Center some of the EPDM Thermal Protection System material was damaged on the Solid Rocket Booster stiffener stubs. The preferred solution was to patch the damaged areas with a cork-filled epoxy patching compound. Before this was done, however, it was requested that this patching technique be checked out by testing it in the MSFC Hot Gas Facility (HGF). Two tests were run in the HGF in 1980. The results showed the patch material to be adequate. Since that time, the formulation of the cork-filled epoxy material has been changed. It became necessary to retest this concept to be sure that the new material is as good as or better than the original material. In addition to the revised formulation material, tests were also made using K5NA as the patch material. The objectives of the tests reported herein were to: (1) compare the thermal performance of the original and the new cork-filled epoxy formulations, and (2) compare the K5NA closeout material to these epoxy materials. Material specifications are also discussed.
Laboratory observations of artificial sand and oil agglomerates
Jenkins, Robert L.; Dalyander, P. Soupy; Penko, Allison; Long, Joseph W.
2018-04-27
Sand and oil agglomerates (SOAs) form when weathered oil reaches the surf zone and combines with suspended sediments. The presence of large SOAs in the form of thick mats (up to 10 centimeters [cm] in height and up to 10 square meters [m2] in area) and smaller SOAs, sometimes referred to as surface residual balls (SRBs), may lead to the re-oiling of beaches previously affected by an oil spill. A limited number of numerical modeling and field studies exist on the transport and dynamics of centimeter-scale SOAs and their interaction with the sea floor. Numerical models used to study SOAs have relied on shear-stress formulations to predict incipient motion. However, uncertainty exists as to the accuracy of applying these formulations, originally developed for sand grains in a uniformly sorted sediment bed, to larger, nonspherical SOAs. In the current effort, artificial sand and oil agglomerates (aSOAs) created with the size, density, and shape characteristics of SOAs were studied in a small-oscillatory flow tunnel. These experiments expanded the available data on SOA motion and interaction with the sea floor and were used to examine the applicability of shear-stress formulations to predict SOA mobility. Data collected during these two sets of experiments, including photographs, video, and flow velocity, are presented in this report, along with an analysis of shear-stress-based formulations for incipient motion. The results showed that shear-stress thresholds for typical quartz sand predicted the incipient motion of aSOAs with 0.5–1.0-cm diameters, but were inaccurate for aSOAs with larger diameters (>2.5 cm). This finding implies that modified parameterizations of incipient motion may be necessary under certain combinations of aSOA characteristics and environmental conditions.
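The shear-stress formulations whose applicability the experiments test follow the classic Shields form, stated here generically (the critical values in common use are empirical fits for uniformly sorted quartz sand):

\[
\theta = \frac{\tau_b}{\left(\rho_s - \rho\right)\,g\,d},
\qquad \text{incipient motion predicted when } \theta > \theta_{cr},
\]

where τ_b is the bed shear stress, ρ_s and ρ the particle (here, agglomerate) and water densities, g the gravitational acceleration, d the diameter, and θ_cr the critical Shields parameter. Whether a single θ_cr calibrated for sand grains carries over to nonspherical, centimetre-scale aSOAs is exactly what the oscillatory flow tunnel measurements were designed to check.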
Symonds, Erin L; Cole, Stephen R; Bastin, Dawn; Fraser, Robert Jl; Young, Graeme P
2017-12-01
Objectives Faecal immunochemical test accuracy may be adversely affected when samples are exposed to high temperatures. This study evaluated the effect of two sample collection buffer formulations (OC-Sensor, Eiken) and storage temperatures on faecal haemoglobin readings. Methods Faecal immunochemical test samples returned in a screening programme and with ≥10 µg Hb/g faeces in either the original or new formulation haemoglobin stabilizing buffer were stored in the freezer, refrigerator, or at room temperature (22℃-24℃), and reanalysed after 1-14 days. Samples in the new buffer were also reanalysed after storage at 35℃ and 50℃. Results were expressed as percentage of the initial concentration, and the number of days that levels were maintained to at least 80% was calculated. Results Haemoglobin concentrations were maintained above 80% of their initial concentration with both freezer and refrigerator storage, regardless of buffer formulation or storage duration. Stability at room temperature was significantly better in the new buffer, with haemoglobin remaining above 80% for 20 days compared with six days in the original buffer. Storage at 35℃ or 50℃ in the new buffer maintained haemoglobin above 80% for eight and two days, respectively. Conclusion The new formulation buffer has enhanced haemoglobin stabilizing properties when samples are exposed to temperatures greater than 22℃.
Odediran, Samuel Akintunde; Elujoba, Anthony Adebolu; Adebajo, Adeleke Clement
2014-05-01
Mangifera indica, Alstonia boonei, Morinda lucida and Azadirachta indica (MAMA) decoction, commonly prepared and used in Nigeria from 1:1:1:1 ratio of Mangifera indica L. (Anacardiaceae), Alstonia boonei De Wild (Apocynaceae), Morinda lucida Benth (Rubiaceae), and Azadirachta indica A. Juss (Meliaceae) leaves, plus four new variants of this combination were subjected to three in vivo antimalarial test models using chloroquine-sensitive Plasmodium berghei berghei to determine the most active under each of the test models. Using the original formulation, MAMA (1:1:1:1) which gave ED50 and ED90 of 101.54±2.95 and 227.18±2.95, respectively, as reference for comparison, MAMA-1 (1:2:2:2), with 79.58±1.30 and 170.98±1.30, gave significantly (p<0.05) higher survival at 85 and 340 mg/kg when 80 % of the mice survived for 15.6 and 17.8 days, respectively, while MAMA-2 (2:1:2:2), with 83.57±1.93 and 164.23±1.93, gave comparable survival except at 170 mg/kg with 60 % survivors for 12 days. MAMA-1 and MAMA-2 were the best curative formulations with MAMA-1 giving additional prophylactic activity. MAMA-3 (2:2:2:1) with 98.70±0.91 and 220.17±0.91, gave comparable (p>0.05) survival at 85 mg/kg with 60 % survival for 13.2 days and significantly higher survival at 42.5 mg/kg for 17 days with 40 % survival. Both MAMA and MAMA-3 were the best chemosuppressive formulations plus additional curative activities. MAMA-4 (1:1:2:2), the best prophylactic formulation with 94.87±2.43 and 201.20±2.43 gave significantly higher (p<0.05) survival at all doses except at 21.25 mg/kg which gave 60 % survival up to 10 days. Thus, the antimalarial therapy desired, following appropriate diagnosis, whether prophylactic, chemosuppressive or curative would determine which of the MAMA decoction formulations to be prescribed. This phenomenon of formulary optimization may also be applied to other pharmacological activities.
Phenomenologically viable Lorentz-violating quantum gravity.
Sotiriou, Thomas P; Visser, Matt; Weinfurtner, Silke
2009-06-26
Horava's "Lifschitz point gravity" has many desirable features, but in its original incarnation one is forced to accept a nonzero cosmological constant of the wrong sign to be compatible with observation. We develop an extension of Horava's model that abandons "detailed balance" and regains parity invariance, and in 3+1 dimensions exhibit all five marginal (renormalizable) and four relevant (super-renormalizable) operators, as determined by power counting. We also consider the classical limit of this theory, evaluate the Hamiltonian and supermomentum constraints, and extract the classical equations of motion in a form similar to the Arnowitt-Deser-Misner formulation of general relativity. This puts the model in a framework amenable to developing detailed precision tests.
Fully-coupled analysis of jet mixing problems. Three-dimensional PNS model, SCIP3D
NASA Technical Reports Server (NTRS)
Wolf, D. E.; Sinha, N.; Dash, S. M.
1988-01-01
Numerical procedures formulated for the analysis of 3D jet mixing problems, as incorporated in the computer model, SCIP3D, are described. The overall methodology closely parallels that developed in the earlier 2D axisymmetric jet mixing model, SCIPVIS. SCIP3D integrates the 3D parabolized Navier-Stokes (PNS) jet mixing equations, cast in mapped cartesian or cylindrical coordinates, employing the explicit MacCormack Algorithm. A pressure split variant of this algorithm is employed in subsonic regions with a sublayer approximation utilized for treating the streamwise pressure component. SCIP3D contains both the ks and kW turbulence models, and employs a two component mixture approach to treat jet exhausts of arbitrary composition. Specialized grid procedures are used to adjust the grid growth in accordance with the growth of the jet, including a hybrid cartesian/cylindrical grid procedure for rectangular jets which moves the hybrid coordinate origin towards the flow origin as the jet transitions from a rectangular to circular shape. Numerous calculations are presented for rectangular mixing problems, as well as for a variety of basic unit problems exhibiting overall capabilities of SCIP3D.
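The explicit MacCormack algorithm named above is a two-step predictor-corrector; the sketch below illustrates it on one-dimensional linear advection as a stand-in for the 3D PNS marching in SCIP3D (grid, wave speed, and CFL number are arbitrary illustrative choices).

# Explicit MacCormack predictor-corrector on 1D linear advection u_t + a u_x = 0,
# with periodic boundaries; illustrative only, not the SCIP3D equations.
import numpy as np

n, a, cfl = 200, 1.0, 0.8
dx = 1.0 / n
dt = cfl * dx / a
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)               # initial Gaussian pulse

for _ in range(100):
    # predictor: forward difference
    u_pred = u - a * dt / dx * (np.roll(u, -1) - u)
    # corrector: backward difference on the predicted field, then average
    u = 0.5 * (u + u_pred - a * dt / dx * (u_pred - np.roll(u_pred, 1)))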
Non Abelian T-duality in Gauged Linear Sigma Models
NASA Astrophysics Data System (ADS)
Bizet, Nana Cabo; Martínez-Merino, Aldo; Zayas, Leopoldo A. Pando; Santos-Silva, Roberto
2018-04-01
Abelian T-duality in Gauged Linear Sigma Models (GLSM) forms the basis of the physical understanding of Mirror Symmetry as presented by Hori and Vafa. We consider an alternative formulation of Abelian T-duality on GLSM's as a gauging of a global U(1) symmetry with the addition of appropriate Lagrange multipliers. For GLSMs with Abelian gauge groups and without superpotential we reproduce the dual models introduced by Hori and Vafa. We extend the construction to formulate non-Abelian T-duality on GLSMs with global non-Abelian symmetries. The equations of motion that lead to the dual model are obtained for a general group, they depend in general on semi-chiral superfields; for cases such as SU(2) they depend on twisted chiral superfields. We solve the equations of motion for an SU(2) gauged group with a choice of a particular Lie algebra direction of the vector superfield. This direction covers a non-Abelian sector that can be described by a family of Abelian dualities. The dual model Lagrangian depends on twisted chiral superfields and a twisted superpotential is generated. We explore some non-perturbative aspects by making an Ansatz for the instanton corrections in the dual theories. We verify that the effective potential for the U(1) field strength in a fixed configuration on the original theory matches the one of the dual theory. Imposing restrictions on the vector superfield, more general non-Abelian dual models are obtained. We analyze the dual models via the geometry of their susy vacua.
Parallel Finite Element Domain Decomposition for Structural/Acoustic Analysis
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Tungkahotara, Siroj; Watson, Willie R.; Rajan, Subramaniam D.
2005-01-01
A domain decomposition (DD) formulation for solving sparse linear systems of equations resulting from finite element analysis is presented. The formulation incorporates mixed direct and iterative equation solving strategies and other novel algorithmic ideas that are optimized to take advantage of sparsity and exploit modern computer architecture, such as memory and parallel computing. The most time consuming part of the formulation is identified and the critical roles of direct sparse and iterative solvers within the framework of the formulation are discussed. Experiments on several computer platforms using several complex test matrices are conducted using software based on the formulation. Small-scale structural examples are used to validate the steps in the formulation, and large-scale (1,000,000+ unknowns) duct acoustic examples are used to evaluate the formulation on ORIGIN 2000 processors and on a cluster of 6 PCs (running under the Windows environment). Statistics show that the formulation is efficient in both sequential and parallel computing environments and that the formulation is significantly faster and consumes less memory than that based on one of the best available commercial parallel sparse solvers.
NASA Technical Reports Server (NTRS)
Kirshen, N.; Mill, T.
1973-01-01
The effect of formulation components and the addition of fire retardants on the impact sensitivity of Viton B fluoroelastomer in liquid oxygen was studied with the objective of developing a procedure for reliably reducing this sensitivity. Component evaluation, carried out on more than 40 combinations of components and cure cycles, showed that almost all the standard formulation agents, including carbon, MgO, Diak-3, and PbO2, will sensitize the Viton stock either singly or in combinations, some combinations being much more sensitive than others. Cure and postcure treatments usually reduced the sensitivity of a given formulation, often dramatically, but no formulated Viton was as insensitive as the pure Viton B stock. Coating formulated Viton with a thin layer of pure Viton gave some indication of reduced sensitivity, but additional tests are needed. It is concluded that sensitivity in formulated Viton arises from a variety of sources, some physical and some chemical in origin. Elemental analyses for all the formulated Vitons are reported as are the results of a literature search on the subject of LOX impact sensitivity.
The ODD protocol: A review and first update
Grimm, Volker; Berger, Uta; DeAngelis, Donald L.; Polhill, J. Gary; Giske, Jarl; Railsback, Steve F.
2010-01-01
The 'ODD' (Overview, Design concepts, and Details) protocol was published in 2006 to standardize the published descriptions of individual-based and agent-based models (ABMs). The primary objectives of ODD are to make model descriptions more understandable and complete, thereby making ABMs less subject to criticism for being irreproducible. We have systematically evaluated existing uses of the ODD protocol and identified, as expected, parts of ODD needing improvement and clarification. Accordingly, we revise the definition of ODD to clarify aspects of the original version and thereby facilitate future standardization of ABM descriptions. We discuss frequently raised critiques in ODD but also two emerging, and unanticipated, benefits: ODD improves the rigorous formulation of models and helps make the theoretical foundations of large models more visible. Although the protocol was designed for ABMs, it can help with documenting any large, complex model, alleviating some general objections against such models.
Mixed-strategy Nash equilibrium for a discontinuous symmetric N-player game
NASA Astrophysics Data System (ADS)
Hilhorst, H. J.; Appert-Rolland, C.
2018-03-01
We consider a game in which each player must find a compromise between more daring strategies that carry a high risk for him to be eliminated, and more cautious ones that, however, reduce his final score. For two symmetric players this game was originally formulated in 1961 by Dresher, who modeled a duel between two opponents. The game has also been of interest in the description of athletic competitions. We extend here the two-player game to an arbitrary number N of symmetric players. We show that there is a mixed-strategy Nash equilibrium and find its exact analytic expression, which we analyze in particular in the limit of large N, where mean-field behavior occurs. The original game with N = 2 arises as a singular limit of the general case.
Pradhan, Abani K; Ivanek, Renata; Gröhn, Yrjö T; Bukowski, Robert; Geornaras, Ifigenia; Sofos, John N; Wiedmann, Martin
2010-04-01
The objective of this study was to estimate the relative risk of listeriosis-associated deaths attributable to Listeria monocytogenes contamination in ham and turkey formulated without and with growth inhibitors (GIs). Two contamination scenarios were investigated: (i) prepackaged deli meats with contamination originating solely from manufacture at a frequency of 0.4% (based on reported data) and (ii) retail-sliced deli meats with contamination originating solely from retail at a frequency of 2.3% (based on reported data). Using a manufacture-to-consumption risk assessment with product-specific growth kinetic parameters (i.e., lag phase and exponential growth rate), reformulation with GIs was estimated to reduce human listeriosis deaths linked to ham and turkey by 2.8- and 9-fold, respectively, when contamination originated at manufacture and by 1.9- and 2.8-fold, respectively, for products contaminated at retail. Contamination originating at retail was estimated to account for 76 and 63% of listeriosis deaths caused by ham and turkey, respectively, when all products were formulated without GIs and for 83 and 84% of listeriosis deaths caused by ham and turkey, respectively, when all products were formulated with GIs. Sensitivity analyses indicated that storage temperature was the most important factor affecting the estimation of per annum relative risk. Scenario analyses suggested that reducing storage temperature in home refrigerators to consistently below 7 degrees C would greatly reduce the risk of human listeriosis deaths, whereas reducing storage time appeared to be less effective. Overall, our data indicate a critical need for further development and implementation of effective control strategies to reduce L. monocytogenes contamination at the retail level.
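The product-specific growth kinetic parameters referred to above enter such exposure assessments through a two-phase lag/exponential growth curve; a schematic form (not the exact equations of this risk assessment) is

\[
\log_{10} N(t) =
\begin{cases}
\log_{10} N_0, & t \le \lambda,\\[4pt]
\min\!\bigl(\log_{10} N_0 + \mu\,(t-\lambda),\; \log_{10} N_{\max}\bigr), & t > \lambda,
\end{cases}
\]

where N_0 is the contamination level when storage begins, λ the lag phase, μ the exponential growth rate, and N_max the maximum population density. Growth inhibitors act by lengthening λ and lowering μ, and both parameters depend strongly on storage temperature, which is consistent with storage temperature emerging as the most influential factor in the sensitivity analysis.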
ERIC Educational Resources Information Center
Mazzilli, Sueli
2000-01-01
Examines influences of the Cordoba Movement on the formulation of ideas concerning the inseparability among teaching, research, and extension--a new paradigm for the Brazilian university. Finds that the formulation of this inseparability had its origins in the Brazilian student movement of the 1960s, which included theses of the Cordoba Manifesto. (BT)
Emergence of long distance bird migrations: a new model integrating global climate changes
NASA Astrophysics Data System (ADS)
Louchart, Antoine
2008-12-01
During the history of modern birds, climatic and environmental conditions have evolved on wide scales. In a continuously changing world, landbirds' annual migrations emerged and developed. However, models accounting for the origins of these avian migrations were formulated from static ecogeographic perspectives. Here I reviewed Cenozoic paleoclimatic and paleontological data relevant to the palearctic-paleotropical long distance (LD) migration system. This led to the proposal of a new model for the origin of LD migrations, the ‘shifting home’ model (SHM). It is based on a dynamic perspective of climate evolution and may apply to the origins of most modern migrations. Non-migrant tropical African bird taxa were present at European latitudes during most of the Cenozoic. Their distribution limits shifted progressively toward modern tropical latitudes during periods of global cooling and increasing seasonality. In parallel, decreasing winter temperatures in the western Palearctic drove shifts of population winter ranges toward the equator. I propose that this induced the emergence of most short distance migrations, and in turn LD migrations. This model reconciles the ecologically tropical ancestry of most LD migrants with predominant winter range shifts, in accordance with requirements for heritable homing. In addition, it is more parsimonious than other, non-exclusive models. The greater intrinsic plasticity of winter ranges implied by the SHM is supported by recently observed impacts of the present global warming on migrating birds. This may pose particular threats to some LD migrants. The ancestral, breeding homes of LD migrants were not ‘northern’ or ‘southern’ but shifted across high and middle latitudes while migrations emerged through winter range shifts themselves.
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.
1993-01-01
Turbulent backward-facing step flow was examined using four low turbulent Reynolds number k-epsilon models and one standard high Reynolds number technique. A tunnel configuration of 1:9 (step height: exit tunnel height) was used. The models tested include: the original Jones and Launder; Chien; Launder and Sharma; and the recent Shih and Lumley formulation. The experimental reference of Driver and Seegmiller was used to make detailed comparisons between reattachment length, velocity, pressure, turbulent kinetic energy, Reynolds shear stress, and skin friction predictions. The results indicated that the use of a wall function for the standard k-epsilon technique did not reduce the calculation accuracy for this separated flow when compared to the low turbulent Reynolds number techniques.
Gasser, Urs E; Fischer, Anton; Timmermans, Jan P; Arnet, Isabelle
2013-04-23
By definition, a generic product is considered interchangeable with the innovator brand product. Controversy exists about interchangeability, and attention is predominantly directed to contaminants. In particular for chronic, degenerative conditions such as Parkinson's disease (PD), generic substitution remains debated among physicians, patients and pharmacists. The objective of this study was to compare the pharmaceutical quality of seven generic levodopa/benserazide hydrochloride combination products marketed in Germany with the original product (Madopar® / Prolopa® 125, Roche, Switzerland) in order to evaluate the potential impact of Madopar® generics versus branded products for PD patients and clinicians. Madopar® / Prolopa® 125 tablets and capsules were used as reference material. The generic products tested (all 100 mg/25 mg formulations) included four tablet and three capsule formulations. Colour, appearance of powder (capsules), disintegration and dissolution, mass of tablets and fill mass of capsules, content, identity and amounts of impurities were assessed along with standard physical and chemical laboratory tests developed and routinely practiced at Roche facilities. Results were compared to the original "shelf-life" specifications in use by Roche. Each of the seven generic products had one or two parameters outside the specifications. Deviations for the active ingredients ranged from +8.4% (benserazide) to -7.6% (levodopa) in two tablet formulations. Degradation products were measured in marked excess (+26.5%) in one capsule formulation. Disintegration time and dissolution for levodopa and benserazide hydrochloride at 30 min were within specifications for all seven generic samples analysed, albeit with some outliers. Deviations for the active ingredients may go unnoticed by a new user of the generic product, but may entail clinical consequences when switching from original to generic during a long-term therapy. Degradation products may pose a safety concern. Our results should prompt caution when prescribing a generic of Madopar®/Prolopa®, and also invite further investigation toward a more comprehensive approach, both pharmaceutical and clinical.
Tracking trade transactions in water resource systems: A node-arc optimization formulation
NASA Astrophysics Data System (ADS)
Erfani, Tohid; Huskova, Ivana; Harou, Julien J.
2013-05-01
We formulate and apply a multicommodity network flow node-arc optimization model capable of tracking trade transactions in complex water resource systems. The model uses a simple node-to-node network connectivity matrix and does not require preprocessing of all possible flow paths in the network. We compare the proposed node-arc formulation with an existing arc-path (flow path) formulation and explain the advantages and difficulties of both approaches. We verify the proposed formulation on a hypothetical water distribution network. Results indicate the arc-path model solves the problem with fewer constraints, but the proposed formulation allows the use of a simple network connectivity matrix, which simplifies modeling large or complex networks. The proposed algorithm allows existing node-arc hydroeconomic models that broadly represent water trading to be converted into ones that also track individual supplier-receiver relationships (trade transactions).
Arul Jose, Polpass; Sivakala, Kunjukrishnan Kamalakshi; Jebakumar, Solomon Robinson David
2013-01-01
Streptomyces sp. JAJ06 is a seawater-dependent antibiotic producer, previously isolated and characterised from an Indian coastal solar saltern. This paper reports replacement of seawater with a defined salt formulation in the production medium and subsequent statistical media optimization to ensure consistent as well as improved antibiotic production by Streptomyces sp. JAJ06. The strain proved able to produce the antibiotic compound when a chemically defined sodium-chloride-based salt formulation was incorporated into the production medium instead of seawater. A Plackett-Burman design experiment was applied, and three media constituents, starch, KBr, and CaCO3, were recognised to have a significant effect on the antibiotic production of Streptomyces sp. JAJ06 at their individual levels. Subsequently, response surface methodology with a Box-Behnken design was employed to optimize these influential medium constituents for improved antibiotic production by Streptomyces sp. JAJ06. A total of 17 experiments were conducted towards the construction of a quadratic model and a second-order polynomial equation. Optimum levels of the medium constituents were obtained by analysis of the model and a numerical optimization method. When the strain JAJ06 was cultivated in the optimized medium, the antibiotic activity increased to 173.3 U/mL, a 26.8% increase compared to the original medium (136.7 U/mL). This study found a useful way to cultivate Streptomyces sp. JAJ06 for enhanced production of the antibiotic compound. PMID:24454383
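For readers unfamiliar with the response-surface step described above, the sketch below fits a second-order polynomial to a Box-Behnken-style layout in three coded factors using ordinary least squares; the design rows, activity values and candidate optimum are hypothetical and are not the study's data.

```python
import numpy as np

# Coded factor levels (-1, 0, +1) for starch, KBr and CaCO3 in a Box-Behnken
# style layout, together with hypothetical antibiotic activities (U/mL).
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.array([120, 135, 128, 150, 118, 140, 125, 152,
              122, 138, 130, 148, 165, 168, 166], dtype=float)

def design_matrix(X):
    """Columns: 1, x1..x3, x1^2..x3^2, x1x2, x1x3, x2x3 (full quadratic model)."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1 * x2, x1 * x3, x2 * x3])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("second-order polynomial coefficients:", np.round(beta, 2))

# Predicted response at a candidate optimum (coded levels), purely illustrative.
x_opt = np.array([[0.4, 0.6, -0.2]])
print("predicted activity:", (design_matrix(x_opt) @ beta)[0])
```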
Shao, Q; Rowe, R C; York, P
2007-06-01
This study has investigated an artificial intelligence technology - model trees - as a modelling tool applied to an immediate release tablet formulation database. The modelling performance was compared with artificial neural networks that have been well established and widely applied in the pharmaceutical product formulation fields. The predictability of generated models was validated on unseen data and judged by correlation coefficient R(2). Output from the model tree analyses produced multivariate linear equations which predicted tablet tensile strength, disintegration time, and drug dissolution profiles of similar quality to neural network models. However, additional and valuable knowledge hidden in the formulation database was extracted from these equations. It is concluded that, as a transparent technology, model trees are useful tools to formulators.
Assessment of State-of-the-Art Dust Emission Scheme in GEOS
NASA Technical Reports Server (NTRS)
Darmenov, Anton; Liu, Xiaohong; Prigent, Catherine
2017-01-01
The GEOS modeling system has been extended with a state-of-the-art parameterization of dust emissions based on the vertical flux formulation described in Kok et al. 2014. The new dust scheme was coupled with the GOCART and MAM aerosol models. In the present study we compare dust emissions, aerosol optical depth (AOD) and radiative fluxes from GEOS experiments with the standard and new dust emissions. AOD from the model experiments is also compared with AERONET and satellite-based data. Based on this comparative analysis we concluded that the new parameterization improves the GEOS capability to model dust aerosols originating from African sources; however, it leads to an overestimation of dust emissions from Asian and Arabian sources. Further regional tuning of key parameters controlling the threshold friction velocity may be required in order to achieve a more definitive and uniform improvement in the dust modeling skill.
Multiple re-encounter approach to radical pair reactions and the role of nonlinear master equations.
Clausen, Jens; Guerreschi, Gian Giacomo; Tiersch, Markus; Briegel, Hans J
2014-08-07
We formulate a multiple-encounter model of the radical pair mechanism that is based on a random coupling of the radical pair to a minimal model environment. These occasional pulse-like couplings correspond to the radical encounters and give rise to both dephasing and recombination. While this is in agreement with the original model of Haberkorn and its extensions that assume additional dephasing, we show how a nonlinear master equation may be constructed to describe the conditional evolution of the radical pairs prior to the detection of their recombination. We propose a nonlinear master equation for the evolution of an ensemble of independently evolving radical pairs whose nonlinearity depends on the record of the fluorescence signal. We also reformulate Haberkorn's original argument on the physicality of reaction operators using the terminology of quantum optics/open quantum systems. Our model allows one to describe multiple encounters within the exponential model and connects this with the master equation approach. We include hitherto neglected effects of the encounters, such as a separate dephasing in the triplet subspace, and predict potential new effects, such as Grover reflections of radical spins, that may be observed if the strength and time of the encounters can be experimentally controlled.
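For context, the Haberkorn reaction term referred to above is commonly written in the radical-pair literature as

\[
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho] \;-\; \frac{k_S}{2}\,\{P_S,\rho\} \;-\; \frac{k_T}{2}\,\{P_T,\rho\},
\]

where \(P_S\) and \(P_T\) project onto the singlet and triplet subspaces and \(k_S\), \(k_T\) are the corresponding recombination rates; the nonlinear master equation proposed above conditions this kind of evolution on the recorded fluorescence signal. The notation here is generic and may differ from the paper's.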
NASA Astrophysics Data System (ADS)
Ostrovskii, V. E.; Kadyshevich, E. A.
2014-04-01
To date, we have formulated and developed the Life Origination Hydrate Theory (LOH-Theory) and the Mitosis and Replication Hydrate Theory (MRH-Theory) as instruments for understanding the physical and chemical mechanisms applied by Nature for the origination and propagation of living matter. This work is aimed at coordinating these theories with paleontological and astrophysical knowledge and with hypotheses on the remote histories of the Earth and the Solar System.
NASA Astrophysics Data System (ADS)
Sadeghi, Morteza; Ghanbarian, Behzad; Horton, Robert
2018-02-01
Thermal conductivity is an essential component in multiphysics models and coupled simulation of heat transfer, fluid flow, and solute transport in porous media. In the literature, various empirical, semiempirical, and physical models were developed for thermal conductivity and its estimation in partially saturated soils. Recently, Ghanbarian and Daigle (GD) proposed a theoretical model, using the percolation-based effective-medium approximation, whose parameters are physically meaningful. The original GD model implicitly formulates thermal conductivity λ as a function of volumetric water content θ. For the sake of computational efficiency in numerical calculations, in this study, we derive an explicit λ(θ) form of the GD model. We also demonstrate that some well-known empirical models, e.g., Chung-Horton, widely applied in the HYDRUS model, as well as mixing models are special cases of the GD model under specific circumstances. Comparison with experiments indicates that the GD model can accurately estimate soil thermal conductivity.
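As a point of reference for the special cases mentioned above, the Chung-Horton empirical model is commonly written as λ(θ) = b1 + b2·θ + b3·θ^0.5 with soil-specific coefficients; the minimal sketch below evaluates that form with purely illustrative coefficient values, not values taken from the paper.

```python
import numpy as np

def thermal_conductivity_chung_horton(theta, b1, b2, b3):
    """Empirical Chung-Horton form: lambda(theta) = b1 + b2*theta + b3*sqrt(theta).

    theta    : volumetric water content (m^3/m^3)
    b1,b2,b3 : soil-specific coefficients (W m^-1 K^-1)
    """
    theta = np.asarray(theta, dtype=float)
    return b1 + b2 * theta + b3 * np.sqrt(theta)

# Hypothetical coefficients for a loamy soil, for illustration only.
theta = np.linspace(0.05, 0.45, 9)
lam = thermal_conductivity_chung_horton(theta, b1=0.24, b2=0.39, b3=1.53)
for th, l in zip(theta, lam):
    print(f"theta = {th:.2f}  lambda = {l:.2f} W/m/K")
```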
[Contribution of Chilean research to the formulation of national clinical guidelines].
Núñez, Paulina F; Torres, Adrián C; Armas, Rodolfo M
2014-12-01
In Chile, 80 diseases were included in a health care system called Health Care Guarantees (GES), and clinical guidelines were developed for their management. The aim of this study was to assess the scientific background of these guidelines and whether they were based on research financed by the Chilean National Commission for Science and Technology. The references of the 82 guidelines developed for the 80 diseases were reviewed, registering their number, authors, country of origin and funding source. The guidelines had a total of 6,604 references. Of these, only 185 were Chilean (2.8%) and five (0.08%) originated from research financed by the National Commission for Science and Technology. The contribution of research funded by national agencies to the formulation of clinical guidelines is minimal.
Reformulation of Stmerin(®) D CFC formulation using HFA propellants.
Murata, Saburo; Izumi, Takashi; Ito, Hideki
2013-01-01
Stmerin(®) D, in which the active ingredients were suspended in mixed CFCs (CFC-11/CFC-12/CFC-114), was reformulated using hydrofluoroalkanes (HFA-134a and HFA-227) as alternative propellants instead of chlorofluorocarbons (CFCs). Here, we report the suspension stability and spray performance of the original CFC formulation and a reformulation using HFAs. We prepared metered dose inhalers (MDIs) using HFAs with different surfactants and co-solvents, and investigated the effect on suspension stability by visual testing. We found that the drug suspension stability was poor in both HFAs, but was improved, particularly for HFA-227, by adding medium-chain triglycerides (MCT) to the formulation. However, the vapor pressure of HFA-227 is higher than that of the CFC mixture, and this increased the fine particle dose (FPD). Spray performance was adjusted by altering the actuator configuration, and the performance of different actuators was tested by cascade impaction. We found the spray performance could be controlled by the configuration of the actuator. A spray performance comparable to the original formulation was obtained with a 0.8 mm orifice diameter and a 90° cone angle. These results demonstrate that the reformulation of Stmerin(®) D using HFA-227 is feasible, by using MCT as a suspending agent and modifying the actuator configuration.
Kulinowski, Piotr; Dorozyński, Przemysław; Jachowicz, Renata; Weglarz, Władysław P
2008-11-04
Controlled release (CR) dosage forms are often based on polymeric matrices, e.g., sustained-release tablets and capsules. It is crucial to visualise and quantify the processes of hydrogel formation during the standard dissolution study. A method for imaging CR, polymer-based dosage forms during a dissolution study in vitro is presented. Imaging was performed non-invasively by means of magnetic resonance imaging (MRI). The study was designed to simulate in vivo conditions regarding temperature, volume, state and composition of the dissolution media. Two formulations of hydrodynamically balanced systems (HBS) were chosen as model CR dosage forms. HBS release the active substance in the stomach while floating on the surface of the gastric content. Time evolutions of the diffusion region, hydrogel formation region and "dry core" region were obtained during a dissolution study of L-dopa as a model drug in two simulated gastric fluids (i.e. in fed and fasted state). The method appears to be a very promising tool for examining properties of new formulations of CR, polymer-based dosage forms or for comparing generic and originator dosage forms before carrying out bioequivalence studies.
An eigenvalue approach to quantum plasmonics based on a self-consistent hydrodynamics method
NASA Astrophysics Data System (ADS)
Ding, Kun; Chan, C. T.
2018-02-01
Plasmonics has attracted much attention not only because it has useful properties such as strong field enhancement, but also because it reveals the quantum nature of matter. To handle quantum plasmonics effects, ab initio packages or empirical Feibelman d-parameters have been used to explore the quantum correction of plasmonic resonances. However, most of these methods are formulated within the quasi-static framework. The self-consistent hydrodynamics model offers a reliable approach to study quantum plasmonics because it can incorporate the quantum effect of the electron gas into classical electrodynamics in a consistent manner. Instead of the standard scattering method, we formulate the self-consistent hydrodynamics method as an eigenvalue problem to study quantum plasmonics with electrons and photons treated on the same footing. We find that the eigenvalue approach must involve a global operator, which originates from the energy functional of the electron gas. This manifests the intrinsic nonlocality of the response of quantum plasmonic resonances. Our model gives the analytical forms of quantum corrections to plasmonic modes, incorporating quantum electron spill-out effects and electrodynamical retardation. We apply our method to study the quantum surface plasmon polariton for a single flat interface.
Constitutive modeling of glassy shape memory polymers
NASA Astrophysics Data System (ADS)
Khanolkar, Mahesh
The aim of this research is to develop constitutive models for non-linear materials; here, issues related to developing a constitutive model for glassy shape memory polymers are addressed in detail. Shape memory polymers are novel materials that can be easily formed into complex shapes, retaining memory of their original shape even after undergoing large deformations. The temporary shape is stable, and return to the original shape is triggered by a suitable mechanism such as heating the polymer above a transition temperature. Glassy shape memory polymers are called glassy because the temporary shape is fixed by the formation of a glassy solid, while return to the original shape is due to the melting of this glassy phase. The constitutive model has been developed to capture the thermo-mechanical behavior of glassy shape memory polymers using elements of nonlinear mechanics and polymer physics. The key feature of this framework is that a body can exist stress free in numerous natural configurations, the underlying natural configuration of the body changing during the process, with the response of the body being elastic from these evolving natural configurations. The aim is to formulate a constitutive model for glassy shape memory polymers (GSMP) that takes into account the fact that the stress-strain response depends on the thermal expansion of the polymer. The model developed covers the original amorphous phase, the temporary glassy phase and the transition between these phases. The glass transition process has been modeled using a framework recently developed for studying crystallization in polymers, based on the theory of multiple natural configurations. Using the same framework, the melting of the glassy phase is also modeled to capture the return of the polymer to its original shape. The effect of nanoreinforcement on the response of glassy shape memory polymers is studied and a model is developed. In addition to modeling and solving boundary value problems for GSMPs, a problem of importance for crystallizable shape memory polymers (CSMP), specifically a shape memory cycle (torsion of a cylinder), is solved using the developed crystallizable shape memory polymer model. To solve complex boundary value problems in realistic geometries, a user material subroutine (UMAT) for the GSMP model has been developed for use in conjunction with the commercial finite element software ABAQUS. The accuracy of the UMAT has been verified by testing it against problems for which the results are known.
Coupled variational formulations of linear elasticity and the DPG methodology
NASA Astrophysics Data System (ADS)
Fuentes, Federico; Keith, Brendan; Demkowicz, Leszek; Le Tallec, Patrick
2017-11-01
This article presents a general approach akin to domain-decomposition methods to solve a single linear PDE, but where each subdomain of a partitioned domain is associated to a distinct variational formulation coming from a mutually well-posed family of broken variational formulations of the original PDE. It can be exploited to solve challenging problems in a variety of physical scenarios where stability or a particular mode of convergence is desired in a part of the domain. The linear elasticity equations are solved in this work, but the approach can be applied to other equations as well. The broken variational formulations, which are essentially extensions of more standard formulations, are characterized by the presence of mesh-dependent broken test spaces and interface trial variables at the boundaries of the elements of the mesh. This allows necessary information to be naturally transmitted between adjacent subdomains, resulting in coupled variational formulations which are then proved to be globally well-posed. They are solved numerically using the DPG methodology, which is especially crafted to produce stable discretizations of broken formulations. Finally, expected convergence rates are verified in two different and illustrative examples.
Field theoretical prediction of a property of the tropical cyclone
NASA Astrophysics Data System (ADS)
Spineanu, F.; Vlad, M.
2014-01-01
Large-scale atmospheric vortices (tropical cyclones, tornadoes) are complex physical systems combining thermodynamic and fluid-mechanical processes. The late phase of the evolution towards stationarity consists of vorticity concentration, a well-known tendency toward self-organization and a universal property of two-dimensional fluids. It may then be expected that the stationary state of the tropical cyclone has the same nature as the vortices of many other systems in nature: ideal (Euler) fluids, superconductors, Bose-Einstein condensates, cosmic strings, etc. Indeed, it was found that there is a description of the atmospheric vortex in terms of a classical field theory. It is compatible with the more conventional treatment based on conservation laws, but the field theoretical model reveals properties that are almost inaccessible to the conventional formulation: it identifies the stationary states as being close to self-duality. This is of the highest importance: self-duality is known to be the origin of all coherent structures known in natural systems. Therefore the field theoretical (FT) formulation finds that the quasi-coherent form of the atmospheric vortex (tropical cyclone) at stationarity is an expression of this particular property. In the present work we examine a strong property of the tropical cyclone, which arises naturally in the FT formulation: the equality of the masses of the particles associated with the matter field and the gauge field, respectively, in the FT model translates into the equality between the maximum radial extension of the tropical cyclone and the Rossby radius. For the cases where the FT model is a good approximation we calculate characteristic quantities of the tropical cyclone and find good agreement with observational data.
PEGylated PLGA-based nanoparticles targeting M cells for oral vaccination.
Garinot, Marie; Fiévez, Virginie; Pourcelle, Vincent; Stoffelbach, François; des Rieux, Anne; Plapied, Laurence; Theate, Ivan; Freichels, Hélène; Jérôme, Christine; Marchand-Brynaert, Jacqueline; Schneider, Yves-Jacques; Préat, Véronique
2007-07-31
To improve the efficiency of orally delivered vaccines, PEGylated PLGA-based nanoparticles displaying RGD molecules at their surface were designed to target human M cells. RGD grafting was performed by an original method called "photografting", which covalently linked RGD peptides mainly to the PEG moiety of the PCL-PEG included in the formulation. First, three non-targeted formulations with size and zeta potential adapted to M cell uptake and stable in gastro-intestinal fluids were developed. Their transport by an in vitro model of the human follicle-associated epithelium (co-cultures) was largely increased as compared to mono-cultures (Caco-2 cells). RGD-labelling of nanoparticles significantly increased their transport by co-cultures, due to interactions between the RGD ligand and the beta(1) integrins detected at the apical surface of the co-cultures. In vivo studies demonstrated that RGD-labelled nanoparticles particularly concentrated in M cells. Finally, ovalbumin-loaded nanoparticles were orally administered to mice and induced an IgG response, attesting to the antigen's ability to elicit an immune response after oral delivery.
Biomechanics as a window into the neural control of movement
2016-01-01
Abstract Biomechanics and motor control are discussed as parts of a more general science, physics of living systems. Major problems of biomechanics deal with exact definition of variables and their experimental measurement. In motor control, major problems are associated with formulating currently unknown laws of nature specific for movements by biological objects. Mechanics-based hypotheses in motor control, such as those originating from notions of a generalized motor program and internal models, are non-physical. The famous problem of motor redundancy is wrongly formulated; it has to be replaced by the principle of abundance, which does not pose computational problems for the central nervous system. Biomechanical methods play a central role in motor control studies. This is illustrated with studies with the reconstruction of hypothetical control variables and those exploring motor synergies within the framework of the uncontrolled manifold hypothesis. Biomechanics and motor control have to merge into physics of living systems, and the earlier this process starts the better. PMID:28149390
The fluorescent tracer experiment on Holiday Beach near Mugu Canyon, Southern California
Kinsman, Nicole; Xu, J. P.
2012-01-01
After revisiting sand tracer techniques originally developed in the 1960s, a range of fluorescent coating formulations were tested in the laboratory. Explicit steps are presented for the preparation of the formulation evaluated to have superior attributes, a thermoplastic pigment/dye in a colloidal mixture with a vinyl chloride/vinyl acetate copolymer. In September 2010, 0.59 cubic meters of fluorescent tracer material was injected into the littoral zone about 4 kilometers upcoast of Mugu submarine canyon in California. The movement of tracer was monitored in three dimensions over the course of 4 days using manual and automated techniques. Detailed observations of the tracer's behavior in the coastal zone indicate that this tracer successfully mimicked the native beach sand and similar methods could be used to validate models of tracer movement in this type of environment. Recommendations including how to time successful tracer studies and how to scale the field of view of automated camera systems are presented along with the advantages and disadvantages of the described tracer methodology.
NASA Astrophysics Data System (ADS)
Santos, M. V.; Lespinard, A. R.
2011-12-01
The shelf life of mushrooms is very limited since they are susceptible to physical and microbial attack; therefore they are usually blanched and immediately frozen for commercial purposes. The aim of this work was to develop a numerical model using the finite element technique to predict freezing times of mushrooms considering the actual shape of the product. The original heat transfer equation was reformulated using a combined enthalpy-Kirchhoff formulation, and an in-house computational program using Matlab 6.5 (MathWorks, Natick, Massachusetts) was developed, given the difficulties encountered when simulating this non-linear problem in commercial software. Digital images were used to generate the irregular contour and the domain discretization. The numerical predictions agreed with the experimental time-temperature curves during freezing of mushrooms (maximum absolute error <3.2°C), yielding accurate results with minimal computer processing times. The codes were then applied to determine required processing times for different operating conditions (external fluid temperatures and surface heat transfer coefficients).
Distributed Parameter Analysis of Pressure and Flow Disturbances in Rocket Propellant Feed Systems
NASA Technical Reports Server (NTRS)
Dorsch, Robert G.; Wood, Don J.; Lightner, Charlene
1966-01-01
A digital distributed parameter model for computing the dynamic response of propellant feed systems is formulated. The analytical approach used is an application of the wave-plan method of analyzing unsteady flow. Nonlinear effects are included. The model takes into account locally high compliances at the pump inlet and at the injector dome region. Examples of the calculated transient and steady-state periodic responses of a simple hypothetical propellant feed system to several types of disturbances are presented. Included are flow disturbances originating from longitudinal structural motion, gimbaling, throttling, and combustion-chamber coupling. The analytical method can be employed for analyzing developmental hardware and offers a flexible tool for the calculation of unsteady flow in these systems.
A Simplified Biosphere Model for Global Climate Studies.
NASA Astrophysics Data System (ADS)
Xue, Y.; Sellers, P. J.; Kinter, J. L.; Shukla, J.
1991-03-01
The Simple Biosphere Model (SiB) as described in Sellers et al. is a biophysically based model of land surface-atmosphere interaction. For some general circulation model (GCM) climate studies, further simplifications are desirable to achieve greater computational efficiency and, more importantly, to consolidate the parametric representation. Three major reductions in the complexity of SiB have been achieved in the present study. The diurnal variation of surface albedo is computed in SiB by means of a comprehensive yet complex calculation. Since the diurnal cycle is quite regular for each vegetation type, this calculation can be simplified considerably. The effect of root zone soil moisture on stomatal resistance is substantial, but the computation in SiB is complicated and expensive. We have developed approximations which simulate the effects of reduced soil moisture more simply, keeping the essence of the biophysical concepts used in SiB. The surface stress and the fluxes of heat and moisture between the top of the vegetation canopy and an atmospheric reference level have been parameterized in an off-line version of SiB based upon the studies by Businger et al. and Paulson. We have developed a linear relationship between the Richardson number and aerodynamic resistance. Finally, the second vegetation layer of the original model does not appear explicitly after simplification. Compared to the model of Sellers et al., we have reduced the number of input parameters from 44 to 21. A comparison of results using the reduced-parameter biosphere with those from the original formulation in a GCM and a zero-dimensional model shows the simplified version to reproduce the original results quite closely. After simplification, the computational requirement of SiB was reduced by about 55%.
Wang, Chen-Chao; Tejwani Motwani, Monica R; Roach, Willie J; Kay, Jennifer L; Yoo, Jaedeok; Surprenant, Henry L; Monkhouse, Donald C; Pryor, Timothy J
2006-03-01
Three near zero-order controlled-release pseudoephedrine hydrochloride (PEH) formulations demonstrating proportional release rates were developed using 3-Dimensional Printing (3-DP) technology. Mixtures of Kollidon SR and hydroxypropylmethyl cellulose (HPMC) were used as drug carriers. The release rates were adjusted by varying the Kollidon SR-HPMC ratio while keeping fabrication parameters constant. The dosage forms were composed of an immediate release core and a release rate regulating shell, fabricated with an aqueous PEH and an ethanolic triethyl citrate (TEC) binder, respectively. The dosage form design called for the drug to be released via diffusional pathways formed by HPMC in the shell matrix. The release rate was shown to increase correspondingly with the fraction of HPMC contained in the polymer blend. The designed formulations resulted in dosage forms that were insensitive to changes in pH of the dissolution medium, paddle stirring rate, and the presence/absence of a sinker. The near zero-order release properties were unchanged regardless of the dissolution test being performed on either single cubes or on a group of eight cubes encased within a gelatin capsule shell. The chemical and dissolution properties of the three formulations remained unchanged following 1 month's exposure to 25 degrees C/60% RH or 40 degrees C/75% RH environment under open container condition. The in vivo performance of the three formulations was evaluated using a single-dose, randomized, open-label, four-way crossover clinical study composed of 10 fasted healthy volunteers. The pharmacokinetic parameters were analyzed using a noncompartmental model. Qualitative rank order linear correlations between in vivo absorption profiles and in vitro dissolution parameters (with slope and intercept close to unity and origin, respectively) were obtained for all three formulations, indicating good support for a Level A in vivo/in vitro correlation.
Wang, Shujing; Zhang, Ning; Hu, Tao; Dai, Weiguo; Feng, Xiuying; Zhang, Xinyi; Qian, Feng
2015-12-07
Monoclonal antibodies display complicated solution properties in highly concentrated (>100 mg/mL) formulations, such as high viscosity, high aggregation propensity, and low stability, among others, originating from protein-protein interactions within the colloidal protein solution. These properties severely hinder the successful development of high-concentration mAb solutions for subcutaneous injection. We investigated the effects of several small-molecule excipients with diverse biophysical-chemical properties on the viscosity, aggregation propensity, and stability of two model IgG1 (JM1 and JM2) mAb formulations. These excipients include nine amino acids or their salt forms (Ala, Pro, Val, Gly, Ser, HisHCl, LysHCl, ArgHCl, and NaGlu), four representative salts (NaCl, NaAc, Na2SO4, and NH4Cl), and two chaotropic reagents (urea and GdnHCl). With only salts or amino acids in their salt forms, significant decreases in viscosity were observed for the JM1 (by up to 30-40%) and JM2 (by up to 50-80%) formulations, suggesting that charge-charge interaction between the mAbs dictates the high viscosity of these mAb formulations. Most of these viscosity-lowering excipients did not induce substantial protein aggregation or changes in the secondary structure of the mAbs, as evidenced by HPLC-SEC, DSC, and FT-IR analysis, even in the absence of common protein stabilizers such as sugars and surfactants. Therefore, amino acids in their salt forms and several common salts, such as ArgHCl, HisHCl, LysHCl, NaCl, Na2SO4, and NaAc, could potentially serve as viscosity-lowering excipients during high-concentration mAb formulation development.
Archaea: The First Domain of Diversified Life
Caetano-Anollés, Gustavo; Nasir, Arshan; Zhou, Kaiyue; Caetano-Anollés, Derek; Mittenthal, Jay E.; Sun, Feng-Jie; Kim, Kyung Mo
2014-01-01
The study of the origin of diversified life has been plagued by technical and conceptual difficulties, controversy, and apriorism. It is now popularly accepted that the universal tree of life is rooted in the akaryotes and that Archaea and Eukarya are sister groups to each other. However, evolutionary studies have overwhelmingly focused on nucleic acid and protein sequences, which partially fulfill only two of the three main steps of phylogenetic analysis: formulation of realistic evolutionary models and optimization of tree reconstruction. In the absence of character polarization, that is, the ability to identify ancestral and derived character states, any statement about the rooting of the tree of life should be considered suspect. Here we show that macromolecular structure and a new phylogenetic framework of analysis that focuses on the parts of biological systems instead of the whole provide both deep and reliable phylogenetic signal and enable us to put forth hypotheses of origin. We review over a decade of phylogenomic studies, which mine information in a genomic census of millions of encoded proteins and RNAs. We show how the use of process models of molecular accumulation that comply with Weston's generality criterion supports a consistent phylogenomic scenario in which the origin of diversified life can be traced back to the early history of Archaea. PMID:24987307
The Quasicontinuum Method: Overview, applications and current directions
NASA Astrophysics Data System (ADS)
Miller, Ronald E.; Tadmor, E. B.
2002-10-01
The Quasicontinuum (QC) Method, originally conceived and developed by Tadmor, Ortiz and Phillips [1] in 1996, has since seen a great deal of development and application by a number of researchers. The idea of the method is a relatively simple one. With the goal of modeling an atomistic system without explicitly treating every atom in the problem, the QC provides a framework whereby degrees of freedom are judiciously eliminated and force/energy calculations are expedited. This is combined with adaptive model refinement to ensure that full atomistic detail is retained in regions of the problem where it is required while continuum assumptions reduce the computational demand elsewhere. This article provides a review of the method, from its original motivations and formulation to recent improvements and developments. A summary of the important mechanics of materials results that have been obtained using the QC approach is presented. Finally, several related modeling techniques from the literature are briefly discussed. As an accompaniment to this paper, a website designed to serve as a clearinghouse for information on the QC method has been established at www.qcmethod.com. The site includes information on QC research, links to researchers, downloadable QC code and documentation.
An efficient formulation of robot arm dynamics for control and computer simulation
NASA Astrophysics Data System (ADS)
Lee, C. S. G.; Nigam, R.
This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.
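For orientation, closed-form second-order equations of the kind described above generically take the form

\[
\boldsymbol{\tau} \;=\; M(\mathbf{q})\,\ddot{\mathbf{q}} \;+\; C(\mathbf{q},\dot{\mathbf{q}})\,\dot{\mathbf{q}} \;+\; \mathbf{g}(\mathbf{q}),
\]

where \(M\) is the configuration-dependent inertia matrix, \(C\dot{\mathbf{q}}\) collects the velocity cross-product (Coriolis and centrifugal) terms, and \(\mathbf{g}\) is the gravity loading. This is only the generic structure, shown for context; it is not the paper's exact notation or coefficient expressions for the PUMA arm.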
NASA Astrophysics Data System (ADS)
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Fractional exponential Rabotnov integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying the Laplace and Fourier transforms. The simplified equations for the originals are written using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation allows the explicit model to be formulated using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all possible time domains.
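As an illustration of how a Padé approximant can bridge a short-time power-series expansion and long-time behaviour, the sketch below builds a [2/2] approximant from hypothetical Taylor coefficients (those of exp(-t), standing in for a relaxation function) using SciPy; the coefficients are not taken from the paper.

```python
import numpy as np
from scipy.interpolate import pade

# Hypothetical short-time Taylor coefficients (here those of exp(-t)),
# ordered from the constant term upward.
taylor = [1.0, -1.0, 0.5, -1.0 / 6.0, 1.0 / 24.0]

# [2/2] Pade approximant: a ratio of two quadratics sharing this expansion.
p, q = pade(taylor, 2)

t = np.linspace(0.0, 5.0, 6)
series = sum(c * t**k for k, c in enumerate(taylor))  # truncated power series
print("  t   truncated series   Pade [2/2]")
for ti, si in zip(t, series):
    # The truncated series degrades at long times; the Pade form stays bounded.
    print(f"{ti:4.1f}   {si:15.4f}   {p(ti) / q(ti):10.4f}")
```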
NASA Astrophysics Data System (ADS)
Aishah Syed Ali, Sharifah
2017-09-01
This paper considers the economic lot sizing problem in remanufacturing with separate setup (ELSRs), where remanufactured and new products are produced on dedicated production lines. Since this problem is NP-hard in general, leading to computationally inefficient and low-quality solutions, we present (a) a multicommodity formulation and (b) a strengthened formulation based on the a priori addition of valid inequalities in the space of the original variables, which are then compared with the Wagner-Whitin based formulation available in the literature. Computational experiments on a large number of test data sets are performed to evaluate the different approaches. The numerical results show that our strengthened formulation outperforms all the other tested approaches in terms of linear relaxation bounds. Finally, we conclude with future research directions.
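To make the lot-sizing structure concrete, the sketch below sets up the basic uncapacitated single-item building block (setup, production, inventory balance) that formulations such as those above extend; it uses PuLP with hypothetical data and is not the ELSRs multicommodity or strengthened formulation itself.

```python
import pulp

# Hypothetical 4-period instance of the basic lot-sizing building block.
demand = [20, 30, 0, 40]
setup_cost, hold_cost = 100.0, 1.0
T = range(len(demand))
big_m = sum(demand)

prob = pulp.LpProblem("lot_sizing_sketch", pulp.LpMinimize)
x = pulp.LpVariable.dicts("produce", T, lowBound=0)   # production quantity
s = pulp.LpVariable.dicts("stock", T, lowBound=0)     # end-of-period inventory
y = pulp.LpVariable.dicts("setup", T, cat="Binary")   # setup indicator

prob += pulp.lpSum(setup_cost * y[t] + hold_cost * s[t] for t in T)
for t in T:
    prev = s[t - 1] if t > 0 else 0
    prob += prev + x[t] == demand[t] + s[t]   # inventory balance
    prob += x[t] <= big_m * y[t]              # produce only if a setup occurs

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in T:
    print(t, x[t].value(), y[t].value(), s[t].value())
```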
A fast marching algorithm for the factored eikonal equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treister, Eran, E-mail: erantreister@gmail.com; Haber, Eldad, E-mail: haber@math.ubc.ca; Department of Mathematics, The University of British Columbia, Vancouver, BC
The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice for applying it involves FM methods because of the efficiency with which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss–Newton.
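For reference, the factored form referred to above is commonly obtained by writing the travel time as the product of a known factor and a smooth unknown factor,

\[
|\nabla T(\mathbf{x})| = \frac{1}{v(\mathbf{x})}, \qquad
T = T_0\,\tau, \qquad
T_0(\mathbf{x}) = \frac{|\mathbf{x}-\mathbf{x}_0|}{v(\mathbf{x}_0)},
\]

so that

\[
\bigl|\,\tau\,\nabla T_0 + T_0\,\nabla\tau\,\bigr| = \frac{1}{v(\mathbf{x})}.
\]

The point-source singularity is then carried analytically by \(T_0\), and the marching scheme updates only the smooth factor \(\tau\); the notation here is the common one and may differ from the paper's.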
The formulations of the AMS/EPA Regulatory Model Improvement Committee's applied air dispersion model (AERMOD) as related to the characterization of the planetary boundary layer are described. This is the first in a series of three articles. Part II describes the formulation of...
Model reduction method using variable-separation for stochastic saddle point problems
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2018-02-01
In this paper, we consider a variable-separation (VS) method to solve stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to obtain a low-rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variable-separation method, i.e., the variable-separation by penalty method. This avoids further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For applications to SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.
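For orientation, a stochastic saddle point problem in mixed formulation has the generic block structure

\[
A(\xi)\,u + B^{T} p = f, \qquad B\,u = g,
\]

whose well-posedness requires an inf-sup (LBB) condition; a common regularized/penalty variant replaces the second equation by \(B\,u - \varepsilon\,M p = g\) with a small \(\varepsilon > 0\) and a suitable operator \(M\), which removes the strict saddle point structure. This is generic notation shown for context and not necessarily the operators used in the paper.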
Supercomputer implementation of finite element algorithms for high speed compressible flows
NASA Technical Reports Server (NTRS)
Thornton, E. A.; Ramakrishnan, R.
1986-01-01
Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock-capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models is compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors of length of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extension of the vectorization procedure for predicting 2D viscous and 3D inviscid flows is demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures for realistic problems that require hundreds of thousands of nodes.
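As a brief reminder of the Taylor-Galerkin idea described above, for a conservation law \(\partial_t u + \nabla\cdot F(u) = 0\) the time expansion

\[
u^{n+1} \approx u^{n} + \Delta t\,\partial_t u^{n} + \tfrac{1}{2}\Delta t^{2}\,\partial_{tt} u^{n},
\qquad
\partial_t u^{n} = -\nabla\cdot F(u^{n}),
\qquad
\partial_{tt} u^{n} = \nabla\cdot\bigl(A(u^{n})\,\nabla\cdot F(u^{n})\bigr),
\]

with \(A = \partial F/\partial u\), is substituted before applying the Galerkin weighted residual statement, the second-order term supplying the stabilizing dissipation. This is the generic construction, not necessarily the exact variant implemented in the paper.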
Three-Dimensional Piecewise-Continuous Class-Shape Transformation of Wings
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2015-01-01
Class-Shape Transformation (CST) is a popular method for creating analytical representations of the surface coordinates of various components of aerospace vehicles. A wide variety of two- and three-dimensional shapes can be represented analytically using only a modest number of parameters, and the surface representation is smooth and continuous to as fine a degree as desired. This paper expands upon the original two-dimensional representation of airfoils to develop a generalized three-dimensional CST parametrization scheme that is suitable for a wider range of aircraft wings than previous formulations, including wings with significant non-planar shapes such as blended winglets and box wings. The method uses individual functions for the spanwise variation of airfoil shape, chord, thickness, twist, and reference axis coordinates to build up the complete wing shape. An alternative formulation parameterizes the slopes of the reference axis coordinates in order to relate the spanwise variation to the tangents of the sweep and dihedral angles. Also discussed are methods for fitting existing wing surface coordinates, including the use of piecewise equations to handle discontinuities, and mathematical formulations of geometric continuity constraints. A subsonic transport wing model is used as an example problem to illustrate the application of the methodology and to quantify the effects of piecewise representation and curvature constraints.
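For readers unfamiliar with the original two-dimensional CST representation that the three-dimensional scheme above builds on, the sketch below evaluates the usual class/shape form ζ(ψ) = C(ψ)·S(ψ) + ψ·Δζ_TE with a round-nose class function and a Bernstein-polynomial shape function; the coefficients are hypothetical and the code is an illustration, not the paper's parametrization.

```python
import numpy as np
from math import comb

def cst_airfoil(psi, coeffs, n1=0.5, n2=1.0, dz_te=0.0):
    """2-D class-shape transformation: zeta(psi) = C(psi) * S(psi) + psi * dz_te.

    psi    : chordwise coordinate x/c in [0, 1]
    coeffs : Bernstein weights defining the shape function S(psi)
    n1, n2 : class-function exponents (0.5, 1.0 gives a round-nose airfoil)
    dz_te  : trailing-edge thickness ratio
    """
    psi = np.asarray(psi, dtype=float)
    n = len(coeffs) - 1
    class_fn = psi**n1 * (1.0 - psi)**n2
    shape_fn = sum(a * comb(n, i) * psi**i * (1.0 - psi)**(n - i)
                   for i, a in enumerate(coeffs))
    return class_fn * shape_fn + psi * dz_te

# Hypothetical upper-surface coefficients, for illustration only.
psi = np.linspace(0.0, 1.0, 11)
zeta_upper = cst_airfoil(psi, coeffs=[0.17, 0.16, 0.15, 0.14])
print(np.round(zeta_upper, 4))
```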
Non Abelian T-duality in Gauged Linear Sigma Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bizet, Nana Cabo; Martínez-Merino, Aldo; Zayas, Leopoldo A. Pando
Abelian T-duality in Gauged Linear Sigma Models (GLSM) forms the basis of the physical understanding of Mirror Symmetry as presented by Hori and Vafa. We consider an alternative formulation of Abelian T-duality on GLSM’s as a gauging of a global U(1) symmetry with the addition of appropriate Lagrange multipliers. For GLSMs with Abelian gauge groups and without superpotential we reproduce the dual models introduced by Hori and Vafa. We extend the construction to formulate non-Abelian T-duality on GLSMs with global non-Abelian symmetries. The equations of motion that lead to the dual model are obtained for a general group; they depend in general on semi-chiral superfields, and for cases such as SU(2) they depend on twisted chiral superfields. We solve the equations of motion for an SU(2) gauged group with a choice of a particular Lie algebra direction of the vector superfield. This direction covers a non-Abelian sector that can be described by a family of Abelian dualities. The dual model Lagrangian depends on twisted chiral superfields and a twisted superpotential is generated. We explore some non-perturbative aspects by making an Ansatz for the instanton corrections in the dual theories. We verify that the effective potential for the U(1) field strength in a fixed configuration on the original theory matches the one of the dual theory. Imposing restrictions on the vector superfield, more general non-Abelian dual models are obtained. We analyze the dual models via the geometry of their susy vacua.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ninokata, H.; Deguchi, A.; Kawahara, A.
1995-09-01
A new void drift model for the subchannel analysis method is presented for the thermohydraulics calculation of two-phase flows in rod bundles, where the flow model uses a two-fluid formulation for the conservation of mass, momentum and energy. The void drift model is constructed based on experimental data obtained in a geometrically simple test section of two interconnected circular channels using air-water as the working fluids. The void drift force is assumed to be the origin of the void drift velocity components of the two-phase cross-flow in the gap area between two adjacent rods and to overcome the momentum exchanges at the phase interface and the wall-fluid interface. This void drift force is implemented in the cross flow momentum equations. Computational results have been successfully compared to available experimental data, including 3x3 rod bundle data.
Generalized constitutive equations for piezo-actuated compliant mechanism
NASA Astrophysics Data System (ADS)
Cao, Junyi; Ling, Mingxiang; Inman, Daniel J.; Lin, Jin
2016-09-01
This paper formulates analytical models to describe the static displacement and force interactions between generic serial-parallel compliant mechanisms and their loads by employing the matrix method. In keeping with the familiar piezoelectric constitutive equations, the generalized constitutive equations of the compliant mechanism represent the input-output displacement and force relations in the form of a generalized Hooke’s law and as analytical functions of physical parameters. Also significantly, a new model of the output displacement for a compliant mechanism interacting with piezo-stacks and elastic loads is deduced based on the generalized constitutive equations. Some original findings differing from the well-known constitutive performance of piezo-stacks are also given. The feasibility of the proposed models is confirmed by finite element analysis and by experiments under various elastic loads. The analytical models can be an insightful tool for predicting and optimizing the performance of a wide class of compliant mechanisms that simultaneously consider the influence of loads and piezo-stacks.
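For comparison, the familiar piezoelectric constitutive equations alluded to above are usually written in strain-charge form as

\[
\mathbf{S} = \mathbf{s}^{E}\,\mathbf{T} + \mathbf{d}^{\,t}\,\mathbf{E},
\qquad
\mathbf{D} = \mathbf{d}\,\mathbf{T} + \boldsymbol{\varepsilon}^{T}\,\mathbf{E},
\]

with strain \(\mathbf{S}\), stress \(\mathbf{T}\), electric field \(\mathbf{E}\), electric displacement \(\mathbf{D}\), compliance at constant field \(\mathbf{s}^{E}\), piezoelectric coefficients \(\mathbf{d}\) and permittivity at constant stress \(\boldsymbol{\varepsilon}^{T}\). The generalized constitutive equations above cast the compliant mechanism's input-output displacement and force relations in an analogous coupled-matrix form; only this standard reference form is shown here, not the paper's matrices.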
Olivares-Morales, Andrés; Ghosh, Avijit; Aarons, Leon; Rostami-Hodjegan, Amin
2016-11-01
A new minimal Segmented Transit and Absorption (mSAT) model has been recently proposed and combined with intrinsic intestinal effective permeability (P_eff,int) to predict the regional gastrointestinal (GI) absorption (f_abs) of several drugs. Herein, this model was extended and applied for the prediction of the oral bioavailability and pharmacokinetics of oxybutynin and its enantiomers, to provide a mechanistic explanation of the higher relative bioavailability observed for oxybutynin's modified-release OROS® formulation compared to its immediate-release (IR) counterpart. The expansion of the model involved the incorporation of mechanistic equations for the prediction of release, transit, dissolution, permeation and first-pass metabolism. The predicted pharmacokinetics of oxybutynin enantiomers after oral administration of both the IR and OROS® formulations were in close agreement with the observed data. The predicted absolute bioavailability for the IR formulation was within 5% of the observed value, and the model adequately predicted the higher relative bioavailability observed for the OROS® formulation vs. the IR counterpart. From the model predictions, it can be seen that the higher bioavailability observed for the OROS® formulation was mainly attributable to differences in intestinal availability (F_G) rather than to a higher colonic f_abs, thus confirming previous hypotheses. The predicted f_abs was almost 70% lower for the OROS® formulation than for the IR formulation, whereas F_G was almost eightfold higher than in the IR formulation. These results provide further support to the hypothesis of an increased F_G as the main factor responsible for the higher bioavailability of oxybutynin's OROS® formulation vs. the IR.
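The factors above combine multiplicatively in the usual decomposition of oral bioavailability; taking the abstract's approximate figures at face value and assuming hepatic availability \(F_H\) is unchanged between formulations (an assumption made purely for illustration),

\[
F_{\mathrm{oral}} = f_{\mathrm{abs}}\times F_{G}\times F_{H}
\quad\Rightarrow\quad
\frac{F_{\mathrm{OROS}}}{F_{\mathrm{IR}}} \approx 0.3 \times 8 \times 1 \;>\; 1,
\]

so the roughly 70% loss in fraction absorbed is more than offset by the roughly eightfold gain in intestinal availability.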
Two alternative ways for solving the coordination problem in multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
Two techniques are presented for formulating the coupling between levels in multilevel optimization by linear decomposition. They are proposed as improvements over the original formulation, now several years old, which relied on explicit equality constraints that application experience showed to occasionally cause numerical difficulties. The two new techniques represent the coupling without using explicit equality constraints, thus avoiding the above difficulties and also reducing the computational cost of the procedure. The old and new formulations are presented in detail and illustrated by a structural optimization example. A generic version of the improved algorithm is also developed for applications to multidisciplinary systems not limited to structures.
NASA Astrophysics Data System (ADS)
Cirilo-Lombardo, Diego Julio
2009-04-01
The physical meaning of the particularly simple non-degenerate supermetric, introduced in the previous part by the authors, is elucidated, and the possible connection with processes of topological origin in high energy physics is analyzed and discussed. A new possible mechanism for the localization of the fields in a particular sector of the supermanifold is proposed, and the similarities and differences with a 5-dimensional warped model are shown. The relation with gauge theories of supergravity based on the OSP(1/4) group is explicitly given and the possible original action is presented. We also show that in this non-degenerate super-model the physical states, in contrast with the basic states, are observables and can be interpreted as tomographic projections or generalized representations of operators belonging to the metaplectic group Mp(2). The advantage of geometrical formulations based on non-degenerate supermanifolds over degenerate ones is pointed out, and the description and analysis of some interesting aspects of the simplest Riemannian superspaces are presented from the point of view of the possible vacuum solutions.
Advanced Technology System Scheduling Governance Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ang, Jim; Carnes, Brian; Hoang, Thuc
In the fall of 2005, the Advanced Simulation and Computing (ASC) Program appointed a team to formulate a governance model for allocating resources and scheduling the stockpile stewardship workload on ASC capability systems. This update to the original document takes into account the new technical challenges and roles for advanced technology (AT) systems and the new ASC Program workload categories that must be supported. The goal of this updated model is to effectively allocate and schedule AT computing resources among all three National Nuclear Security Administration (NNSA) laboratories for weapons deliverables that merit priority on this class of resource. The process outlined below describes how proposed work can be evaluated and approved for resource allocations while preserving high effective utilization of the systems. This approach will provide the broadest possible benefit to the Stockpile Stewardship Program (SSP).
Silicate Inclusions in the Kodaikanal IIE Iron Meteorite
NASA Technical Reports Server (NTRS)
Kurat, G.; Varela, M. E.; Zinner, E.
2005-01-01
Silicate inclusions in iron meteorites display an astonishing chemical and mineralogical variety, ranging from chondritic to highly fractionated, silica- and alkali-rich assemblages. In spite of this, their origin is commonly considered to be a simple one: mixing of silicates, fractionated or unfractionated, with metal. The latter had to be liquid in order to accommodate the former in a pore-free way which all models accomplish by assuming shock melting. II-E iron meteorites are particularly interesting because they contain an exotic zoo of silicate inclusions, including some chemically strongly fractionated ones. They also pose a formidable conundrum: young silicates are enclosed by very old metal. This and many other incompatibilities between models and reality forced the formulation of an alternative genetic model for irons. Here we present preliminary findings in our study of Kodaikanal silicate inclusions.
Analytical and numerical analysis of frictional damage in quasi brittle materials
NASA Astrophysics Data System (ADS)
Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.
2016-07-01
Frictional sliding and crack growth are two main dissipation processes in quasi brittle materials. The frictional sliding along closed cracks is the origin of macroscopic plastic deformation while the crack growth induces a material damage. The main difficulty of modeling is to consider the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed. But there are so far no analytical solutions even for simple loading paths for the validation of such algorithms. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations are developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are exploited. The first one involves a coupled friction/damage correction scheme, which is consistent with the coupling nature of the constitutive model. The second one contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupling correction scheme is efficient to guarantee a systematic numerical convergence.
When growth models are not universal: evidence from marine invertebrates
Hirst, Andrew G.; Forster, Jack
2013-01-01
The accumulation of body mass, as growth, is fundamental to all organisms. Being able to understand which model(s) best describe this growth trajectory, both empirically and ultimately mechanistically, is an important challenge. A variety of equations have been proposed to describe growth during ontogeny. Recently, the West, Brown and Enquist (WBE) equation, formulated as part of the metabolic theory of ecology, has been proposed as a universal model of growth. This equation has the advantage of having a biological basis, but its ability to describe invertebrate growth patterns has not been well tested against other, simpler models. In this study, we collected data for 58 species of marine invertebrate from 15 different taxa. The data were fitted to three growth models (power, exponential and WBE), and the abilities of these models were examined using an information theoretic approach. Using Akaike information criteria, we found changes in mass through time to be best fitted by an exponential equation (in approx. 73% of cases). The WBE model predominantly overestimates body size in early ontogeny and underestimates it in later ontogeny; it was the best fit in approximately 14% of cases. The exponential model described growth well in nine taxa, whereas the WBE described growth well in one of the 15 taxa, the Amphipoda. Although the WBE has the advantage of being developed with an underlying proximate mechanism, it provides a poor fit to the majority of marine invertebrates examined here, including species with determinate and indeterminate growth types. In the original formulation of the WBE model, it was tested almost exclusively against vertebrates, to which it fitted well; the model does not, however, appear to be universal, given its poor ability to describe growth in benthic or pelagic marine invertebrates. PMID:23945691
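As a rough illustration of the information-theoretic comparison described above, the sketch below fits hypothetical mass-at-age data to exponential and power growth forms and compares them with a Gaussian-likelihood AIC; the data, starting values and helper names are invented placeholders, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented ontogenetic mass-at-age data (mg, days) -- purely illustrative.
t = np.array([1, 3, 5, 8, 12, 17, 23, 30], dtype=float)
m = np.array([0.02, 0.04, 0.08, 0.18, 0.45, 1.1, 2.9, 7.5])

def exponential(t, m0, g):   # m(t) = m0 * exp(g * t)
    return m0 * np.exp(g * t)

def power(t, a, b):          # m(t) = a * t**b
    return a * t**b

def aic(residuals, n_params):
    # Gaussian-likelihood AIC: n*ln(RSS/n) + 2k (constant terms dropped).
    n = residuals.size
    rss = np.sum(residuals**2)
    return n * np.log(rss / n) + 2 * n_params

p_exp, _ = curve_fit(exponential, t, m, p0=(0.02, 0.2))
p_pow, _ = curve_fit(power, t, m, p0=(0.02, 1.5), maxfev=10000)

aic_exp = aic(m - exponential(t, *p_exp), 2)
aic_pow = aic(m - power(t, *p_pow), 2)
print(f"AIC exponential: {aic_exp:.2f}, AIC power: {aic_pow:.2f}")
print("preferred:", "exponential" if aic_exp < aic_pow else "power")
```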
Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases.
Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N
2016-01-01
A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed.
Strehl-constrained reconstruction of post-adaptive optics data and the Software Package AIRY, v. 6.1
NASA Astrophysics Data System (ADS)
Carbillet, Marcel; La Camera, Andrea; Deguignet, Jérémy; Prato, Marco; Bertero, Mario; Aristidi, Éric; Boccacci, Patrizia
2014-08-01
We first briefly present the last version of the Software Package AIRY, version 6.1, a CAOS-based tool which includes various deconvolution methods, accelerations, regularizations, super-resolution, boundary effects reduction, point-spread function extraction/extrapolation, stopping rules, and constraints in the case of iterative blind deconvolution (IBD). Then, we focus on a new formulation of our Strehl-constrained IBD, here quantitatively compared to the original formulation for simulated near-infrared data of an 8-m class telescope equipped with adaptive optics (AO), showing their equivalence. Next, we extend the application of the original method to the visible domain with simulated data of an AO-equipped 1.5-m telescope, testing also the robustness of the method with respect to the Strehl ratio estimation.
NASA Astrophysics Data System (ADS)
Gorjiara, Tina; Hill, Robin; Kuncic, Zdenka; Baldock, Clive
2010-11-01
A major challenge in brachytherapy dosimetry is the measurement of steep dose gradients. This can be achieved with a high spatial resolution three-dimensional (3D) dosimeter. PRESAGE® is a polyurethane-based dosimeter which is suitable for 3D dosimetry. Since an ideal dosimeter is radiologically water equivalent, we have investigated the relative dose response of three different PRESAGE® formulations, two with a lower chloride and bromide content than the original one, for Cs-137 and Ir-192 brachytherapy sources. Doses were calculated using the EGSnrc Monte Carlo package. Our results indicate that PRESAGE® dosimeters are suitable for relative dose measurement of Cs-137 and Ir-192 brachytherapy sources and that the lower halogen content PRESAGE® dosimeters are more water equivalent than the original formulation.
Multi-Body Analysis of the 1/5 Scale Wind Tunnel Model of the V-22 Tiltrotor
NASA Technical Reports Server (NTRS)
Ghiringhelli, G. L.; Masarati, P.; Mantegazza, P.; Nixon, M. W.
1999-01-01
The paper presents a multi-body analysis of the 1/5 scale wind tunnel model of the V-22 tiltrotor, the Wing and Rotor Aeroelastic Testing System (WRATS), currently tested at NASA Langley Research Center. An original multi-body formulation has been developed at the Dipartimento di Ingegneria Aerospaziale of the Politecnico di Milano, Italy. It is based on the direct writing of the equilibrium equations of independent rigid bodies, connected by kinematic constraints that result in the addition of algebraic constraint equations, and by dynamic constraints that directly contribute to the equilibrium equations. The formulation has been extended to the simultaneous solution of interdisciplinary problems by modeling electric and hydraulic networks for aeroservoelastic problems. The code has been tailored to the modeling of rotorcraft while preserving complete generality. A family of aerodynamic elements has been introduced to model high aspect ratio aerodynamic surfaces, based on strip theory, with quasi-steady aerodynamic coefficients, compressibility, post-stall interpolation of experimental data, dynamic stall modeling, and radial flow drag. Different models for the induced velocity of the rotor can be used, from uniform velocity to dynamic inflow. A complete dynamic and aeroelastic analysis of the model of the V-22 tiltrotor has been performed to assess the validity of the formulation and to exploit the unique features of multi-body analysis with respect to conventional comprehensive rotorcraft codes: the ability to model the exact kinematics of mechanical systems, and the possibility to simulate unusual maneuvers and unusual flight conditions that are particular to the tiltrotor, e.g. the conversion maneuver. A complete modal validation of the analytical model has been performed to assess the ability to reproduce the correct dynamics of the system with a relatively coarse beam model of the semispan wing, pylon and rotor. Particular care has been taken to model the kinematics of the gimbal joint that characterizes the rotor hub, and of the control system, consisting of the entire swashplate mechanism. The kinematics of the fixed and rotating plates have been modeled, with variable length control links used to input the controls, the rotating flexible links, the pitch horns and the pitch bearings. The investigations took advantage of concurrent wind tunnel test runs, performed in August 1998, which allowed the acquisition of data specific to the multi-body analysis.
General Pharmacokinetic Model for Topically Administered Ocular Drug Dosage Forms.
Deng, Feng; Ranta, Veli-Pekka; Kidron, Heidi; Urtti, Arto
2016-11-01
In ocular drug development, an early estimate of drug behavior before any in vivo experiments is important. The pharmacokinetics (PK) and bioavailability depend not only on active compound and excipients but also on physicochemical properties of the ocular drug formulation. We propose to utilize PK modelling to predict how drug and formulational properties affect drug bioavailability and pharmacokinetics. A physiologically relevant PK model based on the rabbit eye was built to simulate the effect of formulation and physicochemical properties on PK of pilocarpine solutions and fluorometholone suspensions. The model consists of four compartments: solid and dissolved drug in tear fluid, drug in corneal epithelium and aqueous humor. Parameter values and in vivo PK data in rabbits were taken from published literature. The model predicted the pilocarpine and fluorometholone concentrations in the corneal epithelium and aqueous humor with a reasonable accuracy for many different formulations. The model includes a graphical user interface that enables the user to modify parameters easily and thus simulate various formulations. The model is suitable for the development of ophthalmic formulations and the planning of bioequivalence studies.
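The compartmental structure described above (solid and dissolved drug in tear fluid, corneal epithelium, aqueous humor) lends itself to a small linear ODE system. The sketch below is a minimal, hypothetical version of such a four-compartment model; all rate constants, the dose split and the time span are invented placeholders, not the published parameter values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/min); not the published values.
k_diss = 0.5    # dissolution of suspended drug in tear fluid
k_drain = 1.2   # drainage/turnover loss from tear fluid
k_tc = 0.05     # tear fluid -> corneal epithelium
k_ct = 0.01     # corneal epithelium -> tear fluid (back-diffusion)
k_ca = 0.02     # corneal epithelium -> aqueous humor
k_el = 0.03     # elimination from aqueous humor

def rhs(t, y):
    solid, tear, cornea, aqueous = y
    d_solid  = -k_diss * solid
    d_tear   =  k_diss * solid - (k_drain + k_tc) * tear + k_ct * cornea
    d_cornea =  k_tc * tear - (k_ct + k_ca) * cornea
    d_aq     =  k_ca * cornea - k_el * aqueous
    return [d_solid, d_tear, d_cornea, d_aq]

# Dose applied as a suspension: part solid, part already dissolved (amounts in ug).
y0 = [40.0, 10.0, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 240.0), y0, dense_output=True, max_step=1.0)
t = np.linspace(0, 240, 25)
print(np.round(sol.sol(t)[3], 3))   # drug amount in aqueous humor over time
```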
On Milne-Barbier-Unsöld relationships
NASA Astrophysics Data System (ADS)
Paletou, Frédéric
2018-04-01
This short review aims to clarify the origins of the so-called Eddington-Barbier relationships, which relate the emergent specific intensity and the flux to the photospheric source function at specific optical depths. Here we discuss the assumptions behind the original derivation of Barbier (1943). We also point to the fact that Milne had already formulated these two relations in 1921.
The implausibility of Mendel's theory before 1900.
Orel, V
Attention is paid to the plausibility of Mendel's terminology in formulating the research problem, in describing the experimental model and research method, and in explaining his theory in the historical context of the long-lasting enigma of generation, hybridization and heredity. The new research problem of heredity derived from the enigma of generation was plausible for the sheep breeders in Brno in 1836-1837, who also formulated the research question: what and how is inherited? But they did not find an approach to its experimental investigation. Later, in 1852, the research problem of heredity was formulated by the physiologist of the Göttingen University, R. Wagner, who also outlined the method of crossing animals or artificial fertilization of plants for the investigation of the enigma of generation and heredity. But he could not carry out the recommended experiments at the University, and his proposal remained without echo. Mendel first mentioned the motivation for his research arising from plant breeding experience and then from the experiments with plant crossing by botanists. He delivered his lectures in Brno to a community of naturalists who paid attention to the appearance of hybrids in nature but were not interested in plant breeding. After describing his research model and experimental method, Mendel presented the sequence of hypotheses proved in experiments and explained the origin and development of hybrids and, at the same time, the mechanism of fertilization and of transmission of traits, which was heredity, although he did not use the term. The listeners of his lectures and later the readers of his paper did not understand his explanation. ...
Differential equations with applications in cancer diseases.
Ilea, M; Turnea, M; Rotariu, M
2013-01-01
Mathematical modeling is a process by which a real-world problem is described by a mathematical formulation. Cancer modeling is a highly challenging problem at the frontier of applied mathematics. A variety of modeling strategies have been developed, each focusing on one or more aspects of cancer. The vast majority of mathematical models in cancer biology are formulated in terms of differential equations. We propose an original mathematical model with a small parameter for the interactions between two cancer cell sub-populations (proliferating and quiescent cells), together with a mathematical model of a vascular tumor. We work on the assumption that the quiescent cells' nutrient consumption takes place over a long time scale. One of the equations in the system includes the small parameter epsilon; the smallness of epsilon is relative to the size of the solution domain. In MATLAB simulations obtained under this assumption on the transition rate, we show a similar asymptotic behavior for two solutions of the perturbed problem. In this system, the small parameter is an asymptotic variable, different from the independent variable. The graphical output for the mathematical model of a vascular tumor shows the differences in the evolution of the tumor populations of proliferating, quiescent and necrotic cells. The nutrient concentration decreases sharply through the viable rim and tends to a constant level in the core due to the nearly complete necrosis in this region. Many mathematical models can be quantitatively characterized by ordinary differential equations or partial differential equations. The use of MATLAB in this article illustrates the important role of informatics in research on mathematical modeling. The study of avascular tumor growth is an exciting and important topic in cancer research and will profit considerably from theoretical input. We interpret these results as an argument for a permanent collaboration between mathematicians and medical oncologists.
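To illustrate the role of a small parameter in a two-population model of the kind discussed above, the sketch below integrates a hypothetical proliferating/quiescent system in which the quiescent equation evolves on a slow scale set by epsilon; the equations, rates and initial conditions are invented for illustration and are not the authors' model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tumor_rhs(t, y, eps, r, k_pq, k_qp, d_q):
    """Proliferating (p) and quiescent (q) sub-populations; the small
    parameter eps makes the quiescent equation evolve on a slow scale."""
    p, q = y
    dp = r * p * (1 - p) - k_pq * p + k_qp * q
    dq = eps * (k_pq * p - k_qp * q - d_q * q)
    return [dp, dq]

params = dict(r=0.4, k_pq=0.1, k_qp=0.05, d_q=0.02)   # hypothetical rates
y0 = [0.1, 0.0]

for eps in (0.1, 0.01):   # compare perturbed solutions as eps shrinks
    sol = solve_ivp(tumor_rhs, (0, 200), y0,
                    args=(eps, *params.values()), max_step=0.5)
    print(f"eps={eps}: p(T)={sol.y[0, -1]:.4f}, q(T)={sol.y[1, -1]:.4f}")
```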
NASA Astrophysics Data System (ADS)
Benettin, G.; Pasquali, S.; Ponno, A.
2018-05-01
FPU models, in dimension one, are perturbations either of the linear model or of the Toda model; perturbations of the linear model include the usual β-model, perturbations of Toda include the usual α+β model. In this paper we explore and compare two families, or hierarchies, of FPU models, closer and closer to either the linear or the Toda model, by computing numerically, for each model, the maximal Lyapunov exponent χ. More precisely, we consider statistically typical trajectories and study the asymptotics of χ for large N (the number of particles) and small ε (the specific energy E/N), and find, for all models, asymptotic power laws χ ≃ Cε^a, with C and a depending on the model. The asymptotics turns out to be, in general, rather slow, and producing accurate results requires a great computational effort. We also revisit and extend the analytic computation of χ introduced by Casetti, Livi and Pettini, originally formulated for the β-model. With great evidence, the theory extends successfully to all models of the linear hierarchy, but not to models close to Toda.
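The power-law asymptotics χ ≃ Cε^a can be estimated from a handful of (ε, χ) pairs by a least-squares fit in log-log coordinates, as in the minimal sketch below; the data points are invented stand-ins, since real values require long N-body integrations.

```python
import numpy as np

# Invented (epsilon, chi) pairs standing in for measured maximal Lyapunov
# exponents at several specific energies -- purely illustrative numbers.
eps = np.array([1e-4, 3e-4, 1e-3, 3e-3, 1e-2])
chi = np.array([2.1e-5, 9.8e-5, 4.6e-4, 2.2e-3, 1.0e-2])

# chi = C * eps**a  =>  log(chi) = log(C) + a * log(eps): ordinary least squares.
a, logC = np.polyfit(np.log(eps), np.log(chi), 1)
print(f"exponent a ≈ {a:.2f}, prefactor C ≈ {np.exp(logC):.3g}")
```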
NASA Astrophysics Data System (ADS)
Anees, Asim; Aryal, Jagannath; O'Reilly, Małgorzata M.; Gale, Timothy J.; Wardlaw, Tim
2016-12-01
A robust non-parametric framework, based on multiple Radial Basis Function (RBF) kernels, is proposed in this study for detecting land/forest cover changes using Landsat 7 ETM+ images. One widely used framework is to find change vectors (a difference image) and use a supervised classifier to differentiate between change and no-change. Bayesian classifiers, e.g. the Maximum Likelihood Classifier (MLC) and Naive Bayes (NB), are widely used probabilistic classifiers which assume parametric models, e.g. a Gaussian function, for the class conditional distributions. However, their performance can be limited if the data set deviates from the assumed model. The proposed framework exploits the useful properties of the Least Squares Probabilistic Classifier (LSPC) formulation, i.e. its non-parametric and probabilistic nature, to model class posterior probabilities of the difference image using a linear combination of a large number of Gaussian kernels. To this end, a simple technique based on 10-fold cross-validation is also proposed for tuning model parameters automatically instead of selecting a (possibly) suboptimal combination from pre-specified lists of values. The proposed framework has been tested and compared with the Support Vector Machine (SVM) and NB for detection of defoliation, caused by leaf beetles (Paropsisterna spp.) in Eucalyptus nitens and Eucalyptus globulus plantations of two test areas in Tasmania, Australia, using raw bands and band combination indices of Landsat 7 ETM+. It was observed that, due to its multi-kernel non-parametric formulation and probabilistic nature, the LSPC outperforms the parametric NB with Gaussian assumption in the change detection framework, with Overall Accuracy (OA) ranging from 93.6% (κ = 0.87) to 97.4% (κ = 0.94) against 85.3% (κ = 0.69) to 93.4% (κ = 0.85), and is more robust to changing data distributions. Its performance was comparable to SVM, with the added advantages of being probabilistic and capable of handling multi-class problems naturally in its original formulation.
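A minimal sketch of the kernel-based posterior model described above follows: class posteriors over a two-band difference image are modelled as a regularized least-squares combination of Gaussian kernels, with negative outputs clipped and the result normalized across classes. The toy data, kernel centres, bandwidth and regularization weight are all invented, and the code is a simplified LSPC-style illustration rather than the authors' implementation (which also tunes parameters by 10-fold cross-validation).

```python
import numpy as np

def lspc_fit(X, y, centers, sigma, lam=0.1):
    """Least-squares fit of class-posterior models built from Gaussian kernels,
    one coefficient vector per class (a minimal LSPC-style sketch)."""
    d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma**2))              # n_samples x n_centers
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    classes = np.unique(y)
    alphas = np.array([np.linalg.solve(A, Phi.T @ (y == c).astype(float))
                       for c in classes])
    return classes, alphas

def lspc_posterior(X, centers, sigma, classes, alphas):
    d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma**2))
    post = np.clip(Phi @ alphas.T, 0.0, None)        # negatives clipped to zero
    post /= post.sum(axis=1, keepdims=True) + 1e-12  # normalise across classes
    return post

# Toy "difference image" pixels: 2-band change vectors, labels 0=no-change, 1=change.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.3, size=(200, 2))             # no-change cluster
X1 = rng.normal(1.5, 0.5, size=(200, 2))              # change cluster
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 200)
centers = X[rng.choice(len(X), 50, replace=False)]    # kernel centres from the data
classes, alphas = lspc_fit(X, y, centers, sigma=0.8)
post = lspc_posterior(X, centers, 0.8, classes, alphas)
print("training accuracy:", (classes[post.argmax(1)] == y).mean())
```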
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pennington, J.C.; Theriot, E.A.
1983-06-01
A formulation of the fungus Cercospora rodmanii Conway has been produced as a biocontrol of waterhyacinth (Eichhornia crassipes (Mart.) Solms.). To ensure the most efficient germination of the formulation, 12 potential enhancing agents were tested for addition during the spray application. The agents were aspartic acid, glucose, glutamic acid, gum xanthan, nutrient agar, Ortho X-77 Spreader, Tween 20, Tween 60, Tween 80, sodium alginate, Super Slupper, and yeast extract. Compatibility of test agents and combinations of test agents with two lots of the formulation was determined in the laboratory. All combinations of test agents were compatible with both lots of the C. rodmanii formulation. The C. rodmanii formulation was sprayed with test agents on waterhyacinth pseudolaminae. Damage was monitored each week for 8 weeks by assigning a disease index to each original and new pseudolamina. No spots having characteristics suggestive of C. rodmanii infection were observed at any time during the study. Lack of infectivity could be remedied by isolating a virulent strain of C. rodmanii from the field. Agents determined to be compatible in this study could then be reexamined for enhancing infectivity on a virulent C. rodmanii formulation. 14 references, 2 figures, 5 tables.
Low Temperature Reactivities of Ultra-High Temperature Ceramics (Hf-X System)
2006-03-01
as interacting fillers with the preceramic polymer formulations. In situ formation of the SiC phase was also evaluated as a practical approach in... silicon (reaction-bonded SiC), which was introduced either as a powder mixed in the original composite formulation or as a subsequent infiltrant that... and their aerospace and turbine applications has led to a renewal of activities to fabricate MB2/SiC composites as the materials of choice, because
2016-09-07
approach in co-simulation with fluid-dynamics solvers is used. An original variational formulation is developed for the inverse problem of... by the inverse solution meshing. The same approach is used to map the structural and fluid interface kinematics and loads during the fluid-structure... co-simulation. The inverse analysis is verified by reconstructing the deformed solution obtained with a corresponding direct formulation, based on
A hybridizable discontinuous Galerkin method for modeling fluid-structure interaction
NASA Astrophysics Data System (ADS)
Sheldon, Jason P.; Miller, Scott T.; Pitt, Jonathan S.
2016-12-01
This work presents a novel application of the hybridizable discontinuous Galerkin (HDG) finite element method to the multi-physics simulation of coupled fluid-structure interaction (FSI) problems. Recent applications of the HDG method have primarily been for single-physics problems including both solids and fluids, which are necessary building blocks for FSI modeling. Utilizing these established models, HDG formulations for linear elastostatics, a nonlinear elastodynamic model, and arbitrary Lagrangian-Eulerian Navier-Stokes are derived. The elasticity formulations are written in a Lagrangian reference frame, with the nonlinear formulation restricted to hyperelastic materials. With these individual solid and fluid formulations, the remaining challenge in FSI modeling is coupling together their disparate mathematics on the fluid-solid interface. This coupling is presented, along with the resultant HDG FSI formulation. Verification of the component models, through the method of manufactured solutions, is performed and each model is shown to converge at the expected rate. The individual components, along with the complete FSI model, are then compared to the benchmark problems proposed by Turek and Hron [1]. The solutions from the HDG formulation presented in this work trend towards the benchmark as the spatial polynomial order and the temporal order of integration are increased.
Kulinowski, Piotr; Woyna-Orlewicz, Krzysztof; Rappen, Gerd-Martin; Haznar-Garbacz, Dorota; Węglarz, Władysław P; Dorożyński, Przemysław P
2015-04-30
Motivation for the study was the lack of dedicated and effective research and development (R&D) in vitro methods for oral, generic, modified-release formulations. The purpose of the research was to assess a multimodal in vitro methodology for further bioequivalence study risk minimization. The principal results of the study are as follows: (i) A pharmaceutically equivalent quetiapine fumarate extended release dosage form of Seroquel XR was developed using a quality by design/design of experiment (QbD/DoE) paradigm. (ii) The developed formulation was then compared with the originator using X-ray microtomography, magnetic resonance imaging and texture analysis. Despite similarity in terms of the compendial dissolution test, the developed and original dosage forms differed in micro/meso structure and consequently in mechanical properties. (iii) These differences were found to be the key factors in the failure of the biorelevant dissolution test using the stress dissolution apparatus. The major conclusions are as follows: (i) Imaging methods allow assessment of the internal features of the hydrating extended release matrix and, together with the stress dissolution test, allow the design of generic formulations to be rationalized at the in vitro level. (ii) The technological impact on formulation properties, e.g. on pore formation in hydrating matrices, cannot be overlooked when designing modified-release dosage forms.
Rodríguez, Luis A García; Hernández-Díaz, Sonia; de Abajo, Francisco J
2001-01-01
Aims Because of the widespread use of aspirin for prevention of cardiovascular diseases, side-effects associated with thromboprophylactic doses are of interest. This study summarizes the relative risk (RR) for serious upper gastrointestinal complications (UGIC) associated with aspirin exposure in general and with specific aspirin doses and formulations in particular. Methods After a systematic review, 17 original epidemiologic studies published between 1990 and 2001 were selected according to predefined criteria. Heterogeneity of effects was explored. Pooled estimates were calculated according to different study characteristics and patterns of aspirin use. Results The overall relative risk of UGIC associated with aspirin use was 2.2 (95% confidence interval (CI): 2.1, 2.4) for cohort studies and nested case-control studies and 3.1 (95% CI: 2.8, 3.3) for non-nested case-control studies. Original studies found a dose-response relationship between UGIC and aspirin, although the risk was still elevated for doses of 300 mg per day or lower. The summary RR was 2.6 (95% CI: 2.3, 2.9) for plain, 5.3 (95% CI: 3.0, 9.2) for buffered, and 2.4 (95% CI: 1.9, 2.9) for enteric-coated aspirin formulations. Conclusions Aspirin was associated with UGIC even when used at low doses or in buffered or enteric-coated formulations. The latter findings may be partially explained by channeling of susceptible patients to these formulations. PMID:11736865
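For readers unfamiliar with how such summary relative risks are pooled, the sketch below shows a standard fixed-effect, inverse-variance combination of study-level log relative risks; the study values are hypothetical and are not the estimates reported above.

```python
import numpy as np

# Hypothetical study-level relative risks and 95% CIs (not the reviewed studies).
rr    = np.array([2.0, 2.6, 1.8, 3.1, 2.3])
lower = np.array([1.5, 1.9, 1.2, 2.2, 1.7])
upper = np.array([2.7, 3.6, 2.7, 4.4, 3.1])

log_rr = np.log(rr)
se = (np.log(upper) - np.log(lower)) / (2 * 1.96)   # SE of log RR from the CI
w = 1.0 / se**2                                      # inverse-variance weights

pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}, {ci[1]:.2f})")
```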
Xu, Zhiliang; Chen, Xu-Yan; Liu, Yingjie
2014-01-01
We present a new formulation of the Runge-Kutta discontinuous Galerkin (RKDG) method [9, 8, 7, 6] for solving conservation laws with increased CFL numbers. The new formulation requires the computed RKDG solution in a cell to satisfy additional conservation constraints in adjacent cells and does not increase the complexity or change the compactness of the RKDG method. Numerical computations for solving one-dimensional and two-dimensional scalar and systems of nonlinear hyperbolic conservation laws are performed with approximate solutions represented by piecewise quadratic and cubic polynomials, respectively. The hierarchical reconstruction [17, 33] is applied as a limiter to eliminate spurious oscillations in discontinuous solutions. From both numerical experiments and the analytic estimate of the CFL number of the newly formulated method, we find that: 1) the new formulation improves the CFL number over the original RKDG formulation by at least three times or more and thus reduces the overall computational cost; and 2) the new formulation essentially does not compromise the resolution of the numerical solutions of shock wave problems compared with those computed by the RKDG method. PMID:25414520
Automatic query formulations in information retrieval.
Salton, G; Buckley, C; Fox, E A
1983-07-01
Modern information retrieval systems are designed to supply relevant information in response to requests received from the user population. In most retrieval environments the search requests consist of keywords, or index terms, interrelated by appropriate Boolean operators. Since it is difficult for untrained users to generate effective Boolean search requests, trained search intermediaries are normally used to translate original statements of user need into useful Boolean search formulations. Methods are introduced in this study which reduce the role of the search intermediaries by making it possible to generate Boolean search formulations completely automatically from natural language statements provided by the system patrons. Frequency considerations are used automatically to generate appropriate term combinations as well as Boolean connectives relating the terms. Methods are covered to produce automatic query formulations both in a standard Boolean logic system, as well as in an extended Boolean system in which the strict interpretation of the connectives is relaxed. Experimental results are supplied to evaluate the effectiveness of the automatic query formulation process, and methods are described for applying the automatic query formulation process in practice.
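A deliberately simplified sketch of frequency-driven Boolean query construction is shown below: recurring content terms in the user's statement are treated as central and joined with AND, while the remaining terms are offered as OR'd alternatives. The stopword list, cutoff and grouping rule are invented simplifications, not the procedure of the paper.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "for", "on", "to", "and", "or",
             "is", "are", "with", "about", "any", "information"}

def boolean_query(request, high_freq_cutoff=2):
    """Join frequently repeated content terms with AND; group the remaining
    content terms as OR'd alternatives (a toy frequency-based heuristic)."""
    terms = [t for t in re.findall(r"[a-z]+", request.lower())
             if t not in STOPWORDS]
    counts = Counter(terms)
    core = sorted(t for t, c in counts.items() if c >= high_freq_cutoff)
    rest = sorted(t for t, c in counts.items() if c < high_freq_cutoff)
    clauses = core + (["(" + " OR ".join(rest) + ")"] if rest else [])
    return " AND ".join(clauses)

request = ("retrieval of documents about automatic retrieval systems, "
           "Boolean search formulation and query construction")
print(boolean_query(request))
```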
Le Châtelier reciprocal relations and the mechanical analog
NASA Astrophysics Data System (ADS)
Gilmore, Robert
1983-08-01
Le Châtelier's principle is discussed carefully in terms of two sets of simple thermodynamic examples. The principle is then formulated quantitatively for general thermodynamic systems. The formulation is in terms of a perturbation-response matrix, the Le Châtelier matrix [L]. Le Châtelier's principle is contained in the diagonal elements of this matrix, all of which exceed one. These matrix elements describe the response of a system to a perturbation of either its extensive or intensive variables. These response ratios are inverses of each other. The Le Châtelier matrix is symmetric, so that a new set of thermodynamic reciprocal relations is derived. This quantitative formulation is illustrated by a single simple example which includes the original examples and shows the reciprocities among them. The assumptions underlying this new quantitative formulation of Le Châtelier's principle are general and applicable to a wide variety of nonthermodynamic systems. Le Châtelier's principle is formulated quantitatively for mechanical systems in static equilibrium, and mechanical examples of this formulation are given.
Toward Improved Fidelity of Thermal Explosion Simulations
NASA Astrophysics Data System (ADS)
Nichols, Albert; Becker, Richard; Burnham, Alan; Howard, W. Michael; Knap, Jarek; Wemhoff, Aaron
2009-06-01
We present results of an improved thermal/chemical/mechanical model of HMX based explosives like LX04 and LX10 for thermal cook-off. The original HMX model and analysis scheme were developed by Yoh et al. for use in the ALE3D modeling framework. The improvements were concentrated in four areas. First, we added porosity to the chemical material model framework in ALE3D used to model HMX explosive formulations, to handle the roughly 2% porosity in solid explosives. Second, we improved the HMX reaction network, which included the addition of a reactive phase change model based on work by Henson et al. Third, we added early decomposition gas species to the CHEETAH material database to improve equations of state for gaseous intermediates and products. Finally, we improved the implicit mechanics module in ALE3D to more naturally handle the long time scales associated with thermal cook-off. The application of the resulting framework to the analysis of the Scaled Thermal Explosion (STEX) experiments will be discussed.
Stochastic Sznajd Model in Open Community
NASA Astrophysics Data System (ADS)
Emmert-Streib, Frank
We extend the Sznajd model for opinion formation by introducing persuasion probabilities for opinions. Moreover, we couple the system to an environment which mimics the application of the opinion. This results in a feedback, representing single-state opinion transitions as opposed to the two-state opinion transitions used when persuading other people. We call this model opinion formation in an open community (OFOC). It can be seen as a stochastic extension of the Sznajd model for an open community, because a special choice of parameters recovers the original Sznajd model. We demonstrate the effect of feedback in the OFOC model by applying it to a scenario in which, e.g., opinion B is worse than opinion A but more easily explained to other people. Casually formulated, we analyzed the question of how much better one has to be in order to persuade other people, provided the opinion is worse. Our results reveal a linear relation between the transition probability for opinion B and the influence of the environment on B.
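The persuasion-probability extension can be illustrated with a small Monte Carlo sketch of a one-dimensional Sznajd-type chain, shown below; the update rule, lattice size and probabilities are illustrative assumptions, and the environmental feedback of the full OFOC model is not included.

```python
import numpy as np

def sznajd_step(spins, p_persuade, rng):
    """One update of a 1D Sznajd-type chain: a concordant pair (i, i+1)
    persuades its outer neighbours with an opinion-dependent probability.
    The opinion-dependent probability is the illustrative extension here."""
    n = len(spins)
    i = rng.integers(0, n)
    j = (i + 1) % n
    if spins[i] == spins[j]:
        opinion = int(spins[i])
        if rng.random() < p_persuade[opinion]:
            spins[(i - 1) % n] = opinion
            spins[(j + 1) % n] = opinion

rng = np.random.default_rng(1)
spins = rng.integers(0, 2, size=200)        # opinions A=0, B=1
p_persuade = {0: 0.5, 1: 0.8}               # B is "easier to explain" (hypothetical)
for _ in range(100_000):
    sznajd_step(spins, p_persuade, rng)
print("fraction holding opinion B:", spins.mean())
```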
NASA Astrophysics Data System (ADS)
Monson, D. J.; Seegmiller, H. L.; McConnaughey, P. K.
1990-06-01
In this paper, experimental measurements are compared with Navier-Stokes calculations using seven different turbulence models for the internal flow in a two-dimensional U-duct. The configuration is representative of many internal flows of engineering interest that experience strong curvature. In an effort to improve agreement, this paper tests several versions of the two-equation k-epsilon turbulence model, including the standard version, an extended version with a production range time scale, and a version that includes curvature time scales. Each is tested in its high and low Reynolds number formulations. Calculations using these new models and the original mixing length model are compared here with measurements of mean and turbulence velocities, static pressure and skin friction in the U-duct at two Reynolds numbers. The comparisons show that only the low Reynolds number version of the extended k-epsilon model does a reasonable job of predicting the important features of this flow at both Reynolds numbers tested.
A reduced-order model from high-dimensional frictional hysteresis
Biswas, Saurabh; Chatterjee, Anindya
2014-01-01
Hysteresis in material behaviour includes both signum nonlinearities as well as high dimensionality. Available models for component-level hysteretic behaviour are empirical. Here, we derive a low-order model for rate-independent hysteresis from a high-dimensional massless frictional system. The original system, being given in terms of signs of velocities, is first solved incrementally using a linear complementarity problem formulation. From this numerical solution, to develop a reduced-order model, basis vectors are chosen using the singular value decomposition. The slip direction in generalized coordinates is identified as the minimizer of a dissipation-related function. That function includes terms for frictional dissipation through signum nonlinearities at many friction sites. Luckily, it allows a convenient analytical approximation. Upon solution of the approximated minimization problem, the slip direction is found. A final evolution equation for a few states is then obtained that gives a good match with the full solution. The model obtained here may lead to new insights into hysteresis as well as better empirical modelling thereof. PMID:24910522
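The basis-selection step mentioned above (choosing reduced coordinates from the singular value decomposition of computed solutions) can be sketched as follows; the snapshot data, energy threshold and dimensions are invented, and the slip-direction minimization of the actual model is not reproduced here.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is the state of a high-dimensional
# frictional system at one load increment (here just synthetic low-rank data).
rng = np.random.default_rng(0)
n_dof, n_snap = 500, 120
modes = rng.normal(size=(n_dof, 3))                    # three dominant directions
weights = rng.normal(size=(3, n_snap)) * np.array([[10.0], [2.0], [0.3]])
snapshots = modes @ weights + 0.01 * rng.normal(size=(n_dof, n_snap))

# Reduced basis from the singular value decomposition of the snapshots.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999) + 1)            # keep 99.9% of the energy
basis = U[:, :r]
print(f"retained {r} basis vectors out of {min(n_dof, n_snap)}")

# A reduced state q reconstructs the full state as basis @ q.
q = basis.T @ snapshots[:, 0]
print("reconstruction error:", np.linalg.norm(basis @ q - snapshots[:, 0]))
```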
Some observations on the mechanism of aircraft wing rock
NASA Technical Reports Server (NTRS)
Hwang, C.; Pi, W. S.
1979-01-01
A scale model of the Northrop F-5A was tested in NASA Ames Research Center Eleven-Foot Transonic Tunnel to simulate the wing rock oscillations in a transonic maneuver. For this purpose, a flexible model support device was designed and fabricated, which allowed the model to oscillate in roll at the scaled wing rock frequency. Two tunnel entries were performed to acquire the pressure (steady state and fluctuating) and response data when the model was held fixed and when it was excited by flow to oscillate in roll. Based on these data, a limit cycle mechanism was identified, which supplied energy to the aircraft model and caused the Dutch roll type oscillations, commonly called wing rock. The major origin of the fluctuating pressures that contributed to the limit cycle was traced to the wing surface leading edge stall and the subsequent lift recovery. For typical wing rock oscillations, the energy balance between the pressure work input and the energy consumed by the model's aerodynamic and mechanical damping was formulated and numerical data presented.
Some observations on the mechanism of aircraft wing rock
NASA Technical Reports Server (NTRS)
Hwang, C.; Pi, W. S.
1978-01-01
A pressure scale model of Northrop F-5A was tested in NASA Ames Research Center Eleven-Foot Transonic Tunnel to simulate the wing rock oscillations in a transonic maneuver. For this purpose, a flexible model support device was designed and fabricated which allowed the model to oscillate in roll at the scaled wing rock frequency. Two tunnel entries were performed to acquire the pressure (steady state and fluctuating) and response data when the model was held fixed and when it was excited by flow to oscillate in roll. Based on these data, a limit cycle mechanism was identified which supplied energy to the aircraft model and caused the Dutch roll type oscillations, commonly called wing rock. The major origin of the fluctuating pressures which contributed to the limit cycle was traced to the wing surface leading edge stall and the subsequent lift recovery. For typical wing rock oscillations, the energy balance between the pressure work input and the energy consumed by the model aerodynamic and mechanical damping was formulated and numerical data presented.
NASA Astrophysics Data System (ADS)
Jafari, Azadeh; Deville, Michel O.; Fiétier, Nicolas
2008-09-01
This study discusses the capability of the constitutive laws for the matrix logarithm of the conformation tensor (LCT model) within the framework of the spectral element method. High Weissenberg number problems (HWNP) usually produce a lack of convergence of the numerical algorithms. Even though the question of whether the HWNP is a purely numerical problem or rather a breakdown of the constitutive law of the model has remained somewhat of a mystery, it has been recognized that the selection of an appropriate constitutive equation constitutes a very crucial step, although implementing a suitable numerical technique is still important for successful discrete modeling of non-Newtonian flows. The LCT formulation of the viscoelastic equations, originally suggested by Fattal and Kupferman, is applied to the two-dimensional (2D) FENE-CR model. Planar Poiseuille flow is considered as a benchmark problem to test this representation at high Weissenberg number. The numerical results are compared with the numerical solution of the standard constitutive equation.
Fundamental modeling of pulverized coal and coal-water slurry combustion in a gas turbine combustor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatwani, A.; Turan, A.; Hals, F.
1988-01-01
This work describes the essential features of a coal combustion model which is incorporated into a three-dimensional, steady-state, two-phase, turbulent, reactive flow code. The code is a modified and advanced version of the INTERN code originally developed at Imperial College, which has gone through many stages of development and validation. Swithenbank et al. have reported spray combustion model results for an experimental can combustor. The code has since been modified and made public under a US Army program. A number of code modifications and improvements have been made at ARL. The earlier version of the code was written for a small CDC machine, which relied on frequent disk/memory transfer and overlay features to carry out the computations, resulting in loss of computational speed. These limitations have now been removed. For spray applications, the fuel droplet vaporization generates gaseous fuel of uniform composition; hence the earlier formulation relied upon the use of the conserved scalar approximation to reduce the number of species equations to be solved. In applications related to coal fuel, coal pyrolysis leads to the formation of at least two different gaseous fuels and a solid fuel of different composition. The authors have therefore removed the conserved scalar formulation for the sake of generality and easy adaptability to complex fuel situations.
Mixture theory-based poroelasticity as a model of interstitial tissue growth
Cowin, Stephen C.; Cardoso, Luis
2011-01-01
This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues. PMID:22184481
Mixture theory-based poroelasticity as a model of interstitial tissue growth.
Cowin, Stephen C; Cardoso, Luis
2012-01-01
This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues.
A mathematical model of marine bacteriophage evolution.
Pagliarini, Silvia; Korobeinikov, Andrei
2018-03-01
To explore how particularities of a host cell-virus system, and in particular host cell replication, affect viral evolution, in this paper we formulate a mathematical model of marine bacteriophage evolution. The intrinsic simplicity of real-life phage-bacteria systems, and in particular aquatic systems, for which the assumption of homogeneous mixing is well justified, allows for a reasonably simple model. The model constructed in this paper is based upon the Beretta-Kuang model of bacteria-phage interaction in an aquatic environment (Beretta & Kuang 1998 Math. Biosci. 149 , 57-76. (doi:10.1016/S0025-5564(97)10015-3)). Compared to the original Beretta-Kuang model, the model assumes the existence of a multitude of viral variants which correspond to continuously distributed phenotypes. It is noteworthy that the model is mechanistic (at least as far as the Beretta-Kuang model is mechanistic). Moreover, this model does not include any explicit law or mechanism of evolution; instead it is assumed, in agreement with the principles of Darwinian evolution, that evolution in this system can occur as a result of random mutations and natural selection. Simulations with a simplistic linear fitness landscape (which is chosen for the convenience of demonstration only and is not related to any real-life system) show that a pulse-type travelling wave moving towards increasing Darwinian fitness appears in the phenotype space. This implies that the overall fitness of a viral quasi-species steadily increases with time. That is, the simulations demonstrate that for an uneven fitness landscape random mutations combined with a mechanism of natural selection (for this particular system this is given by the conspecific competition for the resource) lead to the Darwinian evolution. It is noteworthy that in this system the speed of propagation of this wave (and hence the rate of evolution) is not constant but varies, depending on the current viral fitness and the abundance of susceptible bacteria. A specific feature of the original Beretta-Kuang model is that this model exhibits a supercritical Hopf bifurcation, leading to the loss of stability and the rise of self-sustained oscillations in the system. This phenomenon corresponds to the paradox of enrichment in the system. It is remarkable that under the conditions that ensure the bifurcation in the Beretta-Kuang model, the viral evolution model formulated in this paper also exhibits a rise in self-sustained oscillations of the abundance of all interacting populations. The propagation of the travelling wave, however, remains stable under these conditions. The only visible impact of the oscillations on viral evolution is a lower speed of the evolution.
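A heavily simplified sketch of the mechanism described above (phenotype-dependent phage fitness plus random mutation at replication) is given below as a discretized ODE system in the spirit of a bacteria-phage model; the equations, the linear "fitness landscape" expressed through the adsorption rate k(x), and all parameter values are illustrative assumptions rather than the Beretta-Kuang equations or the authors' model.

```python
import numpy as np
from scipy.integrate import solve_ivp

nx = 60
x = np.linspace(0.0, 1.0, nx)
k = 0.02 * (1.0 + 2.0 * x)          # phenotype-dependent adsorption rate (toy landscape)
r, C = 1.0, 10.0                    # bacterial growth rate and carrying capacity
lam, burst, mu = 0.5, 50.0, 0.2     # lysis rate, burst size, phage decay
m = 0.05                            # fraction of offspring mutating to neighbours

def rhs(t, y):
    S = y[0]                        # susceptible bacteria
    I = y[1:1 + nx]                 # infected bacteria, per phage phenotype
    P = y[1 + nx:]                  # free phage, per phenotype
    infection = k * S * P
    dS = r * S * (1.0 - (S + I.sum()) / C) - infection.sum()
    dI = infection - lam * I
    births = burst * lam * I
    # random mutation: a fraction m of new phage lands on adjacent phenotypes
    # (offspring mutating past the ends of the axis are simply discarded)
    spread = (1.0 - m) * births
    spread[1:] += 0.5 * m * births[:-1]
    spread[:-1] += 0.5 * m * births[1:]
    dP = spread - infection - mu * P
    return np.concatenate(([dS], dI, dP))

y0 = np.concatenate(([1.0], np.zeros(nx), 0.1 * np.exp(-((x - 0.1) / 0.05)**2)))
sol = solve_ivp(rhs, (0.0, 400.0), y0, max_step=0.5)
P_final = sol.y[1 + nx:, -1]
print("mean phage phenotype:", float((x * P_final).sum() / P_final.sum()))
```

Tracking the mean phenotype over time in such a sketch shows it drifting toward higher-fitness values, which is the qualitative travelling-wave behaviour the abstract describes.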
Scherließ, Regina; Ajmera, Ankur; Dennis, Mike; Carroll, Miles W; Altrichter, Jens; Silman, Nigel J; Scholz, Martin; Kemter, Kristina; Marriott, Anthony C
2014-04-17
Currently, the need for cooled storage and the impossibility of terminal sterilisation are major drawbacks in vaccine manufacturing and distribution. To overcome current restrictions, a preclinical safety and efficacy study was conducted to evaluate new influenza A vaccine formulations regarding thermal resistance, resistance against irradiation-mediated damage and storage stability. We evaluated the efficacy of novel antigen stabilizing and protecting solutions (SPS) to protect influenza A(H1N1)pdm09 split virus antigen under experimental conditions in vitro and in vivo. Original or SPS re-buffered vaccine (Pandemrix) was spray-dried and terminally sterilised by irradiation with 25 kGy (e-beam). Antigen integrity was monitored by SDS-PAGE, dynamic light scattering, size exclusion chromatography and functional haemagglutination assays. In vitro screening experiments revealed a number of highly stable compositions containing glycyrrhizinic acid (GA) and/or chitosan. The most stable composition was selected for storage tests and in vivo assessment of seroconversion in non-human primates (Macaca fascicularis) using a prime-boost strategy. Redispersed formulations with the original adjuvant were administered intramuscularly. Storage data revealed high stability of protected vaccines at 4°C and 25°C, 60% relative humidity, for at least three months. Animals receiving original Pandemrix exhibited expected levels of seroconversion after 21 days (prime) and 48 days (boost), as assessed by haemagglutination inhibition and microneutralisation assays. Animals vaccinated with spray-dried and irradiated Pandemrix failed to exhibit seroconversion after 21 days, whereas spray-dried and irradiated, SPS-protected vaccines elicited seroconversion levels similar to those of animals vaccinated with original Pandemrix. Boost immunisation with SPS-protected vaccine resulted in a strong increase in seroconversion but had only minor effects in animals treated with non-SPS-protected vaccine. In conclusion, utilising the SPS formulation technology, spray-drying and terminal sterilisation of influenza A(H1N1)pdm09 split virus vaccine are feasible. The findings indicate the potential utility of such formulated vaccines, e.g. for needle-free vaccination routes and delivery to countries with uncertain cold chain facilities.
2013-01-01
Background By definition, a generic product is considered interchangeable with the innovator brand product. Controversy exists about interchangeability, and attention is predominantly directed to contaminants. In particular, for chronic, degenerative conditions such as Parkinson's disease (PD), generic substitution remains debated among physicians, patients and pharmacists. The objective of this study was to compare the pharmaceutical quality of seven generic levodopa/benserazide hydrochloride combination products marketed in Germany with the original product (Madopar® / Prolopa® 125, Roche, Switzerland) in order to evaluate the potential impact of Madopar® generics versus branded products for PD patients and clinicians. Methods Madopar® / Prolopa® 125 tablets and capsules were used as reference material. The generic products tested (all 100 mg/25 mg formulations) included four tablet and three capsule formulations. Colour, appearance of powder (capsules), disintegration and dissolution, mass of tablets and fill mass of capsules, content, identity and amounts of impurities were assessed along with standard physical and chemical laboratory tests developed and routinely practiced at Roche facilities. Results were compared to the original "shelf-life" specifications in use by Roche. Results Each of the seven generic products had one or two parameters outside the specifications. Deviations for the active ingredients ranged from +8.4% (benserazide) to −7.6% (levodopa) in two tablet formulations. Degradation products were measured in marked excess (+26.5%) in one capsule formulation. Disintegration time and dissolution for levodopa and benserazide hydrochloride at 30 min were within specifications for all seven generic samples analysed, however with some outliers. Conclusions Deviations for the active ingredients may go unnoticed by a new user of the generic product, but may entail clinical consequences when switching from original to generic during long-term therapy. Degradation products may pose a safety concern. Our results should prompt caution when prescribing a generic of Madopar®/Prolopa®, and also invite further investigations in view of a more comprehensive approach, both pharmaceutical and clinical. PMID:23617953
Initial Development and Validation of the Mexican Intercultural Competence Scale
Torres, Lucas
2013-01-01
The current project sought to develop the Mexican Intercultural Competence Scale (MICS), which assesses group-specific skills and attributes that facilitate effective cultural interactions, among adults of Mexican descent. Study 1 involved an Exploratory Factor Analysis (N = 184) that identified five factors including Ambition/Perseverance, Networking, the Traditional Latino Culture, Family Relationships, and Communication. In Study 2, a Confirmatory Factor Analysis provided evidence for the 5-factor model for adults of Mexican origin living in the Midwest (N = 341) region of the U.S. The general findings are discussed in terms of a competence-based formulation of cultural adaptation and include theoretical and clinical implications. PMID:24058890
Initial Development and Validation of the Mexican Intercultural Competence Scale.
Torres, Lucas
2013-01-01
The current project sought to develop the Mexican Intercultural Competence Scale (MICS), which assesses group-specific skills and attributes that facilitate effective cultural interactions, among adults of Mexican descent. Study 1 involved an Exploratory Factor Analysis (N = 184) that identified five factors including Ambition/Perseverance, Networking, the Traditional Latino Culture, Family Relationships, and Communication. In Study 2, a Confirmatory Factor Analysis provided evidence for the 5-factor model for adults of Mexican origin living in the Midwest (N = 341) region of the U.S. The general findings are discussed in terms of a competence-based formulation of cultural adaptation and include theoretical and clinical implications.
Vehicular headways on signalized intersections: theory, models, and reality
NASA Astrophysics Data System (ADS)
Krbálek, Milan; Šleis, Jiří
2015-01-01
We discuss statistical properties of vehicular headways measured on signalized crossroads. On the basis of mathematical approaches, we formulate theoretical and empirically inspired criteria for the acceptability of theoretical headway distributions. Sequentially, the multifarious families of statistical distributions (commonly used to fit real-road headway statistics) are confronted with these criteria, and with original empirical time clearances gauged among neighboring vehicles leaving signal-controlled crossroads after a green signal appears. Using three different numerical schemes, we demonstrate that an arrangement of vehicles on an intersection is a consequence of the general stochastic nature of queueing systems, rather than a consequence of traffic rules, driver estimation processes, or decision-making procedures.
Modeling OAE responses to short tones
NASA Astrophysics Data System (ADS)
Duifhuis, Hendrikus; Siegel, Jonathan
2015-12-01
In 1999 Shera and Guinan postulated that otoacoustic emissions evoked by low-level transient stimuli are generated by coherent linear reflection (CRF or CLR). This hypothesis was tested experimentally, e.g., by Siegel and Charaziak [10], who measured emissions evoked by short (1 ms) tone pips in chinchilla. Using techniques in which supplied level and recorded spectral information were used, Siegel and Charaziak concluded that much of the emission was generated by a mechanism in a region extending basally from the peak of the traveling wave, and that the action of the suppressor is to remove emission generators evoked by the tone pip and not to generate nonlinear artifacts in regions basal to the peak region. The original formulation of the CRF theory does not account for these results. This study addresses relevant cochlear model predictions.
Process modelling for space station experiments
NASA Technical Reports Server (NTRS)
Rosenberger, Franz; Alexander, J. Iwan D.
1988-01-01
The work performed during the first year (1 Oct. 1987 to 30 Sept. 1988) involved analyses of crystal growth from the melt and from solution. The particular melt growth technique under investigation is directional solidification by the Bridgman-Stockbarger method. Two types of solution growth systems are also being studied. One involves growth from solution in a closed container; the other concerns growth of protein crystals by the hanging drop method. Following discussions with Dr. R. J. Naumann of the Low Gravity Science Division at MSFC, it was decided to tackle the analysis of crystal growth from the melt earlier than originally proposed. Rapid progress was made in this area. Work is on schedule, and full calculations have been underway for some time. Progress was also made in the formulation of the two solution growth models.
Quantum Monte Carlo Simulation of Frustrated Kondo Lattice Models
NASA Astrophysics Data System (ADS)
Sato, Toshihiro; Assaad, Fakher F.; Grover, Tarun
2018-03-01
The absence of the negative sign problem in quantum Monte Carlo simulations of spin and fermion systems has different origins. World-line based algorithms for spins require positivity of matrix elements whereas auxiliary field approaches for fermions depend on symmetries such as particle-hole symmetry. For negative-sign-free spin and fermionic systems, we show that one can formulate a negative-sign-free auxiliary field quantum Monte Carlo algorithm that allows Kondo coupling of fermions with the spins. Using this general approach, we study a half-filled Kondo lattice model on the honeycomb lattice with geometric frustration. In addition to the conventional Kondo insulator and antiferromagnetically ordered phases, we find a partial Kondo screened state where spins are selectively screened so as to alleviate frustration, and the lattice rotation symmetry is broken nematically.
Anti-diabetic formulations of Nāga bhasma (lead calx): A brief review.
Rajput, Dhirajsingh; Patgiri, B J; Galib, R; Prajapati, P K
2013-07-01
Ayurvedic formulations usually contain ingredients of herbal, mineral, metal or animal origin. Nāga bhasma (lead calx) is a potent metallic formulation mainly indicated in the treatment of Prameha (~diabetes). To date, no published information is available in compiled form on the formulations containing Nāga bhasma as an ingredient, their dose and indications. Therefore, in the present study, an attempt has been made to compile various formulations of Nāga bhasma indicated in treating Prameha. The present work aims to collect information on various formulations of Nāga bhasma mainly indicated in treating Prameha and to elaborate the safety and efficacy of Nāga bhasma as a Pramehaghna (antidiabetic) drug. A critical review of formulations of Nāga bhasma is compiled from various Ayurvedic texts, and the therapeutic efficacy of Nāga bhasma is discussed on the basis of available data. Antidiabetic formulations of Nāga bhasma were discovered around the 12th century CE. There are 44 formulations of Nāga bhasma mainly indicated for Prameha. Haridrā (Curcuma longa Linn), Āmalakī (Emblica officinalis), Guḍūci (Tinospora cordifolia) and Madhu (honey) enhance the antidiabetic action of Nāga bhasma and also help to prevent diabetic complications as well as any untoward effects of Nāga bhasma. On the basis of the reviewed research, it is concluded that Nāga bhasma possesses significant antidiabetic properties.
Modifying Bagnold's Sediment Transport Equation for Use in Watershed-Scale Channel Incision Models
NASA Astrophysics Data System (ADS)
Lammers, R. W.; Bledsoe, B. P.
2016-12-01
Destabilized stream channels may evolve through a sequence of stages, initiated by bed incision and followed by bank erosion and widening. Channel incision can be modeled using Exner-type mass balance equations, but model accuracy is limited by the accuracy and applicability of the selected sediment transport equation. Additionally, many sediment transport relationships require significant data inputs, limiting their usefulness in data-poor environments. Bagnold's empirical relationship for bedload transport is attractive because it is based on stream power, a relatively straightforward parameter to estimate using remote sensing data. However, the equation is also dependent on flow depth, which is more difficult to measure or estimate for entire drainage networks. We recast Bagnold's original sediment transport equation using specific discharge in place of flow depth. Using a large dataset of sediment transport rates from the literature, we show that this approach yields predictive accuracy similar to that of other stream-power-based relationships. We also explore the applicability of various critical stream power equations, including Bagnold's original, and support previous conclusions that these critical values can be predicted well based solely on sediment grain size. In addition, we propagate error in these sediment transport equations through channel incision modeling to compare the errors associated with our equation to those of alternative formulations. This new version of Bagnold's bedload transport equation has utility for channel incision modeling at larger spatial scales using widely available remote sensing data.
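For orientation only, the sketch below implements a generic excess-stream-power bedload relation in the spirit of Bagnold-type equations; the calibration coefficient `k` and the threshold value are illustrative assumptions, and this is not the recast equation derived in the study.

```python
# Illustrative excess-stream-power bedload estimate (NOT the study's equation).
# omega   = specific stream power [W/m^2], omega_c = critical stream power [W/m^2]
# k       = assumed calibration coefficient, purely for illustration
def bedload_rate(omega, omega_c, k=0.01):
    """Return a bedload transport rate proportional to excess stream power."""
    excess = max(omega - omega_c, 0.0)   # no transport below the critical value
    return k * excess

# Specific stream power per unit bed area from specific discharge q [m^2/s]
# and slope S: omega = rho * g * q * S
rho, g = 1000.0, 9.81
q, S = 1.5, 0.002
omega = rho * g * q * S
print(bedload_rate(omega, omega_c=5.0))
```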
Huyghebaert, N; De Beer, J; Vervaet, C; Remon, J P
2007-10-01
Cystic fibrosis (CF) patients suffer from malabsorption of fat-soluble vitamins (A, D, E and K). These vitamins are available as water-dispersible (A, D3 and E) or water-soluble (K3) grades, which are favoured in CF patients as they fail to absorb oil-based products. The objective of this study was to determine the stability of these raw materials after opening the original package and to develop a compounded formulation of acceptable quality, stability and taste, allowing flexible dose adaptation and being appropriate for administration to children and elderly people. The raw materials were stored after opening their original package for 8 months at 8 degrees C and room temperature (RT). Stability was assessed using a validated HPLC method after extraction of the vitamin from the cold water-soluble matrix (vitamin A acetate, D3 and E) or using a spectrophotometric method (vitamin K3). These materials were mixed with an appropriate lactose grade (lactose 80 m for vitamins A and D3; lactose 90 m for vitamin E; lactose very fine powder for vitamin K3) and filled in hard gelatin capsules. Mass and content uniformity were determined and stability of the vitamins in the capsules was assessed after 2 months of storage at 8 degrees C and RT. All raw materials showed good stability during storage in the opened original package for 8 months at 8 degrees C as well as at RT (>95% of the initial content). The compounded formulations complied with the requirements of the European Pharmacopoeia for mass and content uniformity and can be stored for 2 months at 8 degrees C or RT while maintaining the vitamin content between 90% and 110%. As these fat-soluble vitamins are not commercially available on the Belgian market, compounded formulations are a valuable alternative for prophylactic administration of these vitamins to CF patients, i.e. a stable formulation, having an acceptable taste, allowing flexible dose adaptation and being appropriate for administration to children and elderly people.
Kesisoglou, Filippos; Chung, John; van Asperen, Judith; Heimbach, Tycho
2016-09-01
In recent years, there has been a significant increase in use of physiologically based pharmacokinetic models in drug development and regulatory applications. Although most of the published examples have focused on aspects such as first-in-human (FIH) dose predictions or drug-drug interactions, several publications have highlighted the application of these models in the biopharmaceutics field and their use to inform formulation development. In this report, we present 5 case studies of use of such models in this biopharmaceutics/formulation space across different pharmaceutical companies. The case studies cover different aspects of biopharmaceutics or formulation questions including (1) prediction of absorption prior to FIH studies; (2) optimization of formulation and dissolution method post-FIH data; (3) early exploration of a modified-release formulation; (4) addressing bridging questions for late-stage formulation changes; and (5) prediction of pharmacokinetics in the fed state for a Biopharmaceutics Classification System class I drug with fasted state data. The discussion of the case studies focuses on how such models can facilitate decisions and biopharmaceutic understanding of drug candidates and the opportunities for increased use and acceptance of such models in drug development and regulatory interactions. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
A hybridizable discontinuous Galerkin method for modeling fluid–structure interaction
Sheldon, Jason P.; Miller, Scott T.; Pitt, Jonathan S.
2016-08-31
This study presents a novel application of the hybridizable discontinuous Galerkin (HDG) finite element method to the multi-physics simulation of coupled fluid–structure interaction (FSI) problems. Recent applications of the HDG method have primarily been for single-physics problems including both solids and fluids, which are necessary building blocks for FSI modeling. Utilizing these established models, HDG formulations for linear elastostatics, a nonlinear elastodynamic model, and arbitrary Lagrangian–Eulerian Navier–Stokes are derived. The elasticity formulations are written in a Lagrangian reference frame, with the nonlinear formulation restricted to hyperelastic materials. With these individual solid and fluid formulations, the remaining challenge in FSI modeling is coupling together their disparate mathematics on the fluid–solid interface. This coupling is presented, along with the resultant HDG FSI formulation. Verification of the component models, through the method of manufactured solutions, is performed and each model is shown to converge at the expected rate. The individual components, along with the complete FSI model, are then compared to the benchmark problems proposed by Turek and Hron [1]. The solutions from the HDG formulation presented in this work trend towards the benchmark as the spatial polynomial order and the temporal order of integration are increased.
Linear complementarity formulation for 3D frictional sliding problems
Kaven, Joern; Hickman, Stephen H.; Davatzes, Nicholas C.; Mutlu, Ovunc
2012-01-01
Frictional sliding on quasi-statically deforming faults and fractures can be modeled efficiently using a linear complementarity formulation. We review the formulation in two dimensions and expand the formulation to three-dimensional problems including problems of orthotropic friction. This formulation accurately reproduces analytical solutions to static Coulomb friction sliding problems. The formulation accounts for opening displacements that can occur near regions of non-planarity even under large confining pressures. Such problems are difficult to solve owing to the coupling of relative displacements and tractions; thus, many geomechanical problems tend to neglect these effects. Simple test cases highlight the importance of including friction and allowing for opening when solving quasi-static fault mechanics models. These results also underscore the importance of considering the effects of non-planarity in modeling processes associated with crustal faulting.
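To make the complementarity idea concrete, here is a textbook projected Gauss-Seidel solver for a small linear complementarity problem (find z >= 0 with w = M z + q >= 0 and z·w = 0); it is a generic sketch, not the frictional-contact formulation used in the paper.

```python
# Generic projected Gauss-Seidel LCP solver (textbook sketch, not the paper's code).
import numpy as np

def pgs_lcp(M, q, iters=200):
    z = np.zeros_like(q)
    for _ in range(iters):
        for i in range(len(q)):
            r = q[i] + M[i] @ z - M[i, i] * z[i]   # residual without the diagonal term
            z[i] = max(0.0, -r / M[i, i])          # project onto z_i >= 0
    return z

M = np.array([[2.0, 0.5], [0.5, 2.0]])
q = np.array([-1.0, 1.0])
z = pgs_lcp(M, q)
print(z, M @ z + q)   # check: z >= 0 and M z + q >= 0, complementary
```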
Tamez-Guerra, P; McGuire, M R; Behle, R W; Hamm, J J; Sumner, H R; Shasha, B S
2000-04-01
Nuclear polyhedrosis viruses such as the one isolated from the celery looper, Anagrapha falcifera (Kirby) (AfMNPV), have the potential to be successful bioinsecticides if improved formulations can prevent rapid loss of insecticidal activity from environmental conditions such as sunlight and rainfall. We tested 16 spray-dried formulations of AfMNPV to determine the effect of different ingredients (e.g., lignin, corn flour, and so on) on insecticidal activity after simulated rain and simulated sunlight (at Peoria, IL) and natural sunlight exposures (at Tifton, GA). The most effective formulation contained pregelatinized corn flour and potassium lignate, which retained more than half of its original activity after 5 cm of simulated rain, and almost full activity after 8 h of simulated sunlight. In Georgia, formulations made with and without lignin were compared for persistence of insecticidal activity when exposed to natural sunlight. In addition, the effect of fluorescent brighteners as formulation components and spray tank additives was tested. Results showed that the formulations with lignin had more insecticidal activity remaining after sunlight exposure than formulations without lignin. The inclusion of brighteners in the formulation did not improve initial activity or virus persistence. However, a 1% tank mix significantly enhanced activity and improved persistence. Scanning electron micrographs revealed discrete particles, and transmission electron micrographs showed virus embedded within microgranules. Results demonstrated that formulations made with natural ingredients could improve persistence of virus-based biopesticides.
Uncertainty and the Conceptual Site Model
NASA Astrophysics Data System (ADS)
Price, V.; Nicholson, T. J.
2007-12-01
Our focus is on uncertainties in the underlying conceptual framework upon which all subsequent steps in numerical and/or analytical modeling efforts depend. Experienced environmental modelers recognize the value of selecting an optimal conceptual model from several competing site models, but usually do not formally explore possible alternative models, in part due to incomplete or missing site data, as well as relevant regional data for establishing boundary conditions. The value in and approach for developing alternative conceptual site models (CSM) is demonstrated by analysis of case histories. These studies are based on reported flow or transport modeling in which alternative site models are formulated using data that were not available to, or not used by, the original modelers. An important concept inherent to model abstraction of these alternative conceptual models is that it is "Far better an approximate answer to the right question, which is often vague, than the exact answer to the wrong question, which can always be made precise." (Tukey, 1962) The case histories discussed here illustrate the value of formulating alternative models and evaluating them using site-specific data: (1) Charleston Naval Site where seismic characterization data allowed significant revision of the CSM and subsequent contaminant transport modeling; (2) Hanford 300-Area where surface- and ground-water interactions affecting the unsaturated zone suggested an alternative component to the site model; (3) Savannah River C-Area where a characterization report for a waste site within the modeled area was not available to the modelers, but provided significant new information requiring changes to the underlying geologic and hydrogeologic CSM's used; (4) Amargosa Desert Research Site (ADRS) where re-interpretation of resistivity sounding data and water-level data suggested an alternative geologic model. Simple 2-D spreadsheet modeling of the ADRS with the revised CSM provided an improved match to vapor-phase tritium migration. Site-specific monitoring coupled to these alternative CSM's greatly assists in conducting uncertainty assessments. (Work supported by USNRC contract NRC-04-03-061.)
NASA Technical Reports Server (NTRS)
Winckelmans, G. S.; Lund, T. S.; Carati, D.; Wray, A. A.
1996-01-01
Subgrid-scale models for Large Eddy Simulation (LES) in both the velocity-pressure and the vorticity-velocity formulations were evaluated and compared in a priori tests using spectral Direct Numerical Simulation (DNS) databases of isotropic turbulence: 128^3 DNS of forced turbulence (Re_λ = 95.8) filtered, using the sharp cutoff filter, to both 32^3 and 16^3 synthetic LES fields; 512^3 DNS of decaying turbulence (Re_λ = 63.5) filtered to both 64^3 and 32^3 LES fields. Gaussian and top-hat filters were also used with the 128^3 database. Different LES models were evaluated for each formulation: eddy-viscosity models, hyper eddy-viscosity models, mixed models, and scale-similarity models. Correlations between exact versus modeled subgrid-scale quantities were measured at three levels: tensor (traceless), vector (solenoidal 'force'), and scalar (dissipation) levels, and for both cases of uniform and variable coefficient(s). Different choices for the 1/T scaling appearing in the eddy-viscosity were also evaluated. It was found that the models for the vorticity-velocity formulation produce higher correlations with the filtered DNS data than their counterparts in the velocity-pressure formulation. It was also found that the hyper eddy-viscosity model performs better than the eddy-viscosity model, in both formulations.
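As a point of reference for the eddy-viscosity family evaluated above, the sketch below implements the classic Smagorinsky closure on a filtered strain-rate field; the constant C_s and the array layout are assumptions, and this is not the a priori test code used in the study.

```python
# Sketch of the classic Smagorinsky eddy-viscosity closure (illustration only).
# S is the filtered strain-rate tensor with shape (3, 3, ...grid...), delta the
# filter width; C_s is an assumed value of the Smagorinsky constant.
import numpy as np

def smagorinsky_stress(S, delta, C_s=0.17):
    """Return the modeled deviatoric SGS stress tau_ij = -2 * nu_t * S_ij."""
    S_mag = np.sqrt(2.0 * np.einsum("ij...,ij...->...", S, S))  # |S| = sqrt(2 S_ij S_ij)
    nu_t = (C_s * delta) ** 2 * S_mag                            # eddy viscosity
    return -2.0 * nu_t * S
```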
da Silva Antunes, Ricardo; Paul, Sinu; Sidney, John; Weiskopf, Daniela; Dan, Jennifer M.; Phillips, Elizabeth; Mallal, Simon; Crotty, Shane; Sette, Alessandro; Lindestam Arlehamn, Cecilia S.
2017-01-01
Despite widespread uses of tetanus toxoid (TT) as a vaccine, model antigen and protein carrier, TT epitopes have been poorly characterized. Herein we defined the human CD4+ T cell epitope repertoire by reevaluation of previously described epitopes and evaluation of those derived from prediction of HLA Class II binding. Forty-seven epitopes were identified following in vitro TT stimulation, with 28 epitopes accounting for 90% of the total response. Despite this diverse range of epitopes, individual responses were associated with only a few immunodominant epitopes, with each donor responding on average to 3 epitopes. For the top 14 epitopes, HLA restriction could be inferred based on HLA typing of the responding donors. HLA binding predictions re-identified the vast majority of known epitopes, and identified 24 additional novel epitopes. With these epitopes, we created a TT epitope pool, which allowed us to characterize TT responses directly ex vivo using a cytokine-independent Activation Induced Marker (AIM) assay. These TT responses were highly Th1 or Th2 polarized, which was dependent upon the original priming vaccine, either the cellular DTwP or acellular DTaP formulation. This polarization remained despite the original priming having occurred decades past and a recent booster immunization with a reduced acellular vaccine formulation. While TT responses following booster vaccination were not durably increased in magnitude, they were associated with a relative expansion of CD4+ effector memory T cells. PMID:28081174
Smoothed dissipative particle dynamics with angular momentum conservation
NASA Astrophysics Data System (ADS)
Müller, Kathrin; Fedosov, Dmitry A.; Gompper, Gerhard
2015-01-01
Smoothed dissipative particle dynamics (SDPD) combines two popular mesoscopic techniques, the smoothed particle hydrodynamics and dissipative particle dynamics (DPD) methods, and can be considered as an improved dissipative particle dynamics approach. Despite several advantages of the SDPD method over the conventional DPD model, the original formulation of SDPD by Español and Revenga (2003) [9], lacks angular momentum conservation, leading to unphysical results for problems where the conservation of angular momentum is essential. To overcome this limitation, we extend the SDPD method by introducing a particle spin variable such that local and global angular momentum conservation is restored. The new SDPD formulation (SDPD+a) is directly derived from the Navier-Stokes equation for fluids with spin, while thermal fluctuations are incorporated similarly to the DPD method. We test the new SDPD method and demonstrate that it properly reproduces fluid transport coefficients. Also, SDPD with angular momentum conservation is validated using two problems: (i) the Taylor-Couette flow with two immiscible fluids and (ii) a tank-treading vesicle in shear flow with a viscosity contrast between inner and outer fluids. For both problems, the new SDPD method leads to simulation predictions in agreement with the corresponding analytical theories, while the original SDPD method fails to capture properly physical characteristics of the systems due to violation of angular momentum conservation. In conclusion, the extended SDPD method with angular momentum conservation provides a new approach to tackle fluid problems such as multiphase flows and vesicle/cell suspensions, where the conservation of angular momentum is essential.
NASA Astrophysics Data System (ADS)
Alekseev, D. A.; Gokhberg, M. B.
2018-05-01
A 2-D boundary problem formulation in terms of pore pressure in the Biot poroelasticity model is discussed, with application to a vertical contact model representing a fault zone structure, mechanically excited by a lunar-solar tidal deformation wave. A problem parametrization in terms of permeability and Biot's modulus contrasts is proposed and its numerical solution is obtained for a series of models differing in the values of the above parameters. The behavior of pore pressure and its gradient is analyzed. From these, the electric field of electrokinetic origin is calculated. The possibilities of estimating the elastic properties and permeability of geological formations from observations of the horizontal and vertical electric field measured inside the medium and at the earth's surface near the block boundary are discussed.
User's guide for a general purpose dam-break flood simulation model (K-634)
Land, Larry F.
1981-01-01
An existing computer program for simulating dam-break floods for forecast purposes has been modified with an emphasis on general purpose applications. The original model was formulated, developed and documented by the National Weather Service. This model is based on the complete flow equations and uses a nonlinear implicit finite-difference numerical method. The first phase of the simulation routes a flood wave through the reservoir and computes an outflow hydrograph which is the sum of the flow through the dam's structures and the gradually developing breach. The second phase routes this outflow hydrograph through the stream which may be nonprismatic and have segments with subcritical or supercritical flow. The results are discharge and stage hydrographs at the dam as well as all of the computational nodes in the channel. From these hydrographs, peak discharge and stage profiles are tabulated. (USGS)
NASA Technical Reports Server (NTRS)
Lissenden, Cliff J.; Arnold, Steven M.
1996-01-01
Guidance for the formulation of robust, multiaxial, constitutive models for advanced materials is provided by addressing theoretical and experimental issues using micromechanics. The multiaxial response of metal matrix composites, depicted in terms of macro flow/damage surfaces, is predicted at room and elevated temperatures using an analytical micromechanical model that includes viscoplastic matrix response as well as fiber-matrix debonding. Macro flow/damage surfaces (i.e., debonding envelopes, matrix threshold surfaces, macro 'yield' surfaces, surfaces of constant inelastic strain rate, and surfaces of constant dissipation rate) are determined for silicon carbide/titanium in three stress spaces. Residual stresses are shown to offset the centers of the flow/damage surfaces from the origin and their shape is significantly altered by debonding. The results indicate which type of flow/damage surfaces should be characterized and what loadings applied to provide the most meaningful experimental data for guiding theoretical model development and verification.
A modified Finite Element-Transfer Matrix for control design of space structures
NASA Technical Reports Server (NTRS)
Tan, T.-M.; Yousuff, A.; Bahar, L. Y.; Konstandinidis, M.
1990-01-01
The Finite Element-Transfer Matrix (FETM) method was developed for reducing the computational efforts involved in structural analysis. While being widely used by structural analysts, this method does, however, have certain limitations, particularly when used for the control design of large flexible structures. In this paper, a new formulation based on the FETM method is presented. The new method effectively overcomes the limitations in the original FETM method, and also allows an easy construction of reduced models that are tailored for the control design. Other advantages of this new method include the ability to extract open loop frequencies and mode shapes with less computation, and simplification of the design procedures for output feedback, constrained compensation, and decentralized control. The development of this new method and the procedures for generating reduced models using this method are described in detail and the role of the reduced models in control design is discussed through an illustrative example.
Nagata, Takeshi; Fedorov, Dmitri G; Li, Hui; Kitaura, Kazuo
2012-05-28
A new energy expression is proposed for the fragment molecular orbital method interfaced with the polarizable continuum model (FMO/PCM). The solvation free energy is shown to be more accurate on a set of representative polypeptides with neutral and charged residues, in comparison to the original formulation at the same level of the many-body expansion of the electrostatic potential determining the apparent surface charges. The analytic first derivative of the energy with respect to nuclear coordinates is formulated at the second-order Møller-Plesset (MP2) perturbation theory level combined with PCM, for which we derived coupled perturbed Hartree-Fock equations. The accuracy of the analytic gradient is demonstrated on test calculations in comparison to numeric gradient. Geometry optimization of the small Trp-cage protein (PDB: 1L2Y) is performed with FMO/PCM/6-31(+)G(d) at the MP2 and restricted Hartree-Fock with empirical dispersion (RHF/D). The root mean square deviations between the FMO optimized and NMR experimental structure are found to be 0.414 and 0.426 Å for RHF/D and MP2, respectively. The details of the hydrogen bond network in the Trp-cage protein are revealed.
NASA Astrophysics Data System (ADS)
Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong
2017-12-01
In practical applications, systems are always influenced by parameter uncertainties and external disturbances. Both the H2 performance and the H∞ performance are important for practical applications. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other performance requirement as a constraint, but the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise them simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems by the additive character of linear systems. Two different Lyapunov functions are used to separately formulate the two performance indices for the two subsystems. Then, the proposed RMPC is designed to optimise both performances by the weighting method while satisfying the H∞ performance requirement. Meanwhile, to make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed and closed-loop input-to-state practical stability is proven. The numerical examples reflect the enlarged feasible region and the improved performance of the proposed design.
The effects of facilitation and competition on group foraging in patches.
Laguë, Marysa; Tania, Nessy; Heath, Joel; Edelstein-Keshet, Leah
2012-10-07
Significant progress has been made towards understanding the social behaviour of animal groups, but the patch model, a foundation of foraging theory, has received little attention in a social context. The effect of competition on the optimal time to leave a foraging patch was considered as early as the original formulation of the marginal value theorem, but surprisingly, the role of facilitation (where foraging in groups decreases the time to find food in patches), has not been incorporated. Here we adapt the classic patch model to consider how the trade-off between facilitation and competition influences optimal group size. Using simple assumptions about the effect of group size on the food-finding time and the sharing of resources, we find conditions for existence of optima in patch residence time and in group size. When patches are close together (low travel times), larger group sizes are optimal. Groups are predicted to exploit patches differently than individual foragers and the degree of patch depletion at departure depends on the details of the trade-off between competition and facilitation. A variety of currencies and group-size effects are also considered and compared. Using our simple formulation, we also study the effects of social foraging on patch exploitation which to date have received little empirical study. Copyright © 2012 Elsevier Ltd. All rights reserved.
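Since the patch model builds directly on the marginal value theorem, a minimal numerical illustration of the classic leaving rule is sketched below; the saturating gain function and parameter values are assumptions, and this is not the group-foraging model analysed in the paper.

```python
# Classic marginal value theorem (MVT) illustration: leave the patch when the
# marginal gain rate equals the long-term average rate. Gain function and
# parameters are assumed for illustration only.
import numpy as np
from scipy.optimize import brentq

def optimal_residence_time(A=10.0, s=2.0, tau=5.0):
    """Gain g(t) = A*(1 - exp(-t/s)); tau = travel/food-finding time between patches."""
    g = lambda t: A * (1.0 - np.exp(-t / s))
    dg = lambda t: (A / s) * np.exp(-t / s)
    mvt = lambda t: dg(t) * (tau + t) - g(t)   # zero at the MVT optimum
    return brentq(mvt, 1e-6, 100.0)

# Facilitation that shortens the effective travel/search time (smaller tau)
# predicts shorter patch residence, consistent with the trade-off discussed above.
print(optimal_residence_time(tau=5.0), optimal_residence_time(tau=1.0))
```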
Model for the orientational ordering of the plant microtubule cortical array
NASA Astrophysics Data System (ADS)
Hawkins, Rhoda J.; Tindemans, Simon H.; Mulder, Bela M.
2010-07-01
The plant microtubule cortical array is a striking feature of all growing plant cells. It consists of a more or less homogeneously distributed array of highly aligned microtubules connected to the inner side of the plasma membrane and oriented transversely to the cell growth axis. Here, we formulate a continuum model to describe the origin of orientational order in such confined arrays of dynamical microtubules. The model is based on recent experimental observations that show that a growing cortical microtubule can interact through angle dependent collisions with pre-existing microtubules that can lead either to co-alignment of the growth, retraction through catastrophe induction or crossing over the encountered microtubule. We identify a single control parameter, which is fully determined by the nucleation rate and intrinsic dynamics of individual microtubules. We solve the model analytically in the stationary isotropic phase, discuss the limits of stability of this isotropic phase, and explicitly solve for the ordered stationary states in a simplified version of the model.
Modified Baryonic Dynamics: two-component cosmological simulations with light sterile neutrinos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angus, G.W.; Gentile, G.; Diaferio, A.
2014-10-01
In this article we continue to test cosmological models centred on Modified Newtonian Dynamics (MOND) with light sterile neutrinos, which could in principle be a way to solve the fine-tuning problems of the standard model on galaxy scales while preserving successful predictions on larger scales. Due to previous failures of the simple MOND cosmological model, here we test a speculative model where the modified gravitational field is produced only by the baryons and the sterile neutrinos produce a purely Newtonian field (hence Modified Baryonic Dynamics). We use two-component cosmological simulations to separate the baryonic N-body particles from the sterile neutrino ones. The premise is to attenuate the over-production of massive galaxy cluster halos which were prevalent in the original MOND plus light sterile neutrinos scenario. Theoretical issues with such a formulation notwithstanding, the Modified Baryonic Dynamics model fails to produce the correct amplitude for the galaxy cluster mass function for any reasonable value of the primordial power spectrum normalisation.
Mathematical Modeling of Diverse Phenomena
NASA Technical Reports Server (NTRS)
Howard, J. C.
1979-01-01
Tensor calculus is applied to the formulation of mathematical models of diverse phenomena. Aeronautics, fluid dynamics, and cosmology are among the areas of application. The feasibility of combining tensor methods and computer capability to formulate problems is demonstrated. The techniques described are an attempt to simplify the formulation of mathematical models by reducing the modeling process to a series of routine operations, which can be performed either manually or by computer.
Saa, Pedro A.; Nielsen, Lars K.
2016-01-01
Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using ‘loopless constraints’. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm—Fast-SNP (fast sparse null-space pursuit)—inspired by recent results on SNP. By finding a reduced feasible ‘loop-law’ matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155
Brvar, Nina; Mateović-Rojnik, Tatjana; Grabnar, Iztok
2014-10-01
This study aimed to develop a population pharmacokinetic model for tramadol that combines different input rates with disposition characteristics. Data used for the analysis were pooled from two phase I bioavailability studies with immediate (IR) and prolonged release (PR) formulations in healthy volunteers. Tramadol plasma concentration-time data were described by an inverse Gaussian function to model the complete input process linked to a two-compartment disposition model with first-order elimination. Although polymorphic CYP2D6 appears to be a major enzyme involved in the metabolism of tramadol, application of a mixture model to test the assumption of two and three subpopulations did not reveal any improvement of the model. The final model estimated parameters with reasonable precision and was able to estimate the interindividual variability of all parameters except for the relative bioavailability of PR vs. IR formulation. Validity of the model was further tested using the nonparametric bootstrap approach. Finally, the model was applied to assess absorption kinetics of tramadol and predict steady-state pharmacokinetics following administration of both types of formulations. For both formulations, the final model yielded a stable estimate of the absorption time profiles. Steady-state simulation supports switching of patients from IR to PR formulation. Copyright © 2014 Elsevier B.V. All rights reserved.
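For readers unfamiliar with inverse Gaussian input functions, the sketch below evaluates a commonly used density-based input rate parameterized by a mean absorption time and a relative dispersion; the functional form and parameter names are assumptions for illustration and are not taken from the study's final model.

```python
# Sketch of an inverse Gaussian absorption input-rate function (assumed form).
# dose_f = bioavailable dose, mat = mean absorption time, cv2 = relative dispersion.
import numpy as np

def inverse_gaussian_input(t, dose_f, mat, cv2):
    """Input rate = dose_f * inverse-Gaussian density, for times t > 0."""
    t = np.asarray(t, dtype=float)
    dens = np.sqrt(mat / (2.0 * np.pi * cv2 * t ** 3)) * \
           np.exp(-((t - mat) ** 2) / (2.0 * cv2 * mat * t))
    return dose_f * dens

t = np.linspace(0.1, 24.0, 240)                       # hours
rate = inverse_gaussian_input(t, dose_f=100.0, mat=2.0, cv2=0.5)
```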
Beirowski, Jakob; Inghelbrecht, Sabine; Arien, Albertina; Gieseler, Henning
2011-05-01
It has been recently reported in the literature that using a fast freezing rate during freeze-drying of drug nanosuspensions is beneficial to preserve the original particle size distribution. All freezing rates studied were obtained by utilizing a custom-made apparatus and were then indirectly related to conventional vial freeze-drying. However, a standard freeze-dryer is only capable of achieving moderate freezing rates in the shelf fluid circulation system. Therefore, it was the purpose of the present study to evaluate the possibility of establishing a typical freezing protocol applicable to a standard freeze-drying unit in combination with an adequate choice of cryoprotective excipients and steric stabilizers to preserve the original particle size distribution. Six different drug nanosuspensions containing itraconazole as a model drug were studied using freeze-thaw experiments and a full factorial design to reveal major factors for the stabilization of drug nanosuspensions and the corresponding interactions. In contrast to previous reports, the freezing regime showed no significant influence on preserving the original particle size distribution, suggesting that the concentrations of both the steric stabilizer and the cryoprotective agent are optimized. Moreover, it could be pinpointed that the combined effect of steric stabilizer and cryoprotectant clearly contributes to nanoparticle stability. Copyright © 2010 Wiley-Liss, Inc.
Sauer, C M; Haugg, A M; Chteinberg, E; Rennspiess, D; Winnepenninckx, V; Speel, E-J; Becker, J C; Kurz, A K; Zur Hausen, A
2017-08-01
Merkel cell carcinoma (MCC) is a highly malignant skin cancer characterized by early metastases and poor survival. Although MCC is a rare malignancy, its incidence is rapidly increasing in the U.S. and Europe. The discovery of the Merkel cell polyomavirus (MCPyV) has enormously impacted our understanding of its etiopathogenesis and biology. MCCs are characterized by trilinear differentiation, comprising the expression of neuroendocrine, epithelial and B-lymphoid lineage markers. To date, it is generally accepted that the initial assumption of MCC originating from Merkel cells (MCs) is unlikely. This is owed to their post-mitotic character, absence of MCPyV in MCs and discrepant protein expression pattern in comparison to MCC. Evidence from mouse models suggests that epidermal/dermal stem cells might be of cellular origin in MCC. The recently formulated hypothesis of MCC originating from early B-cells is based on morphology, the consistent expression of early B-cell lineage markers and the finding of clonal immunoglobulin chain rearrangement in MCC cells. In this review we elaborate on the cellular ancestry of MCC, the identification of which could pave the way for novel and more effective therapeutic regimens. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wacławczyk, Marta; Ma, Yong-Feng; Kopeć, Jacek M.; Malinowski, Szymon P.
2017-11-01
In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method by Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation the new approaches are suitable for use with signals originating from airborne experiments. The suitability of the new approaches is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign as well as synthetic turbulence data. They appear useful and complementary to existing methods. We show the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power spectral method. Hence, their application for the case of short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of turbulent kinetic energy dissipation rate retrieval.
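For context, the sketch below implements the classical zero-crossing route to the dissipation rate (Rice's theorem plus Taylor's frozen-flow hypothesis and isotropy); it is a baseline illustration, not the cutoff-corrected estimators proposed in the paper.

```python
# Classical zero-crossing estimate of the TKE dissipation rate (baseline sketch).
# u: velocity fluctuation time series [m/s], fs: sampling rate [Hz],
# U: mean advection speed [m/s], nu: kinematic viscosity [m^2/s].
import numpy as np

def dissipation_from_crossings(u, fs, U, nu=1.5e-5):
    u = u - np.mean(u)
    crossings = np.count_nonzero(np.signbit(u[1:]) != np.signbit(u[:-1]))
    duration = len(u) / fs
    N_L = crossings / (duration * U)          # crossings per unit length (Taylor hypothesis)
    lam = 1.0 / (np.pi * N_L)                 # Taylor microscale via Rice's theorem
    return 15.0 * nu * np.var(u) / lam ** 2   # isotropic dissipation estimate
```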
Variance-Based Cluster Selection Criteria in a K-Means Framework for One-Mode Dissimilarity Data.
Vera, J Fernando; Macías, Rodrigo
2017-06-01
One of the main problems in cluster analysis is that of determining the number of groups in the data. In general, the approach taken depends on the cluster method used. For K-means, some of the most widely employed criteria are formulated in terms of the decomposition of the total point scatter, regarding a two-mode data set of N points in p dimensions, which are optimally arranged into K classes. This paper addresses the formulation of criteria to determine the number of clusters, in the general situation in which the available information for clustering is a one-mode N × N dissimilarity matrix describing the objects. In this framework, p and the coordinates of points are usually unknown, and the application of criteria originally formulated for two-mode data sets is dependent on their possible reformulation in the one-mode situation. The decomposition of the variability of the clustered objects is proposed in terms of the corresponding block-shaped partition of the dissimilarity matrix. Within-block and between-block dispersion values for the partitioned dissimilarity matrix are derived, and variance-based criteria are subsequently formulated in order to determine the number of groups in the data. A Monte Carlo experiment was carried out to study the performance of the proposed criteria. For simulated clustered points in p dimensions, greater efficiency in recovering the number of clusters is obtained when the criteria are calculated from the related Euclidean distances instead of the known two-mode data set, in general, for unequal-sized clusters and for low dimensionality situations. For simulated dissimilarity data sets, the proposed criteria always outperform the results obtained when these criteria are calculated from their original formulation, using dissimilarities instead of distances.
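As a rough illustration of dispersion terms computed directly from a one-mode dissimilarity matrix, the sketch below uses the standard identity relating within-cluster sums of squared Euclidean distances to point scatter and forms a Calinski-Harabasz-style ratio; this is a generic example, not the exact criteria derived in the paper.

```python
# Within- and between-block dispersion from a dissimilarity matrix D (treated as
# Euclidean distances), using W_k = sum_{i,j in C_k} d_ij^2 / (2 n_k). Generic sketch.
import numpy as np

def dispersion_from_dissimilarities(D, labels):
    D2 = np.asarray(D, dtype=float) ** 2
    N = D2.shape[0]
    total = D2.sum() / (2.0 * N)
    within = 0.0
    for k in np.unique(labels):
        idx = np.flatnonzero(labels == k)
        within += D2[np.ix_(idx, idx)].sum() / (2.0 * len(idx))
    return within, total - within              # (within-block, between-block)

def ch_index(D, labels):
    W, B = dispersion_from_dissimilarities(D, labels)
    N, K = len(labels), len(np.unique(labels))
    return (B / (K - 1)) / (W / (N - K))        # Calinski-Harabasz-style ratio
```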
An analytic performance model of disk arrays and its application
NASA Technical Reports Server (NTRS)
Lee, Edward K.; Katz, Randy H.
1991-01-01
As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are desirable over other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
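To give a flavour of such analytic approximations, the sketch below combines a per-disk M/M/1-style response time with a harmonic-number correction for fork-join synchronization under strong independence assumptions; the parameterization is illustrative and is not the model derived in the paper.

```python
# Crude disk-array performance approximation (illustration only, not the paper's model):
# each request forks into `stripe_width` disk requests spread over `n_disks`; the
# fork-join wait is approximated by the expected maximum of independent completions.
def array_response_time(arrival_rate, service_time, n_disks, stripe_width):
    per_disk_rate = arrival_rate * stripe_width / n_disks   # requests/s seen per disk
    rho = per_disk_rate * service_time                       # per-disk utilization
    if rho >= 1.0:
        raise ValueError("unstable: per-disk utilization >= 1")
    r_single = service_time / (1.0 - rho)                    # M/M/1-style response time
    harmonic = sum(1.0 / i for i in range(1, stripe_width + 1))
    return rho, harmonic * r_single                          # fork-join approximation

print(array_response_time(arrival_rate=200.0, service_time=0.012,
                          n_disks=16, stripe_width=4))
```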
Continuous spin fields of mixed-symmetry type
NASA Astrophysics Data System (ADS)
Alkalaev, Konstantin; Grigoriev, Maxim
2018-03-01
We propose a description of continuous spin massless fields of mixed-symmetry type in Minkowski space at the level of equations of motion. It is based on an appropriately modified version of the constrained system originally used to describe massless bosonic fields of mixed-symmetry type. The description is shown to produce generalized versions of the triplet, metric-like, and light-cone formulations. In particular, for scalar continuous spin fields we reproduce the Bekaert-Mourad formulation and the Schuster-Toro formulation. Because a continuous spin system inevitably involves an infinite number of fields, specification of the allowed class of field configurations becomes a part of its definition. We show that the naive choice leads to an empty system and propose a suitable class resulting in the correct degrees of freedom. We also demonstrate that the gauge symmetries present in the formulation are all Stueckelberg-like, so that the continuous spin system is not a genuine gauge theory.
Shahzad, Yasser; Khan, Qalandar; Hussain, Talib; Shah, Syed Nisar Hussain
2013-10-01
Lornoxicam-containing topically applied lotions were formulated and optimized with the aim of delivering it transdermally. The formulated lotions were evaluated for pH, viscosity and in vitro permeation through silicone membrane using Franz diffusion cells. Data were fitted to linear, quadratic and cubic models, and the best-fit model was selected to investigate the influence of the variables, namely hydroxypropyl methylcellulose (HPMC) and ethylene glycol (EG), on permeation of lornoxicam from topically applied lotion formulations. The best-fit quadratic model revealed that a low level of HPMC and an intermediate level of EG in the formulation were optimum for enhancing the drug flux across silicone membrane. FT-IR analysis confirmed the absence of drug-polymer interactions. The selected optimized lotion formulation was then subjected to accelerated stability testing, sensory perception testing and in vitro permeation across rabbit skin. The drug flux from the optimized lotion across rabbit skin was significantly better than that from the control formulation. Furthermore, the sensory perception test indicated high acceptability, while the lotion was stable over the stability testing period. Therefore, use of the Box-Wilson statistical design successfully elaborated the influence of formulation variables on permeation of lornoxicam from topical formulations and thus helped in optimization of the lotion formulation. Copyright © 2013 Elsevier B.V. All rights reserved.
Byars-Winston, Angela M.
2010-01-01
Scholarship is emerging on intervention models that purposefully attend to cultural variables throughout the career assessment and career counseling process (Swanson & Fouad, in press). One heuristic model that offers promise to advance culturally-relevant vocational practice with African Americans is the Outline for Cultural Formulation (American Psychiatric Association, 1994). This article explicates the Outline for Cultural Formulation in career assessment and career counseling with African Americans integrating the concept of cultural identity into the entire model. The article concludes with an illustration of the Outline for Cultural Formulation model with an African American career client. PMID:20495668
Nonholonomic Hamiltonian Method for Meso-macroscale Simulations of Reacting Shocks
NASA Astrophysics Data System (ADS)
Fahrenthold, Eric; Lee, Sangyup
2015-06-01
The seamless integration of macroscale, mesoscale, and molecular scale models of reacting shock physics has been hindered by dramatic differences in the model formulation techniques normally used at different scales. In recent research the authors have developed the first unified discrete Hamiltonian approach to multiscale simulation of reacting shock physics. Unlike previous work, the formulation employs reacting thermomechanical Hamiltonian formulations at all scales, including the continuum. Unlike previous work, the formulation employs a nonholonomic modeling approach to systematically couple the models developed at all scales. Example applications of the method show meso-macroscale shock-to-detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.
Computational prediction of formulation strategies for beyond-rule-of-5 compounds.
Bergström, Christel A S; Charman, William N; Porter, Christopher J H
2016-06-01
The physicochemical properties of some contemporary drug candidates are moving towards higher molecular weight, and coincidentally also higher lipophilicity in the quest for biological selectivity and specificity. These physicochemical properties move the compounds towards beyond rule-of-5 (B-r-o-5) chemical space and often result in lower water solubility. For such B-r-o-5 compounds non-traditional delivery strategies (i.e. those other than conventional tablet and capsule formulations) typically are required to achieve adequate exposure after oral administration. In this review, we present the current status of computational tools for prediction of intestinal drug absorption, models for prediction of the most suitable formulation strategies for B-r-o-5 compounds and models to obtain an enhanced understanding of the interplay between drug, formulation and physiological environment. In silico models are able to identify the likely molecular basis for low solubility in physiologically relevant fluids such as gastric and intestinal fluids. With this baseline information, a formulation scientist can, at an early stage, evaluate different orally administered, enabling formulation strategies. Recent computational models have emerged that predict glass-forming ability and crystallisation tendency and therefore the potential utility of amorphous solid dispersion formulations. Further, computational models of loading capacity in lipids, and therefore the potential for formulation as a lipid-based formulation, are now available. Whilst such tools are useful for rapid identification of suitable formulation strategies, they do not reveal drug localisation and molecular interaction patterns between drug and excipients. For the latter, Molecular Dynamics simulations provide an insight into the interplay between drug, formulation and intestinal fluid. These different computational approaches are reviewed. Additionally, we analyse the molecular requirements of different targets, since these can provide an early signal that enabling formulation strategies will be required. Based on the analysis we conclude that computational biopharmaceutical profiling can be used to identify where non-conventional gateways, such as prediction of 'formulate-ability' during lead optimisation and early development stages, are important and may ultimately increase the number of orally tractable contemporary targets. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Linear programming: an alternative approach for developing formulations for emergency food products.
Sheibani, Ershad; Dabbagh Moghaddam, Arasb; Sharifan, Anousheh; Afshari, Zahra
2018-03-01
To minimize the mortality rates of individuals affected by disasters, providing high-quality food relief during the initial stages of an emergency is crucial. The goal of this study was to develop a formulation for a high-energy, nutrient-dense prototype using a linear programming (LP) model as a novel method for developing formulations for food products. The model consisted of the objective function and the decision variables, which were the formulation costs and the weights of the selected commodities, respectively. The LP constraints were the Institute of Medicine and World Health Organization specifications for the nutrient content of the product. Other constraints related to the product's sensory properties were also introduced to the model. Nonlinear constraints on the energy ratios of nutrients were linearized to allow their use in the LP. Three focus group studies were conducted to evaluate the palatability and other aspects of the optimized formulation. New constraints were introduced to the LP model based on the focus group evaluations to improve the formulation. LP is an appropriate tool for designing formulations of food products to meet a set of nutritional requirements. This method is an excellent alternative to the traditional 'trial and error' method of designing formulations. © 2017 Society of Chemical Industry.
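The LP structure described above can be illustrated with a short sketch. The commodity list, costs, nutrient values, and limits below are hypothetical placeholders rather than the study's data; the point is only to show how nutrient minima and a linearized energy-ratio constraint enter a cost-minimizing LP.

```python
# Minimal sketch of the LP structure described above, with hypothetical
# commodities, costs, and nutrient values (not the study's data).
import numpy as np
from scipy.optimize import linprog

commodities = ["wheat flour", "soy protein", "vegetable oil", "sugar"]
cost = np.array([0.0004, 0.0012, 0.0009, 0.0005])   # $ per gram (assumed)

# Per-gram composition (assumed): energy (kcal), protein (g), fat (g).
energy  = np.array([3.4, 3.3, 8.8, 4.0])
protein = np.array([0.10, 0.80, 0.00, 0.00])
fat     = np.array([0.01, 0.01, 1.00, 0.00])

# Nutrient minima, e.g. at least 450 kcal and 15 g protein per 100 g bar.
# linprog expects A_ub @ x <= b_ub, so ">=" constraints are negated.
A_ub = [-energy, -protein]
b_ub = [-450.0, -15.0]

# Linearized energy-ratio constraint: fat supplies at most 35% of energy,
# i.e. 9*fat_grams <= 0.35*total_energy  ->  (9*fat - 0.35*energy) @ x <= 0.
A_ub.append(9.0 * fat - 0.35 * energy)
b_ub.append(0.0)

# Total product weight fixed at 100 g.
A_eq, b_eq = [np.ones(4)], [100.0]

res = linprog(cost, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
for name, grams in zip(commodities, res.x):
    print(f"{name}: {grams:.1f} g")
print(f"formulation cost: ${res.fun:.3f}")
```

Sensory and focus-group feedback would be encoded in the same way, as additional rows of A_ub (for example, an upper bound on the weight of a poorly rated commodity).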
Gary Garland
2015-04-15
This is a study of whether the brine formulations that we were using in our testing were stable over time. The data include charts as well as all of the original data from the ICP-MS runs used to complete this study.
Ruff, Aaron; Holm, René; Kostewicz, Edmund S
2017-07-15
The present study investigated the ability of the in vitro transfer model and an in vivo pharmacokinetic study in rats to characterize the supersaturation and precipitation behaviour of albendazole (ABZ), relative to data from a human intestinal aspiration study reported in the literature. The impact on the behaviour of ABZ of two lipid-based formulation systems, a hydroxypropyl-β-cyclodextrin (HPβCD) solution, and the addition of a crystallization inhibitor (HPMC-E5) was investigated. These formulations were selected to represent differences in their ability to facilitate supersaturation within the small intestine. Overall, both the in vitro transfer model and the in vivo rat study were able to rank order the formulations (as aqueous suspension±HPMC
Psychosocial job factors and biological cardiovascular risk factors in Mexican workers.
Garcia-Rojas, Isabel Judith; Choi, BongKyoo; Krause, Niklas
2015-03-01
Psychosocial job factors (PJF) have been implicated in the development of cardiovascular disease. The paucity of data from developing economies including Mexico hampers the development of worksite intervention efforts in those regions. This cross-sectional study of 2,330 Mexican workers assessed PJF (job strain [JS], social support [SS], and job insecurity [JI]) and biological cardiovascular disease risk factors [CVDRF] by questionnaire and on-site physical examinations. Alternative formulations of the JS scales were developed based on factor analysis and literature review. Associations between both traditional and alternative job factor scales with CVDRF were examined in multiple regression models, adjusting for physical workload, and socio-demographic factors. Alternative formulations of the job demand and control scales resulted in substantial changes in effect sizes or statistical significance when compared with the original scales. JS and JI showed hypothesized associations with most CVDRF, but they were inversely associated with diastolic blood pressure and some adiposity measures. SS was mainly protective against CVDRF. Among Mexican workers, alternative PJF scales predicted health outcomes better than traditional scales, and psychosocial stressors were associated with most CVDRF. © 2015 Wiley Periodicals, Inc.
Crum, Matthew F; Trevaskis, Natalie L; Williams, Hywel D; Pouton, Colin W; Porter, Christopher J H
2016-04-01
In vitro lipid digestion models are commonly used to screen lipid-based formulations (LBF), but in vitro-in vivo correlations are in some cases unsuccessful. Here we enhance the scope of the lipid digestion test by incorporating an absorption 'sink' into the experimental model. An in vitro model of lipid digestion was coupled directly to a single pass in situ intestinal perfusion experiment in an anaesthetised rat. The model allowed simultaneous real-time analysis of the digestion and absorption of LBFs of fenofibrate and was employed to evaluate the influence of formulation digestion, supersaturation and precipitation on drug absorption. Formulations containing higher quantities of co-solvent and surfactant resulted in higher supersaturation and more rapid drug precipitation in vitro when compared to those containing higher quantities of lipid. In contrast, when the same formulations were examined using the coupled in vitro lipid digestion - in vivo absorption model, drug flux into the mesenteric vein was similar regardless of in vitro formulation performance. For some drugs, simple in vitro lipid digestion models may underestimate the potential for absorption from LBFs. Consistent with recent in vivo studies, drug absorption for rapidly absorbed drugs such as fenofibrate may occur even when drug precipitation is apparent during in vitro digestion.
Proteomics as a Quality Control Tool of Pharmaceutical Probiotic Bacterial Lysate Products
Klein, Günter; Schanstra, Joost P.; Hoffmann, Janosch; Mischak, Harald; Siwy, Justyna; Zimmermann, Kurt
2013-01-01
Probiotic bacteria have a wide range of applications in veterinary and human therapeutics. Inactivated probiotics are complex samples and quality control (QC) should measure as many molecular features as possible. Capillary electrophoresis coupled to mass spectrometry (CE/MS) has been used as a multidimensional and high throughput method for the identification and validation of biomarkers of disease in complex biological samples such as biofluids. In this study we evaluate the suitability of CE/MS to measure the consistency of different lots of the probiotic formulation Pro-Symbioflor which is a bacterial lysate of heat-inactivated Escherichia coli and Enterococcus faecalis. Over 5000 peptides were detected by CE/MS in 5 different lots of the bacterial lysate and in a sample of culture medium. 71 to 75% of the total peptide content was identical in all lots. This percentage increased to 87–89% when allowing the absence of a peptide in one of the 5 samples. These results, based on over 2000 peptides, suggest high similarity of the 5 different lots. Sequence analysis identified peptides of both E. coli and E. faecalis and peptides originating from the culture medium, thus confirming the presence of the strains in the formulation. Ontology analysis suggested that the majority of the peptides identified for E. coli originated from the cell membrane or the fimbrium, while peptides identified for E. faecalis were enriched for peptides originating from the cytoplasm. The bacterial lysate peptides as a whole are recognised as highly conserved molecular patterns by the innate immune system as microbe associated molecular pattern (MAMP). Sequence analysis also identified the presence of soybean, yeast and casein protein fragments that are part of the formulation of the culture medium. In conclusion CE/MS seems an appropriate QC tool to analyze complex biological products such as inactivated probiotic formulations and allows determining the similarity between lots. PMID:23840518
Anti-diabetic formulations of Nāga bhasma (lead calx): A brief review
Rajput, Dhirajsingh; Patgiri, B. J.; Galib, R; Prajapati, P. K.
2013-01-01
Introduction: Ayurvedic formulations usually contain ingredients of herbal, mineral, metal or animal origin. Nāga bhasma (lead calx) is a potent metallic formulation mainly indicated in the treatment of Prameha (~diabetes). To date, no published information is available in compiled form on the formulations containing Nāga bhasma as an ingredient, their doses and indications. Therefore, in the present study, an attempt has been made to compile various formulations of Nāga bhasma indicated in treating Prameha. Aim: The present work aims to collect information on various formulations of Nāga bhasma mainly indicated in treating Prameha and to elaborate the safety and efficacy of Nāga bhasma as a Pramehaghna (antidiabetic) drug. Materials and Methods: A critical review of formulations of Nāga bhasma is compiled from various Ayurvedic texts and the therapeutic efficacy of Nāga bhasma is discussed on the basis of available data. Result and Conclusion: Antidiabetic formulations of Nāga bhasma first appeared around the 12th century CE. There are 44 formulations of Nāga bhasma mainly indicated for Prameha. Haridrā (Curcuma longa Linn), Āmalakī (Emblica officinalis), Guḍūci (Tinospora cordifolia) and Madhu (honey) enhance the antidiabetic action of Nāga bhasma and also help to prevent diabetic complications as well as any untoward effects of Nāga bhasma. On the basis of the reviewed research, it is concluded that Nāga bhasma possesses significant antidiabetic properties. PMID:25161332
Okuda, Tomoyuki
2017-01-01
Functional nanoparticles, such as liposomes and polymeric micelles, are attractive drug delivery systems for solubilization, stabilization, sustained release, prolonged tissue retention, and tissue targeting of various encapsulated drugs. For their clinical application in therapy for pulmonary diseases, the development of dry powder inhalation (DPI) formulations is considered practical due to such advantages as: (1) it is noninvasive and can be directly delivered into the lungs; (2) there are few biocomponents in the lungs that interact with nanoparticles; and (3) it shows high storage stability in the solid state against aggregation or precipitation of nanoparticles in water. However, in order to produce effective nanoparticle-loaded dry powders for inhalation, it is essential to pursue an innovative and comprehensive formulation strategy in relation to composition and powderization which can achieve (1) the particle design of dry powders with physical properties suitable for pulmonary delivery through inhalation, and (2) the effective reconstitution of nanoparticles that will maintain their original physical properties and functions after dissolution of the powders. Spray-freeze drying (SFD) is a relatively new powderization technique combining atomization and lyophilization, which can easily produce highly porous dry powders from an aqueous sample solution. Previously, we advanced the optimization of components and process conditions for the production of SFD powders suitable to DPI application. This review describes our recent results in the development of novel DPI formulations effectively loaded with various nanoparticles (electrostatic nanocomplexes for gene therapy, liposomes, and self-assembled lipid nanoparticles), based on SFD.
C Zanuncio, José; C Lacerda, Mabio; Alcántara-de la Cruz, Ricardo; P Brügger, Bruno; Pereira, Alexandre I A; F Wilcken, Carlos; E Serrão, José; S Sediyama, Carlos
2018-01-01
The increase in agricultural area planted with glyphosate-resistant (GR) crops, and the use of this herbicide in Brazil, make it necessary to assess its impacts on non-target organisms. The objective was to evaluate the development, reproduction and life table parameters of Podisus nigrispinus (Heteroptera: Pentatomidae) reared on GR-soybean plants treated with glyphosate formulations (Zapp-Qi, Roundup-Transorb-R and Roundup-Original) at the recommended field dose (720 g acid equivalent ha-1). The glyphosate formulations had no effect on nymph and adult weight of this predator. The fourth instar stage was shortest with Zapp-Qi. The egg-to-adult period was similar between treatments (26 days), with survival over 90%. Zapp-Qi and Roundup-Transorb-R (potassium salt: K-salt) reduced the numbers of eggs, egg masses and nymphs per female, and the longevity and oviposition periods of this predator. The net reproductive rate of Podisus nigrispinus was highest on GR-soybean plants treated with Roundup-Original (isopropylamine salt: IPA-salt). However, the duration of one generation, the intrinsic and finite rates of increase, and the time to double the population were similar between treatments. Glyphosate toxicity to P. nigrispinus depends on the glyphosate salt type. The IPA-salt was the least harmful to this predator. Formulations based on the K-salt altered its reproductive parameters; however, development and population dynamics were not affected. Therefore, these glyphosate formulations are compatible with the predator P. nigrispinus in GR-soybean crops. Copyright © 2017 Elsevier Inc. All rights reserved.
A Damage Model for the Simulation of Delamination in Advanced Composites under Variable-Mode Loading
NASA Technical Reports Server (NTRS)
Turon, A.; Camanho, P. P.; Costa, J.; Davila, C. G.
2006-01-01
A thermodynamically consistent damage model is proposed for the simulation of progressive delamination in composite materials under variable-mode ratio. The model is formulated in the context of Damage Mechanics. A novel constitutive equation is developed to model the initiation and propagation of delamination. A delamination initiation criterion is proposed to assure that the formulation can account for changes in the loading mode in a thermodynamically consistent way. The formulation accounts for crack closure effects to avoid interfacial penetration of two adjacent layers after complete decohesion. The model is implemented in a finite element formulation, and the numerical predictions are compared with experimental results obtained in both composite test specimens and structural components.
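The variable-mode constitutive model itself is not reproduced here, but the basic ingredients it builds on (an initiation strength, a fracture toughness, an irreversible damage variable, and stiffness recovery on crack closure) can be illustrated with a single-mode bilinear cohesive law. The stiffness, strength, and toughness values below are arbitrary.

```python
# Single-mode bilinear cohesive law with an irreversible damage variable;
# a simplified illustration only, not the paper's variable-mode formulation.
import numpy as np

K = 1e6          # penalty stiffness
tau0 = 30.0      # interfacial strength (onset traction)
Gc = 0.3         # fracture toughness (area under traction-separation curve)

d_onset = tau0 / K           # separation at damage onset
d_final = 2.0 * Gc / tau0    # separation at complete decohesion

def traction(delta, d_max):
    """Traction at separation delta; d_max tracks the largest separation seen."""
    d_max = max(d_max, delta)
    if delta <= 0.0:                      # crack closure: full penalty stiffness
        return K * delta, d_max
    dmg = np.clip(d_final * (d_max - d_onset) / (d_max * (d_final - d_onset)),
                  0.0, 1.0)
    return (1.0 - dmg) * K * delta, d_max

# Monotonic opening to complete decohesion dissipates Gc per unit area.
deltas = np.linspace(0.0, d_final, 2000)
ts, d_max = [], 0.0
for delta in deltas:
    t, d_max = traction(delta, d_max)
    ts.append(t)
print("dissipated energy:", round(float(np.trapz(ts, deltas)), 3), "vs Gc =", Gc)
```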
NASA Astrophysics Data System (ADS)
Purwani, Kristanti Indah; Nurhatika, Sri; Ermavitalini, Dini; Saputro, Triono Bagus; Budiarti, Dwi Setia
2017-06-01
Bioinsecticide formulation involves the addition of an adjuvant to improve effectiveness in application; the adjuvant only assists the action, while the active compound, the core ingredient, originates from plant simplicia. This research used bintaro (Cerbera odollam) as the simplicia. Bintaro had already been used as a bioinsecticide against armyworm (Spodoptera litura F.) in previous research, but a formulation approach had not been tested on mustard (Brassica rapa). The commercial value of mustard is commonly judged by the appearance of its leaves, so damage by pests such as armyworm can reduce that value. This research therefore aimed to determine the effectiveness of a liquid bioinsecticide formulation, with bintaro (Cerbera odollam) leaf extract as the active ingredient, in suppressing attack by S. litura F. larvae. Larvae were deployed on mustard leaves (16 days after planting, HST). The liquid bioinsecticide was formulated at concentrations of 30%, 40%, 50%, 60%, and 70%. Spraying against S. litura F. was carried out preventively (15 HST) and curatively (17 HST). Leaf damage was observed at day 35 (HST). The results showed that the formulation suppressed the larvae from a concentration of 40% when applied preventively at 15 HST and from 60% when applied curatively at 17 HST.
Influence of Differing Analgesic Formulations of Aspirin on Pharmacokinetic Parameters.
Kanani, Kunal; Gatoulis, Sergio C; Voelker, Michael
2015-08-03
Aspirin has been used therapeutically for over 100 years. As the originator and an important marketer of aspirin-containing products, Bayer's clinical trial database contains numerous reports of the pharmacokinetics of various aspirin formulations. These include evaluations of plain tablets, effervescent tablets, granules, chewable tablets, and fast-release tablets. This publication seeks to expand upon the available pharmacokinetic information concerning aspirin formulations. In the pre-systemic circulation, acetylsalicylic acid (ASA) is rapidly converted into its main active metabolite, salicylic acid (SA). Therefore, both substances are measured in plasma and reported in the results. The 500 mg strength of each formulation was chosen for analysis as this is the most commonly used for analgesia. A total of 22 studies were included in the analysis. All formulations of 500 mg aspirin result in comparable plasma exposure to ASA and SA as evidenced by AUC. Tablets and dry granules provide a consistently lower Cmax compared to effervescent, granules in suspension and fast release tablets. Effervescent tablets, fast release tablets, and granules in suspension provide a consistently lower median Tmax compared to dry granules and tablets for both ASA and SA. This report reinforces the importance of formulation differences and their impact on pharmacokinetic parameters.
Modeling chloride transport using travel time distributions at Plynlimon, Wales
NASA Astrophysics Data System (ADS)
Benettin, Paolo; Kirchner, James W.; Rinaldo, Andrea; Botter, Gianluca
2015-05-01
Here we present a theoretical interpretation of high-frequency, high-quality tracer time series from the Hafren catchment at Plynlimon in mid-Wales. We make use of the formulation of transport by travel time distributions to model chloride transport originating from atmospheric deposition and compute catchment-scale travel time distributions. The relevance of the approach lies in the explanatory power of the chosen tools, particularly to highlight hydrologic processes otherwise clouded by the integrated nature of the measured outflux signal. The analysis reveals the key role of residual storages that are poorly visible in the hydrological response, but are shown to strongly affect water quality dynamics. Our calibrated model reproduces the data with considerable accuracy. A detailed representation of catchment-scale travel time distributions has been derived, including the time evolution of the overall dispersion processes (which can be expressed in terms of time-varying storage sampling functions). Mean computed travel times span a broad range of values (from 80 to 800 days) depending on the catchment state. Results also suggest that, on average, discharge waters are younger than storage water. The model proves able to capture high-frequency fluctuations in the measured chloride concentrations, which are broadly explained by the sharp transition between groundwaters and faster flows originating from topsoil layers.
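The travel-time-distribution view of tracer transport used here can be conveyed with a minimal sketch: the stream concentration is a convolution of past input concentrations with a travel time distribution. The sketch below assumes a stationary exponential distribution and a synthetic chloride input purely for illustration; the study works with time-varying, storage-dependent distributions (storage sampling functions).

```python
# Minimal sketch of transport by a travel time distribution (TTD).
# A stationary exponential TTD and synthetic inputs are assumed; the paper
# uses time-varying, storage-dependent distributions.
import numpy as np

dt = 1.0                              # day
t = np.arange(0, 2000, dt)
mean_tt = 300.0                       # assumed mean travel time (days)

rng = np.random.default_rng(0)
c_in = 4.0 + 1.5 * np.sin(2 * np.pi * t / 365.0) + rng.normal(0, 0.8, t.size)

p = np.exp(-t / mean_tt) / mean_tt    # exponential TTD p(tau)

# Outflow concentration: C_Q(t) = integral of C_in(t - tau) * p(tau) d tau.
c_out = np.convolve(c_in, p)[: t.size] * dt
print(c_out[-5:])                     # damped, lagged version of the input signal
```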
Nuclear staining with alum hematoxylin.
Llewellyn, B D
2009-08-01
The hematoxylin and eosin stain is the most common method used in anatomic pathology, yet it is a method about which technologists ask numerous questions. Hematoxylin is a natural dye obtained from a tree originally found in Central America, and is easily converted into the dye hematein. This dye forms coordination compounds with mordant metals, such as aluminum, and the resulting lake attaches to cell nuclei. Regressive formulations contain a higher concentration of dye than progressive formulations and may also contain a lower concentration of mordant. The presence of an acid increases the life of the solution and in progressive solutions may also affect selectivity of staining. An appendix lists more than 60 hemalum formulations and the ratio of dye to mordant for each.
de Beer, Wayne A
2017-10-01
This paper proposes the use of the cognitive domain of Bloom's Taxonomy, an educational classification system, to guide the critical thinking required for the composition of the psychiatric formulation during the various stages of specialist training. Bloom's Taxonomy offers a hierarchical, structured approach to clinical reasoning. Use of this method can assist supervisors and trainees to understand better the concepts of and offer a developmental approach to critical reasoning. Application of the Taxonomy, using cognitive 'action words' (verbs) within each of the levels, can promote increasing sophistication in the construction of the psychiatric formulation. Examples of how the Taxonomy can be adapted to design educational resources are suggested in the article.
One-loop perturbative coupling of A and A* through the chiral overlap operator
NASA Astrophysics Data System (ADS)
Makino, Hiroki; Morikawa, Okuto; Suzuki, Hiroshi
2018-03-01
Recently, Grabowska and Kaplan constructed a four-dimensional lattice formulation of chiral gauge theories on the basis of the chiral overlap operator. At least in the tree-level approximation, the left-handed fermion is coupled only to the original gauge field A, while the right-handed one is coupled only to the gauge field A*, a deformation of A by the gradient flow with infinite flow time. In this paper, we study the fermion one-loop effective action in their formulation. We show that the continuum limit of this effective action contains local interaction terms between A and A*, even if the anomaly cancellation condition is met. These non-vanishing terms would lead to an undesired perturbative spectrum in the formulation.
The determination of elements in herbal teas and medicinal plant formulations and their tisanes.
Pohl, Pawel; Dzimitrowicz, Anna; Jedryczko, Dominika; Szymczycha-Madeja, Anna; Welna, Maja; Jamroz, Piotr
2016-10-25
Elemental analysis of herbal teas and their tisanes is aimed at assessing their quality and safety in reference to specific food safety regulations and evaluating their nutritional value. This survey is dedicated to atomic spectroscopy and mass spectrometry element detection methods and sample preparation procedures used in elemental analysis of herbal teas and medicinal plant formulations. Referring to original works from the last 15 years, particular attention has been paid to tisane preparation, sample matrix decomposition, calibration and quality assurance of results in elemental analysis of herbal teas by different atomic and mass spectrometry methods. In addition, possible sources of elements in herbal teas and medicinal plant formulations have been discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Coating formulation and method for refinishing the surface of surface-damaged graphite articles
Ardary, Z.L.; Benton, S.T.
1987-07-08
The described development is directed to a coating formulation for filling surface irregularities in graphite articles such as molds, crucibles, and matched die sets used in high-temperature metallurgical operations. The coating formulation of the present invention is formed of carbon black flour, thermosetting resin and a solvent for the resin. In affixing the coating to the article, the solvent is evaporated, the resin cured to bond the coating to the surface of the article and then pyrolyzed to convert the resin to carbon. Upon completion of the pyrolysis step, the coating is shaped and polished to provide the article with a surface restoration that is essentially similar to the original or desired surface finish without the irregularity.
Evaluation of a lake whitefish bioenergetics model
Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.
2006-01-01
We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food (ρ) was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate ρ in the field. Application of the original model to Lake Michigan lake whitefish yielded a field ρ estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field ρ estimate of 0.56, implying that this revised model underestimated consumption by 20%.
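The mass-balance logic behind adjusting the respiration term can be sketched as follows; the coefficients are placeholders rather than the Wisconsin model's species-specific parameters, and the full model resolves these terms as functions of temperature and body size.

```python
# Illustrative energy budget in the spirit of a bioenergetics model:
# Consumption = Growth + Respiration + Egestion + Excretion, with egestion and
# excretion taken as fixed fractions of consumption. Values are placeholders.

def consumption(growth, respiration, egestion_frac=0.16, excretion_frac=0.10,
                respiration_adjustment=1.0):
    """Solve C = G + R_adj + F + U, with F and U given as fractions of C."""
    r_adj = respiration_adjustment * respiration
    return (growth + r_adj) / (1.0 - egestion_frac - excretion_frac)

# Lowering respiration (as in the adjusted model) lowers estimated consumption.
print(consumption(growth=50.0, respiration=120.0))
print(consumption(growth=50.0, respiration=120.0, respiration_adjustment=0.7))

# PCB tracer check: rho = PCBs retained / PCBs contained in the prey estimated
# to have been consumed; a field rho below the laboratory value implies the
# model overestimates consumption, and vice versa.
def field_rho(pcb_accumulated, consumed_prey_mass, prey_pcb_conc):
    return pcb_accumulated / (consumed_prey_mass * prey_pcb_conc)
```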
Models, Data, and War: a Critique of the Foundation for Defense Analyses.
1980-03-12
A scientific formulation ... An "objective" solution ... Analysis of a squishy problem ... A judgmental formulation ... A potential for distortion ... A subjective ... inextricably tied to those judgments. Different analysts, with apparently identical knowledge of a real world problem, may develop plausible formulations ... "configured is a concrete theoretical statement." The formulation of a computer model: conceiving a mathematical representation of the real world.
Variable thickness transient ground-water flow model. Volume 1. Formulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reisenauer, A.E.
1979-12-01
Mathematical formulation for the variable thickness transient (VTT) model of an aquifer system is presented. The basic assumptions are described. Specific data requirements for the physical parameters are discussed. The boundary definitions and solution techniques of the numerical formulation of the system of equations are presented.
A mathematical approach to HIV infection dynamics
NASA Astrophysics Data System (ADS)
Ida, A.; Oharu, S.; Oharu, Y.
2007-07-01
In order to obtain a comprehensive form of mathematical models describing nonlinear phenomena such as HIV infection process and AIDS disease progression, it is efficient to introduce a general class of time-dependent evolution equations in such a way that the associated nonlinear operator is decomposed into the sum of a differential operator and a perturbation which is nonlinear in general and also satisfies no global continuity condition. An attempt is then made to combine the implicit approach (usually adapted for convective diffusion operators) and explicit approach (more suited to treat continuous-type operators representing various physiological interactions), resulting in a semi-implicit product formula. Decomposing the operators in this way and considering their individual properties, it is seen that approximation-solvability of the original model is verified under suitable conditions. Once appropriate terms are formulated to describe treatment by antiretroviral therapy, the time-dependence of the reaction terms appears, and such product formula is useful for generating approximate numerical solutions to the governing equations. With this knowledge, a continuous model for HIV disease progression is formulated and physiological interpretations are provided. The abstract theory is then applied to show existence of unique solutions to the continuous model describing the behavior of the HIV virus in the human body and its reaction to treatment by antiretroviral therapy. The product formula suggests appropriate discrete models describing the dynamics of host pathogen interactions with HIV1 and is applied to perform numerical simulations based on the model of the HIV infection process and disease progression. Finally, the results of our numerical simulations are visualized and it is observed that our results agree with medical and physiological aspects.
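A concrete, much-simplified instance of the semi-implicit (product-formula) idea is sketched below for a basic target-cell-limited HIV model: each compartment's own loss terms are treated implicitly while production and coupling terms are kept explicit. The model and parameter values are standard illustrative choices, not the paper's operator-theoretic formulation.

```python
# Semi-implicit (IMEX, product-formula-style) stepping of a basic HIV model;
# parameter values are illustrative only.
lam, d, beta, delta, p, c = 1e4, 0.01, 2e-7, 1.0, 100.0, 23.0
eps = 0.6                      # assumed antiretroviral efficacy, 0 <= eps < 1
beta_eff = (1.0 - eps) * beta

def step(T, I, V, dt):
    # Each compartment's loss terms are implicit in its own unknown; production
    # and coupling terms are explicit. The update stays positive for any dt > 0.
    T_new = (T + dt * lam) / (1.0 + dt * (d + beta_eff * V))
    I_new = (I + dt * beta_eff * T_new * V) / (1.0 + dt * delta)
    V_new = (V + dt * p * I_new) / (1.0 + dt * c)
    return T_new, I_new, V_new

T, I, V = 1e6, 0.0, 1e-3       # uninfected cells, infected cells, free virus
dt = 0.1
for _ in range(2000):          # 200 days
    T, I, V = step(T, I, V, dt)
print(T, I, V)
```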
Chew, Emily Y; Clemons, Traci E; Agrón, Elvira; Sperduto, Robert D; Sangiovanni, John Paul; Kurinij, Natalie; Davis, Matthew D
2013-08-01
To describe the long-term effects (10 years) of the Age-Related Eye Disease Study (AREDS) formulation of high-dose antioxidants and zinc supplement on progression of age-related macular degeneration (AMD). Multicenter, randomized, controlled, clinical trial followed by an epidemiologic follow-up study. We enrolled 4757 participants with varying severity of AMD in the clinical trial; 3549 surviving participants consented to the follow-up study. Participants were randomly assigned to antioxidants C, E, and β-carotene and/or zinc versus placebo during the clinical trial. For participants with intermediate or advanced AMD in 1 eye, the AREDS formulation delayed the progression to advanced AMD. Participants were then enrolled in a follow-up study. Eye examinations were conducted with annual fundus photographs and best-corrected visual acuity assessments. Medical histories and mortality were obtained for safety monitoring. Repeated measures logistic regression was used in the primary analyses. Photographic assessment of progression to, or history of treatment for, advanced AMD (neovascular [NV] or central geographic atrophy [CGA]), and moderate visual acuity loss from baseline (≥15 letters). Comparison of the participants originally assigned to placebo in AREDS categories 3 and 4 at baseline with those originally assigned to AREDS formulation at 10 years demonstrated a significant (P<0.001) odds reduction in the risk of developing advanced AMD or the development of NV AMD (odds ratio [OR], 0.66; 95% confidence interval [CI], 0.53-0.83 and OR, 0.60; 95% CI, 0.47-0.78, respectively). No significant reduction (P = 0.93) was seen for the CGA (OR, 1.02; 95% CI, 0.71-1.45). A significant reduction (P = 0.002) for the development of moderate vision loss was seen (OR 0.71; 95% CI, 0.57-0.88). No adverse effects were associated with the AREDS formulation. Mortality was reduced in participants assigned to zinc, especially death from circulatory diseases. Five years after the clinical trial ended, the beneficial effects of the AREDS formulation persisted for development of NV AMD but not for CGA. These results are consistent with the original recommendations that persons with intermediate or advanced AMD in 1 eye should consider taking the AREDS formulation. The authors have no proprietary or commercial interest in any of the materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Interpretation as Freud's specific action, and Bion's container-contained.
Mawson, Chris
2017-12-01
This is a paper showing how a concept central to the work of Wilfred Bion, and one of Klein's important recommendations concerning the practice of analysis with adults and small children, can both be seen in the light of Freud's earliest formulation of the origin of anxiety and the mother's first responses to her infant in distress. In the paper I suggest that these clinically influential concepts of Klein and Bion show an underlying consistency and affinity with Freud's early ideas about the management of anxiety in the mother-infant relationship, described in two of his pre-psychoanalytic writings, How Anxiety Originates (1894b), and The Project for a Scientific Psychology (1950 [1895]). The specific mode of operation of psychoanalytic interpretation is clarified by the comparisons made, with no attempt to suggest that Klein or Bion based their concepts upon these particular early formulations of Freud's. Copyright © 2017 Institute of Psychoanalysis.
Gamma ray bursts as a signature for entangled gravitational systems
NASA Astrophysics Data System (ADS)
Basini, Giuseppe; Capozziello, Salvatore; Longo, Giuseppe
2004-01-01
Gamma ray bursts (GRBs), due to their features, can be considered not only extremely energetic, but also the most relativistic astrophysical objects discovered. Their phenomenology is still a matter of debate and, until now, no fully satisfactory model has been formulated to explain the nature of their origin. In the framework of a recently developed new theory, where general conservation laws are always and absolutely conserved in nature, we propose an alternative model in which an "entangled" gravitational system, dynamically constituted by a black hole connected to a white hole through a wormhole, seems capable of explaining most of the properties inferred for the GRB engine. In particular, it leads to a natural explanation of energetics, beaming, polarization, and, very likely, distribution. On the other hand, GRBs can be considered a signature of such entangled gravitational systems.
Development of an Aeroelastic Code Based on an Euler/Navier-Stokes Aerodynamic Solver
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Stefko, George L.; Janus, Mark J.
1996-01-01
This paper describes the development of an aeroelastic code (TURBO-AE) based on an Euler/Navier-Stokes unsteady aerodynamic analysis. A brief review of the relevant research in the area of propulsion aeroelasticity is presented. The paper briefly describes the original Euler/Navier-Stokes code (TURBO) and then details the development of the aeroelastic extensions. The aeroelastic formulation is described. The modeling of the dynamics of the blade using a modal approach is detailed, along with the grid deformation approach used to model the elastic deformation of the blade. The work-per-cycle approach used to evaluate aeroelastic stability is described. Representative results used to verify the code are presented. The paper concludes with an evaluation of the development thus far, and some plans for further development and validation of the TURBO-AE code.
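The work-per-cycle criterion mentioned above can be stated compactly: integrate the generalized aerodynamic force against the modal velocity over one vibration cycle; if the flow does positive net work on the blade, the mode is flutter-unstable. The sketch below is a generic illustration with assumed amplitude, frequency, and phase lag, not the TURBO-AE implementation.

```python
# Generic work-per-cycle evaluation for a single structural mode (illustrative).
import numpy as np

omega = 2 * np.pi * 50.0                 # vibration frequency (rad/s)
t = np.linspace(0, 2 * np.pi / omega, 2001)

q = 1e-3 * np.sin(omega * t)             # modal displacement (assumed amplitude)
q_dot = np.gradient(q, t)                # modal velocity

phase_lag = np.deg2rad(30.0)             # assumed unsteady aerodynamic phase lag
f_aero = 0.5 * np.sin(omega * t - phase_lag)   # generalized aerodynamic force

# Net work done by the flow on the blade over one cycle.
work_per_cycle = np.trapz(f_aero * q_dot, t)
print("unstable (flutter)" if work_per_cycle > 0 else "stable", work_per_cycle)
```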
[Application of an artificial neural network in the design of sustained-release dosage forms].
Wei, X H; Wu, J J; Liang, W Q
2001-09-01
To use the artificial neural network (ANN) in the Matlab 5.1 toolboxes to predict the formulations of sustained-release tablets. The solubilities of nine drugs and various HPMC:dextrin ratios for 63 tablet formulations were used as the ANN model input, and the in vitro cumulative release at 6 sampling times was used as output. The ANN model was constructed by selecting the optimal number of iterations (25) and a model structure with one hidden layer containing five nodes. The optimized ANN model was used to predict formulations from desired target in vitro dissolution-time profiles. ANN-predicted profiles based on ANN-predicted formulations were closely similar to the target profiles. The ANN could be used for predicting the dissolution profiles of sustained-release dosage forms and for the design of optimal formulations.
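The described network (one hidden layer with five nodes, two inputs, six outputs) can be mirrored with modern tools; the sketch below uses scikit-learn and synthetic training data in place of the MATLAB 5.1 toolbox and the study's 63 formulations, so all numbers are placeholders.

```python
# Sketch of the described architecture with scikit-learn and synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 63
solubility = rng.uniform(0.01, 50.0, n)        # drug solubility, mg/mL (synthetic)
hpmc_dextrin = rng.uniform(0.2, 5.0, n)        # HPMC:dextrin ratio (synthetic)
X = np.column_stack([solubility, hpmc_dextrin])

# Synthetic cumulative release (%) at 6 sampling times; release slows as the
# HPMC fraction increases.
times = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 12.0])
k = 0.3 * np.log1p(solubility)[:, None] / (1.0 + hpmc_dextrin)[:, None]
Y = 100.0 * (1.0 - np.exp(-k * times[None, :]))

model = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
model.fit(X, Y)

# Predicted release profile for a candidate formulation.
print(model.predict([[10.0, 1.5]]).round(1))
```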
NASA Astrophysics Data System (ADS)
Olen, Melissa; Geisow, Adrian; Parraman, Carinna
2015-01-01
This paper examines the transferability of the Munsell system to modern inkjet colorants and printing technology, following an approach similar to his original methods. While extensive research and development has gone into establishing methods for measuring and modelling the modern colour gamut, this study seeks to reintegrate the psychophysical and artistic principles used in Munsell's early colour studies with digital print. Contemporary inkjet prints, made with ink sets containing a greater number of primary colorants, are significantly higher in chroma than what was possible with the limited colorants available at the time of Munsell's original work. Following Munsell's design and implementation, our experiments replicate the use of Clerk-Maxwell's spinning disks in order to examine the effects of colour mixing with these expanded colour capacities, and to determine hue distribution and placement. This work revisits Munsell's project in light of known issues, and formulates questions about how we can reintegrate Munsell's approach for colour description and mixing into modern colour science, understanding, and potential application.
NASA Astrophysics Data System (ADS)
Belikov, Dmitry A.; Maksyutov, Shamil; Yaremchuk, Alexey; Ganshin, Alexander; Kaminski, Thomas; Blessing, Simon; Sasakawa, Motoki; Gomez-Pelaez, Angel J.; Starchenko, Alexander
2016-02-01
We present the development of the Adjoint of the Global Eulerian-Lagrangian Coupled Atmospheric (A-GELCA) model that consists of the National Institute for Environmental Studies (NIES) model as an Eulerian three-dimensional transport model (TM), and FLEXPART (FLEXible PARTicle dispersion model) as the Lagrangian Particle Dispersion Model (LPDM). The forward tangent linear and adjoint components of the Eulerian model were constructed directly from the original NIES TM code using an automatic differentiation tool known as TAF (Transformation of Algorithms in Fortran; http://www.FastOpt.com), with additional manual pre- and post-processing aimed at improving the transparency and clarity of the code and optimizing the performance of the computation, including MPI (Message Passing Interface). The Lagrangian component did not require any code modification, as LPDMs are self-adjoint and track a significant number of particles backward in time in order to calculate the sensitivity of the observations to the neighboring emission areas. The constructed Eulerian adjoint was coupled with the Lagrangian component at a time boundary in the global domain. The simulations presented in this work were performed using the A-GELCA model in forward and adjoint modes. The forward simulation shows that the coupled model improves reproduction of the seasonal cycle and short-term variability of CO2. Mean bias and standard deviation for five of the six Siberian sites considered decrease roughly by 1 ppm when using the coupled model. The adjoint of the Eulerian model was shown, through several numerical tests, to be very accurate (mismatches of around ±6e-14, i.e., close to machine epsilon) compared to direct forward sensitivity calculations. The developed adjoint of the coupled model combines the flux conservation and stability of an Eulerian discrete adjoint formulation with the flexibility, accuracy, and high resolution of a Lagrangian backward trajectory formulation. A-GELCA will be incorporated into a variational inversion system designed to optimize surface fluxes of greenhouse gases.
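One routine way to confirm the kind of adjoint accuracy quoted above is the dot-product (inner-product) test: for a tangent-linear operator M and its adjoint, <Mx, y> must equal <x, M^T y> to round-off. The sketch below uses random placeholder matrices rather than the NIES TM operators.

```python
# Dot-product test for adjoint consistency (placeholder matrices, not NIES TM).
import numpy as np

rng = np.random.default_rng(2)
n = 200
M = rng.normal(size=(n, n))      # stand-in for the tangent-linear model operator
M_adj = M.T                      # stand-in for the coded adjoint

x = rng.normal(size=n)           # perturbation of fluxes / initial state
y = rng.normal(size=n)           # perturbation of observations

lhs = np.dot(M @ x, y)
rhs = np.dot(x, M_adj @ y)
print("relative mismatch:", abs(lhs - rhs) / abs(lhs))   # ~ machine epsilon
```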
Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases
Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N.
2016-01-01
A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed. PMID:26858668
"All that Matter ... in One Big Bang ...", & Other Cosmological Singularities
NASA Astrophysics Data System (ADS)
Elizalde, Emilio
2018-02-01
The first part of this paper contains a brief description of the beginnings of modern cosmology, which, the author will argue, was most likely born in the year 1912. Some of the pieces of evidence presented here have emerged from recent research in the history of science, and are not usually shared with general audiences in popular science books. In particular, the issue of the correct formulation of the original Big Bang concept, according to the precise words of Fred Hoyle, is discussed. Too often, this point is explained very deficiently (when not outright misleadingly) in most of the available generalist literature. Other frequent uses of the same words, Big Bang, to name the initial singularity of the cosmos, and also whole cosmological models, are then addressed, as evolutions of its original meaning. Quantum and inflationary additions to the celebrated singularity theorems by Penrose, Geroch, Hawking and others led to subsequent results by Borde, Guth and Vilenkin. And corresponding corrections to the Einstein field equations have originated, in particular, $R^2$, $f(R)$, and scalar-tensor gravities, giving rise to a plethora of new singularities. For completeness, an updated table with a classification of the same is given.
Emergent Theorisations in Modelling the Teaching of Two Science Teachers
NASA Astrophysics Data System (ADS)
Monteiro, Rute; Carrillo, José; Aguaded, Santiago
2008-05-01
The main goal of this study is to understand the teacher’s thoughts and action when he/she is immersed in the activity of teaching. To do so, it describes the procedures used to model two teachers’ practice with respect to the topic of Plant Diversity. Starting from a consideration of the theoretical constructs of script, routine and improvisation, this modelling basically corresponds to a microanalysis of the teacher’s beliefs, goals and knowledge, as highlighted in the classroom activity. From the process of modelling certain theorisations emerge, corresponding to abstractions gained from concrete cases. They allow us to foreground strong relationships between the beliefs and actions, and the knowledge and objectives of the teacher in action. Envisaged as conjectures rather than generalisations, these abstractions could possibly be extended to other cases, and tested out with new case studies, questioning their formulation or perhaps demonstrating that the limits of their applicability do not go beyond the original cases.
A fast community detection method in bipartite networks by distance dynamics
NASA Astrophysics Data System (ADS)
Sun, Hong-liang; Ch'ng, Eugene; Yong, Xi; Garibaldi, Jonathan M.; See, Simon; Chen, Duan-bing
2018-04-01
Many real bipartite networks are found to be divided into two-mode communities. In this paper, we formulate a new two-mode community detection algorithm, BiAttractor. It is based on the distance dynamics model Attractor proposed by Shao et al., extended from unipartite to bipartite networks. Since the Jaccard coefficient used in the distance dynamics model cannot measure distances between vertices of different types in bipartite networks, our main contribution is to extend the distance dynamics model from unipartite to bipartite networks using a novel measure, the Local Jaccard Distance (LJD). Furthermore, distances between different types of vertices are not affected by common neighbors in the original method. This new idea makes clear assumptions and yields interpretable results in linear time complexity O(|E|) in sparse networks, where |E| is the number of edges. Experiments on synthetic networks demonstrate that it is capable of overcoming the resolution limit of other existing methods. Further research on real networks shows that this model can accurately detect interpretable community structures in a short time.
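The paper's Local Jaccard Distance is not reproduced here, but the problem it addresses is easy to demonstrate: in a bipartite graph the neighborhoods of two vertices of different types never overlap, so their plain Jaccard coefficient is always zero. The sketch below shows this and one plausible (hypothetical, not the paper's) cross-type comparison based on two-hop neighborhoods.

```python
# Plain Jaccard fails across vertex types in a bipartite graph; a hypothetical
# two-hop comparison is shown for contrast (this is not the paper's LJD).
import networkx as nx

B = nx.Graph()
B.add_edges_from([("u1", "i1"), ("u1", "i2"), ("u2", "i1"),
                  ("u2", "i3"), ("u3", "i2"), ("u3", "i3")])

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

u, i = "u1", "i1"
print(jaccard(set(B[u]), set(B[i])))          # always 0.0 across types

# Compare the item's neighborhood (users) with the users reachable from u in
# two hops (users who share an item with u).
two_hop_users = set().union(*(set(B[item]) for item in B[u]))
print(jaccard(set(B[i]), two_hop_users))
```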
NASA Astrophysics Data System (ADS)
Ballantyne, F.; Billings, S. A.
2016-12-01
Much of the variability in projections of Earth's future C balance derives from uncertainty in how to formulate and parameterize models of biologically mediated transformations of soil organic C (SOC). Over the past decade, models of belowground decomposition have incorporated more realism, namely microbial biomass and exoenzyme pools, but it remains unclear whether microbially mediated decomposition is accurately formulated. Different models and different assumptions about how microbial efficiency, defined in terms of respiratory losses, varies with temperature exert great influence on SOC and CO2 flux projections for the future. Here, we incorporate a physiologically realistic formulation of CO2 loss from microbes, distinct from extant formulations and logically consistent with microbial C uptake and losses, into belowground dynamics and contrast its projections for SOC pools and CO2 flux from soils to those from the phenomenological formulations of efficiency in current models. We quantitatively describe how short and long term SOC dynamics are influenced by different mathematical formulations of efficiency, and that our lack of knowledge regarding loss rates from SOC and microbial biomass pools, specific respiration rate and maximum substrate uptake rate severely constrains our ability to confidently parameterize microbial SOC modules in Earth System Models. Both steady-state SOC and microbial biomass C pools, as well as transient responses to perturbations, can differ substantially depending on how microbial efficiency is derived. In particular, the discrepancy between SOC stocks for different formulations of efficiency varies from negligible to more than two orders of magnitude, depending on the relative values of respiratory versus non-respiratory losses from microbial biomass. Mass-specific respiration and proportional loss rates from soil microbes emerge as key determinants of the consequences of different formulations of efficiency for C flux in soils.
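A minimal microbial-explicit model makes the sensitivity discussed above concrete: with uptake split between growth and respiration by an efficiency (CUE) term, the steady-state respiration simply balances inputs, while the SOC and biomass pools depend strongly on the efficiency value. Parameters below are illustrative, not the study's.

```python
# Minimal microbial-explicit SOC model showing how carbon use efficiency (CUE)
# propagates to pools and respiration; parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

I, Vmax, Km, kB = 1.0, 10.0, 250.0, 0.02   # inputs, uptake kinetics, biomass turnover

def rhs(t, y, cue):
    S, B = y
    uptake = Vmax * B * S / (Km + S)        # Michaelis-Menten substrate uptake
    dS = I + kB * B - uptake                # inputs + dead biomass - uptake
    dB = cue * uptake - kB * B              # growth (efficiency * uptake) - turnover
    return [dS, dB]

for cue in (0.3, 0.6):
    sol = solve_ivp(rhs, (0.0, 5000.0), [100.0, 2.0], args=(cue,), rtol=1e-8)
    S, B = sol.y[:, -1]
    respiration = (1.0 - cue) * Vmax * B * S / (Km + S)
    print(f"CUE={cue}: SOC={S:.1f}, biomass={B:.2f}, respiration={respiration:.2f}")
```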
Habjanec, Lidija; Frkanec, Ruza; Halassy, Beata; Tomasić, Jelka
2006-01-01
The adjuvant activity of liposomes and immunostimulating peptidoglycan monomer (PGM) in different formulations has been studied in mice model using ovalbumin (OVA) as an antigen. PGM is a natural compound of bacterial origin with well-defined chemical structure: GlcNAc-MurNAc-L-Ala-D-isoGln-mesoDpm(epsilonNH2)-D-Ala-D-Ala. It is a non-toxic, non-pyrogenic, and water-soluble immunostimulator. The aim of this study was to investigate the influence of different liposomal formulations of OVA, with or without PGM, on the production of total IgG, as well as of IgG1 and IgG2a subclasses of OVA-specific antibodies (as indicators of Th2 and Th1 type of immune response, respectively). CBA mice were immunized s.c. with OVA mixed with liposomes, OVA with PGM mixed with liposomes, OVA encapsulated into liposomes and OVA with PGM encapsulated into liposomes. Control groups were OVA in saline, OVA with PGM in saline, and OVA in CFA/IFA adjuvant formulation. The entrapment efficacy of OVA was monitored by HPLC method. The adjuvant activity of the mixture of OVA and empty liposomes, the mixture of OVA, PGM, and liposomes and PGM encapsulated with OVA into liposomes on production of total anti-OVA IgG was demonstrated. The mixture of PGM and liposomes exhibited additive immunostimulating effect on the production of antigen-specific IgGs. The analysis of IgG subclasses revealed that encapsulation of OVA into liposomes favors the stimulation of IgG2a antibodies, indicating the switch toward the Th1 type of immune response. When encapsulated into liposomes or mixed with liposomes, PGM induced a switch from Th1 to Th2 type of immune response. It could be concluded that appropriate formulations of antigen, PGM, and liposomes differently affect the humoral immune response and direct the switch in the type of immune response (Th1/Th2).
BRST theory without Hamiltonian and Lagrangian
NASA Astrophysics Data System (ADS)
Lyakhovich, S. L.; Sharapov, A. A.
2005-03-01
We consider a generic gauge system, whose physical degrees of freedom are obtained by restriction on a constraint surface followed by factorization with respect to the action of gauge transformations; in so doing, no Hamiltonian structure or action principle is supposed to exist. For such a generic gauge system we construct a consistent BRST formulation, which includes the conventional BV Lagrangian and BFV Hamiltonian schemes as particular cases. If the original manifold carries a weak Poisson structure (a bivector field giving rise to a Poisson bracket on the space of physical observables) the generic gauge system is shown to admit deformation quantization by means of the Kontsevich formality theorem. A sigma-model interpretation of this quantization algorithm is briefly discussed.
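For readers unfamiliar with the term, the weak Poisson structure referred to above can be stated compactly (notation assumed here, with the conditions paraphrased): the bivector $P$ defines a bracket on functions whose Jacobi identity is required to hold only on-shell, that is, up to terms vanishing on the constraint surface modulo the gauge distribution.

```latex
\{f,g\} \;=\; P^{ij}\,\partial_i f\,\partial_j g ,
\qquad
\{f,\{g,h\}\} \;+\; \{g,\{h,f\}\} \;+\; \{h,\{f,g\}\} \;\approx\; 0 ,
```

where $\approx$ denotes equality on the constraint surface up to gauge-generator terms.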
Instability of the cored barotropic disc: the linear eigenvalue formulation
NASA Astrophysics Data System (ADS)
Polyachenko, E. V.
2018-05-01
Gaseous rotating razor-thin discs are a testing ground for theories of spiral structure that try to explain appearance and diversity of disc galaxy patterns. These patterns are believed to arise spontaneously under the action of gravitational instability, but calculations of its characteristics in the gas are mostly obscured. The paper suggests a new method for finding the spiral patterns based on an expansion of small amplitude perturbations over Lagrange polynomials in small radial elements. The final matrix equation is extracted from the original hydrodynamical equations without the use of an approximate theory and has a form of the linear algebraic eigenvalue problem. The method is applied to a galactic model with the cored exponential density profile.
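Once the perturbation equations have been projected onto the radial elements, finding the spiral modes amounts to a matrix eigenvalue problem of the form A x = omega B x. The sketch below only illustrates that final algebraic step with random placeholder matrices; the actual disc operators are assembled as described in the paper.

```python
# Generic sketch of the final step: solve a generalized eigenvalue problem.
# Matrices are random placeholders, not the disc operators of the paper.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(3)
n = 50
A = rng.normal(size=(n, n))
B = np.eye(n) + 0.1 * rng.normal(size=(n, n))

omega, modes = eig(A, B)

# For perturbations proportional to exp(-i*omega*t), growing (unstable) modes
# have Im(omega) > 0.
idx = np.argmax(omega.imag)
print("fastest-growing mode: omega =", omega[idx])
```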
3D Compressible Melt Transport with Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Dannberg, Juliane; Heister, Timo
2015-04-01
Melt generation and migration have been the subject of numerous investigations, but their typical time and length-scales are vastly different from mantle convection, which makes it difficult to study these processes in a unified framework. The equations that describe coupled Stokes-Darcy flow have been derived a long time ago and they have been successfully implemented and applied in numerical models (Keller et al., 2013). However, modelling magma dynamics poses the challenge of highly non-linear and spatially variable material properties, in particular the viscosity. Applying adaptive mesh refinement to this type of problems is particularly advantageous, as the resolution can be increased in mesh cells where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. In addition, previous models neglect the compressibility of both the solid and the fluid phase. However, experiments have shown that the melt density change from the depth of melt generation to the surface leads to a volume increase of up to 20%. Considering these volume changes in both phases also ensures self-consistency of models that strive to link melt generation to processes in the deeper mantle, where the compressibility of the solid phase becomes more important. We describe our extension of the finite-element mantle convection code ASPECT (Kronbichler et al., 2012) that allows for solving additional equations describing the behaviour of silicate melt percolating through and interacting with a viscously deforming host rock. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. This approach includes both melt migration and melt generation with the accompanying latent heat effects. We evaluate the functionality and potential of this method using a series of simple model setups and benchmarks, comparing results of the compressible and incompressible formulation and showing the potential of adaptive mesh refinement when applied to melt migration. Our model of magma dynamics provides a framework for modelling processes on different scales and investigating links between processes occurring in the deep mantle and melt generation and migration. This approach could prove particularly useful applied to modelling the generation of komatiites or other melts originating in greater depths. Keller, T., D. A. May, and B. J. P. Kaus (2013), Numerical modelling of magma dynamics coupled to tectonic deformation of lithosphere and crust, Geophysical Journal International, 195 (3), 1406-1442. Kronbichler, M., T. Heister, and W. Bangerth (2012), High accuracy mantle convection simulation through modern numerical methods, Geophysical Journal International, 191 (1), 12-29.
Reliability and validity of the Dutch version of the Readiness to Change Questionnaire.
Defuentes-Merillas, L; Dejong, C A J; Schippers, G M
2002-01-01
The aim of the present study was to evaluate the psychometric properties of the Dutch version of the Readiness to Change Questionnaire (RCQ-D). The subjects were 246 excessive drinkers admitted to an addiction treatment centre and 54 offenders convicted of an alcohol-related crime in The Netherlands. The factor structure of the RCQ-D for the two samples combined was found to be consistent with the three-factor structure established for the original RCQ. The reliability of the items for each scale was found to be satisfactory. Allocated stage of change showed significant differences between the different subsamples. As expected, the scale scores for adjacent stages of change showed significantly higher inter-correlations than the scale scores for non-adjacent stages. Additionally, the negatively formulated items from the pre-contemplation scale were reformulated positively and their internal consistency tested among the offender sample. The positively formulated pre-contemplation items showed a higher alpha value than the negatively formulated items. We therefore suggest that the positively formulated items should replace the negatively formulated ones.
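Item-level internal consistency of the kind reported here is usually summarized with Cronbach's alpha; the short sketch below computes it from synthetic 5-point responses (not the RCQ-D data).

```python
# Cronbach's alpha from an (n_respondents x n_items) response matrix;
# synthetic data, not the RCQ-D sample.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(4)
trait = rng.normal(size=300)                          # latent readiness to change
items = np.clip(np.rint(3 + trait[:, None] + rng.normal(0, 0.8, (300, 4))), 1, 5)
print(round(cronbach_alpha(items), 2))
```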
Heberton, C.I.; Russell, T.F.; Konikow, Leonard F.; Hornberger, G.Z.
2000-01-01
This report documents the U.S. Geological Survey Eulerian-Lagrangian Localized Adjoint Method (ELLAM) algorithm that solves an integral form of the solute-transport equation, incorporating an implicit-in-time difference approximation for the dispersive and sink terms. Like the algorithm in the original version of the U.S. Geological Survey MOC3D transport model, ELLAM uses a method of characteristics approach to solve the transport equation on the basis of the velocity field. The ELLAM algorithm, however, is based on an integral formulation of conservation of mass and uses appropriate numerical techniques to obtain global conservation of mass. The implicit procedure eliminates several stability criteria required for an explicit formulation. Consequently, ELLAM allows large transport time increments to be used. ELLAM can produce qualitatively good results using a small number of transport time steps. A description of the ELLAM numerical method, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. The ELLAM algorithm was evaluated for the same set of problems used to test and evaluate Version 1 and Version 2 of MOC3D. These test results indicate that ELLAM offers a viable alternative to the explicit and implicit solvers in MOC3D. Its use is desirable when mass balance is imperative or a fast, qualitative model result is needed. Although accurate solutions can be generated using ELLAM, its efficiency relative to the two previously documented solution algorithms is problem dependent.
Zhang, Xudong; Zeng, Xiaowei; Liang, Xin; Yang, Ying; Li, Xiaoming; Chen, Hongbo; Huang, Laiqiang; Mei, Lin; Feng, Si-Shen
2014-11-01
Micelles may be the nanocarrier used most often in the area of nanomedicine due to their promising performance and technical simplicity. However, like the original drugs, micellar formulations may provoke intracellular autophagy, which undermines their advantages for efficient drug delivery. No report in the literature has addressed the fate of micelles after they are successfully internalized into cancer cells. In this study, we show, using docetaxel-loaded PEG-b-PLGA micelles as a micellar model, that the micelles do provoke intracellular autophagy and are thus subject to degradation through the endo-lysosome pathway. Moreover, we show that co-administration of the micellar formulation with an autophagy inhibitor such as chloroquine (CQ) could significantly enhance their therapeutic effects. The docetaxel-loaded PEG-b-PLGA micelles were formulated by the membrane dialysis method, with 7.1% drug loading and 72.8% drug encapsulation efficiency in a size range of around 40 nm with narrow size distribution. Autophagy degradation and inhibition were investigated by confocal laser scanning microscopy with various biological markers. We show that the IC50 values of the drug formulated in the PEG-b-PLGA micelles after 24 h treatment of MCF-7 cancer cells with no autophagy inhibitor or in combination with CQ were 22.30 ± 1.32 and 1.75 ± 0.43 μg/mL, respectively, indicating a 12-fold more efficient treatment with CQ. The in vivo investigation further confirmed the advantages of such a strategy. The findings may provide advanced knowledge for the development of nanomedicine for clinical application. Copyright © 2014 Elsevier Ltd. All rights reserved.
Gibiansky, Leonid; Gibiansky, Ekaterina
2018-02-01
The emerging discipline of mathematical pharmacology occupies the space between advanced pharmacometrics and systems biology. A characteristic feature of the approach is the application of advanced mathematical methods to study the behavior of biological systems as described by mathematical (most often differential) equations. One of the early applications of mathematical pharmacology (although it was not called by this name at the time) was the formulation and investigation of the target-mediated drug disposition (TMDD) model and its approximations. The model was shown to be remarkably successful, not only in describing the observed data for drug-target interactions, but also in advancing the qualitative and quantitative understanding of those interactions and their role in the pharmacokinetic and pharmacodynamic properties of biologics. The TMDD model in its original formulation describes the interaction of a drug that has one binding site with a target that also has only one binding site. Following the framework developed earlier for drugs with one-to-one binding, this work aims to describe a rigorous approach for working with similar systems and to apply it to drugs that bind to targets with two binding sites. The quasi-steady-state, quasi-equilibrium, irreversible binding, and Michaelis-Menten approximations of the model are also derived. These equations can be used, in particular, to predict concentrations of the partially bound target (RC). This could be clinically important if RC remains active and has a slow internalization rate. In this case, introduction of a drug intended to suppress target activity may lead to the opposite effect due to RC accumulation.
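For orientation, the sketch below integrates the classic one-to-one TMDD system (free drug C, free target R, complex RC) that serves as the starting framework described above; the rate constants and dose are illustrative assumptions, not values from the paper, and the two-binding-site extension itself is not reproduced here.

    # Minimal sketch of the one-to-one TMDD model: binding of drug C to target R
    # forming complex RC, with target turnover and complex internalization.
    import numpy as np
    from scipy.integrate import solve_ivp

    kel, kon, koff = 0.1, 1.0, 0.01      # drug elimination, binding on/off rates (assumed)
    ksyn, kdeg, kint = 1.0, 0.2, 0.05    # target synthesis, degradation, internalization (assumed)

    def tmdd(t, y):
        C, R, RC = y
        bind = kon * C * R - koff * RC
        dC = -kel * C - bind
        dR = ksyn - kdeg * R - bind
        dRC = bind - kint * RC
        return [dC, dR, dRC]

    y0 = [10.0, ksyn / kdeg, 0.0]        # bolus dose into C, target at baseline
    sol = solve_ivp(tmdd, (0.0, 100.0), y0)
    print(sol.y[:, -1])                  # final free drug, free target, complex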
NASA Astrophysics Data System (ADS)
Sumihara, K.
Based upon legitimate variational principles, a microscopic-macroscopic finite element formulation for linear dynamics is presented using the hybrid stress finite element method. The microscopic application of the Geometric Perturbation introduced by Pian and the introduction of an infinitesimal limit core element (Baby Element) are consistently combined according to the flexible and inherent interpretation of the legitimate variational principles originally developed by Pian and Tong. The conceptual development based upon the hybrid finite element method is extended to linear dynamics with the introduction of physically meaningful higher modes.
NASA Technical Reports Server (NTRS)
Lofton, Rodney
2010-01-01
This presentation describes the process used to collect, review, integrate, and assess research requirements intended to be part of research and payload activities conducted on the ISS. The presentation describes where the requirements originate, to whom they are submitted, how they are integrated into a requirements plan, and how that integrated plan is formulated and approved. After reviewing this presentation, the reader should understand the planning process that formulates payload requirements into an integrated plan used to specify research activities to take place on the ISS.
Case formulation and management using pattern-based formulation (PBF) methodology: clinical case 1.
Fernando, Irosh; Cohen, Martin
2014-02-01
A tool for psychiatric case formulation known as pattern-based formulation (PBF) has recently been introduced. This paper presents an application of this methodology to formulating and managing complex clinical cases. The symptomatology of the clinical presentation is parsed into individual clinical phenomena and interpreted by selecting explanatory models. The clinical presentation demonstrates how PBF can be used as a clinical tool to guide clinicians' thinking, taking a structured approach to managing multiple issues with a broad range of management strategies. In doing so, the paper also introduces a number of patterns related to the observed clinical phenomena that can be re-used as explanatory models when formulating other clinical cases. It is expected that this paper will assist clinicians, and particularly trainees, to better understand the PBF methodology and apply it to improve their formulation skills.
Godugu, Chandraiah; Doddapaneni, Ravi; Safe, Stephen H.; Singh, Mandip
2017-01-01
The present study demonstrates the promising anticancer effects of the novel C-substituted diindolylmethane (DIM) derivatives DIM-10 and DIM-14 in aggressive TNBC models. In vitro studies demonstrated that these compounds possess strong anticancer effects. Caco-2 permeability studies showed poor permeability, and poor oral bioavailability was demonstrated by pharmacokinetic studies. Nanostructured lipid carrier (NLC) formulations were prepared to increase the clinical acceptance of these compounds. A significant increase in oral bioavailability was observed with the NLC formulations. Compared to unformulated DIM-10, the DIM-10 NLC formulation showed increases in Cmax and AUC values of 4.73- and 11.19-fold, respectively. A similar pattern of increase was observed with the DIM-14 NLC formulation. In dogs, the DIM-10 NLC formulation showed increases of 2.65- and 2.94-fold in Cmax and AUC, respectively. The anticancer studies in MDA-MB-231 orthotopic TNBC models demonstrated significant reductions in tumor volumes in DIM-10 and DIM-14 NLC treated animals. Our studies suggest that NLC formulations of both DIM-10 and DIM-14 are effective in TNBC models. PMID:27586082
Strehlenert, H; Richter-Sundberg, L; Nyström, M E; Hasson, H
2015-12-08
Evidence has come to play a central role in health policymaking. However, policymakers tend to use other types of information besides research evidence. Most prior studies on evidence-informed policy have focused on the policy formulation phase without a systematic analysis of its implementation. It has been suggested that in order to fully understand the policy process, the analysis should include both policy formulation and implementation. The purpose of the study was to explore and compare two policies aiming to improve health and social care in Sweden and to empirically test a new conceptual model for evidence-informed policy formulation and implementation. Two concurrent national policies were studied during the entire policy process using a longitudinal, comparative case study approach. Data was collected through interviews, observations, and documents. A Conceptual Model for Evidence-Informed Policy Formulation and Implementation was developed based on prior frameworks for evidence-informed policymaking and policy dissemination and implementation. The conceptual model was used to organize and analyze the data. The policies differed regarding the use of evidence in the policy formulation and the extent to which the policy formulation and implementation phases overlapped. Similarities between the cases were an emphasis on capacity assessment, modified activities based on the assessment, and a highly active implementation approach relying on networks of stakeholders. The Conceptual Model for Evidence-Informed Policy Formulation and Implementation was empirically useful to organize the data. The policy actors' roles and functions were found to have a great influence on the choices of strategies and collaborators in all policy phases. The Conceptual Model for Evidence-Informed Policy Formulation and Implementation was found to be useful. However, it provided insufficient guidance for analyzing actors involved in the policy process, capacity-building strategies, and overlapping policy phases. A revised version of the model that includes these aspects is suggested.
Semantic concept-enriched dependence model for medical information retrieval.
Choi, Sungbin; Choi, Jinwook; Yoo, Sooyoung; Kim, Heechun; Lee, Youngho
2014-02-01
In medical information retrieval research, semantic resources have been mostly used by expanding the original query terms or estimating the concept importance weight. However, implicit term-dependency information contained in semantic concept terms has been overlooked or at least underused in most previous studies. In this study, we incorporate a semantic concept-based term-dependence feature into a formal retrieval model to improve its ranking performance. Standardized medical concept terms used by medical professionals were assumed to have implicit dependency within the same concept. We hypothesized that, by elaborately revising the ranking algorithms to favor documents that preserve those implicit dependencies, the ranking performance could be improved. The implicit dependence features are harvested from the original query using MetaMap. These semantic concept-based dependence features were incorporated into a semantic concept-enriched dependence model (SCDM). We designed four different variants of the model, with each variant having distinct characteristics in the feature formulation method. We performed leave-one-out cross validations on both a clinical document corpus (TREC Medical records track) and a medical literature corpus (OHSUMED), which are representative test collections in medical information retrieval research. Our semantic concept-enriched dependence model consistently outperformed other state-of-the-art retrieval methods. Analysis shows that the performance gain has occurred independently of the concept's explicit importance in the query. By capturing implicit knowledge with regard to the query term relationships and incorporating them into a ranking model, we could build a more robust and effective retrieval model, independent of the concept importance. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Peng; Bai, Xian-Xu; Qian, Li-Jun; Choi, Seung-Bok
2017-06-01
This paper presents a new hysteresis model based on the force-displacement characteristics of magnetorheological (MR) fluid actuators (or devices) operated in squeeze mode. The idea of the proposed model originates from experimental observations of the field-dependent hysteretic behavior of MR fluids, which show that, from the viewpoint of rate-independent hysteresis, a gap-width-dependent hysteresis occurs in the force-displacement relationship rather than in the typical force-velocity relationship. To effectively and accurately portray the hysteresis behavior, gap-width-dependent hysteresis elements, the nonlinear viscous effect and the inertial effect are considered in the formulation of the hysteresis model. A model-based feedforward force tracking control scheme is then established through an observer that estimates the virtual displacement. The effectiveness of the proposed hysteresis model is validated through the identification and prediction of the damping force of MR fluids in the squeeze mode. In addition, the superior force-tracking performance of the feedforward control associated with the proposed hysteresis model is demonstrated for several tracking trajectories.
Optimal one-way and roundtrip journeys design by mixed-integer programming
NASA Astrophysics Data System (ADS)
Ribeiro, Isabel M.; Vale, Cecília
2017-12-01
The introduction of multimodal/intermodal networks in transportation problems, especially when considering roundtrips, adds complexity to the models. This article presents two models for the optimization of intermodal trips as a contribution to the integration of transport modes in networks. The first model is devoted to one-way trips while the second is dedicated to roundtrips. The original contribution of this research to transportation is mainly the consideration of roundtrips in the optimization of intermodal transport, in particular the requirement that the transport mode between two nodes on the return trip be the same as the one on the outward trip if both nodes are visited on the return trip, which is a valuable aspect for transport companies. The mathematical formulations of both models lead to mixed binary linear programs, which is not a common approach for this type of problem. In this article, as well as the model description, computational experience is included to highlight the importance and efficiency of the proposed models, which may provide a valuable tool for transport managers.
CONSTRUCTION OF EDUCATIONAL THEORY MODELS.
ERIC Educational Resources Information Center
MACCIA, ELIZABETH S.; AND OTHERS
This study delineated models which have potential use in generating educational theory. A theory models method was formulated. By selecting and ordering concepts from other disciplines, the investigators formulated seven theory models. The final step of devising educational theory from the theory models was performed only to the extent required to…
A Ranking Approach to Genomic Selection.
Blondel, Mathieu; Onogi, Akio; Iwata, Hiroyoshi; Ueda, Naonori
2015-01-01
Genomic selection (GS) is a recent selective breeding method which uses predictive models based on whole-genome molecular markers. Until now, existing studies formulated GS as the problem of modeling an individual's breeding value for a particular trait of interest, i.e., as a regression problem. To assess predictive accuracy of the model, the Pearson correlation between observed and predicted trait values was used. In this paper, we propose to formulate GS as the problem of ranking individuals according to their breeding value. Our proposed framework allows us to employ machine learning methods for ranking which had previously not been considered in the GS literature. To assess ranking accuracy of a model, we introduce a new measure originating from the information retrieval literature called normalized discounted cumulative gain (NDCG). NDCG rewards more strongly models which assign a high rank to individuals with high breeding value. Therefore, NDCG reflects a prerequisite objective in selective breeding: accurate selection of individuals with high breeding value. We conducted a comparison of 10 existing regression methods and 3 new ranking methods on 6 datasets, consisting of 4 plant species and 25 traits. Our experimental results suggest that tree-based ensemble methods including McRank, Random Forests and Gradient Boosting Regression Trees achieve excellent ranking accuracy. RKHS regression and RankSVM also achieve good accuracy when used with an RBF kernel. Traditional regression methods such as Bayesian lasso, wBSR and BayesC were found less suitable for ranking. Pearson correlation was found to correlate poorly with NDCG. Our study suggests two important messages. First, ranking methods are a promising research direction in GS. Second, NDCG can be a useful evaluation measure for GS.
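The following sketch shows how NDCG can be computed when ranking selection candidates by predicted breeding value; the linear-gain variant used here is one common choice and the numbers are made up, so details may differ from the exact definition adopted in the paper.

    # NDCG@k for a ranking of candidates: gains are the true breeding values,
    # ordered by the model's predicted scores and discounted logarithmically.
    import numpy as np

    def dcg_at_k(relevances, k):
        rel = np.asarray(relevances, dtype=float)[:k]
        discounts = np.log2(np.arange(2, rel.size + 2))   # log2(2), log2(3), ...
        return float(np.sum(rel / discounts))

    def ndcg_at_k(true_values, predicted_scores, k):
        order = np.argsort(predicted_scores)[::-1]        # rank by predicted value
        dcg = dcg_at_k(np.asarray(true_values)[order], k)
        idcg = dcg_at_k(np.sort(true_values)[::-1], k)    # ideal ordering
        return dcg / idcg if idcg > 0 else 0.0

    # Example: four individuals with known breeding values, ranked by predictions.
    print(ndcg_at_k([3.2, 1.0, 0.4, 2.5], [0.9, 0.2, 0.4, 0.8], k=3))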
Modeling of autocatalytic hydrolysis of adefovir dipivoxil in solid formulations.
Dong, Ying; Zhang, Yan; Xiang, Bingren; Deng, Haishan; Wu, Jingfang
2011-04-01
The stability and hydrolysis kinetics of a phosphate prodrug, adefovir dipivoxil, in solid formulations were studied, and the stability relationship between five solid formulations was explored. An autocatalytic hydrolysis mechanism was proposed based on the kinetic behavior, which fits the Prout-Tompkins model well. Because classical kinetic models could hardly describe and predict the hydrolysis kinetics of adefovir dipivoxil in solid formulations accurately at high temperatures, a feedforward multilayer perceptron (MLP) neural network was constructed to model the hydrolysis kinetics. The built-in approaches in Weka, such as lazy classifiers and rule-based learners (IBk, KStar, DecisionTable and M5Rules), were used to verify the performance of the MLP. The predictability of the models was evaluated by 10-fold cross-validation and an external test set. The results indicate that the MLP should be of general applicability, providing an efficient alternative way to model and predict autocatalytic hydrolysis kinetics for phosphate prodrugs.
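For reference, the Prout-Tompkins model mentioned above is the autocatalytic rate law dα/dt = k·α·(1 − α), whose integrated form is a logistic curve; the sketch below evaluates that curve with illustrative parameter values, not the fitted values from the study.

    # Integrated Prout-Tompkins (autocatalytic) model: degraded fraction alpha(t)
    # follows a logistic curve with rate constant k and induction time t_ind.
    import numpy as np

    def prout_tompkins_alpha(t, k, t_ind):
        return 1.0 / (1.0 + np.exp(-k * (t - t_ind)))

    t = np.linspace(0.0, 30.0, 7)                      # e.g., days
    print(prout_tompkins_alpha(t, k=0.4, t_ind=15.0))  # assumed k and t_ind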
Formulation and Application of the Generalized Multilevel Facets Model
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chih-Yu
2007-01-01
In this study, the authors develop a generalized multilevel facets model, which is not only a multilevel and two-parameter generalization of the facets model, but also a multilevel and facet generalization of the generalized partial credit model. Because the new model is formulated within a framework of nonlinear mixed models, no efforts are…
Mallari, K J B; Kim, H; Pak, G; Aksoy, H; Yoon, J
2015-01-01
At the hillslope scale, where the rill-interrill configuration plays a significant role, infiltration is one of the major hydrologic processes affecting the generation of overland flow. As such, it is important to achieve a good understanding and accurate modelling of this process. Horton's infiltration model has been widely used in many hydrologic models, though it has occasionally been found limited in adequately handling the antecedent moisture conditions (AMC) of soil. Holtan's model, conversely, is thought to provide better estimates of infiltration rates, as it can directly account for initial soil water content in its formulation. In this study, the Holtan model is coupled to an existing overland flow model, which originally used Horton's model to account for infiltration, in an attempt to improve the prediction of runoff. For calibration and validation, experimental data from a two-dimensional flume incorporating a hillslope configuration were used. Calibration and validation results showed that Holtan's model was able to improve the modelling results, with better performance statistics than the Horton-coupled model. Holtan's infiltration equation, which allows AMC to be accounted for, provided an advantage and resulted in better runoff prediction.
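A schematic comparison of the two formulations referred to above: Horton's capacity decays with time since ponding, whereas Holtan's capacity depends on the remaining available soil-water storage, which is how antecedent moisture enters directly. The parameter values are illustrative assumptions, not those calibrated in the study.

    # Horton: f(t) = fc + (f0 - fc) * exp(-k t); Holtan: f = GI * a * SA^1.4 + fc
    import numpy as np

    def horton(t, f0, fc, k):
        """Infiltration capacity as a function of time since ponding."""
        return fc + (f0 - fc) * np.exp(-k * t)

    def holtan(available_storage, a, fc, gi=1.0, exponent=1.4):
        """Infiltration capacity as a function of unfilled soil storage SA."""
        return gi * a * available_storage**exponent + fc

    print(horton(t=1.0, f0=50.0, fc=5.0, k=2.0))          # mm/h after 1 h (assumed values)
    print(holtan(available_storage=30.0, a=0.8, fc=5.0))  # mm/h with 30 mm storage left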
A toy Penrose inequality and its proof
NASA Astrophysics Data System (ADS)
Bengtsson, Ingemar; Jakobsson, Emma
2016-12-01
We formulate and prove a toy version of the Penrose inequality. The formulation mimics the original Penrose inequality, in which the scenario is the following: a shell of null dust collapses in Minkowski space and a marginally trapped surface forms on it. Through a series of arguments relying on established assumptions, an inequality relating the area of this surface to the total energy of the shell is formulated. A further reformulation then turns the inequality into a statement relating the area and the outer null expansion of a class of surfaces in Minkowski space itself. The inequality has been proven to hold true in many special cases, but there is no proof in general. In the toy version presented here, an analogous inequality in (2 + 1)-dimensional anti-de Sitter space turns out to hold true.
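For reference, the standard statement of the original Penrose inequality alluded to above, in geometrized units (G = c = 1), relates the area A of the marginally trapped surface to the total mass-energy M of the collapsing shell; the (2 + 1)-dimensional anti-de Sitter analogue treated in the paper takes a different explicit form and is not reproduced here.

    M \;\ge\; \sqrt{\frac{A}{16\pi}} \qquad\Longleftrightarrow\qquad A \;\le\; 16\pi M^{2}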
Advanced solid elements for sheet metal forming simulation
NASA Astrophysics Data System (ADS)
Mataix, Vicente; Rossi, Riccardo; Oñate, Eugenio; Flores, Fernando G.
2016-08-01
The solid-shells are an attractive kind of element for the simulation of forming processes, because any kind of generic 3D constitutive law can be employed without any additional hypothesis. The present work consists of improvements to a triangular prism solid-shell element originally developed by Flores [2, 3]. The solid-shell can be used in the analysis of thin and thick shells undergoing large deformations. The element is formulated in a total Lagrangian setting and employs the neighbouring (adjacent) elements to build a local patch that enriches the displacement field. In the original formulation a modified right Cauchy-Green deformation tensor (C) is obtained; in the present work a modified deformation gradient (F) is obtained, which generalises the methodology and makes it possible to employ the pull-back and push-forward operations. The element is based on three modifications: (a) a classical assumed strain approach for transverse shear strains, (b) an assumed strain approach for the in-plane components using information from neighbour elements, and (c) an averaging of the volumetric strain over the element. The objective is to use this type of element for the simulation of shells while avoiding transverse shear locking, improving the membrane behaviour of the in-plane triangle, and handling quasi-incompressible materials or materials with isochoric plastic flow.
A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem
NASA Technical Reports Server (NTRS)
Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad
2010-01-01
Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single route formulation to include routing decisions. An efficient comparison framework compares the multi-route formulation and the single route formulation. The multi-route model is exercised for east side airport surface traffic at Dallas/Fort Worth International Airport to determine whether any arrival taxi time savings can be achieved by allowing arrivals to have two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single route formulation only when a perimeter taxiway is used. In conditions where the departure aircraft are given an optimal and fixed takeoff sequence, cumulative arrival taxi time savings in the multi-route formulation can be as much as 3.6 hours greater than in the single route formulation. If the departure sequence is not optimal, the multi-route formulation yields smaller taxi time savings over the single route formulation, but the average arrival taxi time is still significantly decreased.
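As a toy illustration of how route choice enters such a formulation (this is not the paper's model), the sketch below lets each arrival pick either a runway-crossing route or a longer perimeter route, with a cap on the number of crossings; taxi times and the cap are made-up numbers, and timing/separation constraints are omitted.

    # Toy multi-route assignment MILP using PuLP: minimize total unimpeded taxi
    # time subject to one route per aircraft and a limit on runway crossings.
    import pulp

    taxi_time = {                       # minutes: (aircraft, route) -> taxi time (assumed)
        ("A1", "crossing"): 8, ("A1", "perimeter"): 12,
        ("A2", "crossing"): 7, ("A2", "perimeter"): 11,
        ("A3", "crossing"): 9, ("A3", "perimeter"): 10,
    }
    aircraft = sorted({a for a, _ in taxi_time})
    routes = ["crossing", "perimeter"]
    max_crossings = 2                   # e.g., limited departure-runway crossing slots

    prob = pulp.LpProblem("multi_route_taxi", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (aircraft, routes), cat="Binary")

    prob += pulp.lpSum(taxi_time[a, r] * x[a][r] for a in aircraft for r in routes)
    for a in aircraft:
        prob += pulp.lpSum(x[a][r] for r in routes) == 1       # exactly one route each
    prob += pulp.lpSum(x[a]["crossing"] for a in aircraft) <= max_crossings

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({a: next(r for r in routes if x[a][r].value() > 0.5) for a in aircraft})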
Wang, Lin; Sassi, Alexandra Beumer; Patton, Dorothy; Isaacs, Charles; Moncla, B. J.; Gupta, Phalguni; Rohan, Lisa Cencia
2015-01-01
The feasibility of using a liposome drug delivery system to formulate octylglycerol (OG) as a vaginal microbicide product was explored. A liposome formulation was developed containing 1% OG and phosphatidyl choline in a ratio that demonstrated in vitro activity against Neisseria gonorrhoeae, HSV-1, HSV-2 and HIV-1 while sparing the innate vaginal flora, Lactobacillus. Two conventional gel formulations were prepared for comparison. The OG liposome formulation with the appropriate OG/lipid ratio and dosing level had greater efficacy than either conventional gel formulation and maintained this efficacy for at least 2 months. No toxicity was observed for the liposome formulation in ex vivo testing in a human ectocervical tissue model or in vivo testing in the macaque safety model. Furthermore, minimal toxicity was observed to lactobacilli in vitro or in vivo safety testing. The OG liposome formulation offers a promising microbicide product with efficacy against HSV, HIV and N. gonorrhoeae. PMID:22149387
NASA Technical Reports Server (NTRS)
Fijany, A.; Featherstone, R.
1999-01-01
This paper presents a new formulation of the Constraint Force Algorithm that corrects a major limitation in the original, and sheds new light on the relationship between it and other dynamics algorithms.
Gacs quantum algorithmic entropy in infinite dimensional Hilbert spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benatti, Fabio, E-mail: benatti@ts.infn.it; Oskouei, Samad Khabbazi, E-mail: kh.oskuei@ut.ac.ir; Deh Abad, Ahmad Shafiei, E-mail: shafiei@khayam.ut.ac.ir
We extend the notion of Gacs quantum algorithmic entropy, originally formulated for finitely many qubits, to infinite dimensional quantum spin chains and investigate the relation of this extension with two quantum dynamical entropies that have been proposed in recent years.
Mittapalli, Rajendar K; Marroum, Patrick; Qiu, Yihong; Apfelbaum, Kathleen; Xiong, Hao
2017-07-01
To develop and validate a Level A in vitro-in vivo correlation (IVIVC) for potassium chloride extended-release (ER) formulations. Three prototype ER formulations of potassium chloride with different in vitro release rates were developed and their urinary pharmacokinetic profiles were evaluated in healthy subjects. A mathematical model between in vitro dissolution and in vivo urinary excretion, a surrogate for measuring in vivo absorption, was developed using time-scale and time-shift parameters. The IVIVC model was then validated based on internal and external predictability. With the established IVIVC model, there was a good correlation between the observed fraction of dose excreted in urine and the time-scaled and time-shifted fraction of the drug dissolved, and between the in vitro dissolution time and the in vivo urinary excretion time for the ER formulations. The percent prediction error (%PE) on cumulative urinary excretion over the 24 h interval (Ae,0-24h) and maximum urinary excretion rate (Rmax) was less than 15% for the individual formulations and less than 10% for the average of the two formulations used to develop the model. Further, the %PE values using external predictability were below 10%. A novel Level A IVIVC was successfully developed and validated for the new potassium chloride ER formulations using urinary pharmacokinetic data. This successful IVIVC may facilitate future development or manufacturing changes to the potassium chloride ER formulation.
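The sketch below illustrates the time-scaling/time-shifting idea behind a Level A IVIVC and the percent prediction error used for validation; the dissolution model, scale and shift values, and observed fractions are illustrative assumptions, not the fitted parameters or data from the study.

    # Map in vivo time onto the in vitro dissolution curve with a time scale and
    # a time shift, then compute %PE between observed and predicted quantities.
    import numpy as np

    def fraction_dissolved(t_vitro, kd=0.5):
        return 1.0 - np.exp(-kd * t_vitro)          # first-order in vitro profile (assumed)

    def predicted_fraction_excreted(t_vivo, scale=1.8, shift=1.0, kd=0.5):
        t_vitro = np.maximum((t_vivo - shift) / scale, 0.0)
        return fraction_dissolved(t_vitro, kd)

    def percent_prediction_error(observed, predicted):
        return 100.0 * (observed - predicted) / observed

    t = np.array([2.0, 6.0, 12.0, 24.0])            # hours
    obs = np.array([0.24, 0.68, 0.92, 0.99])        # made-up observed fractions excreted
    print(percent_prediction_error(obs, predicted_fraction_excreted(t)))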
Stability of tiagabine in two oral liquid vehicles.
Nahata, Milap C; Morosco, Richard S
2003-01-01
The stability of tiagabine hydrochloride in two extemporaneously prepared oral suspensions stored at 4 and 25 degrees C for three months was studied. Tiagabine is used as adjunctive therapy for the treatment of refractory partial seizures. It is currently available in a tablet dosage form, which cannot be used in young children who are unable to swallow tablets and who are given doses in milligrams per kilogram of body weight. No stability data are available for tiagabine in any liquid dosage form. Five bottles contained tiagabine 1 mg/mL in 1% methylcellulose:Simple Syrup, NF (1:6), and another five bottles had tiagabine 1 mg/mL in Ora-Plus:Ora-Sweet (1:1). Three samples were collected from each bottle at 0, 7, 14, 28, 42, 56, 70, and 91 days and analyzed by a stability-indicating high-performance liquid chromatographic method (n = 15). At 4 degrees C, the mean concentration of tiagabine exceeded 95% of the original concentration for 91 days in both formulations. At 25 degrees C, the mean concentration of tiagabine exceeded 90% of the original concentration for 70 days in the Ora-Plus:Ora-Sweet formulation and for 42 days in the 1% methylcellulose:syrup formulation. No changes in pH or physical appearance were seen during this period. The stability data for the two formulations provide flexibility for compounding tiagabine. Tiagabine hydrochloride 1 mg/mL in extemporaneously prepared liquid dosage forms stored in plastic bottles remained stable for up to three months at 4 degrees C and six weeks at 25 degrees C.
A comparison of two brands of clopidogrel in patients with drug-eluting stent implantation.
Park, Yae Min; Ahn, Taehoon; Lee, Kyounghoon; Shin, Kwen-Chul; Jung, Eul Sik; Shin, Dong Su; Kim, Myeong Gun; Kang, Woong Chol; Han, Seung Hwan; Choi, In Suck; Shin, Eak Kyun
2012-07-01
Although generic clopidogrel is widely used, the clinical efficacy and safety of generic versus original clopidogrel have not been well evaluated. The aim of this study was to evaluate the clinical outcomes of 2 oral formulations of clopidogrel 75 mg tablets in patients with coronary artery disease (CAD) undergoing drug-eluting stent (DES) implantation. Between July 2006 and February 2009, 428 patients who underwent implantation with DES for CAD and completed >1 year of clinical follow-up were enrolled in this study. Patients were divided into 2 groups based on treatment formulation, Platless® (test formulation, n=211) or Plavix® (reference formulation, n=217). The incidence of 1-year major adverse cardiovascular and cerebrovascular events (MACCE) and stent thrombosis (ST) was retrospectively reviewed. The baseline demographic and procedural characteristics were not significantly different between the two treatment groups. The incidence of 1-year MACCEs was 8.5% {19/211, 2 deaths, 4 myocardial infarctions (MIs), 2 strokes, and 11 target vessel revascularizations (TVRs)} in the Platless® group vs. 7.4% (16/217, 4 deaths, 1 MI, 2 strokes, and 9 TVRs) in the Plavix® group (p=0.66). The incidence of 1-year ST was 0.5% (1 definite and subacute ST) in the Platless® group vs. 0% in the Plavix® group (p=0.49). In this study, the 2 tablet preparations of clopidogrel showed similar rates of MACCEs, but additional prospective randomized studies with pharmacodynamics and platelet reactivity data are needed to conclude whether generic clopidogrel may replace original clopidogrel.
Eriksen, Janus J; Sauer, Stephan P A; Mikkelsen, Kurt V; Jensen, Hans J Aa; Kongsted, Jacob
2012-09-30
We investigate the effect of including a dynamic reaction field at the lowest possible ab initio wave function level of theory, namely the Hartree-Fock (HF) self-consistent field level, within the polarizable embedding (PE) formalism. We formulate HF-based PE within the linear response theory picture, leading to the PE-random-phase approximation (PE-RPA), and bridge the expressions to a second-order polarization propagator approximation (SOPPA) frame such that dynamic reaction field contributions are included at the RPA level in addition to the static response described at the SOPPA level but with HF induced dipole moments. We conduct calculations on para-nitro-aniline and para-nitro-phenolate using this model in addition to dynamic PE-RPA and PE-CAM-B3LYP. We compare the results to recently published PE-CCSD data and demonstrate how the cost-effective SOPPA-based model successfully recovers a great portion of the inherent PE-RPA error when the observable is the solvatochromic shift. We furthermore demonstrate that whenever the change in density resulting from the ground state-excited state electronic transition in the solute is not associated with a significant change in the electric field, dynamic response contributions formulated at the HF level of theory manage to capture the majority of the system response originating from derivative densities. Copyright © 2012 Wiley Periodicals, Inc.
Fundamental theories of waves and particles formulated without classical mass
NASA Astrophysics Data System (ADS)
Fry, J. L.; Musielak, Z. E.
2010-12-01
Quantum and classical mechanics are two conceptually and mathematically different theories of physics, and yet they use the same concept of classical mass that was originally introduced by Newton in his formulation of the laws of dynamics. In this paper, the physical consequences of using the classical mass in both theories are explored, and a novel approach is presented that allows fundamental (Galilean invariant) theories of waves and particles to be formulated without formally introducing the classical mass. In this new formulation, the theories depend only on one common parameter called 'wave mass', which is deduced from experiments for selected elementary particles and for the classical mass of one kilogram. It is shown that quantum theory with the wave mass is independent of the Planck constant and that higher computational accuracy can be attained with such a theory. Natural units in connection with the presented approach are also discussed, and justification beyond dimensional analysis is given for the particular choice of such units.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waddell, Lucas; Muldoon, Frank; Henry, Stephen Michael
In order to effectively plan the management and modernization of their large and diverse fleets of vehicles, Program Executive Office Ground Combat Systems (PEO GCS) and Program Executive Office Combat Support and Combat Service Support (PEO CS&CSS) commissioned the development of a large-scale portfolio planning optimization tool. This software, the Capability Portfolio Analysis Tool (CPAT), creates a detailed schedule that optimally prioritizes the modernization or replacement of vehicles within the fleet - respecting numerous business rules associated with fleet structure, budgets, industrial base, research and testing, etc., while maximizing overall fleet performance through time. This paper contains a thorough documentation of the terminology, parameters, variables, and constraints that comprise the fleet management mixed integer linear programming (MILP) mathematical formulation. This paper, which is an update to the original CPAT formulation document published in 2015 (SAND2015-3487), covers the formulation of important new CPAT features.
Hot-melt extrusion--basic principles and pharmaceutical applications.
Lang, Bo; McGinity, James W; Williams, Robert O
2014-09-01
Originally adapted from the plastics industry, the use of hot-melt extrusion has gained favor in drug delivery applications both in academia and the pharmaceutical industry. Several commercial products made by hot-melt extrusion have been approved by the FDA, demonstrating its commercial feasibility for pharmaceutical processing. A significant number of research articles have reported on advances made regarding the pharmaceutical applications of the hot-melt extrusion processing; however, only limited articles have been focused on general principles regarding formulation and process development. This review provides an in-depth analysis and discussion of the formulation and processing aspects of hot-melt extrusion. The impact of physicochemical properties of drug substances and excipients on formulation development using a hot-melt extrusion process is discussed from a material science point of view. Hot-melt extrusion process development, scale-up, and the interplay of formulation and process attributes are also discussed. Finally, recent applications of hot-melt extrusion to a variety of dosage forms and drug substances have also been addressed.
Drug Release and Skin Permeation from Lipid Liquid Crystalline Phases
NASA Astrophysics Data System (ADS)
Costa-Balogh, F. O.; Sparr, E.; Sousa, J. J. S.; Pais, A. A. C. C.
We have studied drug release and skin permeation from several different liquid crystalline lipid formulations that may be used to control the respective release rates. We have studied the release and permeation through human skin of a water-soluble and amphiphilic drug, propranolol hydrochloride, from several formulations prepared with monoolein and phytantriol as permeation enhancers and controlled-release excipients. Diolein and cineol were added to selected formulations. We observed that viscosity decreases with drug load, which is consistent with the occurrence of phase changes. Diolein stabilizes the bicontinuous cubic phases, leading to an increase in viscosity and sustained release of the drug. The slowest release was found for the cubic phases with higher viscosity. Studies on skin permeation showed that these latter formulations also presented lower permeability than the less viscous monoolein lamellar phases. Formulations containing cineol produced higher permeability with higher enhancement ratios. Thus, the various formulations are suited to different circumstances and delivery routes. While a slow release is usually desired for sustained drug delivery, the transdermal route may require a faster release. Lamellar phases, which are less viscous, are better adapted to transdermal applications. Thus, systems involving lamellar phases of monoolein and cineol are good candidates for use as skin permeation enhancers for propranolol hydrochloride.
Characterization of Amorphous and Co-Amorphous Simvastatin Formulations Prepared by Spray Drying.
Craye, Goedele; Löbmann, Korbinian; Grohganz, Holger; Rades, Thomas; Laitinen, Riikka
2015-12-03
In this study, spray drying from aqueous solutions, using the surface-active agent sodium lauryl sulfate (SLS) as a solubilizer, was explored as a production method for co-amorphous simvastatin-lysine (SVS-LYS) at 1:1 molar mixtures, which previously have been observed to form a co-amorphous mixture upon ball milling. In addition, a spray-dried formulation of SVS without LYS was prepared. Energy-dispersive X-ray spectroscopy (EDS) revealed that SLS coated the SVS and SVS-LYS particles upon spray drying. X-ray powder diffraction (XRPD) and differential scanning calorimetry (DSC) showed that in the spray-dried formulations the remaining crystallinity originated from SLS only. The best dissolution properties and a "spring and parachute" effect were found for SVS spray-dried from a 5% SLS solution without LYS. Despite the presence of at least partially crystalline SLS in the mixtures, all the studied formulations were able to significantly extend the stability of amorphous SVS compared to previous co-amorphous formulations of SVS. The best stability (at least 12 months in dry conditions) was observed when SLS was spray-dried with SVS (and LYS). In conclusion, spray drying of SVS and LYS from aqueous surfactant solutions was able to produce formulations with improved physical stability for amorphous SVS.
Glube, Natalie; Moos, Lea von; Duchateau, Guus
2013-01-01
Purpose: In vitro disintegration and dissolution are routine methods used to assess the performance and quality of oral dosage forms. The purpose of the current work was to determine the potential for interaction between capsule shell material and a green tea extract and the impact it can have on the release. Methods: A green tea extract was formulated into simple powder-in-capsule formulations of which the capsule shell material was either of gelatin or HPMC origin. The disintegration times were determined together with the dissolution profiles in compendial and biorelevant media. Results: All formulations disintegrated within 30 min, meeting the USP criteria for botanical formulations. An immediate release dissolution profile was achieved for gelatin capsules in all media but not for the specified HPMC formulations. Dissolution release was especially impaired for HPMCgell at pH 1.2 and for both HPMC formulations in FeSSIF media suggesting the potential for food interactions. Conclusions: The delayed release from studied HPMC capsule materials is likely attributed to an interaction between the catechins, the major constituents of the green tea extract, and the capsule shell material. An assessment of in vitro dissolution is recommended prior to the release of a dietary supplement or clinical trial investigational product to ensure efficacy. PMID:25755998
Novel enzyme formulations for improved pharmacokinetic properties and anti-inflammatory efficacies.
Yang, Lan; Yan, Shenglei; Zhang, Yonghong; Hu, Xueyuan; Guo, Qi; Yuan, Yuming; Zhang, Jingqing
2018-02-15
Anti-inflammatory enzymes promote the dissolution and excretion of sticky phlegm, clean the wound surface and accelerate drug diffusion to the lesion. They play important roles in treating different types of inflammation and pain. Currently, various formulations of anti-inflammatory enzymes have been successfully prepared to improve their enzymatic characteristics, pharmacokinetic properties and anti-inflammatory efficacies. The work was performed by systematically searching all available literature. An overall summary of current research on various anti-inflammatory enzymes and their novel formulations is presented. The original and improved enzymatic characteristics, pharmacokinetic properties, action mechanisms, clinical information, storage and shelf life, and treatment efficacies of anti-inflammatory enzymes and their different formulations are summarized. Influencing factors such as enzyme type, source, excipient, pharmaceutical technique, administration route and dosage are analyzed. The combined application of enzymes and other drugs is also covered. Anti-inflammatory enzymes have been widely applied in treating different types of inflammation and diseases with accompanying edema. Their novel formulations increased enzymatic stability, improved pharmacokinetic properties, provided different administration routes, and enhanced the anti-inflammatory efficacies of these enzymes while decreasing side effects and toxicity. Novel enzyme formulations improve and expand the usage of anti-inflammatory enzymes. Copyright © 2017 Elsevier B.V. All rights reserved.
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). Probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described flexibly via directed acyclic graphs, which map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied for the estimation of the lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At planetary scale, the Earth mantle temperature and element composition are inferred from seismic travel-time and geodetic data.
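In schematic notation (the symbols here are generic, not taken from the abstract), the hierarchical factorization described above can be written as

    p(\mathbf{m}_{\mathrm{lith}}, \mathbf{m}_{\mathrm{phys}} \mid \mathbf{d})
    \;\propto\;
    p(\mathbf{m}_{\mathrm{lith}})\; p(\mathbf{m}_{\mathrm{phys}} \mid \mathbf{m}_{\mathrm{lith}})
    \prod_{s} p(\mathbf{d}_{s} \mid \mathbf{m}_{\mathrm{phys}}),

where the primary layer (lithology) conditions the secondary layer (physical properties), and the likelihood factorizes over surveys s whose data uncertainties are assumed independent.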
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Dong-Sang
2015-03-02
The legacy nuclear wastes stored in underground tanks at the US Department of Energy's Hanford site are planned to be separated into high-level waste and low-activity waste fractions and vitrified separately. Formulating optimized glass compositions that maximize the waste loading in glass is critical for successful and economical treatment and immobilization of nuclear wastes. Glass property-composition models have been developed and applied to formulate glass compositions for various objectives over the past several decades. The property models, with associated uncertainties and combined with composition and property constraints, have been used to develop preliminary glass formulation algorithms designed for vitrification process control and waste form qualification at the planned waste vitrification plant. This paper provides an overview of the current status of glass property-composition models, constraints applicable to Hanford waste vitrification, and glass formulation approaches that have been developed for vitrification of hazardous and highly radioactive wastes stored at the Hanford site.
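As a generic illustration only (the specific model forms, components and fitted coefficients used for Hanford glasses are not reproduced here), property-composition models of this kind are often written as mixture models in the component mass fractions g_i, for example

    p \;=\; \sum_{i} b_{i}\, g_{i} \;+\; \sum_{i \le j} b_{ij}\, g_{i} g_{j},

where p is a glass property (or a transform of it, such as the logarithm of viscosity), the first sum is the first-order mixture part, the optional second-order terms capture selected component interactions, and the coefficients b are fitted to measured property data and carry the uncertainties referred to above.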
Non-Parabolic Hydrodynamic Formulations for the Simulation of Inhomogeneous Semiconductor Devices
NASA Technical Reports Server (NTRS)
Smith, A. W.; Brennan, K. F.
1996-01-01
Hydrodynamic models are becoming prevalent design tools for small scale devices and other devices in which high energy effects can dominate transport. Most current hydrodynamic models use a parabolic band approximation to obtain fairly simple conservation equations. Interest in accounting for band structure effects in hydrodynamic device simulation has begun to grow since parabolic models cannot fully describe the transport in state of the art devices due to the distribution populating non-parabolic states within the band. This paper presents two different non-parabolic formulations of the hydrodynamic model suitable for the simulation of inhomogeneous semiconductor devices. The first formulation uses the Kane dispersion relationship, (ħk)²/2m = W(1 + αW). The second formulation makes use of a power law, (ħk)²/2m = xW^y, for the dispersion relation. Hydrodynamic models which use the first formulation rely on the binomial expansion to obtain moment equations with closed form coefficients. This limits the energy range over which the model is valid. The power law formulation readily produces closed form coefficients similar to those obtained using the parabolic band approximation. However, the fitting parameters (x, y) are only valid over a limited energy range. The physical significance of the band non-parabolicity is discussed, as well as the advantages/disadvantages and approximations of the two non-parabolic models. A companion paper describes device simulations based on the three dispersion relationships: parabolic, Kane dispersion, and power law dispersion.
Non-parabolic hydrodynamic formulations for the simulation of inhomogeneous semiconductor devices
NASA Technical Reports Server (NTRS)
Smith, Arlynn W.; Brennan, Kevin F.
1995-01-01
Hydrodynamic models are becoming prevalent design tools for small scale devices and other devices in which high energy effects can dominate transport. Most current hydrodynamic models use a parabolic band approximation to obtain fairly simple conservation equations. Interest in accounting for band structure effects in hydrodynamic device simulation has begun to grow since parabolic models cannot fully describe the transport in state of the art devices due to the distribution populating non-parabolic states within the band. This paper presents two different non-parabolic formulations of the hydrodynamic model suitable for the simulation of inhomogeneous semiconductor devices. The first formulation uses the Kane dispersion relationship, (ħk)²/2m = W(1 + αW). The second formulation makes use of a power law, (ħk)²/2m = xW^y, for the dispersion relation. Hydrodynamic models which use the first formulation rely on the binomial expansion to obtain moment equations with closed form coefficients. This limits the energy range over which the model is valid. The power law formulation readily produces closed form coefficients similar to those obtained using the parabolic band approximation. However, the fitting parameters (x, y) are only valid over a limited energy range. The physical significance of the band non-parabolicity is discussed, as well as the advantages/disadvantages and approximations of the two non-parabolic models. A companion paper describes device simulations based on the three dispersion relationships: parabolic, Kane dispersion, and power law dispersion.
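In LaTeX notation, the two dispersion relations named in these abstracts are

    \frac{\hbar^{2}k^{2}}{2m} = W\,(1 + \alpha W) \quad\text{(Kane)},
    \qquad
    \frac{\hbar^{2}k^{2}}{2m} = x\,W^{y} \quad\text{(power law)},

both of which reduce to the parabolic relation \hbar^{2}k^{2}/2m = W in the limit \alpha \to 0 or (x, y) \to (1, 1).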
Towards a Neurodevelopmental Model of Clinical Case Formulation
Solomon, Marjorie; Hessl, David; Chiu, Sufen; Olsen, Emily; Hendren, Robert
2009-01-01
Rapid advances in molecular genetics and neuroimaging over the last 10-20 years have been a catalyst for research in neurobiology, developmental psychopathology, and translational neuroscience. Methods of study in psychiatry, previously described as “slow maturing,” now are becoming sufficiently sophisticated to more effectively investigate the biology of higher mental processes. Despite these technological advances, the recognition that psychiatric disorders are disorders of neurodevelopment, and the importance of case formulation to clinical practice, a neurodevelopmental model of case formulation has not yet been articulated. The goals of this manuscript, which is organized as a clinical case conference, are to begin to articulate a neurodevelopmental model of case formulation, to illustrate its value, and finally to explore how clinical psychiatric practice might evolve in the future if this model were employed. PMID:19248925
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A. R.
2018-01-01
Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied when the transformation parameters are large, and no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely the affine (12-, 9-, and 8-parameter) and similarity (7- and 6-parameter) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated within standard least-squares theory with constraints, the covariance matrix of the transformation parameters can be provided directly. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
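The sketch below shows the forward model of the 7-parameter similarity (Helmert) transformation whose parameters such a formulation estimates: a translation t, a scale factor mu and a rotation R applied to source-system coordinates. The numerical values are arbitrary, the WTLS estimation step itself is not reproduced, and the rotation angles are deliberately allowed to be large, which is the case the paper emphasizes.

    # Forward 7-parameter similarity transformation: X_target = t + mu * R @ X_source
    import numpy as np
    from scipy.spatial.transform import Rotation

    def similarity_transform(points, shift, scale, angles_deg):
        """Apply the transformation row-wise to an (n, 3) array of coordinates."""
        R = Rotation.from_euler("xyz", angles_deg, degrees=True).as_matrix()
        return shift + scale * points @ R.T

    src = np.array([[100.0, 200.0, 50.0], [110.0, 190.0, 55.0]])   # source coordinates (assumed)
    print(similarity_transform(src, shift=np.array([5.0, -3.0, 1.0]),
                               scale=1.00002, angles_deg=[0.5, -0.2, 30.0]))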
NASA Technical Reports Server (NTRS)
Amano, R. S.; Goel, P.
1986-01-01
A numerical study of computations in backward-facing steps with flow separation and reattachment, using a Reynolds stress closure, is presented. The highlight of this study is the improvement of the Reynolds-stress model (RSM) by modifying the diffusive transport of the Reynolds stresses through the formulation, solution and subsequent incorporation of the transport equations of the third moments, \overline{u_i u_j u_k}, into the turbulence model. The diffusive transport of the Reynolds stresses, represented by the gradients of the third moments, attains greater significance in recirculating flows. The third moments evaluated by the development and solution of the complete transport equations are superior to those obtained by existing algebraic correlations. A low-Reynolds-number model for the transport equations of the third moments is developed, and considerable improvement in the near-wall profiles of the third moments is observed. The values of the empirical constants utilized in the development of the model are recommended. The Reynolds-stress closure is consolidated by incorporating the equations of k and ε, containing the modified diffusion coefficients, and the transport equations of the third moments into the Reynolds stress equations. Computational results obtained by the original k-ε model, the original RSM and the consolidated and modified RSM are compared with experimental data. Overall improvement in the predictions is seen with consolidation of the RSM, and a marked improvement in the profiles of \overline{u_i u_j u_k} is obtained around the reattachment region.
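Schematically (neglecting the pressure- and viscous-diffusion contributions), the turbulent diffusive transport term in the Reynolds-stress transport equations that these third moments close is

    D^{t}_{ij} \;=\; -\,\frac{\partial}{\partial x_{k}}\,\overline{u_{i} u_{j} u_{k}},

so replacing algebraic correlations for \overline{u_i u_j u_k} with solutions of their own modelled transport equations directly changes how the Reynolds stresses are redistributed in the recirculating and reattachment regions.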
Kou, Dawen; Dwaraknath, Sudharsan; Fischer, Yannick; Nguyen, Daniel; Kim, Myeonghui; Yiu, Hiuwing; Patel, Preeti; Ng, Tania; Mao, Chen; Durk, Matthew; Chinn, Leslie; Winter, Helen; Wigman, Larry; Yehl, Peter
2017-10-02
In this study, two dissolution models were developed to achieve an in vitro-in vivo relationship for immediate release formulations of Compound-A, a poorly soluble weak base with pH-dependent solubility and low bioavailability in hypochlorhydric and achlorhydric patients. The dissolution models were designed to approximate the hypo-/achlorhydric and normal fasted stomach conditions after a glass of water was ingested with the drug. The dissolution data from the two models were predictive of the relative in vivo bioavailability of various formulations under the same gastric condition, hypo-/achlorhydric or normal. Furthermore, the dissolution data were able to estimate the relative performance under hypo-/achlorhydric and normal fasted conditions for the same formulation. Together, these biorelevant dissolution models facilitated formulation development for Compound-A by identifying the right type and amount of key excipient to enhance bioavailability and mitigate the negative effect of hypo-/achlorhydria due to drug-drug interaction with acid-reducing agents. The dissolution models use the readily available USP apparatus 2, and their broader utility can be evaluated on other BCS 2B compounds with reduced bioavailability caused by hypo-/achlorhydria.
Model-based optimal design of experiments - semidefinite and nonlinear programming formulations
Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.
2015-01-01
We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but requires more computation time than the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279
Popadyuk, A; Kalita, H; Chisholm, B J; Voronov, A
2014-12-01
A new non-toxic soybean oil-based polymeric surfactant (SBPS) for personal-care products was developed and extensively characterized, including an evaluation of the polymeric surfactant performance in model shampoo formulations. To experimentally assure applicability of the soy-based macromolecules in shampoos, either in combination with common anionic surfactants (in this study, sodium lauryl sulfate, SLS) or as a single surface-active ingredient, the testing of SBPS physicochemical properties, performance and visual assessment of SBPS-based model shampoos was carried out. The results obtained, including foaming and cleaning ability of model formulations, were compared to those with only SLS as a surfactant as well as to SLS-free shampoos. Overall, the results show that the presence of SBPS improves cleaning, foaming, and conditioning of model formulations. SBPS-based formulations meet major requirements of multifunctional shampoos - mild detergency, foaming, good conditioning, and aesthetic appeal, which are comparable to commercially available shampoos. In addition, examination of SBPS/SLS mixtures in model shampoos showed that the presence of the SBPS enables the concentration of SLS to be significantly reduced without sacrificing shampoo performance. © 2014 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.
Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C
2016-02-15
We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but requires more computation time than the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice.
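As a rough illustration of the discretized-design-space idea described above, the following hedged sketch solves a small D-optimal design problem with CVXPY (a generic convex-optimization layer; not the authors' code, and the model and grid are invented for illustration). The classical result for a quadratic model on [-1, 1] is equal weight at {-1, 0, 1}, which the solver should recover.

```python
import numpy as np
import cvxpy as cp

# Candidate points for a quadratic model y = b0 + b1*x + b2*x^2 on [-1, 1].
xs = np.linspace(-1.0, 1.0, 41)
F = np.column_stack([np.ones_like(xs), xs, xs ** 2])          # model rows f(x)

w = cp.Variable(len(xs), nonneg=True)                          # design weights on the grid
M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(xs)))   # information matrix
problem = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
problem.solve()

support = [(round(x, 2), round(wi, 3)) for x, wi in zip(xs, w.value) if wi > 1e-3]
print(support)   # weight ~1/3 at x = -1, 0, 1 for the D-optimal design
```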
Poli, G; Dall'Ara, P; Binda, S; Santus, G; Poli, A; Cocilovo, A; Ponti, W
2001-01-01
Recurrent herpes simplex labialis represents a disease still difficult to treat, despite the availability of many established antiviral drugs used in clinical research for more than 30 years. Although differences between the human disease and that obtained in experimental animals suggest caution in predicting an effective clinical response from the experimental results, some of the animal models seem to be useful in optimising the topical formulation of single antiviral drugs. In the present work the dorsal cutaneous guinea pig model was used to compare 5 different topical antiviral formulations with clinical promise (active molecule: 5% w/w micronized aciclovir, CAS 59277-89-3), using both roll-on and lipstick application systems. The aim was to evaluate which vehicle (water, oil, low melting and high melting fatty base) and application system (roll-on, lipstick) enhances the skin penetration and the antiviral activity of the drug, after an experimental intradermal infection with Herpes simplex virus type 1 (HSV-1). As reference, a commercial formulation (5% aciclovir ointment) was used. The cumulative results of this study showed that formulation A, containing 5% aciclovir in an aqueous base in a roll-on application system, showed the best antiviral efficacy in reducing the severity of cutaneous lesions and the viral titer; among the lipstick preparations, formulation D, containing 5% aciclovir in a low melting fatty base, demonstrated a very strong antiviral activity, though slightly less than formulation A. This experimental work confirms the validity of the dorsal cutaneous guinea pig model as a rapid and efficient method to compare the antiviral efficacy of new formulations, with clinical promise, to optimise the topical formulation of the active antiviral drugs.
A New Model for Self-organized Dynamics and Its Flocking Behavior
NASA Astrophysics Data System (ADS)
Motsch, Sebastien; Tadmor, Eitan
2011-09-01
We introduce a model for self-organized dynamics which, we argue, addresses several drawbacks of the celebrated Cucker-Smale (C-S) model. The proposed model does not take into account the absolute distance between agents alone; instead, the influence between agents is scaled in terms of their relative distance. Consequently, our model does not involve any explicit dependence on the number of agents; only their geometry in phase space is taken into account. The use of relative distances destroys the symmetry property of the original C-S model, which was the key to the various recent studies of C-S flocking behavior. To this end, we introduce here a new framework to analyze the phenomenon of flocking for a rather general class of dynamical systems, which covers systems with non-symmetric influence matrices. In particular, we analyze the flocking behavior of the proposed model as well as other strongly asymmetric models with "leaders". The methodology presented in this paper, based on the notion of active sets, carries over from the particle to kinetic and hydrodynamic descriptions. In particular, we discuss the hydrodynamic formulation of our proposed model, and prove its unconditional flocking for slowly decaying influence functions.
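A minimal sketch of the alignment dynamics described above (Python/NumPy assumed; the influence function, parameters, and initial data are illustrative choices, not taken from the paper):

```python
import numpy as np

def mt_step(x, v, dt=0.05, beta=0.5):
    """One explicit Euler step of the Motsch-Tadmor alignment dynamics:
    dv_i/dt = sum_j phi(|x_j - x_i|) (v_j - v_i) / sum_j phi(|x_j - x_i|)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)  # pairwise distances
    phi = (1.0 + d ** 2) ** (-beta)                              # slowly decaying influence
    phi /= phi.sum(axis=1, keepdims=True)                        # per-agent normalization (non-symmetric)
    dv = phi @ v - v
    return x + dt * v, v + dt * dv

rng = np.random.default_rng(1)
x, v = rng.normal(size=(50, 2)), rng.normal(size=(50, 2))
for _ in range(2000):
    x, v = mt_step(x, v)
print(np.std(v, axis=0))   # velocity spread shrinks as the group aligns (flocking)
```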
Cellular replication limits in the Luria-Delbrück mutation model
NASA Astrophysics Data System (ADS)
Rodriguez-Brenes, Ignacio A.; Wodarz, Dominik; Komarova, Natalia L.
2016-08-01
Originally developed to elucidate the mechanisms of natural selection in bacteria, the Luria-Delbrück model assumed that cells are intrinsically capable of dividing an unlimited number of times. This assumption, however, is not true for human somatic cells, which undergo replicative senescence. Replicative senescence is thought to act as a mechanism to protect against cancer and the escape from it is a rate-limiting step in cancer progression. Here we introduce a Luria-Delbrück model that explicitly takes into account cellular replication limits in the wild type cell population and models the emergence of mutants that escape replicative senescence. We present results on the mean, variance, distribution, and asymptotic behavior of the mutant population in terms of three classical formulations of the problem. More broadly, the paper introduces the concept of incorporating replicative limits as part of the Luria-Delbrück mutational framework. Guidelines to extend the theory to include other types of mutations and possible applications to the modeling of telomere crisis and fluctuation analysis are also discussed.
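The following generation-based simulation is a loose sketch of the idea, not the authors' formulation: wild-type cells carry a finite division budget (a simple telomere-style counter, an assumption made here purely for illustration), while mutants escape the limit. Repeating the experiment exposes the characteristic heavy-tailed mutant counts.

```python
import numpy as np

def simulate(mu=1e-3, K=10, gens=15, rng=None):
    """Generation-based sketch: wild-type cells stop dividing after K divisions
    (replicative senescence); mutants escape the limit and divide every generation.
    Each wild-type daughter mutates independently with probability mu."""
    rng = rng or np.random.default_rng()
    wt = np.zeros(K + 1, dtype=np.int64)    # wild-type cells indexed by remaining divisions
    wt[K] = 1                               # a single fresh wild-type founder cell
    mutants = 0
    for _ in range(gens):
        mutants *= 2                        # existing mutants keep dividing
        new_wt = np.zeros_like(wt)
        for r in range(1, K + 1):
            daughters = 2 * wt[r]           # every cell with capacity r > 0 divides
            m = rng.binomial(daughters, mu) # some daughters mutate and escape the limit
            mutants += m
            new_wt[r - 1] += daughters - m  # the rest inherit capacity r - 1
        new_wt[0] += wt[0]                  # senescent cells persist but no longer divide
        wt = new_wt
    return mutants

counts = [simulate(rng=np.random.default_rng(seed)) for seed in range(2000)]
print(np.mean(counts), np.var(counts))      # variance >> mean: the familiar heavy-tailed spread
```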
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starodumov, Ilya; Kropotin, Nikolai
2016-08-10
We investigate the three-dimensional mathematical model of crystal growth called PFC (Phase Field Crystal) in a hyperbolic modification. This model is also called the modified PFC model (the original PFC model is formulated in parabolic form) and makes it possible to describe both slow and rapid crystallization processes on atomic length scales and on diffusive time scales. The modified PFC model is described by a partial differential equation of sixth order in space and second order in time. The solution of this equation is possible only by numerical methods. Previously, the authors created a software package for the solution of the Phase Field Crystal problem, based on the method of isogeometric analysis (IGA) and the PetIGA program library. During further investigation it was found that the quality of the solution can depend strongly on the discretization parameters of the numerical method. In this report, we show the features that should be taken into account when constructing the computational grid for the numerical simulation.
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham; Stepanyan, Vahram; Boskovic, Jovan
2009-01-01
Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. The model matching conditions in the transformed time coordinate result in an increase in the feedback gain and a modification of the adaptive law.
Optimal Control Modification Adaptive Law for Time-Scale Separated Systems
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. The model matching conditions in the transformed time coordinate result in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.
Optimal Control Modification for Time-Scale Separated Systems
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in a stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. The model matching conditions in the transformed time coordinate result in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.
Incorporation of UK Met Office's radiation scheme into CPTEC's global model
NASA Astrophysics Data System (ADS)
Chagas, Júlio C. S.; Barbosa, Henrique M. J.
2009-03-01
The current parameterization of radiation in CPTEC's (Center for Weather Forecast and Climate Studies, Cachoeira Paulista, SP, Brazil) operational AGCM has its origins in the work of Harshvardhan et al. (1987) and uses the formulation of Ramaswamy and Freidenreich (1992) for the short-wave absorption by water vapor. The UK Met Office's radiation code (Edwards and Slingo, 1996) was incorporated into CPTEC's global model, initially for short-wave only, and some impacts of that were shown by Chagas and Barbosa (2006). This paper presents some impacts of the complete incorporation (both short-wave and long-wave) of the UK Met Office's scheme. Selected results from off-line comparisons with line-by-line benchmark calculations are shown. Impacts on the AGCM's climate are assessed by comparing output of climate runs of the current and modified AGCM with products from the GEWEX/SRB (Surface Radiation Budget) project.
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
Because every data set available in practice is incomplete and imperfect, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions fitting the data within measurement errors and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
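To make the "regularization of an ill-posed problem" viewpoint concrete, here is a minimal Tikhonov-regularization sketch (Python/NumPy assumed; the toy observation operator and data are invented and have nothing to do with tidal fields):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized solution x = argmin ||A x - b||^2 + lam * ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# A nearly collinear 'observation operator': the classic ill-posed situation.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([t, t + 1e-6 * rng.normal(size=50)])
b = A @ np.array([1.0, 1.0]) + 1e-3 * rng.normal(size=50)

print(np.linalg.lstsq(A, b, rcond=None)[0])   # unregularized: a huge, unstable split
print(tikhonov(A, b, lam=1e-3))               # regularized: close to the sensible [1, 1]
```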
Computerized Instructional Adaptive Testing Model: Formulation and Validation.
1980-02-01
Final report submitted by Control Data Education Company, 8100 34th Avenue South, Minneapolis, Minnesota 55440, under contract F33615-77-C-0071 (S. J. Kalisch, February 1980).
Dental age estimation of growing children by measurement of open apices: A Malaysian formula
Cugati, Navaneetha; Kumaresan, Ramesh; Srinivasan, Balamanikanda; Karthikeyan, Priyadarshini
2015-01-01
Background: Age estimation is of prime importance in forensic science and clinical dentistry. Age estimation based on teeth development is one reliable approach. Many radiographic methods have been proposed on Western populations for estimating dental age, and a similar assessment was found to be inadequate in the Malaysian population. Hence, this study aims at formulating a regression model for dental age estimation in the Malaysian children population using Cameriere's method. Materials and Methods: Orthopantomographs of 421 Malaysian children aged between 5 and 16 years, covering all three ethnic origins, were digitized and analyzed using Cameriere's method of age estimation. The subjects' age was modeled as a function of the morphological variables, gender (g), ethnicity, sum of normalized open apices (s), number of teeth with completed root formation (N0), and the first-order interaction between s and N0. Results: The variables that contributed significantly to the fit were included in the regression model, yielding the following formula: Age = 11.368 - 0.345g + 0.553N0 - 1.096s - 0.380·s·N0, where g is 1 for males and 2 for females. The equation explained 87.1% of the total deviance. Conclusion: The results indicate that the original Cameriere formula should be reframed to suit the population of this nation specifically. Further studies are to be conducted to evaluate the applicability of this formula on a larger sample size. PMID:26816464
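The reported regression can be transcribed directly into a small function; the example values below are invented purely for illustration.

```python
def dental_age(g, s, n0):
    """Malaysian-population regression reported above.
    g: gender (1 = male, 2 = female); s: sum of normalized open apices;
    n0: number of teeth with completed root formation."""
    return 11.368 - 0.345 * g + 0.553 * n0 - 1.096 * s - 0.380 * s * n0

# Hypothetical example: a boy with s = 1.2 and 5 teeth with completed roots.
print(round(dental_age(g=1, s=1.2, n0=5), 2))   # about 10.2 years
```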
A two-field modified Lagrangian formulation for robust simulations of extrinsic cohesive zone models
NASA Astrophysics Data System (ADS)
Cazes, F.; Coret, M.; Combescure, A.
2013-06-01
This paper presents the robust implementation of a cohesive zone model based on extrinsic cohesive laws (i.e. laws involving an infinite initial stiffness). To this end, a two-field Lagrangian weak formulation in which cohesive tractions are chosen as the field variables along the crack's path is presented. Unfortunately, this formulation cannot model the infinite compliance of the broken elements accurately, and no simple criterion can be defined to determine the loading-unloading change of state at the integration points of the cohesive elements. Therefore, a modified Lagrangian formulation using a fictitious cohesive traction instead of the classical cohesive traction as the field variable is proposed. Thanks to this change of variable, the cohesive law becomes an increasing function of the equivalent displacement jump, which eliminates the problems mentioned previously. The ability of the proposed formulations to simulate fracture accurately and without field oscillations is investigated through three numerical test examples.
Compositional Models of Glass/Melt Properties and their Use for Glass Formulation
Vienna, John D. (Richland, Washington, USA)
2014-12-18
Nuclear waste glasses must simultaneously meet a number of criteria related to their processability, product quality, and cost factors. The properties that must be controlled in glass formulation and waste vitrification plant operation tend to vary smoothly with composition, allowing glass property-composition models to be developed and used. Models have been fit to the key glass properties. The properties are transformed so that simple functions of composition (e.g., linear, polynomial, or component ratios) can be used as model forms. The model forms are fit to experimental data designed statistically to efficiently cover the composition space of interest. Examples of these models are found in the literature. The glass property-composition models, their uncertainty definitions, property constraints, and optimality criteria are combined to formulate optimal glass compositions, control composition in vitrification plants, and to qualify waste glasses for disposal. An overview of current glass property-composition modeling techniques is summarized in this paper along with an example of how those models are applied to glass formulation and product qualification at the planned Hanford high-level waste vitrification plant.
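A hedged sketch of the kind of first-order property-composition model described above (Python/NumPy assumed; compositions, coefficients, and the property constraint are synthetic, and real formulation work also propagates model uncertainty):

```python
import numpy as np

# First-order mixture model: transformed property = sum_i c_i * x_i, with x_i the
# component mass fractions (rows sum to 1) and c_i fitted component coefficients.
rng = np.random.default_rng(3)
n_glasses, n_components = 40, 5
X = rng.dirichlet(np.ones(n_components), size=n_glasses)   # synthetic compositions
c_true = np.array([2.0, -1.5, 0.5, 3.0, -0.5])              # synthetic component effects
y = X @ c_true + rng.normal(0.0, 0.05, n_glasses)           # 'measured' transformed property

c_hat, *_ = np.linalg.lstsq(X, y, rcond=None)                # fitted model coefficients
print(c_hat, (y - X @ c_hat).std())

# Screen a candidate formulation against an upper limit on the (transformed) property.
candidate = np.array([0.3, 0.2, 0.2, 0.2, 0.1])
print(candidate @ c_hat <= 1.0)
```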
BCS, Nambu-Jona-Lasinio, and Han-Nambu: A sketch of Nambu's works in 1960-1965
NASA Astrophysics Data System (ADS)
Fujikawa, Kazuo
2016-06-01
The years 1960-1965 were a remarkable period for Yoichiro Nambu. Starting with a reformulation of BCS theory with emphasis on gauge invariance, he recognized the realization of spontaneous chiral symmetry breaking in particle physics as evidenced by the Goldberger-Treiman relation. A concrete model of Nambu and Jona-Lasinio illustrated the essence of the Nambu-Goldstone theorem and the idea of soft pions. After the proposal of the quark model by Gell-Mann, he together with Han constructed an alternative model of integrally charged quarks with possible non-Abelian gluons. All these remarkable works were performed during the years 1960-1965. Here I briefly review those works following the original papers of Nambu chronologically, together with a brief introduction to a formulation of Noether's theorem and the Ward-Takahashi identities using path integrals. This article is mostly based on a lecture given at the Nambu Memorial Symposium held at Osaka City University in September 2015, where Nambu started his professional career.
A set of constitutive relationships accounting for residual NAPL in the unsaturated zone.
Wipfler, E L; van der Zee, S E
2001-07-01
Although laboratory experiments show that non-aqueous phase liquid (NAPL) is retained in the unsaturated zone, no existing multiphase flow model accounts for residual NAPL after NAPL drainage in the unsaturated zone. We developed a static constitutive set of saturation-capillary pressure relationships for water, NAPL and air that accounts for both this residual NAPL and entrapped NAPL. The set of constitutive relationships is formulated similarly to the set of scaled relationships that is frequently applied in continuum models. The new set consists of three fluid-phase systems: a three-phase system and a two-phase system, that both comply with the original constitutive model, and a newly introduced residual NAPL system. The new system can be added relatively easily to the original two- and three-phase systems. Entrapment is included in the model. The constitutive relationships of the non-drainable residual NAPL system are based on qualitative fluid behavior derived from a pore scale model. The pore scale model reveals that the amount of residual NAPL depends on the spreading coefficient and the water saturation. Furthermore, residual NAPL is history-dependent. At the continuum scale, a critical NAPL pressure head defines the transition from free, mobile NAPL to residual NAPL. Although the Pc-S relationships for water and total liquid are not independent in the case of residual NAPL, two two-phase Pc-S relations can represent a three-phase residual system of Pc-S relations. A newly introduced parameter, referred to as the residual oil pressure head, reflects the mutual dependency of water and oil. Example calculations show consistent behavior of the constitutive model. Entrapment and retention in the unsaturated zone cooperate to retain NAPL. Moreover, the results of our constitutive model are in agreement with experimental observations.
Kibleur, Yves; Guffon, Nathalie
2016-04-01
The aim was to describe the status of patients with urea cycle disorders (UCD) at the latest long-term clinical follow-up of treatment with a new taste-masked formulation of sodium phenylbutyrate (NaPB) granules (Pheburane). These patients are a subset of those treated under a cohort temporary utilisation study (ATU) previously reported and now followed for 2 years. From a French cohort temporary utilization authorization (ATU) set up to monitor the use of Pheburane on a named-patient basis in UCD patients in advance of its marketing authorization, a subset of patients were followed up in the long term. Data on demographics, dosing characteristics of NaPB, concomitant medications, adverse events and clinical outcomes were collected at a follow-up visit after 1-2 years of treatment with the drug administered under marketing conditions. This paper reports on the subset of patients who were included in further long-term follow-up at the principal recruiting metabolic reference center involved in the original cohort. No episode of metabolic decompensation was observed over a treatment period ranging from 8 to 30 months with Pheburane, and the range of ammonia and glutamine levels continued to improve and remained within the normal range, thus adding valuable longer-term feedback to the original ATU report. In all, no adverse events were reported with Pheburane treatment. These additional data demonstrate the maintenance of the safety and efficacy of Pheburane over time. The recently developed taste-masked formulation of NaPB granules (Pheburane) improved the quality of life for UCD patients. The present post-marketing report on the use of the product confirms the original observations of improved compliance, efficacy and safety with this taste-masked formulation of NaPB.
Fate and origin of 1,2-dichloropropane in an unconfined shallow aquifer
Tesoriero, A.J.; Loffler, F.E.; Liebscher, H.
2001-01-01
A shallow aquifer with different redox zones overlain by intensive agricultural activity was monitored for the occurrence of 1,2-dichloropropane (DCP) to assess the fate and origin of this pollutant. DCP was detected more frequently in groundwater samples collected in aerobic and nitrate-reducing zones than those collected from iron-reducing zones. Simulated DCP concentrations for groundwater entering an iron-reducing zone were calculated from a fate and transport model that included dispersion, sorption, and hydrolysis but not degradation. Simulated concentrations were well in excess of measured values, suggesting that microbial degradation occurred in the iron-reducing zone. Microcosm experiments were conducted using aquifer samples collected from iron-reducing and aerobic zones to evaluate the potential for microbial degradation of DCP and to explain field observations. Hydrogenolysis of DCP and production of monochlorinated propanes in microcosm experiments occurred only with aquifer materials collected from the iron-reducing zone, and no dechlorination was observed in microcosms established with aquifer materials collected from the aerobic zones. Careful analyses of the DCP/1,2,2-trichloropropane ratios in groundwater indicated that older fumigant formulations were responsible for the high levels of DCP present in this aquifer.
Plasma Transfusion: History, Current Realities, and Novel Improvements.
Watson, Justin J J; Pati, Shibani; Schreiber, Martin A
2016-11-01
Traumatic hemorrhage is the leading cause of preventable death after trauma. Early transfusion of plasma and balanced transfusion have been shown to optimize survival, mitigate the acute coagulopathy of trauma, and restore the endothelial glycocalyx. There are a myriad of plasma formulations available worldwide, including fresh frozen plasma, thawed plasma, liquid plasma, plasma frozen within 24 h, and lyophilized plasma (LP). Significant equipoise exists in the literature regarding the optimal plasma formulation. LP is a freeze-dried formulation that was originally developed in the 1930s and used by the American and British military in World War II. It was subsequently discontinued due to risk of disease transmission from pooled donors. Recently, there has been a significant amount of research focusing on optimizing reconstitution of LP. Findings show that sterile water buffered with ascorbic acid results in decreased blood loss with suppression of systemic inflammation. We are now beginning to realize the creation of a plasma-derived formulation that rapidly produces the associated benefits without logistical or safety constraints. This review will highlight the history of plasma, detail the various types of plasma formulations currently available, their pathophysiological effects, impacts of storage on coagulation factors in vitro and in vivo, novel concepts, and future directions.
Robust rotational-velocity-Verlet integration methods.
Rozmanov, Dmitri; Kusalik, Peter G
2010-05-01
Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time consistent positions and momenta. The second method is also formulated in terms of quaternions but it is not quaternion specific and can be easily adapted for any other orientational representation. Both the methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
Robust rotational-velocity-Verlet integration methods
NASA Astrophysics Data System (ADS)
Rozmanov, Dmitri; Kusalik, Peter G.
2010-05-01
Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time consistent positions and momenta. The second method is also formulated in terms of quaternions but it is not quaternion specific and can be easily adapted for any other orientational representation. Both the methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
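The sketch below is not the Svanberg or Rozmanov-Kusalik integrator; it only illustrates the underlying quaternion kinematics, q' = (1/2) q (0, omega), that any quaternion-based rotational integrator propagates, here with a simple normalized midpoint step and a constant body-frame angular velocity (all names and parameters are illustrative).

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def step(q, omega_body, dt):
    """One orientation step: explicit midpoint rule on q' = 0.5 * q * (0, omega_body),
    followed by renormalization to keep q on the unit sphere."""
    def qdot(qq):
        return 0.5 * qmul(qq, np.concatenate(([0.0], omega_body)))
    q_half = q + 0.5 * dt * qdot(q)
    q_new = q + dt * qdot(q_half / np.linalg.norm(q_half))
    return q_new / np.linalg.norm(q_new)

q = np.array([1.0, 0.0, 0.0, 0.0])            # identity orientation
omega = np.array([0.0, 0.0, 2.0 * np.pi])      # one revolution per unit time about z
for _ in range(1000):
    q = step(q, omega, dt=1.0e-3)
print(q)   # ~(-1, 0, 0, 0): the identity rotation up to quaternion sign
```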
Analysis of a thioether lubricant by infrared Fourier microemission spectrophotometry
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.; Morales, W.; Lauer, J. L.
1986-01-01
An infrared Fourier microemission spectrophotometer is used to obtain spectra (wavenumber range, 630 to 1230 cm(-1)) from microgram quantities of thioether lubricant samples deposited on aluminum foil. Infrared bands in the spectra are reproducible and could be identified as originating from aromatic species (1,3-disubstituted benzenes). Spectra from all samples (neat and formulated, used and unused) are very similar. Additives (an acid and a phosphinate) present in low concentration (0.10 percent) in the formulated fluid are not detected. This instrument appears to be a viable tool in helping to identify lubricant components separated by liquid chromatography.
Simulation of noise involved in synthetic aperture radar
NASA Astrophysics Data System (ADS)
Grandchamp, Myriam; Cavassilas, Jean-Francois
1996-08-01
The synthetic aperture radar (SAR) returns from a linear distribution of scatterers are simulated and processed in order to estimate the reflectivity coefficients of the ground. An original expression of this estimate is given, which establishes the relation between the terms of signal and noise. Both are compared. One application of this formulation consists of detecting a surface ship wake on a complex SAR image. A smoothing is first accomplished on the complex image. The choice of the integration area is determined by the preceding mathematical formulation. Then a differential filter is applied, and results are shown for two parts of the wake.
Recursive Newton-Euler formulation of manipulator dynamics
NASA Technical Reports Server (NTRS)
Nasser, M. G.
1989-01-01
A recursive Newton-Euler procedure is presented for the formulation and solution of manipulator dynamical equations. The procedure includes rotational and translational joints and a topological tree. This model was verified analytically using a planar two-link manipulator. Also, the model was tested numerically against the Walker-Orin model using the Shuttle Remote Manipulator System data. The hinge accelerations obtained from both models were identical. The computational requirements of the model vary linearly with the number of joints. The computational efficiency of this method exceeds that of Walker-Orin methods. This procedure may be viewed as a considerable generalization of Armstrong's method. A six-by-six formulation is adopted which enhances both the computational efficiency and simplicity of the model.
Analysis of the surface heat balance over the world ocean
NASA Technical Reports Server (NTRS)
Esbenson, S. K.
1981-01-01
The net surface heat fluxes over the global ocean for all calendar months were evaluated and expressed in the form Qs = Q2(T*A - Ts), where Qs is the net surface heat flux, Ts is the sea surface temperature, T*A is the apparent atmospheric equilibrium temperature, and Q2 is the proportionality constant. Here T*A and Q2, derived from the original heat flux formulas, are functions of the surface meteorological parameters (e.g., surface wind speed, air temperature, dew point, etc.) and the surface radiation parameters. This formulation of the net surface heat flux together with climatological atmospheric parameters provides a realistic and computationally efficient upper boundary condition for oceanic climate modeling.
Spin-Orbit Dimers and Noncollinear Phases in d1 Cubic Double Perovskites
NASA Astrophysics Data System (ADS)
Romhányi, Judit; Balents, Leon; Jackeli, George
2017-05-01
We formulate and study a spin-orbital model for a family of cubic double perovskites with d1 ions occupying a frustrated fcc sublattice. A variational approach and a complementary analytical analysis reveal a rich variety of phases emerging from the interplay of Hund's rule and spin-orbit coupling. The phase diagram includes noncollinear ordered states, with or without a net moment, and, remarkably, a large window of a nonmagnetic disordered spin-orbit dimer phase. The present theory uncovers the physical origin of the unusual amorphous valence bond state experimentally suggested for Ba2BMoO6 (B = Y, Lu) and predicts possible ordered patterns in Ba2BOsO6 (B = Na, Li) compounds.
Origin of the spike-timing-dependent plasticity rule
NASA Astrophysics Data System (ADS)
Cho, Myoung Won; Choi, M. Y.
2016-08-01
A biological synapse changes its efficacy depending on the difference between pre- and post-synaptic spike timings. Formulating spike-timing-dependent interactions in terms of the path integral, we establish a neural-network model, which makes it possible to predict relevant quantities rigorously by means of standard methods in statistical mechanics and field theory. In particular, the biological synaptic plasticity rule is shown to emerge as the optimal form for minimizing the free energy. It is further revealed that maximization of the entropy of neural activities gives rise to the competitive behavior of biological learning. This demonstrates that statistical mechanics helps to understand rigorously key characteristic behaviors of a neural network, thus providing the possibility of physics serving as a useful and relevant framework for probing life.
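The paper derives the plasticity rule from a free-energy argument; as a point of reference, the phenomenological exponential STDP window that such derivations aim to reproduce can be written in a few lines (the parameter values below are illustrative textbook choices, not taken from the paper).

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Exponential STDP window (dt = t_post - t_pre, in ms): potentiation when the
    presynaptic spike precedes the postsynaptic spike, depression otherwise."""
    return np.where(dt > 0.0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

print(stdp_dw(np.array([-40.0, -10.0, 5.0, 30.0])))
```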
Sound propagation in liquid foams: Unraveling the balance between physical and chemical parameters.
Pierre, Juliette; Giraudet, Brice; Chasle, Patrick; Dollet, Benjamin; Saint-Jalmes, Arnaud
2015-04-01
We present experimental results on the propagation of an ultrasonic wave (40 kHz) in liquid foams, as a function of the foam physical and chemical parameters. We have first implemented an original setup, using transducers in a transmission configuration. The foam coarsening was used to vary the bubble size (remaining in the submillimeter range), and we have made foams with various chemical formulations, to investigate the role of the chemicals at the bubble interfaces or in bulk. The results are compared with recently published theoretical works, and good agreement is found. In particular, for all the foams, we have evidenced two asymptotic limits, at small and large bubble size, connected by a nontrivial resonant behavior, associated with an effective negative density. These qualitative features are robust whatever the chemical formulation; we discuss the observed differences between the samples, in relation to the interfacial and bulk viscoelasticity. These results demonstrate the rich and complex acoustic behavior of foams. While the bubble size here always remains smaller than the sound wavelength, it turns out that one must go well beyond mean-field modeling to describe the foam acoustic properties.
Sound propagation in liquid foams: Unraveling the balance between physical and chemical parameters
NASA Astrophysics Data System (ADS)
Pierre, Juliette; Giraudet, Brice; Chasle, Patrick; Dollet, Benjamin; Saint-Jalmes, Arnaud
2015-04-01
We present experimental results on the propagation of an ultrasonic wave (40 kHz) in liquid foams, as a function of the foam physical and chemical parameters. We have first implemented an original setup, using transducers in a transmission configuration. The foam coarsening was used to vary the bubble size (remaining in the submillimeter range), and we have made foams with various chemical formulations, to investigate the role of the chemicals at the bubble interfaces or in bulk. The results are compared with recently published theoretical works, and good agreement is found. In particular, for all the foams, we have evidenced two asymptotic limits, at small and large bubble size, connected by a nontrivial resonant behavior, associated with an effective negative density. These qualitative features are robust whatever the chemical formulation; we discuss the observed differences between the samples, in relation to the interfacial and bulk viscoelasticity. These results demonstrate the rich and complex acoustic behavior of foams. While the bubble size here always remains smaller than the sound wavelength, it turns out that one must go well beyond mean-field modeling to describe the foam acoustic properties.
On 2- and 3-person games on polyhedral sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belenky, A.S.
1994-12-31
Special classes of 3-person games are considered where the sets of players' allowable strategies are polyhedral and the payoff functions are defined as maxima, on a polyhedral set, of a certain kind of sums of linear and bilinear functions. Necessary and sufficient conditions, which are easy to verify, for a Nash point in these games are established, and a finite method, based on these conditions, for calculating Nash points is proposed. It is shown that the game serves as a generalization of a model for a problem of waste products evacuation from a territory. The method makes it possible to reduce calculation of a Nash point to solving some linear and quadratic programming problems formulated on the basis of the original 3-person game. A class of 2-person games on connected polyhedral sets is considered, with the payoff function being a sum of two linear functions and one bilinear function. Necessary and sufficient conditions are established for the min-max, the max-min, and for a certain equilibrium. It is shown that the corresponding points can be calculated from auxiliary linear programming problems formulated on the basis of the master game.
Dabhi, Mahesh R; Nagori, Stavan A; Gohel, Mukesh C; Parikh, Rajesh K; Sheth, Navin R
2010-01-01
Smart gel periodontal drug delivery systems (SGPDDS) containing gellan gum (0.1-0.8% w/v), lutrol F127 (14, 16, and 18% w/v), and ornidazole (1% w/v) were designed for the treatment of periodontal diseases. Each formulation was characterized in terms of in vitro gelling capacity, viscosity, rheology, content uniformity, in vitro drug release, and syringeability. In vitro gelation time and the nature of the gel formed in simulated saliva for prepared formulations showed polymeric concentration dependency. Drug release data from all formulations were fitted to different kinetic models, and the Korsmeyer-Peppas model was the best-fit model. Drug release was significantly decreased as the concentration of each polymer component was increased. Increasing the concentration of each polymeric component significantly increased viscosity, syringeability, and time for 50%, 70%, and 90% drug release. In conclusion, the formulations described offer a wide range of physical and drug release characteristics. The formulation containing 0.8% w/v of gellan gum and 16% w/v of lutrol F127 exhibited superior physical characteristics.
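For context, the Korsmeyer-Peppas fit mentioned above is commonly done by linear regression in log-log space on the early part of the release curve; the sketch below (Python/NumPy assumed, with invented release data) shows the idea. The fitted exponent n is then read against the usual thresholds (e.g., n near 0.5 suggesting Fickian diffusion for thin films); the values for the SGPDDS formulations are those reported by the authors, not reproduced here.

```python
import numpy as np

def fit_korsmeyer_peppas(t, released):
    """Fit Mt/Minf = k * t^n on the early-release portion (<= 60% released)
    by linear regression in log-log space; returns (k, n)."""
    mask = (released > 0.0) & (released <= 0.6)
    slope, intercept = np.polyfit(np.log(t[mask]), np.log(released[mask]), 1)
    return np.exp(intercept), slope

t = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])          # hours (illustrative data)
f = np.array([0.12, 0.18, 0.26, 0.37, 0.46, 0.55])     # fraction of drug released
k, n = fit_korsmeyer_peppas(t, f)
print(round(k, 3), round(n, 3))                         # release exponent n ~ 0.55 here
```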
Structure-Property Relationships of Architectural Coatings by Neutron Methods
NASA Astrophysics Data System (ADS)
Nakatani, Alan
2015-03-01
Architectural coatings formulations are multi-component mixtures containing latex polymer binder, pigment, rheology modifiers, surfactants, and colorants. In order to achieve the desired flow properties for these formulations, measures of the underlying structure of the components as a function of shear rate, and of the impact of formulation variables on that structure, are necessary. We have conducted detailed measurements to understand the evolution under shear of local microstructure and larger scale mesostructure in model architectural coatings formulations by small angle neutron scattering (SANS) and ultra small angle neutron scattering (USANS), respectively. The SANS results show that an adsorbed layer of rheology modifier molecules exists on the surface of the latex particles. However, the additional hydrodynamic volume occupied by the adsorbed surface layer is insufficient to account for the observed viscosity by standard hard sphere suspension models (Krieger-Dougherty). The USANS results show the presence of latex aggregates, which are fractal in nature. These fractal aggregates are the primary structures responsible for coatings formulation viscosity. Based on these results, a new model for the viscosity of coatings formulations has been developed, which is capable of reproducing the observed viscosity behavior.
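The hard-sphere baseline referred to above (Krieger-Dougherty) is a one-line formula; the sketch below evaluates it for a few volume fractions with illustrative parameter values (maximum packing fraction 0.64, intrinsic viscosity 2.5), which are common defaults rather than values from this work.

```python
def krieger_dougherty(phi, eta0=1.0, phi_m=0.64, intrinsic=2.5):
    """Relative viscosity of a hard-sphere suspension:
    eta / eta0 = (1 - phi / phi_m) ** (-intrinsic * phi_m)."""
    return eta0 * (1.0 - phi / phi_m) ** (-intrinsic * phi_m)

for phi in (0.1, 0.3, 0.5):
    print(phi, round(krieger_dougherty(phi), 2))
```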
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; D'Costa, Joseph F.
1991-01-01
This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involves time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.
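As a point of reference for the finite-speed, wave-like behavior discussed above, here is a deliberately simple explicit finite-difference sketch of the hyperbolic (Cattaneo-type) heat equation, tau*T_tt + T_t = alpha*T_xx; it is not the implicit-explicit alpha-method finite element scheme of the paper, and all parameters are illustrative.

```python
import numpy as np

# Explicit central-difference scheme for tau*T_tt + T_t = alpha*T_xx on [0, 1],
# with a sudden temperature rise at the left boundary (thermal shock).
alpha, tau = 1.0, 1.0                       # wave speed c = sqrt(alpha/tau) = 1
nx, dx, dt = 101, 0.01, 0.004               # dt below the CFL limit dx/c
x = np.linspace(0.0, 1.0, nx)
T_old = np.zeros(nx)
T = np.zeros(nx)
T[0] = T_old[0] = 1.0

a, b = tau / dt ** 2, 1.0 / (2.0 * dt)
t = 0.0
while t < 0.5:
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx ** 2
    T_new = (alpha * lap + 2.0 * a * T - (a - b) * T_old) / (a + b)
    T_new[0], T_new[-1] = 1.0, 0.0           # fixed-temperature boundaries
    T_old, T = T, T_new
    t += dt

print(x[np.argmax(T < 1e-3)])   # thermal front near x ~ c*t = 0.5, not a diffusive profile
```

Central schemes like this one also exhibit the spurious oscillations near the propagating front that the abstract mentions, which is exactly the behavior the controllable numerical dissipation of the alpha method is meant to manage.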
Colbourn, E A; Roskilly, S J; Rowe, R C; York, P
2011-10-09
This study investigated the utility and potential advantages of gene expression programming (GEP)--a new development in evolutionary computing for modelling data and automatically generating equations that describe the cause-and-effect relationships in a system--applied to four types of pharmaceutical formulation, and compared the models with those generated by neural networks, a technique now widely used in formulation development. Both methods were capable of discovering subtle and non-linear relationships within the data, with no requirement from the user to specify the functional forms that should be used. Although the neural networks rapidly developed models with higher values of the ANOVA R(2), these were black-box models that provided little insight into the key relationships. However, GEP, although significantly slower at developing models, generated relatively simple equations describing the relationships that could be interpreted directly. The results indicate that GEP can be considered an effective and efficient modelling technique for formulation data. Copyright © 2011 Elsevier B.V. All rights reserved.
Formulation of human-structure interaction system models for vertical vibration
NASA Astrophysics Data System (ADS)
Caprani, Colin C.; Ahmadi, Ehsan
2016-09-01
In this paper, human-structure interaction system models for vibration in the vertical direction are considered. This work assembles various moving load models from the literature and proposes an extension from a single pedestrian to a crowd of pedestrians in the finite element (FE) formulation of crowd-structure interaction systems. The walking pedestrian vertical force is represented as a general time-dependent force, and the pedestrian is in turn modelled as a moving force, moving mass, or moving spring-mass-damper. The arbitrary beam structure is modelled using either a formulation in modal coordinates or finite elements. In each case, the human-structure interaction (HSI) system is first formulated for a single walking pedestrian and then extended to consider a crowd of pedestrians. Finally, example applications for single pedestrian and crowd loading scenarios are examined. It is shown how the models can be used to quantify the interaction between the crowd and the bridge structure. This work should find use for the evaluation of existing and new footbridges.
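A minimal sketch of the simplest member of that model family, the moving-force model on a simply supported beam in modal coordinates (Python with NumPy/SciPy assumed; the span, stiffness, force, and pacing speed are invented illustration values):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Moving-force model in modal coordinates for a simply supported beam:
#   q_n'' + 2*zeta*w_n*q_n' + w_n^2*q_n = (2/(rho_A*L)) * P * sin(n*pi*v*t/L)
# while a constant force P crosses the span at speed v.
L, EI, rho_A = 50.0, 2.0e10, 5.0e3          # span (m), EI (N m^2), mass per length (kg/m)
P, v, zeta, n_modes = 700.0, 1.5, 0.005, 3
wn = np.array([(n * np.pi / L) ** 2 * np.sqrt(EI / rho_A) for n in range(1, n_modes + 1)])

def rhs(t, y):
    q, qd = y[:n_modes], y[n_modes:]
    on_span = 0.0 <= v * t <= L
    f = np.array([(2.0 / (rho_A * L)) * P * np.sin(n * np.pi * v * t / L) if on_span else 0.0
                  for n in range(1, n_modes + 1)])
    return np.concatenate([qd, f - 2.0 * zeta * wn * qd - wn ** 2 * q])

sol = solve_ivp(rhs, (0.0, L / v), np.zeros(2 * n_modes), max_step=0.01)
phi_mid = np.array([np.sin(n * np.pi / 2.0) for n in range(1, n_modes + 1)])  # mode shapes at midspan
w_mid = phi_mid @ sol.y[:n_modes]
print(abs(w_mid).max())   # peak midspan deflection (m) during the crossing
```

The moving-mass and moving spring-mass-damper cases add feedback from the structure to the pedestrian and are correspondingly more involved.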
Super-Group Field Cosmology in Batalin-Vilkovisky Formulation
NASA Astrophysics Data System (ADS)
Upadhyay, Sudhaker
2016-09-01
In this paper we study third-quantized super-group field cosmology, a model in the multiverse scenario, in the Batalin-Vilkovisky (BV) formulation. Further, we propose superfield/super-antifield-dependent BRST symmetry transformations. Within this formulation we establish a connection between the two different solutions of the quantum master equation within the BV formulation.
Past, Present, and Future of Chemical Acaricides
USDA-ARS?s Scientific Manuscript database
There have been many different acaricides and acaricide formulations used throughout the history of tick control. Originally, various mixtures of crude oil, lard, sulfur, and kerosene were used for dipping livestock. This was followed by Beaumont crude oil. Arsenical dips were introduced in 1911 and...
Ethical Liberalism, Education and the "New Right."
ERIC Educational Resources Information Center
Olssen, Mark
2000-01-01
Examines the philosophical tradition of ethical liberalism from its emergence as a coherent response to 19th century classical liberal individualism through contemporary formulations. Pursues origins in John Stuart Mill's writings and assesses ethical liberalism's relevance for understanding current neo-liberal policy restructuring in education.…
Generalized in vitro-in vivo relationship (IVIVR) model based on artificial neural networks
Mendyk, Aleksander; Tuszyński, Paweł K; Polak, Sebastian; Jachowicz, Renata
2013-01-01
Background The aim of this study was to develop a generalized in vitro-in vivo relationship (IVIVR) model based on in vitro dissolution profiles together with quantitative and qualitative composition of dosage formulations as covariates. Such a model would be of substantial aid in the early stages of development of a pharmaceutical formulation, when no in vivo results are yet available and it is impossible to create a classical in vitro-in vivo correlation (IVIVC)/IVIVR. Methods Chemoinformatics software was used to compute the molecular descriptors of drug substances (ie, active pharmaceutical ingredients) and excipients. The data were collected from the literature. Artificial neural networks were used as the modeling tool. The training process was carried out using the 10-fold cross-validation technique. Results The database contained 93 formulations with 307 inputs initially, and was later limited to 28 in the course of a sensitivity analysis. The four best models were introduced into the artificial neural network ensemble. Complete in vivo profiles were predicted accurately for 37.6% of the formulations. Conclusion It has been shown that artificial neural networks can be an effective predictive tool for constructing IVIVR in an integrated generalized model for various formulations. Because IVIVC/IVIVR is classically conducted for 2–4 formulations and with a single active pharmaceutical ingredient, the approach described here is unique in that it incorporates various active pharmaceutical ingredients and dosage forms into a single model. Thus, preliminary IVIVC/IVIVR can be available without in vivo data, which is impossible using current IVIVC/IVIVR procedures. PMID:23569360
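The modeling pipeline itself (descriptor inputs, an ANN regressor, 10-fold cross-validation) can be sketched generically; the code below uses scikit-learn on synthetic data and is only a stand-in for the ensemble of dedicated networks used in the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: rows are formulations, columns are dissolution points plus
# formulation descriptors, the target is a summary of the in vivo response.
rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, size=(93, 28))
y = 0.6 * X[:, 0] + 0.3 * X[:, 5] ** 2 + 0.1 * rng.normal(size=93)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
scores = cross_val_score(model, X, y, cv=10, scoring="r2")   # 10-fold cross-validation
print(scores.mean(), scores.std())
```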
Development, fabrication and test of a high purity silica heat shield
NASA Technical Reports Server (NTRS)
Rusert, E. L.; Drennan, D. N.; Biggs, M. S.
1978-01-01
A highly reflective hyperpure (<25 ppm ion impurities) slip cast fused silica heat shield material developed for planetary entry probes was successfully scaled up. Process development activities for slip casting large parts included green strength improvements, casting slip preparation, aggregate casting, strength, reflectance, and subscale fabrication. Successful fabrication of a one-half scale Saturn probe (shape and size) heat shield was accomplished while maintaining the silica high purity and reflectance through the scale-up process. However, stress analysis of this original aggregate slip cast material indicated a small margin of safety (M.S. = +4%) using a factor of safety of 1.25. An alternate hyperpure material formulation to increase the strength and toughness for a greater safety margin was evaluated. The alternate material incorporates short hyperpure silica fibers into the casting slip. The best formulation evaluated has a 50% by weight fiber addition resulting in an 80% increase in flexural strength and a 170% increase in toughness over the original aggregate slip cast materials with comparable reflectance.
Comparison of parametric and bootstrap method in bioequivalence test.
Ahn, Byung-Jin; Yim, Dong-Seok
2009-10-01
The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
Comparison of Parametric and Bootstrap Method in Bioequivalence Test
Ahn, Byung-Jin
2009-01-01
The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption. PMID:19915699
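A stripped-down illustration of the bootstrap side of this comparison (Python/NumPy assumed): the percentile bootstrap below ignores the period and sequence effects of a real crossover analysis, and uses invented per-subject ratios, but shows how a nonparametric 90% CI and the 80-125% check are obtained.

```python
import numpy as np

def bootstrap_be_ci(log_ratio, n_boot=1000, rng=None):
    """Percentile-bootstrap 90% CI for the formulation effect, where log_ratio
    holds per-subject log(test/reference) values of AUC or Cmax."""
    rng = rng or np.random.default_rng(0)
    means = [np.mean(rng.choice(log_ratio, size=len(log_ratio), replace=True))
             for _ in range(n_boot)]
    lo, hi = np.percentile(means, [5.0, 95.0])
    return np.exp(lo), np.exp(hi)             # back to the ratio scale

# Invented per-subject test/reference ratios from a hypothetical crossover study.
ratios = np.array([0.95, 1.10, 0.88, 1.02, 1.15, 0.97, 1.05, 0.92, 1.08, 0.99, 1.01, 0.94])
lo, hi = bootstrap_be_ci(np.log(ratios))
print(round(lo, 3), round(hi, 3), bool(0.80 <= lo and hi <= 1.25))   # 80-125% criterion
```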
NASA Astrophysics Data System (ADS)
Hristov, Y.; Oxley, G.; Žagar, M.
2014-06-01
The Bolund measurement campaign, performed by Danish Technical University (DTU) Wind Energy Department (also known as RISØ), provided significant insight into wind flow modeling over complex terrain. In the blind comparison study several modelling solutions were submitted with the vast majority being steady-state Computational Fluid Dynamics (CFD) approaches with two equation k-epsilon turbulence closure. This approach yielded the most accurate results, and was identified as the state-of-the-art tool for wind turbine generator (WTG) micro-siting. Based on the findings from Bolund, further comparison between CFD and field measurement data has been deemed essential in order to improve simulation accuracy for turbine load and long-term Annual Energy Production (AEP) estimations. Vestas Wind Systems A/S is a major WTG original equipment manufacturer (OEM) with an installed base of over 60GW in over 70 countries accounting for 19% of the global installed base. The Vestas Performance and Diagnostic Centre (VPDC) provides online live data to more than 47GW of these turbines allowing a comprehensive comparison between modelled and real-world energy production data. In previous studies, multiple sites have been simulated with a steady neutral CFD formulation for the atmospheric surface layer (ASL), and wind resource (RSF) files have been generated as a base for long-term AEP predictions showing significant improvement over predictions performed with the industry standard linear WAsP tool. In this study, further improvements to the wind resource file generation with CFD are examined using an unsteady diurnal cycle approach with a full atmospheric boundary layer (ABL) formulation, with the unique stratifications throughout the cycle weighted according to mesoscale simulated sectorwise stability frequencies.
Channelling information flows from observation to decision; or how to increase certainty
NASA Astrophysics Data System (ADS)
Weijs, S. V.
2015-12-01
To make adequate decisions in an uncertain world, information needs to reach the decision problem, to enable overseeing the full consequences of each possible decision. On its way from the physical world to a decision problem, information is transferred through the physical processes that influence the sensor, then through processes that happen in the sensor, and through wires or electromagnetic waves. For the last decade, most information becomes digitized at some point. From the moment of digitization, information can in principle be transferred losslessly. Information about the physical world is often also stored, sometimes in compressed form, such as physical laws, concepts, or models of specific hydrological systems. It is important to note, however, that all information about a physical system eventually has to originate from observation (although inevitably coloured by some prior assumptions). This colouring makes the compression lossy, but is effectively the only way to make use of similarities in time and space that enable predictions while measuring only a few macro-states of a complex hydrological system. Adding physical process knowledge to a hydrological model can thus be seen as a convenient way to transfer information from observations from a different time or place, to make predictions about another situation, assuming the same dynamics are at work. The key challenge to achieve more certainty in hydrological prediction can therefore be formulated as a challenge to tap and channel information flows from the environment. For tapping more information flows, new measurement techniques, large scale campaigns, historical data sets, and large sample hydrology and regionalization efforts can bring progress. For channelling the information flows with minimum loss, model calibration and model formulation techniques should be critically investigated. Some experience from research in a Swiss high alpine catchment is used as an illustration.
Tannenbaum, Emmanuel; Sherley, James L; Shakhnovich, Eugene I
2005-04-01
This paper develops a point-mutation model describing the evolutionary dynamics of a population of adult stem cells. Such a model may prove useful for quantitative studies of tissue aging and the emergence of cancer. We consider two modes of chromosome segregation: (1) random segregation, where the daughter chromosomes of a given parent chromosome segregate randomly into the stem cell and its differentiating sister cell and (2) "immortal DNA strand" co-segregation, for which the stem cell retains the daughter chromosomes with the oldest parent strands. Immortal strand co-segregation is a mechanism, originally proposed by [Cairns Nature (London) 255, 197 (1975)], by which stem cells preserve the integrity of their genomes. For random segregation, we develop an ordered strand pair formulation of the dynamics, analogous to the ordered strand pair formalism developed for quasispecies dynamics involving semiconservative replication with imperfect lesion repair (in this context, lesion repair is taken to mean repair of postreplication base-pair mismatches). Interestingly, a similar formulation is possible with immortal strand co-segregation, despite the fact that this segregation mechanism is age dependent. From our model we are able to mathematically show that, when lesion repair is imperfect, then immortal strand co-segregation leads to better preservation of the stem cell lineage than random chromosome segregation. Furthermore, our model allows us to estimate the optimal lesion repair efficiency for preserving an adult stem cell population for a given period of time. For human stem cells, we obtain that mispaired bases still present after replication and cell division should be left untouched, to avoid potentially fixing a mutation in both DNA strands.
Concise CIO based precession-nutation formulations
NASA Astrophysics Data System (ADS)
Capitaine, N.; Wallace, P. T.
2008-01-01
Context: The IAU 2000/2006 precession-nutation models have precision goals measured in microarcseconds. To reach this level of performance has required series containing terms at over 1300 frequencies and involving several thousand amplitude coefficients. There are many astronomical applications for which such precision is not required and the associated heavy computations are wasteful. This justifies developing smaller models that achieve adequate precision with greatly reduced computing costs. Aims: We discuss strategies for developing simplified IAU 2000/2006 precession-nutation procedures that offer a range of compromises between accuracy and computing costs. Methods: The chain of transformations linking celestial and terrestrial coordinates comprises frame bias, precession-nutation, Earth rotation and polar motion. We address the bias and precession-nutation (NPB) portion of the chain, linking the Geocentric Celestial Reference System (GCRS) with the Celestial Intermediate Reference System (CIRS), the latter based on the Celestial Intermediate Pole (CIP) and Celestial Intermediate Origin (CIO). Starting from direct series that deliver the CIP coordinates X,Y and (via the quantity s + XY/2) the CIO locator s, we look at the opportunities for simplification. Results: The biggest reductions come from truncating the series, but some additional gains can be made in the areas of the matrix formulation, the expressions for the nutation arguments and by subsuming long period effects into the bias quantities. Three example models are demonstrated that approximate the IAU 2000/2006 CIP to accuracies of 1 mas, 16 mas and 0.4 arcsec throughout 1995-2050 but with computation costs reduced by 1, 2 and 3 orders of magnitude compared with the full model. Appendices A to G are only available in electronic form at http://www.aanda.org
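As a point of reference for the full-accuracy chain that the simplified models approximate, the sketch below forms the GCRS-to-CIRS (bias-precession-nutation) matrix from the CIP coordinates X, Y and the CIO locator s using the pyERFA bindings to the SOFA routines. The package, the example epoch, and the use of the full IAU 2006/2000A series are assumptions; the truncated series discussed in the paper are not reproduced here.

```python
import erfa  # pyERFA bindings to the SOFA library (assumed installed)

# Example epoch: 2025-01-01 0h TT, expressed as a two-part Julian date.
tt1, tt2 = 2460676.5, 0.0

# Full IAU 2006/2000A series: CIP coordinates X, Y and the CIO locator s.
x, y = erfa.xy06(tt1, tt2)
s = erfa.s06(tt1, tt2, x, y)

# GCRS -> CIRS rotation matrix built from X, Y, s.
rc2i = erfa.c2ixys(x, y, s)

print(f"X = {x:.12f} rad, Y = {y:.12f} rad, s = {s:.3e} rad")
print(rc2i)
```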
The lateral mesodermal divide: an epigenetic model of the origin of paired fins.
Nuño de la Rosa, Laura; Müller, Gerd B; Metscher, Brian D
2014-01-01
By examining development at the level of tissues and processes, rather than focusing on gene expression, we have formulated a general hypothesis to explain the dorso-ventral and anterior-posterior placement of paired appendage initiation sites in vertebrates. According to our model, the number and position of paired appendages are due to a commonality of embryonic tissue environments determined by the global interactions involving the two separated layers (somatic and visceral) of lateral plate mesoderm along the dorso-ventral and anterior-posterior axes of the embryo. We identify this distribution of developmental conditions, as modulated by the separation/contact of the two LPM layers and their interactions with somitic mesoderm, ectoderm, and endoderm as a dynamic developmental entity which we have termed the lateral mesodermal divide (LMD). Where the divide results in a certain tissue environment, fin bud initiation can occur. According to our hypothesis, the influence of the developing gut suppresses limb initiation along the midgut region and the ventral body wall owing to an "endodermal predominance." From an evolutionary perspective, the lack of gut regionalization in agnathans reflects the ancestral absence of these conditions, and the elaboration of the gut together with the concomitant changes to the LMD in the gnathostomes could have led to the origin of paired fins. © 2013 Wiley Periodicals, Inc.
A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall
NASA Astrophysics Data System (ADS)
Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.
2017-06-01
Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve the full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from real world is shown as a proof of concept.
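The sketch below illustrates the two ingredients the abstract combines, a discrete wet/dry occurrence process and continuous positive depths, plus a proportional adjustment that restores exact consistency with a given coarse-scale total. The first-order occurrence chain, the gamma depth distribution, and all parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def disaggregate(coarse_total, n_sub, p_wet=0.4, p_ww=0.6, p_dw=0.25,
                 shape=0.7, scale=2.0):
    """Split one coarse-scale total into n_sub intermittent sub-interval depths.

    Occurrence: first-order wet/dry chain (p_ww = P(wet|wet), p_dw = P(wet|dry));
    depths of wet intervals: gamma. The series is then rescaled so its sum matches
    the coarse total exactly, which preserves the wet/dry pattern (intermittency)
    and the relative temporal structure of the depths.
    """
    wet = np.empty(n_sub, dtype=bool)
    wet[0] = rng.random() < p_wet
    for k in range(1, n_sub):
        p = p_ww if wet[k - 1] else p_dw
        wet[k] = rng.random() < p
    depths = np.where(wet, rng.gamma(shape, scale, size=n_sub), 0.0)
    if depths.sum() == 0.0:            # guarantee at least one wet interval
        depths[rng.integers(n_sub)] = 1.0
    return depths * (coarse_total / depths.sum())

# Example: disaggregate a 12 mm daily total into 24 hourly values.
hourly = disaggregate(12.0, 24)
print(np.round(hourly, 2), hourly.sum())
```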
Integrated Formulation of Beacon-Based Exception Analysis for Multimissions
NASA Technical Reports Server (NTRS)
Mackey, Ryan; James, Mark; Park, Han; Zak, Mickail
2003-01-01
Further work on beacon-based exception analysis for multimissions (BEAM), a method of real-time, automated diagnosis of complex electromechanical systems, has greatly expanded its capability and suitability of application. This expanded formulation, which fully integrates physical models and symbolic analysis, is described. The new formulation of BEAM expands upon previous advanced techniques for the analysis of signal data, utilizing mathematical modeling of the system physics and expert-system reasoning.
Peachman, Kristina K; Li, Qin; Matyas, Gary R; Shivachandra, Sathish B; Lovchik, Julie; Lyons, Rick C; Alving, Carl R; Rao, Venigalla B; Rao, Mangala
2012-01-01
In an effort to develop an improved anthrax vaccine that shows high potency, five different anthrax protective antigen (PA)-adjuvant vaccine formulations that were previously found to be efficacious in a nonhuman primate model were evaluated for their efficacy in a rabbit pulmonary challenge model using Bacillus anthracis Ames strain spores. The vaccine formulations include PA adsorbed to Alhydrogel, PA encapsulated in liposomes containing monophosphoryl lipid A, stable liposomal PA oil-in-water emulsion, PA displayed on bacteriophage T4 by the intramuscular route, and PA mixed with Escherichia coli heat-labile enterotoxin administered by the needle-free transcutaneous route. Three of the vaccine formulations administered by the intramuscular or the transcutaneous route as a three-dose regimen induced 100% protection in the rabbit model. One of the formulations, liposomal PA, also induced significantly higher lethal toxin neutralizing antibodies than PA-Alhydrogel. Even 5 months after the second immunization of a two-dose regimen, rabbits vaccinated with liposomal PA were 100% protected from lethal challenge with Ames strain spores. In summary, the needle-free skin delivery and liposomal formulation that were found to be effective in two different animal model systems appear to be promising candidates for next-generation anthrax vaccine development.
Su, Li; Farewell, Vernon T
2013-01-01
For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. PMID:24201470
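The sketch below illustrates the basic two-part idea for semi-continuous data: a logistic model for zero versus positive responses and a log-normal model for the positive part, combined into an overall marginal mean E[Y] = Pr(Y>0) x E[Y | Y>0]. The simulated covariate, parameter values, and the absence of random effects are illustrative assumptions; the mixed-model formulation discussed in the paper is not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
x = rng.binomial(1, 0.5, n)                  # single binary covariate (e.g. treatment)

# Simulate semi-continuous data: logistic zero part + log-normal positive part.
p_pos = 1.0 / (1.0 + np.exp(-(-0.2 + 0.8 * x)))
positive = rng.random(n) < p_pos
y = np.where(positive, np.exp(rng.normal(1.0 + 0.5 * x, 0.6)), 0.0)

X = sm.add_constant(x)

# Part 1: probability of a positive response.
logit_fit = sm.Logit((y > 0).astype(int), X).fit(disp=0)

# Part 2: linear model for log(y) on the positive observations only.
pos = y > 0
ols_fit = sm.OLS(np.log(y[pos]), X[pos]).fit()

# Overall marginal mean per covariate level:
#   E[Y|x] = Pr(Y>0|x) * exp(mu(x) + sigma^2 / 2)   (log-normal back-transform)
sigma2 = ols_fit.scale
for xv in (0, 1):
    xi = np.array([1.0, xv])
    pr = 1.0 / (1.0 + np.exp(-xi @ logit_fit.params))
    mean_pos = np.exp(xi @ ols_fit.params + sigma2 / 2.0)
    print(f"x={xv}: Pr(Y>0)={pr:.3f}, E[Y|Y>0]={mean_pos:.2f}, E[Y]={pr * mean_pos:.2f}")
```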
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Michael M.; Marzouk, Youssef M.; Adams, Brian M.
2008-10-01
Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern since the anthrax attacks of 2001. The ability to characterize the parameters of such attacks, i.e., to estimate the number of people infected, the time of infection, the average dose received, and the rate of disease spread in contemporary American society (for contagious diseases), is important when planning a medical response. For non-contagious diseases, we address the characterization problem by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To keep the approach relevant for response planning, we limit ourselves to 3.5 days of data. In computational tests performed for anthrax, we usually find these observation windows sufficient, especially if the outbreak model employed in the inverse problem is accurate. For contagious diseases, we formulated a Bayesian inversion technique to infer both pathogenic transmissibility and the social network from outbreak observations, ensuring that the two determinants of spreading are identified separately. We tested this technique on data collected from a 1967 smallpox epidemic in Abakaliki, Nigeria. We inferred, probabilistically, different transmissibilities in the structured Abakaliki population, the social network, and the chain of transmission. Finally, we developed an individual-based epidemic model to realistically simulate the spread of a rare (or eradicated) disease in a modern society. This model incorporates the mixing patterns observed in an (American) urban setting and accepts, as model input, pathogenic transmissibilities estimated from historical outbreaks that may have occurred in socio-economic environments with little resemblance to contemporary society. Techniques were also developed to simulate disease spread on static and sampled network reductions of the dynamic social networks originally in the individual-based model, yielding faster, though approximate, network-based epidemic models. These reduced-order models are useful in scenario analysis for medical response planning, as well as in computationally intensive inverse problems.
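For the non-contagious case, the sketch below illustrates the flavor of Bayesian inversion described: inferring the attack size N and attack time t0 from a few days of daily case counts via a grid posterior, using an assumed log-normal incubation-period distribution and a Poisson observation model. The incubation parameters, flat priors, and synthetic case counts are illustrative assumptions, not the report's dose-dependent outbreak model.

```python
import numpy as np
from scipy import stats

# Synthetic "observed" daily case counts over the first 3.5 days (illustrative only).
obs_days = np.array([1.0, 2.0, 3.0, 3.5])       # end of each observation window (days)
obs_cases = np.array([0, 3, 9, 7])              # new symptomatic cases in each window

# Assumed incubation-period distribution (days since exposure), log-normal.
incubation = stats.lognorm(s=0.5, scale=4.0)    # median ~4 days

# Grid over the unknowns: attack size N and attack time t0 (days before t = 0).
N_grid = np.arange(50, 2001, 10)
t0_grid = np.linspace(-3.0, 0.0, 61)

log_post = np.full((N_grid.size, t0_grid.size), -np.inf)
window_edges = np.concatenate(([0.0], obs_days))

for i, N in enumerate(N_grid):
    for j, t0 in enumerate(t0_grid):
        # Expected cases per window: N * P(incubation ends in window | exposure at t0).
        cdf = incubation.cdf(window_edges - t0)
        lam = np.clip(N * np.diff(cdf), 1e-12, None)
        log_post[i, j] = np.sum(stats.poisson.logpmf(obs_cases, lam))

# Flat priors -> posterior proportional to likelihood; normalize on the grid.
post = np.exp(log_post - log_post.max())
post /= post.sum()

i_best, j_best = np.unravel_index(post.argmax(), post.shape)
print(f"MAP attack size  N  ~ {N_grid[i_best]}")
print(f"MAP attack time  t0 ~ {t0_grid[j_best]:.2f} days before observation start")
```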
Mortazavi, Seyed Alireza; Jafariazar, Zahra; Ghadjahani, Yasaman; Mahmoodi, Hoda; Mehtarpour, Farzaneh
2014-01-01
The purpose of this study was the preparation and evaluation of sustained release matrix type ocular mini-tablets of timolol maleate, as a potential formulation for the treatment of glaucoma. Following the initial studies on timolol maleate powder, it was formulated into ocular mini-tablets. The polymers investigated in this study included cellulose derivatives (HEC, CMC, EC) and Carbopol 971P. Mannitol was used as the solubilizing agent and magnesium stearate as the lubricant. Mini-tablets were prepared by thorough mixing of the ingredients, followed by direct compression. All the prepared formulations were evaluated in terms of physicochemical tests, including uniformity of weight, thickness, crushing strength, friability and in-vitro drug release. Four groups of formulations were prepared. The presence of different amounts of cellulose derivatives or Carbopol 971P, alone, was studied in group A formulations. In group B formulations, the effect of adding Carbopol 971P alongside different cellulose derivatives was investigated. Group C formulations were made by including mannitol as the solubilizing agent, alongside Carbopol 971P and a cellulose derivative. In group D formulations, mini-tablets were made using Carbopol 971P alongside two different cellulose derivatives. The selected formulation (C1) contained ethyl cellulose, Carbopol 971P, mannitol and magnesium stearate, and showed almost 100% drug release over 5 h. Based on kinetic studies, this formulation was found to best fit the zero-order model of drug release. However, the Higuchi and Hixson-Crowell models also showed a good fit. Hence, overall, formulation C1 was chosen as the best formulation. PMID:24734053
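The sketch below illustrates the kinetic comparison described for formulation C1: fitting zero-order, Higuchi, and Hixson-Crowell release models to cumulative dissolution data and comparing R-squared values. The release-time points and percentages are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative release data (% released vs. hours), not the study's data.
t = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])
q = np.array([11.0, 21.0, 30.0, 41.0, 60.0, 80.0, 99.0])

models = {
    "zero-order":     lambda t, k: k * t,
    "Higuchi":        lambda t, k: k * np.sqrt(t),
    # Hixson-Crowell: 100^(1/3) - (100 - Q)^(1/3) = k*t, solved for Q(t) in percent.
    "Hixson-Crowell": lambda t, k: 100.0 - (100.0 ** (1 / 3) - k * t) ** 3,
}

for name, f in models.items():
    popt, _ = curve_fit(f, t, q, p0=[1.0])
    pred = f(t, *popt)
    r2 = 1.0 - np.sum((q - pred) ** 2) / np.sum((q - q.mean()) ** 2)
    print(f"{name:15s} k = {popt[0]:6.3f}  R^2 = {r2:.4f}")
```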
Unified Framework for Deriving Simultaneous Equation Algorithms for Water Distribution Networks
The known formulations for steady state hydraulics within looped water distribution networks are re-derived in terms of linear and non-linear transformations of the original set of partly linear and partly non-linear equations that express conservation of mass and energy. All of ...
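Since the record above is truncated, the sketch below shows, under assumed data, what a simultaneous-equation solution of looped-network hydraulics looks like in practice: node continuity and pipe head-loss relations for a single-loop, one-reservoir network solved together with a Newton-type solver. The network layout, quadratic head-loss law, resistances, and demands are illustrative assumptions, not the paper's derivation.

```python
import numpy as np
from scipy.optimize import fsolve

# Single-loop network: reservoir R (fixed head 50 m) feeds junctions 1 and 2.
# Pipes: R->1, R->2, 1->2. Head loss modelled as h = r * Q * |Q| (quadratic law).
H_R = 50.0
r = {"R1": 4.0, "R2": 6.0, "12": 3.0}     # resistance coefficients (assumed)
demand = {1: 0.6, 2: 0.4}                 # nodal demands in m^3/s (assumed)

def residuals(z):
    q_r1, q_r2, q_12, h1, h2 = z
    return [
        q_r1 - q_12 - demand[1],                    # continuity at node 1
        q_r2 + q_12 - demand[2],                    # continuity at node 2
        H_R - h1 - r["R1"] * q_r1 * abs(q_r1),      # energy along pipe R->1
        H_R - h2 - r["R2"] * q_r2 * abs(q_r2),      # energy along pipe R->2
        h1 - h2 - r["12"] * q_12 * abs(q_12),       # energy along pipe 1->2
    ]

q_r1, q_r2, q_12, h1, h2 = fsolve(residuals, x0=[0.5, 0.5, 0.1, 45.0, 45.0])
print(f"Flows (m^3/s): R->1 = {q_r1:.3f}, R->2 = {q_r2:.3f}, 1->2 = {q_12:.3f}")
print(f"Heads (m)    : node 1 = {h1:.2f}, node 2 = {h2:.2f}")
```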
ERIC Educational Resources Information Center
O'Connell, Daniel C.; Kowal, Sabine; Ageneau, Carie
2005-01-01
A psycholinguistic hypothesis regarding the use of interjections in spoken utterances, originally formulated by Ameka (1992b, 1994) for the English language, but not confirmed in the German-language research of Kowal and O'Connell (2004 a & c), was tested: The local syntactic isolation of interjections is paralleled by their articulatory isolation…
76 FR 77541 - Proposed Information Collection Activity; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-13
... managers in formulating policies for the future direction of the Refugee Resettlement Program. Respondents... of origin, State of resettlement, and number of months since arrival. From the responses, the Office...-9). OMB No.: 0970-0033. Description: The Annual Survey of Refugees collects information on the...
Impression-Management in the Forced Compliance Paradigm.
ERIC Educational Resources Information Center
Saenz, Rogelio; Quigley-Fernandez, Barbara
In its original formulation, dissonance reduction was postulated as a mode for resolving behavior-attitude discrepancies. One mode of resolution has been demonstrated in the forced compliance paradigm, whereby a subject rectifies a counterattitudinal behavior with an actual belief, resulting in moderating beliefs. A forced compliance situation was…
White, Robin R; Capper, Judith L
2014-03-01
The objective of this study was to use a precision nutrition model to simulate the relationship between diet formulation frequency and dairy cattle performance across various climates. Agricultural Modeling and Training Systems (AMTS) CattlePro diet-balancing software (Cornell Research Foundation, Ithaca, NY) was used to compare 3 diet formulation frequencies (weekly, monthly, or seasonal) and 3 levels of climate variability (hot, cold, or variable). Predicted daily milk yield (MY), metabolizable energy (ME) balance, and dry matter intake (DMI) were recorded for each frequency-variability combination. Economic analysis was conducted to calculate the predicted revenue over feed and labor costs. Diet formulation frequency affected ME balance and MY but did not affect DMI. Climate variability affected ME balance and DMI but not MY. The interaction between climate variability and formulation frequency did not affect ME balance, MY, or DMI. Formulating diets more frequently increased MY, DMI, and ME balance. Economic analysis showed that formulating diets weekly rather than seasonally could improve returns over variable costs by $25,000 per year for a moderate-sized (300-cow) operation. To achieve this increase in returns, an entire feeding system margin of error of <1% was required. Formulating monthly, rather than seasonally, may be a more feasible alternative as this requires a margin of error of only 2.5% for the entire feeding system. Feeding systems with a low margin of error must be developed to better take advantage of the benefits of precision nutrition. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Finite Rotation Analysis of Highly Thin and Flexible Structures
NASA Technical Reports Server (NTRS)
Clarke, Greg V.; Lee, Keejoo; Lee, Sung W.; Broduer, Stephen J. (Technical Monitor)
2001-01-01
Deployable space structures such as sunshields and solar sails are extremely thin and highly flexible with limited bending rigidity. For analytical investigation of their responses during deployment and operation in space, these structures can be modeled as thin shells. The present work examines the applicability of the solid shell element formulation to modeling of deployable space structures. The solid shell element formulation that models a shell as a three-dimensional solid is convenient in that no rotational parameters are needed for the description of kinematics of deformation. However, shell elements may suffer from element locking as the thickness becomes smaller unless special care is taken. It is shown that, when combined with the assumed strain formulation, the solid shell element formulation results in finite element models that are free of locking even for extremely thin structures. Accordingly, they can be used for analysis of highly flexible space structures undergoing geometrically nonlinear finite rotations.
Pisklak, Dariusz Maciej; Zielińska-Pisklak, Monika Agnieszka; Szeleszczuk, Łukasz; Wawer, Iwona
2016-04-15
Solid-state NMR is an excellent and useful method for analyzing solid-state forms of drugs. In the (13)C CP/MAS NMR spectra of the solid dosage forms many of the signals originate from the excipients and should be distinguished from those of active pharmaceutical ingredient (API). In this work the most common pharmaceutical excipients used in the solid drug formulations: anhydrous α-lactose, α-lactose monohydrate, mannitol, sucrose, sorbitol, sodium starch glycolate type A and B, starch of different origin, microcrystalline cellulose, hypromellose, ethylcellulose, methylcellulose, hydroxyethylcellulose, sodium alginate, magnesium stearate, sodium laurilsulfate and Kollidon(®) were analyzed. Their (13)C CP/MAS NMR spectra were recorded and the signals were assigned, employing the results (R(2): 0.948-0.998) of GIPAW calculations and theoretical chemical shifts. The (13)C ssNMR spectra for some of the studied excipients have not been published before while for the other signals in the spectra they were not properly assigned or the assignments were not correct. The results summarize and complement the data on the (13)C ssNMR analysis of the most common pharmaceutical excipients and are essential for further NMR studies of API-excipient interactions in the pharmaceutical formulations. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
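The sketch below illustrates the quasi-one-dimensional, inviscid, unsteady formulation the abstract describes, here for a converging-diverging duct with the area-variation term treated as a source and a simple Lax-Friedrichs shock-capturing update. The duct shape, grid, initial state, and crude boundary treatment are illustrative assumptions, and none of LAPIN's engineering models (bleed, bypass, unstart/restart, component performance maps) are included.

```python
import numpy as np

gamma = 1.4

def flux_and_p(U, A):
    """Physical flux and pressure from conserved variables [rho*A, rho*u*A, E*A]."""
    rho = U[0] / A
    u = U[1] / U[0]
    E = U[2] / A
    p = (gamma - 1.0) * (E - 0.5 * rho * u * u)
    F = np.array([U[1], (rho * u * u + p) * A, (E + p) * u * A])
    return F, p

# Duct geometry (converging-diverging, illustrative) and grid.
nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
A = 1.0 + 0.5 * (x - 0.5) ** 2
dAdx = np.gradient(A, dx)

# Uniform initial state (illustrative): density, velocity, pressure.
rho0, u0, p0 = 1.2, 100.0, 101325.0
E0 = p0 / (gamma - 1.0) + 0.5 * rho0 * u0 ** 2
U = np.array([rho0 * A, rho0 * u0 * A, E0 * A])

cfl, nsteps = 0.4, 500
for _ in range(nsteps):
    F, p = flux_and_p(U, A)
    u_vel = U[1] / U[0]
    c = np.sqrt(gamma * p / (U[0] / A))
    dt = cfl * dx / np.max(np.abs(u_vel) + c)

    # Area-variation source term: [0, p dA/dx, 0].
    S = np.array([np.zeros(nx), p * dAdx, np.zeros(nx)])

    # Lax-Friedrichs update on interior points; copy-out boundaries (crude).
    Unew = U.copy()
    Unew[:, 1:-1] = (0.5 * (U[:, 2:] + U[:, :-2])
                     - dt / (2.0 * dx) * (F[:, 2:] - F[:, :-2])
                     + dt * S[:, 1:-1])
    Unew[:, 0] = Unew[:, 1]
    Unew[:, -1] = Unew[:, -2]
    U = Unew

rho = U[0] / A
print("Final density range: %.3f - %.3f kg/m^3" % (rho.min(), rho.max()))
```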
Echo state networks with filter neurons and a delay&sum readout.
Holzmann, Georg; Hauser, Helmut
2010-03-01
Echo state networks (ESNs) are a novel approach to recurrent neural network training with the advantage of a very simple and linear learning algorithm. It has been demonstrated that ESNs outperform other methods on a number of benchmark tasks. Although the approach is appealing, there are still some inherent limitations in the original formulation. Here we suggest two enhancements of this network model. First, the previously proposed idea of filters in neurons is extended to arbitrary infinite impulse response (IIR) filter neurons. This enables such networks to learn multiple attractors and signals at different timescales, which is especially important for modeling real-world time series. Second, a delay&sum readout is introduced, which adds trainable delays in the synaptic connections of output neurons and therefore vastly improves the memory capacity of echo state networks. It is shown in commonly used benchmark tasks and real-world examples, that this new structure is able to significantly outperform standard ESNs and other state-of-the-art models for nonlinear dynamical system modeling. Copyright 2009 Elsevier Ltd. All rights reserved.
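The sketch below shows a standard echo state network with a ridge-regression readout, i.e. the baseline that the paper extends with IIR filter neurons and a delay&sum readout (neither extension is reproduced here). Reservoir size, spectral radius, and the toy prediction task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy task: one-step-ahead prediction of a noisy sine.
T = 2000
u = np.sin(0.2 * np.arange(T + 1)) + 0.01 * rng.standard_normal(T + 1)
inputs, targets = u[:-1], u[1:]

# Reservoir: sparse random recurrent weights rescaled to a given spectral radius.
n_res, spectral_radius = 200, 0.9
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.standard_normal((n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

# Collect reservoir states (plain tanh units, no filters or delays).
states = np.zeros((T, n_res))
xs = np.zeros(n_res)
for t in range(T):
    xs = np.tanh(W_in * inputs[t] + W @ xs)
    states[t] = xs

# Ridge-regression readout, discarding an initial washout period.
washout, ridge = 100, 1e-6
S, y = states[washout:], targets[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)

pred = states @ W_out
mse = np.mean((pred[washout:] - targets[washout:]) ** 2)
print(f"one-step-ahead MSE after training: {mse:.2e}")
```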
Schick, Robert S; Kraus, Scott D; Rolland, Rosalind M; Knowlton, Amy R; Hamilton, Philip K; Pettis, Heather M; Thomas, Len; Harwood, John; Clark, James S
2016-01-01
Right whales are vulnerable to many sources of anthropogenic disturbance including ship strikes, entanglement with fishing gear, and anthropogenic noise. The effect of these factors on individual health is unclear. A statistical model using photographic evidence of health was recently built to infer the true or hidden health of individual right whales. However, two important prior assumptions about the role of missing data and unexplained variance on the estimates were not previously assessed. Here we tested these factors by varying prior assumptions and model formulation. We found sensitivity to each assumption and used the output to make guidelines on future model formulation.
Sakai, Kenichi; Obata, Kouki; Yoshikawa, Mayumi; Takano, Ryusuke; Shibata, Masaki; Maeda, Hiroyuki; Mizutani, Akihiko; Terada, Katsuhide
2012-10-01
To design a high drug loading formulation of self-microemulsifying/micelle system. A poorly-soluble model drug (CH5137291), 8 hydrophilic surfactants (HS), 10 lipophilic surfactants (LS), 5 oils, and PEG400 were used. A high loading formulation was designed by a following stepwise approach using a high-throughput formulation screening (HTFS) system: (1) an oil/solvent was selected by solubility of the drug; (2) a suitable HS for highly loading was selected by the screenings of emulsion/micelle size and phase stability in binary systems (HS, oil/solvent) with increasing loading levels; (3) a LS that formed a broad SMEDDS/micelle area on a phase diagram containing the HS and oil/solvent was selected by the same screenings; (4) an optimized formulation was selected by evaluating the loading capacity of the crystalline drug. Aqueous solubility behavior and oral absorption (Beagle dog) of the optimized formulation were compared with conventional formulations (jet-milled, PEG400). As an optimized formulation, d-α-tocopheryl polyoxyethylene 1000 succinic ester: PEG400 = 8:2 was selected, and achieved the target loading level (200 mg/mL). The formulation formed fine emulsion/micelle (49.1 nm), and generated and maintained a supersaturated state at a higher level compared with the conventional formulations. In the oral absorption test, the area under the plasma concentration-time curve of the optimized formulation was 16.5-fold higher than that of the jet-milled formulation. The high loading formulation designed by the stepwise approach using the HTFS system improved the oral absorption of the poorly-soluble model drug.
Innovation strategies for generic drug companies: moving into supergenerics.
Ross, Malcolm S F
2010-04-01
Pharmaceutical companies that market generic products generally are not regarded as innovators, but rather as companies that produce copies of originator products to be launched at patent expiration. However, many generics companies have developed excellent scientific innovative skills in an effort to circumvent the defense patents of originator companies. More patents per product, in terms of both drug substances (process patents and polymorph patents) and formulations, are issued to generics companies than to companies that are traditionally considered to be 'innovators'. This quantity of issued patents highlights the technical knowledge and skill sets that are available in generics companies. In order to adopt a completely innovative model (ie, the development of NCEs), a generics company would require a completely new set of skills in several fields, including a sufficient knowledge base, project and risk management experience, and capability for clinical data evaluation. However, with relatively little investment, generics companies should be able to progress into the so-called 'supergeneric' drug space - an area of innovation that reflects the existing competencies of both innovative and generics companies.
Adaptive distance metric learning for diffusion tensor image segmentation.
Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W
2014-01-01
High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.
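The sketch below illustrates the graph-based semi-supervised ingredient of the approach, using scikit-learn's LabelSpreading on a k-nearest-neighbour graph over synthetic feature vectors that stand in for the combined geometry/orientation distances. The features, graph parameters, and label fraction are illustrative assumptions, and the paper's metric-learning optimization is not reproduced.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(4)

# Synthetic "voxel" feature vectors standing in for distances derived from
# diffusion tensors (illustrative only).
X, y_true = make_blobs(n_samples=600, centers=3, cluster_std=2.0, random_state=4)

# Semi-supervised setting: only ~5% of voxels carry a label, the rest are -1.
y = y_true.copy()
unlabeled = rng.random(y.size) > 0.05
y[unlabeled] = -1

# Propagate labels over a k-NN graph built in the feature space.
model = LabelSpreading(kernel="knn", n_neighbors=10, alpha=0.2, max_iter=100)
model.fit(X, y)

acc = np.mean(model.transduction_[unlabeled] == y_true[unlabeled])
print(f"accuracy on initially unlabeled voxels: {acc:.3f}")
```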
Lossless Compression of Classification-Map Data
NASA Technical Reports Server (NTRS)
Hua, Xie; Klimesh, Matthew
2009-01-01
A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than do general-purpose image-data-compression algorithms. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing image(s) of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data (for example, a type of vegetation, a mineral, or a body of water) at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
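The sketch below illustrates why prediction pays off on classification maps: it compares the zeroth-order empirical entropy of raw class indices with that of residuals from a simple left-neighbour predictor. This is only a stand-in for the algorithm's context modelling and entropy coding, which are not reproduced, and the synthetic map is an illustrative assumption.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(5)

def entropy_bits(symbols):
    """Zeroth-order empirical entropy in bits per symbol."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic classification map: large blocky regions plus a little salt noise,
# mimicking the piecewise-constant nature of class maps (illustrative only).
h, w, n_classes = 128, 128, 6
cmap = np.zeros((h, w), dtype=int)
for _ in range(40):                                   # paint random rectangles
    r0, c0 = rng.integers(0, h - 16), rng.integers(0, w - 16)
    cmap[r0:r0 + rng.integers(8, 32), c0:c0 + rng.integers(8, 32)] = rng.integers(n_classes)
cmap[rng.random((h, w)) < 0.01] = rng.integers(n_classes)   # isolated misclassifications

# Simple predictor: "same class as the left neighbour"; residual is 0 when correct,
# otherwise it stores the class index + 1 (still losslessly decodable).
left = np.roll(cmap, 1, axis=1)
left[:, 0] = 0
residual = (cmap != left).astype(int) * (cmap + 1)

print(f"raw map entropy : {entropy_bits(cmap.ravel()):.3f} bits/pixel")
print(f"residual entropy: {entropy_bits(residual.ravel()):.3f} bits/pixel")
```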
Improving long time behavior of Poisson bracket mapping equation: A non-Hamiltonian approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hyun Woo; Rhee, Young Min, E-mail: ymrhee@postech.ac.kr
2014-05-14
Understanding nonadiabatic dynamics in complex systems is a challenging subject. A series of semiclassical approaches have been proposed to tackle the problem in various settings. The Poisson bracket mapping equation (PBME) utilizes a partial Wigner transform and a mapping representation for its formulation, and has been developed to describe nonadiabatic processes in an efficient manner. Operationally, it is expressed as a set of Hamilton's equations of motion, similar to more conventional classical molecular dynamics. However, this original Hamiltonian PBME sometimes suffers from a large deviation in accuracy, especially in the long time limit. Here, we propose a non-Hamiltonian variant of PBME to improve its behavior especially in that limit. As a benchmark, we simulate spin-boson and photosynthetic model systems and find that it consistently outperforms the original PBME and its Ehrenfest style variant. We explain the source of this improvement by decomposing the components of the mapping Hamiltonian and by assessing the energy flow between the system and the bath. We discuss strengths and weaknesses of our scheme with a viewpoint of offering future prospects.
Modeling of Rolling Element Bearing Mechanics: Computer Program Updates
NASA Technical Reports Server (NTRS)
Ryan, S. G.
1997-01-01
The Rolling Element Bearing Analysis System (REBANS) extends the capability available with traditional quasi-static bearing analysis programs by including the effects of bearing race and support flexibility. This tool was developed under contract for NASA-MSFC. The initial version delivered at the close of the contract contained several errors and exhibited numerous convergence difficulties. The program has been modified in-house at MSFC to correct the errors and greatly improve the convergence. The modifications consist of significant changes in the problem formulation and nonlinear convergence procedures. The original approach utilized sequential convergence for nested loops to achieve final convergence. This approach proved to be seriously deficient in robustness. Convergence was more the exception than the rule. The approach was changed to iterate all variables simultaneously. This approach has the advantage of using knowledge of the effect of each variable on each other variable (via the system Jacobian) when determining the incremental changes. This method has proved to be quite robust in its convergence. This technical memorandum documents the changes required for the original Theoretical Manual and User's Manual due to the new approach.
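The sketch below contrasts the two convergence strategies on a small coupled nonlinear system: nested (sequential) fixed-point sweeps versus a simultaneous Newton-Raphson update that uses the full Jacobian, which is the approach the memorandum adopts. The example equations are illustrative assumptions, not the bearing equilibrium equations.

```python
import numpy as np

# Coupled nonlinear system (illustrative stand-in for bearing equilibrium equations):
#   f1(x, y) = x^2 + y - 3 = 0
#   f2(x, y) = x + y^2 - 5 = 0
def F(v):
    x, y = v
    return np.array([x**2 + y - 3.0, x + y**2 - 5.0])

def J(v):
    x, y = v
    return np.array([[2.0 * x, 1.0],
                     [1.0, 2.0 * y]])

# Simultaneous Newton-Raphson: update all unknowns at once using the Jacobian.
v = np.array([1.0, 1.0])
for it in range(20):
    dv = np.linalg.solve(J(v), -F(v))
    v += dv
    if np.linalg.norm(dv) < 1e-12:
        break
print(f"simultaneous Newton: {it + 1} iterations, solution {v}")

# Nested/sequential sweeps: solve f1 for x holding y, then f2 for y holding x.
x, y = 1.0, 1.0
for it in range(200):
    x_new = np.sqrt(max(3.0 - y, 0.0))        # from f1 with y frozen
    y_new = np.sqrt(max(5.0 - x_new, 0.0))    # from f2 with x frozen
    if abs(x_new - x) + abs(y_new - y) < 1e-12:
        break
    x, y = x_new, y_new
print(f"sequential sweeps  : {it + 1} iterations, solution ({x:.6f}, {y:.6f})")
```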
Mustroph, Heinz
2016-09-05
The concept of a potential-energy surface (PES) is central to our understanding of spectroscopy, photochemistry, and chemical kinetics. However, the terminology used in connection with the basic approximations is variously, and somewhat confusingly, represented with such phrases as "adiabatic", "Born-Oppenheimer", or "Born-Oppenheimer adiabatic" approximation. Concerning the closely relevant and important Franck-Condon principle (FCP), the IUPAC definition differentiates between a classical and quantum mechanical formulation. Consequently, in many publications we find terms such as "Franck-Condon (excited) state", or a vertical transition to the "Franck-Condon point" with the "Franck-Condon geometry" that relaxes to the excited-state equilibrium geometry. The Born-Oppenheimer approximation and the "classical" model of the Franck-Condon principle are typical examples of misused terms and lax interpretations of the original theories. In this essay, we revisit the original publications of pioneers of the PES concept and the FCP to help stimulate a lively discussion and clearer thinking around these important concepts. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A bulk viscosity approach for shock capturing on unstructured grids
NASA Astrophysics Data System (ADS)
Shoeybi, Mohammad; Larsson, Nils Johan; Ham, Frank; Moin, Parviz
2008-11-01
The bulk viscosity approach for shock capturing (Cook and Cabot, JCP, 2005) augments the bulk part of the viscous stress tensor. The intention is to capture shock waves without dissipating turbulent structures. The present work extends and modifies this method for unstructured grids. We propose a method that properly scales the bulk viscosity with the grid spacing normal to the shock for unstructured grids, for which the shock is not necessarily aligned with the grid. The magnitude of the strain rate tensor used in the original formulation is replaced with the dilatation, which appears to be more appropriate in the vortical turbulent flow regions (Mani et al., 2008). The original form of the model is found to have an impact on dilatational motions away from the shock wave, which is eliminated by a proposed localization of the bulk viscosity. Finally, to allow for grid adaptation around shock waves, an explicit/implicit time advancement scheme has been developed that adaptively identifies the stiff regions. The full method has been verified with several test cases, including 2D shock-vorticity/entropy interaction, homogeneous isotropic turbulence, and turbulent flow over a cylinder.
On large time step TVD scheme for hyperbolic conservation laws and its efficiency evaluation
NASA Astrophysics Data System (ADS)
Qian, ZhanSen; Lee, Chun-Hian
2012-08-01
A large time step (LTS) TVD scheme originally proposed by Harten is modified and further developed in the present paper and applied to the Euler equations in multidimensional problems. By first revealing the drawbacks of Harten's original LTS TVD scheme and explaining the origin of the spurious oscillations, a modified formulation of its characteristic transformation is proposed and a high resolution, strongly robust LTS TVD scheme is formulated. The modified scheme is proven to be capable of taking a larger number of time steps than the original one. Following the modified strategy, LTS TVD schemes for Yee's upwind TVD scheme and the Yee-Roe-Davis symmetric TVD scheme are constructed. The family of LTS schemes is then extended to multiple dimensions by a time splitting procedure, and the associated boundary condition treatment suitable for the LTS scheme is also imposed. Numerical experiments on Sod's shock tube problem and inviscid flows over the NACA0012 airfoil and the ONERA M6 wing are performed to validate the developed schemes. Computational efficiencies for the respective schemes under different CFL numbers are also evaluated and compared. The results reveal that the improvement is sizable as compared to the respective single time step schemes, especially for CFL numbers ranging from 1.0 to 4.0.
Beirowski, Jakob; Inghelbrecht, Sabine; Arien, Albertina; Gieseler, Henning
2012-01-01
On the basis of a previously developed formulation and process guideline for lyophilized, highly concentrated drug nanosuspensions for parenteral use, it was the purpose of this study to demonstrate that the original nanoparticle size distribution can be preserved over a minimum period of 3 months, even if aggressive primary drying conditions are used. Critical factors were evaluated that were originally believed to affect storage stability of freeze-dried drug nanoparticles. It was found that the nature and concentration of the steric stabilizer, such as Poloxamer 338 and Cremophor EL, are the most important factors for long-term stability of such formulations, independent of the used drug compound. The rational choice of an adequate steric stabilizer, namely Poloxamer 338, in combination with various lyoprotectants seems crucial to prevent physical instabilities of the lyophilized drug nanoparticles during short-term stability experiments at ambient and accelerated conditions. A 200 mg/mL concentration of nanoparticles could successfully be stabilized over the investigated time interval. In the course of the present experiments, polyvinylpyrrolidone, type K15 was found superior to trehalose or sucrose in preserving the original particle size distribution, presumably based on its surface-active properties. Lastly, it was demonstrated that lower water contents are generally beneficial to stabilize such systems. Copyright © 2011 Wiley-Liss, Inc.
Formulation of Efficient Finite Element Prediction Models.
1980-01-01
vorticity-divergence FEM formulation. This paper will compare these FEM formulations by considering the geostrophic adjustment process with the linearized...by Fourier transforming the terms that are independent of t in (2.12)-(2.14) or (2.19)-(2.21). However, in this paper the final state will be...filtering in a baroclinic primitive equation model. ...The objective of this paper is to determine the response of various finite
Effects of Food and Pharmaceutical Formulation on Desmopressin Pharmacokinetics in Children.
Michelet, Robin; Dossche, Lien; De Bruyne, Pauline; Colin, Pieter; Boussery, Koen; Vande Walle, Johan; Van Bocxlaer, Jan; Vermeulen, An
2016-09-01
Desmopressin is used for treatment of nocturnal enuresis in children. In this study, we investigated the pharmacokinetics of two formulations-a tablet and a lyophilisate-in both fasted and fed children. Previously published data from two studies (one in 22 children aged 6-16 years, and the other in 25 children aged 6-13 years) were analyzed using population pharmacokinetic modeling. A one-compartment model with first-order absorption was fitted to the data. Covariates were selected using a forward selection procedure. The final model was evaluated, and sensitivity analysis was performed to improve future sampling designs. Simulations were subsequently performed to further explore the relative bioavailability of both formulations and the food effect. The final model described the plasma desmopressin concentrations adequately. The formulation and the fed state were included as covariates on the relative bioavailability. The lyophilisate was, on average, 32.1 % more available than the tablet, and fasted children exhibited an average increase in the relative bioavailability of 101 % in comparison with fed children. Body weight was included as a covariate on distribution volume, using a power function with an exponent of 0.402. Simulations suggested that both the formulation and the food effect were clinically relevant. Bioequivalence data on two formulations of the same drug in adults cannot be readily extrapolated to children. This was the first study in children suggesting that the two desmopressin formulations are not bioequivalent in children at the currently approved dose levels. Furthermore, the effect of food intake was found to be clinically relevant. Sampling times for a future study were suggested. This sampling design should result in more informative data and consequently generate a more robust model.
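The sketch below illustrates the structural model described: a one-compartment model with first-order absorption, relative bioavailability modified by formulation (lyophilisate vs. tablet, +32.1%) and food (fasted vs. fed, +101%), and body weight on the distribution volume via a power function with exponent 0.402. All remaining parameter values, units, and the dose are illustrative assumptions, not the study's estimates.

```python
import numpy as np

def conc(t, dose_ug, wt_kg, lyophilisate, fed,
         ka=1.5, cl=30.0, v_ref=200.0, f_ref=0.005, wt_exp=0.402):
    """Plasma concentration (pg/mL) for a one-compartment oral absorption model.

    Relative bioavailability: x1.321 for lyophilisate vs. tablet and x2.01 for
    fasted vs. fed (values from the abstract); ka, CL, V_ref and F_ref are
    illustrative assumptions.
    """
    f = f_ref
    if lyophilisate:
        f *= 1.321
    if not fed:
        f *= 2.01
    v = v_ref * (wt_kg / 70.0) ** wt_exp       # weight on V via a power function
    ke = cl / v
    # Standard first-order absorption solution (assumes ka != ke).
    c_ug_per_l = (f * dose_ug * ka / (v * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))
    return c_ug_per_l * 1000.0                 # ug/L -> pg/mL

t = np.linspace(0.0, 8.0, 9)                   # hours
for label, lyo, fed in [("tablet, fed", False, True), ("lyophilisate, fasted", True, False)]:
    print(label, np.round(conc(t, dose_ug=240.0, wt_kg=40.0, lyophilisate=lyo, fed=fed), 2))
```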
Meagher, Alison K.; Forrest, Alan; Dalhoff, Axel; Stass, Heino; Schentag, Jerome J.
2004-01-01
The pharmacokinetics of an extended-release (XR) formulation of ciprofloxacin has been compared to that of the immediate-release (IR) product in healthy volunteers. The only significant difference in pharmacokinetic parameters between the two formulations was seen in the rate constant of absorption, which was approximately 50% greater with the IR formulation. The geometric mean plasma ciprofloxacin concentrations were applied to an in vitro pharmacokinetic-pharmacodynamic model exposing three different clinical strains of Escherichia coli (MICs, 0.03, 0.5, and 2.0 mg/liter) to 24 h of simulated concentrations in plasma. A novel mathematical model was derived to describe the time course of bacterial CFU, including capacity-limited replication and first-order rate of bacterial clearance, and to model the effects of ciprofloxacin concentrations on these processes. A “mixture model” was employed which allowed as many as three bacterial subpopulations to describe the total bacterial load at any moment. Comparing the two formulations at equivalent daily doses, the rates and extents of bacterial killing were similar with the IR and XR formulations at MICs of 0.03 and 2.0 mg/liter. At an MIC of 0.5 mg/liter, however, the 1,000-mg/day XR formulation showed a moderate advantage in antibacterial effect: the area under the CFU-time curve was 45% higher for the IR regimen; the nadir log CFU and 24-h log CFU values for the IR regimen were 3.75 and 2.49, respectively; and those for XR were 4.54 and 3.13, respectively. The mathematical model explained the differences in bacterial killing rate for two regimens with identical AUC/MIC ratios. PMID:15155200
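The sketch below illustrates the type of model described: capacity-limited bacterial replication, a first-order natural clearance of bacteria, and a concentration-driven kill term, with ciprofloxacin concentrations generated from a one-compartment profile. The parameter values, the Emax kill function, and the single homogeneous population are illustrative assumptions; the paper's three-subpopulation mixture model is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative one-compartment ciprofloxacin profile (mg/L) for a 500 mg q12h regimen.
ka, ke, v = 2.0, 0.17, 150.0
def cip_conc(t):
    doses = np.arange(0.0, t + 1e-9, 12.0)
    return sum((500.0 / v) * ka / (ka - ke) * (np.exp(-ke * (t - td)) - np.exp(-ka * (t - td)))
               for td in doses)

# Bacterial dynamics: capacity-limited growth, first-order clearance, Emax kill.
kg, kd, nmax = 1.0, 0.05, 1e9          # growth (1/h), clearance (1/h), capacity (CFU/mL)
kmax, ec50 = 3.0, 0.5                  # max kill rate (1/h), potency (mg/L, near the MIC)

def dndt(t, n):
    c = cip_conc(t)
    kill = kmax * c / (ec50 + c)
    return [kg * n[0] * (1.0 - n[0] / nmax) - kd * n[0] - kill * n[0]]

sol = solve_ivp(dndt, (0.0, 24.0), [1e6], t_eval=np.arange(0, 25, 4), rtol=1e-8)
for t, n in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.1f} h   log10 CFU/mL = {np.log10(max(n, 1.0)):.2f}")
```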
Boersen, Nathan; Carvajal, M Teresa; Morris, Kenneth R; Peck, Garnet E; Pinal, Rodolfo
2015-01-01
While previous research has demonstrated that roller compaction operating parameters strongly influence the properties of the final product, a greater emphasis might be placed on the raw material attributes of the formulation. There were two main objectives to this study. The first was to assess the effects of different process variables on the properties of the obtained ribbons and of the downstream granules produced from the roller compacted ribbons. The second was to establish whether models obtained with formulations of one active pharmaceutical ingredient (API) could predict the properties of similar formulations, in terms of the excipients used, but with a different API. Tolmetin and acetaminophen, chosen for their different compaction properties, were roller compacted on a Fitzpatrick roller compactor using the same formulation. Models were created using tolmetin and tested using acetaminophen. The physical properties of the blends, ribbons, granules and tablets were characterized. Multivariate analysis using partial least squares was used to analyze all data. Multivariate models showed that the operating parameters and raw material attributes were essential in the prediction of ribbon porosity and post-milled particle size. The post-compaction ribbon and granule attributes also contributed significantly to the prediction of tablet tensile strength. Models derived using tolmetin could reasonably predict the ribbon porosity of a second API. After further processing, the post-milled ribbon and granule properties, rather than the physical attributes of the formulation, were needed to predict downstream tablet properties. An understanding of the percolation threshold of the formulation significantly improved the predictive ability of the models.
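The sketch below illustrates the chemometric step described: a partial least squares model relating roller-compaction operating parameters and raw-material attributes to ribbon porosity, fitted on one formulation and applied to another. The synthetic data generation, variable choices, and the porosity relationship are illustrative assumptions, not the tolmetin/acetaminophen measurements.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)

def make_runs(api_offset, n=40):
    """Synthetic roller-compaction runs: [roll force, roll speed, gap, API true density,
    blend compressibility] -> ribbon porosity (illustrative relationship)."""
    X = np.column_stack([
        rng.uniform(4, 12, n),        # roll force (kN/cm)
        rng.uniform(2, 10, n),        # roll speed (rpm)
        rng.uniform(1.5, 3.5, n),     # gap (mm)
        np.full(n, 1.3 + api_offset), # API true density (g/cm^3), differs per API
        rng.uniform(15, 30, n),       # blend compressibility (%)
    ])
    porosity = (45.0 - 1.8 * X[:, 0] + 0.4 * X[:, 1] + 2.0 * X[:, 2]
                - 5.0 * X[:, 3] + 0.3 * X[:, 4] + rng.normal(0, 0.8, n))
    return X, porosity

X_tol, y_tol = make_runs(api_offset=0.0)     # "training" formulation (API 1)
X_apap, y_apap = make_runs(api_offset=0.1)   # "test" formulation (API 2)

pls = PLSRegression(n_components=3)
pls.fit(X_tol, y_tol)

print(f"R^2 on training API: {r2_score(y_tol, pls.predict(X_tol).ravel()):.3f}")
print(f"R^2 on second API  : {r2_score(y_apap, pls.predict(X_apap).ravel()):.3f}")
```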
Influence of humidity on the phase behavior of API/polymer formulations.
Prudic, Anke; Ji, Yuanhui; Luebbert, Christian; Sadowski, Gabriele
2015-08-01
Amorphous formulations of APIs in polymers tend to absorb water from the atmosphere. This absorption of water can induce API recrystallization, leading to reduced long-term stability during storage. In this work, the phase behavior of different formulations was investigated as a function of relative humidity. Indomethacin and naproxen were chosen as model APIs and poly(vinyl pyrrolidone) (PVP) and poly(vinyl pyrrolidone-co-vinyl acetate) (PVPVA64) as excipients. The formulations were prepared by spray drying. The water sorption in pure polymers and in formulations was measured at 25°C and at different values of relative humidity (RH=25%, 50% and 75%). Most water was absorbed in PVP-containing systems, and water sorption was decreasing with increasing API content. These trends could also be predicted in good agreement with the experimental data using the thermodynamic model PC-SAFT. Furthermore, the effect of absorbed water on API solubility in the polymer and on the glass-transition temperature of the formulations was predicted with PC-SAFT and the Gordon-Taylor equation, respectively. The absorbed water was found to significantly decrease the API solubility in the polymer as well as the glass-transition temperature of the formulation. Based on a quantitative modeling of the API/polymer phase diagrams as a function of relative humidity, appropriate API/polymer compositions can now be selected to ensure long-term stable amorphous formulations at given storage conditions. Copyright © 2015 Elsevier B.V. All rights reserved.
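The sketch below illustrates the Gordon-Taylor part of the analysis: estimating the glass-transition temperature of an API/polymer/water mixture and showing how absorbed water depresses Tg. The pure-component Tg values, Gordon-Taylor constants, and compositions are illustrative assumptions, not the paper's measured or PC-SAFT-derived values.

```python
import numpy as np

def gordon_taylor(weights, tgs, ks):
    """Multicomponent Gordon-Taylor estimate of the mixture glass transition (K).

    weights: mass fractions (summing to 1), tgs: pure-component Tg values (K),
    ks: Gordon-Taylor constants relative to the first component (ks[0] = 1).
    """
    w, tg, k = map(np.asarray, (weights, tgs, ks))
    return float(np.sum(w * k * tg) / np.sum(w * k))

# Illustrative pure-component values (assumed): API, polymer, water.
tg = [315.0, 441.0, 136.0]       # K
k = [1.0, 0.45, 0.35]            # assumed Gordon-Taylor constants

for water_frac in (0.0, 0.02, 0.05, 0.10):
    # 30/70 API/polymer formulation taking up the given mass fraction of water.
    w_api = 0.30 * (1.0 - water_frac)
    w_pol = 0.70 * (1.0 - water_frac)
    tg_mix = gordon_taylor([w_api, w_pol, water_frac], tg, k)
    print(f"water = {water_frac:4.0%}  ->  Tg(mix) ~ {tg_mix - 273.15:5.1f} degC")
```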
NASA Astrophysics Data System (ADS)
Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.
2018-04-01
Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select an appropriate discretization for a given problem size.
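The sketch below conveys the variational-inequality idea on a toy discrete system: an unconstrained solve of the discrete equations can overshoot the physically admissible bounds, while recasting the same symmetric positive-definite problem as a bound-constrained minimization (one way of writing the box-constrained VI) enforces them. The toy system and bounds are illustrative assumptions, not the paper's RT0/VMS discretizations or solvers.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Toy symmetric positive-definite "discrete diffusion" system K u = f whose
# unconstrained solution overshoots the admissible range [0, 1] (constructed
# purely for illustration).
n = 20
M = rng.standard_normal((n, n))
K = M @ M.T + n * np.eye(n)
f = K @ (0.5 + 0.8 * np.sin(np.linspace(0, 3 * np.pi, n)))   # target exceeding [0, 1]

u_unconstrained = np.linalg.solve(K, f)

# Variational-inequality / bound-constrained form:
#   minimize 1/2 u^T K u - f^T u   subject to   0 <= u <= 1,
# which for SPD K is equivalent to the VI with box constraints.
def energy(u):
    return 0.5 * u @ K @ u - f @ u

def grad(u):
    return K @ u - f

res = minimize(energy, x0=np.clip(u_unconstrained, 0.0, 1.0), jac=grad,
               method="L-BFGS-B", bounds=[(0.0, 1.0)] * n)

print(f"unconstrained range: [{u_unconstrained.min():.3f}, {u_unconstrained.max():.3f}]")
print(f"VI (bounded) range : [{res.x.min():.3f}, {res.x.max():.3f}]")
```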
Niche construction game cancer cells play
NASA Astrophysics Data System (ADS)
Bergman, Aviv; Gligorijevic, Bojana
2015-10-01
The niche construction concept was originally defined in evolutionary biology as the continuous interplay between natural selection via environmental conditions and the modification of these conditions by the organism itself. Processes unraveling during cancer metastasis include the construction of niches, which cancer cells use towards more efficient survival, transport into new environments and preparation of the remote sites for their arrival. Many elegant experiments have recently illustrated, for example, premetastatic niche construction, but there is practically no mathematical modeling that applies the niche construction framework. To create models useful for understanding the role of niche construction in cancer progression, we argue that (a) genetic, (b) phenotypic and (c) ecological levels are to be included. While the model proposed here is phenomenological in its current form, it can be converted into a predictive outcome model via experimental measurement of the model parameters. Here we give an overview of an experimentally formulated problem in cancer metastasis and propose how the niche construction framework can be utilized and broadened to model it. Other life science disciplines, such as host-parasite coevolution, may also benefit from adaptation of the niche construction framework, to satisfy the growing need for theoretical consideration of data collected by experimental biology.
Long, Zhili; Wang, Rui; Fang, Jiwen; Dai, Xufei; Li, Zuohua
2017-07-01
Piezoelectric actuators invariably exhibit hysteresis nonlinearities that tend to become significant under the open-loop condition and could cause oscillations and errors in nanometer-positioning tasks. Chaotic map modified particle swarm optimization (MPSO) is proposed and implemented to identify the Prandtl-Ishlinskii model for piezoelectric actuators. Hysteresis compensation is attained through application of an inverse Prandtl-Ishlinskii model, in which the parameters are formulated based on the original model with chaotic map MPSO. To strengthen the diversity and improve the searching ergodicity of the swarm, an initial method of adaptive inertia weight based on a chaotic map is proposed. To compare and prove that the swarm's convergence occurs before stochastic initialization and to attain an optimal particle swarm optimization algorithm, the parameters of a proportional-integral-derivative controller are searched using self-tuning, and the simulated results are used to verify the search effectiveness of chaotic map MPSO. The results show that chaotic map MPSO is superior to its competitors for identifying the Prandtl-Ishlinskii model and that the inverse Prandtl-Ishlinskii model can provide hysteresis compensation under different conditions in a simple and effective manner.
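The sketch below illustrates the forward Prandtl-Ishlinskii hysteresis model, a weighted superposition of play (backlash) operators evaluated along an input voltage trajectory. The thresholds, weights, and input sweep are illustrative assumptions; in the paper these parameters would be identified from measured actuator data by chaotic map MPSO, which is not reproduced here.

```python
import numpy as np

def play_operator(u, r, y0=0.0):
    """Play (backlash) operator with threshold r evaluated along the input sequence u."""
    y = np.empty_like(u)
    y_prev = y0
    for k, uk in enumerate(u):
        y_prev = min(max(y_prev, uk - r), uk + r)
        y[k] = y_prev
    return y

def prandtl_ishlinskii(u, thresholds, weights):
    """PI model output: weighted sum of play operators with different thresholds."""
    return sum(w * play_operator(u, r) for w, r in zip(weights, thresholds))

# Illustrative parameters (in practice identified from measured actuator data).
thresholds = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
weights = np.array([0.8, 0.15, 0.12, 0.10, 0.08])

# Triangular input voltage sweep (0 -> 5 -> 0 V, twice) to expose the hysteresis loop.
t = np.linspace(0.0, 2.0, 400)
u = 5.0 * (1.0 - np.abs(2.0 * (t % 1.0) - 1.0))

y = prandtl_ishlinskii(u, thresholds, weights)
print(f"displacement range (arb. units): {y.min():.3f} - {y.max():.3f}")
```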