DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhaoyuan Liu; Kord Smith; Benoit Forget
2016-05-01
A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied "out-scatter" transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite hydrogen medium. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
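The "out-scatter" correction critiqued above has a simple algebraic core. As a hedged sketch (the standard textbook correction, not the paper's new method), the transport-corrected cross section and the diffusion coefficient derived from it are:

```python
def outscatter_transport_xs(sigma_t, sigma_s, mu_bar):
    """Out-scatter transport correction: Sigma_tr = Sigma_t - mu_bar * Sigma_s.

    sigma_t: total macroscopic cross section (1/cm)
    sigma_s: scattering macroscopic cross section (1/cm)
    mu_bar:  average scattering cosine (about 2/3 for hydrogen in the lab frame)
    """
    return sigma_t - mu_bar * sigma_s

def diffusion_coefficient(sigma_tr):
    """Diffusion coefficient from the transport cross section: D = 1/(3 * Sigma_tr)."""
    return 1.0 / (3.0 * sigma_tr)

# Illustrative values only, not benchmark data:
sigma_tr = outscatter_transport_xs(sigma_t=1.0, sigma_s=0.9, mu_bar=2.0 / 3.0)  # ~0.4
D = diffusion_coefficient(sigma_tr)
```

The paper's point is that this correction is inaccurate precisely where anisotropic scattering matters; the sketch only fixes the baseline being compared against.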
A new multigroup method for cross-sections that vary rapidly in energy
Haut, Terry Scot; Ahrens, Cory D.; Jonko, Alexandra; ...
2016-11-04
Here, we present a numerical method for solving the time-independent thermal radiative transfer (TRT) equation or the neutron transport (NT) equation when the opacity (cross-section) varies rapidly in frequency (energy) on the microscale ε; ε corresponds to the characteristic spacing between absorption lines or resonances, and is much smaller than the macroscopic frequency (energy) variation of interest. The approach is based on a rigorous homogenization of the TRT/NT equation in the frequency (energy) variable. Discretization of the homogenized TRT/NT equation results in a multigroup-type system, and can therefore be solved by standard methods.
Joyce, Duncan; Parnell, William J; Assier, Raphaël C; Abrahams, I David
2017-05-01
In Parnell & Abrahams (2008 Proc. R. Soc. A 464 , 1461-1482. (doi:10.1098/rspa.2007.0254)), a homogenization scheme was developed that gave rise to explicit forms for the effective antiplane shear moduli of a periodic unidirectional fibre-reinforced medium where fibres have non-circular cross section. The explicit expressions are rational functions in the volume fraction. In that scheme, a (non-dilute) approximation was invoked to determine leading-order expressions. Agreement with existing methods was shown to be good except at very high volume fractions. Here, the theory is extended in order to determine higher-order terms in the expansion. Explicit expressions for effective properties can be derived for fibres with non-circular cross section, without recourse to numerical methods. Terms appearing in the expressions are identified as being associated with the lattice geometry of the periodic fibre distribution, fibre cross-sectional shape and host/fibre material properties. Results are derived in the context of antiplane elasticity but the analogy with the potential problem illustrates the broad applicability of the method to, e.g. thermal, electrostatic and magnetostatic problems. The efficacy of the scheme is illustrated by comparison with the well-established method of asymptotic homogenization where for fibres of general cross section, the associated cell problem must be solved by some computational scheme.
Computer program for thin-wire structures in a homogeneous conducting medium
NASA Technical Reports Server (NTRS)
Richmond, J. H.
1974-01-01
A computer program is presented for thin-wire antennas and scatterers in a homogeneous conducting medium. The analysis is performed in the real or complex frequency domain. The program handles insulated and bare wires with finite conductivity and lumped loads. The output data include the current distribution, impedance, radiation efficiency, gain, absorption cross section, scattering cross section, echo area and the polarization scattering matrix. The program uses sinusoidal bases and Galerkin's method.
Advanced nodal neutron diffusion method with space-dependent cross sections: ILLICO-VX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajic, H.L.; Ougouag, A.M.
1987-01-01
Advanced transverse integrated nodal methods for neutron diffusion developed since the 1970s require that node- or assembly-homogenized cross sections be known. The underlying structural heterogeneity can be accurately accounted for in homogenization procedures by the use of heterogeneity or discontinuity factors. Other (milder) types of heterogeneity, burnup-induced or due to thermal-hydraulic feedback, can be resolved by explicitly accounting for the spatial variations of material properties. This can be done during the nodal computations via nonlinear iterations. The new method has been implemented in the code ILLICO-VX (ILLICO variable cross-section method). Numerous numerical tests were performed. As expected, the convergence rate of ILLICO-VX is lower than that of ILLICO, requiring approximately 30% more outer iterations per k-eff computation. The methodology has also been implemented as the NOMAD-VX option of the NOMAD multicycle, multigroup, two- and three-dimensional nodal diffusion depletion code. The burnup-induced heterogeneities (space dependence of cross sections) are calculated during the burnup steps.
Chen, Weiqi; Wang, Lifei; Van Berkel, Gary J; Kertesz, Vilmos; Gan, Jinping
2016-03-25
Herein, quantitation aspects of a fully automated autosampler/HPLC-MS/MS system applied for unattended droplet-based surface sampling of repaglinide-dosed thin tissue sections, with subsequent HPLC separation and mass spectrometric analysis of the parent drug and various drug metabolites, were studied. Major organs (brain, lung, liver, kidney and muscle) from whole-body thin tissue sections and corresponding organ homogenates prepared from repaglinide-dosed mice were sampled by surface sampling and by bulk extraction, respectively, and analyzed by HPLC-MS/MS. Semi-quantitative agreement was observed between data obtained by surface sampling and data obtained by organ homogenate extraction. Drug concentrations obtained by the two methods followed the same patterns for post-dose time points (0.25, 0.5, 1 and 2 h). Drug amounts determined in the specific tissues were typically higher when analyzing extracts from the organ homogenates. In addition, relative comparison of the levels of individual metabolites between the two analytical methods also revealed good semi-quantitative agreement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamba, Irene M.; ICES, The University of Texas at Austin, 201 E. 24th St., Stop C0200, Austin, TX 78712; Haack, Jeffrey R.
2014-08-01
We present the formulation of a conservative spectral method for the Boltzmann collision operator with anisotropic scattering cross-sections. The method is an extension of the conservative spectral method of Gamba and Tharkabhushanam [17,18], which uses the weak form of the collision operator to represent the collisional term as a weighted convolution in Fourier space. The method is tested by computing the collision operator with a suitably cut-off angular cross section and comparing the results with the solution of the Landau equation. We analytically study the convergence rate of the Fourier transformed Boltzmann collision operator in the grazing collisions limit to the Fourier transformed Landau collision operator under the assumption of some regularity and decay conditions of the solution to the Boltzmann equation. Our results show that the angular singularity which corresponds to the Rutherford scattering cross section is the critical singularity for which a grazing collision limit exists for the Boltzmann operator. Additionally, we numerically study the differences between homogeneous solutions of the Boltzmann equation with the Rutherford scattering cross section and an artificial cross section, which give convergence to solutions of the Landau equation at different asymptotic rates. We numerically show the rate of the approximation as well as the consequences for the rate of entropy decay for homogeneous solutions of the Boltzmann equation and Landau equation.
PDF methods for combustion in high-speed turbulent flows
NASA Technical Reports Server (NTRS)
Pope, Stephen B.
1995-01-01
This report describes the research performed during the second year of this three-year project. The ultimate objective of the project is to extend the applicability of probability density function (pdf) methods from incompressible to compressible turbulent reactive flows. As described in subsequent sections, progress has been made on: (1) formulation and modelling of pdf equations for compressible turbulence, in both homogeneous and inhomogeneous inert flows; and (2) implementation of the compressible model in various flow configurations, namely decaying isotropic turbulence, homogeneous shear flow and a plane mixing layer.
NASA Technical Reports Server (NTRS)
Richmond, J. H.
1974-01-01
Piecewise-sinusoidal expansion functions and Galerkin's method are employed to formulate a solution for an arbitrary thin-wire configuration in a homogeneous conducting medium. The analysis is performed in the real or complex frequency domain. In antenna problems, the solution determines the current distribution, impedance, radiation efficiency, gain and far-field patterns. In scattering problems, the solution determines the absorption cross section, scattering cross section and the polarization scattering matrix. The electromagnetic theory is presented for thin wires and the forward-scattering theorem is developed for an arbitrary target in a homogeneous conducting medium.
Quasi-heterogeneous efficient 3-D discrete ordinates CANDU calculations using Attila
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preeti, T.; Rulko, R.
2012-07-01
In this paper, 3-D quasi-heterogeneous large-scale parallel Attila calculations of a generic CANDU test problem consisting of 42 complete fuel channels and a reactivity device perpendicular to the fuel are presented. The solution method is discrete ordinates (SN), and the computational model is quasi-heterogeneous, i.e. the fuel bundle is partially homogenized into five homogeneous rings, consistent with the DRAGON code model used by the industry for incremental cross-section generation. The calculations used the HELIOS-generated 45-group macroscopic cross-section library. This approach to CANDU calculations has the following advantages: 1) it allows detailed bundle (and eventually channel) power calculations for each fuel ring in a bundle, 2) it allows exact reactivity device representation for precise reactivity worth calculation, and 3) it eliminates the need for incremental cross-sections. Our results are compared to the reference Monte Carlo MCNP solution. In addition, the performance of the Attila SN method in CANDU calculations characterized by significant upscattering is discussed. (authors)
A new multigroup method for cross-sections that vary rapidly in energy
NASA Astrophysics Data System (ADS)
Haut, T. S.; Ahrens, C.; Jonko, A.; Lowrie, R.; Till, A.
2017-01-01
We present a numerical method for solving the time-independent thermal radiative transfer (TRT) equation or the neutron transport (NT) equation when the opacity (cross-section) varies rapidly in frequency (energy) on the microscale ε; ε corresponds to the characteristic spacing between absorption lines or resonances, and is much smaller than the macroscopic frequency (energy) variation of interest. The approach is based on a rigorous homogenization of the TRT/NT equation in the frequency (energy) variable. Discretization of the homogenized TRT/NT equation results in a multigroup-type system, and can therefore be solved by standard methods. We demonstrate the accuracy and efficiency of the approach on three model problems. First, we consider the Elsasser band model with constant temperature and a line spacing ε = 10^-4. Second, we consider a neutron transport application for fast neutrons incident on iron, where the characteristic resonance spacing ε necessitates approximately 16,000 energy discretization parameters if Planck-weighted cross sections are used. Third, we consider an atmospheric TRT problem for an opacity corresponding to water vapor over the frequency range 1000-2000 cm^-1, where we take 12 homogeneous layers between 1 and 15 km, with temperature/pressure values in each layer taken from the standard US atmosphere. For all three problems, we demonstrate that we can achieve between 0.1 and 1 percent relative error in the solution, with several orders of magnitude fewer parameters than a standard multigroup formulation using Planck-weighted (source-weighted) opacities of comparable accuracy.
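For context, the standard multigroup baseline the authors compare against collapses a pointwise cross section to group constants with a weighting spectrum. A minimal sketch, with a flat weight standing in for the Planck/source weight (names and numbers are illustrative, not from the paper):

```python
import numpy as np

def collapse_multigroup(sigma, weight, dE, group_of_bin, ngroups):
    """Weighted group collapse: sigma_g = sum_g(sigma*w*dE) / sum_g(w*dE),
    where group_of_bin maps each fine energy bin to its coarse group index."""
    num = np.zeros(ngroups)
    den = np.zeros(ngroups)
    np.add.at(num, group_of_bin, sigma * weight * dE)  # accumulate per group
    np.add.at(den, group_of_bin, weight * dE)
    return num / den

# Two fine bins collapsed into one group:
sigma = np.array([2.0, 4.0])
weight = np.array([1.0, 1.0])   # flat stand-in for a Planck weight
dE = np.array([1.0, 1.0])
groups = np.array([0, 0])
sig_g = collapse_multigroup(sigma, weight, dE, groups, 1)  # -> [3.0]
```

The homogenization approach in the abstract avoids the enormous `ngroups` this collapse would need when sigma oscillates on the microscale ε.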
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, W.
2012-07-01
Recent assessment results indicate that the coarse-mesh finite-difference method (FDM) gives consistently smaller percent differences in channel powers than the fine-mesh FDM when compared to the reference MCNP solution for CANDU-type reactors. However, there is an impression that, in theory, the fine-mesh FDM should always give more accurate results than the coarse-mesh FDM. To answer the question of whether the better performance of the coarse-mesh FDM for CANDU-type reactors was just a coincidence (cancellation of errors) or was caused by the use of heavy water or the use of lattice-homogenized cross sections for the cluster fuel geometry in the diffusion calculation, three benchmark problems were set up with three different fuel lattices: CANDU, HWR and PWR. These benchmark problems were then used to analyze the root cause of the better performance of the coarse-mesh FDM for CANDU-type reactors. The analyses confirm that the better performance of the coarse-mesh FDM for CANDU-type reactors is mainly caused by the use of lattice-homogenized cross sections for the sub-meshes of the cluster fuel geometry in the diffusion calculation. Based on the analyses, it is recommended to use the 2 × 2 coarse-mesh FDM to analyze CANDU-type reactors when lattice-homogenized cross sections are used in the core analysis. (authors)
NASA Astrophysics Data System (ADS)
Kandilian, Razmig; Pruvost, Jérémy; Artu, Arnaud; Lemasson, Camille; Legrand, Jack; Pilon, Laurent
2016-05-01
This paper aims to experimentally and directly validate a recent theoretical method for predicting the radiation characteristics of photosynthetic microorganisms. Such predictions would facilitate light transfer analysis in photobioreactors (PBRs) to control their operation and to maximize their production of biofuel and other high-value products. The state-of-the-art experimental method can be applied to microorganisms of any shape and inherently accounts for their non-spherical and heterogeneous nature. On the other hand, the theoretical method treats the microorganisms as polydisperse homogeneous spheres with some effective optical properties. The absorption index is expressed as the weighted sum of the pigment mass absorption cross-sections, and the refractive index is estimated from the subtractive Kramers-Kronig relationship given an anchor refractive index and wavelength. Here, particular attention was paid to the green microalga Chlamydomonas reinhardtii grown under nitrogen-replete and nitrogen-limited conditions and to Chlorella vulgaris grown under nitrogen-replete conditions. First, relatively good agreement was found between the two methods for determining the mass absorption and scattering cross-sections and the asymmetry factor of both nitrogen-replete and nitrogen-limited C. reinhardtii with the proper anchor point. However, the homogeneous sphere approximation significantly overestimated the absorption cross-section of C. vulgaris cells. The latter were instead modeled as polydisperse coated spheres consisting of an absorbing core containing pigments and a non-absorbing but strongly refracting wall made of sporopollenin. The coated sphere approximation gave good predictions of the experimentally measured integral radiation characteristics of C. vulgaris. In both cases, the homogeneous and coated sphere approximations predicted resonances in the scattering phase function that were not observed experimentally. However, these approximations were sufficiently accurate to predict the fluence rate and the local rate of photon absorption in PBRs.
Validation of the U.S. NRC NGNP evaluation model with the HTTR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saller, T.; Seker, V.; Downar, T.
2012-07-01
The High Temperature Test Reactor (HTTR) was modeled with TRITON/PARCS. Traditional light water reactor (LWR) homogenization methods rely on the short mean free paths of neutrons in LWRs. In gas-cooled, graphite-moderated reactors like the HTTR, neutrons have much longer mean free paths and penetrate further into neighboring assemblies than in LWRs. Because of this, conventional lattice calculations with a single assembly may not be valid. In addition to difficulties caused by the longer mean free paths, the HTTR presents unique axial and radial heterogeneities that require additional modifications to the single-assembly homogenization method. To handle these challenges, the homogenization domain is decreased while the computational domain is increased. Instead of homogenizing a single hexagonal fuel assembly, the assembly is split into six triangles on the radial plane and five blocks axially in order to account for the placement of burnable poisons. Furthermore, the radial domain is increased beyond a single fuel assembly to account for spectrum effects from neighboring fuel, reflector, and control rod assemblies. A series of five two-dimensional cases, each closer to the full core, were calculated to evaluate the effectiveness of the homogenization method and cross-sections. (authors)
Cosmological models with homogeneous and isotropic spatial sections
NASA Astrophysics Data System (ADS)
Katanaev, M. O.
2017-05-01
The assumption that the universe is homogeneous and isotropic is the basis for the majority of modern cosmological models. We give an example of a metric all of whose spatial sections are spaces of constant curvature but the space-time is nevertheless not homogeneous and isotropic as a whole. We give an equivalent definition of a homogeneous and isotropic universe in terms of embedded manifolds.
Sharifi, Zohreh; Atlasbaf, Zahra
2016-10-01
A new design procedure for near-perfect triangular carpet cloaks, fabricated using only isotropic homogeneous materials, is proposed. This procedure enables fabrication of a cloak with simple metamaterials, or even without employing metamaterials at all. The proposed procedure, together with an invasive weed optimization algorithm, is used to design carpet cloaks based on quasi-isotropic metamaterial structures, Teflon and AN-73. According to the simulation results, the proposed cloaks have good invisibility properties against radar, especially monostatic radar. The procedure is a new method to derive isotropic and homogeneous parameters from transformation optics formulas, so complicated structures are not needed to fabricate the carpet cloaks.
A New Equivalence Theory Method for Treating Doubly Heterogeneous Fuel - I. Theory
Williams, Mark L.; Lee, Deokjung; Choi, Sooyoung
2015-03-04
A new methodology has been developed to treat resonance self-shielding in doubly heterogeneous very high temperature gas-cooled reactor systems in which the fuel compact region of a reactor lattice consists of small fuel grains dispersed in a graphite matrix. This new method first homogenizes the fuel grain and matrix materials using an analytically derived disadvantage factor from a two-region problem with equivalence theory and the intermediate resonance method. This disadvantage factor accounts for spatial self-shielding effects inside each grain within the framework of an infinite array of grains. Then the homogenized fuel compact is self-shielded using a Bondarenko method to account for interactions between the fuel compact regions in the fuel lattice. In the final form of the equations for actual implementations, the double-heterogeneity effects are accounted for simply by using a modified definition of the background cross section, which includes geometry parameters and cross sections for both the grain and fuel compact regions. With the new method, the doubly heterogeneous resonance self-shielding effect can be treated easily, even with legacy codes programmed only for a singly heterogeneous system, by simple modifications in the background cross section for resonance integral interpolations. This paper presents a detailed derivation of the new method and a sensitivity study of double-heterogeneity parameters introduced during the derivation. The implementation of the method and verification results for various test cases are presented in the companion paper.
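To make the Bondarenko step concrete, here is a hedged sketch of the conventional singly heterogeneous background cross section that the paper's method then modifies; the function name and the numbers are illustrative, not the paper's:

```python
def background_xs(n_res, sigma_escape, dilution_terms):
    """Equivalence-theory background cross section for a resonance nuclide:

        sigma_b = (Sigma_e + sum_j N_j * sigma_p_j) / N_res

    sigma_escape:   macroscopic escape cross section Sigma_e (geometry term)
    dilution_terms: (number density, potential scattering xs) pairs for the
                    other, non-resonance nuclides in the mixture
    n_res:          number density of the resonance nuclide

    The paper's double-heterogeneity treatment enters precisely here, as a
    modified sigma_b carrying grain and compact geometry parameters.
    """
    sigma_dilution = sum(N_j * sp_j for N_j, sp_j in dilution_terms)
    return (sigma_escape + sigma_dilution) / n_res

# Illustrative numbers: one moderator nuclide, escape term 0.5 cm^-1:
sb = background_xs(n_res=0.02, sigma_escape=0.5, dilution_terms=[(0.04, 5.0)])
# sb = (0.5 + 0.04*5.0) / 0.02 = 35.0
```

A resonance integral table indexed by sigma_b (and temperature) is then interpolated at this value, which is why the modification can be retrofitted into legacy singly heterogeneous codes.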
NASA Astrophysics Data System (ADS)
Wang, Aiming; Cheng, Xiaohan; Meng, Guoying; Xia, Yun; Wo, Lei; Wang, Ziyi
2017-03-01
Identification of rotor unbalance is critical for normal operation of rotating machinery. The single-disc and single-span rotor, as the most fundamental rotor-bearing system, has attracted research attention over a long time. In this paper, the continuous single-disc and single-span rotor is modeled as a homogeneous and elastic Euler-Bernoulli beam, and the forces applied by bearings and disc on the shaft are considered as point forces. A fourth-order non-homogeneous partial differential equation set with homogeneous boundary condition is solved for analytical solution, which expresses the unbalance response as a function of position, rotor unbalance and the stiffness and damping coefficients of bearings. Based on this analytical method, a novel Measurement Point Vector Method (MPVM) is proposed to identify rotor unbalance while operating. Only a measured unbalance response registered for four selected cross-sections of the rotor-shaft under steady-state operating conditions is needed when using the method. Numerical simulation shows that the detection error of the proposed method is very small when measurement error is negligible. The proposed method provides an efficient way for rotor balancing without test runs and external excitations.
Distributed parameter modeling of repeated truss structures
NASA Technical Reports Server (NTRS)
Wang, Han-Ching
1994-01-01
A new approach to finding homogeneous models for beam-like repeated flexible structures is proposed, which conceptually involves two steps. The first step is the approximation of a 3-D non-homogeneous model by a 1-D periodic beam model. The structure is modeled as a 3-D non-homogeneous continuum. The displacement field is approximated by Taylor series expansion. Then, the cross-sectional mass and stiffness matrices are obtained by energy equivalence using their additive properties. Due to the repeated nature of the flexible bodies, the mass and stiffness matrices are also periodic. This procedure is systematic and requires less dynamics detail. The second step is the homogenization from the 1-D periodic beam model to a 1-D homogeneous beam model. The periodic beam model is homogenized into an equivalent homogeneous beam model using the additive property of compliance along the generic axis. The major departure from previous approaches in the literature is using compliance instead of stiffness in homogenization. An obvious justification is that stiffness is additive at each cross section but not along the generic axis. The homogenized model preserves many properties of the original periodic model.
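The compliance argument can be illustrated in a few lines. Assuming sections act in series along the beam axis, compliances (1/EI) add, so the homogenized bending stiffness is the length-weighted harmonic mean of the section stiffnesses; this is a scalar sketch of the idea, not the paper's full matrix formulation:

```python
def homogenized_EI(sections):
    """sections: list of (length, EI) pairs along the generic axis.

    Compliance L_i/EI_i is additive along the axis, so
        EI_eff = L_total / sum_i(L_i / EI_i)
    i.e. the length-weighted harmonic mean of the EI_i.
    """
    total_length = sum(L for L, _ in sections)
    total_compliance = sum(L / EI for L, EI in sections)
    return total_length / total_compliance

# Periodic cell: one stiff bay and one flexible bay of equal length:
EI_eff = homogenized_EI([(1.0, 3.0), (1.0, 1.0)])  # -> 1.5
# Averaging stiffness instead would give 2.0, overestimating the stiffness,
# which is why the paper homogenizes compliance rather than stiffness.
```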
Resolving Rapid Variation in Energy for Particle Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haut, Terry Scot; Ahrens, Cory Douglas; Jonko, Alexandra
2016-08-23
Resolving the rapid variation in energy in neutron and thermal radiation transport is needed for the predictive simulation capability in high-energy-density physics applications. Energy variation is difficult to resolve due to rapid variations in cross sections and opacities caused by quantized energy levels in the nuclei and electron clouds. In recent work, we have developed a new technique to simultaneously capture slow and rapid variations in the opacities and the solution using homogenization theory, which is similar to the multiband (MB) method and to the finite-element with discontiguous support (FEDS) method, but does not require closure information. We demonstrated the accuracy and efficiency of the method for a variety of problems. We are researching how to extend the method to problems with multiple materials, and with the same material at different temperatures and densities. In this highlight, we briefly describe homogenization theory and some results.
Two-dimensional arbitrarily shaped acoustic cloaks composed of homogeneous parts
NASA Astrophysics Data System (ADS)
Li, Qi; Vipperman, Jeffrey S.
2017-10-01
Acoustic cloaking is an important application of acoustic metamaterials. Although the topic has received much attention, there are a number of areas where contributions are needed. In this paper, a design method for producing acoustic cloaks with arbitrary shapes that are composed of homogeneous parts is presented. The cloak is divided into sections, each of which, in turn, is further divided into two parts, followed by the application of transformation acoustics to derive the required properties for cloaking. With the proposed mapping relations, the properties of each part of the cloak are anisotropic but homogeneous, which can be realized using two alternating layers of homogeneous and isotropic materials. A hexagonal and an irregular cloak are presented as design examples. The full wave simulations using COMSOL Multiphysics finite element software show that the cloaks function well at reducing reflections and shadows. The variation of the cloak properties is investigated as a function of three important geometric parameters used in the transformations. A balance can be found between cloaking performance and materials properties that are physically realizable.
Homogenization versus homogenization-free method to measure muscle glycogen fractions.
Mojibi, N; Rasouli, M
2016-12-01
Glycogen is extracted from animal tissues with or without homogenization using cold perchloric acid. Three methods were compared for the determination of glycogen in rat muscle at different physiological states. Two groups of five rats were kept at rest or subjected to 45 minutes of muscular activity. The glycogen fractions were extracted and measured using the three methods. The data from the homogenization method show that total glycogen decreased following 45 min of physical activity and that the change occurred entirely in acid-soluble glycogen (ASG), while acid-insoluble glycogen (AIG) did not change significantly. Similar results were obtained using the "total glycogen fractionation" method. The findings of the "homogenization-free" method indicate that AIG was the main portion of muscle glycogen and that the majority of changes occurred in the AIG fraction. The results of the "homogenization" method are identical to those of "total glycogen fractionation", but differ from the "homogenization-free" protocol. The ASG fraction is the major portion of muscle glycogen and is the more metabolically active form.
Use and users of the Appalachian Trail: a geographic study
Robert E. Manning; William Valliere; Jim Bacon; Alan Graefe; Gerard Kyle; Rita Hennessy
2001-01-01
The Appalachian National Scenic Trail (AT) is a public footpath that spans 2,160 miles of Appalachian Mountain ridgelines from Maine to Georgia. This paper describes the first comprehensive study of recreational use and users of the AT. The primary study method was a survey of visitors to the AT. The Trail was divided into 22 relatively homogeneous sections within four...
Synchronous characterization of semiconductor microcavity laser beam.
Wang, T; Lippi, G L
2015-06-01
We report on a high-resolution double-channel imaging method used to synchronously map the intensity and optical-frequency distributions of a laser beam in the plane orthogonal to the propagation direction. The synchronous measurement shows that the frequency distribution is inhomogeneous below threshold but becomes homogeneous across the fundamental Gaussian mode above threshold. Deviations of the beam's tails from the Gaussian shape, however, are accompanied by sizeable fluctuations in the laser wavelength, possibly deriving from manufacturing details and from the influence of spontaneous emission in the very-low-intensity wings. In addition to the synchronous spatial characterization, a temporal analysis at any given point in the beam cross section is carried out. With this method, the beam homogeneity and spatial shape, energy density, energy center, and defect-related spectrum can also be extracted from the high-resolution pictures.
SUBGR: A Program to Generate Subgroup Data for the Subgroup Resonance Self-Shielding Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog
2016-06-06
The Subgroup Data Generation (SUBGR) program generates subgroup data, including levels and weights, from the resonance self-shielded cross section table as a function of background cross section. Depending on the nuclide and the energy range, these subgroup data can be generated by (a) the narrow resonance approximation, (b) pointwise flux calculations for homogeneous media, or (c) pointwise flux calculations for heterogeneous lattice cells. The latter two options are performed by the AMPX module IRFFACTOR. These subgroup data are to be used in the Consortium for Advanced Simulation of Light Water Reactors (CASL) neutronic simulator MPACT, for which the primary resonance self-shielding method is the subgroup method.
NASA Astrophysics Data System (ADS)
Farsadnia, Farhad; Ghahreman, Bijan
2016-04-01
Identification of hydrologically homogeneous groups is both fundamental and applied research in hydrology. Clustering methods are among the conventional means of assessing hydrologically homogeneous regions. Recently, the Self-Organizing feature Map (SOM) method has been applied in some studies; however, the main problem with this method is the interpretation of its output map. Therefore, the SOM is used as input to other clustering algorithms. The aim of this study is to apply a two-level Self-Organizing feature map and the Ward hierarchical clustering method to determine the hydrologically homogeneous regions in the North and Razavi Khorasan provinces. First, principal component analysis was used to reduce the dimension of the SOM input matrix; the SOM was then used to form a two-dimensional feature map. To determine homogeneous regions for flood frequency analysis, the SOM output nodes were used as input to the Ward method. Generally, the regions identified by clustering algorithms are not statistically homogeneous, so they have to be adjusted to improve their homogeneity. After adjusting the regions using L-moment tests, five hydrologically homogeneous regions were identified. Finally, the best regional distribution function and its associated parameters were selected for the adjusted regions by the L-moment approach. The results showed that the combination of self-organizing maps and Ward hierarchical clustering, with principal components as input, is more effective in achieving hydrologically homogeneous regions than the hierarchical method with principal components or standardized inputs.
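The two-level pipeline described above (dimension-reduced attributes, a SOM, then Ward clustering of the SOM nodes) can be sketched in a few lines. This is a toy illustration on synthetic data with a hand-rolled SOM, not the authors' code; the grid size, learning schedule, and five-region cut are arbitrary choices for the sketch:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# toy catchment attributes: 40 sites, 5 standardized descriptors
X = rng.normal(size=(40, 5))

# -- level 1: a minimal self-organizing map on a 4x4 grid --
grid = np.array([(i, j) for i in range(4) for j in range(4)], float)
W = rng.normal(size=(16, 5))                       # node weight vectors
for t in range(2000):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(1))         # best-matching unit
    d2 = ((grid - grid[bmu]) ** 2).sum(1)          # squared grid distances
    lr = 0.5 * (1 - t / 2000)                      # decaying learning rate
    sigma = 2.0 * (1 - t / 2000) + 0.1             # shrinking neighbourhood
    h = np.exp(-d2 / (2 * sigma ** 2))             # neighbourhood kernel
    W += lr * h[:, None] * (x - W)

# -- level 2: Ward hierarchical clustering of the SOM nodes --
Z = linkage(W, method="ward")
node_labels = fcluster(Z, t=5, criterion="maxclust")   # 5 candidate regions
# each site inherits the region of its best-matching SOM node
site_labels = node_labels[[np.argmin(((W - x) ** 2).sum(1)) for x in X]]
```

In the paper the resulting candidate regions would then be tested and adjusted for statistical homogeneity with L-moment heterogeneity measures, a step omitted here.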
Improving homogeneity by dynamic speed limit systems.
van Nes, Nicole; Brandenburg, Stefan; Twisk, Divera
2010-05-01
Homogeneity of driving speeds is an important variable in determining road safety: more homogeneous driving speeds increase road safety. This study investigates the effect of introducing dynamic speed limit systems on the homogeneity of driving speeds. A total of 46 subjects twice drove a route along 12 road sections in a driving simulator. The speed limit system (static versus dynamic), the sophistication of the dynamic speed limit system (basic roadside, advanced roadside, and advanced in-car) and the situational condition (dangerous versus non-dangerous) were varied. The homogeneity of driving speed, the rated credibility of the posted speed limit and the acceptance of the different dynamic speed limit systems were assessed. The results show that the homogeneity of individual speeds, defined as the variation in driving speed for an individual subject along a particular road section, was higher with the dynamic speed limit system than with the static one. The most sophisticated dynamic speed limit system tested in this study led to higher homogeneity than the less sophisticated systems. Acceptance of the dynamic speed limit systems was positive: they were perceived as quite useful and rather satisfactory. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Mechanized syringe homogenization of human and animal tissues.
Kurien, Biji T; Porter, Andrew C; Patel, Nisha C; Kurono, Sadamu; Matsumoto, Hiroyuki; Scofield, R Hal
2004-06-01
Tissue homogenization is a prerequisite to any fractionation schedule. A plethora of hands-on methods are available to homogenize tissues. Here we report a mechanized method for homogenizing animal and human tissues rapidly and easily. The Bio-Mixer 1200 (manufactured by Innovative Products, Inc., Oklahoma City, OK) utilizes the back-and-forth movement of two motor-driven disposable syringes, connected to each other through a three-way stopcock, to homogenize animal or human tissue. Using this method, we were able to homogenize human or mouse tissues (brain, liver, heart, and salivary glands) in 5 min. From sodium dodecyl sulfate-polyacrylamide gel electrophoresis analysis and a matrix-assisted laser desorption/ionization time-of-flight mass spectrometric enzyme assay for prolidase, we found that the homogenates obtained were as good as, or better than, those obtained using a manual glass-on-Teflon (DuPont, Wilmington, DE) homogenization protocol (all-glass tube and Teflon pestle). Use of the Bio-Mixer 1200 to homogenize animal or human tissue precludes the need to stay in the cold room, as is the case with the other hands-on homogenization methods available, in addition to freeing up time for other experiments.
ERIC Educational Resources Information Center
Caldwell-Wood, Naomi; And Others
1987-01-01
The first of three articles describes procedures for using ANSI statistical methods for estimating the number of pieces in large homogeneous collections of microfiche. The second discusses causes of curl, its control, and measurement, and the third compares the advantages and disadvantages of cellulose acetate and polyester base for microforms.…
Atomic density functional and diagram of structures in the phase field crystal model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ankudinov, V. E., E-mail: vladimir@ankudinov.org; Galenko, P. K.; Kropotin, N. V.
2016-02-15
The phase field crystal model provides a continuum description of the atomic density over the diffusion time of reactions. We consider a homogeneous structure (liquid) and a perfect periodic crystal, which are constructed from the one-mode approximation of the phase field crystal model. A diagram of 2D structures is constructed from the analytic solutions of the model using atomic density functionals. The diagram predicts equilibrium atomic configurations for transitions from the metastable state and includes the domains of existence of homogeneous, triangular, and striped structures corresponding to a liquid, a body-centered cubic crystal, and a longitudinal cross section of cylindrical tubes. The method developed here is employed for constructing the diagram for the homogeneous liquid phase and the body-centered iron lattice. The expression for the free energy is derived analytically from density functional theory. The specific features of approximating the phase field crystal model are compared with the approximations and conclusions of the weak crystallization and 2D melting theories.
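For reference, the dimensionless phase field crystal free-energy functional and the one-mode ansatz for the triangular phase take the standard Elder-Grant forms (generic notation, not necessarily the authors'):

```latex
\mathcal{F}[\phi] = \int \left[ \frac{\phi}{2}\left( r + \left(1+\nabla^2\right)^2 \right)\phi + \frac{\phi^4}{4} \right] \mathrm{d}\mathbf{r},
\qquad
\phi_{\mathrm{tri}} = \bar{\phi} + A\left[ \cos(qx)\cos\!\left(\tfrac{qy}{\sqrt{3}}\right) - \tfrac{1}{2}\cos\!\left(\tfrac{2qy}{\sqrt{3}}\right) \right]
```

Minimizing F over the amplitude A and wavenumber q for each mean density φ̄ and undercooling r is what produces the coexistence boundaries of the structure diagram.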
A Tissue-Specific Approach to the Analysis of Metabolic Changes in Caenorhabditis elegans
Pujol, Claire; Ipsen, Sabine; Brodesser, Susanne; Mourier, Arnaud; Tolnay, Markus; Frank, Stephan; Trifunović, Aleksandra
2011-01-01
The majority of metabolic principles are evolutionarily conserved from nematodes to humans, and Caenorhabditis elegans has greatly accelerated the discovery of new genes important for maintaining organismal metabolic homeostasis. Various methods exist to assess the metabolic state of worms, yet they often require large numbers of animals and tend to be performed as bulk analyses of whole-worm homogenates, largely precluding detailed studies of metabolic changes in specific worm tissues. Here, we have adapted well-established histochemical methods for use on C. elegans fresh-frozen sections and demonstrate their validity for analyses of morphological and metabolic changes at the tissue level in wild-type and various mutant strains. We show how the worm presents on hematoxylin and eosin (H&E) stained sections and demonstrate their usefulness in monitoring and identifying morphological abnormalities. In addition, we demonstrate how Oil-Red-O staining on frozen worm cross-sections permits quantification of lipid storage, avoiding the artifact-prone fixation and permeabilization procedures of traditional whole-mount protocols. We also adjusted standard enzymatic stains for respiratory chain subunits (NADH, SDH, and COX) to monitor the metabolic states of various C. elegans tissues. In summary, the protocols presented here provide technical guidance for obtaining robust, reproducible and quantifiable tissue-specific data on worm morphology as well as carbohydrate, lipid and mitochondrial energy metabolism that cannot be obtained through traditional biochemical bulk analyses of worm homogenates. Furthermore, analysis of worm cross-sections overcomes the common problem of quantification in three-dimensional whole-mount specimens. PMID:22162770
Broadband computation of the scattering coefficients of infinite arbitrary cylinders.
Blanchard, Cédric; Guizal, Brahim; Felbacq, Didier
2012-07-01
We employ a time-domain method to compute the near field on a contour enclosing infinitely long cylinders of arbitrary cross section and constitution. We therefore recover the cylindrical Hankel coefficients of the expansion of the field outside the circumscribed circle of the structure. The recovered coefficients enable the wideband analysis of complex systems, e.g., the determination of the radar cross section becomes straightforward. The prescription for constructing such a numerical tool is provided in great detail. The method is validated by computing the scattering coefficients for a homogeneous circular cylinder illuminated by a plane wave, a problem for which an analytical solution exists. Finally, some radiation properties of an optical antenna are examined by employing the proposed technique.
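The recovery step described above, projecting the near field sampled on a circular contour onto angular harmonics and dividing by the outgoing Hankel functions, can be sketched as follows. The wavenumber, contour radius, and test coefficients below are invented for the illustration; the "field" is synthesized from known coefficients so the recovery is self-checking:

```python
import numpy as np
from scipy.special import hankel1

k, R = 2.0, 1.5                       # wavenumber, contour radius
N = 64                                # angular samples on the contour
theta = 2 * np.pi * np.arange(N) / N

# assumed scattered field built from known Hankel expansion coefficients c_n
orders = range(-3, 4)
c_true = {n: (0.3 + 0.1j) / (1 + abs(n)) for n in orders}
u = sum(c * hankel1(n, k * R) * np.exp(1j * n * theta)
        for n, c in c_true.items())

# recover c_n: Fourier projection of the contour field, then divide by H_n(kR)
c_rec = {n: (u * np.exp(-1j * n * theta)).mean() / hankel1(n, k * R)
         for n in orders}
```

Because the synthetic field is band-limited (|n| ≤ 3) and well oversampled, the discrete projection recovers the coefficients to machine precision; in the paper's time-domain setting, the contour field would instead come from the FDTD-type simulation at each frequency of interest.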
Nonlinear Deformation of a Piecewise Homogeneous Cylinder Under the Action of Rotation
NASA Astrophysics Data System (ADS)
Akhundov, V. M.; Kostrova, M. M.
2018-05-01
Deformation of a piecewise homogeneous cylinder under the action of rotation is investigated. The cylinder consists of an elastic matrix with circular fibers of square cross section, made of a more rigid elastic material and arranged doubly periodically in the cylinder. The behavior of the cylinder under large displacements and deformations is examined using the equations of nonlinear elasticity theory for the cylinder's constituents. The problem is solved by the finite-difference method, using continuation with respect to the rotational speed of the cylinder.
Kazemzadeh, Mohammad-Rahim; Alighanbari, Abbas
2018-04-16
A three-dimensional transformation optics method, leading to homogeneous materials, applicable to any non-Cartesian coordinate system or to waveguides/objects of arbitrary cross section, is presented. Both the conductive boundary and the internal material of the desired device are determined by the proposed formulation. The method is applicable to a wide range of waveguide, radiation, and cloaking problems, and is demonstrated for circular waveguide couplers and an external cloak. An advantage of the present method is that the material properties are simplified by appropriately selecting the conductive boundaries; for instance, a right-angle circular waveguide bend is presented that uses only one homogeneous material. The transformation of conductive materials and boundaries is also studied, and the conditions under which the transformed boundaries remain conductive are discussed. In addition, it is demonstrated that a negative infinite conductivity can be replaced with a positive conductivity without affecting the field outside the conductive boundary. It is also observed that a negative finite conductivity can be replaced with a positive one at the cost of some small errors. The general mathematical procedure and formulation for calculating the parametric surface equations of the conductive peripheries are presented.
Xiao, Xia; Feng, Ya-Ping; Du, Bin; Sun, Han-Ru; Ding, You-Quan; Qi, Jian-Guo
2017-03-01
Fluorescent immunolabeling and imaging in free-floating thick (50-60 μm) tissue sections is relatively simple in practice and enables design-based non-biased stereology, or 3-D reconstruction and analysis. This method is widely used for 3-D in situ quantitative biology in many areas of biological research. However, the labeling quality and efficiency of standard protocols for fluorescent immunolabeling of these tissue sections are not always satisfactory. Here, we systematically evaluate the effects of raising the conventional antibody incubation temperatures (4°C or 21°C) to mammalian body temperature (37°C) in these protocols. Our modification significantly enhances the quality (labeling sensitivity, specificity, and homogeneity) and efficiency (antibody concentration and antibody incubation duration) of fluorescent immunolabeling of free-floating thick tissue sections.
Bellez, Sami; Bourlier, Christophe; Kubické, Gildas
2015-03-01
This paper deals with the evaluation of electromagnetic scattering from a three-dimensional structure consisting of two nested homogeneous dielectric bodies of arbitrary shape. The scattering problem is formulated in terms of a set of Poggio-Miller-Chang-Harrington-Wu integral equations, which are then converted into a system of linear equations (the impedance matrix equation) by applying the Galerkin method of moments (MoM) with Rao-Wilton-Glisson basis functions. The MoM matrix equation is solved with the iterative propagation-inside-layer expansion (PILE) method to obtain the unknown surface current densities, which are then used to compute the radar cross-section (RCS) patterns. Numerical results for various structures, including canonical geometries, are presented and compared with those of the FEKO software in order to validate the PILE-based approach and to show its efficiency in analyzing fully polarized RCS patterns.
Improved Neutronics Treatment of Burnable Poisons for the Prismatic HTR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. Wang; A. A. Bingham; J. Ortensi
2012-10-01
In prismatic block High Temperature Reactors (HTR), highly absorbing materials such as burnable poisons (BP) cause local flux depressions and large flux gradients across the blocks, which can be a challenge to capture accurately with traditional homogenization methods. The purpose of this paper is to quantify the error associated with spatial homogenization, spectral condensation and discretization, and to highlight what is needed for improved neutronics treatment of burnable poisons in the prismatic HTR. A new triangle-based mesh is designed to separate the BP regions from the fuel assembly. A set of packages including Serpent (Monte Carlo), Xuthos (1st-order Sn), Pronghorn (diffusion), INSTANT (Pn) and RattleSnake (2nd-order Sn) is used for this study. The results from the deterministic calculations show that the cross sections generated directly in Serpent are not sufficient to accurately reproduce the reference Monte Carlo solution in all cases. The BP treatment produces good results, but this is mainly due to error cancellation. The Super Cell (SC) approach, however, yields cross sections that are consistent with cross sections prepared from an "exact" full core calculation. In addition, very good agreement exists between the various deterministic transport and diffusion codes in both eigenvalue and power distributions. Future research will focus on improving the cross sections and quantifying the error cancellation.
Method of chaotic mixing and improved stirred tank reactors
Muzzio, F.J.; Lamberto, D.J.
1999-07-13
The invention provides a method and apparatus for efficiently achieving a homogeneous mixture of fluid components by introducing the components, having a Reynolds number of about ≤1 to about 500, into a vessel and continuously perturbing the mixing flow by altering the flow speed and mixing time until homogeneity is reached. This method prevents the components from aggregating into non-homogeneous segregated regions within the vessel during mixing and substantially reduces the time for the admixed components to reach homogeneity. 19 figs.
Recent developments in multidimensional transport methods for the APOLLO 2 lattice code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zmijarevic, I.; Sanchez, R.
1995-12-31
A usual method of preparing homogenized cross sections for reactor coarse-mesh calculations is based on a two-dimensional multigroup transport treatment of an assembly, together with an appropriate leakage model and a reaction-rate-preserving homogenization technique. The current generation of assembly spectrum codes based on collision probability methods is capable of treating complex geometries (i.e., irregular meshes of arbitrary shape), thus avoiding the modeling error that was introduced in codes with traditional tracking routines. The power and architecture of current computers allow the treatment of spatial domains comprising several mutually interacting assemblies, using a fine multigroup structure and retaining all geometric details of interest. Increasing safety requirements demand detailed two- and three-dimensional calculations for very heterogeneous problems such as control rod positioning, broken Pyrex rods, irregular compacting of mixed-oxide (MOX) pellets at a MOX-UO₂ interface, and many others. An effort has been made to include accurate multidimensional transport methods in the APOLLO 2 lattice code. These include the extension to three-dimensional axially symmetric geometries of the general-geometry collision probability module TDT, and the development of new two- and three-dimensional characteristics methods for regular Cartesian meshes. In this paper we discuss the main features of recently developed multidimensional methods that are currently being tested.
Spatial homogenization methods for pin-by-pin neutron transport calculations
NASA Astrophysics Data System (ADS)
Kozlowski, Tomasz
For practical reactor core applications, low-order transport approximations such as SP3 have been shown to provide sufficient accuracy for both static and transient calculations, with considerably less computational expense than the discrete ordinates or full spherical harmonics methods. These methods have been applied in several core simulators where homogenization was performed at the level of the pin cell. One of the principal problems has been to recover the error introduced by pin-cell homogenization. Two basic approaches to treat pin-cell homogenization error have been proposed: Superhomogenization (SPH) factors and Pin-Cell Discontinuity Factors (PDF). These methods are based on the well-established Equivalence Theory and Generalized Equivalence Theory for generating appropriate group constants, and are able to treat all sources of error together, allowing even few-group diffusion with one mesh per cell to reproduce the reference solution. A detailed investigation and consistent comparison of both homogenization techniques showed the potential of the PDF approach to improve the accuracy of core calculations, but also revealed its limitation: in principle, the method is applicable only for the boundary conditions at which it was created, i.e. for the boundary conditions considered during the homogenization process, normally zero current. There is therefore a need to improve this method, making it more general and environment-independent. The goal of the proposed general homogenization technique is to create a function that correctly predicts the appropriate correction factor with only homogeneous information available, i.e. a function, built from the heterogeneous solution, that approximates the PDFs using the homogeneous solution. It has been shown that the PDF can be well approximated by a least-squares polynomial fit of the non-dimensional heterogeneous solution and later used for PDF prediction from the homogeneous solution.
This shows promise for PDF prediction at off-reference conditions, such as during reactor transients, which produce conditions that cannot typically be anticipated a priori.
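The SPH idea mentioned above admits a compact statement. In standard notation (not necessarily the author's), the factor rescales each homogenized group cross section so that the homogeneous calculation reproduces the heterogeneous reaction rates:

```latex
\mu_g = \frac{\bar{\phi}_g^{\mathrm{het}}}{\bar{\phi}_g^{\mathrm{hom}}},
\qquad
\tilde{\Sigma}_g = \mu_g\,\Sigma_g
\quad\Longrightarrow\quad
\tilde{\Sigma}_g\,\bar{\phi}_g^{\mathrm{hom}} = \Sigma_g\,\bar{\phi}_g^{\mathrm{het}}
```

Because the homogeneous flux φ̄_hom itself depends on the corrected cross sections Σ̃, the factors μ_g are in practice obtained by iterating this rescaling to convergence.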
Homogenization of periodic bi-isotropic composite materials
NASA Astrophysics Data System (ADS)
Ouchetto, Ouail; Essakhi, Brahim
2018-07-01
In this paper, we present a new method for homogenizing bi-periodic materials with bi-isotropic component phases. The method is numerical, based on the finite element method, and computes the local electromagnetic properties. The homogenized constitutive parameters are expressed as a function of the macroscopic electromagnetic properties, which are obtained from the local properties. The results are compared with those of the Unfolding Finite Element Method and of Maxwell-Garnett formulas.
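As a point of comparison, the classical Maxwell-Garnett mixing rule is easy to state in code. This is the textbook three-dimensional form for isotropic spherical inclusions, not the paper's bi-isotropic generalization:

```python
def maxwell_garnett(eps_m: complex, eps_i: complex, f: float) -> complex:
    """Effective permittivity of spherical inclusions (eps_i) at volume
    fraction f dispersed in a host matrix (eps_m), Maxwell-Garnett rule."""
    d = eps_i - eps_m
    return eps_m * (eps_i + 2 * eps_m + 2 * f * d) / (eps_i + 2 * eps_m - f * d)
```

The rule interpolates correctly between the limits: f = 0 returns the host permittivity and f = 1 returns that of the inclusion, which is a convenient sanity check against any numerical homogenization scheme.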
Thermal optimization of second harmonic generation at high pump powers.
Sahm, Alexander; Uebernickel, Mirko; Paschke, Katrin; Erbert, Götz; Tränkle, Günther
2011-11-07
We measure the temperature distribution of a 3 cm long periodically poled LiNbO₃ crystal in a single-pass second harmonic generation (SHG) setup at 488 nm. By means of three resistance heaters and directly mounted Pt100 sensors, the crystal is subdivided into three sections. With 9.4 W of infrared pump light and 1.3 W of SHG light, the temperature distribution becomes inhomogeneous, with a 0.2 K difference between the middle and back sections. A sectional offset heating is used to homogenize the temperature in those two sections, thereby increasing the conversion efficiency. A 15% higher SHG output power, matching the prediction of our theoretical model, is achieved.
Ke, Tracy; Fan, Jianqing; Wu, Yichao
2014-01-01
This paper explores the homogeneity of coefficients in high-dimensional regression, which extends the sparsity concept and is more general and suitable for many applications. Homogeneity arises when regression coefficients corresponding to neighboring geographical regions, or to a similar cluster of covariates, are expected to be approximately the same. Sparsity corresponds to a special case of homogeneity with one large cluster of known atom zero. In this article, we propose a new method called clustering algorithm in regression via data-driven segmentation (CARDS) to explore homogeneity. New mathematical results are provided on the gain that can be achieved by exploring homogeneity. Statistical properties of two versions of CARDS are analyzed. In particular, the asymptotic normality of our proposed CARDS estimator is established, which reveals better estimation accuracy for homogeneous parameters than is possible without homogeneity exploration. When our methods are combined with sparsity exploration, further efficiency can be achieved beyond the exploration of sparsity alone. This provides additional insights into the power of exploring low-dimensional structures in high-dimensional regression: homogeneity and sparsity. Our results also shed light on the properties of the fused Lasso. The newly developed method is further illustrated by simulation studies and applications to real data. Supplementary materials for this article are available online. PMID:26085701
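The segmentation idea, cluster preliminary coefficient estimates and then refit one shared value per cluster, can be illustrated on synthetic data. This toy sketch uses a crude gap split in place of the actual CARDS penalty, and the sizes, noise level, and 0.5 gap threshold are arbitrary choices for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# true coefficients share values in two homogeneous groups
beta = np.array([2.0, 2.0, 2.0, -1.0, -1.0])
X = rng.normal(size=(200, 5))
y = X @ beta + 0.1 * rng.normal(size=200)

# preliminary (OLS) estimates
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# data-driven segmentation: sort the estimates and split at large gaps
order = np.argsort(b_ols)
gaps = np.diff(b_ols[order])
groups = np.concatenate([[0], np.cumsum(gaps > 0.5)])[np.argsort(order)]

# refit with one shared coefficient per group (homogeneity-exploiting fit)
G = np.zeros((5, groups.max() + 1))
G[np.arange(5), groups] = 1.0
b_group = G @ np.linalg.lstsq(X @ G, y, rcond=None)[0]
```

Pooling observations within each discovered group is where the accuracy gain over plain OLS comes from: each shared coefficient is estimated from several columns' worth of information, mirroring the paper's asymptotic-normality result for homogeneous parameters.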
Adhesion of Mineral and Soot Aerosols can Strongly Affect their Scattering and Absorption Properties
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Jana M.
2012-01-01
We use the numerically exact superposition T-matrix method to compute the optical cross sections and the Stokes scattering matrix for polydisperse mineral aerosols (modeled as homogeneous spheres) covered with a large number of much smaller soot particles. These results are compared with the Lorenz-Mie results for a uniform external mixture of mineral and soot aerosols. We show that the effect of soot particles adhering to large mineral particles can be to change the extinction and scattering cross sections and the asymmetry parameter quite substantially. The effect on the phase function and degree of linear polarization can be equally significant.
Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong
2017-04-01
This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodic composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodic composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced, and a subinterval technique is introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment demonstrate the efficiency of the presented method for a periodic composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
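The first-order Taylor interval step at the heart of such a method can be shown on a scalar toy response. The response function, midpoint, and half-widths below are invented for the illustration and are not taken from the paper:

```python
import numpy as np

def interval_taylor(f, grad_f, b_c, db):
    """First-order Taylor interval bounds of f about the midpoint b_c,
    with parameter half-widths db: y lies in [y_c - r, y_c + r] where
    r = sum_i |df/db_i(b_c)| * db_i."""
    y_c = f(b_c)
    r = np.abs(grad_f(b_c)) @ np.abs(db)
    return y_c - r, y_c + r

# toy response y = b1^2 + 3*b2 with its analytic gradient
f = lambda b: b[0] ** 2 + 3 * b[1]
g = lambda b: np.array([2 * b[0], 3.0])
lo, hi = interval_taylor(f, g, np.array([2.0, 1.0]), np.array([0.1, 0.2]))
```

In the HIFEM the same expansion is applied component-wise to the coupled system response, with the sensitivities obtained from the homogenized finite element model rather than analytically; the subinterval technique then splits wide parameter intervals to control the linearization error.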
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shemon, Emily R.; Smith, Micheal A.; Lee, Changho
2016-02-16
PROTEUS-SN is a three-dimensional, highly scalable, high-fidelity neutron transport code developed at Argonne National Laboratory. The code is applicable to all-spectrum reactor transport calculations, particularly those in which a high degree of fidelity is needed either to represent spatial detail or to resolve solution gradients. PROTEUS-SN solves the second-order formulation of the transport equation using the continuous Galerkin finite element method in space, the discrete ordinates approximation in angle, and the multigroup approximation in energy. PROTEUS-SN's parallel methodology permits the efficient decomposition of the problem by both space and angle, permitting large problems to run efficiently on hundreds of thousands of cores. PROTEUS-SN can also be used in serial or on smaller compute clusters (tens to hundreds of cores) for smaller homogenized problems, although it is generally more computationally expensive than traditional homogenized-methodology codes. PROTEUS-SN has been used to model partially homogenized systems, where regions of interest are represented explicitly and other regions are homogenized to reduce the problem size and required computational resources. PROTEUS-SN solves forward and adjoint eigenvalue problems and permits both neutron upscattering and downscattering. An adiabatic kinetics option has recently been included for performing simple time-dependent calculations in addition to standard steady-state calculations. PROTEUS-SN handles void and reflective boundary conditions. Multigroup cross sections can be generated externally using the MC2-3 fast reactor multigroup cross section generation code, or internally using the cross section application programming interface (API), which can treat subgroup or resonance table libraries. PROTEUS-SN is written in Fortran 90 and also includes C preprocessor definitions. The code links against the PETSc, METIS, HDF5, and MPICH libraries.
It optionally links against the MOAB library and is a part of the SHARP multi-physics suite for coupled multi-physics analysis of nuclear reactors. This user manual describes how to set up a neutron transport simulation with the PROTEUS-SN code. A companion methodology manual describes the theory and algorithms within PROTEUS-SN.
Monte Carlo Determination of Gamma Ray Exposure from a Homogeneous Ground Plane
1990-03-01
Thesis presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University. The cross sections come from a standard ANISN-format library called FEWG1-85, a state-of-the-art cross-section library developed for the Defense Nuclear Agency that contains 37 neutron energy groups and 21 gamma-ray energy groups.
Pseudo-thermosetting chitosan hydrogels for biomedical application.
Berger, J; Reist, M; Chenite, A; Felt-Baeyens, O; Mayer, J M; Gurny, R
2005-01-20
To prepare transparent chitosan/beta-glycerophosphate (betaGP) pseudo-thermosetting hydrogels, the deacetylation degree (DD) of chitosan has been modified by reacetylation with acetic anhydride. Two methods (I and II) of reacetylation have been compared and have shown that the use of previously filtered chitosan, dilution of acetic anhydride and reduction of temperature in method II improves efficiency and reproducibility. Chitosans with DD ranging from 35.0 to 83.2% have been prepared according to method II under homogeneous and non-homogeneous reacetylation conditions and the turbidity of chitosan/betaGP hydrogels containing homogeneously or non-homogeneously reacetylated chitosan has been investigated. Turbidity is shown to be modulated by the DD of chitosan and by the homogeneity of the medium during reacetylation, which influences the distribution mode of the chitosan monomers. The preparation of transparent chitosan/betaGP hydrogels requires a homogeneously reacetylated chitosan with a DD between 35 and 50%.
NASA Astrophysics Data System (ADS)
Lombard, Bruno; Maurel, Agnès; Marigo, Jean-Jacques
2017-04-01
Homogenization of a thin micro-structure yields effective jump conditions that incorporate the geometrical features of the scatterers. These jump conditions apply across a thin but nonzero thickness interface whose interior is disregarded. This paper aims (i) to propose a numerical method able to handle the jump conditions in order to simulate the homogenized problem in the time domain, and (ii) to inspect the validity of the homogenized problem when compared to the real one. For this purpose, we adapt the Explicit Simplified Interface Method originally developed for standard jump conditions across a zero-thickness interface. Doing so allows us to handle arbitrarily shaped interfaces on a Cartesian grid with the same efficiency and accuracy that the numerical scheme attains in a homogeneous medium. Numerical experiments are performed to test the properties of the numerical method and to inspect the validity of the homogenized problem.
Homogenization of Mammalian Cells.
de Araújo, Mariana E G; Lamberti, Giorgia; Huber, Lukas A
2015-11-02
Homogenization is the name given to the methodological steps necessary for releasing organelles and other cellular constituents as a free suspension of intact individual components. Most homogenization procedures used for mammalian cells (e.g., cavitation pump and Dounce homogenizer) rely on mechanical force to break the plasma membrane and may be supplemented with osmotic or temperature alterations to facilitate membrane disruption. In this protocol, we describe a syringe-based homogenization method that does not require specialized equipment, is easy to handle, and gives reproducible results. The method may be adapted for cells that require hypotonic shock before homogenization. We routinely use it as part of our workflow to isolate endocytic organelles from mammalian cells. © 2015 Cold Spring Harbor Laboratory Press.
Method of fabricating a homogeneous wire of inter-metallic alloy
Ohriner, Evan Keith; Blue, Craig Alan
2001-01-01
A method for fabricating a homogeneous wire of inter-metallic alloy comprising the steps of providing a base-metal wire bundle comprising a metal, an alloy or a combination thereof; working the wire bundle through at least one die to obtain a desired dimension and to form a precursor wire; and controllably heating the precursor wire such that a portion of the wire will become liquid while simultaneously maintaining its desired shape, whereby substantial homogenization of the wire occurs in the liquid state and additional homogenization occurs in the solid state, resulting in a homogeneous alloy product.
[Methods for enzymatic determination of triglycerides in liver homogenates].
Höhn, H; Gartzke, J; Burck, D
1987-10-01
An enzymatic method is described for the determination of triacylglycerols in liver homogenate. In contrast to usual methods, higher reliability and selectivity are achieved by omitting the extraction step.
Moeller, A; Ambrose, R F; Que Hee, S S
2001-01-01
Four catfish fillet homogenate treatments before multielemental metal analysis by simultaneous inductively coupled plasma/atomic emission spectroscopy were compared in triplicate. These treatments were: nitric acid wet-ashing by Parr bomb digestion; nitric acid wet-ashing by microwave digestion; tetramethylammonium hydroxide/nitric acid wet digestion; and dry-ashing. The tetramethylammonium hydroxide/nitric acid method was imprecise (coefficients of variation > 20%). The dry-ashing method was fast and sensitive but had low recoveries of 50% for spiked Pb and Al and was not as precise as the Parr bomb or microwave treatments. The Parr bomb method was the most precise method but was less sensitive than the microwave method which had nearly the same precision. The microwave method was then adapted to homogenates of small whole fish < or = 3 cm in length. The whole fish homogenate required more vigorous digestion conditions, and addition of more acid after the evaporative step because of the presence of less oxidizable and acid-soluble components than fillet. The whole fish homogenate was also more heterogeneous than catfish fillet. A quality assurance protocol to demonstrate homogenate uniformity is essential. The use of a non-specialized microwave oven system allowed precise results for fillet and whole fish homogenates.
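The precision criterion above (a coefficient of variation above 20% marking the tetramethylammonium hydroxide/nitric acid method as imprecise) can be sketched numerically. The recovery values below are hypothetical illustrations, not data from the study:

```python
import statistics

def coefficient_of_variation(values):
    """Percent coefficient of variation: 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Illustrative triplicate recoveries (hypothetical, not from the study)
precise = [98.0, 101.0, 100.0]      # tight, Parr-bomb-like replicates
imprecise = [70.0, 100.0, 130.0]    # scattered, TMAH/nitric-like replicates

print(round(coefficient_of_variation(precise), 1))    # small CV, "precise"
print(round(coefficient_of_variation(imprecise), 1))  # CV > 20%, "imprecise"
```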
Stefenelli, Mario; Todt, Juraj; Riedl, Angelika; Ecker, Werner; Müller, Thomas; Daniel, Rostislav; Burghammer, Manfred; Keckes, Jozef
2013-10-01
Novel scanning synchrotron cross-sectional nanobeam and conventional laboratory as well as synchrotron Laplace X-ray diffraction methods are used to characterize residual stresses in exemplary 11.5 µm-thick TiN coatings. Both real and Laplace space approaches reveal a homogeneous tensile stress state and a very pronounced compressive stress gradient in as-deposited and blasted coatings, respectively. The unique capabilities of the cross-sectional approach operating with a beam size of 100 nm in diameter allow the analysis of stress variation with sub-micrometre resolution at arbitrary depths and the correlation of the stress evolution with the local coating microstructure. Finally, advantages and disadvantages of both approaches are extensively discussed.
Progesterone lipid nanoparticles: Scaling up and in vivo human study.
Esposito, Elisabetta; Sguizzato, Maddalena; Drechsler, Markus; Mariani, Paolo; Carducci, Federica; Nastruzzi, Claudio; Cortesi, Rita
2017-10-01
This investigation describes a scaling up study aimed at producing progesterone-containing nanoparticles at pilot scale. In particular, hot homogenization techniques based on ultrasound homogenization or high pressure homogenization have been employed to produce lipid nanoparticles constituted of tristearin, alone or in association with caprylic/capric triglyceride. It was found that the high pressure homogenization method yielded nanoparticles without agglomerates and with smaller mean diameters than the ultrasound homogenization method. X-ray characterization suggested a lamellar structural organization for both types of nanoparticles. Progesterone encapsulation efficiency was almost 100% in the case of the high pressure homogenization method. A shelf life study indicated a twofold increase in progesterone stability when encapsulated in nanoparticles produced by the high pressure homogenization method. Dialysis and Franz cell methods were performed to mimic subcutaneous and skin administration. Nanoparticles constituted of tristearin mixed with caprylic/capric triglyceride display a slower release of progesterone than nanoparticles constituted of pure tristearin. Franz cell experiments evidenced a higher progesterone skin uptake in the case of pure tristearin nanoparticles. A human in vivo study, based on tape stripping, was conducted to investigate the performance of the nanoparticles as progesterone skin delivery systems. Tape stripping results indicated a decrease of progesterone concentration in the stratum corneum within six hours, suggesting an interaction between the nanoparticle material and skin lipids. Copyright © 2017 Elsevier B.V. All rights reserved.
Abrupt skin lesion border cutoff measurement for malignancy detection in dermoscopy images.
Kaya, Sertan; Bayraktar, Mustafa; Kockara, Sinan; Mete, Mutlu; Halic, Tansel; Field, Halle E; Wong, Henry K
2016-10-06
Automated skin lesion border examination and analysis techniques have become an important field of research for distinguishing malignant pigmented lesions from benign lesions. An abrupt pigment pattern cutoff at the periphery of a skin lesion is one of the most important dermoscopic features for detection of neoplastic behavior. In the current clinical setting, the lesion is divided into a virtual pie with eight sections. Each section is examined by a dermatologist for abrupt cutoff and scored accordingly, which can be tedious and subjective. This study introduces a novel approach to objectively quantify the abruptness of pigment patterns along the lesion periphery. In the proposed approach, first, the skin lesion border is detected by the density-based lesion border detection method. Second, the detected border is gradually scaled through vector operations. Then, along the gradually scaled borders, pigment pattern homogeneities are calculated at different scales. Through this process, statistical texture features are extracted. Moreover, different color spaces are examined for the efficacy of texture analysis. The proposed method has been tested and validated on 100 (31 melanoma, 69 benign) dermoscopy images. The results indicate that the proposed method is effective for malignancy detection. More specifically, we obtained a specificity of 0.96 and a sensitivity of 0.86 for malignancy detection in a certain color space. The F-measure, the harmonic mean of recall and precision, of the framework is reported as 0.87. The use of texture homogeneity along the periphery of the lesion border is an effective method to detect malignancy of the skin lesion in dermoscopy images. Among the different color spaces tested, the blue channel of the RGB color space is the most informative color channel for detecting malignancy of skin lesions. It is followed by the Cr channel of the YCbCr color space, which in turn is closely followed by the green channel of the RGB color space.
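The reported F-measure of 0.87 is the harmonic mean of precision and recall. A minimal sketch follows; the precision value used is back-calculated for illustration from the reported recall and F-measure, and is not stated in the abstract:

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# With the reported sensitivity (recall) of 0.86, a precision near 0.88
# reproduces the reported F-measure of 0.87 (precision here is assumed).
print(round(f_measure(0.88, 0.86), 2))  # 0.87
```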
Yamashita, Shizuya; Kawase, Ryota; Nakaoka, Hajime; Nakatani, Kazuhiro; Inagaki, Miwako; Yuasa-Kawase, Miyako; Tsubakio-Yamamoto, Kazumi; Sandoval, Jose C; Masuda, Daisaku; Ohama, Tohru; Nakagawa-Toyama, Yumiko; Matsuyama, Akifumi; Nishida, Makoto; Ishigami, Masato
2009-12-01
In routine clinical laboratory testing and numerous epidemiological studies, LDL-cholesterol (LDL-C) has been estimated commonly using the Friedewald equation. We investigated the relationship between the Friedewald equation and 4 homogeneous assays for LDL-C. LDL-C was determined by 4 homogeneous assays [liquid selective detergent method: LDL-C (L), selective solubilization method: LDL-C (S), elimination method: LDL-C (E), and enzyme selective protecting method: LDL-C (P)]. Samples with discrepancies between the Friedewald equation and the 4 homogeneous assays for LDL-C were subjected to polyacrylamide gel electrophoresis and the beta-quantification method. The correlations between the Friedewald equation and the 4 homogeneous LDL-C assays were as follows: LDL-C (L) (r=0.962), LDL-C (S) (r=0.986), LDL-C (E) (r=0.946) and LDL-C (P) (r=0.963). Discrepancies were observed in sera from type III hyperlipoproteinemia patients and in sera containing large amounts of midband and small dense LDL on polyacrylamide gel electrophoresis. LDL-C (S) was most strongly correlated with the beta-quantification method even in sera from patients with type III hyperlipoproteinemia. Of the 4 homogeneous assays for LDL-C, LDL-C (S) exhibited the closest correlation with the Friedewald equation and the beta-quantification method, thus reflecting the current clinical databases for coronary heart disease.
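For reference, the Friedewald estimate used as the comparator above is the standard formula below (units mg/dL). The formula is well established in clinical chemistry but is not spelled out in the abstract itself:

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Friedewald estimate of LDL-C in mg/dL:
    LDL-C = TC - HDL-C - TG/5.
    Conventionally considered invalid for TG >= 400 mg/dL."""
    if triglycerides >= 400:
        raise ValueError("Friedewald equation is not valid for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

print(friedewald_ldl(200, 50, 150))  # 120.0
```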
Drawing method can improve musculoskeletal anatomy comprehension in medical faculty student.
Joewono, Muliani; Karmaya, I Nyoman Mangku; Wirata, Gede; Yuliana; Widianti, I Gusti Ayu; Wardana, I Nyoman Gede
2018-03-01
The Chinese philosophy of Confucianism says: "What I hear, I forget; what I see, I remember; what I do, I understand." At present, most of the teaching and learning process relies on viewing and listening modalities only. As a result, much of the information does not last long in memory, and the resulting understanding of the material is less deep. In studying anatomy, drawing is an effective and important method because it integrates ideas and visual knowledge, thereby increasing comprehension and the learning motivation of college students. The purpose of this research is to assess musculoskeletal anatomy comprehension under a drawing-based learning method among medical faculty students. This research uses an observational analytic cross-sectional design. Total sampling was done of the entire student body of the Physiotherapy Study Program in 2012, 2013, and 2014, Medical Faculty of Udayana University. The average musculoskeletal anatomy scores of the students in 2012, 2013, and 2014 were 31.67, 33.57, and 45.00, respectively. A normality test with Shapiro-Wilk and a homogeneity test with Levene's test showed that the data were normally distributed and homogeneous. A one-way ANOVA test between groups showed a significant result of 11.00 (P < 0.05). It is concluded that the drawing method can improve musculoskeletal anatomy comprehension in medical faculty students.
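The one-way ANOVA used to compare the three cohorts can be sketched with a small stdlib-only F-statistic. The group scores below are illustrative placeholders, not the study's raw data:

```python
def one_way_anova_f(*groups):
    """F statistic for one-way ANOVA:
    between-group mean square divided by within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical anatomy scores for three cohorts (not the study data)
f_stat = one_way_anova_f([28, 32, 35], [30, 34, 36], [42, 45, 48])
print(round(f_stat, 2))
```

A large F relative to the critical value of the F(k-1, n-k) distribution indicates that at least one group mean differs significantly.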
NASA Astrophysics Data System (ADS)
Zhang, L. F.; Chen, D. Y.; Wang, Q.; Li, H.; Zhao, Z. G.
2018-01-01
A preparation technology for ultra-thin carbon-fiber paper is reported. Carbon-fiber distribution homogeneity has a great influence on the properties of ultra-thin carbon-fiber paper. In this paper, a self-developed homogeneity analysis system is introduced to help users evaluate the distribution homogeneity of carbon fiber across two or more binary (two-value) images of carbon-fiber paper. A relative-uniformity factor W/H is introduced. The experimental results show that the smaller the W/H factor, the higher the uniformity of the carbon-fiber distribution. The new uniformity-evaluation method provides a practical and reliable tool for analyzing the homogeneity of materials.
Comparison of Methods to Assay Liver Glycogen Fractions: The Effects of Starvation
Mojibi, Nastaran
2017-01-01
Introduction: There are several methods to extract and measure glycogen in animal tissues. Glycogen is extracted with or without homogenization by using cold perchloric acid (PCA). Aim: Three procedures were compared to determine glycogen fractions in rat liver at different physiological states. Materials and Methods: The present study was conducted on two groups of rats: one group of five rats fed standard rodent laboratory food served as controls, and another five rats starved overnight (15 hours) served as cases. The glycogen fractions were extracted and measured using three methods: classical homogenization, total glycogen fractionation and homogenization-free protocols. Results: The data from the homogenization method showed that following 15-hour starvation, total glycogen decreased (36.4±1.9 vs. 27.7±2.5, p=0.01) and the change occurred entirely in Acid Soluble Glycogen (ASG) (32.0±1.1 vs. 22.7±2.5, p=0.01), while Acid Insoluble Glycogen (AIG) did not change significantly (4.9±0.9 vs. 4.6±0.3, p=0.7). Similar results were obtained with the total-glycogen-fractionation method. The homogenization-free procedure indicated that the ASG and AIG fractions comprise about 2/3 and 1/3 of total glycogen, respectively, and that changes occurred in both the ASG (24.4±2.6 vs. 16.7±0.4, p<0.05) and AIG fractions (8.7±0.8 vs. 7.1±0.3, p=0.05). Conclusion: The findings of the homogenization assay method indicate that ASG is the major portion of liver glycogen and is the more metabolically active form. The same results were obtained using the total-glycogen-fractionation method. The homogenization-free method gave different results, because the AIG fraction was contaminated with ASG. In both the homogenization and homogenization-free methods, ASG must be extracted at least twice to prevent contamination of AIG with ASG. PMID:28511372
Efficiency of operation of wind turbine rotors optimized by the Glauert and Betz methods
NASA Astrophysics Data System (ADS)
Okulov, V. L.; Mikkelsen, R.; Litvinov, I. V.; Naumov, I. V.
2015-11-01
The models of two types of rotors with blades constructed using different optimization methods are compared experimentally. In the first case, the Glauert optimization by the pulsed method is used, which is applied independently for each individual blade cross section. This method remains the main approach in designing rotors of various duties. The construction of the other rotor is based on the Betz idea about optimization of rotors by determining a special distribution of circulation over the blade, which ensures the helical structure of the wake behind the rotor. It is established for the first time as a result of direct experimental comparison that the rotor constructed using the Betz method makes it possible to extract more kinetic energy from the homogeneous incoming flow.
Homogeneity study of fixed-point continuous marine environmental and meteorological data: a review
NASA Astrophysics Data System (ADS)
Yang, Jinkun; Yang, Yang; Miao, Qingsheng; Dong, Mingmei; Wan, Fangfang
2018-02-01
The principle of inhomogeneity and the classification of homogeneity test methods are briefly described, and several common inhomogeneity methods and their relative merits are described in detail. Then, based on applications of the different homogeneity methods to surface meteorological data and marine environmental data, the present status and progress are reviewed. At present, homogeneity research on radiosonde and surface meteorological data is mature both in China and abroad, and research and application to marine environmental data should also be given full attention. Carrying out a variety of test and correction methods in combination with a multi-mode test system will make the results more reasonable and scientific, and can provide accurate first-hand information for coastal climate change research.
Hydrogen storage materials and method of making by dry homogenation
Jensen, Craig M.; Zidan, Ragaiy A.
2002-01-01
Dry homogenized metal hydrides, in particular aluminum hydride compounds, are provided as a material for reversible hydrogen storage. The reversible hydrogen storage material comprises a dry homogenized material having transition metal catalytic sites on a metal aluminum hydride compound, or mixtures of metal aluminum hydride compounds. A method of making such reversible hydrogen storage materials by dry doping is also provided and comprises the steps of dry homogenizing metal hydrides by mechanical mixing, such as by crushing or ball milling a powder of a metal aluminum hydride with a transition metal catalyst. In another aspect of the invention, a method of powering a vehicle apparatus with the reversible hydrogen storage material is provided.
Macro-architectured cellular materials: Properties, characteristic modes, and prediction methods
NASA Astrophysics Data System (ADS)
Ma, Zheng-Dong
2017-12-01
Macro-architectured cellular (MAC) material is defined as a class of engineered materials having configurable cells of relatively large (i.e., visible) size that can be architecturally designed to achieve various desired material properties. Two types of novel MAC materials, negative Poisson's ratio material and biomimetic tendon reinforced material, were introduced in this study. To estimate the effective material properties for structural analyses and to optimally design such materials, a set of suitable homogenization methods was developed that provided an effective means for the multiscale modeling of MAC materials. First, a strain-based homogenization method was developed using an approach that separated the strain field into a homogenized strain field and a strain variation field in the local cellular domain superposed on the homogenized strain field. The principle of virtual displacements for the relationship between the strain variation field and the homogenized strain field was then used to condense the strain variation field onto the homogenized strain field. The new method was then extended to a stress-based homogenization process based on the principle of virtual forces and further applied to address the discrete systems represented by the beam or frame structures of the aforementioned MAC materials. The characteristic modes and the stress recovery process used to predict the stress distribution inside the cellular domain and thus determine the material strengths and failures at the local level are also discussed.
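The strain separation at the heart of the homogenization method above can be written compactly. The notation below is a standard sketch assuming a periodic cell Y, and is not reproduced from the paper:

```latex
% Strain split into a homogenized part and a zero-mean local variation
\varepsilon(x) = \bar{\varepsilon} + \tilde{\varepsilon}(x),
\qquad
\langle \tilde{\varepsilon} \rangle
  = \frac{1}{|Y|}\int_Y \tilde{\varepsilon}(x)\,\mathrm{d}Y = 0 .

% Virtual displacements condense the variation onto the homogenized strain
\int_Y \delta\tilde{\varepsilon} : C(x) :
  \bigl(\bar{\varepsilon} + \tilde{\varepsilon}(x)\bigr)\,\mathrm{d}Y = 0
\;\Longrightarrow\;
\tilde{\varepsilon}(x) = G(x)\,\bar{\varepsilon},
\qquad
C^{\mathrm{eff}} = \frac{1}{|Y|}\int_Y C(x)\bigl(I + G(x)\bigr)\,\mathrm{d}Y .
```

Here C(x) is the local stiffness and G(x) the (assumed) localization operator mapping the homogenized strain to the local variation; the stress-based variant follows analogously from the principle of virtual forces.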
Benabdallah, Nadia; Benahmed, Nasreddine; Benyoucef, Boumediene; Bouhmidi, Rachid; Khelif, M'Hamed
2007-08-21
In this paper we present electromagnetic (EM) analysis of the unloaded slotted-tube resonator (STR) with a circular cross section, using the finite element method (FEM) and method of moments (MoM) in two dimensions. This analysis allows the determination of the primary parameters: [L] and [C] matrices, optimization of the field homogeneity, and simulates the frequency response of S(11) at the RF port of the designed STR. The optimum configuration is presented, taking into account the effect of the thickness of the STR and the effect of the RF shield. As an application, we present the design results of a MRI probe using the STR and operating at 500 MHz (proton imaging at 11.74 T). The resonator has -69.37 dB minimum reflection and an unloaded quality factor (Q(o)) > 500 at 500 MHz.
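The quoted unloaded quality factor can be related to measurable quantities by the usual bandwidth definition, Q = f0/Δf. This is a sketch under that standard definition; the 1 MHz bandwidth figure is illustrative, not from the paper:

```python
def quality_factor(f0_hz, bandwidth_3db_hz):
    """Resonator quality factor from the resonance frequency
    and the -3 dB bandwidth: Q = f0 / delta_f."""
    return f0_hz / bandwidth_3db_hz

# For Q_o > 500 at 500 MHz, the -3 dB bandwidth must be under 1 MHz
print(quality_factor(500e6, 1e6))  # 500.0
```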
Method of chaotic mixing and improved stirred tank reactors
Muzzio, Fernando J.; Lamberto, David J.
1999-01-01
The invention provides a method and apparatus for efficiently achieving a homogeneous mixture of fluid components by introducing said components, having a Reynolds number from about 1 or less to about 500, into a vessel and continuously perturbing the mixing flow by altering the flow speed and mixing time until homogeneity is reached. This method prevents the components from aggregating into non-homogeneous segregated regions within said vessel during mixing and substantially reduces the time for the admixed components to reach homogeneity.
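The claimed operating window is defined by the Reynolds number. A minimal sketch of the standard definition follows; the fluid properties below are illustrative (roughly glycerol-like) and are not taken from the patent:

```python
def reynolds_number(density, velocity, length, dynamic_viscosity):
    """Re = rho * u * L / mu (dimensionless).
    density in kg/m^3, velocity in m/s, characteristic length in m,
    dynamic viscosity in Pa*s."""
    return density * velocity * length / dynamic_viscosity

# Illustrative viscous fluid in a small stirred vessel (assumed values)
re = reynolds_number(density=1260.0, velocity=0.01, length=0.1,
                     dynamic_viscosity=1.0)
print(re)  # 1.26, within the laminar ~1..500 window targeted by the method
```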
NASA Astrophysics Data System (ADS)
Soltanmoradi, Elmira; Shokri, Babak
2017-05-01
In this article, the electromagnetic wave scattering from plasma columns with an inhomogeneous electron density distribution is studied by the Green's function volume integral equation method. Given the ready production of such plasmas in the laboratory and their practical application in various technological fields, this study examines the effects of plasma parameters such as the electron density, radius, and pressure on the scattering cross-section of a plasma column. Moreover, the influence of the incident wave frequency on the scattering pattern is demonstrated. Furthermore, the scattering cross-section of a plasma column with an inhomogeneous collision frequency profile is calculated, and the effect of this inhomogeneity is discussed here for the first time. These results are especially useful for determining the appropriate conditions for radar cross-section reduction. It is shown that the radar cross-section of a plasma column is reduced more for a larger collision frequency, for a relatively lower plasma frequency, and for a smaller radius. Furthermore, it is found that the effect of the electron density on the scattering cross-section is more pronounced than that of the other plasma parameters. Also, a plasma column with a homogeneous collision frequency provides better shielding than its inhomogeneous counterpart.
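The shielding versus transparency behavior discussed above is governed by the electron plasma frequency, which depends only on the electron density. A sketch using CODATA constants; the density value is an illustrative choice, not from the article:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
E_MASS = 9.1093837015e-31    # electron mass, kg
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def plasma_frequency_hz(n_e):
    """Electron plasma frequency f_p (Hz) for electron density n_e (m^-3):
    omega_p = sqrt(n_e * e^2 / (eps0 * m_e)), f_p = omega_p / (2*pi)."""
    omega_p = math.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS))
    return omega_p / (2 * math.pi)

# Waves well below f_p are strongly reflected (good shielding);
# well above f_p the column becomes increasingly transparent.
print(f"{plasma_frequency_hz(1e18):.3e} Hz")  # ~9 GHz for n_e = 1e18 m^-3
```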
[Simulation and data analysis of stereological modeling based on virtual slices].
Wang, Hao; Shen, Hong; Bai, Xiao-yan
2008-05-01
To establish a computer-assisted stereological model for simulating the process of slice sectioning and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically and implemented as Win32 software based on MFC, with Microsoft Visual Studio as the IDE, to simulate the infinite process of sectioning and to analyze the data derived from the model. The linearity of the fit of the model was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high rates (>94.5% and 92%) in homogeneity and independence tests. The density, shape and size data of the sections were tested and conform to a normal distribution. The output of the model and that of the image analysis system showed statistical correlation and consistency. The algorithm described here can be used for evaluating the stereological parameters of the structure of tissue slices.
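The simulated sectioning process can be sketched as a Monte Carlo experiment with spherical particles. This is a simplified stand-in for the Win32 model described above (uniform particle centers, a single section plane), with all parameters illustrative:

```python
import math
import random

def section_profiles(n_particles=10000, r=0.05, seed=1):
    """Monte Carlo sketch: spheres of radius r with centers uniform in a
    unit cube, cut by the plane z = 0.5. Returns the radii of the circular
    profiles appearing on the section."""
    rng = random.Random(seed)
    profiles = []
    for _ in range(n_particles):
        z = rng.random()
        d = abs(z - 0.5)           # distance from center to section plane
        if d < r:                  # sphere intersects the plane
            profiles.append(math.sqrt(r * r - d * d))
    return profiles

profiles = section_profiles()
# Expected hit fraction is 2r, i.e., ~10% of particles for r = 0.05
print(len(profiles))
```

Note how the profile radii are systematically smaller than the true sphere radius, which is exactly the kind of 2D-to-3D relationship stereological formulas correct for.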
Reactive sintering of ceramic lithium ion electrolyte membranes
Badding, Michael Edward; Dutta, Indrajit; Iyer, Sriram Rangarajan; Kent, Brian Alan; Lonnroth, Nadja Teresia
2017-06-06
Disclosed herein are methods for making a solid lithium ion electrolyte membrane, the methods comprising combining a first reactant chosen from amorphous, glassy, or low melting temperature solid reactants with a second reactant chosen from refractory oxides to form a mixture; heating the mixture to a first temperature to form a homogenized composite, wherein the first temperature is between a glass transition temperature of the first reactant and a crystallization onset temperature of the mixture; milling the homogenized composite to form homogenized particles; casting the homogenized particles to form a green body; and sintering the green body at a second temperature to form a solid membrane. Solid lithium ion electrolyte membranes manufactured according to these methods are also disclosed herein.
Jang, Ji-Yong; Kim, Jung-Sun; Shin, Dong-Ho; Kim, Byeong-Keuk; Ko, Young-Guk; Choi, Donghoon; Jang, Yangsoo; Hong, Myeong-Ki
2015-10-01
Serial follow-up optical coherence tomography (OCT) was used to evaluate the effect of optimal lipid-lowering therapy on qualitative changes in neointimal tissue characteristics after drug-eluting stent (DES) implantation. DES-treated patients (n = 218) who received statin therapy were examined with serial follow-up OCT. First and second follow-up OCT evaluations were performed approximately 6 and 18 months after the index procedure, respectively. Patients were divided into two groups based on the level of low-density lipoprotein-cholesterol (LDL-C) measured at the second follow-up: the optimal lipid-lowering group (n = 121), with an LDL-C reduction of ≥50% or an LDL-C level ≤70 mg/dL, and the conventional group (n = 97). Neointimal characteristics were qualitatively categorized as homogeneous or non-homogeneous patterns using OCT. The non-homogeneous group included heterogeneous, layered, or neoatherosclerosis patterns. Qualitative changes in neointimal tissue characteristics between the first and second follow-up OCT examinations were assessed. Between the first and second follow-up OCT procedures, the neointimal cross-sectional area increased more substantially in the conventional group (0.4 mm(2) vs. 0.2 mm(2) in the optimal lipid-lowering group, p = 0.01). The neointimal pattern changed from homogeneous to non-homogeneous less often in the optimal lipid-lowering group (1.3%, 1/77, p < 0.001) than in the conventional group (15.3%, 11/72, p = 0.44). Optimal LDL-C reduction was an independent predictor for the prevention of neointimal pattern change from homogeneous to non-homogeneous (odds ratio: 0.05, 95% confidence interval: 0.01 to 0.46, p = 0.008). Our findings suggest that an intensive reduction in LDL-C levels can prevent non-homogeneous changes in the neointima and increases in neointimal cross-sectional area compared with conventional LDL-C control. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Homogeneous Immunoassays: Historical Perspective and Future Promise
NASA Astrophysics Data System (ADS)
Ullman, Edwin F.
1999-06-01
The founding and growth of Syva Company is examined in the context of its leadership role in the development of homogeneous immunoassays. The simple mix-and-read protocols of these methods offer advantages in routine analytical and clinical applications. Early homogeneous methods were based on insensitive detection of immunoprecipitation during antigen/antibody binding. The advent of reporter groups in biology provided a means of quantitating immunochemical binding by labeling antibody or antigen and physically separating label incorporated into immune complexes from free label. Although high sensitivity was achieved, quantitative separations were experimentally demanding. Only when it became apparent that reporter groups could provide information, not only about the location of a molecule but also about its microscopic environment, was it possible to design practical non-separation methods. The evolution of early homogeneous immunoassays was driven largely by the development of improved detection strategies. The first commercial spin immunoassays, developed by Syva for drug abuse testing during the Vietnam war, were followed by increasingly powerful methods such as immunochemical modulation of enzyme activity, fluorescence, and photo-induced chemiluminescence. Homogeneous methods that quantify analytes at femtomolar concentrations within a few minutes now offer important new opportunities in clinical diagnostics, nucleic acid detection and drug discovery.
Boundary element modelling of dynamic behavior of piecewise homogeneous anisotropic elastic solids
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Markov, I. P.; Litvinchuk, S. Yu
2018-04-01
A traditional direct boundary integral equations method is applied to solve three-dimensional dynamic problems of piecewise homogeneous linear elastic solids. The materials of homogeneous parts are considered to be generally anisotropic. The technique used to solve the boundary integral equations is based on the boundary element method applied together with the Radau IIA convolution quadrature method. A numerical example of suddenly loaded 3D prismatic rod consisting of two subdomains with different anisotropic elastic properties is presented to verify the accuracy of the proposed formulation.
Environment-based pin-power reconstruction method for homogeneous core calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite-lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method in every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction, so far as it is consistent with the core loading pattern.
Distribution of Particles in the Z-axis of Tissue Sections: Relevance for Counting Methods.
von Bartheld, Christopher S
2012-01-01
The distribution of particles in the z-axis of thick tissue sections has gained considerable attention, primarily because of implications for the accuracy of modern stereological counting methods. Three major types of artifacts can affect these sections: loss of particles from the surfaces of tissue sections (lost caps), homogeneous collapse in the z-axis, and differential deformation in the z-axis. Initially it was assumed that thick sections were not compromised by differential shrinkage or compression (differential uniform deformation). Studies in the last decade showed that such artifacts are common and that they depend on embedding media and sectioning devices. Paraffin, glycolmethacrylate and vibratome sections are affected by this artifact, but not celloidin sections or cryostat-derived cryosections. Differential distribution of particles in the z-axis is likely due to compression of the surface areas (margins) during sectioning, resulting in differential particle densities in the core and margin of tissue sections. This deformation of tissue sections can be rapidly assessed by measuring the position of particles in the z-axis. The analysis is complicated by potential secondary effects on section surfaces through loss of particles, the so-called "lost caps" phenomenon. Secondary effects necessitate the use of guard spaces, while their use in case of primary effects (compression due to sectioning) would enhance the artifact's impact on bias. Symmetric versus asymmetric patterns of z-axis distortion can give clues to distinguish primary and secondary effects. Studies that use the optical disector need to take these parameters into account to minimize biases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, E. Costa, E-mail: edusilva@ele.puc-rio.br; Gusmão, L. A. P.; Barbosa, C. R. Hall
2014-08-15
Recently, our research group at PUC-Rio discovered that magnetic transducers based on the impedance phase characteristics of GMI sensors have the potential to multiply sensitivity values by one hundred when compared to magnitude-based GMI transducers. Those GMI sensors can be employed in the measurement of ultra-weak magnetic fields, whose intensities are even lower than the environmental magnetic noise. A traditional solution for cancelling electromagnetic noise and interference makes use of gradiometric configurations, but the performance is strongly tied to the homogeneity of the sensing elements. This paper presents a new method that uses electronic circuits to modify the equivalent impedance of the GMI samples, aiming at homogenizing their phase characteristics and, consequently, improving the performance of gradiometric configurations based on GMI samples. A performance comparison between this new method and a previously developed homogenization method is also presented.
Generating highly uniform electromagnetic field characteristics
Crow, James Terry
1998-01-01
An apparatus and method for generating homogeneous electromagnetic fields within a volume. The homogeneity provided may be for magnetic and/or electric fields, and for field magnitude, radial gradient, or higher order radial derivative. The invention comprises conductive pathways oriented mirror symmetrically about a desired region of homogeneity. A corresponding apparatus and method is provided for substantially canceling the electromagnetic field outside of the apparatus, comprising a second set of conductive pathways placed outside the first set.
Generating highly uniform electromagnetic field characteristics
Crow, James T.
1998-01-01
An apparatus and method for generating homogeneous electromagnetic fields within a volume. The homogeneity provided may be for magnetic and/or electric fields, and for field magnitude, radial gradient, or higher order radial derivative. The invention comprises conductive pathways oriented about a desired region of homogeneity. A corresponding apparatus and method is provided for substantially canceling the electromagnetic field outside of the apparatus, comprising a second set of conductive pathways placed outside the first set.
Generating highly uniform electromagnetic field characteristics
Crow, James T.
1997-01-01
An apparatus and method for generating homogeneous electromagnetic fields within a volume. The homogeneity provided may be for magnetic and/or electric fields, and for field magnitude, radial gradient, or higher order radial derivative. The invention comprises conductive pathways oriented mirror symmetrically about a desired region of homogeneity. A corresponding apparatus and method is provided for substantially cancelling the electromagnetic field outside of the apparatus, comprising a second set of conductive pathways placed outside the first set.
Nonlinear vibration of a traveling belt with non-homogeneous boundaries
NASA Astrophysics Data System (ADS)
Ding, Hu; Lim, C. W.; Chen, Li-Qun
2018-06-01
Free and forced nonlinear vibrations of a traveling belt with non-homogeneous boundary conditions are studied. Axially moving materials in operation are always externally excited and produce strong vibrations, and they are usually modeled with homogeneous boundary conditions. In this paper, non-homogeneous boundaries are introduced by the support wheels, and an equilibrium deformation of the belt is produced by these non-homogeneous boundaries. In order to solve for the equilibrium deformation, the differential and integral quadrature methods (DIQMs) are utilized to develop an iterative scheme. The influence of the equilibrium deformation on free and forced nonlinear vibrations of the belt is explored. The DIQMs are applied to solve for the natural frequencies and forced resonance responses of transverse vibration around the equilibrium deformation. The Galerkin truncation method (GTM) is utilized to confirm the DIQMs' results. The numerical results demonstrate that the non-homogeneous boundary conditions cause the transverse vibration to deviate from the straight equilibrium, increase the natural frequencies, and lead to the coexistence of square and cubic nonlinear terms. Moreover, the influence of non-homogeneous boundaries can be exacerbated by the axial speed. Therefore, non-homogeneous boundary conditions of axially moving materials should especially be taken into account.
NASA Astrophysics Data System (ADS)
Kazadzis, Stelios; Kouremeti, Natalia; Nyeki, Stephan; Gröbner, Julian; Wehrli, Christoph
2018-02-01
The World Optical Depth Research Calibration Center (WORCC) is a section within the World Radiation Center at Physikalisches-Meteorologisches Observatorium (PMOD/WRC), Davos, Switzerland, established after the recommendations of the World Meteorological Organization for calibration of aerosol optical depth (AOD)-related Sun photometers. WORCC is mandated to develop new methods for instrument calibration, to initiate homogenization activities among different AOD networks and to run a network (GAW-PFR) of Sun photometers. In this work we describe the calibration hierarchy and methods used under WORCC and the basic procedures, tests and processing techniques in order to ensure the quality assurance and quality control of the AOD-retrieved data.
Minaya-Sánchez, Mirna; Medina-Solís, Carlo E.; Vallejos-Sánchez, Ana A.; Marquez-Corona, Maria L.; Pontigo-Loyola, América P.; Islas-Granillo, Horacio; Maupomé, Gerardo
2012-01-01
Background: Diverse variables are implicated in the pathogenesis of gingival recession; more detailed knowledge about the relationship between the clinical presentation of gingival recession and assorted risk indicators may lead to improved patient monitoring, early intervention, and subsequent prevention. The objective was to clinically evaluate gingival recession in a homogeneous Mexican adult male population and to determine the strength of association with related factors. Method: A cross-sectional study was carried out in a largely homogeneous group in terms of ethnic background, socioeconomic status, gender, occupation, and medical/dental insurance, in Campeche, Mexico. Periodontal examinations were undertaken to determine diverse clinical dental variables. All periodontal clinical examinations were performed by a single examiner using the Florida Probe System and a dental chair. Questionnaires were used to collect diverse risk indicators. Statistical analyses were undertaken with negative binomial regression models. Results: The mean number of sites with gingival recession per subject was 6.73±5.81; the prevalence was 87.6%. In the negative binomial regression model, the mean number of sites with gingival recession increased by 2.9%, 1.0%, and 13.0% for (i) each year of age, (ii) each percentage-unit increase in sites with plaque, and (iii) each percentage-unit increase in sites with suppuration, respectively. Having a spouse was associated with gingival recession. Conclusions: We observed associations between gingival recession and sociodemographic and clinical parameters. Patients need to be educated about risk indicators for gingival recession as well as the preventive maneuvers that may be implemented to minimize its occurrence. The potential of improved oral self-care to prevent a largely benign condition such as gingival recession is important, given the associated disorders that may ensue from root exposure, such as root caries and root hypersensitivity.
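The percent increases quoted in the results are the usual transformation of negative binomial (log-link) regression coefficients into rate changes; a minimal sketch, with the age coefficient back-computed from the reported 2.9% figure rather than taken from the study's output:

```python
import math

# In a log-link count model, a coefficient beta translates into a
# (exp(beta) - 1) * 100 percent change in the expected number of affected
# sites per unit increase of the covariate.
def percent_change(beta):
    return (math.exp(beta) - 1.0) * 100.0

beta_age = math.log(1.029)              # hypothetical: implied by the reported 2.9%
pct_per_year = percent_change(beta_age)
```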
Key words: Oral health, periodontal health, gingival recession, adults, Mexico. PMID:22549678
NASA Astrophysics Data System (ADS)
Ding, Anxin; Li, Shuxin; Wang, Jihui; Ni, Aiqing; Sun, Liangliang; Chang, Lei
2016-10-01
In this paper, the corner spring-in angles of AS4/8552 L-shaped composite profiles with different thicknesses are predicted using a path-dependent constitutive law that accounts for the variation of material properties due to phase change during curing. The prediction accuracy mainly depends on the properties in the rubbery and glassy states obtained by homogenization methods rather than experimental measurements. Both analytical and finite element (FE) homogenization methods are applied to predict the overall properties of the AS4/8552 composite. The effect of fiber volume fraction on the properties is investigated for both rubbery and glassy states using both methods, and the predicted results are compared with experimental measurements for the glassy state. Good agreement is achieved between the predicted results and available experimental data, showing the reliability of the homogenization methods. Furthermore, the corner spring-in angles of the L-shaped composite profiles are measured experimentally, validating the path-dependent constitutive law as well as the properties predicted by the FE homogenization method.
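The dependence of composite properties on fiber volume fraction can be illustrated with the simplest analytical micromechanics estimates (Voigt/Reuss rule of mixtures); the moduli below are hypothetical, not measured AS4/8552 data, and real homogenization schemes are more refined.

```python
# Rule-of-mixtures sketch for a unidirectional fiber composite:
# Voigt (iso-strain) estimate for the longitudinal modulus and
# Reuss (iso-stress) estimate for the transverse modulus.
def rule_of_mixtures(vf, e_fiber, e_matrix):
    e1 = vf * e_fiber + (1.0 - vf) * e_matrix            # longitudinal (Voigt)
    e2 = 1.0 / (vf / e_fiber + (1.0 - vf) / e_matrix)    # transverse (Reuss)
    return e1, e2

# Hypothetical moduli in GPa (carbon fiber vs. epoxy-like matrix):
e1, e2 = rule_of_mixtures(0.5, 200.0, 4.0)
```

Sweeping `vf` over the rubbery-state and glassy-state matrix moduli reproduces the kind of volume-fraction study the abstract describes.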
An improved high-throughput lipid extraction method for the analysis of human brain lipids.
Abbott, Sarah K; Jenner, Andrew M; Mitchell, Todd W; Brown, Simon H J; Halliday, Glenda M; Garner, Brett
2013-03-01
We have developed a protocol suitable for high-throughput lipidomic analysis of human brain samples. The traditional Folch extraction (using chloroform and glass-glass homogenization) was compared to a high-throughput method combining methyl-tert-butyl ether (MTBE) extraction with mechanical homogenization utilizing ceramic beads. This high-throughput method significantly reduced sample handling time and increased efficiency compared to glass-glass homogenizing. Furthermore, replacing chloroform with MTBE is safer (less carcinogenic/toxic), with lipids dissolving in the upper phase, allowing for easier pipetting and the potential for automation (i.e., robotics). Both methods were applied to the analysis of human occipital cortex. Lipid species (including ceramides, sphingomyelins, choline glycerophospholipids, ethanolamine glycerophospholipids and phosphatidylserines) were analyzed via electrospray ionization mass spectrometry and sterol species were analyzed using gas chromatography mass spectrometry. No differences in lipid species composition were evident when the lipid extraction protocols were compared, indicating that MTBE extraction with mechanical bead homogenization provides an improved method for the lipidomic profiling of human brain tissue.
Gao, Kai; Chung, Eric T.; Gibson, Richard L.; ...
2015-06-05
The development of reliable methods for upscaling fine-scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. We therefore propose a numerical homogenization algorithm based on multiscale finite element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method uses multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters are then computed using these basis functions, and the approach applies a numerical discretization similar to the rotated staggered-grid finite difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity, where the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.
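One classical instance of the analytic upscaling mentioned for finely layered media is Backus-style averaging; a sketch for the normal-incidence case, assuming isotropic layers much thinner than the wavelength (layer values hypothetical):

```python
import numpy as np

# Effective medium for a stack of thin isotropic layers at normal incidence:
# the effective P-wave modulus is the thickness-weighted harmonic mean of the
# layer moduli, and the effective density is the arithmetic mean.
def upscale_layers(thickness, modulus, density):
    f = np.asarray(thickness, dtype=float)
    f = f / f.sum()                                        # thickness fractions
    m_eff = 1.0 / np.sum(f / np.asarray(modulus, float))   # harmonic mean
    rho_eff = np.sum(f * np.asarray(density, float))       # arithmetic mean
    v_eff = np.sqrt(m_eff / rho_eff)                       # effective P-wave speed
    return m_eff, rho_eff, v_eff

# Two equal-thickness layers with hypothetical moduli (Pa) and densities (kg/m^3):
m_eff, rho_eff, v_eff = upscale_layers([1.0, 1.0], [10e9, 40e9], [2000.0, 3000.0])
```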
Mixed mode control method and engine using same
Kesse, Mary L [Peoria, IL; Duffy, Kevin P [Metamora, IL
2007-04-10
A method of mixed mode operation of an internal combustion engine includes the steps of controlling a homogeneous charge combustion event timing in a given engine cycle, and controlling a conventional charge injection event to be at least a predetermined time after the homogeneous charge combustion event. An internal combustion engine is provided, including an electronic controller having a computer readable medium with a combustion timing control algorithm recorded thereon, the control algorithm including means for controlling a homogeneous charge combustion event timing and means for controlling a conventional injection event timing to be at least a predetermined time from the homogeneous charge combustion event.
Homogenizing Advanced Alloys: Thermodynamic and Kinetic Simulations Followed by Experimental Results
NASA Astrophysics Data System (ADS)
Jablonski, Paul D.; Hawk, Jeffrey A.
2017-01-01
Segregation of solute elements occurs in nearly all metal alloys during solidification. The resultant elemental partitioning can severely degrade as-cast material properties and lead to difficulties during post-processing (e.g., hot shorts and incipient melting). Many cast articles are subjected to a homogenization heat treatment in order to minimize segregation and improve their performance. Traditionally, homogenization heat treatments are based upon past practice or time-consuming trial-and-error experiments. Through the use of thermodynamic and kinetic modeling software, NETL has designed a systematic method to optimize homogenization heat treatments. The method allows engineers and researchers to homogenize casting chemistries to levels appropriate for a given application. It also allows heat treatment schedules to be adjusted to fit limitations on in-house equipment (capability, reliability, etc.) while maintaining clear numeric targets for segregation reduction. In this approach, the Scheil module within Thermo-Calc is used to predict the as-cast segregation present within an alloy, and diffusion-controlled transformation simulations are then used to model homogenization kinetics as a function of time and temperature. Examples of computationally designed heat treatments and verification of their effects on segregation and properties of real castings are presented.
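The time-temperature trade-off that such simulations quantify can be sketched with the classical single-sinusoid diffusion-decay estimate; this is a textbook approximation, not the Thermo-Calc/Scheil workflow itself, and the diffusion data below are hypothetical.

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Residual segregation of a sinusoidal composition profile of wavelength lam
# (the dendrite arm spacing) decays as delta(t) = exp(-4*pi^2*D*t/lam^2),
# with D = d0 * exp(-q / (R*T)). Solve for the time to reach delta_target.
def homogenization_time(delta_target, lam, d0, q, temp_k):
    d = d0 * math.exp(-q / (R * temp_k))
    return -math.log(delta_target) * lam**2 / (4.0 * math.pi**2 * d)

# Hypothetical diffusion data (d0 = 1e-4 m^2/s, Q = 240 kJ/mol), 100 um spacing,
# reduce segregation amplitude to 1% at 1150 C vs. 1200 C:
t_1150 = homogenization_time(0.01, 100e-6, 1.0e-4, 240e3, 1423.0)
t_1200 = homogenization_time(0.01, 100e-6, 1.0e-4, 240e3, 1473.0)
```

The Arrhenius dependence of D makes the required hold time drop sharply with temperature, which is why schedule optimization against furnace limits is worthwhile.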
NASA Astrophysics Data System (ADS)
Heng, Ri-Liang; Pilon, Laurent
2016-05-01
This study presents experimental measurements of the radiation characteristics of the unicellular freshwater cyanobacterium Synechocystis sp. during exponential growth in F medium. The scattering phase function at 633 nm and the average spectral absorption and scattering cross-sections between 400 and 750 nm were measured. In addition, an inverse method was used for retrieving the spectral effective complex index of refraction of overlapping or touching bispheres and quadspheres from their absorption and scattering cross-sections. The inverse method combines a genetic algorithm and a forward model based on Lorenz-Mie theory, treating bispheres and quadspheres as projected-area and volume-equivalent coated spheres. The inverse method was successfully validated with numerically predicted average absorption and scattering cross-sections of suspensions consisting of bispheres and quadspheres, with realistic size distributions, using the T-matrix method. It was able to retrieve the monomers' complex index of refraction for size parameters up to 11, relative refractive indices less than 1.3, and absorption indices less than 0.1. Then, the inverse method was applied to retrieve the effective spectral complex index of refraction of Synechocystis sp., approximated as randomly oriented aggregates consisting of two overlapping homogeneous spheres. Both the measured absorption cross-section and the retrieved absorption index featured peaks at 435 and 676 nm corresponding to chlorophyll a, a peak at 625 nm corresponding to phycocyanin, and a shoulder around 485 nm corresponding to carotenoids. These results can be used to optimize and control light transfer in photobioreactors. The inverse method and the equivalent coated sphere model could be applied to other optically soft particles of similar morphologies.
NASA Astrophysics Data System (ADS)
Semchishen, Vladimir A.; Mrochen, Michael; Seminogov, Vladimir N.; Panchenko, Vladislav Y.; Seiler, Theo
1998-04-01
Purpose: The increasing interest in a homogeneous Gaussian light beam profile for applications in ophthalmology, e.g. photorefractive keratectomy (PRK), calls for simple optical systems with low energy losses. Therefore, we developed the Light Shaping Beam Homogenizer (LSBH), working from the UV up to the mid-IR. Method: The irregular microlens structure on a quartz surface was fabricated using photolithography, chemical etching and chemical polishing processes. This created a three-dimensional structure on the quartz substrate characterized, in the case of a Gaussian beam, by a random distribution of the tilts of the individual irregularities. The LSBH was realized for the 193 nm and 2.94 micrometer wavelengths. Simulation results obtained by 3-D analysis for an arbitrary incident light beam were compared to experimental results. Results: The correlation to a numerical Gaussian fit is better than 94%, with high uniformity for an incident beam with an intensity modulation of nearly 100%. In the far field, the cross section of the beam always shows rotational symmetry. Transmittance and damage threshold of the LSBH depend only on the substrate characteristics. Conclusions: Considering our experimental and simulation results, it is possible to control the angular distribution of the beam intensity after the LSBH with higher efficiency compared to diffractive or holographic optical elements.
Layout optimization using the homogenization method
NASA Technical Reports Server (NTRS)
Suzuki, Katsuyuki; Kikuchi, Noboru
1993-01-01
A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures, in order to explore the possibility of establishing an integrated design system for automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first of two articles.
Generating highly uniform electromagnetic field characteristics
Crow, J.T.
1997-06-24
An apparatus and method are disclosed for generating homogeneous electromagnetic fields within a volume. The homogeneity provided may be for magnetic and/or electric fields, and for field magnitude, radial gradient, or higher order radial derivative. The invention comprises conductive pathways oriented mirror symmetrically about a desired region of homogeneity. A corresponding apparatus and method is provided for substantially canceling the electromagnetic field outside of the apparatus, comprising a second set of conductive pathways placed outside the first set. 26 figs.
Generating highly uniform electromagnetic field characteristics
Crow, J.T.
1998-05-05
An apparatus and method are disclosed for generating homogeneous electromagnetic fields within a volume. The homogeneity provided may be for magnetic and/or electric fields, and for field magnitude, radial gradient, or higher order radial derivative. The invention comprises conductive pathways oriented about a desired region of homogeneity. A corresponding apparatus and method is provided for substantially canceling the electromagnetic field outside of the apparatus, comprising a second set of conductive pathways placed outside the first set. 55 figs.
Generating highly uniform electromagnetic field characteristics
Crow, J.T.
1998-02-10
An apparatus and method for generating homogeneous electromagnetic fields within a volume is disclosed. The homogeneity provided may be for magnetic and/or electric fields, and for field magnitude, radial gradient, or higher order radial derivative. The invention comprises conductive pathways oriented mirror symmetrically about a desired region of homogeneity. A corresponding apparatus and method is provided for substantially canceling the electromagnetic field outside of the apparatus, comprising a second set of conductive pathways placed outside the first set. 39 figs.
Physical characteristics of the Bahia Blanca estuary (Argentina)
NASA Astrophysics Data System (ADS)
Piccolo, Maria Cintia; Perillo, Gerardo M. E.
1990-09-01
Based on temperature, salinity and current velocity and direction data, the physical characteristics of the Bahia Blanca estuary are described. Data were gathered in vertical profiles made in longitudinal as well as hourly surveys. Freshwater runoff averages 2 m³ s⁻¹; however, peak floods may reach 10-50 m³ s⁻¹. The temperature distribution is quite homogeneous in the estuary. Based on the salinity distribution, the estuary can be divided into two sectors: an inner one showing partially mixed characteristics with a strong tendency to become sectionally homogeneous during runoff conditions similar to the historical averages, and an outer sector which is sectionally homogeneous. Salinity values in the inner sector may be larger than those observed on the inner continental shelf. This results from the restricted circulation in the inner estuary, augmented by the tidal washing of back-estuary salt flats and by evaporation processes. Analysis of the residual circulation shows a marked difference in the direction of mass transport. In the deeper regions of the sections (northern flank) the flow reverses with depth, being headward near the bottom. However, net transport is landward in the shallower parts.
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J. (Principal Investigator)
1984-01-01
Two-stream methods provide rapid approximate calculations of radiative transfer in scattering and absorbing media. Although they provide information on fluxes only, and not on intensities, their speed makes them attractive alternatives to more precise methods. This work provides a comprehensive, unified review of two-stream methods for a homogeneous layer and solves the equations for the reflectance and transmittance of a homogeneous layer over a non-reflecting surface. Any of the basic kernels for a single layer can be extended to a vertically inhomogeneous medium over a surface whose reflectance properties vary with illumination angle, as long as the medium can be subdivided into homogeneous layers.
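The extension from a single homogeneous layer to a subdivided, vertically inhomogeneous medium relies on the classical adding relations; a minimal sketch, assuming symmetric layers (equal reflectance and transmittance for illumination from above and below):

```python
# Combine two plane-parallel layers (top: r1, t1; bottom: r2, t2) into one.
# The 1/(1 - r1*r2) factor sums the geometric series of multiple reflections
# bouncing between the two layers.
def add_layers(r1, t1, r2, t2):
    denom = 1.0 - r1 * r2
    r = r1 + t1 * t1 * r2 / denom   # combined reflectance
    t = t1 * t2 / denom             # combined transmittance
    return r, t

# Two conservative (non-absorbing) layers, r + t = 1 for each:
r, t = add_layers(0.3, 0.7, 0.5, 0.5)
```

Applying `add_layers` repeatedly down a stack, with the surface reflectance as the last "layer", upscales the whole inhomogeneous column; for conservative layers the combined result still conserves energy (r + t = 1).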
NASA Astrophysics Data System (ADS)
Lugovtsova, Y. D.; Soldatov, A. I.
2016-01-01
Three different methods for pile integrity testing are compared on a cylindrical homogeneous polyamide specimen. The methods are low strain pile integrity testing, multichannel pile integrity testing, and testing with a shaker system. Since low strain pile integrity testing is a well-established and standardized method, its results are used as a reference for the other two methods.
ERIC Educational Resources Information Center
Blakley, G. R.
1982-01-01
Reviews mathematical techniques for solving systems of homogeneous linear equations and demonstrates that the algebraic method of balancing chemical equations is a matter of solving a system of homogeneous linear equations. FORTRAN programs applying this matrix method to chemical equation balancing are available from the author. (JN)
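The balancing-as-a-homogeneous-system idea can be sketched in a few lines (in Python rather than the author's FORTRAN): the stoichiometric coefficients span the nullspace of the element-composition matrix. Shown for methane combustion, CH4 + O2 → CO2 + H2O.

```python
import numpy as np

# Element-composition matrix A: rows are elements (C, H, O), columns are
# species (CH4, O2, CO2, H2O); products enter with negative sign so that
# A @ x = 0 expresses conservation of each element.
A = np.array([
    [1, 0, -1,  0],   # carbon
    [4, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
], dtype=float)

# The nullspace of A is one-dimensional here; take the right-singular vector
# for the zero singular value and rescale to small integer coefficients.
_, _, vt = np.linalg.svd(A)
x = vt[-1]
x = x / np.min(np.abs(x))          # smallest coefficient becomes 1
coeffs = np.round(np.abs(x)).astype(int)
```

For reactions with a multi-dimensional nullspace (independent sub-reactions), every nullspace basis vector yields a balanced equation.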
A Homogenization Approach for Design and Simulation of Blast Resistant Composites
NASA Astrophysics Data System (ADS)
Sheyka, Michael
Structural composites have been used in aerospace and structural engineering due to their high strength to weight ratio. Composite laminates have been successfully and extensively used in blast mitigation. This dissertation examines the use of the homogenization approach to design and simulate blast resistant composites. Three case studies are performed to examine the usefulness of different methods that may be used in designing and optimizing composite plates for blast resistance. The first case study utilizes a single degree of freedom system to simulate the blast and a reliability based approach. The first case study examines homogeneous plates and the optimal stacking sequence and plate thicknesses are determined. The second and third case studies use the homogenization method to calculate the properties of composite unit cell made of two different materials. The methods are integrated with dynamic simulation environments and advanced optimization algorithms. The second case study is 2-D and uses an implicit blast simulation, while the third case study is 3-D and simulates blast using the explicit blast method. Both case studies 2 and 3 rely on multi-objective genetic algorithms for the optimization process. Pareto optimal solutions are determined in case studies 2 and 3. Case study 3 is an integrative method for determining optimal stacking sequence, microstructure and plate thicknesses. The validity of the different methods such as homogenization, reliability, explicit blast modeling and multi-objective genetic algorithms are discussed. Possible extension of the methods to include strain rate effects and parallel computation is also examined.
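The Pareto-optimal solutions sought in case studies 2 and 3 can be made concrete with a minimal non-dominated filter; this is a generic sketch (minimizing every objective, e.g., hypothetical weight and peak-deflection values), not the dissertation's genetic-algorithm implementation.

```python
# Keep a point only if no other point is at least as good in all objectives
# and strictly better in at least one (standard Pareto dominance, minimization).
def pareto_front(points):
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (weight, deflection) designs:
designs = [(1, 4), (2, 2), (4, 1), (3, 3), (5, 5)]
front = pareto_front(designs)
```

A multi-objective genetic algorithm evolves the population toward exactly this non-dominated set instead of collapsing the objectives into one weighted score.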
Staniszewska-Slezak, Emilia; Malek, Kamilla; Baranska, Malgorzata
2015-08-05
Raman spectroscopy with four excitation lines in the visible (Vis: 488, 532, 633 nm) and near infrared (NIR: 785 nm) was used for biochemical analysis of rat tissue homogenates, i.e. myocardium, brain, liver, lung, intestine, and kidney. The Vis Raman spectra are very similar for some organs (brain/intestine and kidney/liver) and are dominated by heme signals for lung and myocardium tissues (especially with 532 nm excitation). On the other hand, the NIR Raman spectra are specific for each tissue and more informative than the corresponding spectra collected with the Vis excitations. The spectra, analyzed without any special pre-processing, clearly illustrate the different chemical composition of each tissue and give information about main components, e.g. lipids or proteins, but also about the content of some specific compounds such as amino acid residues, nucleotides and nucleobases. However, in order to obtain full spectral information about the complex composition of the tissues, the spectra from the Vis and NIR excitations should be collected and analyzed together. A good agreement between data gathered from Raman spectra of the homogenates and those obtained previously from Raman imaging of tissue cross-sections indicates that the approach presented here can be a method of choice for investigating biochemical variation in animal tissues. Moreover, the Raman spectral profile of tissue homogenates is specific enough to be used for investigating potential pathological changes the organism undergoes, in particular when supported by complementary FTIR spectroscopy.
The improvement of the method of equivalent cross section in HTR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, J.; Li, F.
The Method of Equivalent Cross-Sections (MECS) is a combined transport-diffusion method. By appropriately adjusting the diffusion coefficient of the homogenized absorber region, diffusion theory can yield satisfactory results for a full core model with strong neutron absorber material, for example the control rods in a high temperature gas-cooled reactor (HTR). The original implementation of MECS, based on a 1-D cell transport model, has limitations in accuracy and applicability; a new implementation based on a 2-D transport model is proposed and tested in this paper. This improvement extends MECS to the calculation of the twin small-absorber-ball system, which has a non-circular boring in the graphite reflector and a different radial position. A least-squares algorithm for the calculation of the equivalent diffusion coefficient is adopted, and a special treatment of the diffusion coefficient for the higher energy groups is proposed for the case in which the absorber is absent. Numerical results from adopting MECS in control rod calculations in HTR are encouraging. However, some problems remain. (authors)
Multicomponent homogeneous alloys and method for making same
Dutta, Partha S.; Miller, Thomas R.
2003-09-02
The present application discloses a method for preparing a homogeneous ternary or quaternary alloy from a quaternary melt. The method includes providing a family of phase diagrams for the quaternary melt which shows (i) composition/temperature data, (ii) tie lines connecting equilibrium liquid and solid compositions, and (iii) isotherms representing boundaries of a miscibility gap. Based on the family of phase diagrams, a quaternary melt composition and an alloy growth temperature is selected. A quaternary melt having the selected quaternary melt composition is provided and a ternary or quaternary alloy is grown from the quaternary melt at the selected alloy growth temperature. A method for making homogeneous ternary or quaternary alloy from a ternary or quaternary melt is also disclosed, as are homogeneous quaternary single-crystal alloys which are substantially free from crystal defects and which have the formula A.sub.x B.sub.1-x C.sub.y D.sub.1-y, x and y being the same or different and in the range of 0.001 to 0.999.
NASA Astrophysics Data System (ADS)
Lye, Peter G.; Bradbury, Ronald; Lamb, David W.
Silica optical fibres were used to measure colour (mg anthocyanin/g fresh berry weight) in samples of red wine grape homogenates via optical Fibre Evanescent Field Absorbance (FEFA). Colour measurements from 126 samples of grape homogenate were compared against the standard industry spectrophotometric reference method, which involves chemical extraction and subsequent optical absorption measurements of clarified samples at 520 nm. FEFA absorbance on homogenates at 520 nm (FEFA520h) was correlated with the industry reference method measurements of colour (R2 = 0.46, n = 126). Using a simple regression equation, colour could be predicted with a standard error of cross-validation (SECV) of 0.21 mg/g, over a range of 0.6 to 2.2 mg anthocyanin/g with a standard deviation of 0.33 mg/g. With a Ratio of Performance Deviation (RPD) of 1.6, the technique, when utilizing only a single detection wavelength, is not robust enough to apply in a diagnostic sense; however, the results do demonstrate the potential of the FEFA method as a fast and low-cost assay of colour in homogenized samples.
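The quoted robustness figure follows directly from the reported statistics: RPD is the standard deviation of the reference values divided by the SECV.

```python
# Reproducing the abstract's Ratio of Performance Deviation from its own
# reported numbers (mg anthocyanin / g):
sd = 0.33    # standard deviation of the reference colour values
secv = 0.21  # standard error of cross-validation
rpd = sd / secv
```

As a rule of thumb in chemometrics, an RPD below about 2 (here ~1.6) indicates a model suited to rough screening rather than quantitative diagnostics, consistent with the authors' conclusion.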
Yong, Dongeun; Ki, Chang-Seok; Kim, Jae-Seok; Seong, Moon-Woo; Lee, Hyukmin
2016-01-01
Background Real-time reverse transcription PCR (rRT-PCR) of sputum samples is commonly used to diagnose Middle East respiratory syndrome coronavirus (MERS-CoV) infection. Owing to the difficulty of extracting RNA from sputum containing mucus, sputum homogenization is desirable prior to nucleic acid isolation. We determined optimal homogenization methods for isolating viral nucleic acids from sputum. Methods We evaluated the following three sputum-homogenization methods: proteinase K and DNase I (PK-DNase) treatment, phosphate-buffered saline (PBS) treatment, and N-acetyl-L-cysteine and sodium citrate (NALC) treatment. Sputum samples were spiked with inactivated MERS-CoV culture isolates. RNA was extracted from pretreated, spiked samples using the easyMAG system (bioMérieux, France). Extracted RNAs were then subjected to rRT-PCR for MERS-CoV diagnosis (DiaPlex Q MERS-coronavirus, SolGent, Korea). Results While analyzing 15 spiked sputum samples prepared in technical duplicate, false-negative results were obtained with five (16.7%) and four samples (13.3%), respectively, by using the PBS and NALC methods. The range of threshold cycle (Ct) values observed when detecting upE in sputum samples was 31.1–35.4 with the PK-DNase method, 34.7–39.0 with the PBS method, and 33.9–38.6 with the NALC method. Compared with the control, which was prepared by adding a one-tenth volume of 1:1,000 diluted viral culture to PBS solution, the ranges of Ct values obtained by the PBS and NALC methods differed significantly from the mean control Ct of 33.2 (both P<0.0001). Conclusions The PK-DNase method is suitable for homogenizing sputum samples prior to RNA extraction. PMID:27374711
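The practical meaning of a Ct shift can be sketched with the standard exponential-amplification relationship; this interpretation is a common rule of thumb (assuming ~100% amplification efficiency), not part of the study's own statistics.

```python
# Each PCR cycle roughly doubles the template, so a method whose Ct is dCt
# cycles higher than the control recovered about 2**dCt times less RNA.
def fold_loss(ct_method, ct_control):
    return 2.0 ** (ct_method - ct_control)

# Illustrative values: a method Ct of 36.0 vs. a control Ct of 33.0
loss = fold_loss(36.0, 33.0)
```

By this yardstick, the several-cycle shifts reported for the PBS and NALC methods correspond to roughly an order of magnitude less recoverable viral RNA than the PK-DNase method.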
Method for preparing hydrous zirconium oxide gels and spherules
Collins, Jack L.
2003-08-05
Methods for preparing hydrous zirconium oxide spherules, hydrous zirconium oxide gels such as gel slabs, films, capillary and electrophoresis gels, zirconium monohydrogen phosphate spherules, hydrous zirconium oxide spherules having suspendable particles homogeneously embedded within to form a composite sorbent, zirconium monohydrogen phosphate spherules having suspendable particles of at least one different sorbent homogeneously embedded within to form a composite sorbent having a desired crystallinity, zirconium oxide spherules having suspendable particles homogeneously embedded within to form a composite, hydrous zirconium oxide fiber materials, zirconium oxide fiber materials, hydrous zirconium oxide fiber materials having suspendable particles homogeneously embedded within to form a composite, zirconium oxide fiber materials having suspendable particles homogeneously embedded within to form a composite and spherules of barium zirconate. The hydrous zirconium oxide spherules and gel forms prepared by the gel-sphere, internal gelation process are useful as inorganic ion exchangers, catalysts, getters and ceramics.
Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris
2015-07-17
Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation.
NASA Astrophysics Data System (ADS)
Vagh, Hardik A.; Baghai-Wadji, Alireza
2008-12-01
Current technological challenges in materials science and the high-tech device industry require the solution of boundary value problems (BVPs) involving regions of various scales, e.g. multiple thin layers, fibre-reinforced composites, and nano/micro pores. In most cases, straightforward application of standard variational techniques to BVPs of practical relevance necessarily leads to unsatisfactorily ill-conditioned analytical and/or numerical results. To remedy the computational challenges associated with sub-sectional heterogeneities, various sophisticated homogenization techniques need to be employed. Homogenization refers to the systematic process of smoothing out the sub-structural heterogeneities, leading to the determination of effective constitutive coefficients. Ordinarily, homogenization involves sophisticated averaging and asymptotic order analysis to obtain solutions. In the majority of cases only zero-order terms are constructed, owing to the complexity of the processes involved. In this paper we propose a constructive scheme for obtaining homogenized solutions involving higher-order terms, thus guaranteeing higher accuracy and greater robustness of the numerical results. We present
Magnetically controlled multifrequency invisibility cloak with a single shell of ferrite material
NASA Astrophysics Data System (ADS)
Wang, Xiaohua; Liu, Youwen
2015-02-01
A magnetically controlled multifrequency invisibility cloak with a single shell of isotropic and homogeneous ferrite material has been investigated based on the scattering cancellation method from Mie scattering theory. The analytical and simulated results have demonstrated that this shell can drastically reduce the total scattering cross-section of the cloaking system at multiple frequencies. These cloaking frequencies can be externally controlled, since the magnetic permeability of ferrites is readily tuned by the applied magnetic field. This may provide a potential way to design a tunable multifrequency invisibility cloak with considerable flexibility.
Sung, Heungsup; Yong, Dongeun; Ki, Chang Seok; Kim, Jae Seok; Seong, Moon Woo; Lee, Hyukmin; Kim, Mi Na
2016-09-01
Real-time reverse transcription PCR (rRT-PCR) of sputum samples is commonly used to diagnose Middle East respiratory syndrome coronavirus (MERS-CoV) infection. Owing to the difficulty of extracting RNA from sputum containing mucus, sputum homogenization is desirable prior to nucleic acid isolation. We determined optimal homogenization methods for isolating viral nucleic acids from sputum. We evaluated the following three sputum-homogenization methods: proteinase K and DNase I (PK-DNase) treatment, phosphate-buffered saline (PBS) treatment, and N-acetyl-L-cysteine and sodium citrate (NALC) treatment. Sputum samples were spiked with inactivated MERS-CoV culture isolates. RNA was extracted from pretreated, spiked samples using the easyMAG system (bioMérieux, France). Extracted RNAs were then subjected to rRT-PCR for MERS-CoV diagnosis (DiaPlex Q MERS-coronavirus, SolGent, Korea). While analyzing 15 spiked sputum samples prepared in technical duplicate, false-negative results were obtained with five (16.7%) and four (13.3%) samples, respectively, by the PBS and NALC methods. The range of threshold cycle (Ct) values observed when detecting upE in sputum samples was 31.1-35.4 with the PK-DNase method, 34.7-39.0 with the PBS method, and 33.9-38.6 with the NALC method. Compared with the control, which was prepared by adding a one-tenth volume of 1:1,000 diluted viral culture to PBS solution, the Ct values obtained by the PBS and NALC methods differed significantly from the mean control Ct of 33.2 (both P<0.0001). The PK-DNase method is suitable for homogenizing sputum samples prior to RNA extraction.
APPLICATIONS OF RESEARCH TO THE PROBLEM OF INSTRUCTIONAL FLEXIBILITY.
ERIC Educational Resources Information Center
SARTAIN, HARRY W.
SELECTED RESEARCH ON THE PROBLEM OF INSTRUCTIONAL FLEXIBILITY IS SURVEYED AND DISCUSSED. BROAD TOPICS OF DISCUSSION ARE DEPARTMENTALIZATION, HOMOGENEOUS SECTIONING, INTERCLASS ABILITY SECTIONING, THE EXTENT OF VARIABILITY IN READING DEVELOPMENT, AND PRACTICES THAT MAY INCREASE FLEXIBILITY. AMONG THOSE PRACTICES TO INCREASE FLEXIBILITY ARE TEAM…
NASA Astrophysics Data System (ADS)
Rymarczyk, Joanna; Kowalczyk, Piotr; Czerwosz, Elzbieta; Bielski, Włodzimierz
2011-09-01
The nanomechanical properties of nanostructural carbonaceous-palladium films are studied. The nanoindentation experiments are simulated numerically using the Finite Element Method. Homogenization theory is applied to compute the properties of the composite material, which are used as input data for the nanoindentation calculations.
Configuration optimization of space structures
NASA Technical Reports Server (NTRS)
Felippa, Carlos; Crivelli, Luis A.; Vandenbelt, David
1991-01-01
The objective is to develop a computer aid for the conceptual/initial design of aerospace structures, allowing configurations and shape to be a priori design variables. The topics are presented in viewgraph form and include the following: Kikuchi's homogenization method; a classical shape design problem; homogenization method steps; a 3D mechanical component design example; forming a homogenized finite element; a 2D optimization problem; treatment of the volume inequality constraint; algorithms for the volume inequality constraint; objective function derivatives--taking advantage of design locality; stiffness variations; variations of potential; and schematics of the optimization problem.
NASA Astrophysics Data System (ADS)
Liu, Shuyuan; Zhang, Yong; Feng, Yu; Shi, Changbin; Cao, Yong; Yuan, Wei
2018-02-01
A population balance sectional method (PBSM) coupled with computational fluid dynamics (CFD) is presented to simulate the capture of aerosolized oil droplets (AODs) in a range hood exhaust. The homogeneous nucleation and coagulation processes are modeled and simulated with this CFD-PBSM method. As the design angle α of the range hood exhaust varies from 60° to 30°, AOD capture increases, while the pressure drop between the inlet and the outlet of the range hood also increases, from 8.38 Pa to 175.75 Pa. Increasing inlet flow velocities also result in less AOD capture, although the total suction increases owing to the higher flow rates into the range hood. The CFD-PBSM method thus provides insight into the formation and capture of AODs, as well as their impact on the operation and design of the range hood exhaust.
A new treatment of nonlocality in scattering process
NASA Astrophysics Data System (ADS)
Upadhyay, N. J.; Bhagwat, A.; Jain, B. K.
2018-01-01
Nonlocality in the scattering potential leads to an integro-differential equation, in which nonlocality enters through an integral over the nonlocal potential kernel. The resulting Schrödinger equation is usually handled by approximating the (r, r′)-dependence of the nonlocal kernel. The present work proposes a novel method to solve the integro-differential equation. The method, using the mean value theorem of integral calculus, converts the nonhomogeneous term into a homogeneous one. The effective local potential in this equation turns out to be energy independent, but depends on the relative angular momentum. The method is accurate and valid for any form of nonlocality. As illustrative examples, the total and differential cross sections for neutron scattering off ¹²C, ⁵⁶Fe and ¹⁰⁰Mo nuclei are calculated with this method in the low energy region (up to 10 MeV) and are found to be in reasonable accord with the experiments.
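The reduction described can be sketched schematically (generic notation, not the authors'): the mean value theorem collapses the nonlocal integral onto the wavefunction at an intermediate point r̄, turning the nonhomogeneous term into an effective local, angular-momentum-dependent potential.

```latex
% Schematic of the reduction; symbols are generic, not the authors' notation.
\begin{align}
  \left[\frac{d^2}{dr^2} + k^2 - U_\ell(r)\right] u_\ell(r)
    &= \int_0^\infty K_\ell(r, r')\, u_\ell(r')\, dr' \\
  \int_0^\infty K_\ell(r, r')\, u_\ell(r')\, dr'
    &= u_\ell(\bar r)\int_0^\infty K_\ell(r, r')\, dr'
    \quad \text{(mean value theorem, some } \bar r\text{)} \\
  \Rightarrow\quad
  \left[\frac{d^2}{dr^2} + k^2 - U_\ell(r)
    - V_\ell^{\mathrm{eff}}(r)\right] u_\ell(r) &= 0,
  \qquad
  V_\ell^{\mathrm{eff}}(r)
    = \frac{u_\ell(\bar r)}{u_\ell(r)} \int_0^\infty K_\ell(r, r')\, dr'.
\end{align}
```

The ℓ subscript reflects the relative-angular-momentum dependence of the effective local potential noted in the abstract; the energy independence follows because the kernel integral itself carries no energy dependence.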
Derrick, Timothy R; Edwards, W Brent; Fellin, Rebecca E; Seay, Joseph F
2016-02-08
The purpose of this research was to utilize a series of models to estimate the stress in a cross section of the tibia, located 62% of the distance from the proximal end, during walking. Twenty-eight male, active-duty soldiers walked on an instrumented treadmill while external force data and kinematics were recorded. A rigid body model was used to estimate joint moments and reaction forces. A musculoskeletal model was used to gather muscle length, muscle velocity, moment arm and orientation information. Optimization procedures were used to estimate muscle forces, and finally internal bone forces and moments were applied to an inhomogeneous, subject-specific bone model obtained from CT scans to estimate stress in the bone cross section. Validity was assessed by comparison to stresses calculated from strain gage data in the literature, and sensitivity was investigated using two simplified versions of the bone model: a homogeneous model and an ellipse approximation. Peak compressive stress occurred on the posterior aspect of the cross section (-47.5 ± 14.9 MPa). Peak tensile stress occurred on the anterior aspect (27.0 ± 11.7 MPa), while the location of peak shear was variable between subjects (7.2 ± 2.4 MPa). Peak compressive, tensile and shear stresses were within 0.52 MPa, 0.36 MPa and 3.02 MPa, respectively, of those calculated from the converted strain gage data. Peak values from the inhomogeneous bone model correlated well with the homogeneous model (normal: 0.99; shear: 0.94), as did the normal stresses from the ellipse model (r=0.89-0.96). However, the relationship between shear stress in the inhomogeneous model and the ellipse model was less accurate (r=0.64). The procedures detailed in this paper provide a non-invasive and relatively quick method of estimating cross-sectional stress that holds promise for assessing injury and osteogenic stimulus in bone during normal physical activity.
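The final step, evaluating stress on a simplified cross section, follows standard beam theory. The sketch below assumes an idealized homogeneous elliptical section in the spirit of the paper's ellipse approximation; the geometry, loads and sign convention are illustrative, not the subjects' data.

```python
# Sketch: beam-theory normal stress at a point on a homogeneous elliptical
# cross section (axial force plus biaxial bending). All numbers illustrative.
import math

def ellipse_section(a, b):
    """Section properties for semi-axes a (x direction) and b (y direction)."""
    area = math.pi * a * b
    Ix = math.pi * a * b**3 / 4.0   # second moment about the x axis
    Iy = math.pi * a**3 * b / 4.0   # second moment about the y axis
    return area, Ix, Iy

def normal_stress(N, Mx, My, x, y, a, b):
    """sigma = N/A + Mx*y/Ix - My*x/Iy (compression negative)."""
    area, Ix, Iy = ellipse_section(a, b)
    return N / area + Mx * y / Ix - My * x / Iy

# Illustrative mid-stance loading on a tibia-sized ellipse:
a, b = 0.012, 0.015                      # semi-axes, m
sigma = normal_stress(N=-2500.0,         # axial compression, N
                      Mx=40.0, My=10.0,  # bending moments, N*m
                      x=0.0, y=-b,       # posterior-most point
                      a=a, b=b)
print(f"{sigma / 1e6:.1f} MPa")          # compressive (negative) posteriorly
```

The posterior compression and anterior tension pattern reported in the abstract falls out of the bending term: the Mx·y/Ix contribution changes sign across the neutral axis.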
Peripheral nerve magnetic stimulation: influence of tissue non-homogeneity
Krasteva, Vessela TZ; Papazov, Sava P; Daskalov, Ivan K
2003-01-01
Background: Peripheral nerves are situated in a highly non-homogeneous environment, including muscles, bones, blood vessels, etc. Time-varying magnetic field stimulation of the median and ulnar nerves in the carpal region is studied, with special consideration of the influence of non-homogeneities. Methods: A detailed three-dimensional finite element model (FEM) of the anatomy of the wrist region was built to assess the induced current distribution from external magnetic stimulation. The electromagnetic field distribution in the non-homogeneous domain was defined as an internal Dirichlet problem using the finite element method. The boundary conditions were obtained by analysis of the vector potential field excited by external current-driven coils. Results: The results include evaluation and graphical representation of the induced current field distribution at various stimulation coil positions. A comparative study of the real non-homogeneous structure, with anisotropic tissue conductivities, against a mock homogeneous medium is also presented. The possibility of achieving selective stimulation of either of the two nerves is assessed. Conclusion: The model developed could be useful in theoretical prediction of the current distribution in the nerves during diagnostic stimulation and therapeutic procedures involving electromagnetic excitation. The errors introduced by modeling a homogeneous domain rather than the real non-homogeneous biological structure are demonstrated. The practical implications of the applied approach are valid for any arbitrary weakly conductive medium. PMID:14693034
Segmentation and clustering as complementary sources of information
NASA Astrophysics Data System (ADS)
Dale, Michael B.; Allison, Lloyd; Dale, Patricia E. R.
2007-03-01
This paper examines the effects of using a segmentation method to identify change-points or edges in vegetation. It enforces coherence (spatial or temporal) in place of unconstrained clustering. The segmentation method involves change-point detection along a sequence of observations so that each cluster formed is composed of adjacent samples; this is a form of constrained clustering. The protocol fits one or more models, one per section, and the quality of each is assessed using a minimum message length criterion, which provides a rational basis for selecting an appropriate model. Although segmentation is less efficient than clustering, it provides additional information because it incorporates textural similarity as well as homogeneity. In addition, it can be useful in determining the various scales of variation that may apply to the data, providing a general method of small-scale pattern analysis.
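Change-point segmentation of the kind described can be sketched with a small dynamic program. Here a fixed per-segment penalty stands in for the minimum message length criterion (the real MML cost is model-specific), and the sequence is synthetic.

```python
# Sketch: optimal piecewise-constant segmentation of a 1-D sequence by
# dynamic programming, with a fixed penalty per segment as a stand-in
# for a minimum message length criterion. Data are synthetic.

def segment(y, penalty):
    """Return right edges of segments minimizing within-segment squared
    error plus `penalty` per segment."""
    n = len(y)
    s = [0.0] * (n + 1)   # prefix sums for O(1) segment cost
    s2 = [0.0] * (n + 1)
    for i, v in enumerate(y):
        s[i + 1] = s[i] + v
        s2[i + 1] = s2[i] + v * v

    def cost(i, j):  # squared error of y[i:j] around its own mean
        m = (s[j] - s[i]) / (j - i)
        return (s2[j] - s2[i]) - m * (s[j] - s[i])

    best = [0.0] + [float("inf")] * n
    prev = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(i, j) + penalty
            if c < best[j]:
                best[j], prev[j] = c, i

    bounds, j = [], n  # backtrack the optimal boundaries
    while j > 0:
        bounds.append(j)
        j = prev[j]
    return sorted(bounds)

y = [0.1, 0.0, 0.2, 0.1, 2.0, 2.1, 1.9, 2.2, 0.9, 1.1, 1.0]
print(segment(y, penalty=0.5))  # → [4, 8, 11]: three coherent segments
```

Each returned edge closes a run of adjacent, internally homogeneous samples, which is exactly the constrained-clustering behaviour the abstract contrasts with free clustering.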
Comparison of up-scaling methods in poroelasticity and its generalizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berryman, J G
2003-12-13
Four methods of up-scaling coupled equations at the microscale to equations valid at the mesoscale and/or macroscale for fluid-saturated and partially saturated porous media will be discussed, compared, and contrasted. The four methods are: (1) effective medium theory, (2) mixture theory, (3) two-scale and multiscale homogenization, and (4) volume averaging. All these methods have advantages for some applications and disadvantages for others. For example, effective medium theory, mixture theory, and homogenization methods can all give formulas for coefficients in the up-scaled equations, whereas volume averaging methods give the form of the up-scaled equations but generally must be supplemented with physical arguments and/or data in order to determine the coefficients. Homogenization theory requires a great deal of mathematical insight from the user in order to choose appropriate scalings for use in the resulting power-law expansions, while volume averaging requires more physical insight to motivate the steps needed to find coefficients. Homogenization often is performed on periodic models, while volume averaging does not require any assumption of periodicity and can therefore be related very directly to laboratory and/or field measurements. Validity of the homogenization process is often limited to specific ranges of frequency - in order to justify the scaling hypotheses that must be made - and therefore cannot be used easily over wide ranges of frequency. However, volume averaging methods can quite easily be used for wide-band data analysis. So, we learn from these comparisons that a researcher in the theory of poroelasticity and its generalizations needs to be conversant with two or more of these methods to solve problems generally.
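The "formulas for coefficients" that effective medium theory supplies can be illustrated with its simplest instances, the Voigt and Reuss bounds for a two-phase medium. The moduli below are illustrative values, not from the text.

```python
# Sketch: Voigt (iso-strain) and Reuss (iso-stress) bounds on the effective
# bulk modulus of a two-phase composite; the simplest effective-medium
# coefficient formulas. Moduli and volume fraction are illustrative.

def voigt(f1, k1, k2):
    """Arithmetic average: upper bound on the effective modulus."""
    return f1 * k1 + (1.0 - f1) * k2

def reuss(f1, k1, k2):
    """Harmonic average: lower bound on the effective modulus."""
    return 1.0 / (f1 / k1 + (1.0 - f1) / k2)

# 30% stiff mineral (37 GPa) in a compliant fluid-like matrix (2.2 GPa):
kv = voigt(0.3, 37.0, 2.2)
kr = reuss(0.3, 37.0, 2.2)
print(f"Voigt {kv:.2f} GPa >= effective >= Reuss {kr:.2f} GPa")
```

Any physically admissible effective modulus must fall between the two averages, which is why these bounds are a common sanity check on the more elaborate up-scaling schemes the abstract compares.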
Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris
2015-01-01
Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation. PMID:26182891
Homogenization of Periodic Masonry Using Self-Consistent Scheme and Finite Element Method
NASA Astrophysics Data System (ADS)
Kumar, Nitin; Lambadi, Harish; Pandey, Manoj; Rajagopal, Amirtham
2016-01-01
Masonry is a heterogeneous anisotropic continuum, made up of brick and mortar arranged in a periodic manner. Obtaining the effective elastic stiffness of masonry structures has been a challenging task. In this study, the homogenization theory for periodic media is implemented in a very generic manner to derive the anisotropic global behavior of the masonry, through rigorous application of the homogenization theory in one step and with full three-dimensional behavior. We have considered the periodic Eshelby self-consistent method and the finite element method. Two representative unit cells that exactly capture the microstructure of the masonry wall are considered for calibration and numerical application of the theory.
Xiao, Yiling; McElheny, Dan; Hoshi, Minako; Ishii, Yoshitaka
2018-01-01
Intense efforts have been made to understand the molecular structures of misfolded amyloid β (Aβ) in order to gain insight into the pathological mechanism of Alzheimer's disease. Solid-state NMR spectroscopy (SSNMR) is considered a primary tool for elucidating the structures of insoluble and noncrystalline amyloid fibrils and other amyloid assemblies. In this chapter, we describe a detailed protocol to obtain the first atomic model of the 42-residue human Aβ peptide Aβ(1-42) in structurally homogeneous amyloid fibrils from our recent SSNMR study (Nat Struct Mol Biol 22:499-505, 2015). Despite great biological and clinical interest in Aβ(1-42) fibrils, their structural details had long been elusive until this study. The protocol is divided into four sections. First, the solid-phase peptide synthesis (SPPS) and purification of monomeric Aβ(1-42) are described. Second, we illustrate a controlled incubation method to prompt misfolding of Aβ(1-42) into homogeneous amyloid fibrils in an aqueous solution, with fragmented Aβ(1-42) fibrils as seeds. Next, we detail the analysis of Aβ(1-42) fibrils by SSNMR to obtain structural restraints. Finally, we describe methods to construct atomic models of Aβ(1-42) fibrils based on the SSNMR results through two-stage molecular dynamics calculations.
Limit analysis, rammed earth material and Casagrande test
NASA Astrophysics Data System (ADS)
El-Nabouch, Ranime; Pastor, Joseph; Bui, Quoc-Bao; Plé, Olivier
2018-02-01
The present paper is concerned with the simulation of the Casagrande test carried out on a rammed earth material for wall-type structures in the framework of Limit Analysis (LA). In a preliminary study, the material is considered as a homogeneous Coulomb material, and existing LA static and kinematic codes are used for the simulation of the test. In each loading case, the static and kinematic bounds coincide; the corresponding exact solution is a two-rigid-block mechanism together with a quasi-constant stress vector and a velocity jump that is also constant along the interface, for the three loading cases. In a second study, to take into account the influence of compressive loadings related to the porosity of the material, an elliptic criterion (denoted Cohesive Cam-Clay, CCC) is defined based on recent homogenization results for the hollow sphere model of porous Coulomb materials. Finally, original finite element formulations of the static and mixed kinematic methods for the CCC material are developed and applied to the Casagrande test. The results are the same as above, except that this time the velocity jump depends on the compressive loading, which is more realistic but does not fully satisfy the experimental observations. Therefore, possible extensions of this work towards non-standard direct methods are analyzed in the conclusion section.
D-Aspartic acid and nitric oxide as regulators of androgen production in boar testis.
Lamanna, Claudia; Assisi, Loredana; Vittoria, Alfredo; Botte, Virgilio; Di Fiore, Maria Maddalena
2007-01-15
D-Aspartic acid (D-Asp) and nitric oxide (NO) are two biologically active molecules playing important functions as neurotransmitters and neuromodulators of nerve impulses and as regulators of hormone production by endocrine organs. We studied the occurrence of D-Asp and NO, as well as their effects on testosterone synthesis, in the testis of the boar. This model was chosen for our investigations because the boar testis contains more Leydig cells than that of other mammals. Indirect immunofluorescence applied to cryostat sections was used to evaluate the co-localization of D-Asp and of the enzyme nitric oxide synthase (NOS) in the same Leydig cells. D-Asp and NOS often co-existed in the same Leydig cells and were found, separately, in many other testicular cytotypes. The D-Asp level was measured by an enzymatic method performed on boar testis extracts and was 40 ± 3.6 nmol/g of fresh tissue. NO was measured biochemically via determination of NOS activity and expressed as the quantity of nitrite produced: 155.25 ± 21.9 nmol/mg of tissue. The effects of the two molecules on steroid hormone production were evaluated by incubating testis homogenates with or without D-Asp and/or the NO donor L-arginine (L-Arg). After incubation, testosterone was measured by immunoenzymatic assay (EIA). These in vitro experiments showed that the addition of D-Asp to incubated testicular homogenates significantly increased testosterone concentration, whereas the addition of L-Arg decreased hormone production. Moreover, the inclusion of L-Arg in an incubation medium of testicular homogenates with added D-Asp completely inhibited the stimulating effect of this enantiomer. Our results suggest an autocrine action of both D-Asp and NO on the steroidogenic activity of the Leydig cell.
Hong Qian; Qinfeng Guo
2010-01-01
Aim: Biotic homogenization is a growing phenomenon and has recently attracted much attention. Here, we analyse a large dataset of native and alien plants in North America to examine whether biotic homogenization is related to several ecological and biological attributes. Location: North America (north of Mexico). Methods: We assembled...
Kwiatkowski, M; Wurlitzer, M; Krutilin, A; Kiani, P; Nimer, R; Omidi, M; Mannaa, A; Bussmann, T; Bartkowiak, K; Kruber, S; Uschold, S; Steffen, P; Lübberstedt, J; Küpker, N; Petersen, H; Knecht, R; Hansen, N O; Zarrine-Afsar, A; Robertson, W D; Miller, R J D; Schlüter, H
2016-02-16
Posttranslational modifications and proteolytic processing regulate almost all physiological processes. Dysregulation can potentially result in pathologic protein species causing diseases. Thus, tissue species proteomes of diseased individuals provide diagnostic information. Since the composition of tissue proteomes can rapidly change during tissue homogenization through the action of enzymes released from their compartments, disease-specific protein species patterns can vanish. Recently, we described a novel, ultrafast and soft method for cold vaporization of tissue via desorption by impulsive vibrational excitation (DIVE) using a picosecond infrared laser (PIRL). Given that DIVE extraction may provide improved access to the original composition of protein species in tissues, we compared the proteome composition of tissue protein homogenates after DIVE homogenization with conventional homogenizations. A higher number of intact protein species was observed in DIVE homogenates. Owing to the ultrafast transfer of proteins from tissue via the gas phase into frozen condensates of the aerosols, intact protein species were exposed to a lesser extent to enzymatic degradation reactions than with conventional protein extraction. In addition, the total protein yield is higher in DIVE homogenates, because they are very homogeneous and contain almost no insoluble particles, allowing direct analysis with subsequent analytical methods without the necessity of centrifugation. Enzymatic protein modifications during tissue homogenization are responsible for changes in the in vivo protein species composition. Cold vaporization of tissues by PIRL-DIVE is comparable to taking a snapshot, at the time of laser irradiation, of the dynamic changes that occur continuously under in vivo conditions. At that time point all biomolecules are transferred into an aerosol, which is immediately frozen.
Kwiatkowski, M.; Wurlitzer, M.; Krutilin, A.; Kiani, P.; Nimer, R.; Omidi, M.; Mannaa, A.; Bussmann, T.; Bartkowiak, K.; Kruber, S.; Uschold, S.; Steffen, P.; Lübberstedt, J.; Küpker, N.; Petersen, H.; Knecht, R.; Hansen, N.O.; Zarrine-Afsar, A.; Robertson, W.D.; Miller, R.J.D.; Schlüter, H.
2016-01-01
Posttranslational modifications and proteolytic processing regulate almost all physiological processes. Dysregulation can potentially result in pathologic protein species causing diseases. Thus, tissue species proteomes of diseased individuals provide diagnostic information. Since the composition of tissue proteomes can rapidly change during tissue homogenization through the action of enzymes released from their compartments, disease-specific protein species patterns can vanish. Recently, we described a novel, ultrafast and soft method for cold vaporization of tissue via desorption by impulsive vibrational excitation (DIVE) using a picosecond infrared laser (PIRL). Given that DIVE extraction may provide improved access to the original composition of protein species in tissues, we compared the proteome composition of tissue protein homogenates after DIVE homogenization with conventional homogenizations. A higher number of intact protein species was observed in DIVE homogenates. Owing to the ultrafast transfer of proteins from tissue via the gas phase into frozen condensates of the aerosols, intact protein species were exposed to a lesser extent to enzymatic degradation reactions than with conventional protein extraction. In addition, the total protein yield is higher in DIVE homogenates, because they are very homogeneous and contain almost no insoluble particles, allowing direct analysis with subsequent analytical methods without the necessity of centrifugation. Biological significance: Enzymatic protein modifications during tissue homogenization are responsible for changes in the in vivo protein species composition. Cold vaporization of tissues by PIRL-DIVE is comparable to taking a snapshot, at the time of laser irradiation, of the dynamic changes that occur continuously under in vivo conditions. At that time point all biomolecules are transferred into an aerosol, which is immediately frozen. PMID:26778141
USDA-ARS?s Scientific Manuscript database
Using a homogenization-evaporation method, beta-carotene (BC) loaded nanoparticles were prepared with different ratios of food-grade sodium caseinate (SC), whey protein isolate (WPI), or soy protein isolate (SPI) to BC and evaluated for their physicochemical stability, in vitro cytotoxicity, and cel...
Optimization of the Magnetic Field Homogeneity Area for Solenoid Type Magnets
NASA Astrophysics Data System (ADS)
Perepelkin, Eugene; Polyakova, Rima; Tarelkin, Aleksandr; Kovalenko, Alexander; Sysoev, Pavel; Sadovnikova, Marianne; Yudin, Ivan
2018-02-01
Homogeneous magnetic fields are important prerequisites in modern physics research. In this paper we discuss the problem of maximizing the magnetic field homogeneity area for solenoid magnets. We discuss the A-model and the B-model, which are basic types of solenoid magnets used to provide a homogeneous field, and methods for their optimization. We propose the C-model, which can be used for the NICA project. We have also carried out a cross-check of the C-model against the parameters stated for the CLEO II detector.
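The notion of a field homogeneity area can be made concrete with a toy on-axis calculation. The sketch below uses a Helmholtz-like loop pair as a stand-in for the solenoid models discussed (the A-, B- and C-models are not specified in the abstract); the geometry, current and 10 ppm tolerance are illustrative.

```python
# Sketch: on-axis field of a coaxial loop pair (Helmholtz spacing) and the
# axial extent over which the field stays within 10 ppm of its centre value.
# Geometry and current are illustrative stand-ins for a solenoid model.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def loop_b(z, R, I, z0):
    """Axial field of a circular current loop of radius R centred at z0."""
    d = z - z0
    return MU0 * I * R**2 / (2.0 * (R**2 + d**2) ** 1.5)

def pair_b(z, R, I, spacing):
    return loop_b(z, R, I, -spacing / 2) + loop_b(z, R, I, spacing / 2)

R, I = 0.5, 100.0            # m, A
b0 = pair_b(0.0, R, I, R)    # Helmholtz condition: spacing = R
z = 0.0                      # scan outward until deviation exceeds 10 ppm
while abs(pair_b(z, R, I, R) - b0) / b0 < 10e-6:
    z += 1e-4
print(f"homogeneous (10 ppm) for |z| < {z * 100:.1f} cm")
```

At Helmholtz spacing the second-order field variation cancels, so the homogeneous region grows with the fourth root of the tolerance; the optimization problems in the paper generalize this trade-off to multi-coil solenoid geometries.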
NASA Technical Reports Server (NTRS)
Kumar, P.; Patel, S. R.
1974-01-01
A method is described for studying theoretically the concentration fluctuations of a dilute contaminant undergoing a first-order chemical reaction. The method is based on Deissler's (1958) theory of homogeneous turbulence for times before the final period, and it follows the approach used by Loeffler and Deissler (1961) to study temperature fluctuations in homogeneous turbulence. Four-point correlation equations are obtained; it is assumed that terms containing fifth-order correlations are very small in comparison with those containing fourth-order correlations and can therefore be neglected. A spectrum equation is obtained in a form which can be solved numerically, yielding the decay law for the concentration fluctuations in homogeneous turbulence for times much before the final period of decay.
Kurien, B T; Kaufman, K M; Harley, J B; Scofield, R H
2001-09-15
A simple method for extracting DNA from agarose gel slices is described. The extraction is rapid and does not involve harsh chemicals or sophisticated equipment. The method involves homogenization of the excised gel slice (in Tris-EDTA buffer), containing the DNA fragment of interest, at 45 degrees C in a microcentrifuge tube with a Kontes pellet pestle for 1 min. The "homogenate" is then centrifuged for 30 s and the supernatant is saved. The "homogenized" agarose is extracted one more time and the supernatant obtained is combined with the previous supernatant. The DNA extracted using this method lent itself to restriction enzyme analysis, ligation, transformation, and expression of functional protein in bacteria. This method was found to be applicable with 0.8, 1.0, and 2.0% agarose gels. DNA fragments varying from 23 to 0.4 kb were extracted using this procedure, with yields ranging from 40 to 90%. The yield was higher for fragments of 2.0 kb and above (70-90%). This range of efficiency was maintained when the starting material was kept between 10 and 300 ng. The heat step was found to be critical, since homogenization at room temperature failed to yield any DNA. Extracting DNA with our method gave an increased yield (up to twofold) compared with extraction using a commercial kit. Also, the number of transformants obtained using DNA extracted with our method was at least twice that obtained using DNA extracted with the commercial kit.
Grant, Irene R; Williams, Alan G; Rowe, Michael T; Muir, D Donald
2005-06-01
The effect of various pasteurization time-temperature conditions with and without homogenization on the viability of Mycobacterium avium subsp. paratuberculosis was investigated using a pilot-scale commercial high-temperature, short-time (HTST) pasteurizer and raw milk spiked with 10¹ to 10⁵ M. avium subsp. paratuberculosis cells/ml. Viable M. avium subsp. paratuberculosis was cultured from 27 (3.3%) of 816 pasteurized milk samples overall, 5 on Herrold's egg yolk medium and 22 by BACTEC culture. Therefore, in 96.7% of samples, M. avium subsp. paratuberculosis had been completely inactivated by HTST pasteurization, alone or in combination with homogenization. Heat treatments incorporating homogenization at 2,500 lb/in², applied upstream (as a separate process) or in hold (at the start of a holding section), resulted in significantly fewer culture-positive samples than pasteurization treatments without homogenization (P < 0.001 for those in hold and P < 0.05 for those upstream). Where colony counts were obtained, the number of surviving M. avium subsp. paratuberculosis cells was estimated to be 10 to 20 CFU/150 ml, and the reduction in numbers achieved by HTST pasteurization with or without homogenization was estimated to be 4.0 to 5.2 log₁₀. The impact of homogenization on clump size distribution in M. avium subsp. paratuberculosis broth suspensions was subsequently assessed using a Mastersizer X spectrometer. These experiments demonstrated that large clumps of M. avium subsp. paratuberculosis cells were reduced to single-cell or "miniclump" status by homogenization at 2,500 lb/in². Consequently, when HTST pasteurization was being applied to homogenized milk, the M. avium subsp. paratuberculosis cells would have been present as predominantly declumped cells, which may possibly explain the greater inactivation achieved by the combination of pasteurization and homogenization.
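The log10 reduction quoted in the abstract is defined from viable counts before and after treatment. Below is a minimal helper with illustrative numbers; the abstract's per-sample accounting of spike levels and volumes is not given, so no attempt is made to reproduce the exact 4.0 to 5.2 range.

```python
# Minimal sketch of the log10-reduction metric used in the abstract.
# Inputs are illustrative; the abstract does not give per-sample figures.
import math

def log_reduction(n_before, n_after):
    """log10 reduction from viable counts before and after treatment
    (both in the same units, e.g. CFU/ml)."""
    return math.log10(n_before / n_after)

# e.g. a 10^5 cells/ml spike reduced to 10 surviving cells/ml:
print(round(log_reduction(1e5, 10), 1))  # 4.0
```

Each additional log of reduction corresponds to a further tenfold drop in survivors, which is why the 4.0 to 5.2 range reported represents a 10,000- to ~160,000-fold inactivation.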
Method for preparing hydrous titanium oxide spherules and other gel forms thereof
Collins, J.L.
1998-10-13
The present invention provides methods for preparing hydrous titanium oxide spherules, hydrous titanium oxide gels such as gel slabs, films, capillary and electrophoresis gels, titanium monohydrogen phosphate spherules, hydrous titanium oxide spherules having suspendible particles homogeneously embedded within to form a composite sorbent, titanium monohydrogen phosphate spherules having suspendible particles of at least one different sorbent homogeneously embedded within to form a composite sorbent having a desired crystallinity, titanium oxide spherules in the form of anatase, brookite or rutile, titanium oxide spherules having suspendible particles homogeneously embedded within to form a composite, hydrous titanium oxide fiber materials, titanium oxide fiber materials, hydrous titanium oxide fiber materials having suspendible particles homogeneously embedded within to form a composite, titanium oxide fiber materials having suspendible particles homogeneously embedded within to form a composite and spherules of barium titanate. These variations of hydrous titanium oxide spherules and gel forms prepared by the gel-sphere, internal gelation process offer more useful forms of inorganic ion exchangers, catalysts, getters and ceramics. 6 figs.
NASA Technical Reports Server (NTRS)
Wegener, P. P.
1980-01-01
A cryogenic wind tunnel is based on the twofold idea of lowering drive power and increasing Reynolds number by operating with nitrogen near its boiling point. There are two possible types of condensation problems involved in this mode of wind tunnel operation. They concern the expansion from the nozzle supply to the test section at relatively low cooling rates, and secondly the expansion around models in the test section. This secondary expansion involves higher cooling rates and shorter time scales. In addition to these two condensation problems it is not certain what purity of nitrogen can be achieved in a large facility. Therefore, one cannot rule out condensation processes other than those of homogeneous nucleation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Österreicher, Johannes Albert; Kumar, Manoj
Characterization of Mg-Si precipitates is crucial for optimizing the homogenization heat treatment of Al-Mg-Si alloys. Although sample preparation is key for high quality scanning electron microscopy imaging, most common methods lead to dealloying of Mg-Si precipitates. In this article we systematically evaluate different sample preparation methods: mechanical polishing, etching with various reagents, and electropolishing using different electrolytes. We demonstrate that the use of a nitric acid and methanol electrolyte for electropolishing a homogenized Al-Mg-Si alloy prevents the dissolution of Mg-Si precipitates, resulting in micrographs of higher quality. This preparation method is investigated in depth and the obtained scanning electron microscopy images are compared with transmission electron micrographs: the shape and size of Mg-Si precipitates appear very similar in either method. The scanning electron micrographs allow proper identification and measurement of the Mg-Si phases including needles with lengths of roughly 200 nm. These needles are β″ precipitates as confirmed by high resolution transmission electron microscopy. - Highlights: •Secondary precipitation in homogenized 6xxx Al alloys is crucial for extrudability. •Existing sample preparation methods for SEM are improvable. •Electropolishing with nitric acid/methanol yields superior quality in SEM. •The obtained micrographs are compared to TEM micrographs.
MC2-3: Multigroup Cross Section Generation Code for Fast Reactor Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Changho; Yang, Won Sik
This paper presents the methods and performance of the MC2-3 code, which is a multigroup cross-section generation code for fast reactor analysis, developed to improve the resonance self-shielding and spectrum calculation methods of MC2-2 and to simplify the current multistep schemes generating region-dependent broad-group cross sections. Using the basic neutron data from ENDF/B data files, MC2-3 solves the consistent P1 multigroup transport equation to determine the fundamental mode spectra for use in generating multigroup neutron cross sections. A homogeneous medium or a heterogeneous slab or cylindrical unit cell problem is solved in ultrafine (2082) or hyperfine (~400 000) group levels. In the resolved resonance range, pointwise cross sections are reconstructed with Doppler broadening at specified temperatures. The pointwise cross sections are directly used in the hyperfine group calculation, whereas for the ultrafine group calculation, self-shielded cross sections are prepared by numerical integration of the pointwise cross sections based upon the narrow resonance approximation. For both the hyperfine and ultrafine group calculations, unresolved resonances are self-shielded using the analytic resonance integral method. The ultrafine group calculation can also be performed for a two-dimensional whole-core problem to generate region-dependent broad-group cross sections. Verification tests have been performed using the benchmark problems for various fast critical experiments including Los Alamos National Laboratory critical assemblies; Zero-Power Reactor, Zero-Power Physics Reactor, and Bundesamt für Strahlenschutz experiments; Monju start-up core; and Advanced Burner Test Reactor. Verification and validation results with ENDF/B-VII.0 data indicated that eigenvalues from MC2-3/DIF3D agreed well with Monte Carlo N-Particle (MCNP5) or VIM Monte Carlo solutions within 200 pcm and regionwise one-group fluxes were in good agreement with Monte Carlo solutions.
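The broad-group condensation at the heart of such codes is a flux-weighted average. A minimal sketch of reaction-rate-preserving group collapse, with illustrative numbers only; MC2-3's weighting spectra and self-shielding treatment are far richer:

```python
import numpy as np

# Hedged sketch of reaction-rate-preserving group collapse, the basic
# operation behind condensing fine-group data to broad groups.
def collapse(sigma_fine, flux_fine, group_map):
    """Collapse fine-group cross sections to broad groups; group_map[i]
    is the broad-group index of fine group i."""
    sigma = np.asarray(sigma_fine, float)
    flux = np.asarray(flux_fine, float)
    gmap = np.asarray(group_map)
    sigma_broad = []
    for g in range(gmap.max() + 1):
        m = gmap == g
        # preserve the reaction rate: sum(sigma * phi) / sum(phi)
        sigma_broad.append((sigma[m] * flux[m]).sum() / flux[m].sum())
    return sigma_broad

# two fine groups per broad group:
print(collapse([10.0, 2.0, 1.0, 0.5], [1.0, 3.0, 2.0, 2.0], [0, 0, 1, 1]))
# broad-group values: 4.0 and 0.75
```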
Measurement of electromagnetic fields over a small electrolytic tank
NASA Astrophysics Data System (ADS)
Caffey, T. W. H.; Morris, H. E.
1990-12-01
In 1986, Hart proposed a large, hemispherical electrolytic tank and the use of the Surface Electrical Potential method with which to study resistivity changes due to energy-extraction processes in the earth. A second method for the inference of underground resistivity changes, the Controlled Source Audio-MagnetoTelluric method, has been widely used in the field. This method uses measurements of the electromagnetic field from a surface dipole, rather than the surface potential distribution from a buried vertical electrode, as the basis of the technique. If both SEP and CSAMT could be applied to the same model structure in the same electrolytic tank, it would seem that the diagnostic information would be enhanced over the use of each technique separately. Accordingly, the specific objectives were: to determine to what radial extent the bowl could be used as a homogeneous half-space; and to demonstrate acceptable accuracy by measuring the effect of a conducting target immersed in the bowl and comparing the measurements with numerical modeling. Electromagnetic fields over an electrolytic tank have been measured by others, and this report begins with a comparative summary of both prior and present work. The next section presents the formulas for the electromagnetic fields, and explains the choice of a particular method of measuring apparent resistivity. The field theory is also used in the subsequent section to provide error estimates needed for design guidance. The following sections describe the measurements, and the considerations for a larger facility. The appendices include the derivatives of the fields, the electrolyte characteristics, a description of the apparatus, and calibration methods.
Sandeep S. Nair; Sudhir Sharma; Yunqiao Pu; Qining Sun; Shaobo Pan; J.Y. Zhu; Yulin Deng; Art J. Ragauskas
2014-01-01
A new method to prepare nanolignin using a simple high shear homogenizer is presented. The kraft lignin particles with a broad distribution ranging from large micron- to nano-sized particles were completely homogenized to nanolignin particles with sizes less than 100 nm after 4 h of mechanical shearing. The ¹³C nuclear magnetic resonance (NMR)...
Samak, Yassmin O; El Massik, Magda; Coombes, Allan G A
2017-01-01
Alginate microparticles incorporating hydrocortisone hemisuccinate were produced by aerosolization and homogenization methods to investigate their potential for colonic drug delivery. Microparticle stabilization was achieved with CaCl2 crosslinking solution (0.5 M and 1 M), and drug loading was accomplished by diffusion into blank microparticles or by direct encapsulation. The homogenization method produced smaller microparticles (45-50 μm) than aerosolization (65-90 μm). High drug loadings (40% wt/wt) were obtained for diffusion-loaded aerosolized microparticles. Aerosolized microparticles suppressed drug release in simulated gastric fluid (SGF) and simulated intestinal fluid (SIF) prior to drug release in simulated colonic fluid (SCF) to a higher extent than homogenized microparticles. Microparticles prepared using aerosolization or homogenization (1 M CaCl2, diffusion loaded) released 5% and 17% of drug content after 2 h in SGF and 4 h in SIF, respectively, and 75% after 12 h in SCF. Thus, aerosolization and homogenization techniques show potential for producing alginate microparticles for colonic drug delivery in the treatment of inflammatory bowel disease. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Honarvar, M.; Lobo, J.; Mohareri, O.; Salcudean, S. E.; Rohling, R.
2015-05-01
To produce images of tissue elasticity, the vibro-elastography technique involves applying a steady-state multi-frequency vibration to tissue, estimating displacements from ultrasound echo data, and using the estimated displacements in an inverse elasticity problem with the shear modulus spatial distribution as the unknown. In order to fully solve the inverse problem, all three displacement components are required. However, using ultrasound, the axial component of the displacement is measured much more accurately than the other directions. Therefore, simplifying assumptions must be used in this case. Usually, the equations of motion are transformed into a Helmholtz equation by assuming tissue incompressibility and local homogeneity. The local homogeneity assumption causes significant imaging artifacts in areas of varying elasticity. In this paper, we remove the local homogeneity assumption. In particular we introduce a new finite element based direct inversion technique in which only the coupling terms in the equation of motion are ignored, so it can be used with only one component of the displacement. Both Cartesian and cylindrical coordinate systems are considered. The use of multi-frequency excitation also allows us to obtain multiple measurements and reduce artifacts in areas where the displacement of one frequency is close to zero. The proposed method was tested in simulations and experiments against a conventional approach in which the local homogeneity is used. The results show significant improvements in elasticity imaging with the new method compared to previous methods that assume local homogeneity. For example in simulations, the contrast to noise ratio (CNR) for the region with spherical inclusion increases from an average value of 1.5 to 17 after using the proposed method instead of the local inversion with homogeneity assumption, and similarly in the prostate phantom experiment, the CNR improved from an average value of 1.6 to about 20.
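The conventional local-homogeneity inversion that the paper improves on can be sketched in one dimension: under that assumption the equation of motion reduces to a Helmholtz equation, and the shear modulus follows pointwise from the displacement and its Laplacian. All numbers below are synthetic placeholders, not the paper's simulation setup:

```python
import numpy as np

# Hedged 1-D sketch of the "local homogeneity" inversion: the Helmholtz
# equation mu*laplacian(u) + rho*omega^2*u = 0 gives the pointwise estimate
# mu = -rho*omega^2*u / laplacian(u).
rho = 1000.0                                  # density, kg/m^3
omega = 2 * np.pi * 100.0                     # 100 Hz excitation
mu_true = 3000.0                              # homogeneous truth, Pa
k = omega * np.sqrt(rho / mu_true)            # shear wavenumber
x = np.linspace(0.0, 0.05, 501)
u = np.sin(k * x + 0.3)                       # synthetic axial displacement

h = x[1] - x[0]
lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2   # finite-difference Laplacian
mask = np.abs(u[1:-1]) > 0.1 * np.abs(u).max()  # avoid near-zero divisions
mu_est = -rho * omega**2 * u[1:-1][mask] / lap[mask]
print(float(np.median(mu_est)))               # close to the true 3000 Pa
```

Note the mask: where the displacement of one frequency is close to zero the pointwise ratio is ill-conditioned, which is exactly the artifact the paper's multi-frequency excitation mitigates.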
Peng, Jie; Dong, Wu-Jun; Li, Ling; Xu, Jia-Ming; Jin, Du-Jia; Xia, Xue-Jun; Liu, Yu-Ling
2015-12-01
The effects of different high-pressure homogenization energy input parameters on the mean diameter droplet size (MDS) and on droplets larger than 5 μm in lipid injectable emulsions were evaluated. All emulsions were prepared at different water bath temperatures or at different rotation speeds and rotor-stator system times, and using different homogenization pressures and numbers of high-pressure system recirculations. The MDS and polydispersity index (PI) value of the emulsions were determined using the dynamic light scattering (DLS) method, and large-diameter tail assessments were performed using the light-obscuration/single particle optical sensing (LO/SPOS) method. Using 1000 bar homogenization pressure and seven recirculations, the energy input parameters related to the rotor-stator system will not have an effect on the final particle size results. When rotor-stator system energy input parameters are fixed, homogenization pressure and recirculation will affect mean particle size and large-diameter droplets. Particle size will decrease with increasing homogenization pressure from 400 bar to 1300 bar when homogenization recirculation is fixed; when the homogenization pressure is fixed at 1000 bar, both the MDS and the percent of fat droplets exceeding 5 μm (PFAT5) will decrease with increasing homogenization recirculations: MDS dropped to 173 nm after five cycles and maintained this level, and volume-weighted PFAT5 dropped to 0.038% after three cycles, so the "plateau" of MDS will come up later than that of PFAT5, and the optimal particle size is produced when both of them remain at plateau. Excess homogenization recirculation, such as nine times at 1000 bar, may lead to a PFAT5 increase to 0.060% rather than a decrease; therefore, the high-pressure homogenization procedure is the key factor affecting the particle size distribution of emulsions. Varying storage conditions (4-25°C) also influenced particle size, especially the PFAT5. Copyright © 2015. Published by Elsevier B.V.
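The volume-weighted PFAT5 statistic is the percentage of total fat volume carried by droplets larger than 5 μm. A minimal sketch with illustrative droplet-count data; the study measured droplet tails with LO/SPOS instrumentation:

```python
import numpy as np

# Hedged sketch of a volume-weighted PFAT5 computation from a binned
# droplet-size distribution (illustrative counts, not study data).
def pfat5(diameters_um, counts):
    d = np.asarray(diameters_um, float)
    n = np.asarray(counts, float)
    vol = n * d**3                    # droplet volume scales as d^3
    return 100.0 * vol[d > 5.0].sum() / vol.sum()

# mostly submicron droplets plus a small large-diameter tail:
print(round(pfat5([0.2, 1.0, 6.0], [1e9, 1e6, 50.0]), 3))
```

Even a tiny count of large droplets dominates PFAT5 because volume scales with the cube of diameter, which is why the large-diameter tail needs its own assessment alongside the MDS.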
NASA Astrophysics Data System (ADS)
Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.
2013-03-01
Time-domain near-infrared spectroscopy (TD-NIRS) offers the ability to measure the absolute baseline optical properties of a tissue. Specifically, for brain imaging, the robust assessment of cerebral blood volume and oxygenation based on measurement of cerebral hemoglobin concentrations is essential for reliable cross-sectional and longitudinal studies. In adult heads, these baseline measurements are complicated by the presence of thick extra-cerebral tissue (scalp, skull, CSF). A simple semi-infinite homogeneous model of the head has proven to have limited use because of the large errors it introduces in the recovered brain absorption. Analytical solutions for layered media have shown improved performance on Monte-Carlo simulated data and layered phantom experiments, but their validity on real adult head data has never been demonstrated. With the advance of fast Monte Carlo approaches based on GPU computation, numerical methods to solve the radiative transfer equation become viable alternatives to analytical solutions of the diffusion equation. Monte Carlo approaches provide the additional advantage to be adaptable to any geometry, in particular more realistic head models. The goals of the present study were twofold: (1) to implement a fast and flexible Monte Carlo-based fitting routine to retrieve the brain optical properties; (2) to characterize the performances of this fitting method on realistic adult head data. We generated time-resolved data at various locations over the head, and fitted them with different models of light propagation: the homogeneous analytical model, and Monte Carlo simulations for three head models: a two-layer slab, the true subject's anatomy, and that of a generic atlas head. We found that the homogeneous model introduced a median 20 to 25% error on the recovered brain absorption, with large variations over the range of true optical properties. The two-layer slab model only improved moderately the results over the homogeneous one. 
On the other hand, using a generic atlas head registered to the subject's head surface decreased the error by a factor of 2. When the information is available, using the true subject anatomy offers the best performance.
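As a rough illustration of the homogeneous model class being fitted, the semi-infinite diffusion-theory time-resolved reflectance (in the form given by Patterson et al., up to a scale factor) can be evaluated directly. The parameter values are typical-tissue placeholders, not the study's:

```python
import math

# Hedged sketch of a semi-infinite homogeneous diffusion model for
# time-resolved reflectance, up to a scale factor. Units: cm and seconds.
def tpsf(t, rho, mua, musp, n=1.4):
    c = 3e10 / n                     # speed of light in tissue, cm/s
    D = 1.0 / (3.0 * (mua + musp))   # diffusion coefficient
    z0 = 1.0 / musp                  # isotropic source depth
    return (z0 * t ** -2.5 / (4 * math.pi * D * c) ** 1.5
            * math.exp(-mua * c * t)
            * math.exp(-(rho ** 2 + z0 ** 2) / (4 * D * c * t)))

# late-time decay is driven by absorption: higher mua -> faster decay
t = 1.5e-9
print(tpsf(t, rho=3.0, mua=0.1, musp=10.0) < tpsf(t, rho=3.0, mua=0.05, musp=10.0))
```

Fitting this closed form to measured TPSFs is the "homogeneous analytical model" baseline; the study's point is that replacing it with Monte Carlo forward models on realistic head geometries markedly reduces the error in the recovered brain absorption.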
The hydrodynamic design and critical techniques for a 1 m × 1 m water tunnel
NASA Astrophysics Data System (ADS)
Jiang, Yubiao; Gao, Chao; Geng, Zihai; Chen, Cheng
2018-04-01
The China Aerodynamics Research and Development Center has built a 1 m × 1 m water tunnel featuring good flow-field quality and comprehensive experimental capabilities for research on flow visualization and measurement. In detail, it has several advantages, such as low turbulence intensity, a spatially homogeneous velocity field, stable flow velocity and convenience of use. The experimental section has low turbulence intensity and good flow-field quality over a wide range of flow velocities from 0.1 m/s to 1 m/s, implying that the hydrodynamic design method and critical techniques for the tunnel are worthy of popularization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Londono, J.D.; Wignall, G.D.; Lin, J.S.
1995-12-31
The solid-state morphology and liquid-state homogeneity of blends of high-density polyethylene (HDPE) and low-density polyethylene (LDPE) were investigated by small-angle neutron and x-ray scattering (SANS and SAXS). The solid state morphology was investigated as a function of composition and cooling rate from the melt. After slow cooling, the evidence indicated that the mixtures were either completely (HDPE-rich blends) or almost completely (LDPE-rich blends) phase separated into separate HDPE and LDPE lamellae over the whole compositional range. In contrast, for rapidly quenched blends the components are extensively co-crystallized for all concentrations, though the SANS data indicated that the branched component had a tendency to be preferentially located in the inter-lamellar regions. In the liquid state, the blends were homogeneous at all compositions, showing that the solid state morphology is not determined by the melt structure, but is a function of the crystallization kinetics. Further evidence for blend homogeneity in the liquid is presented. In particular the authors examine the hypothesis that a phase separated mixture might give a scattering pattern similar to a homogeneous blend if the domain sizes were larger than the maximum spatial resolution of the SANS experiment (D > 2π/Q_min ≈ 2,000 Å). In this scenario, the differential scattering cross section dΣ/dΩ(Q) ∼ Q⁻², though phase separation decreases the cross section in this Q-range with respect to the homogeneous blend. For HDPE/LDPE blends in the melt, this decrease in intensity was not observed, thus ruling out the possibility of phase separation.
NASA Astrophysics Data System (ADS)
Leviandier, Thierry; Alber, A.; Le Ber, F.; Piégay, H.
2012-02-01
Seven methods designed to delineate homogeneous river segments, belonging to four families, namely — tests of homogeneity, contrast enhancing, spatially constrained classification, and hidden Markov models — are compared, firstly on their principles, then on a case study, and on theoretical templates. These templates contain patterns found in the case study but not considered in the standard assumptions of statistical methods, such as gradients and curvilinear structures. The influence of data resolution, noise and weak satisfaction of the assumptions underlying the methods is investigated. The control of the number of reaches obtained in order to achieve meaningful comparisons is discussed. No method is found that outperforms all the others on all trials. However, the methods with sequential algorithms (keeping at order n + 1 all breakpoints found at order n) fail more often than those running complete optimisation at any order. The Hubert-Kehagias method and Hidden Markov Models are the most successful at identifying subpatterns encapsulated within the templates. Ergodic Hidden Markov Models are, moreover, liable to exhibit transition areas.
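The difference between sequential and complete optimisation can be made concrete: a dynamic program (in the spirit of the Hubert-Kehagias approach) finds the globally optimal k-segment partition rather than keeping order-n breakpoints when moving to order n+1. A minimal sketch on a piecewise-constant series:

```python
import numpy as np

# Hedged sketch of segmentation by complete optimisation: dynamic
# programming over all breakpoint placements, minimising within-segment
# squared error for a fixed number of segments k.
def segment(series, k):
    y = np.asarray(series, float)
    n = len(y)
    # cost[i][j] = SSE of fitting a constant to y[i:j]
    cost = np.full((n, n + 1), np.inf)
    for i in range(n):
        for j in range(i + 1, n + 1):
            seg = y[i:j]
            cost[i, j] = ((seg - seg.mean()) ** 2).sum()
    best = np.full((k + 1, n + 1), np.inf)
    cut = np.zeros((k + 1, n + 1), dtype=int)
    best[0, 0] = 0.0
    for m in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(m - 1, j):
                c = best[m - 1, i] + cost[i, j]
                if c < best[m, j]:
                    best[m, j], cut[m, j] = c, i
    # backtrack the breakpoints
    bps, j = [], n
    for m in range(k, 0, -1):
        j = cut[m, j]
        bps.append(j)
    return sorted(bps)[1:]  # drop the leading 0

print(segment([1, 1, 1, 5, 5, 5, 9, 9], 3))  # breakpoints at indices 3 and 6
```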
NASA Astrophysics Data System (ADS)
Farsadnia, F.; Rostami Kamrood, M.; Moghaddam Nia, A.; Modarres, R.; Bray, M. T.; Han, D.; Sadatinejad, J.
2014-02-01
Regional frequency analysis is one of several methods for estimating flood quantiles in ungauged or data-scarce watersheds. Amongst the approaches to regional frequency analysis, different clustering techniques have been proposed in the literature to determine hydrologically homogeneous regions. Recently, the Self-Organizing Feature Map (SOM), a modern hydroinformatic tool, has been applied in several studies for clustering watersheds. However, further studies are still needed on the interpretation of the SOM output map for identifying hydrologically homogeneous regions. In this study, a two-level SOM and three clustering methods (fuzzy c-means, K-means, and Ward's agglomerative hierarchical clustering) are applied in an effort to identify hydrologically homogeneous regions in the watersheds of Mazandaran province in the north of Iran, and their results are compared with each other. First, the SOM is used to form a two-dimensional feature map. Next, the output nodes of the SOM are clustered by using the unified distance matrix algorithm and the three clustering methods to form regions for flood frequency analysis. The heterogeneity test indicates that the four regions achieved by the two-level SOM and Ward approach after adjustments are sufficiently homogeneous. The results suggest that the combination of SOM and Ward is much better than the combination of either SOM and FCM or SOM and K-means.
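The two-level idea can be sketched with a first-level quantisation into prototype nodes (here plain K-means as a stand-in for the SOM map), followed by Ward clustering of the prototypes. The data are synthetic blobs, not the Mazandaran watershed attributes:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hedged sketch of two-level clustering: quantise attribute vectors into
# prototype nodes, then Ward-cluster the nodes into candidate regions.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in (0.0, 3.0, 6.0)])

def kmeans(x, k, iters=50):
    centers = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return centers, labels

nodes, node_of = kmeans(data, 12)        # level 1: prototype nodes
node_cluster = fcluster(linkage(nodes, method="ward"), t=3, criterion="maxclust")
region = node_cluster[node_of]           # level 2: regions via the nodes
print(len(set(region.tolist())))         # 3 candidate homogeneous regions
```

In the study proper the first level is a trained SOM and the candidate regions must still pass a heterogeneity test before being accepted for frequency analysis.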
NASA Astrophysics Data System (ADS)
Sokołowski, Damian; Kamiński, Marcin
2018-01-01
This study proposes a framework for the determination of the basic probabilistic characteristics of the orthotropic homogenized elastic properties of a periodic composite reinforced with ellipsoidal particles and having a high stiffness contrast between the reinforcement and the matrix. The homogenization problem, solved by the Iterative Stochastic Finite Element Method (ISFEM), is implemented according to the stochastic perturbation, Monte Carlo simulation and semi-analytical techniques with the use of a cubic Representative Volume Element (RVE) of this composite containing a single particle. The given input Gaussian random variable is the Young modulus of the matrix, while the 3D homogenization scheme is based on numerical determination of the strain energy of the RVE under uniform unit stretches carried out in the FEM system ABAQUS. An entire series of several deterministic solutions with varying Young modulus of the matrix serves for the Weighted Least Squares Method (WLSM) recovery of polynomial response functions finally used in the stochastic Taylor expansions inherent to the ISFEM. A numerical example consists of High Density Polyurethane (HDPU) reinforced with a Carbon Black particle. It is numerically investigated (1) whether the resulting homogenized characteristics are also Gaussian and (2) how the uncertainty in the matrix Young modulus affects the effective stiffness tensor components and their Probability Density Functions (PDFs).
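The response-function step can be sketched: fit a polynomial to a short series of deterministic solutions versus the matrix Young modulus, then propagate a Gaussian input by a second-order Taylor (perturbation) expansion. The response values below are mock numbers, not ABAQUS RVE results, and an ordinary least-squares fit stands in for the WLSM:

```python
import numpy as np

# Hedged sketch: polynomial response function plus second-order
# perturbation propagation of a Gaussian input variable.
E = np.linspace(80.0, 120.0, 9)                  # matrix Young modulus (MPa)
C = 5.0 + 0.04 * E + 2e-4 * E**2                 # mock effective stiffness

p = np.polynomial.Polynomial.fit(E, C, deg=2).convert()
E_mean, E_std = 100.0, 5.0                       # Gaussian input E ~ N(100, 5^2)

d1 = p.deriv(1)(E_mean)                          # first derivative at the mean
d2 = p.deriv(2)(E_mean)                          # second derivative at the mean
mean_C = p(E_mean) + 0.5 * d2 * E_std**2         # second-order mean
var_C = (d1 * E_std) ** 2 + 0.5 * (d2 * E_std**2) ** 2  # second-order variance
print(mean_C, var_C)                             # about 11.005 and 0.16005
```

The nonzero second derivative is what makes the output non-Gaussian even for a Gaussian input, which is exactly question (1) investigated in the study.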
3D geometric split-merge segmentation of brain MRI datasets.
Marras, Ioannis; Nikolaidis, Nikolaos; Pitas, Ioannis
2014-05-01
In this paper, a novel method for MRI volume segmentation based on region adaptive splitting and merging is proposed. The method, called Adaptive Geometric Split Merge (AGSM) segmentation, aims at finding complex geometrical shapes that consist of homogeneous geometrical 3D regions. In each volume splitting step, several splitting strategies are examined and the most appropriate is activated. A way to find the maximal homogeneity axis of the volume is also introduced. Along this axis, the volume splitting technique divides the entire volume in a number of large homogeneous 3D regions, while at the same time, it defines more clearly small homogeneous regions within the volume in such a way that they have greater probabilities of survival at the subsequent merging step. Region merging criteria are proposed to this end. The presented segmentation method has been applied to brain MRI medical datasets to provide segmentation results when each voxel is composed of one tissue type (hard segmentation). The volume splitting procedure does not require training data, while it demonstrates improved segmentation performance in noisy brain MRI datasets, when compared to the state of the art methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
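The split-and-merge principle behind AGSM can be illustrated in one dimension: split while a region is inhomogeneous, then merge adjacent regions with similar means. The real method operates on 3D MRI volumes with geometric splitting strategies; this is only a toy analogue:

```python
import numpy as np

# Hedged 1-D sketch of split-and-merge segmentation: recursive splitting on
# a homogeneity criterion (standard deviation), then merging of adjacent
# regions whose means are close.
def split(y, lo, hi, tol):
    seg = y[lo:hi]
    if hi - lo <= 1 or seg.std() <= tol:
        return [(lo, hi)]
    mid = (lo + hi) // 2
    return split(y, lo, mid, tol) + split(y, mid, hi, tol)

def merge(regions, y, tol):
    out = [regions[0]]
    for lo, hi in regions[1:]:
        plo, phi = out[-1]
        if abs(y[plo:phi].mean() - y[lo:hi].mean()) <= tol:
            out[-1] = (plo, hi)      # homogeneous neighbours: merge
        else:
            out.append((lo, hi))
    return out

y = np.array([2.0] * 8 + [7.0] * 8)
regions = merge(split(y, 0, len(y), tol=0.5), y, tol=0.5)
print(regions)  # [(0, 8), (8, 16)]
```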
NASA Astrophysics Data System (ADS)
Miskiewicz, M.; Lachowicz, J.; Tysiac, P.; Jaskula, P.; Wilde, K.
2018-05-01
The article presents the possibility of using non-destructive methods of road pavement diagnostics as an alternative to traditional means to assess the reasons for premature cracking adjacent to bridge structures. Two scanning methods were used: laser scanning to measure geometric surface deformation and ground penetrating radar (GPR) inspection to assess the road pavement condition. With the use of a laser scanner, an effective tool for road deformation assessment, several approach pavement surfaces next to the bridges were scanned. As the result, a point cloud was obtained including spatial information about the pavement deformation. The data accuracy was about 3 mm, and the deformations were presented in the form of deviation maps between the reference surface and the actual surface. Moreover, characteristic pavement surface cross-sections were presented. The in situ measurements of the GPR method were performed and analysed in order to detect non-homogeneity in the density of the structural layers of the pavement. The analysis of the permittivity of individual layers made it possible to detect non-homogeneity areas. The GPR measurements were verified by standard invasive tests carried out by drilling boreholes, taking cores from the pavement and testing the compaction and air-void content of the asphalt layers. As a result of the measurements made by both methods, significant differences in layer compaction factor values were diagnosed. The factor was much smaller in the area directly next to the bridgehead and much larger in the zone located a few meters away. The research showed the occurrence of both design and erection errors as well as errors related to the maintenance of engineering structures.
Advanced Antennas Enabled by Electromagnetic Metamaterials
2014-12-01
radiation patterns of a conical horn antenna and three soft horns with various homogeneous metasurface liners. The maximum cross-polarization level was...inhomogeneous metasurface liners covering both the flared horn section and the straight waveguide section. The metahorn is fed by a circular waveguide...with a diameter of 20 mm. (b) The sizes of the metallic patches at each row of the metasurface in the flared horn section. Both the length and width
NASA Technical Reports Server (NTRS)
Al-Saadi, Jassim A.
1993-01-01
A computational simulation of a transonic wind tunnel test section with longitudinally slotted walls is developed and described herein. The nonlinear slot model includes dynamic pressure effects and a plenum pressure constraint, and each slot is treated individually. The solution is performed using a finite-difference method that solves an extended transonic small disturbance equation. The walls serve as the outer boundary conditions in the relaxation technique, and an interaction procedure is used at the slotted walls. Measured boundary pressures are not required to establish the wall conditions but are currently used to assess the accuracy of the simulation. This method can also calculate a free-air solution as well as solutions that employ the classical homogeneous wall conditions. The simulation is used to examine two commercial transport aircraft models at a supercritical Mach number for zero-lift and cruise conditions. Good agreement between measured and calculated wall pressures is obtained for the model geometries and flow conditions examined herein. Some localized disagreement is noted, which is attributed to improper simulation of viscous effects in the slots.
Li, Qiang; Aucamp, Jean P; Tang, Alison; Chatel, Alex; Hoare, Mike
2012-08-01
An ultra scale-down (USD) device that provides insight of how industrial homogenization impacts bioprocess performance is desirable in the biopharmaceutical industry, especially at the early stage of process development where only a small quantity of material is available. In this work, we assess the effectiveness of focused acoustics as the basis of an USD cell disruption method to mimic and study high-pressure, step-wise homogenization of recombinant Escherichia coli cells for the recovery of an intracellular protein, antibody fragment (Fab'). The release of both Fab' and of overall protein follows first-order reaction kinetics with respect to time of exposure to focused acoustics. The rate constant is directly proportional to applied electrical power input per unit volume. For nearly total protein or Fab' release (>99%), the key physical properties of the disruptate produced by focused acoustics, such as cell debris particle size distribution and apparent viscosity show good agreement with those for homogenates produced by high-pressure homogenization operated to give the same fractional release. The only key difference is observed for partial disruption of cells where focused acoustics yields a disruptate of lower viscosity than homogenization, evidently due to a greater extent of polynucleic acids degradation. Verification of this USD approach to cell disruption by high-pressure homogenization is achieved using USD centrifugation to demonstrate the same sedimentation characteristics of disruptates prepared using both the scaled-down focused acoustic and the pilot-scale homogenization methods for the same fraction of protein release. Copyright © 2012 Wiley Periodicals, Inc.
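The reported kinetics, first-order in exposure time with a rate constant proportional to power per unit volume, can be sketched directly. The proportionality constant below is illustrative, not a value fitted from the study:

```python
import math

# Hedged sketch of first-order release kinetics with the rate constant
# proportional to power per unit volume (c is an illustrative constant).
def fraction_released(t_s, power_w_per_ml, c=0.01):
    k = c * power_w_per_ml          # rate constant proportional to P/V
    return 1.0 - math.exp(-k * t_s)

# doubling the power density doubles the rate constant:
print(round(fraction_released(60.0, 1.0), 3))   # ~0.451
print(round(fraction_released(60.0, 2.0), 3))   # ~0.699
```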
Jiang, Chao; Yuan, Yuan; Liu, Libing; Hou, Jingyi; Jin, Yan; Huang, Luqi
2015-11-05
A label-free, homogeneous and sensitive one-step method for the molecular authentication of medicinal snakes has been developed by combining a rapid PCR technique with water-soluble cationic conjugated polyelectrolytes (CCPs). Three medicinal snake materials (Deinagkistrodon acutus, Zaocys dhumnades and Bungarus multicinctus; a total of 35 specimens) and 48 snake specimens with similar morphologies and textures were clearly distinguished by the naked eye by utilizing a CCP-based assay in a high-throughput manner. The identification of medicinal snakes in patented Chinese drugs was successfully performed using this detection system. In contrast to previous fluorescence-labeled oligonucleotide detection and direct DNA stain hybridization assays, this method does not require designing dye-labeled primers, and unfavorable dimer fluorescence is avoided in this homogeneous method.
Rational design of capillary-driven flows for paper-based microfluidics.
Elizalde, Emanuel; Urteaga, Raúl; Berli, Claudio L A
2015-05-21
The design of paper-based assays that integrate passive pumping requires a precise programming of the fluid transport, which has to be encoded in the geometrical shape of the substrate. This requirement becomes critical in multiple-step processes, where fluid handling must be accurate and reproducible for each operation. The present work theoretically investigates the capillary imbibition in paper-like substrates to better understand fluid transport in terms of the macroscopic geometry of the flow domain. A fluid dynamic model was derived for homogeneous porous substrates with arbitrary cross-sectional shapes, which allows one to determine the cross-sectional profile required for a prescribed fluid velocity or mass transport rate. An extension of the model to slit microchannels is also demonstrated. Calculations were validated by experiments with prototypes fabricated in our lab. The proposed method constitutes a valuable tool for the rational design of paper-based assays.
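For the simplest special case of such a model, a uniform-width homogeneous strip, the imbibition front follows Lucas-Washburn-type square-root kinetics. A minimal sketch, assuming a lumped transport coefficient `D` (an illustrative placeholder that bundles permeability, surface tension, contact angle and viscosity, not a value from the paper):

```python
import math

def front_position(t, D=1.0e-6):
    """Imbibition front position l(t) = sqrt(D*t) in a uniform strip.
    D (m^2/s) is an assumed lumped coefficient, not a measured value."""
    return math.sqrt(D * t)

def front_velocity(t, D=1.0e-6):
    """Front velocity dl/dt = 0.5*sqrt(D/t); it decays as the front
    advances, which is why shaped (non-uniform) cross sections are
    needed to program a prescribed fluid velocity."""
    return 0.5 * math.sqrt(D / t)
```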
Method for preparing hydrous iron oxide gels and spherules
Collins, Jack L.; Lauf, Robert J.; Anderson, Kimberly K.
2003-07-29
The present invention is directed to methods for preparing hydrous iron oxide spherules, hydrous iron oxide gels such as gel slabs, films, capillary and electrophoresis gels, iron monohydrogen phosphate spherules, hydrous iron oxide spherules having suspendable particles homogeneously embedded within to form composite sorbents and catalysts, iron monohydrogen phosphate spherules having suspendable particles of at least one different sorbent homogeneously embedded within to form a composite sorbent, iron oxide spherules having suspendable particles homogeneously embedded within to form a composite of hydrous iron oxide fiber materials, iron oxide fiber materials, hydrous iron oxide fiber materials having suspendable particles homogeneously embedded within to form a composite, iron oxide fiber materials having suspendable particles homogeneously embedded within to form a composite, dielectric spherules of barium, strontium, and lead ferrites and mixtures thereof, and composite catalytic spherules of barium or strontium ferrite embedded with oxides of Mg, Zn, Pb, Ce and mixtures thereof. These variations of hydrous iron oxide spherules and gel forms prepared by the gel-sphere, internal gelation process offer more useful forms of inorganic ion exchangers, catalysts, getters, dielectrics, and ceramics.
Akbas, Hatice Zehra; Aydin, Zeki; Yilmaz, Onur; Turgut, Selvin
2017-01-01
The effects of the homogenization process on the structures and dielectric properties of pure and Nb-doped BaTiO3 ceramics have been investigated using ultrasonic homogenization and conventional mechanical methods. The reagents were homogenized using an ultrasonic processor with high-intensity ultrasonic waves and using a compact mixer-shaker. The components and crystal types of the powders were determined by Fourier-transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD) analyses. The complex permittivity (ε′, ε″) and AC conductivity (σ′) of the samples were analyzed in a wide frequency range of 20 Hz to 2 MHz at room temperature. The structures and dielectric properties of pure and Nb-doped BaTiO3 ceramics strongly depend on the homogenization process in a solid-state reaction method. Using an ultrasonic processor with high-intensity ultrasonic waves based on acoustic cavitation phenomena can make a significant improvement in producing high-purity BaTiO3 ceramics without carbonate impurities and with a small dielectric loss. Copyright © 2016 Elsevier B.V. All rights reserved.
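The reported quantities are linked by standard dielectric relations; as a small sketch in SI units (sample values illustrative, not measurements from this study):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def ac_conductivity(freq_hz, eps_imag):
    """AC conductivity sigma' = omega * eps0 * eps'' for a relative
    imaginary permittivity eps'' at frequency freq_hz."""
    return 2.0 * math.pi * freq_hz * EPS0 * eps_imag

def loss_tangent(eps_real, eps_imag):
    """Dielectric loss tangent tan(delta) = eps'' / eps'."""
    return eps_imag / eps_real
```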
Wei, Guoguang; Zhang, Alei; Chen, Kequan; Ouyang, Pingkai
2017-09-01
This study presents an efficient pretreatment of crayfish shell using high pressure homogenization that enables N-acetyl-d-glucosamine (GlcNAc) production by chitinase. Firstly, the chitinase from Serratia proteamaculans NJ303 was screened for its ability to degrade crayfish shell and produce GlcNAc as the sole product. Secondly, high pressure homogenization, which caused the crayfish shell to adopt a fluffy netted structure that was characterized by scanning electron microscopy (SEM), Fourier-transform infrared spectroscopy (FT-IR), and X-ray diffraction (XRD), was evaluated as the best pretreatment method. In addition, the optimal conditions for high pressure homogenization of crayfish shell were determined to be five cycles at a pressure of 400 bar, which achieved a yield of 3.9 g/L of GlcNAc from 25 g/L of crayfish shell in a batch enzymatic reaction over 1.5 h. The results showed high pressure homogenization might be an efficient method for direct utilization of crayfish shell for enzymatic production of GlcNAc. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Alexander, Donita; DePaola, Angelo; Young, Ronald B.
1998-01-01
The disease cholera, caused by Vibrio cholerae, has been associated with consumption of contaminated seafood, including raw oysters. Detection of V. cholerae in foods typically involves blending the oysters, diluting the homogenate in alkaline peptone water (APW), overnight enrichment, and isolation on selective agar. Unfortunately, the oyster homogenate must be diluted to large volumes because lower dilutions inhibit the growth of V. cholerae. The goals of this study were to develop an alternative to large dilutions and to evaluate the basis for the inhibition observed in lower dilutions of oyster homogenates. Centrifugation of oyster homogenates at 10,000 x g for 15 min, followed by enrichment of the resulting pellet in APW, was found to eliminate the inhibition of V. cholerae growth. Inhibition appears not to be due to competing microflora but to a component(s) released when V. cholerae grows in the presence of oyster homogenate. The inhibitory component(s) kills the V. cholerae after the cell concentration reaches >10^8 cells/mL, rather than initially preventing their growth. The pH also declines from 8.0 to 5.5 during this period; however, the pH decline by itself appears not to cause V. cholerae death. Seven strains of V. cholerae (O1 and non-O1) and two strains of V. vulnificus were susceptible to the inhibitory agent(s). However, other Vibrio and non-Vibrio species tested were not inhibited by the oyster homogenates. Based on digestion of oyster homogenates with pronase, trypsin and lipase, the inhibitory reaction involves a protein(s). In a preliminary trial with oyster homogenate seeded with 1 cfu/g of V. cholerae, the modified centrifugation technique detected a slightly higher percentage of samples at a 1:10 dilution than the standard FDA Bacteriological Analytical Manual (BAM) method detected in uncentrifuged oyster homogenate at a 1:100 dilution. V. cholerae in seeded samples could also be detected more frequently by the modified centrifugation method than by PCR at a 1:10 dilution.
Ali, Nora A; Mourad, Hebat-Allah M; ElSayed, Hany M; El-Soudani, Magdy; Amer, Hassanein H; Daoud, Ramez M
2016-11-01
Interference is the most important problem in LTE and LTE-Advanced networks. In this paper, interference was investigated in terms of the downlink signal to interference and noise ratio (SINR). In order to compare the different frequency reuse methods that were developed to enhance the SINR, it is helpful to have a generalized expression for studying the performance of the different methods. Therefore, this paper introduces general expressions for the SINR in homogeneous and in heterogeneous networks. In homogeneous networks, the expression was applied to the most common frequency reuse techniques: soft frequency reuse (SFR) and fractional frequency reuse (FFR). The expression was examined by comparing it with previously developed ones in the literature, and the comparison showed that it is valid for any type of frequency reuse scheme and any network topology. Furthermore, the expression was extended to heterogeneous networks (HetNets); it covers the problem of co-tier and cross-tier interference and was examined by the same method as in the homogeneous case.
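A generic downlink SINR computation of the kind such expressions generalize can be sketched as below. The distance-based path-loss model and its exponent are illustrative assumptions, not the paper's expressions; the point is that a frequency reuse scheme shrinks the co-channel interferer set:

```python
import math

def received_power(p_tx, distance, path_loss_exp=3.5):
    """Received power under a simple power-law path loss model
    (exponent value is an illustrative assumption)."""
    return p_tx * distance ** (-path_loss_exp)

def sinr_db(p_serving, p_interferers, noise):
    """Downlink SINR in dB from linear received powers. With SFR/FFR,
    p_interferers holds only the cells transmitting on the same
    sub-band, which is how frequency reuse raises the SINR."""
    return 10.0 * math.log10(p_serving / (sum(p_interferers) + noise))
```

Moving an interferer onto a different sub-band simply drops it from the list, raising the computed SINR.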
Method of assessing heterogeneity in images
Jacob, Richard E.; Carson, James P.
2016-08-23
A method of assessing heterogeneity in images is disclosed. 3D images of an object are acquired. The acquired images may be filtered and masked. Iterative decomposition is performed on the masked images to obtain image subdivisions that are relatively homogeneous. Comparative analysis, such as variogram analysis or correlogram analysis, is performed of the decomposed images to determine spatial relationships between regions of the images that are relatively homogeneous.
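A one-dimensional empirical semivariogram, the statistic underlying such variogram analysis, can be sketched in a few lines (the method itself operates on 3-D decomposed images; this shows only the per-lag statistic):

```python
def semivariogram(values, lag):
    """Empirical semivariance at an integer lag for a 1-D profile:
    gamma(h) = sum((z[i+h] - z[i])^2) / (2 * number_of_pairs).
    Near-zero values at small lags indicate local homogeneity."""
    pairs = len(values) - lag
    total = sum((values[i + lag] - values[i]) ** 2 for i in range(pairs))
    return total / (2.0 * pairs)
```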
ERIC Educational Resources Information Center
Quinn, Terry; Rai, Sanjay
2012-01-01
The method of variation of parameters can be found in most undergraduate textbooks on differential equations. The method leads to solutions of the non-homogeneous equation of the form y = u_1 y_1 + u_2 y_2, a sum of function products using solutions to the homogeneous equation y_1 and…
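The construction can be verified numerically. The sketch below integrates u_1' = -y_2 g / W and u_2' = y_1 g / W by the trapezoid rule for the example y'' + y = x (with y_1 = cos x, y_2 = sin x, Wronskian W = 1), for which taking the integrals from 0 yields the particular solution x - sin x:

```python
import math

def particular_solution(g, y1, y2, wronskian, x, n=2000):
    """Variation of parameters: y_p = u1*y1 + u2*y2, where
    u1' = -y2*g/W and u2' = y1*g/W are integrated from 0 to x
    by the trapezoid rule."""
    f1 = lambda s: -y2(s) * g(s) / wronskian(s)
    f2 = lambda s: y1(s) * g(s) / wronskian(s)
    h = x / n
    u1 = sum(0.5 * h * (f1(i * h) + f1((i + 1) * h)) for i in range(n))
    u2 = sum(0.5 * h * (f2(i * h) + f2((i + 1) * h)) for i in range(n))
    return u1 * y1(x) + u2 * y2(x)

# y'' + y = x: the computed y_p should match x - sin(x) at any x.
yp = particular_solution(lambda s: s, math.cos, math.sin, lambda s: 1.0, 1.0)
```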
NASA Astrophysics Data System (ADS)
Morozov, M. I.; Kungl, H.; Hoffmann, M. J.
2011-03-01
Li-, Ta-, and Mn-modified (K,Na)NbO3 ceramics with various compositional homogeneity have been prepared by conventional and precursor methods. The homogeneous ceramic has demonstrated a sharper peak in temperature dependent piezoelectric response. The dielectric and piezoelectric properties of the homogeneous ceramics have been characterized at the experimental subcoercive electric fields near the temperature of the orthorhombic-tetragonal phase transition with respect to poling in both phases. Poling in the tetragonal phase is shown to enhance the low-signal dielectric and piezoelectric properties in the orthorhombic phase.
Homogenization models for thin rigid structured surfaces and films.
Marigo, Jean-Jacques; Maurel, Agnès
2016-07-01
A homogenization method for thin microstructured surfaces and films is presented. In both cases, sound-hard materials are considered, associated with Neumann boundary conditions, and the wave equation is examined in the time domain. For a structured surface, a boundary condition is obtained on an equivalent flat wall, which links the acoustic velocity to its normal and tangential derivatives (of the Myers type). For a structured film, jump conditions are obtained for the acoustic pressure and the normal velocity across an equivalent interface (of the Ventcel type). This interface homogenization is based on a matched asymptotic expansion technique, and differs slightly from classical homogenization, which is known to fail for small structuration thicknesses. In order to get insight into what causes this failure, a two-step homogenization is proposed, mixing classical homogenization and matched asymptotic expansion. Results of the two homogenizations are analyzed in light of the associated elementary problems, which correspond to problems of fluid mechanics, namely potential flows around rigid obstacles.
NASA Astrophysics Data System (ADS)
El Moumen, A.; Tarfaoui, M.; Lafdi, K.
2018-06-01
Elastic properties of laminate composites based on carbon nanotubes (CNTs), used in military applications, were estimated using homogenization techniques and compared to experimental data. The composite consists of three phases: T300 6k carbon fiber fabric with a 5HS (satin) weave, a baseline pure epoxy matrix, and CNTs added at 0.5%, 1%, 2% and 4%. A two-step homogenization method based on an RVE model was employed. The objective of this paper is to determine the elastic properties of the structure starting from knowledge of those of its constituents (CNTs, epoxy and carbon fiber fabric). It is assumed that the composites have a geometric periodicity, so the homogenization model can be represented by a representative volume element (RVE). For the multi-scale analysis, finite element modeling of the unit cell with a two-step homogenization method is used. The first step gives the properties of the thin film made of epoxy and CNTs, and the second is used for homogenization of the laminate composite. The fabric unit cell is chosen using a set of microscopic observations and then identified by its ability to enclose the characteristic periodic repeat in the fabric weave. The unit cell model of the 5-harness satin weave fabric textile composite is identified for the numerical approach, and its dimensions are chosen based on microstructural measurements. Finally, good agreement was obtained between the elastic properties predicted by the numerical homogenization approach and the experimental data.
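The two-step idea (first homogenize the CNT-doped epoxy, then treat the result as the matrix for the fiber fabric) can be illustrated with elementary Voigt/Reuss bounds. This is a stand-in for the paper's RVE finite element computation; all moduli (GPa) and volume fractions below are illustrative assumptions:

```python
def voigt(e_matrix, e_filler, v_filler):
    """Voigt (upper) bound: volume-weighted arithmetic mean of moduli."""
    return (1.0 - v_filler) * e_matrix + v_filler * e_filler

def reuss(e_matrix, e_filler, v_filler):
    """Reuss (lower) bound: volume-weighted harmonic mean of moduli."""
    return 1.0 / ((1.0 - v_filler) / e_matrix + v_filler / e_filler)

# Step 1: homogenize the CNT-doped epoxy (illustrative moduli, GPa).
matrix_mod = voigt(3.0, 1000.0, 0.01)      # epoxy + 1% CNT
# Step 2: homogenize the carbon fibers in the modified matrix.
e_upper = voigt(matrix_mod, 230.0, 0.5)    # + 50% fiber, upper bound
e_lower = reuss(matrix_mod, 230.0, 0.5)    # + 50% fiber, lower bound
```

The true effective modulus from the RVE computation would fall between the two bounds.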
ANALYSIS OF FISH HOMOGENATES FOR PERFLUORINATED COMPOUNDS
Perfluorinated compounds (PFCs) which include PFOS and PFOA are widely distributed in wildlife. Whole fish homogenates were analyzed for PFCs from the upper Mississippi, the Missouri and the Ohio rivers. Methods development, validation data, and preliminary study results will b...
Super-nodal methods for space-time kinetics
NASA Astrophysics Data System (ADS)
Mertyurek, Ugur
The purpose of this research has been to develop an advanced Super-Nodal method to reduce the run time of 3-D core neutronics models, such as the NESTLE reactor core simulator and FORMOSA nuclear fuel management optimization codes. Computational performance of the neutronics model is increased by reducing the number of spatial nodes used in the core modeling. However, as the number of spatial nodes decreases, the error in the solution increases. The Super-Nodal method reduces the error associated with the use of coarse nodes in the analyses by providing a new set of cross sections and ADFs (Assembly Discontinuity Factors) for the new nodalization. These so-called homogenization parameters are obtained by employing a consistent collapsing technique. During this research a new type of singularity, namely the "fundamental mode singularity", is addressed in the ANM (Analytical Nodal Method) solution. The "Coordinate Shifting" approach is developed as a method to address this singularity. Also, the "Buckling Shifting" approach is developed as an alternative and more accurate method to address the zero-buckling singularity, which is a more common and well-known singularity problem in the ANM solution. In the course of addressing these singularities, an effort was made to provide better and more robust results from the Super-Nodal method by developing several new methods for determining the transverse leakage and the collapsed diffusion coefficient, which generally are the two main approximations in the ANM methodology. Unfortunately, the proposed new transverse leakage and diffusion coefficient approximations failed to provide a consistent improvement to the current methodology. However, improvement in the Super-Nodal solution is achieved by updating the homogenization parameters at several time points during a transient. The update is achieved by employing a refinement technique similar to pin-power reconstruction.
A simple error analysis based on the relative residual in the 3-D few-group diffusion equation at the fine mesh level is also introduced in this work.
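The flux-weighted collapse underlying such homogenization parameters can be sketched generically; this shows only spectrum-weighted condensation of fine-group constants into a coarse group, not NESTLE's consistent collapsing procedure:

```python
def collapse_xs(sigmas, fluxes):
    """Flux-weighted collapse of fine-group cross sections into one
    coarse group: sigma_c = sum(phi_g * sigma_g) / sum(phi_g), which
    preserves the total reaction rate of the fine-group solution."""
    return sum(p * s for p, s in zip(fluxes, sigmas)) / sum(fluxes)
```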
Sol-gel methods for synthesis of aluminosilicates for dental applications.
Cestari, Alexandre
2016-12-01
Amorphous aluminosilicate glasses containing fluorine, phosphorus and calcium are used as a component of glass ionomer dental cement. This cement is used as a restorative, basis or filling material, but presents lower mechanical resistance than resin-modified materials. The Sol-Gel method is a possible route for preparation of glasses at lower temperature and energy consumption, with higher homogeneity and with uniform, nanometric particles, compared to the industrial methods. Glass ionomer cements with uniform, homogeneous and nanometric particles can present higher mechanical resistance than commercial ionomers. The aim of this work was to adapt Sol-Gel methods to produce new aluminosilicate glass particles by non-hydrolytic, hydrolytic acid and hydrolytic basic routes, to improve the characteristics of glass ionomer cements. Three materials were synthesized with the same composition to evaluate the properties of the glasses produced by the different methods, because multicomponent oxides are difficult to prepare homogeneously. The objective was to develop a new route to produce new glass particles for ionomer cements with possibly higher resistance. The particles were characterized by thermal analysis (TG, DTA, DSC), transmission electron microscopy (TEM), X-ray diffraction (XRD), infrared spectroscopy (FTIR) and scanning electron microscopy coupled with energy dispersive spectroscopy (SEM-EDS). The glasses were tested with polyacrylic acid to form the glass ionomer cement by the setting reaction. It was possible to produce distinct materials for dental applications, and one sample presented superior characteristics (homogeneity, nanometric particles, and homogeneous elemental distribution) compared to commercial glasses for ionomer cements. The new route for glass production can possibly improve the mechanical resistance of ionomer cements. Copyright © 2016 Elsevier Ltd. All rights reserved.
Van der Vorst, Sébastien; Dekairelle, Anne-France; Irenge, Léonid; Hamoir, Marc; Robert, Annie; Gala, Jean-Luc
2009-01-01
This study compared automated vs. manual tissue grinding in terms of RNA yield obtained from oral mucosa biopsies. A total of 20 patients undergoing uvulectomy for sleep-related disorders and 10 patients undergoing biopsy for head and neck squamous cell carcinoma were enrolled in the study. Samples were collected, snap-frozen in liquid nitrogen, and divided into two parts of similar weight. Grinding was performed on one sample from each pair, either manually or using an automated cell disruptor. The performance and efficacy of each homogenization approach were compared in terms of total RNA yield (spectrophotometry, fluorometry), mRNA quantity [densitometry of specific TP53 amplicons and TP53 quantitative reverse-transcribed real-time PCR (qRT-PCR)], and mRNA quality (functional analysis of separated alleles in yeast). Although spectrophotometry and fluorometry results were comparable for both homogenization methods, TP53 expression values obtained by amplicon densitometry and qRT-PCR were significantly and consistently better after automated homogenization (p<0.005) for both uvula and tumor samples. Results of the functional analysis of separated alleles in yeast were also better with the automated technique for tumor samples. Automated tissue homogenization thus appears to be a versatile, quick, and reliable method of cell disruption and is especially useful for small malignant samples, which show unreliable results when processed by manual homogenization.
The single scattering properties of soot aggregates with concentric core-shell spherical monomers
NASA Astrophysics Data System (ADS)
Wu, Yu; Cheng, Tianhai; Gu, Xingfa; Zheng, Lijuan; Chen, Hao; Xu, Hui
2014-03-01
Anthropogenic soot aerosols are shown as complex, fractal-like aggregated structures with high light absorption efficiency. In the atmospheric environment, soot monomers may tend to acquire a weakly absorbing coating, such as an organic coating, which introduces further complexity to the optical properties of the aggregates. The single scattering properties of soot aggregates can be significantly influenced by the coating state of these kinds of aerosols. In this article, the monomers of fractal soot aggregates are modelled as semi-external mixtures (physical contact) with a constant radius of the soot core and variable sizes of the coating for specific soot volume fractions. The single scattering properties of these coated soot particles, such as the phase function, the cross sections of extinction and absorption, the single scattering albedo (SSA) and the asymmetry parameter (ASY), are calculated using the numerically exact superposition T-matrix method. The random-orientation averaging results have shown that the single scattering properties of these coated soot aggregates are significantly different from the single volume-equivalent core-shell sphere approximation using the Mie theory and from homogeneous aggregates with uncoated monomers using effective medium theory, such as the Maxwell-Garnett and Bruggeman approximations, which overestimate the backscattering of coated soot. It is found that the SSA and the cross sections of extinction and absorption are increased for soot aggregates with a thicker weakly absorbing coating on the monomers. In particular, the SSA values of these simulated aggregates with smaller soot core volume fractions are remarkably larger (~50% for a core volume fraction of 0.5, ~100% for a core volume fraction of 0.2, at 0.67 μm) than for uncoated soot particles without consideration of coating.
Moreover, the cross sections of extinction and absorption are underestimated by the computation of equivalent homogeneous fractal aggregate approximation (within 5% for the T-matrix method and 10-25% for the Rayleigh-Debye-Gans approximation due to different soot volume fractions). Further understanding of the optical properties of these coated soot aggregates would be helpful for both environment monitoring and climate studies.
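The single scattering albedo discussed above follows directly from the cross sections; a trivial sketch (values illustrative):

```python
def single_scattering_albedo(c_ext, c_abs):
    """SSA = C_sca / C_ext, with C_sca = C_ext - C_abs; a thicker
    weakly absorbing coating raises scattering faster than absorption,
    which is why the SSA of coated aggregates increases."""
    return (c_ext - c_abs) / c_ext
```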
Zhou, Wei; Feng, Chuqiao; Liu, Xinghong; Liu, Shuhua; Zhang, Chao; Yuan, Wei
2016-01-01
This work is a contrastive investigation of numerical simulations to improve the comprehension of thermo-structural coupled phenomena of mass concrete structures during construction. The finite element (FE) analysis of thermo-structural behaviors is used to investigate the applicability of supersulfated cement (SSC) in mass concrete structures. A multi-scale framework based on a homogenization scheme is adopted in the parameter studies to describe the nonlinear concrete behaviors. Based on the experimental data of hydration heat evolution rate and quantity of SSC and fly ash Portland cement, the hydration properties of various cements are studied. Simulations are run on a concrete dam section with a conventional method and a chemo-thermo-mechanical coupled method. The results show that SSC is more suitable for mass concrete structures from the standpoint of temperature control and crack prevention. PMID:28773517
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.
2018-06-01
The necessity to find the global optimum of multiextremal functions arises in many applied problems where finding local solutions is insufficient. One of the desirable properties of global optimization methods is strong homogeneity meaning that a method produces the same sequences of points where the objective function is evaluated independently both of multiplication of the function by a scaling constant and of adding a shifting constant. In this paper, several aspects of global optimization using strongly homogeneous methods are considered. First, it is shown that even if a method possesses this property theoretically, numerically very small and large scaling constants can lead to ill-conditioning of the scaled problem. Second, a new class of global optimization problems where the objective function can have not only finite but also infinite or infinitesimal Lipschitz constants is introduced. Third, the strong homogeneity of several Lipschitz global optimization algorithms is studied in the framework of the Infinity Computing paradigm allowing one to work numerically with a variety of infinities and infinitesimals. Fourth, it is proved that a class of efficient univariate methods enjoys this property for finite, infinite and infinitesimal scaling and shifting constants. Finally, it is shown that in certain cases the usage of numerical infinities and infinitesimals can avoid ill-conditioning produced by scaling. Numerical experiments illustrating theoretical results are described.
Yang, Chan; Xu, Bing; Zhang, Zhi-Qiang; Wang, Xin; Shi, Xin-Yuan; Fu, Jing; Qiao, Yan-Jiang
2016-10-01
Blending uniformity is essential to ensure the homogeneity of Chinese medicine formula particles within each batch. This study was based on the blending process of ebony spray-dried powder and dextrin (the proportion of dextrin was 10%), in which near-infrared (NIR) diffuse reflectance spectra collected from six different sampling points were analyzed in combination with a moving-window F test method in order to assess the uniformity of the blending process. The method was validated against changes in citric acid content determined by HPLC. The results of the moving-window F test method showed that the ebony spray-dried powder and dextrin were homogeneous during 200-300 r and segregated during 300-400 r. An advantage of this method is that the threshold value is defined statistically, not empirically, and thus does not suffer from the threshold ambiguities common to the moving block standard deviation (MBSD) approach. This method could be employed to monitor other blending processes of Chinese medicine powders on line. Copyright© by the Chinese Pharmaceutical Association.
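The moving-window statistic can be sketched as a ratio of each window's variance to a reference variance. Comparing against a single reference variance is an assumed simplification of the paper's F test, and in practice the decision threshold would be taken from the F distribution rather than hard-coded:

```python
def variance(xs):
    """Unbiased sample variance (window length must be >= 2)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def moving_window_f(signal, window, ref_var):
    """F statistic of each sliding window's variance against a
    reference variance (e.g. from a validated homogeneous blend):
    values near or below 1 suggest homogeneity, large values
    suggest segregation."""
    return [variance(signal[i:i + window]) / ref_var
            for i in range(len(signal) - window + 1)]
```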
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Chris, E-mail: cyuan@uwm.edu; Wang, Endong; Zhai, Qiang
Temporal homogeneity of inventory data is one of the major problems in life cycle assessment (LCA). Addressing temporal homogeneity of life cycle inventory data is important for reducing the uncertainties and improving the reliability of LCA results. This paper attempts to present a critical review and discussion of the fundamental issues of temporal homogeneity in conventional LCA and to propose a theoretical framework for temporal discounting in LCA. Theoretical perspectives for temporal discounting in life cycle inventory analysis are discussed first, based on the key elements of a scientific mechanism for temporal discounting. Then, generic procedures for performing temporal discounting in LCA are derived and proposed based on the nature of the LCA method and the identified key elements of a scientific temporal discounting method. A five-step framework is proposed and reported in detail based on the technical methods and procedures needed to perform temporal discounting in life cycle inventory analysis. Challenges and possible solutions are also identified and discussed for the technical procedure and scientific accomplishment of each step within the framework. - Highlights: • A critical review of the temporal homogeneity problem of life cycle inventory data • A theoretical framework for performing temporal discounting on inventory data • Methods provided to accomplish each step of the temporal discounting framework.
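The mechanical core of such a temporal discounting step might be sketched as below. The exponential discount factor 1/(1+r)^t and the rate are assumptions for illustration, since selecting a defensible discounting mechanism is precisely the open question the framework addresses:

```python
def discounted_inventory(flows, rate):
    """Discount time-distributed inventory flows, given as pairs of
    (years_from_reference, quantity), back to the reference year
    with weight 1/(1+rate)**t."""
    return sum(q / (1.0 + rate) ** t for t, q in flows)
```

With rate = 0 the result reduces to the conventional, temporally homogeneous inventory sum.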
NASA Technical Reports Server (NTRS)
Neugebauer, G. T.; Wilcox, W. R.
1990-01-01
Azulene-doped naphthalene was directionally solidified using the vertical Bridgman-Stockbarger technique. Doping homogeneity and convection are determined as a function of the temperature profile in the furnace and the freezing rate. Convective velocities are two orders of magnitude lower when the temperature increases with height. The cross-sectional variation in azulene concentration tends to be asymmetric. Neither rotation of the ampoule nor deliberate introduction of thermal asymmetries during solidification had a significant influence on cross-sectional variations in doping. It is predicted that slow directional solidification under microgravity conditions can produce greater inhomogeneities than on earth. Thus, when low freezing rates are necessary in order to avoid constitutional supercooling, it may be necessary to combine microgravity and magnetic fields in order to achieve homogeneous crystals.
Carbon analyses of IDP's sectioned in sulfur and supported on beryllium films
NASA Technical Reports Server (NTRS)
Bradley, J. P.; Keller, L.; Thomas, K. L.; Vanderwood, T. B.; Brownlee, D. E.
1993-01-01
Carbon is the only major element in interplanetary dust whose abundance, distribution and chemical state are not well understood. Information about carbon could clarify the relationship between the various classes of IDP's, conventional meteorites, and sources (e.g., comets vs. asteroids). To date, the most reliable estimates of C abundance in Interplanetary Dust Particles (IDP's) have been obtained by analyzing particles on thick, flat Be substrates using thin-window energy-dispersive spectroscopy in the SEM. These estimates of C abundance are valid only if C is homogeneously distributed, because detected C x-rays originate from the outer 0.1 micrometers of the particle. An alternative and potentially more accurate method of measuring C abundances is to analyze multiple thin sections (each less than 0.1 micrometers thick) of IDP's. These efforts, however, have been stymied by the lack of a suitable non-carbonaceous embedding medium and of C-free conductive substrates. We have embedded and thin-sectioned IDP's in glassy sulfur and transferred the thin sections to Be support films approximately 25 nm thick. The sections were then analyzed in a 200 keV analytical TEM. S sublimes rapidly under vacuum in the TEM, leaving non-embedded sections supported on Be. Apart from quantitative C (and O) analyses, S sectioning dramatically expands the range of analytical measurements that can be performed on a single IDP.
Hebaz, Salah-Eddine; Benmeddour, Farouk; Moulin, Emmanuel; Assaad, Jamal
2018-01-01
The development of reliable guided waves inspection systems is conditioned by an accurate knowledge of their dispersive properties. The semi-analytical finite element method has been proven to be very practical for modeling wave propagation in waveguides of arbitrary cross-section. However, when it comes to computations on complex geometries to a given accuracy, it still has a major drawback: the high consumption of resources. Recently, the discontinuous Galerkin finite element method (DG-FEM) has been found advantageous over the standard finite element method when applied in the frequency domain as well. In this work, a high-order method for the computation of Lamb mode characteristics in plates is proposed. The problem is discretised using a class of DG-FEM, namely, the interior penalty methods family. The analytical validation is performed through the homogeneous isotropic case with traction-free boundary conditions. Afterwards, functionally graded material plates are analysed and a numerical example is presented. The obtained results are in good agreement with those found in the literature.
Comparing the index-flood and multiple-regression methods using L-moments
NASA Astrophysics Data System (ADS)
Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.
In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward's clustering and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. The homogeneity test was performed using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods, as two regional flood frequency methods, were compared. The results of factor analysis showed that length of main waterway, compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on Ward's clustering approach. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, the GEV distribution was identified as the most robust of the five candidate distributions for all the proposed sub-regions of the study area; that is, the generalised extreme value distribution was the best-fit distribution for all three regions. The relative root mean square error (RRMSE) measure was applied to evaluate the performance of the index-flood and multiple-regression methods in comparison with the curve-fitting (plotting position) method. In general, the index-flood method gives more reliable estimations for various flood magnitudes of different recurrence intervals.
Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin in central Iran. To estimate floods of various return periods for gauged catchments in the study area, the mean annual peak flood of a catchment may be multiplied by the corresponding growth factors computed using the GEV distribution.
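The index-flood estimate just described multiplies a site's mean annual peak flood by a regional growth factor read off the fitted GEV growth curve. A minimal sketch follows; the growth-curve parameters and the site mean of 120 m3/s are illustrative placeholders, not values fitted in the study:

```python
import math

def gev_quantile(F, xi, alpha, kappa):
    """GEV quantile function for non-exceedance probability F (kappa != 0)."""
    return xi + (alpha / kappa) * (1.0 - (-math.log(F)) ** kappa)

def index_flood_estimate(mean_annual_flood, T, xi, alpha, kappa):
    """Index-flood estimate for return period T: the site's mean annual peak
    flood (the index flood) times the regional GEV growth factor."""
    growth_factor = gev_quantile(1.0 - 1.0 / T, xi, alpha, kappa)
    return mean_annual_flood * growth_factor

# hypothetical regional growth-curve parameters (dimensionless, mean near 1)
xi, alpha, kappa = 0.85, 0.35, -0.10
for T in (10, 50, 100):
    print(T, round(index_flood_estimate(120.0, T, xi, alpha, kappa), 1))
```

The growth factor is the same for every site in a homogeneous region; only the index flood (the site mean) changes from catchment to catchment.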
Liao, Ching-Hsing; Fett, William F
2003-05-15
Three major foodborne outbreaks of salmonellosis in 1998 and 1999 were linked to the consumption of raw alfalfa sprouts. In this report, an improved method is described for isolation of Salmonella from alfalfa seed lots, which had been implicated in these outbreaks. From each seed lot, eight samples each containing 25 g of seed were tested for the presence of Salmonella by the US FDA Bacteriological Analytical Manual (BAM) procedure and by a modified method applying two successive pre-enrichment steps. Depending on the seed lot, one to four out of eight samples tested positive for Salmonella by the standard procedure and two to seven out of eight samples tested positive by the modified method. Thus, the use of two consecutive pre-enrichment steps led to a higher detection rate than a single pre-enrichment step. This result indirectly suggested that Salmonella cells on contaminated seeds might be injured and fail to fully resuscitate in pre-enrichment broth containing seed components during the first 24 h of incubation. Responses of heat-injured Salmonella cells grown in buffered peptone water (BPW) and in three alfalfa seed homogenates were investigated. For preparation of seed homogenates, 25 g of seeds were homogenized in 200 ml of BPW using a laboratory Stomacher and subsequently held at 37 degrees C for 24 h prior to centrifugation and filtration. While untreated cells grew at about the same rate in BPW and in seed homogenates, heat-injured cells (52 degrees C, 10 min) required approximately 0.5 to 4.0 h longer to resuscitate in seed homogenates than in BPW. This result suggests that alfalfa seed components or fermentation metabolites from native bacteria hinder the repair and growth of heat-injured cells. This study also shows that an additional pre-enrichment step increases the frequency of isolation of Salmonella from naturally contaminated seeds, possibly by alleviating the toxic effect of seed homogenates on repair or growth of injured cells.
Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa; Ebert, Martin
2012-04-23
Light scattering by light-absorbing carbon (LAC) aggregates encapsulated in sulfate shells is computed by use of the discrete dipole method. Computations are performed for a UV, a visible, and an IR wavelength, different particle sizes, and volume fractions. Reference computations are compared to three classes of simplified model particles that have been proposed for climate modeling purposes. None of the models matches the reference results sufficiently well. Remarkably, the more realistic core-shell geometries fall behind homogeneous mixture models. An extended model based on a core-shell-shell geometry is proposed and tested. Good agreement is found for total optical cross sections and the asymmetry parameter. © 2012 Optical Society of America
Chen, Zi-Qi; Du, Ming-Ying; Zhao, You-Jin; Huang, Xiao-Qi; Li, Jing; Lui, Su; Hu, Jun-Mei; Sun, Huai-Qiang; Liu, Jia; Kemp, Graham J.; Gong, Qi-Yong
2015-01-01
Background Published meta-analyses of resting-state regional cerebral blood flow (rCBF) studies of major depressive disorder (MDD) have included patients receiving antidepressants, which might affect brain activity and thus bias the results. To our knowledge, no meta-analysis has investigated regional homogeneity changes in medication-free patients with MDD. Moreover, an association between regional homogeneity and rCBF has been demonstrated in some brain regions in healthy controls. We sought to explore to what extent resting-state rCBF and regional homogeneity changes co-occur in the depressed brain without the potential confound of medication. Methods Using the effect-size signed differential mapping method, we conducted 2 meta-analyses of rCBF and regional homogeneity studies of medication-free patients with MDD. Results Our systematic search identified 14 rCBF studies and 9 regional homogeneity studies. We identified conjoint decreases in resting-state rCBF and regional homogeneity in the insula and superior temporal gyrus in medication-free patients with MDD compared with controls. Other changes included altered resting-state rCBF in the precuneus and in the frontal–limbic–thalamic–striatal neural circuit as well as altered regional homogeneity in the uncus and parahippocampal gyrus. Meta-regression revealed that the percentage of female patients with MDD was negatively associated with resting-state rCBF in the right anterior cingulate cortex and that the age of patients with MDD was negatively associated with rCBF in the left insula and with regional homogeneity in the left uncus. Limitations The analysis techniques, patient characteristics and clinical variables of the included studies were heterogeneous. Conclusion The conjoint alterations of rCBF and regional homogeneity in the insula and superior temporal gyrus may be core neuropathological changes in medication-free patients with MDD and serve as a specific region of interest for further studies on MDD. 
PMID:25853283
Assessing Dietary Exposure to Pyrethroid Insecticides by LC/MS/MS of Food Composites
Method: Commercially obtained vegetables, chips, cereal, meat, and other solid food products were homogenized together to create composited control matrices at 1%, 5%, and 10% fat content. Lyophilized homogenates were spiked with 7 pyrethroids, 6 degradation products, bisphen...
NASA Astrophysics Data System (ADS)
Hakoda, Christopher; Rose, Joseph; Shokouhi, Parisa; Lissenden, Clifford
2018-04-01
Dispersion curves are essential to any guided-wave-related project. The Semi-Analytical Finite Element (SAFE) method has become the conventional way to compute dispersion curves for homogeneous waveguides. However, only recently has a general SAFE formulation for commercial and open-source software become available, meaning that until now SAFE analyses have been variable and more time consuming than desirable. Likewise, the Floquet boundary conditions enable analysis of waveguides with periodicity and have been an integral part of the development of metamaterials. In fact, we have found the use of Floquet boundary conditions to be an extremely powerful tool for homogeneous waveguides, too. The nuances of using periodic boundary conditions for homogeneous waveguides that do not exhibit periodicity are discussed. Comparisons between this method and SAFE are made for selected homogeneous waveguide applications. The COMSOL Multiphysics software is used for the results shown, but any standard finite element software that can implement Floquet periodicity (user-defined or built-in) should suffice. Finally, we identify a number of complex waveguides for which dispersion curves can be found with relative ease by using the periodicity inherent to the Floquet boundary conditions.
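The role of the Floquet (Bloch) boundary condition, constraining the field on one unit cell by u(x + a) = u(x) e^(iqa) and sweeping the wavenumber q, can be illustrated on the simplest possible waveguide. The sketch below is a 1D monatomic spring-mass chain, a textbook analogue, not the SAFE formulation or the COMSOL workflow described in the abstract:

```python
import math

def chain_dispersion(q, stiffness=1.0, mass=1.0, a=1.0):
    """Dispersion relation of a 1D monatomic spring-mass chain, obtained by
    applying the Floquet condition u[n+1] = u[n]*exp(i*q*a) to one unit cell
    of the equation of motion m*u''[n] = k*(u[n+1] - 2*u[n] + u[n-1]):
        omega(q) = sqrt(4k/m) * |sin(q*a/2)|."""
    return math.sqrt(4.0 * stiffness / mass) * abs(math.sin(0.5 * q * a))

# sweep the first Brillouin zone [0, pi/a] to trace the dispersion curve
curve = [(q, chain_dispersion(q)) for q in
         (i * math.pi / 20.0 for i in range(21))]
```

In a finite element setting, the same sweep is done numerically: the Floquet condition ties the degrees of freedom on opposite faces of the cell, and an eigenvalue solve at each q yields the dispersion branches.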
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skontorp, A.; Wang, S.S.; Shibuya, Y.
1994-12-31
In this paper, a homogenization theory is developed to determine high-temperature effective viscoelastic constitutive equations for fiber-reinforced polymer composites. The homogenization theory approximates the microstructure of a fiber composite and simultaneously determines the effective macroscopic constitutive properties of the composite and the associated microscopic strain and stress in the heterogeneous material. The time-temperature dependent homogenization theory requires that the viscoelastic constituent properties of the matrix phase at elevated temperatures, the governing equations for the composites, and the boundary conditions of the problem be Laplace transformed to a conjugate problem. The homogenized effective properties in the transformed domain are determined using a two-scale asymptotic expansion of field variables and an averaging procedure. Field solutions in the unit cell are determined from basic and first-order governing equations with the aid of a boundary integral method (BIM). Effective viscoelastic constitutive properties of the composite at elevated temperatures are determined by an inverse transformation, as are the microscopic stress and deformation in the composite. Using this method, interactions among fibers and between the fibers and the matrix can be evaluated explicitly, resulting in accurate solutions for composites with a high volume fraction of reinforcing fibers. Examples are given for the case of a carbon-fiber reinforced thermoplastic polyamide composite in an elevated-temperature environment. The homogenization predictions are in good agreement with experimental data available for the composite.
Kim, Sung-Hou [Moraga, CA; Kim, Rosalind [Moraga, CA; Jancarik, Jamila [Walnut Creek, CA
2012-01-31
An optimum solubility screen in which a panel of buffers and many additives is provided in order to obtain the most homogeneous and monodisperse protein condition for protein crystallization. The present methods are useful for proteins that aggregate and cannot be concentrated prior to setting up crystallization screens. A high-throughput method using the hanging-drop method and vapor diffusion equilibrium and a panel of twenty-four buffers is further provided. Using the present methods, 14 poorly behaving proteins were screened; 11 of them showed highly improved dynamic light scattering results allowing concentration of the proteins, and 9 were crystallized.
Benchmarking homogenization algorithms for monthly data
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.
2013-09-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.
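The first two performance metrics named above have standard generic definitions, which the following sketch implements for a synthetic series with one artificial break; this is an illustration of the definitions, not the HOME benchmark code:

```python
import math

def centered_rmse(homogenized, truth):
    """Centered RMSE: RMS difference of the anomalies, so a constant offset
    between the two series is not penalized."""
    hm = sum(homogenized) / len(homogenized)
    tm = sum(truth) / len(truth)
    sq = [((h - hm) - (t - tm)) ** 2 for h, t in zip(homogenized, truth)]
    return math.sqrt(sum(sq) / len(sq))

def linear_trend(series):
    """Ordinary least-squares slope per time step."""
    n = len(series)
    xm = (n - 1) / 2.0
    ym = sum(series) / n
    num = sum((i - xm) * (y - ym) for i, y in enumerate(series))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

truth = [0.01 * i for i in range(100)]  # true homogeneous series (steady trend)
# an inhomogeneity: an artificial break of +0.5 from time step 60 onwards
broken = [v + (0.5 if i >= 60 else 0.0) for i, v in enumerate(truth)]
print(centered_rmse(broken, truth), linear_trend(broken) - linear_trend(truth))
```

A homogenization algorithm that removes the break drives both the centered RMSE and the trend error back toward zero, which is exactly what the benchmark scores.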
Homogenized description and retrieval method of nonlinear metasurfaces
NASA Astrophysics Data System (ADS)
Liu, Xiaojun; Larouche, Stéphane; Smith, David R.
2018-03-01
A patterned, plasmonic metasurface can strongly scatter incident light, functioning as an extremely low-profile lens, filter, reflector or other optical device. When the metasurface is patterned uniformly, its linear optical properties can be expressed using effective surface electric and magnetic polarizabilities obtained through a homogenization procedure. The homogenized description of a nonlinear metasurface, however, presents challenges both because of the inherent anisotropy of the medium and because of the much larger set of potential wave interactions available, making it challenging to assign effective nonlinear parameters to the otherwise inhomogeneous layer of metamaterial elements. Here we show that a homogenization procedure can be developed to describe nonlinear metasurfaces, which derive their nonlinear response from the enhanced local fields arising within the structured plasmonic elements. With the proposed homogenization procedure, we are able to assign effective nonlinear surface polarization densities to a nonlinear metasurface, and link these densities to the effective nonlinear surface susceptibilities and averaged macroscopic pumping fields across the metasurface. These effective nonlinear surface polarization densities are further linked to macroscopic nonlinear fields through the generalized sheet transition conditions (GSTCs). By inverting the GSTCs, the effective nonlinear surface susceptibilities of the metasurfaces can be solved for, leading to a generalized retrieval method for nonlinear metasurfaces. The application of the homogenization procedure and the GSTCs is demonstrated by retrieving the nonlinear susceptibilities of a SiO2 nonlinear slab. As an example, we investigate a nonlinear metasurface which presents nonlinear magnetoelectric coupling in the near-infrared regime.
The method is expected to apply to any patterned metasurface whose thickness is much smaller than the wavelengths of operation, with inclusions of arbitrary geometry and material composition, across the electromagnetic spectrum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dellinger, B.; Graham, J.L.; Berman, J.M.
1994-05-01
Application of concentrated solar energy has been proposed as a viable waste disposal option. Specifically, this concept of solar-induced high-temperature photochemistry is based on the synergistic contribution of concentrated infrared (IR) radiation, which acts as an intense heating source, and near-ultraviolet and visible (UV-VIS) radiation, which can induce destructive photochemical processes. Some significant advances have been made in the theoretical framework of high-temperature photochemical processes (Section 2) and development of experimental techniques for their study (Section 3). Basic thermal/photolytic studies have addressed the effect of temperature on the photochemical destruction of pure compounds (Section 4). Detailed studies of the destruction of reaction by-products have been conducted on selected waste molecules (Section 5). Some very limited results are available on the destruction of mixtures (Section 6). Fundamental spectroscopic studies have recently been initiated (Section 7). The results to date have been used to conduct some relatively simple scale-up studies of the solar detoxification process. More recent work has focused on destruction of compounds that do not directly absorb solar radiation. Research efforts have focused on homogeneous as well as heterogeneous methods of initiating destructive reaction pathways (Section 9). Although many conclusions at this point must be considered tentative due to the lack of basic research, a clearer picture of the overall process is emerging (Section 10). However, much research remains to be performed, following several veins, including photochemical, spectroscopic, combustion kinetic, and engineering scale-up (Section 11).
The role of resveratrol on full-thickness uterine wound healing in rats.
Sayin, Oya; Micili, Serap Cilaker; Goker, Asli; Kamaci, Gonca; Ergur, Bekir Ugur; Yilmaz, Osman; Guner Akdogan, Gul
2017-10-01
Healing of the uterus after cesarean section and myomectomy operations is clinically important. In this study, we aimed to investigate the effects of resveratrol (3,5,4'-trihydroxystilbene) on the wound healing process of the uterus in rats treated with resveratrol following full-thickness injury of the uterus. Twenty-one female Wistar albino rats were divided randomly into three groups: (1) a control group with no intervention, (2) an injury group with uterine full-thickness injury, and (3) a resveratrol group with uterine full-thickness injury treated with resveratrol. Resveratrol was administered by oral gavage at a dose of 0.5 mg/kg/day for 30 days following uterine full-thickness injury. Vascular endothelial growth factor (VEGF) and platelet-derived growth factor (PDGF) distributions were assessed using immunohistochemical methods in tissue and ELISA in the tissue homogenate. Glutathione peroxidase (GPx) and superoxide dismutase (SOD) activities were evaluated with a colorimetric method, and malondialdehyde (MDA) levels were measured using high-performance liquid chromatography in the tissue homogenate. The effects of resveratrol on uterine histology were also evaluated by light microscopy. Histological and immunohistochemical evaluations showed that treatment with resveratrol significantly increased the thickness of the uterine wall and VEGF expression and decreased PDGF expression during wound healing. Biochemically, GPx and SOD activities increased significantly after treatment with resveratrol. Additionally, resveratrol administration decreased MDA levels. These results showed that the antioxidant effects of resveratrol have a positive influence on wound healing of the uterus. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Krämer, Florian; Gratz, Micha; Tschöpe, Andreas
2016-07-01
The magnetic field-dependent optical transmission of dilute Ni nanorod aqueous suspensions was investigated. A series of four samples of nanorods were synthesized using the AAO template method and processed to stable colloids. The distributions of their length and diameter were characterized by analysis of TEM images and revealed average diameters of ˜25 nm and different lengths in the range of 60 nm-1100 nm. The collinear magnetic and optical anisotropy was studied by static field-dependent transmission measurements of linearly polarized light parallel and perpendicular to the magnetic field direction. The experimental results were modelled assuming the field-dependent orientation distribution function of a superparamagnetic ensemble for the uniaxial ferromagnetic nanorods in liquid dispersion and extinction cross sections for longitudinal and transversal optical polarization derived from different approaches, including the electrostatic approximation and the separation of variables method, both applied to spheroidal particles, as well as finite element method simulations of spheroids and capped cylindrical particles. The extinction cross sections were compared to reveal the differences associated with the approximations of homogeneous polarization and/or particle shape. The consequences of these approximations for the quantitative analysis of magnetic field-dependent optical transmission measurements were investigated and a reliable protocol derived. Furthermore, the changes in optical cross sections induced by electromagnetic interaction between two nanorods in parallel end-to-end and side-by-side configuration as a function of their separation were studied.
Type of homogenization and fat loss during continuous infusion of human milk.
García-Lara, Nadia Raquel; Escuder-Vieco, Diana; Alonso Díaz, Clara; Vázquez Román, Sara; De la Cruz-Bértolo, Javier; Pallás-Alonso, Carmen Rosa
2014-11-01
Substantial fat loss may occur during continuous feeding of human milk (HM). A decrease in fat loss has been described following homogenization. Well-established methods of homogenizing HM for routine use in the neonatal intensive care unit (NICU) would be desirable. We compared the fat loss associated with 3 different methods of homogenizing thawed HM during continuous feeding. Sixteen frozen donor HM samples were thawed, homogenized with ultrasound, separated into 3 aliquots ("baseline agitation," "hourly agitation," and "ultrasound"), and then frozen for 48 hours. Aliquots were thawed again and baseline agitation was applied. Subsequently, the "baseline agitation" and "hourly agitation" aliquots were drawn into syringes, while ultrasound was applied to the "ultrasound" aliquot before it was drawn into a syringe. The syringes were loaded into a pump (2 mL/h; 4 hours). At hourly intervals the "hourly agitation" infusion was stopped, and the syringe was disconnected and gently shaken. During infusion, samples from the 3 groups were collected hourly for analysis of fat and caloric content. The 3 homogenization groups showed similar fat content at the beginning of the infusion. For fat, mean (SD) hourly changes of -0.03 (0.01), -0.09 (0.01), and -0.09 (0.01) g/dL were observed for the "hourly agitation," "baseline agitation," and "ultrasound" groups, respectively. The decrease was smaller for the "hourly agitation" group (P < .001). When thawed HM is continuously infused, a smaller fat loss is observed when syringes are agitated hourly than when ultrasound or a baseline homogenization is used. © The Author(s) 2014.
Uysal, Bora; Beyzadeoğlu, Murat; Sager, Ömer; Dinçoğlan, Ferrat; Demiral, Selçuk; Gamsız, Hakan; Sürenkök, Serdar; Oysul, Kaan
2013-01-01
Objective: The purpose of this dosimetric study is the comparison of targeted dose homogeneity and critical organ doses between 7-field Intensity Modulated Radiotherapy (IMRT) and 3-D 4-field conformal radiotherapy. Study Design: Cross-sectional study. Material and Methods: Twenty patients with low- and moderate-risk prostate cancer treated at Gülhane Military Medical School Radiation Oncology Department between January 2009 and December 2009 are included in this study. Two separate dosimetric plans, for 7-field IMRT and for 3D-CRT, were generated for each patient to comparatively evaluate the dosimetric status of both techniques; all the patients received 7-field IMRT. Results: Dose-comparative evaluation of the two techniques revealed the superiority of the IMRT technique, with statistically significantly lower femoral head doses along with reduced critical organ dose-volume parameters of bladder V60 (the volume receiving 60 Gy) and rectal V40 (the volume receiving 40 Gy) and V60. Conclusion: It can be concluded that IMRT is an effective definitive management tool for prostate cancer, with improved critical organ sparing and excellent dose homogenization in the target organs of prostate and seminal vesicles. PMID:25207069
Homogeneous Biosensing Based on Magnetic Particle Labels
Schrittwieser, Stefan; Pelaz, Beatriz; Parak, Wolfgang J.; Lentijo-Mozo, Sergio; Soulantica, Katerina; Dieckhoff, Jan; Ludwig, Frank; Guenther, Annegret; Tschöpe, Andreas; Schotter, Joerg
2016-01-01
The growing availability of biomarker panels for molecular diagnostics is leading to an increasing need for fast and sensitive biosensing technologies that are applicable to point-of-care testing. In that regard, homogeneous measurement principles are especially relevant as they usually do not require extensive sample preparation procedures, thus reducing the total analysis time and maximizing ease-of-use. In this review, we focus on homogeneous biosensors for the in vitro detection of biomarkers. Within this broad range of biosensors, we concentrate on methods that apply magnetic particle labels. The advantage of such methods lies in the added possibility to manipulate the particle labels by applied magnetic fields, which can be exploited, for example, to decrease incubation times or to enhance the signal-to-noise-ratio of the measurement signal by applying frequency-selective detection. In our review, we discriminate the corresponding methods based on the nature of the acquired measurement signal, which can either be based on magnetic or optical detection. The underlying measurement principles of the different techniques are discussed, and biosensing examples for all techniques are reported, thereby demonstrating the broad applicability of homogeneous in vitro biosensing based on magnetic particle label actuation. PMID:27275824
Construction of Optimal-Path Maps for Homogeneous-Cost-Region Path-Planning Problems
1989-09-01
...of Artificial Intelligence... 24. Kirkpatrick, S., Gelatt Jr., C. D., and Vecchi, M. P., "Optimization by Simulated Annealing", Science, Vol... studied in depth by researchers in such fields as artificial intelligence, robotics, and computational geometry. Most methods require homogeneous... the results of the research. II. RELEVANT RESEARCH A. APPLICABLE CONCEPTS FROM ARTIFICIAL INTELLIGENCE 1. Search Methods One of the central
Fourier-Accelerated Nodal Solvers (FANS) for homogenization problems
NASA Astrophysics Data System (ADS)
Leuschner, Matthias; Fritzen, Felix
2017-11-01
Fourier-based homogenization schemes are useful to analyze heterogeneous microstructures represented by 2D or 3D image data. These iterative schemes involve discrete periodic convolutions with global ansatz functions (mostly fundamental solutions). The convolutions are efficiently computed using the fast Fourier transform. FANS operates on nodal variables on regular grids and converges to finite element solutions. Compared to established Fourier-based methods, the number of convolutions is reduced by FANS. Additionally, fast iterations are possible by assembling the stiffness matrix. Due to the related memory requirement, the method is best suited for medium-sized problems. A comparative study involving established Fourier-based homogenization schemes is conducted for a thermal benchmark problem with a closed-form solution. Detailed technical and algorithmic descriptions are given for all methods considered in the comparison. Furthermore, many numerical examples focusing on convergence properties for both thermal and mechanical problems, including also plasticity, are presented.
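For context, the established class of solvers FANS is compared against can be sketched with the classic fixed-point ("basic") Fourier scheme of Moulinec-Suquet type for a periodic 2D thermal problem. This is a generic illustration of Fourier-based homogenization under an assumed smooth conductivity field, not the FANS algorithm itself:

```python
import numpy as np

def homogenize_thermal(k, E=(1.0, 0.0), k0=None, tol=1e-10, maxit=2000):
    """Basic fixed-point FFT homogenization (2D periodic thermal conduction).
    k: (N, N) scalar conductivity on a regular grid; E: imposed average
    temperature gradient. Returns the average flux component along x."""
    N = k.shape[0]
    if k0 is None:
        k0 = 0.5 * (float(k.min()) + float(k.max()))  # reference conductivity
    freq = np.fft.fftfreq(N) * N                      # integer frequencies
    X, Y = np.meshgrid(freq, freq, indexing="ij")
    norm2 = X**2 + Y**2
    norm2[0, 0] = 1.0              # avoid 0/0; X = Y = 0 there anyway
    e = np.stack([np.full((N, N), E[0]), np.full((N, N), E[1])])
    for _ in range(maxit):
        tau0, tau1 = (k - k0) * e[0], (k - k0) * e[1]  # polarization flux
        t0, t1 = np.fft.fft2(tau0), np.fft.fft2(tau1)
        proj = (X * t0 + Y * t1) / (k0 * norm2)        # Green-operator projection
        e_new = np.stack([
            E[0] - np.real(np.fft.ifft2(X * proj)),
            E[1] - np.real(np.fft.ifft2(Y * proj)),
        ])
        if np.max(np.abs(e_new - e)) < tol:
            e = e_new
            break
        e = e_new
    return float(np.mean(k * e[0]))                    # effective flux <q_x>

# smooth laminate k(x) = 2 + sin(2*pi*x): exact k_eff,x = sqrt(2**2 - 1**2)
N = 64
x = np.arange(N) / N
k = (2.0 + np.sin(2 * np.pi * x))[:, None] * np.ones((1, N))
print(homogenize_thermal(k))  # ~ 1.732, the harmonic mean sqrt(3)
```

Each iteration costs a handful of FFTs; FANS improves on this family by reducing the number of convolutions and converging to finite element solutions on nodal grids.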
MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields
NASA Astrophysics Data System (ADS)
Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria
2015-08-01
We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale, and that a multihomogeneous model is needed to explain their complex scaling behaviour. In order to perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimations which we call a multihomogeneous model. This defines a new technique of source-parameter estimation (Multi-HOmogeneity Depth Estimation, MHODE) that permits retrieval of the source parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness also in a real-case example.
These applications show the usefulness of the new concepts, multihomogeneity and fractional homogeneity degree, in obtaining valid estimates of the source parameters in a consistent theoretical framework, thereby overcoming the limitations that global homogeneity imposes on widespread methods such as Euler deconvolution.
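The homogeneity law underlying such methods, f(t r) = t^n f(r), makes the degree n recoverable as a log-log slope. Below is a toy illustration using the gravity effect of a point mass (a homogeneous field of degree -2), not the MHODE algorithm itself; the mass and observation distance are arbitrary:

```python
import math

def homogeneity_degree(f, r0, ts):
    """Least-squares estimate of the degree n in f(t*r0) = t**n * f(r0),
    from the slope of log|f| versus log t."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(abs(f(t * r0))) for t in ts]
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    num = sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
    den = sum((x - xm) ** 2 for x in xs)
    return num / den

# gravity effect of a point mass observed along the vertical through the
# source: g(z) = G*m / z**2 is homogeneous of degree -2
g = lambda z: 6.674e-11 * 1.0e9 / z ** 2
degree = homogeneity_degree(g, 100.0, [1.0, 1.5, 2.0, 3.0])  # close to -2
```

For an inhomogeneous real-world field the fitted degree varies with the window and scale, which is precisely the multiscale behaviour the multihomogeneous model is designed to capture.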
Malinsky, Michelle Duval; Jacoby, Cliffton B; Reagen, William K
2011-01-10
We report herein a simple protein precipitation extraction-liquid chromatography tandem mass spectrometry (LC/MS/MS) method, validation, and application for the analysis of perfluorinated carboxylic acids (C7-C12), perfluorinated sulfonic acids (C4, C6, and C8), and perfluorooctane sulfonamide (FOSA) in fish fillet tissue. The method combines a rapid homogenization and protein precipitation tissue extraction procedure using stable-isotope internal standard (IS) calibration. Method validation in bluegill (Lepomis macrochirus) fillet tissue evaluated the following: (1) method accuracy and precision in both extracted matrix-matched calibration and solvent (unextracted) calibration, (2) quantitation of mixed branched and linear isomers of perfluorooctanoate (PFOA) and perfluorooctanesulfonate (PFOS) with linear isomer calibration, (3) quantitation of low level (ppb) perfluorinated compounds (PFCs) in the presence of high level (ppm) PFOS, and (4) specificity from matrix interferences. Both calibration techniques produced method accuracy of at least 100±13% with a precision (%RSD) ≤18% for all target analytes. Method accuracy and precision results for fillet samples from nine different fish species taken from the Mississippi River in 2008 and 2009 are also presented. Copyright © 2010 Elsevier B.V. All rights reserved.
RELIABLE COMPUTATION OF HOMOGENEOUS AZEOTROPES. (R824731)
It is important to determine the existence and composition of homogeneous azeotropes in the analysis of phase behavior and in the synthesis and design of separation systems, from both theoretical and practical standpoints. A new method for reliably locating an...
An infrared small target detection method based on multiscale local homogeneity measure
NASA Astrophysics Data System (ADS)
Nie, Jinyan; Qu, Shaocheng; Wei, Yantao; Zhang, Liming; Deng, Lizhen
2018-05-01
Infrared (IR) small target detection plays an important role in image detection owing to the intrinsic characteristics of such targets. This paper presents a multiscale local homogeneity measure (MLHM) for infrared small target detection, which can enhance the performance of an IR small target detection system. Firstly, the intra-patch homogeneity of the target itself and the inter-patch heterogeneity between the target and the local background regions are integrated to enhance the significance of small targets. Secondly, a multiscale measure based on local regions is proposed to obtain the most appropriate response. Finally, an adaptive threshold method is applied to small target segmentation. Experimental results on three different scenarios indicate that MLHM performs well under the interference of strong noise.
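The abstract does not give the MLHM formula, but the patch-contrast idea it describes (homogeneity within a center patch versus heterogeneity against the surrounding background) can be sketched generically: score each pixel by the difference between its center-patch mean and the mean of a surrounding ring. The window radii, the synthetic image, and the target placement below are all illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def local_contrast(img, r_in=1, r_out=4):
    """Center-patch mean minus surrounding-ring mean at each pixel
    (a generic local-homogeneity-style contrast sketch, not the exact
    MLHM formulation of the paper)."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(r_out, h - r_out):
        for j in range(r_out, w - r_out):
            center = img[i - r_in:i + r_in + 1, j - r_in:j + r_in + 1].mean()
            ring = img[i - r_out:i + r_out + 1, j - r_out:j + r_out + 1].copy()
            # mask out the center patch so only background pixels remain
            ring[r_out - r_in:r_out + r_in + 1, r_out - r_in:r_out + r_in + 1] = np.nan
            out[i, j] = center - np.nanmean(ring)
    return out

rng = np.random.default_rng(2)
img = rng.normal(0.0, 0.1, (40, 40))   # noisy background
img[20:23, 15:18] += 1.0               # small bright "target"
resp = local_contrast(img)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)  # the target was placed around (21, 16)
```

A multiscale variant would repeat this for several `r_in` values and keep the maximum response per pixel, in the spirit of the "most appropriate response" selection described above.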
Theoretical and experimental research on laser-beam homogenization based on metal gauze
NASA Astrophysics Data System (ADS)
Liu, Libao; Zhang, Shanshan; Wang, Ling; Zhang, Yanchao; Tian, Zhaoshuo
2018-03-01
A method of homogenizing CO2 laser heating by means of a metal gauze is investigated theoretically and experimentally. The light-field distribution of an expanded beam passing through the gauze was calculated numerically using diffractive optical theory; comparison with the gauze-free case shows the method to be effective. Experimentally, using a 30 W DC-discharge laser as the source and expanding the beam with a concave lens, beam intensity distributions recorded on thermal paper with and without the gauze were compared, and complementary measurements were made with a thermal imager. The experimental results agree with the theoretical calculation, and together they show that the homogeneity of CO2 laser heating can be enhanced by a metal gauze.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burns, Joseph R.; Petrovic, Bojan; Chandler, David
2018-02-22
Additive manufacturing is under investigation as a novel method of fabricating the control elements (CEs) of the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory with greater simplicity, eliminating numerous highly complex fabrication steps and thereby offering potential for significant savings in cost, time, and effort. This process yields a unique CE design with lumped absorbers, a departure from traditionally manufactured CEs with uniformly distributed absorbing material. Here, this study undertakes a neutronics analysis of the impact of additively manufactured CEs on the HFIR core physics, seeking preliminary assessment of the feasibility of their practical use. The results of the MCNP transport simulations reveal changes in the HFIR reactor physics arising from geometric and nuclear effects. Absorber lumping in the discrete CEs yields a large volume of unpoisoned material that is not present in the homogeneous design, in turn yielding increases in free thermal flux in the CE absorbing regions and their immediate vicinity. The availability of additional free thermal neutrons in the core yields an increase in fission rate density in the fuel closest to the CEs and a corresponding increase in neutron multiplication on the order of 100 pcm. The absorption behavior exhibited by the discrete CEs is markedly different from the homogeneous CEs due to several competing effects. Self-shielding arising from absorber lumping acts to reduce the effective absorption cross section of the discrete CEs, but this effect is offset by geometric and spectral effects. The operational performance of the discrete CEs is found to be comparable to the homogeneous CEs, with only limited deficiencies in reactivity worth that are expected to be operationally recoverable via limited adjustment of the CE positions and withdrawal rate.
On the whole, these results indicate that the discrete CEs perform reasonably similarly to the homogeneous CEs and appear feasible for application in HFIR. In conclusion, the physical phenomena identified in this study provide valuable background for follow-up design studies.
Morakul, Boontida; Suksiriworapong, Jiraphong; Leanpolchareanchai, Jiraporn; Junyaprasert, Varaporn Buraphacheep
2013-11-30
Nanocrystal technology is an effective means of improving the solubility and dissolution behavior of poorly soluble drugs. Clarithromycin is classified in BCS class II, having low bioavailability due to its very poor dissolution behavior. The main purpose of this study was to investigate the efficiency of clarithromycin nanocrystal preparation by a precipitation-lyophilization-homogenization (PLH) combination method in comparison with the high pressure homogenization (HPH) method. The factors influencing particle size reduction and physical stability were assessed. The results showed that the PLH technique provided an effective and rapid reduction of the particle size of the nanocrystals to 460 ± 10 nm with a homogeneous size distribution after only the fifth cycle of homogenization, whereas the same size was attained only after 30 cycles with the HPH method. The smallest nanocrystals were achieved by using the combination of poloxamer 407 (2%, w/v) and SLS (0.1%, w/v) as stabilizers. This combination prevented particle aggregation over 3 months of storage at 4 °C. SEM showed that the clarithromycin nanocrystals were cubic in shape, similar to the initial particle morphology. The DSC thermogram and X-ray diffraction pattern of the nanocrystals did not differ from those of the original drug except in peak intensity, indicating that the nanocrystals were present in the crystalline state and/or a partially amorphous form. In addition, the dissolution of the clarithromycin nanocrystals was dramatically increased compared with the coarse clarithromycin. Copyright © 2013 Elsevier B.V. All rights reserved.
Isaac, Giorgis; Waldebäck, Monica; Eriksson, Ulla; Odham, Göran; Markides, Karin E
2005-07-13
The reliability and efficiency of the pressurized fluid extraction (PFE) technique for the extraction of total lipid content from cod, and the effect of sample treatment on the extraction efficiency, have been evaluated. The results were compared with two liquid-liquid extraction methods, the traditional and modified methods according to Jensen. Optimum conditions were found to be 2-propanol/n-hexane (65:35, v/v) as the first solvent and n-hexane/diethyl ether (90:10, v/v) as the second, 115 degrees C, and 10 min of static time. PFE extracts were cleaned up using the same procedure as in the methods according to Jensen. When total lipid yields obtained from homogenized cod muscle using PFE were compared with yields obtained with the original and modified Jensen methods, PFE gave significantly higher yields, approximately 10% higher (t test, P < 0.05). Infrared and NMR spectroscopy suggested that the additional material that inflates the gravimetric results is rather homogeneous and primarily consists of phospholipids with headgroups of inositidic and/or glycosidic nature. The comparative study demonstrated that PFE is a suitable alternative technique for extracting total lipid content from homogenized cod (lean fish) and herring (fat fish) muscle, showing a precision comparable to that obtained with the traditional and modified Jensen methods. Despite the necessary cleanup step, PFE showed important advantages: solvent consumption was cut by approximately 50% and automated extraction was possible.
Mechanical Homogenization Increases Bacterial Homogeneity in Sputum
Stokell, Joshua R.; Khan, Ammad
2014-01-01
Sputum obtained from patients with cystic fibrosis (CF) is highly viscous and often heterogeneous in bacterial distribution. Adding dithiothreitol (DTT) is the standard method for liquefaction prior to processing sputum for molecular detection assays. To determine if DTT treatment homogenizes the bacterial distribution within sputum, we measured the difference in mean total bacterial abundance and abundance of Burkholderia multivorans between aliquots of DTT-treated sputum samples with and without a mechanical homogenization (MH) step using a high-speed dispersing element. Additionally, we measured the effect of MH on bacterial abundance. We found a significant difference between the mean bacterial abundances in aliquots that were subjected to only DTT treatment and those of the aliquots which included an MH step (all bacteria, P = 0.04; B. multivorans, P = 0.05). There was no significant effect of MH on bacterial abundance in sputum. Although our results are from a single CF patient, they indicate that mechanical homogenization increases the homogeneity of bacteria in sputum. PMID:24759710
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
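A distinctive feature of homogenizing ecological diffusion (u_t = Laplacian(mu(x) u), with motility mu varying over the small scale) is that, in one dimension, the large-scale coefficient comes out as a harmonic-type average of the small-scale motilities rather than the arithmetic mean, so slow-movement patches dominate. A minimal numerical sketch; the motility values and habitat layout are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Hypothetical small-scale motility map: fast movement in open habitat,
# slow movement in forage patches (values are illustrative only, m^2/day).
motility = np.array([100.0, 100.0, 5.0, 5.0, 100.0, 5.0])

# Arithmetic mean (naive averaging) versus the harmonic mean that arises
# when homogenizing ecological diffusion u_t = (mu(x) u)_xx in one dimension.
arithmetic = motility.mean()
harmonic = len(motility) / np.sum(1.0 / motility)

print(arithmetic)  # 52.5
print(harmonic)    # ~9.52: slow patches dominate the large-scale rate
```

The order-of-magnitude gap between the two averages is the point: ignoring small-scale habitat structure by arithmetic averaging would overstate large-scale spread by roughly a factor of five in this toy layout.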
Capoor, Manu N.; Ruzicka, Filip; Machackova, Tana; Jancalek, Radim; Smrcka, Martin; Schmitz, Jonathan E.; Hermanova, Marketa; Sana, Jiri; Michu, Elleni; Baird, John C.; Ahmed, Fahad S.; Maca, Karel; Lipina, Radim; Alamin, Todd F.; Coscia, Michael F.; Stonemetz, Jerry L.; Witham, Timothy; Ehrlich, Garth D.; Gokaslan, Ziya L.; Mavrommatis, Konstantinos; Birkenmaier, Christof; Fischetti, Vincent A.; Slaby, Ondrej
2016-01-01
Background The relationship between intervertebral disc degeneration and chronic infection by Propionibacterium acnes is controversial, with contradictory evidence available in the literature. Previous studies investigating these relationships were under-powered and fraught with methodological differences; moreover, they did not take into consideration P. acnes’ ability to form biofilms or attempt to quantitate the bioburden with regard to determining bacterial counts/genome equivalents as criteria to differentiate true infection from contamination. The aim of this prospective cross-sectional study was to determine the prevalence of P. acnes in patients undergoing lumbar disc microdiscectomy. Methods and Findings The sample consisted of 290 adult patients undergoing lumbar microdiscectomy for symptomatic lumbar disc herniation. An intraoperative biopsy and pre-operative clinical data were taken in all cases. One biopsy fragment was homogenized and used for quantitative anaerobic culture and a second was frozen and used for real-time PCR-based quantification of P. acnes genomes. P. acnes was identified in 115 cases (40%), coagulase-negative staphylococci in 31 cases (11%) and alpha-hemolytic streptococci in 8 cases (3%). P. acnes counts ranged from 100 to 9000 CFU/ml with a median of 400 CFU/ml. The prevalence of intervertebral discs with abundant P. acnes (≥ 1×10³ CFU/ml) was 11% (39 cases). There was a significant correlation between the bacterial counts obtained by culture and the number of P. acnes genomes detected by real-time PCR (r = 0.4363, p<0.0001). Conclusions In a large series of patients, the prevalence of discs with abundant P. acnes was 11%. We believe that disc tissue homogenization releases P. acnes from the biofilm so that they can then potentially be cultured, reducing the rate of false-negative cultures.
Further, the quantification study, revealing significant bioburden by both culture and real-time PCR, minimizes the likelihood that the observed findings are due to contamination and supports the hypothesis that P. acnes acts as a pathogen in these cases of degenerative disc disease. PMID:27536784
21 CFR 1.24 - Exemptions from required label statements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... skimmed milk, vitamin D milk and milk products, fortified milk and milk products, homogenized milk... food package shall be exempt from regulations of section 403 (e)(1), (g)(2), (i)(2), (k), and (q) of...
21 CFR 1.24 - Exemptions from required label statements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... skimmed milk, vitamin D milk and milk products, fortified milk and milk products, homogenized milk... food package shall be exempt from regulations of section 403 (e)(1), (g)(2), (i)(2), (k), and (q) of...
21 CFR 1.24 - Exemptions from required label statements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... skimmed milk, vitamin D milk and milk products, fortified milk and milk products, homogenized milk... food package shall be exempt from regulations of section 403 (e)(1), (g)(2), (i)(2), (k), and (q) of...
Investigation of methods for hydroclimatic data homogenization
NASA Astrophysics Data System (ADS)
Steirou, E.; Koutsoyiannis, D.
2012-04-01
We investigate the methods used for the adjustment of inhomogeneities in temperature time series covering the last 100 years. Based on a systematic study of the scientific literature, we classify and evaluate the observed inhomogeneities in historical and modern time series, as well as their adjustment methods. It turns out that these methods are mainly statistical, not well justified by experiments, and rarely supported by metadata. In many of the cases studied, the proposed corrections are not even statistically significant. From the global database GHCN-Monthly Version 2, we examine all stations containing both raw and adjusted data that satisfy certain criteria of continuity and distribution over the globe. In the United States of America, because of the large number of available stations, stations were chosen after a suitable sampling. In total we analyzed 181 stations globally. For these stations we calculated the differences between the adjusted and non-adjusted linear 100-year trends. It was found that in two-thirds of the cases, the homogenization procedure increased the positive or decreased the negative temperature trends. One of the most common homogenization methods, 'SNHT for single shifts', was applied to synthetic time series with selected statistical characteristics, occasionally with offsets. The method was satisfactory when applied to normally distributed independent data, but not to data with long-term persistence. The above results cast some doubt on the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.
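The 'SNHT for single shifts' test mentioned above has a compact form: after standardizing the series, the statistic T(a) = a*z1bar^2 + (n-a)*z2bar^2 is computed for each candidate break point a, where z1bar and z2bar are the means of the standardized values before and after the break; the break is placed where T is maximal. A minimal sketch (the synthetic series is illustrative, and the critical values needed for significance testing are omitted):

```python
import numpy as np

def snht_single_shift(x):
    """Standard normal homogeneity test statistic for a single shift
    (Alexandersson-style; a minimal sketch without significance levels)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = (x - x.mean()) / x.std()  # standardize the series
    # T(a) = a*mean(z[:a])^2 + (n-a)*mean(z[a:])^2 for each break candidate a
    t = np.array([a * z[:a].mean() ** 2 + (n - a) * z[a:].mean() ** 2
                  for a in range(1, n)])
    a_best = int(np.argmax(t)) + 1
    return t.max(), a_best  # statistic and most likely break position

rng = np.random.default_rng(0)
# synthetic series with an offset of 2 standard deviations inserted at index 50
series = np.concatenate([rng.normal(0, 1, 50), rng.normal(2, 1, 50)])
t_max, break_at = snht_single_shift(series)
print(break_at)  # the shift was inserted at index 50
```

On long-term-persistent data the same statistic produces spuriously large values, which is the failure mode the abstract reports.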
Homogeneity and internal defects detect of infrared Se-based chalcogenide glass
NASA Astrophysics Data System (ADS)
Li, Zupana; Wu, Ligang; Lin, Changgui; Song, Bao'an; Wang, Xunsi; Shen, Xiang; Dai, Shixunb
2011-10-01
Ge-Sb-Se chalcogenide glass is an excellent infrared optical material that is environmentally friendly and widely used in infrared thermal imaging systems. However, owing to the opacity of Se-based glasses in the visible spectral region, it is difficult to measure their homogeneity and internal defects as is done for common oxide glasses. In this study, a measurement method based on near-IR imaging was proposed to observe the homogeneity and internal defects of these glasses, and an effective measurement system was constructed. The test results indicate that the method gives clear and intuitive information on the homogeneity and internal defects of infrared Se-based chalcogenide glass.
Nebbak, A; El Hamzaoui, B; Berenger, J-M; Bitam, I; Raoult, D; Almeras, L; Parola, P
2017-12-01
Ticks and fleas are vectors for numerous human and animal pathogens. Controlling them, which is important in combating such diseases, requires accurate identification, to distinguish between vector and non-vector species. Recently, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was applied to the rapid identification of arthropods. The growth of this promising tool, however, requires guidelines to be established. To this end, standardization protocols were applied to species of Rhipicephalus sanguineus (Ixodida: Ixodidae) Latreille and Ctenocephalides felis felis (Siphonaptera: Pulicidae) Bouché, including the automation of sample homogenization using two homogenizer devices, and varied sample preservation modes for a period of 1-6 months. The MS spectra were then compared with those obtained from manual pestle grinding, the standard homogenization method. Both automated methods generated intense, reproducible MS spectra from fresh specimens. Frozen storage methods appeared to represent the best preservation mode, for up to 6 months, while storage in ethanol is also possible, with some caveats for tick specimens. Carnoy's buffer, however, was shown to be less compatible with MS analysis for the purpose of identifying ticks or fleas. These standard protocols for MALDI-TOF MS arthropod identification should be complemented by additional MS spectrum quality controls, to generalize their use in monitoring arthropods of medical interest. © 2017 The Royal Entomological Society.
NASA Technical Reports Server (NTRS)
Illarionov, A.; Kallman, T.; Mccray, R.; Ross, R.
1979-01-01
A method is described for calculating the spectrum that results from the Compton scattering of a monochromatic source of X-rays by low-temperature electrons, both for initial-value relaxation problems and for steady-state spatial diffusion problems. The method gives an exact solution of the initial-value problem for evolution of the spectrum in an infinite homogeneous medium if Klein-Nishina corrections to the Thomson cross section are neglected. This, together with approximate solutions for problems in which Klein-Nishina corrections are significant and/or spatial diffusion occurs, shows spectral structure near the original photon wavelength that may be used to infer physical conditions in cosmic X-ray sources. Explicit results, shown for examples of time relaxation in an infinite medium and spatial diffusion through a uniform sphere, are compared with results obtained by Monte Carlo calculations and by solving the appropriate Fokker-Planck equation.
NASA Astrophysics Data System (ADS)
Singh, Harendra
2018-04-01
The key purpose of this article is to introduce an efficient computational method for the approximate solution of homogeneous as well as non-homogeneous nonlinear Lane-Emden type equations. Using the proposed computational method, the given nonlinear equation is converted into a set of nonlinear algebraic equations whose solution gives the approximate solution of the Lane-Emden type equation. Various nonlinear cases of Lane-Emden type equations, such as the standard Lane-Emden equation, the isothermal gas sphere equation and the white-dwarf equation, are discussed. Results are compared with some well-known numerical methods, and it is observed that our results are more accurate.
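The operational method of the paper is not reproduced here; as a baseline of the kind such methods are compared against, the standard Lane-Emden equation theta'' + (2/xi) theta' + theta^n = 0 with theta(0) = 1, theta'(0) = 0 can be integrated by a classical fourth-order Runge-Kutta sweep. For n = 1 the exact solution is theta(xi) = sin(xi)/xi, which gives a convenient accuracy check (the step size and integration range below are arbitrary choices):

```python
import math

def lane_emden(n, xi_max=3.0, h=1e-4):
    """Integrate theta'' + (2/xi)*theta' + theta^n = 0, theta(0)=1,
    theta'(0)=0, with plain RK4 (a baseline sketch, not the
    operational method of the paper); returns theta(xi_max)."""
    theta, dtheta = 1.0, 0.0
    xi = 1e-6  # start slightly off zero to avoid the 2/xi singularity

    def f(xi, y, dy):
        return -2.0 / xi * dy - max(y, 0.0) ** n

    while xi < xi_max:
        k1y, k1d = dtheta, f(xi, theta, dtheta)
        k2y, k2d = dtheta + h/2*k1d, f(xi + h/2, theta + h/2*k1y, dtheta + h/2*k1d)
        k3y, k3d = dtheta + h/2*k2d, f(xi + h/2, theta + h/2*k2y, dtheta + h/2*k2d)
        k4y, k4d = dtheta + h*k3d, f(xi + h, theta + h*k3y, dtheta + h*k3d)
        theta += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        dtheta += h/6 * (k1d + 2*k2d + 2*k3d + k4d)
        xi += h
    return theta

# For n = 1 the exact solution is sin(xi)/xi, so compare at xi = 3.
approx = lane_emden(1, xi_max=3.0)
exact = math.sin(3.0) / 3.0
print(abs(approx - exact))  # small error
```

The `max(y, 0.0)` guard keeps fractional powers of theta real once the solution crosses zero, which matters for non-integer polytropic indices.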
Di Marzo, Larissa; Cree, Patrick; Barbano, David M
2016-11-01
Our objective was to develop partial least squares (PLS) models using data from Fourier transform mid-infrared (MIR) spectra to predict the particle size distribution parameters d(0.5) and d(0.9), surface volume mean diameter D[3,2], and volume moment mean diameter D[4,3] of milk fat globules, and to validate the models. The goal of the study was to produce a method built into the MIR milk analyzer that could be used to warn the instrument operator that the homogenizer is near failure and needs to be replaced to ensure quality of results. Five homogenizers with different homogenization efficiencies were used to homogenize pasteurized modified unhomogenized milks and farm raw bulk milks. Homogenized milks were collected from the homogenizer outlet and then run through an MIR milk analyzer without an in-line homogenizer to collect an MIR spectrum. A separate portion of each homogenized milk was analyzed with a laser light-scattering particle size analyzer to obtain reference values. The study was replicated 3 times with 3 independent sets of modified milks and bulk tank farm milks. Validation of the models was done with a set of 34 milks that were not used in model development. Partial least squares regression models were developed and validated for predicting the following milk fat globule particle size distribution parameters from MIR spectra: d(0.5), d(0.9), surface volume mean diameter D[3,2], and volume moment mean diameter D[4,3]. The ability to model the particle size distribution of milk fat emulsions was hypothesized to result from the PLS modeling detecting absorbance shifts in the MIR spectra of milk fat due to the Christiansen effect. The independent-sample validation of the particle size prediction methods found more variation in the d(0.9) and D[4,3] predictions than in the d(0.5) and D[3,2] predictions relative to laser light-scattering reference values, which may be due to variation in particle size among different pump strokes.
The accuracy of the d(0.9) prediction is fit for purpose for routine quality assurance, i.e., for determining whether the homogenizer within an MIR milk analyzer is near the failure level [d(0.9) > 1.7 µm] and needs to be replaced. The daily average particle size performance [i.e., the mean d(0.9) for the day] of a homogenizer could be used for monitoring homogenizer performance. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
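The pipeline described above, PLS regression from MIR spectra to a particle-size parameter such as d(0.9), can be sketched with a small textbook PLS1 (NIPALS) implementation on synthetic data. Everything below is fabricated for illustration: the "spectra" are a single Gaussian band whose amplitude scales with a hypothetical d(0.9) value, which is far simpler than real milk spectra and the Christiansen-effect absorbance shifts the authors invoke.

```python
import numpy as np

def pls1_fit(X, y, ncomp):
    """Fit univariate PLS regression via NIPALS (PLS1); returns a predictor.
    A textbook sketch, not the calibration software used in the study."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)                 # weight vector
        t = Xk @ w                             # scores
        tt = t @ t
        p = Xk.T @ t / tt                      # X loadings
        W.append(w); P.append(p); q.append((yk @ t) / tt)
        Xk = Xk - np.outer(t, p)               # deflate X
        yk = yk - q[-1] * t                    # deflate y
    W, P, q = np.column_stack(W), np.column_stack(P), np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)        # regression vector
    return lambda Xnew: (Xnew - x_mean) @ b + y_mean

# Fabricated data: band amplitude proportional to a hypothetical d(0.9).
rng = np.random.default_rng(1)
d09 = rng.uniform(0.5, 2.0, size=60)           # "particle size", micrometres
grid = np.linspace(0.0, 1.0, 200)
band = np.exp(-((grid - 0.5) ** 2) / 0.01)
spectra = d09[:, None] * band[None, :] + 0.01 * rng.normal(size=(60, 200))

predict = pls1_fit(spectra[:40], d09[:40], ncomp=2)   # calibration set
rmse = float(np.sqrt(np.mean((predict(spectra[40:]) - d09[40:]) ** 2)))
print(round(rmse, 3))  # small relative to the 0.5-2.0 range
```

PLS is preferred over ordinary least squares here for the same reason as in the study's setting: the spectra have far more wavelengths than there are samples, and PLS compresses them into a few latent components correlated with the response.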
Karumendu, L U; Ven, R van de; Kerr, M J; Lanza, M; Hopkins, D L
2009-08-01
The impact of homogenization speed on particle size (PS) results was examined using samples from the M. longissimus thoracis et lumborum (LL) of 40 lambs. One gram duplicate samples from meat aged for 1 and 5 days were homogenized at five different speeds: 11,000, 13,000, 16,000, 19,000 and 22,000 rpm. In addition, LL samples from 30 different lamb carcases, also aged for 1 and 5 days, were used to compare PS and myofibrillar fragmentation index (MFI) values. In this case, 1 g duplicate samples (n=30) were homogenized at 16,000 rpm and the other half (0.5 g samples) at 11,000 rpm (n=30). The homogenates were then subjected to respective combinations of treatments which included either PS analysis or the determination of MFI, both with or without three cycles of centrifugation. All 140 samples of LL included 65 g blocks for subsequent shear force (SF) testing. Homogenization at 16,000 rpm provided the greatest ability to detect ageing differences in particle size between samples aged for 1 and 5 days. Particle size at the 25% quantile provided the best result for detecting differences due to ageing. It was observed that as ageing increased the mean PS decreased, being significantly (P<0.001) lower for 5-day aged samples than for 1-day aged samples, while MFI values significantly increased (P<0.001) as the ageing period increased. When comparing the PS and MFI methods it became apparent that, as opposed to the MFI method, there was a greater coefficient of variation for the PS method, which warranted a quality assurance system. Given this requirement and examination of the mean, standard deviation and the 25% quantile for the PS data, it was concluded that three cycles of centrifugation were not necessary, and this also applied to the MFI method. There were significant correlations (P<0.001) within the same lamb loin sample aged for a given period between mean MFI and mean PS (-0.53), mean MFI and mean SF (-0.38), and mean PS and mean SF (0.23).
It was concluded that PS analysis offers significant potential for streamlining the determination of myofibrillar degradation when samples are measured after homogenization at 16,000 rpm with no centrifugation.
NASA Astrophysics Data System (ADS)
Kumar, Rakesh; Levin, Deborah A.
2011-03-01
In the present work, we have simulated the homogeneous condensation of carbon dioxide and ethanol using the Bhatnagar-Gross-Krook based approach. In an earlier work of Gallagher-Rogers et al. [J. Thermophys. Heat Transfer 22, 695 (2008)], it was found that it was not possible to simulate condensation experiments of Wegener et al. [Phys. Fluids 15, 1869 (1972)] using the direct simulation Monte Carlo method. Therefore, in this work, we have used the statistical Bhatnagar-Gross-Krook approach, which was found to be numerically more efficient than direct simulation Monte Carlo method in our previous studies [Kumar et al., AIAA J. 48, 1531 (2010)], to model homogeneous condensation of two small polyatomic systems, carbon dioxide and ethanol. A new weighting scheme is developed in the Bhatnagar-Gross-Krook framework to reduce the computational load associated with the study of homogeneous condensation flows. The solutions obtained by the use of the new scheme are compared with those obtained by the baseline Bhatnagar-Gross-Krook condensation model (without the species weighting scheme) for the condensing flow of carbon dioxide in the stagnation pressure range of 1-5 bars. Use of the new weighting scheme in the present work makes the simulation of homogeneous condensation of ethanol possible. We obtain good agreement between our simulated predictions for homogeneous condensation of ethanol and experiments in terms of the point of condensation onset and the distribution of mass fraction of ethanol condensed along the nozzle centerline.
Isolation of biologically active nanomaterial (inclusion bodies) from bacterial cells
2010-01-01
Background In recent years bacterial inclusion bodies (IBs) were recognised as highly pure deposits of active proteins inside bacterial cells. Such active nanoparticles are very interesting for further downstream protein isolation, as well as for many other applications in nanomedicine, cosmetic, chemical and pharmaceutical industry. To prepare large quantities of a high quality product, the whole bioprocess has to be optimised. This includes not only the cultivation of the bacterial culture, but also the isolation step itself, which can be of critical importance for the production process. To determine the most appropriate method for the isolation of biologically active nanoparticles, three methods for bacterial cell disruption were analyzed. Results In this study, enzymatic lysis and two mechanical methods, high-pressure homogenization and sonication, were compared. During enzymatic lysis the enzyme lysozyme was found to attach to the surface of IBs, and it could not be removed by simple washing. As this represents an additional impurity in the engineered nanoparticles, we concluded that enzymatic lysis is not the most suitable method for IBs isolation. During sonication proteins are released (lost) from the surface of IBs and thus the surface of IBs appears more porous when compared to the other two methods. We also found that the acoustic output power needed to isolate the IBs from bacterial cells actually damages protein structures, thereby causing a reduction in biological activity. High-pressure homogenization also caused some damage to IBs, however the protein loss from the IBs was negligible. Furthermore, homogenization had no side-effects on protein biological activity. Conclusions The study shows that among the three methods tested, homogenization is the most appropriate method for the isolation of active nanoparticles from bacterial cells. PMID:20831775
Isolation of biologically active nanomaterial (inclusion bodies) from bacterial cells.
Peternel, Spela; Komel, Radovan
2010-09-10
In recent years bacterial inclusion bodies (IBs) were recognised as highly pure deposits of active proteins inside bacterial cells. Such active nanoparticles are very interesting for further downstream protein isolation, as well as for many other applications in nanomedicine and in the cosmetic, chemical and pharmaceutical industries. To prepare large quantities of a high quality product, the whole bioprocess has to be optimised. This includes not only the cultivation of the bacterial culture, but also the isolation step itself, which can be of critical importance for the production process. To determine the most appropriate method for the isolation of biologically active nanoparticles, three methods for bacterial cell disruption were analyzed. In this study, enzymatic lysis and two mechanical methods, high-pressure homogenization and sonication, were compared. During enzymatic lysis the enzyme lysozyme was found to attach to the surface of IBs, and it could not be removed by simple washing. As this represents an additional impurity in the engineered nanoparticles, we concluded that enzymatic lysis is not the most suitable method for IB isolation. During sonication proteins are released (lost) from the surface of IBs, and thus the surface of IBs appears more porous when compared to the other two methods. We also found that the acoustic output power needed to isolate the IBs from bacterial cells actually damages protein structures, thereby causing a reduction in biological activity. High-pressure homogenization also caused some damage to IBs; however, the protein loss from the IBs was negligible. Furthermore, homogenization had no side-effects on protein biological activity. The study shows that among the three methods tested, homogenization is the most appropriate method for the isolation of active nanoparticles from bacterial cells.
A new method for depth profiling reconstruction in confocal microscopy
NASA Astrophysics Data System (ADS)
Esposito, Rosario; Scherillo, Giuseppe; Mensitieri, Giuseppe
2018-05-01
Confocal microscopy is commonly used to reconstruct depth profiles of chemical species in multicomponent systems and to image nuclear and cellular details in human tissues via image intensity measurements of optical sections. However, the performance of this technique is reduced by inherent effects related to wave diffraction phenomena, refractive index mismatch and finite beam spot size. All these effects distort the optical wave and cause an image to be captured of a small volume around the desired illuminated focal point within the specimen rather than an image of the focal point itself. The size of this small volume increases with depth, thus causing a further loss of resolution and distortion of the profile. Recently, we proposed a theoretical model that accounts for the above wave distortion and allows for a correct reconstruction of the depth profiles for homogeneous samples. In this paper, this theoretical approach has been adapted for describing the profiles measured from non-homogeneous distributions of emitters inside the investigated samples. The intensity image is built by summing the intensities collected from each of the emitters planes belonging to the illuminated volume, weighed by the emitters concentration. The true distribution of the emitters concentration is recovered by a new approach that implements this theoretical model in a numerical algorithm based on the Maximum Entropy Method. Comparisons with experimental data and numerical simulations show that this new approach is able to recover the real unknown concentration distribution from experimental profiles with an accuracy better than 3%.
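The forward model described in this abstract, an intensity profile built by summing per-plane responses weighted by the emitter concentration, with resolution degrading with depth, can be sketched as follows. The Gaussian response shape, its depth-broadening law, and all numbers are illustrative assumptions, not the authors' actual optical model.

```python
import math

# Sketch: measured intensity at each focal depth is the concentration-weighted
# sum of per-plane responses; the response width grows with depth (assumed law).
def response(z_focus, z_plane):
    sigma = 0.5 + 0.1 * z_focus            # resolution loss with depth (assumed)
    u = (z_plane - z_focus) / sigma
    return math.exp(-0.5 * u * u) / (sigma * math.sqrt(2.0 * math.pi))

dz = 0.5
z = [dz * k for k in range(40)]            # depth grid (um)
conc = [1.0 if 5.0 <= zj <= 10.0 else 0.0 for zj in z]   # true emitter profile

# Forward model: I(z_focus) = sum_planes response * concentration * dz
intensity = [dz * sum(response(zf, zj) * cj for zj, cj in zip(z, conc))
             for zf in z]
```

Recovering `conc` from `intensity` is the inverse problem the authors solve with a Maximum Entropy regularization; the snippet only shows why the measured profile is a blurred, depth-dependent image of the true distribution.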
Two-scale homogenization to determine effective parameters of thin metallic-structured films
Marigo, Jean-Jacques
2016-01-01
We present a homogenization method based on matched asymptotic expansion technique to derive effective transmission conditions of thin structured films. The method leads unambiguously to effective parameters of the interface which define jump conditions or boundary conditions at an equivalent zero thickness interface. The homogenized interface model is presented in the context of electromagnetic waves for metallic inclusions associated with Neumann or Dirichlet boundary conditions for transverse electric or transverse magnetic wave polarization. By comparison with full-wave simulations, the model is shown to be valid for thin interfaces up to thicknesses close to the wavelength. We also compare our effective conditions with the two-sided impedance conditions obtained in transmission line theory and to the so-called generalized sheet transition conditions. PMID:27616916
Colloidal nanocrystals and method of making
Kahen, Keith
2015-10-06
A tight confinement nanocrystal comprises a homogeneous center region having a first composition and a smoothly varying region having a second composition wherein a confining potential barrier monotonically increases and then monotonically decreases as the smoothly varying region extends from the surface of the homogeneous center region to an outer surface of the nanocrystal. A method of producing the nanocrystal comprises forming a first solution by combining a solvent and at most two nanocrystal precursors; heating the first solution to a nucleation temperature; adding to the first solution, a second solution having a solvent, at least one additional and different precursor to form the homogeneous center region and at most an initial portion of the smoothly varying region; and lowering the solution temperature to a growth temperature to complete growth of the smoothly varying region.
Dec, John E [Livermore, CA; Sjoberg, Carl-Magnus G [Livermore, CA
2006-10-31
A method for slowing the heat-release rate in homogeneous charge compression ignition ("HCCI") engines that allows operation without excessive knock at higher engine loads than are possible with conventional HCCI. This method comprises injecting a fuel charge in a manner that creates a stratified fuel charge in the engine cylinder to provide a range of fuel concentrations in the in-cylinder gases (typically with enough oxygen for complete combustion), using a two-stage-ignition fuel with appropriate cool-flame chemistry so that regions of different fuel concentrations autoignite sequentially.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Yinchang, E-mail: ycdu@mail.ustc.edu.cn; Max-Planck Institute for Extraterrestrial Physics, D-85748 Garching; Li, Yangfang
In this paper, we propose a method to obtain a more homogeneous plasma in a geometrically asymmetric capacitively coupled plasma (CCP) discharge. A dielectric barrier discharge (DBD) is used as an auxiliary discharge to improve the homogeneity of the geometrically asymmetric CCP discharge. Single Langmuir probe measurements show that the DBD can increase the electron density in the low-density volume, where the DBD electrodes are mounted, when the pressure is higher than 5 Pa. In this manner, we are able to improve the homogeneity of the plasma production and increase the overall density in the target volume. Finally, finite element simulation results show that a DC bias applied to the DBD electrodes can increase the homogeneity of the electron density in the CCP discharge. The simulation results show good agreement with the experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Ramazan Sonat; Hummel, Andrew John; Hiruta, Hikaru
Deterministic full-core simulators require homogenized group constants covering the operating and transient conditions over the entire lifetime. Traditionally, the homogenized group constants are generated using a lattice physics code over an assembly or, in the case of prismatic high-temperature reactors (HTRs), a block. Strong absorbers that cause strong local depressions in the flux profile require special techniques during homogenization over a large volume; fuel blocks with burnable poisons and control rod blocks are examples of such cases. Over the past several decades, a tremendous number of studies have been performed to improve the accuracy of full-core calculations through the homogenization procedure. However, those studies were mostly performed for light water reactor (LWR) analyses and thus may not be directly applicable to advanced thermal reactors such as HTRs. This report presents the application of the SuPer-Homogenization correction method to a hypothetical HTR core.
Cavegn, Martin; Douglas, Ryan; Akkermans, Guy; Kuentz, Martin
2011-08-01
There are currently no adequate process analyzers for nanoparticulate viscosity enhancers. This article aims to evaluate ultrasonic resonator technology as a monitoring tool for homogenization of nanoparticulate gels. Aqueous dispersions of colloidal microcrystalline cellulose (MCC) and a mixture of clay particles with xanthan gum were compared with colloidal silicon dioxide in oil. The processing was conducted using a laboratory-scale homogenizing vessel. The study first investigated the homogenization kinetics of the different systems and then focused on process factors in the case of colloidal MCC. Moreover, rheological properties were analyzed offline to assess the structure of the resulting gels. Results showed the suitability of ultrasound velocimetry to monitor the homogenization process. The obtained data were fitted using a novel heuristic model. It was possible to identify characteristic homogenization times for each formulation. The subsequent study of the process factors demonstrated that ultrasonic process analysis was equally sensitive as offline rheological measurements in detecting subtle manufacturing changes. It can be concluded that the ultrasonic method was able to successfully assess homogenization of nanoparticulate viscosity enhancers. This novel technique can become a vital tool for development and production of pharmaceutical suspensions in the future. Copyright © 2011 Wiley-Liss, Inc.
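The identification of a characteristic homogenization time from in-line ultrasound data can be illustrated with a toy fit. The paper's heuristic model is not reproduced here; this sketch assumes a simple first-order relaxation of sound velocity toward a plateau, and all readings are invented.

```python
import math

# Hypothetical ultrasound sound-velocity readings (m/s) during homogenization.
# Assumed model: v(t) = v_inf + (v0 - v_inf) * exp(-t / tau).
times = [0, 2, 4, 6, 8, 10, 12]                      # minutes
v_inf = 1500.0                                       # plateau velocity (assumed known)
vel = [1520.0, 1512.1, 1507.3, 1504.4, 1502.7, 1501.6, 1501.0]

# Linearize: ln(v - v_inf) = ln(v0 - v_inf) - t / tau, then least squares.
xs, ys = times, [math.log(v - v_inf) for v in vel]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
tau = -1.0 / slope              # characteristic homogenization time (minutes)
```

For these synthetic readings the fit recovers tau of about 4 minutes, the kind of per-formulation characteristic time the study reports.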
Topology and layout optimization of discrete and continuum structures
NASA Technical Reports Server (NTRS)
Bendsoe, Martin P.; Kikuchi, Noboru
1993-01-01
The basic features of the ground structure method for truss-structure and continuum problems are described. Problems with a large number of potential structural elements are considered using the compliance of the structure as the objective function. The design problem is the minimization of compliance for a given structural weight, and the design variables for truss problems are the cross-sectional areas of the individual truss members, while for continuum problems they are the variable densities of material in each of the elements of the FEM discretization. It is shown how homogenization theory can be applied to provide a relation between material density and the effective material properties of a periodic medium with a known microstructure of material and voids.
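For the truss case, the compliance/weight trade-off has a simple closed form when the member forces are statically determinate: minimizing C = Σ N_i² l_i / (E a_i) at fixed weight W = ρ Σ a_i l_i gives areas a_i proportional to |N_i|, i.e. fully stressed members. A minimal sketch with invented member forces (not from the paper):

```python
# Compliance-minimal sizing of a statically determinate truss at fixed weight.
# With member forces N_i fixed, a Lagrange-multiplier argument gives a_i ~ |N_i|.
E, rho, W_target = 210e9, 7850.0, 50.0    # steel (Pa, kg/m^3), mass budget (kg)
members = [                               # (axial force N_i [N], length l_i [m])
    (40e3, 2.0),
    (-30e3, 1.5),
    (20e3, 2.5),
]

# Scale factor k chosen so the optimal areas a_i = k |N_i| meet the weight budget.
k = W_target / (rho * sum(abs(N) * l for N, l in members))
areas = [k * abs(N) for N, _ in members]
compliance = sum(N * N * l / (E * a) for (N, l), a in zip(members, areas))
```

A quick check of the optimality condition: every member carries the same stress magnitude |N_i| / a_i = 1/k, which is the classical fully stressed design.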
PHYSICAL PROPERTIES OF ZIRCONIUM NITRIDE IN THE HOMOGENEITY REGION (in Ukrainian)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samsonov, G.V.; Verkhoglyadova, T.S.
1962-01-01
The x-ray method was used to determine the homogeneity region of zirconium nitride as 40 to 50 at.% (9.5 to 13.3% by weight) of nitrogen. It is also shown that the ionic contribution to bonding in the zirconium nitride lattice increases with decreasing nitrogen content in this region, this increase being higher than in the homogeneity region of titanium nitride due to the smaller degree of unfilling of the electron d-shell of the zirconium atom in comparison with that of the titanium atom. (auth)
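The quoted composition limits can be cross-checked by converting atomic percent nitrogen to weight percent with standard atomic masses; the 50 at.% bound reproduces the quoted 13.3 wt.% value.

```python
# Convert atomic fraction of N in the Zr-N system to weight percent.
M_N, M_Zr = 14.007, 91.224  # standard atomic masses, g/mol

def at_to_wt(at_frac_N):
    m_N = at_frac_N * M_N
    m_Zr = (1.0 - at_frac_N) * M_Zr
    return 100.0 * m_N / (m_N + m_Zr)

lo, hi = at_to_wt(0.40), at_to_wt(0.50)   # ~9.3 and ~13.3 wt.% nitrogen
```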
Topology optimization based design of unilateral NMR for generating a remote homogeneous field.
Wang, Qi; Gao, Renjing; Liu, Shutian
2017-06-01
This paper presents a topology-optimization-based design method for unilateral nuclear magnetic resonance (NMR), with which a remote homogeneous field can be obtained. The topology optimization is realized by seeking the optimal layout of ferromagnetic materials within a given design domain. The design objective is defined as generating a sensitive magnetic field with optimal homogeneity and maximal field strength within a required region of interest (ROI). The sensitivity of the objective function with respect to the design variables is derived and the method for solving the optimization problem is presented. A design example is provided to illustrate the utility of the design method, specifically the ability to improve the quality of the magnetic field over the required ROI by determining the optimal structural topology for the ferromagnetic poles. In both simulations and experiments, the sensitive region of the magnetic field is about two times larger than that of the reference design, validating the feasibility of the design method. Copyright © 2017. Published by Elsevier Inc.
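A homogeneity objective of the kind described above is often quantified as the peak-to-peak field deviation over the ROI relative to the mean, expressed in ppm. A minimal sketch with invented field samples follows; the paper's exact objective definition may differ.

```python
# Peak-to-peak field homogeneity over a region of interest, in parts per million.
def homogeneity_ppm(field_samples):
    mean = sum(field_samples) / len(field_samples)
    return 1e6 * (max(field_samples) - min(field_samples)) / mean

roi_B = [0.1500, 0.1503, 0.1498, 0.1501, 0.1499]   # tesla, sampled over the ROI
h = homogeneity_ppm(roi_B)   # a few thousand ppm for this coarse toy field
```

An optimizer of the kind the paper describes would drive `h` down while simultaneously rewarding a larger mean field over the ROI.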
NASA Astrophysics Data System (ADS)
Luo, Yajun; Zhang, Zhifeng; Li, Bao; Gao, Mingwei; Qiu, Yang; He, Min
2017-12-01
To obtain a large-sized, high-quality aluminum alloy billet, an advanced uniform direct chill (UDC) casting method was developed by combining annular electromagnetic stirring (A-EMS) with intercooling in the sump. The 7005 alloy was chosen to investigate the effect of UDC on grain refinement and homogeneity during normal direct chill (NDC) casting. It was concluded that the microstructure consisting of both primary α-Al phase and secondary phases becomes finer and more homogeneous for the billets prepared with UDC casting compared to those prepared with NDC casting, and the forced cooling from both the inner and outer melt under A-EMS has a measurable effect on grain refinement and homogeneity.
USDA-ARS?s Scientific Manuscript database
The RVR Meander platform for computing long-term meandering-channel migration is presented, together with a method for planform migration based on the modeling of the streambank erosion processes of hydraulic erosion and mass failure. An application to a real-world river, with assumption of homogene...
Homogeneous Grouping in the Context of High-Stakes Testing: Does It Improve Reading Achievement?
ERIC Educational Resources Information Center
Salcedo-Gonzalez, Trena
2012-01-01
As accountability reform intensifies, urban school districts strive to meet No Child Left Behind mandates to avoid severe penalties. This study investigated the resurgence of homogeneous grouping methods as a means to increase reading achievement and meet English Language Arts Adequate Yearly Progress requirements. Specifically, this study…
F-Test Alternatives to Fisher's Exact Test and to the Chi-Square Test of Homogeneity in 2x2 Tables.
ERIC Educational Resources Information Center
Overall, John E.; Starbuck, Robert R.
1983-01-01
An alternative to Fisher's exact test and the chi-square test for homogeneity in two-by-two tables is developed. The method provides for Type I error rates which are closer to the stated alpha level than either of the alternatives. (JKS)
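For reference, the Fisher exact test that the proposed F-test is compared against can be computed directly from the hypergeometric distribution. This sketch implements the common "sum of table probabilities no larger than the observed one" two-sided convention; the 1983 paper's F-test variant itself is not reproduced here.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]],
    summing hypergeometric probabilities no larger than the observed one."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2

    def p(x):  # P(X = x) under the hypergeometric null with fixed margins
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = p(a)
    lo_x, hi_x = max(0, c1 - r2), min(r1, c1)
    # Small tolerance guards against float ties when including equal-probability tables.
    return sum(p(x) for x in range(lo_x, hi_x + 1) if p(x) <= p_obs * (1 + 1e-12))
```

For example, the table [[8, 2], [1, 5]] gives a two-sided p-value of about 0.035 under this convention.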
A method to eliminate wetting during the homogenization of HgCdTe
NASA Technical Reports Server (NTRS)
Su, Ching-Hua; Lehoczky, S. L.; Szofran, F. R.
1986-01-01
Adhesion of HgCdTe samples to fused silica ampoule walls, or 'wetting', during the homogenization process was eliminated by adopting a slower heating rate. The idea is to decrease Cd activity in the sample so as to reduce the rate of reaction between Cd and the silica wall.
Optimal lattice-structured materials
Messner, Mark C.
2016-07-09
This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.
Hayat, T.; Hussain, Zakir; Alsaedi, A.; Farooq, M.
2016-01-01
This article examines the effects of homogeneous-heterogeneous reactions and Newtonian heating in magnetohydrodynamic (MHD) flow of Powell-Eyring fluid by a stretching cylinder. The nonlinear partial differential equations of momentum, energy and concentration are reduced to nonlinear ordinary differential equations. Convergent solutions of the momentum, energy and reaction equations are developed by using the homotopy analysis method (HAM). This method is very efficient for the development of series solutions of highly nonlinear differential equations. It does not depend on any small or large parameter, unlike other methods, i.e., the perturbation method, the δ-perturbation expansion method, etc. We obtain more accurate results as we increase the order of approximation. Effects of different parameters on the velocity, temperature and concentration distributions are sketched and discussed. Comparison of the present study with previously published work is also made in the limiting sense. Numerical values of the skin friction coefficient and Nusselt number are also computed and analyzed. It is noticed that the flow accelerates for large values of the Powell-Eyring fluid parameter. Further, the temperature profile decreases and the concentration profile increases when the Powell-Eyring fluid parameter is enhanced. The concentration distribution is a decreasing function of the homogeneous reaction parameter, while the opposite influence of the heterogeneous reaction parameter appears. PMID:27280883
NASA Astrophysics Data System (ADS)
O'Shea, Bethany; Jankowski, Jerzy
2006-12-01
The major ion composition of Great Artesian Basin groundwater in the lower Namoi River valley is relatively homogeneous in chemical composition. Traditional graphical techniques have been combined with multivariate statistical methods to determine whether subtle differences in the chemical composition of these waters can be delineated. Hierarchical cluster analysis and principal components analysis were successful in delineating minor variations within the groundwaters of the study area that were not visually identified in the graphical techniques applied. Hydrochemical interpretation allowed geochemical processes to be identified in each statistically defined water type and illustrated how these groundwaters differ from one another. Three main geochemical processes were identified in the groundwaters: ion exchange, precipitation, and mixing between waters from different sources. Both statistical methods delineated an anomalous sample suspected of being influenced by magmatic CO2 input. The use of statistical methods to complement traditional graphical techniques for waters appearing homogeneous is emphasized for all investigations of this type.
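Hierarchical cluster analysis of the kind applied above can be sketched in a few lines of single-linkage agglomeration. The samples and ion values below are invented for illustration, not the Namoi valley data.

```python
import math

# Toy water samples described by (Na+, Cl-, HCO3-) concentrations in meq/L.
samples = {
    "A": (8.1, 1.2, 6.8), "B": (8.3, 1.1, 7.0),   # Na-HCO3-type waters
    "C": (8.0, 1.3, 6.9),
    "D": (5.0, 4.8, 2.1), "E": (5.2, 4.6, 2.2),   # mixed Na-Cl-type waters
}

def linkage(ci, cj):
    # Single linkage: smallest pairwise Euclidean distance between clusters.
    return min(math.dist(samples[a], samples[b]) for a in ci for b in cj)

clusters = [{k} for k in samples]
while len(clusters) > 2:                   # agglomerate down to two water types
    i, j = min(
        ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
        key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]),
    )
    clusters[i] |= clusters.pop(j)
```

On this toy data the agglomeration separates the Na-HCO3-type samples {A, B, C} from the mixed-type samples {D, E}, the kind of subtle grouping the statistical treatment is meant to expose.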
NASA Astrophysics Data System (ADS)
Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic
2017-03-01
This work provides a unified treatment of arbitrary kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition either by the direct constraint elimination or by the Lagrange multiplier elimination methods. The macroscopic tangent operators are computed in an efficient way from a multiple right hand sides linear system whose left hand side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of vectors at the right hand side is equal to the number of the macroscopic kinematic variables used to formulate the microscopic boundary condition. As the resolution of the microscopic linearized system often follows a direct factorization procedure, the computation of the macroscopic tangent operators is then performed using this factorized matrix at a reduced computational time.
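The "factorize once, solve for many right-hand sides" step can be sketched on a toy system: the stiffness matrix is LU-factorized a single time, then each macroscopic kinematic variable contributes one right-hand side. The 3x3 matrix here stands in for a real microscale stiffness matrix.

```python
# Doolittle LU factorization without pivoting (fine for this diagonally
# dominant toy matrix); factor once, reuse for every right-hand side.
def lu_factor(A):
    n = len(A)
    LU = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU

def lu_solve(LU, b):
    n = len(LU)
    y = b[:]
    for i in range(n):                      # forward substitution (unit lower part)
        y[i] -= sum(LU[i][k] * y[k] for k in range(i))
    x = y[:]
    for i in reversed(range(n)):            # back substitution (upper part)
        x[i] = (x[i] - sum(LU[i][k] * x[k] for k in range(i + 1, n))) / LU[i][i]
    return x

K = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
LU = lu_factor(K)                           # one factorization...
rhs_list = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
tangent_cols = [lu_solve(LU, rhs) for rhs in rhs_list]   # ...many cheap solves
```

Each extra right-hand side costs only a forward/back substitution, which is why the macroscopic tangent computation the abstract describes adds little to the cost of the converged microscale solve.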
Sun, Yinnan; Yang, Kui; Cao, Qin; Sun, Jinde; Xia, Yu; Wang, Yinhang; Li, Wei; Ma, Chunhui; Liu, Shouxin
2017-07-11
A homogenate-assisted vacuum-powered bubble extraction (HVBE) method using ethanol was applied for extraction of flavonoids from Phyllostachys pubescens (P. pubescens) leaves. The mechanisms of homogenate-assisted extraction and vacuum-powered bubble generation were discussed in detail. Furthermore, a method for the rapid determination of flavonoids by HPLC was established. HVBE followed by HPLC was successfully applied for the extraction and quantification of four flavonoids in P. pubescens, including orientin, isoorientin, vitexin, and isovitexin. This method provides a fast and effective means for the preparation and determination of plant active components. Moreover, the on-line antioxidant capacity, including scavenging positive ion and negative ion free radical capacity of different fractions from the bamboo flavonoid extract, was evaluated. Results showed that the scavenging DPPH˙ free radical capacity of vitexin and isovitexin was larger than that of isoorientin and orientin. On the contrary, the scavenging ABTS⁺˙ free radical capacity of isoorientin and orientin was larger than that of vitexin and isovitexin.
Crack Growth Mechanisms under Anti-Plane Shear in Composite Laminates
NASA Astrophysics Data System (ADS)
Horner, Allison Lynne
The research conducted for this dissertation focuses on determining the mechanisms associated with crack growth in polymer matrix composite laminates subjected to anti-plane shear (mode III) loading. Four mode III split-beam test methods were proposed, and initial evaluations were conducted. A single test method was selected for further evaluation. Using this test method, it was determined that the apparent mode III delamination toughness, GIIIc, depended on geometry, which indicated a true material property was not being measured. Transverse sectioning and optical microscopy revealed an array of transverse matrix cracks, or echelon cracks, oriented at approximately 45° and intersecting the plane of the delamination. Subsequent investigations found the echelon array formed prior to the onset of planar delamination advance and that growth of the planar delamination is always coupled to echelon array formation in these specimens. The evolution of the fracture surfaces formed by the echelon array and planar delamination was studied, and it was found that the development was similar to crack growth in homogeneous materials subjected to mode III or mixed mode I-III loading, although the composite laminate architecture constrained the fracture surface development differently than homogeneous materials. It was also found that, for split-beam specimens such as those used herein, applying an anti-plane shear load results in twisting of the specimen's uncracked region, which gives rise to a mixed-mode I-III load condition. This twisting has been related to the apparent mode III toughness as well as the orientation of the transverse matrix cracks. A finite element model was then developed to study the mechanisms of initial echelon array formation. From this, it is shown that an echelon array will develop, but will become self-limiting prior to the onset of planar delamination growth.
NASA Astrophysics Data System (ADS)
Yankovskii, A. P.
2017-09-01
The creep of homogeneous and hybrid composite beams of an irregular laminar fibrous structure is investigated. The beams consist of thin walls and flanges (load-carrying layers). The walls may be reinforced longitudinally or crosswise in the plane, and the load-carrying layers are reinforced in the longitudinal direction. The mechanical behavior of the phase materials is described by the Rabotnov nonlinear hereditary theory of creep, taking into account their possibly different resistance to tension and compression. On the basis of hypotheses of the Timoshenko theory, using the method of time steps, a problem is formulated for the inelastic bending deformation of such beams with account of the weakened resistance of their walls to transverse shear. It is shown that, at discrete instants of time, the mechanical behavior of such structures can formally be described by the governing relations for composite beams made of nonlinear elastic anisotropic materials with a known initial stress state. The method of successive iterations, similar to the method of variable parameters of elasticity, is used to linearize the boundary-value problem at each instant of time. The bending deformation is investigated for homogeneous and reinforced cantilever and simply supported beams in creep under the action of a uniformly distributed transverse load. The cross sections of the beams considered are I-shaped. It is found that the use of the classical theory for such beams leads to an unacceptably underestimated prediction of flexibility, especially under long-term loading. It is shown that, in beams with reinforced load-carrying layers, the creep mainly develops due to the shear strains of the walls. It is found that, in short- and long-term loading of composite beams, the reinforcement structures rational by the criterion of minimum flexibility are different.
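The method of time steps reduces, in the simplest uniaxial setting, to marching the creep strain forward under the current stress. The sketch below uses a plain Norton power law with invented constants rather than the Rabotnov hereditary kernel of the paper, so it only illustrates the time-stepping idea.

```python
# Explicit time-stepping of creep strain for a bar under constant stress.
# Assumed Norton law: creep strain rate = B * sigma**n (constants invented).
E, B, n = 70e3, 1.0e-12, 3.0        # MPa, creep constants
sigma = 80.0                        # MPa, held constant in time
dt, t_end = 10.0, 1.0e4             # step size and horizon, hours

strain = sigma / E                  # instantaneous elastic strain at t = 0
t = 0.0
while t < t_end:
    strain += dt * B * sigma ** n   # one explicit step of the creep law
    t += dt
```

For stress varying in time (e.g. redistributing across a beam cross section), each step would additionally require the successive-iteration linearization the abstract describes; with constant stress the march simply accumulates B·σⁿ·t on top of the elastic strain.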
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.
2014-11-01
Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions to result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built using the platform of MCNPX, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing range from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms including a rectangular homogeneous water equivalent phantom, an elliptical shaped phantom with three sections (where each section was a homogeneous, but different material), and a heterogeneous, complex geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom.
Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain dose estimates. This allowed direct comparisons between measured and simulated dose values under each condition of phantom, location, and scan to be made. Results: For FTC scans, the percent root mean square (RMS) difference between measurements and simulations was within 5% across all phantoms. For TCM scans, the percent RMS of the difference between measured and simulated values when using detailed TCM and z-axis-only TCM simulations was 4.5% and 13.2%, respectively. For the anthropomorphic phantom, the difference between TCM measurements and detailed TCM and z-axis-only TCM simulations was 1.2% and 8.9%, respectively. For FTC measurements and simulations, the percent RMS of the difference was 5.0%. Conclusions: This work demonstrated that the Monte Carlo model developed provided good agreement between measured and simulated values under both simple and complex geometries including an anthropomorphic phantom. This work also showed the increased dose differences for z-axis-only TCM simulations, where considerable modulation in the x–y plane was present due to the shape of the rectangular water phantom. Results from this investigation highlight details that need to be included in Monte Carlo simulations of TCM CT scans in order to yield accurate, clinically viable assessments of patient dosimetry.
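The agreement metric quoted above can be sketched as a percent root-mean-square difference over paired measured and simulated point doses. The normalization convention here (relative to each measured value) is an assumption, since the paper may normalize differently, and the values are illustrative.

```python
import math

# Illustrative paired point doses (mGy); not the paper's data.
measured  = [12.1, 15.4, 9.8, 20.2, 11.0]
simulated = [12.5, 15.0, 10.1, 19.6, 11.3]

def percent_rms_difference(m, s):
    # RMS of the per-point relative differences, expressed in percent.
    rel = [(si - mi) / mi for mi, si in zip(m, s)]
    return 100.0 * math.sqrt(sum(r * r for r in rel) / len(rel))

prms = percent_rms_difference(measured, simulated)
```

For these invented doses the result is about 3%, comparable in scale to the FTC agreement the study reports.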
Gagne, Nolan L; Cutright, Daniel R; Rivard, Mark J
2012-09-01
To improve tumor dose conformity and homogeneity for COMS plaque brachytherapy by investigating the dosimetric effects of varying component source ring radionuclides and source strengths. The MCNP5 Monte Carlo (MC) radiation transport code was used to simulate plaque heterogeneity-corrected dose distributions for individually-activated source rings of 14, 16 and 18 mm diameter COMS plaques, populated with (103)Pd, (125)I and (131)Cs sources. Ellipsoidal tumors were contoured for each plaque size and MATLAB programming was developed to generate tumor dose distributions for all possible ring weighting and radionuclide permutations for a given plaque size and source strength resolution, assuming a 75 Gy apical prescription dose. These dose distributions were analyzed for conformity and homogeneity and compared to reference dose distributions from uniformly-loaded (125)I plaques. The most conformal and homogeneous dose distributions were reproduced within a reference eye environment to assess organ-at-risk (OAR) doses in the Pinnacle(3) treatment planning system (TPS). The gamma-index analysis method was used to quantitatively compare MC and TPS-generated dose distributions. Concentrating > 97% of the total source strength in a single or pair of central (103)Pd seeds produced the most conformal dose distributions, with tumor basal doses a factor of 2-3 higher and OAR doses a factor of 2-3 lower than those of corresponding uniformly-loaded (125)I plaques. Concentrating 82-86% of the total source strength in peripherally-loaded (131)Cs seeds produced the most homogeneous dose distributions, with tumor basal doses 17-25% lower and OAR doses typically 20% higher than those of corresponding uniformly-loaded (125)I plaques. Gamma-index analysis found > 99% agreement between MC and TPS dose distributions. 
A method was developed to select intra-plaque ring radionuclide compositions and source strengths to deliver more conformal and homogeneous tumor dose distributions than uniformly-loaded (125)I plaques. This method may support coordinated investigations of an appropriate clinical target for eye plaque brachytherapy.
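The exhaustive search over ring loadings described above can be pictured with a toy sketch (Python here rather than the authors' MATLAB). The radionuclide list matches the abstract; the weight grid, the three-ring plaque, and the `toy_conformity` score are illustrative placeholders, not the paper's MCNP5-derived dose model.

```python
from itertools import product

# Toy sketch of the combinatorial search described in the abstract:
# each of 3 source rings gets a radionuclide and a fractional share of
# the total source strength, and every permutation is scored.
RADIONUCLIDES = ["Pd-103", "I-125", "Cs-131"]
WEIGHT_STEPS = [0.0, 0.25, 0.5, 0.75, 1.0]   # fractional ring weights

def toy_conformity(rings):
    """Placeholder score: favors concentrating strength centrally."""
    # rings = [(nuclide, weight), ...], ordered central to peripheral
    return sum(w / (i + 1) for i, (_, w) in enumerate(rings))

best = None
for nuclides in product(RADIONUCLIDES, repeat=3):
    for weights in product(WEIGHT_STEPS, repeat=3):
        if abs(sum(weights) - 1.0) > 1e-9:   # total strength is fixed
            continue
        cfg = list(zip(nuclides, weights))
        score = toy_conformity(cfg)
        if best is None or score > best[0]:
            best = (score, cfg)

print(best)
```

A realistic implementation would replace `toy_conformity` with conformity and homogeneity metrics evaluated on per-ring, per-radionuclide dose distributions, as the abstract describes.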
Limit analysis of multi-layered plates. Part I: The homogenized Love-Kirchhoff model
NASA Astrophysics Data System (ADS)
Dallot, Julien; Sab, Karam
The purpose of this paper is to determine G_p^hom, the overall homogenized Love-Kirchhoff strength domain of a rigid perfectly plastic multi-layered plate, and to study the relationship between the 3D and the homogenized Love-Kirchhoff plate limit analysis problems. In the Love-Kirchhoff model, the generalized stresses are the in-plane (membrane) and the out-of-plane (flexural) stress field resultants. The homogenization method proposed by Bourgeois [1997. Modélisation numérique des panneaux structuraux légers. Ph.D. Thesis, University Aix-Marseille] and Sab [2003. Yield design of thin periodic plates by a homogenization technique and an application to masonry wall. C. R. Méc. 331, 641-646] for in-plane periodic rigid perfectly plastic plates is justified using the asymptotic expansion method. For laminated plates, an explicit parametric representation of the yield surface ∂G_p^hom is given thanks to the π-function (the plastic dissipation power density function) that describes the local strength domain at each point of the plate. This representation also provides a localization method for determining the 3D stress components corresponding to every generalized stress belonging to ∂G_p^hom. For a laminated plate described by a yield function of the form F(x_3, σ) = σ_u(x_3) F̂(σ), where σ_u is a positive even function of the out-of-plane coordinate x_3 and F̂ is a convex function of the local stress σ, two effective constants and a normalization procedure are introduced. A symmetric sandwich plate consisting of two von Mises materials (σ_u = σ_u^1 in the skins and σ_u = σ_u^2 in the core) is studied. It is found that, for small enough contrast ratios (r = σ_u^1/σ_u^2 ≤ 5), the normalized strength domain Ĝ_p^hom is close to that of a homogeneous von Mises plate [Ilyushin, A.-A., 1956. Plasticité. Eyrolles, Paris].
Numerical developments for short-pulsed Near Infra-Red laser spectroscopy. Part I: direct treatment
NASA Astrophysics Data System (ADS)
Boulanger, Joan; Charette, André
2005-03-01
This two-part study is devoted to the numerical treatment of short-pulsed laser near-infra-red spectroscopy. The overall goal is to address the possibility of numerical inverse treatment based on a recently developed direct model to solve the transient radiative transfer equation. This model has been constructed to incorporate the latest improvements in short-pulsed laser interaction with semi-transparent media, and combines a discrete-ordinates computation of the implicit source term appearing in the radiative transfer equation with an explicit treatment of the transport of the light intensity using advection schemes, a method encountered in reactive flow dynamics. The incident collimated beam is treated analytically through the Bouguer-Beer-Lambert extinction law. In this first part, the direct model is extended to fully non-homogeneous materials and tested with two different spatial schemes in order to be adapted to the inversion methods presented in the second part. First, the fundamental methods and schemes used in the direct model are presented. Then, tests are conducted by comparison with numerical simulations given as references. Finally, multi-dimensional extensions of the code are provided. This allows presentation of numerical results of short-pulse propagation in 1-, 2- and 3-D homogeneous and non-homogeneous materials, together with parametric studies on medium properties and pulse shape. For comparison, an integral method adapted to non-homogeneous media irradiated by a pulsed laser beam is also developed for the 3-D case.
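The analytic treatment of the collimated beam mentioned above follows the Bouguer-Beer-Lambert law, I(s) = I0 exp(-∫ β ds). A minimal numerical sketch for a non-homogeneous medium (the grid and extinction profile are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Bouguer-Beer-Lambert extinction of a collimated beam through a
# non-homogeneous medium: I(s) = I0 * exp(-optical depth along path).
s = np.linspace(0.0, 1.0, 101)            # path coordinate [arb. units]
beta = 2.0 + 1.5 * np.sin(np.pi * s)      # spatially varying extinction coeff.

# Optical depth by trapezoidal integration of beta along the path.
tau = np.concatenate(
    ([0.0], np.cumsum(0.5 * (beta[1:] + beta[:-1]) * np.diff(s))))
I = 1.0 * np.exp(-tau)                    # attenuated collimated intensity

print(I[0], I[-1])
```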
Non-periodic homogenization of 3-D elastic media for the seismic wave equation
NASA Astrophysics Data System (ADS)
Cupillard, Paul; Capdeville, Yann
2018-05-01
Because seismic waves have a limited frequency spectrum, the velocity structure of the Earth that can be extracted from seismic records has a limited resolution. As a consequence, one obtains smooth images from waveform inversion, although the Earth holds discontinuities and small scales of various natures. Within the last decade, the non-periodic homogenization method shed light on how seismic waves interact with small geological heterogeneities and `see' upscaled properties. This theory enables us to compute long-wave equivalent density and elastic coefficients of any medium, with no constraint on the size, the shape and the contrast of the heterogeneities. In particular, the homogenization leads to apparent, structure-induced anisotropy. In this paper, we implement this method in 3-D and show 3-D tests for the very first time. The non-periodic homogenization relies on an asymptotic expansion of the displacement and the stress involved in the elastic wave equation. Limiting ourselves to order 0, we show that the practical computation of an upscaled elastic tensor basically requires (i) solving an elastostatic problem and (ii) low-pass filtering the strain and the stress associated with the obtained solution. The elastostatic problem consists of finding the displacements due to local unit strains acting in all directions within the medium to upscale. This is solved using a parallel, highly optimized finite-element code. As for the filtering, we rely on the finite-element quadrature to perform the convolution in the space domain. We end up with an efficient numerical tool that we apply to various 3-D models to test the accuracy and the benefit of the homogenization. In the case of a finely layered model, our method agrees with results derived from Backus theory. In a more challenging model composed of a million small cubes, waveforms computed in the homogenized medium fit reference waveforms very well.
Both direct phases and complex diffracted waves are accurately retrieved in the upscaled model, although it is smooth. Finally, our upscaling method is applied to a realistic geological model. The obtained homogenized medium holds structure-induced anisotropy. Moreover, full seismic wavefields in this medium can be simulated with a coarse mesh (no matter what the numerical solver is), which significantly reduces computation costs usually associated with discontinuities and small heterogeneities. These three tests show that the non-periodic homogenization is both accurate and tractable in large 3-D cases, which opens the path to the correct account of the effect of small-scale features on seismic wave propagation for various applications and to a deeper understanding of the apparent anisotropy.
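For the finely layered test case mentioned above, the reference is the classical Backus average, in which a stack of isotropic layers behaves as a vertically transverse isotropic (VTI) medium at long wavelengths. A minimal sketch with illustrative layer values (not the paper's model):

```python
import numpy as np

# Backus (1962) average of an isotropic layered stack: three of the
# five VTI effective stiffnesses, from thickness-weighted means.
h   = np.array([0.2, 0.3, 0.5])      # layer thickness fractions (sum to 1)
lam = np.array([8.0, 12.0, 10.0])    # Lame lambda per layer [GPa]
mu  = np.array([4.0,  9.0,  6.0])    # shear modulus per layer [GPa]

def avg(x):
    """Thickness-weighted arithmetic mean."""
    return np.sum(h * x)

M = lam + 2.0 * mu                   # P-wave modulus per layer
C33 = 1.0 / avg(1.0 / M)             # harmonic mean of P-wave modulus
C44 = 1.0 / avg(1.0 / mu)            # harmonic mean of shear modulus
C66 = avg(mu)                        # arithmetic mean of shear modulus

print(C33, C44, C66)
```

The inequality C66 > C44 is the structure-induced anisotropy the abstract refers to: it appears even though every individual layer is isotropic.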
Braak, E; Braak, H; Mandelkow, E M
1994-01-01
Frontal sections of the temporal lobe including the transentorhinal/entorhinal region, amygdala, and/or hippocampus from human adult brains are studied for cytoskeleton changes using immunostaining with the antibodies AT8 and Alz-50 and selective silver impregnation methods for neurofibrillary changes of the Alzheimer type. For the purpose of correlation, the two methods are carried out one after the other on the same section. Layer pre-alpha in the transentorhinal/entorhinal region harbours nerve cells which are among the first in the entire brain to show the development of neurofibrillary changes. This presents the opportunity both to study early events in the destruction of the cytoskeleton in individual neurons and to relate changes which occur in the neuronal processes in the absence of alterations in their immediate surroundings to those happening in the soma. Immunoreactions with the AT8 antibody in particular reveal a clear sequence of changes in the neuronal cytoskeleton. Group 1 neurons present initial cytoskeleton changes in that the soma, dendrites, and axon are completely marked by granular AT8-immunoreactive material. These neurons appear quite normal and turn out to be devoid of argyrophilic material when observed in silver-stained sections. Group 2 neurons show changes in the cellular processes. The terminal tuft of the apical dendrite is replaced by tortuous varicose fibres and coarse granules. The distal portions of the dendrites are curved and show appendages and thickened portions. Intensely, homogeneously immunostained rod-like inclusions are encountered in these thickened portions and in the soma. A number of these rod-like inclusions are visible after silver staining as well. Group 3 neurons display even more pronounced alterations of their distal-most dendritic portions. The intermediate dendritic parts lose immunoreactivity, but the soma is homogeneously immunostained.
Silver staining reveals neuropil threads in most of the distal dendritic parts, and a classic neurofibrillary tangle in the soma. Group 4 structures are marked by accumulations of coarse AT8-immunoreactive granules. Silver staining provides evidence that the fibrillary material has become an extraneuronal, "early" ghost tangle. Finally, group 5 structures present "late" ghost tangles in silver-stained sections but fail to demonstrate AT8 immunoreactivity. It is suggested that the altered tau protein shown by the antibody AT8 represents an early cytoskeleton change which eventually leads to the formation of argyrophilic neurofibrillary tangles and neuropil threads.
Method for preparing homogeneous single crystal ternary III-V alloys
Ciszek, Theodore F.
1991-01-01
A method for producing homogeneous, single-crystal III-V ternary alloys of high crystal perfection using a floating crucible system in which the outer crucible holds a ternary alloy of the composition desired to be produced in the crystal, and an inner floating crucible having a narrow, melt-passing channel in its bottom wall holds a small quantity of melt of a pseudo-binary liquidus composition that would freeze into the desired crystal composition. The alloy of the floating crucible is maintained at a predetermined lower temperature than the alloy of the outer crucible, and a single crystal of the desired homogeneous alloy is pulled out of the floating crucible melt, as melt from the outer crucible flows into the bottom channel of the floating crucible at a rate that corresponds to the rate of growth of the crystal.
NASA Astrophysics Data System (ADS)
Asinari, P.
2011-03-01
The Boltzmann equation is one of the most powerful paradigms for explaining transport phenomena in fluids. Since the early fifties, it has received a lot of attention due to aerodynamic requirements for high-altitude vehicles, vacuum technology requirements and, nowadays, micro-electro-mechanical systems (MEMS). Because of the intrinsic mathematical complexity of the problem, Boltzmann himself started his work by considering first the case when the distribution function does not depend on space (homogeneous case), but only on time and the magnitude of the molecular velocity (isotropic collisional integral). The interest in the homogeneous isotropic Boltzmann equation goes beyond simple dilute gases. In so-called econophysics, a Boltzmann-type model is sometimes introduced for studying the distribution of wealth in a simple market. Another recent application of the homogeneous isotropic Boltzmann equation is opinion formation modeling in quantitative sociology, also called socio-dynamics or sociophysics. The present work [1] aims to improve the deterministic method for solving the homogeneous isotropic Boltzmann equation proposed by Aristov [2] through two ideas: (a) the homogeneous isotropic problem is reformulated first in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy exactly the conservation laws at the macroscopic level, which is particularly important for describing the late dynamics of the relaxation towards equilibrium).
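Idea (a) above, exact discrete conservation when working on a kinetic-energy grid, can be illustrated with a toy relaxation step. This is a BGK-style sketch for illustration only, not Aristov's scheme or the authors' corrected method; the grid, initial distribution and relaxation factor are assumptions:

```python
import numpy as np
from scipy.optimize import brentq

# Kinetic-energy grid and an arbitrary initial distribution.
E = np.linspace(0.01, 10.0, 200)
dE = E[1] - E[0]
f = np.exp(-((E - 3.0) ** 2))

n0 = np.sum(f) * dE                       # particle number
e0 = np.sum(f * E) * dE                   # total energy

# Discrete "Maxwellian" f_eq = A * sqrt(E) * exp(-E/T), with A and T
# chosen so the DISCRETE moments match n0 and e0 exactly.
def energy_mismatch(T):
    g = np.sqrt(E) * np.exp(-E / T)
    A = n0 / (np.sum(g) * dE)             # enforces the number moment
    return np.sum(A * g * E) * dE - e0    # residual of the energy moment

T = brentq(energy_mismatch, 0.1, 20.0)
g = np.sqrt(E) * np.exp(-E / T)
feq = (n0 / (np.sum(g) * dE)) * g

# One BGK-style relaxation step: moments of f_new equal those of f.
f_new = f + 0.1 * (feq - f)
print(np.sum(f_new) * dE - n0, np.sum(f_new * E) * dE - e0)
```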
High throughput film dosimetry in homogeneous and heterogeneous media for a small animal irradiator
Wack, L.; Ngwa, W.; Tryggestad, E.; Tsiamas, P.; Berbeco, R.; Ng, S.K.; Hesser, J.
2013-01-01
Purpose We have established a high-throughput Gafchromic film dosimetry protocol for narrow kilo-voltage beams in homogeneous and heterogeneous media for small-animal radiotherapy applications. The kV beam characterization is based on extensive Gafchromic film dosimetry data acquired in homogeneous and heterogeneous media. An empirical model is used for parameterization of depth and off-axis dependence of measured data. Methods We have modified previously published methods of film dosimetry to suit the specific tasks of the study. Unlike film protocols used in previous studies, our protocol employs simultaneous multichannel scanning and analysis of up to nine Gafchromic films per scan. A scanner and background correction were implemented to improve accuracy of the measurements. Measurements were taken in homogeneous and inhomogeneous phantoms at 220 kVp and a field size of 5 × 5 mm2. The results were compared against Monte Carlo simulations. Results Dose differences caused by variations in background signal were effectively removed by the corrections applied. Measurements in homogeneous phantoms were used to empirically characterize beam data in homogeneous and heterogeneous media. Film measurements in inhomogeneous phantoms and their empirical parameterization differed by about 2%–3%. The model differed from MC by about 1% (water, lung) to 7% (bone). Good agreement was found for measured and modelled off-axis ratios. Conclusions EBT2 films are a valuable tool for characterization of narrow kV beams, though care must be taken to eliminate disturbances caused by varying background signals. The usefulness of the empirical beam model in interpretation and parameterization of film data was demonstrated. PMID:23510532
Li, Yong; Zhao, Xiuhua; Zu, Yuangang; Zhang, Yin
2015-07-25
The aim of this study was to develop an alternative, more bioavailable, better tolerated paclitaxel nanosuspension (PTXNS) for intravenous injection in comparison with the commercially available Taxol(®) formulation. In this study, PTXNS was prepared by an emulsification method combining high-speed homogenization and high-pressure homogenization, followed by a lyophilization process for intravenous administration. The main production parameters, including the volume ratio of organic phase to water plus organic phase (Vo:Vw+o), concentration of PTX, content of PTX, emulsification time (Et), homogenization pressure (HP) and number of passes (Ps) for high-pressure homogenization, were optimized, and their effects on the mean particle size (MPS) and particle size distribution (PSD) of PTXNS were investigated. The characteristics of PTXNS, such as surface morphology, physical status of paclitaxel (PTX) in PTXNS, redispersibility of PTXNS in purified water, in vitro dissolution and in vivo bioavailability, were all investigated. The PTXNS obtained under optimum conditions had an MPS of 186.8 nm and a zeta potential (ZP) of -6.87 mV. The PTX content in PTXNS was approximately 3.42%. Moreover, the residual amount of chloroform was lower than the International Conference on Harmonization limit (60 ppm) for solvents. The dissolution study indicated that PTXNS showed both faster dissolution than raw PTX and a sustained-dissolution character compared with the Taxol(®) formulation. Moreover, the bioavailability of PTXNS increased 14.38-fold and 3.51-fold relative to raw PTX and the Taxol(®) formulation, respectively.
Single-Shot Optical Sectioning Using Two-Color Probes in HiLo Fluorescence Microscopy
Muro, Eleonora; Vermeulen, Pierre; Ioannou, Andriani; Skourides, Paris; Dubertret, Benoit; Fragola, Alexandra; Loriette, Vincent
2011-01-01
We describe a wide-field fluorescence microscope setup which combines the HiLo microscopy technique with the use of a two-color fluorescent probe. It allows one-shot fluorescence optical sectioning of thick, moving biological samples, which are illuminated simultaneously with a flat and a structured pattern at two different wavelengths. The homogeneous and structured fluorescence images are spectrally separated at detection and combined as in the HiLo microscopy technique. We present optically sectioned full-field images of Xenopus laevis embryos acquired at a 25 images/s frame rate. PMID:21641327
Yuan, Dengpeng; Dong, Ying; Liu, Yujin; Li, Tianjian
2015-01-01
A high-sensitivity Mach-Zehnder interferometer (MZI) biochemical sensing platform based on a silicon-on-insulator (SOI) rib waveguide with large cross section is proposed in this paper. Based on analyses of the evanescent field intensity, the mode polarization and cross-section dimensions of the SOI rib waveguide are optimized through finite difference method (FDM) simulation. To realize a high-resolution MZI read-out configuration based on the SOI rib waveguide, medium-filled trenches are employed and their performance is simulated through the two-dimensional finite-difference time-domain (2D-FDTD) method. With the fundamental EH-polarized mode of the SOI rib waveguide with a total rib height of 10 μm, an outside rib height of 5 μm and a rib width of 2.5 μm at the operating wavelength of 1550 nm, and with a 10 mm long sensitive window in the MZI configuration, a homogeneous sensitivity of 7296.6%/refractive index unit (RIU) is obtained. Supposing the resolutions of the photoelectric detectors connected to the output ports are 0.2%, the MZI sensor can achieve a detection limit of 2.74 × 10−6 RIU. Due to the high coupling efficiency of the large-cross-section SOI rib waveguide with standard single-mode glass optical fiber, the proposed MZI sensing platform can be conveniently integrated with optical fiber communication systems and (opto-)electronic systems, and therefore has the potential to realize remote sensing, in situ real-time detection, and possible applications in the internet of things. PMID:26343678
NASA Technical Reports Server (NTRS)
Bracalente, E. M.; Sweet, J. L.
1984-01-01
The normalized radar cross section (NRCS) signature of the Amazon rain forest was determined from SEASAT scatterometer data. Statistics of the measured NRCS values were determined from multiple orbit passes for three local time periods. Plots of mean NRCS (dB) against incidence angle as a function of beam and polarization show that less than 0.3 dB relative bias exists between all beams over a range of incidence angles from 30 deg to 53 deg. The backscattered measurements analyzed show the Amazon rain forest to be relatively homogeneous, azimuthally isotropic and insensitive to polarization. The return from the rain forest target appears relatively consistent and stable, except for the small diurnal variation (0.75 dB) that occurs at sunrise. Because of the relative stability of the rain forest target and the scatterometer instrument, the response of NRCS versus incidence angle was able to detect errors in the estimated yaw attitude angle. Also, small instrument gain biases in some of the processing channels were detected. This led to the development of an improved NRCS algorithm, which uses a more accurate method for estimating the system noise power.
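The per-angle statistics described above amount to binning NRCS measurements by incidence angle and comparing the bin means across beams. A sketch on synthetic data (the slope, noise level and bin count are made up; these are not SEASAT values):

```python
import numpy as np

# Synthetic sigma-0 measurements: a linear trend in dB with incidence
# angle plus noise, standing in for one beam's multi-pass data.
rng = np.random.default_rng(0)
angle = rng.uniform(30.0, 53.0, 5000)                       # [deg]
sigma0 = -0.13 * angle - 1.0 + rng.normal(0.0, 0.5, 5000)   # NRCS [dB]

# Bin by incidence angle and take the mean per bin.
bins = np.linspace(30.0, 53.0, 12)          # 11 equal-width bins
idx = np.digitize(angle, bins) - 1
means = np.array([sigma0[idx == i].mean() for i in range(len(bins) - 1)])
print(means)  # mean NRCS per incidence-angle bin
```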
Ryland, Bradford L.; Stahl, Shannon S.
2014-01-01
Alcohol and amine oxidations are common reactions in laboratory and industrial synthesis of organic molecules. Aerobic oxidation methods have long been sought for these transformations, but few practical methods exist that offer advantages over traditional oxidation methods. Recently developed homogeneous Cu/TEMPO (TEMPO = 2,2,6,6-tetramethylpiperidinyl-N-oxyl) and related catalyst systems appear to fill this void. The reactions exhibit high levels of chemoselectivity and broad functional-group tolerance, and they often operate efficiently at room temperature with ambient air as the oxidant. These advances, together with their historical context and recent applications, are highlighted in this minireview. PMID:25044821
Rapid Solid-State Metathesis Routes to Nanostructured Silicon-Germanium
NASA Technical Reports Server (NTRS)
Rodriguez, Marc (Inventor); Kaner, Richard B. (Inventor); Bux, Sabah K. (Inventor); Fleurial, Jean-Pierre (Inventor)
2014-01-01
Methods for producing nanostructured silicon and silicon-germanium via solid-state metathesis (SSM). The method of forming nanostructured silicon comprises the steps of combining a stoichiometric mixture of silicon tetraiodide (SiI4) and an alkaline earth metal silicide into a homogeneous powder, and initiating the reaction between the silicon tetraiodide (SiI4) and the alkaline earth metal silicide. The method of forming nanostructured silicon-germanium comprises the steps of combining a stoichiometric mixture of silicon tetraiodide (SiI4) and a germanium-based precursor into a homogeneous powder, and initiating the reaction between the silicon tetraiodide (SiI4) and the germanium-based precursor.
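As a worked example of "combining a stoichiometric mixture", and assuming magnesium silicide (Mg2Si) as the alkaline earth metal silicide (the patent covers other silicides as well), the metathesis SiI4 + Mg2Si → 2 Si + 2 MgI2 fixes the mass ratio of the two powders:

```python
# Stoichiometric mass ratio for the assumed metathesis reaction
#   SiI4 + Mg2Si -> 2 Si + 2 MgI2
# Atomic masses in g/mol (IUPAC values, rounded).
M = {"Si": 28.085, "I": 126.904, "Mg": 24.305}

m_SiI4 = M["Si"] + 4 * M["I"]        # molar mass of SiI4
m_Mg2Si = 2 * M["Mg"] + M["Si"]      # molar mass of Mg2Si

# Grams of Mg2Si needed per gram of SiI4 for a 1:1 molar ratio.
ratio = m_Mg2Si / m_SiI4
print(round(ratio, 4))
```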
Boundary value problems for multi-term fractional differential equations
NASA Astrophysics Data System (ADS)
Daftardar-Gejji, Varsha; Bhalekar, Sachin
2008-09-01
The multi-term fractional diffusion-wave equation with homogeneous/non-homogeneous boundary conditions has been solved using the method of separation of variables. It is observed that, unlike in the one-term case, the solution of the multi-term fractional diffusion-wave equation is not necessarily non-negative, and hence does not represent anomalous diffusion of any kind.
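The structure of the problem can be sketched as follows (illustrative notation; the coefficients a_i and the ordering of the fractional orders are assumptions, not taken from the paper):

```latex
% Multi-term fractional diffusion-wave equation, Caputo derivatives:
\[
  \sum_{i=1}^{m} a_i \, {}^{C}D_t^{\alpha_i} u(x,t)
  = \frac{\partial^2 u}{\partial x^2}(x,t),
  \qquad 0 < \alpha_m \le \cdots \le \alpha_1 \le 2 .
\]
% Separation u(x,t) = X(x) T(t) with homogeneous boundary conditions
% gives a Sturm--Liouville problem X'' + \lambda X = 0 in space and a
% multi-term fractional ordinary differential equation in time,
\[
  \sum_{i=1}^{m} a_i \, {}^{C}D_t^{\alpha_i} T(t) + \lambda \, T(t) = 0,
\]
% whose solutions are expressible via multivariate Mittag-Leffler
% functions; the sign of T(t) need not stay non-negative, which is the
% observation reported in the abstract.
```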
Imaging radar observations of frozen Arctic lakes
NASA Technical Reports Server (NTRS)
Elachi, C.; Bryan, M. L.; Weeks, W. F.
1976-01-01
A synthetic aperture imaging L-band radar flown aboard the NASA CV-990 remotely sensed a number of ice-covered lakes about 48 km northwest of Bethel, Alaska. The image obtained is a high resolution, two-dimensional representation of the surface backscatter cross section, and large differences in backscatter returns are observed: homogeneous low returns, homogeneous high returns and/or low returns near lake borders, and high returns from central areas. It is suggested that a low return indicates that the lake is frozen completely to the bottom, while a high return indicates the presence of fresh water between the ice cover and the lake bed.
Early neonatal mortality in twin pregnancy: Findings from 60 low- and middle-income countries
Bellizzi, Saverio; Sobel, Howard; Betran, Ana Pilar; Temmerman, Marleen
2018-01-01
Background Around the world, the incidence of multiple pregnancies reaches its peak in the Central African countries and often represents an increased risk of death for women and children because of higher rates of obstetrical complications and poor management skills in those countries. We sought to assess the association between twin pregnancy and early neonatal mortality compared with singleton pregnancies. We also assessed the role of skilled birth attendants and mode of delivery on early neonatal mortality in twin pregnancies. Methods We conducted a secondary analysis of individual-level data from 60 nationally representative Demographic and Health Surveys including 521 867 singleton and 14 312 twin births. We investigated the occurrence of deaths within the first week of life in twins compared to singletons and the effect of place of and attendance at birth; the role of caesarean section versus vaginal birth was also examined, globally and after stratifying countries by caesarean section rates. A multi-level logistic regression was used, accounting for homogeneity within countries and within twin pairs. Results Early neonatal mortality among twins was significantly higher than among singleton neonates (adjusted odds ratio (aOR) 7.6; 95% confidence interval (CI) = 7.0-8.3) in these 60 countries. Early neonatal mortality was also higher among twins than singletons when adjusting for birth weight in a subgroup analysis of those countries with data on birth weight (n = 20; less than 20% missing values) (aOR = 2.8; 95% CI = 2.2-3.5). For countries with high rates (>15%) of caesarean sections (CS), twins delivered vaginally in a health facility had a statistically significantly increased risk of early neonatal mortality (aOR = 4.8; 95% CI = 2.4-9.4) compared to twins delivered through caesarean section.
Home twin births without an SBA were associated with increased mortality compared with delivering at home with an SBA (aOR = 1.3; 95% CI = 1.0-1.8) and with vaginal birth in a health facility (aOR = 1.7; 95% CI = 1.4-2.0). Conclusions Institutional deliveries and increased access to caesarean sections may be considered for twin pregnancies in low- and middle-income countries to decrease early adverse neonatal outcomes. PMID:29423189
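As a reading aid for figures like "aOR = 7.6 (95% CI = 7.0-8.3)", the arithmetic of a crude (unadjusted) odds ratio with a Woolf-type confidence interval is sketched below. The counts are invented, and the paper's estimates come from a multi-level logistic model that additionally adjusts for covariates and within-country clustering, so this is illustrative only:

```python
import math

# Hypothetical 2x2 table: early neonatal deaths vs survivors,
# twins vs singletons (made-up counts, NOT the paper's data).
deaths_twin, alive_twin = 450, 13862
deaths_single, alive_single = 2100, 519767

# Crude odds ratio and 95% CI on the log-odds scale (Woolf method).
or_ = (deaths_twin * alive_single) / (alive_twin * deaths_single)
se_log = math.sqrt(1 / deaths_twin + 1 / alive_twin
                   + 1 / deaths_single + 1 / alive_single)
lo = math.exp(math.log(or_) - 1.96 * se_log)
hi = math.exp(math.log(or_) + 1.96 * se_log)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```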
The finite body triangulation: algorithms, subgraphs, homogeneity estimation and application.
Carson, Cantwell G; Levine, Jonathan S
2016-09-01
The concept of a finite body Dirichlet tessellation has been extended to that of a finite body Delaunay 'triangulation' to provide a more meaningful description of the spatial distribution of nonspherical secondary phase bodies in 2- and 3-dimensional images. A finite body triangulation (FBT) consists of a network of minimum edge-to-edge distances between adjacent objects in a microstructure. From this is also obtained the characteristic object chords formed by the intersection of the object boundary with the finite body tessellation. These two sets of distances form the basis of a parsimonious homogeneity estimation. The characteristics of the spatial distribution are then evaluated with respect to the distances between objects and the distances within them. Quantitative analysis shows that more physically representative distributions can be obtained by selecting subgraphs, such as the relative neighbourhood graph and the minimum spanning tree, from the finite body tessellation. To demonstrate their potential, we apply these methods to 3-dimensional X-ray computed tomographic images of foamed cement and their 2-dimensional cross sections. The Python computer code used to estimate the FBT is made available. Other applications for the algorithm - such as porous media transport and crack-tip propagation - are also discussed. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
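The core of the finite body triangulation, minimum edge-to-edge distances between bodies plus a minimum spanning tree subgraph, can be sketched in a few lines. This is an illustrative reimplementation, not the published code; the tiny point sets stand in for object boundaries extracted from an image:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

# Each "body" is a set of boundary points (placeholders for real
# object boundaries from a segmented 2-D image).
bodies = [
    np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]),   # body 0
    np.array([[3.0, 0.0], [4.0, 0.0]]),               # body 1
    np.array([[0.0, 4.0], [1.0, 4.0]]),               # body 2
]

# Pairwise minimum edge-to-edge distances between bodies.
n = len(bodies)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = cdist(bodies[i], bodies[j]).min()
        D[i, j] = D[j, i] = d

# Minimum spanning tree of the distance graph: a subgraph of the FBT
# connecting each body to its nearest neighbours.
mst = minimum_spanning_tree(D).toarray()
print(D)
print(mst)
```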
de Castro, Alberto; Birkenfeld, Judith; Maceo, Bianca; Manns, Fabrice; Arrieta, Esdras; Parel, Jean-Marie; Marcos, Susana
2013-01-01
Purpose. To estimate changes in surface shape and gradient refractive index (GRIN) profile in primate lenses as a function of accommodation. To quantify the contribution of surface shape and GRIN to spherical aberration changes with accommodation. Methods. Crystalline lenses from 15 cynomolgus monkeys were studied in vitro under different levels of accommodation produced by a stretching system. Lens shape was obtained from optical coherence tomography (OCT) cross-sectional images. The GRIN was reconstructed with a search algorithm using the optical path measured from OCT images and the measured back focal length. The spherical aberration of the lens was estimated as a function of accommodation using the reconstructed GRIN and a homogeneous refractive index. Results. The lens anterior and posterior radii of curvature decreased with increasing lens power. Both surfaces exhibited negative asphericities in the unaccommodated state. The anterior surface conic constant shifted toward less negative values with accommodation, while the value of the posterior remained constant. GRIN parameters remained constant with accommodation. The lens spherical aberration with GRIN distribution was negative and higher in magnitude than that with a homogeneous equivalent refractive index (by 29% and 53% in the unaccommodated and fully accommodated states, respectively). Spherical aberration with the equivalent refractive index shifted with accommodation toward negative values (−0.070 μm/diopter [D]), but the reconstructed GRIN shifted it farther (−0.124 μm/D). Conclusions. When compared with the lens with the homogeneous equivalent refractive index, the reconstructed GRIN lens has more negative spherical aberration and a larger shift toward more negative values with accommodation. PMID:23927893
Burkhardt, Lia; Simon, Ronald; Steurer, Stefan; Burdak-Rothkamm, Susanne; Jacobsen, Frank; Sauter, Guido; Krech, Till
2015-01-01
Background and Aims Amplification of the fibroblast growth factor receptor 1 (FGFR1) is believed to predict response to multi-kinase inhibitors targeting FGFR1. Esophageal cancer is an aggressive disease, for which novel targeted therapies are highly warranted. Methods This study was designed to investigate the prevalence and clinical significance of FGFR1 amplification in a tissue microarray containing 346 adenocarcinomas and 254 squamous cell carcinomas of the esophagus, using dual-labeling fluorescence in situ hybridization (FISH) analysis. Results FGFR1 amplification, defined as a ratio of FGFR1:centromere 8 copy numbers ≥ 2.0, was seen more frequently in squamous cell carcinoma (8.9% of 202 interpretable cases) than in adenocarcinoma (1.6% of 308; p<0.0001). There was no association between FGFR1 amplification and tumor phenotype or clinical outcome. To study potential heterogeneity of FGFR1 amplification, all available tumor blocks from 23 FGFR1-amplified tumors were analyzed on conventional large sections. This analysis revealed complete homogeneity of FGFR1 amplification in 20 (86.9%) primary tumors and in all available lymph node metastases. Remarkably, FGFR1 amplification was also seen in dysplasia adjacent to the tumor in 6 of 9 patients with FGFR1-amplified primary cancers. Conclusions FGFR1 amplification occurs in a relevant subgroup of carcinomas of the esophagus and may play a particular role in the development of squamous cell cancers. The high homogeneity of FGFR1 amplification suggests that patients with FGFR1-amplified esophageal cancers may particularly benefit from anti-FGFR1 therapies and prompts clinical studies in this tumor type. PMID:26555375
Gerbig, Stefanie; Brunn, Hubertus E; Spengler, Bernhard; Schulz, Sabine
2015-09-01
Distribution of pesticides both on the surface of leaves and in cross sections of plant stems and leaves was investigated using desorption electrospray ionization mass spectrometry imaging (DESI-MSI) with a spatial resolution of 50-100 μm. Two commercially available insecticide sprays containing different contact pesticides were applied onto leaves of Cotoneaster horizontalis, and the distributions of all active ingredients were directly analyzed. The first spray contained pyrethrins and rapeseed oil, both known as natural insecticides. Each component showed an inhomogeneous spreading throughout the leaf, depending on substance polarity and solubility. The second spray contained the synthetic insecticides imidacloprid and methiocarb. Imidacloprid accumulated at the border of the leaf, while methiocarb was distributed more homogeneously. In order to investigate the incorporation of a systemically acting pesticide into Kalanchoe blossfeldiana, a commercially available insecticide tablet containing dimethoate was added to the soil of the plant. Cross sections of the stem and leaf were obtained 25 and 60 days after application. Dimethoate was mainly detected in the transport system of the plant after 25 days, while it was found to be homogeneously distributed in a leaf section after 60 days.
Genetic progress in homogeneous regions of wheat cultivation in Rio Grande do Sul State, Brazil.
Follmann, D N; Cargnelutti Filho, A; Lúcio, A D; de Souza, V Q; Caraffa, M; Wartha, C A
2017-03-30
The State of Rio Grande do Sul (RS) stands out as the largest wheat producer in Brazil. Wheat is the most emphasized winter cereal in RS, attracting public and private investments directed to wheat genetic breeding. The study of genetic progress should be performed routinely at breeding programs to study the behavior of cultivars developed for homogeneous regions of cultivation. The objectives of this study were: 1) to evaluate the genetic progress of wheat grain yield in RS; 2) to evaluate the influence of cultivar competition trial stratification in homogeneous regions of cultivation on the study of genetic progress. Grain yield data of 122 wheat cultivars evaluated in 137 trials arranged in randomized block design with three or four replications were used. Field trials were carried out in 23 locations in RS divided into two homogeneous regions during the period from 2002 to 2013. Genetic progress for RS and homogeneous regions was studied utilizing the method proposed by Vencovsky. Annual genetic progress for wheat grain yield during the period of 12 years in the State of RS was 2.86%, oscillating between homogeneous regions of cultivation. The difference of annual genetic progress in region 1 (1.82%) in relation to region 2 (4.38%) justifies the study of genetic progress by homogeneous regions of cultivation.
Multiscale global identification of porous structures
NASA Astrophysics Data System (ADS)
Hatłas, Marcin; Beluch, Witold
2018-01-01
The paper is devoted to the evolutionary identification of the material constants of porous structures based on measurements conducted on the macro scale. Numerical homogenization with the RVE concept is used to determine the equivalent properties of a macroscopically homogeneous material. Finite element method software is applied to solve the boundary-value problem in both scales. A global optimization method in the form of an evolutionary algorithm is employed to solve the identification task. Modal analysis is performed to collect the data necessary for the identification. A numerical example presenting the effectiveness of the proposed approach is attached.
Method for removing trace pollutants from aqueous solutions
Silver, Gary L.
1986-01-01
A method of substantially removing a trace metallic contaminant from a liquid containing the same comprises adding an oxidizing agent to a liquid containing a trace amount of a metallic contaminant at a concentration of up to about 10^-1 ppm, the oxidizing agent being one which oxidizes the contaminant to form an oxidized product which is insoluble in the liquid and precipitates therefrom, the conditions of the addition being selected to ensure that the precipitation of the oxidized product is homogeneous, and separating the homogeneously precipitated product from the liquid.
Method of making metal oxide ceramic powders by using a combustible amino acid compound
Pederson, L.R.; Chick, L.A.; Exarhos, G.J.
1992-05-19
This invention is directed to the formation of homogeneous, aqueous precursor mixtures of at least one substantially soluble metal salt and a substantially soluble, combustible co-reactant compound, typically an amino acid. This produces, upon evaporation, a substantially homogeneous intermediate material having a total solids level which would support combustion. The homogeneous intermediate material essentially comprises highly dispersed or solvated metal constituents and the co-reactant compound. The intermediate material is quite flammable. A metal oxide powder results on ignition of the intermediate product which combusts same to produce the product powder.
A novel content-based active contour model for brain tumor segmentation.
Sachdeva, Jainy; Kumar, Vinod; Gupta, Indra; Khandelwal, Niranjan; Ahuja, Chirag Kamal
2012-06-01
Brain tumor segmentation is a crucial step in surgical and treatment planning. Intensity-based active contour models such as gradient vector flow (GVF), magnetostatic active contour (MAC) and fluid vector flow (FVF) have been proposed to segment homogeneous objects/tumors in medical images. In this study, extensive experiments are done to analyze the performance of intensity-based techniques for homogeneous tumors on brain magnetic resonance (MR) images. The analysis shows that the state-of-the-art methods fail to segment homogeneous tumors against a similar background or when these tumors show partial diversity toward the background. They also have a preconvergence problem in the case of false edges/saddle points. Moreover, the presence of weak edges and diffused edges (due to edema around the tumor) leads to oversegmentation by intensity-based techniques. The proposed content-based active contour (CBAC) method therefore uses both the intensity and texture information present within the active contour to overcome the above-stated problems, capturing a large range of content in an image. It also proposes a novel use of the Gray-Level Co-occurrence Matrix to define a texture space for tumor segmentation. The effectiveness of this method is tested on two different real data sets (55 patients, more than 600 images) containing five different types of homogeneous, heterogeneous and diffused tumors, as well as synthetic images (non-MR benchmark images). Remarkable results are obtained in segmenting homogeneous tumors of uniform intensity; complex-content heterogeneous and diffused tumors on MR images (T1-weighted, post-contrast T1-weighted and T2-weighted); and synthetic images (non-MR benchmark images of varying intensity, texture, noise content and false edges). Further, tumor volume is efficiently extracted from 2-dimensional slices in an approach named 2.5-dimensional segmentation.
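The texture component of CBAC rests on the Gray-Level Co-occurrence Matrix. Below is a minimal sketch of a normalized GLCM for one pixel offset, together with one standard Haralick-style feature (contrast); it assumes a pre-quantized integer image and illustrates the general technique, not the paper's actual texture space or feature set.

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy).
    `img` is a 2-D list of integers already quantized to `levels` bins."""
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y][x]][img[y + dy][x + dx]] += 1
            total += 1
    return [[v / total for v in row] for row in m]

def contrast(p):
    """Haralick contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    n = len(p)
    return sum((i - j) ** 2 * p[i][j] for i in range(n) for j in range(n))

# four homogeneous 2x2 patches: co-occurrences stay near the diagonal
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [2, 2, 3, 3],
         [2, 2, 3, 3]]
p = glcm(patch)
```

In a CBAC-style energy, a statistic such as `contrast(p)` computed inside and outside the evolving contour would supplement the intensity term.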
Reznick, A Z; Rosenfelder, L; Shpund, S; Gershon, D
1985-01-01
A method has been developed that enables us to identify intracellular degradation intermediates of fructose-bisphosphate aldolase B (D-fructose-1,6-bisphosphate D-glyceraldehyde-3-phosphate-lyase, EC 4.1.2.13). This method is based on the use of antibody against thoroughly denatured purified aldolase. This antibody has been shown to recognize only denatured molecules, and it did not interact with "native" enzyme. Supernatants (24,000 x g for 30 min) of liver and kidney homogenates were incubated with antiserum to denatured enzyme. The antigen-antibody precipitates thus formed were subjected to NaDodSO4/PAGE, followed by electrotransfer to nitrocellulose paper and immunodecoration with antiserum to denatured enzyme and 125I-labeled protein A. Seven peptides with molecular weights ranging from 38,000 (that of the intact subunit) to 18,000, which cross-reacted antigenically with denatured fructose-bisphosphate aldolase, could be identified in liver. The longest three peptides were also present in kidney. The possibility that these peptides were artifacts of homogenization was ruled out as follows: 125I-labeled purified native aldolase was added to the buffer prior to liver homogenization. The homogenates were then subjected to NaDodSO4/PAGE followed by autoradiography, and the labeled enzyme was shown to remain intact. This method is suggested for general use in the search for degradation products of other cellular proteins. PMID:3898080
Moody, J.A.; Meade, R.H.
1994-01-01
The efficacy of the method is evaluated by comparing the particle size distributions of sediment collected by the discharge-weighted pumping method with the particle size distributions of sediment collected by depth integration and separated by gravitational settling. The pumping method was found to undersample the suspended sand-sized particles (>63 μm) but to collect a representative sample of the suspended silt- and clay-sized particles (<63 μm). The success of the discharge-weighted pumping method depends on how homogeneously the silt- and clay-sized particles (<63 μm) are distributed in the vertical direction in the river. The degree of homogeneity depends on the composition and degree of aggregation of the suspended sediment particles. -from Authors
NASA Astrophysics Data System (ADS)
Srivastava, D. C.
2016-12-01
A Genetic Algorithm Method for Direct Estimation of Paleostress States from Heterogeneous Fault-Slip Observations. Deepak C. Srivastava, Prithvi Thakur and Pravin K. Gupta, Department of Earth Sciences, Indian Institute of Technology Roorkee, Roorkee 247667, India. Abstract: Paleostress estimation from a group of heterogeneous fault-slip observations entails first the classification of the observations into homogeneous fault sets and then a separate inversion of each homogeneous set. This study combines these two issues into a nonlinear inverse problem and proposes a heuristic search method that inverts the heterogeneous fault-slip observations. The method estimates the different paleostress states in a group of heterogeneous fault-slip observations and classifies the group into homogeneous sets as a byproduct. It uses the genetic algorithm operators elitism, selection, encoding, crossover and mutation. These processes translate into a guided search that finds successively fitter solutions and operates iteratively until the termination criterion is met and the globally fittest stress tensors are obtained. We explain the basic steps of the algorithm on a working example and demonstrate the validity of the method on several synthetic groups and a natural group of heterogeneous fault-slip observations. The method is independent of any user-defined bias or any entrapment of the solution in a local optimum. It succeeds even in difficult situations where other classification methods are found to fail.
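The guided-search loop described above (elitism, selection, crossover and mutation iterated until termination) can be sketched generically. This is a hedged illustration, not the authors' code: the real method scores candidate stress tensors against fault-slip data, whereas here `misfit` is a toy quadratic and all GA settings (population size, mutation scale, tournament size) are hypothetical.

```python
import random

def genetic_search(misfit, n_params, pop_size=50, n_gen=200,
                   p_cross=0.8, p_mut=0.05, n_elite=2, seed=0):
    """Minimal real-coded GA: elitism, tournament selection,
    uniform crossover and Gaussian mutation, minimizing `misfit`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        nxt = [ind[:] for ind in sorted(pop, key=misfit)[:n_elite]]  # elitism
        while len(nxt) < pop_size:
            # tournament selection of two parents
            a = min(rng.sample(pop, 3), key=misfit)
            b = min(rng.sample(pop, 3), key=misfit)
            if rng.random() < p_cross:   # uniform crossover
                child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            else:
                child = a[:]
            child = [x + rng.gauss(0.0, 0.1) if rng.random() < p_mut else x
                     for x in child]     # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=misfit)

# toy misfit with known optimum at (0.5, 0.5, 0.5)
best = genetic_search(lambda v: sum((x - 0.5) ** 2 for x in v), n_params=3)
```

Because elitism never discards the current best individual, the best misfit in the population is non-increasing across generations.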
Method to study the effect of blend flowability on the homogeneity of acetaminophen.
Llusá, Marcos; Pingali, Kalyana; Muzzio, Fernando J
2013-02-01
Excipient selection is key to product development because it affects processability and physical properties, which ultimately affect the quality attributes of the pharmaceutical product. The objective here is to study how the flowability of lubricated formulations affects acetaminophen (APAP) homogeneity. The formulations studied contain one of two types of cellulose (Avicel 102 or Ceollus KG-802), one of three grades of Mallinckrodt APAP (fine, semi-fine, or micronized), lactose (Fast-Flo) and magnesium stearate. These components are mixed in a 300-liter bin blender. Blend flowability is assessed with the Gravitational Displacement Rheometer. APAP homogeneity is assessed with off-line NIR. Excluding blends dominated by segregation, there is a trend between APAP homogeneity and blend flow index. Blend flowability is affected by the type of microcrystalline cellulose and by the APAP grade. The preliminary results suggest that the methodology used in this paper is adequate to study the effect of blend flow index on APAP homogeneity.
Brcka, Jozef; Faguet, Jacques; Zhang, Guigen
2017-01-01
Dielectrophoretic (DEP) phenomena have been explored to great success for various applications like particle sorting and separation. To elucidate the underlying mechanism and quantify the DEP force experienced by particles, the point-dipole and Maxwell Stress Tensor (MST) methods are commonly used. However, both methods exhibit their own limitations. For example, the point-dipole method is unable to fully capture the essence of particle-particle interactions and the MST method is not suitable for particles of non-homogeneous property. Moreover, both methods fare poorly when it comes to explaining DEP phenomena such as the dependence of crossover frequency on medium conductivity. To address these limitations, the authors have developed a new method, termed volumetric-integration method, with the aid of computational implementation, to reexamine the DEP phenomena, elucidate the governing mechanism, and quantify the DEP force. The effect of an electric double layer (EDL) on particles' crossover behavior is dealt with through consideration of the EDL structure along with surface ionic/molecular adsorption, unlike in other methods, where the EDL is accounted for through simply assigning a surface conductance value to the particles. For validation, by comparing with literature experimental data, the authors show that the new method can quantify the DEP force on not only homogeneous particles but also non-homogeneous ones, and predict particle-particle interactions fairly accurately. Moreover, the authors also show that the predicted dependence of crossover frequency on medium conductivity and particle size agrees very well with experimental measurements. PMID:28396710
Swelling-induced and controlled curving in layered gel beams
Lucantonio, A.; Nardinocchi, P.; Pezzulla, M.
2014-01-01
We describe swelling-driven curving in originally straight, non-homogeneous beams. We present and verify a structural model of swollen beams, based on a new point of view on swelling-induced deformation processes in bilayered gel beams: the swelling-induced deformation of the beam at equilibrium is split into two components, both depending on the elastic properties of the gel. The method allows us to: (i) determine beam stretching and curving, once the characteristics of the solvent bath and of the non-homogeneous beam are assigned, and (ii) estimate the characteristics of non-homogeneous flat gel beams in such a way as to obtain desired three-dimensional shapes under free-swelling conditions. The study was pursued by means of analytical, semi-analytical and numerical tools; excellent agreement among the outcomes of the different techniques was found, thus confirming the strength of the method. PMID:25383031
Deflection load characteristics of laser-welded orthodontic wires.
Watanabe, Etsuko; Stigall, Garrett; Elshahawy, Waleed; Watanabe, Ikuya
2012-07-01
To compare the deflection load characteristics of homogeneous and heterogeneous joints made by laser welding using various types of orthodontic wires. Four kinds of straight orthodontic rectangular wires (0.017 inch × 0.025 inch) were used: stainless-steel (SS), cobalt-chromium-nickel (Co-Cr-Ni), beta-titanium alloy (β-Ti), and nickel-titanium (Ni-Ti). Homogeneous and heterogeneous end-to-end joints (12 mm long each) were made by Nd:YAG laser welding. Two types of welding methods were used: two-point welding and four-point welding. Nonwelded wires were also used as a control. Deflection load (N) was measured by conducting the three-point bending test. The data (n = 5) were statistically analyzed using analysis of variance/Tukey test (P < .05). The deflection loads for control wires measured were as follows: SS: 21.7 ± 0.8 N; Co-Cr-Ni: 20.0 ± 0.3 N; β-Ti: 13.9 ± 1.3 N; and Ni-Ti: 6.6 ± 0.4 N. All of the homogeneously welded specimens showed lower deflection loads compared to corresponding control wires and exhibited higher deflection loads compared to heterogeneously welded combinations. For homogeneous combinations, Co-Cr-Ni/Co-Cr-Ni showed a significantly (P < .05) higher deflection load than those of the remaining homogeneously welded groups. In heterogeneous combinations, SS/Co-Cr-Ni and β-Ti/Ni-Ti showed higher deflection loads than those of the remaining heterogeneously welded combinations (significantly higher for SS/Co-Cr-Ni). Significance (P < .01) was shown for the interaction between the two factors (materials combination and welding method). However, no significant difference in deflection load was found between four-point and two-point welding in each homogeneous or heterogeneous combination. Heterogeneously laser-welded SS/Co-Cr-Ni and β-Ti/Ni-Ti wires provide a deflection load that is comparable to that of homogeneously welded orthodontic wires.
NASA Astrophysics Data System (ADS)
Makó, Éva; Kovács, András; Ható, Zoltán; Kristóf, Tamás
2015-12-01
Recent experimental and simulation findings on kaolinite-methanol intercalation complexes raised the question of the existence of more stable structures in the wet and dry states, which has not yet been fully resolved. Experimental and molecular simulation analyses were used to investigate different types of kaolinite-methanol complexes, revealing their real structures. Cost-efficient homogenization methods were applied to synthesize the kaolinite-dimethyl sulfoxide and kaolinite-urea pre-intercalation complexes from which the kaolinite-methanol ones were prepared. The tested homogenization method required an order of magnitude lower amount of reagents than the generally applied solution method. The influence of the type of pre-intercalated molecules and of the wetting or drying (at room temperature and at 150 °C) procedure on the intercalation was characterized experimentally by X-ray diffraction and thermal analysis. Consistent with the suggestion from the present simulations, 1.12-nm and 0.83-nm stable kaolinite-methanol complexes were identified. For these complexes, our molecular simulations predict either single-layered structures of mobile methanol/water molecules or non-intercalated structures of methoxy-functionalized kaolinite. We found that the methoxy-modified kaolinite can easily be intercalated by liquid methanol.
M-Adapting Low Order Mimetic Finite Differences for Dielectric Interface Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGregor, Duncan A.; Gyrya, Vitaliy; Manzini, Gianmarco
2016-03-07
We consider the problem of reducing numerical dispersion for an electromagnetic wave in a domain with two materials separated by a flat interface in 2D, with a factor of two difference in wave speed. The computational mesh in the homogeneous parts of the domain away from the interface consists of square elements. Here the method construction is based on the m-adaptation construction in a homogeneous domain that leads to fourth-order numerical dispersion (vs. second order in the non-optimized method). The size of the elements in the two domains also differs by a factor of two, so as to preserve the same value of the Courant number in each. Near the interface, where the two meshes merge, the mesh with larger elements consists of degenerate pentagons. We demonstrate that prior to m-adaptation the accuracy of the method falls from second to first order due to the breaking of symmetry in the mesh. Next we develop an m-adaptation framework for the interface region and devise an optimization criterion. We prove that for the interface problem m-adaptation cannot produce an increase in method accuracy. This is in contrast to a homogeneous medium, where m-adaptation can increase accuracy by two orders.
Assessment of protein set coherence using functional annotations
Chagoyen, Monica; Carazo, Jose M; Pascual-Montano, Alberto
2008-01-01
Background Analysis of large-scale experimental datasets frequently produces one or more sets of proteins that are subsequently mined for functional interpretation and validation. To this end, a number of computational methods have been devised that rely on the analysis of functional annotations. Although current methods provide valuable information (e.g. significantly enriched annotations, pairwise functional similarities), they do not specifically measure the degree of homogeneity of a protein set. Results In this work we present a method that scores the degree of functional homogeneity, or coherence, of a set of proteins on the basis of the global similarity of their functional annotations. The method uses statistical hypothesis testing to assess the significance of the set in the context of the functional space of a reference set. As such, it can be used as a first step in the validation of sets expected to be homogeneous prior to further functional interpretation. Conclusion We evaluate our method by analysing known biologically relevant sets as well as random ones. The known relevant sets comprise macromolecular complexes, cellular components and pathways described for Saccharomyces cerevisiae, which are mostly significantly coherent. Finally, we illustrate the usefulness of our approach for validating 'functional modules' obtained from computational analysis of protein-protein interaction networks. Matlab code and supplementary data are available at PMID:18937846
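The scoring idea above, rating a set by the global similarity of its members' annotations and assessing significance against random same-size sets drawn from a reference population, can be sketched as an empirical permutation test. A hedged sketch, not the authors' statistic: `similarity` here is any pairwise callable (Jaccard overlap of annotation terms in the demo), while the paper works in a functional space built from GO annotations.

```python
import random
from itertools import combinations

def coherence_pvalue(members, similarity, reference, n_perm=200, seed=1):
    """Mean pairwise similarity of `members` (size >= 2) and an empirical
    p-value against random same-size sets drawn from `reference`."""
    rng = random.Random(seed)

    def mean_sim(group):
        pairs = list(combinations(group, 2))
        return sum(similarity(a, b) for a, b in pairs) / len(pairs)

    observed = mean_sim(members)
    hits = sum(mean_sim(rng.sample(reference, len(members))) >= observed
               for _ in range(n_perm))
    return observed, (hits + 1) / (n_perm + 1)

# demo: 4 proteins sharing both annotation terms vs. an incoherent reference
annot = {f"c{i}": {"A", "B"} for i in range(4)}
annot.update({f"r{i}": {f"x{i}"} for i in range(50)})
jaccard = lambda a, b: len(annot[a] & annot[b]) / len(annot[a] | annot[b])
obs, p = coherence_pvalue([f"c{i}" for i in range(4)], jaccard,
                          [f"r{i}" for i in range(50)])
```

A small p-value says the set is more functionally homogeneous than chance sets of the same size, which is exactly the pre-validation step the abstract describes.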
Shaw, Anugrah; Abbi, Ruchika
2004-01-01
Penetration of liquid pesticides through textile materials is a criterion for determining the performance of protective clothing used by pesticide handlers. The pipette method is frequently used to apply liquid pesticides onto textile materials to measure penetration. Typically, analytical techniques such as gas chromatography (GC) are used to measure percentage penetration. These techniques are labor intensive and costly. A simpler gravimetric method was developed, and tests were conducted to compare the gravimetric and GC methods of analysis. Three types of pesticide formulations and four fabrics were used for the study. Diluted pesticide formulations were pipetted onto the test specimens, and percentage penetration was measured using the two methods. For the homogeneous formulation, the results of the two methods were fairly comparable. However, due to the filtering action of the textile materials, there were differences in percentage penetration between the two methods for formulations that were not homogeneous.
Full Core TREAT Kinetics Demonstration Using Rattlesnake/BISON Coupling Within MAMMOTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortensi, Javier; DeHart, Mark D.; Gleicher, Frederick N.
2015-08-01
This report summarizes key aspects of research in evaluation of modeling needs for TREAT transient simulation. Using a measured TREAT critical measurement and a transient for a small, experimentally simplified core, Rattlesnake and MAMMOTH simulations are performed, building from simple infinite media to a full core model. Cross-section processing methods are evaluated, various homogenization approaches are assessed, and the neutronic behavior of the core is studied to determine key modeling aspects. The simulation of the minimum critical core with the diffusion solver shows very good agreement with the reference Monte Carlo simulation and the experiment. The full core transient simulation with thermal feedback shows a significantly lower power peak compared to the documented experimental measurement, which is not unexpected in the early stages of model development.
NASA Astrophysics Data System (ADS)
Huang, Shi-Hao; Wang, Shiang-Jiu; Tseng, Snow H.
2015-03-01
Optical coherence tomography (OCT) provides high-resolution, cross-sectional images of the internal microstructure of biological tissue. We use the Finite-Difference Time-Domain (FDTD) method to analyze the data acquired by OCT, which helps us reconstruct the refractive index of the biological tissue. We calculate the refractive index tomography and try to match the simulation with the data acquired by OCT. Specifically, we try to reconstruct the structure of melanin, which has complex refractive indices and is the key component of the human pigment system. The results indicate that better reconstruction can be achieved for homogeneous samples, whereas the reconstruction is degraded for samples with fine structure or complex interfaces. The simulated reconstruction shows structures of melanin that may be useful for biomedical optics applications.
Spatially homogeneous rotating world models.
NASA Technical Reports Server (NTRS)
Ozsvath, I.
1971-01-01
The mathematical problem encountered when looking for the simplest expanding and rotating model of the universe without the compactness condition for the space sections is formulated. The Lagrangian function is derived for four different rotating universes simultaneously. These models correspond in a certain sense to Gödel's (1950) 'symmetric case.'
Homogeneity of lithium distribution in cylinder-type Li-ion batteries
Senyshyn, A.; Mühlbauer, M. J.; Dolotko, O.; Hofmann, M.; Ehrenberg, H.
2015-01-01
Spatially-resolved neutron powder diffraction with a gauge volume of 2 × 2 × 20 mm3 has been applied as an in situ method to probe the lithium concentration in the graphite anode of different Li-ion cells of 18650-type in charged state. Structural studies performed in combination with electrochemical measurements and X-ray computed tomography under real cell operating conditions unambiguously revealed non-homogeneity of the lithium distribution in the graphite anode. Deviations from a homogeneous behaviour have been found in both radial and axial directions of 18650-type cells and were discussed in the frame of cell geometry and electrical connection of electrodes, which might play a crucial role in the homogeneity of the lithium distribution in the active materials within each electrode. PMID:26681110
Ilyin, S E; Plata-Salamán, C R
2000-02-15
Homogenization of tissue samples is a common first step in the majority of current protocols for RNA, DNA, and protein isolation. This report describes a simple device for centrifugation-mediated homogenization of tissue samples. The method presented is applicable to RNA, DNA, and protein isolation, and we show examples where high quality total cell RNA, DNA, and protein were obtained from brain and other tissue samples. The advantages of the approach presented include: (1) a significant reduction in time investment relative to hand-driven or individual motorized-driven pestle homogenization; (2) easy construction of the device from inexpensive parts available in any laboratory; (3) high replicability in the processing; and (4) the capacity for the parallel processing of multiple tissue samples, thus allowing higher efficiency, reliability, and standardization.
Finite-time consensus for controlled dynamical systems in network
NASA Astrophysics Data System (ADS)
Zoghlami, Naim; Mlayeh, Rhouma; Beji, Lotfi; Abichou, Azgal
2018-04-01
The key challenges in networked dynamical systems are component heterogeneities, nonlinearities, and the high dimension of the state vector. In this paper, the emphasis is on two classes of systems in networks, covering most controlled driftless systems as well as systems with drift. For each model structure, defining homogeneous or heterogeneous multi-system behaviour, protocols integrating sufficient conditions are derived that lead to finite-time consensus. For the networking topology, we make use of fixed directed and undirected graphs. To prove our approaches, finite-time stability theory and Lyapunov methods are employed. As illustrative examples, homogeneous multi-unicycle kinematics and homogeneous/heterogeneous multi-second-order dynamics in networks are studied.
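For intuition, the classic signed-power rule for first-order agents on a connected undirected graph reaches exact agreement in finite time when 0 < α < 1. Below is a hedged Euler-integration sketch of that textbook protocol; it is not one of the protocols derived in the paper (whose model classes include driftless and drift dynamics), and the graph, gains and step size are illustrative choices.

```python
import math

def finite_time_consensus(x0, edges, alpha=0.5, k=1.0, dt=1e-3, steps=20000):
    """Euler simulation of u_i = -k * sum_j sign(x_i - x_j)|x_i - x_j|^alpha
    over an undirected edge list; 0 < alpha < 1 gives finite-time agreement."""
    x = list(x0)
    for _ in range(steps):
        u = [0.0] * len(x)
        for i, j in edges:
            d = x[i] - x[j]
            f = k * math.copysign(abs(d) ** alpha, d)
            u[i] -= f   # action on agent i
            u[j] += f   # equal and opposite on j, so the average is conserved
        x = [xi + dt * ui for xi, ui in zip(x, u)]
    return x

# path graph on four agents with distinct initial states
x = finite_time_consensus([0.0, 1.0, 3.0, 6.0], edges=[(0, 1), (1, 2), (2, 3)])
```

The sub-linear exponent is what buys finite-time (rather than merely asymptotic) convergence: near agreement, |d|^α dominates |d|, so the disagreement is driven to zero in finite time.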
Partitioning of the degradation space for OCR training
NASA Astrophysics Data System (ADS)
Barney Smith, Elisa H.; Andersen, Tim
2006-01-01
Generally speaking, optical character recognition algorithms tend to perform better when presented with homogeneous data. This paper studies a method designed to increase the homogeneity of training data, based on an understanding of the types of degradations that occur during the printing and scanning process and of how these degradations affect the homogeneity of the data. While it has been shown that dividing the degradation space by edge spread improves recognition accuracy over dividing it by threshold or point spread function width alone, the challenge is deciding how many partitions to use and at what values of edge spread the divisions should be made. Clustering of different types of character features, fonts, sizes, resolutions and noise levels shows that edge spread is indeed a strong indicator of the homogeneity of character data clusters.
NASA Astrophysics Data System (ADS)
Perton, Mathieu; Contreras-Zazueta, Marcial A.; Sánchez-Sesma, Francisco J.
2016-06-01
A new implementation of the indirect boundary element method allows simulating elastic wave propagation in complex configurations made of embedded regions that are either homogeneous with irregular boundaries or flat-layered. In an older implementation, each layer of a flat-layered region would have been treated as a separate homogeneous region without taking the flat-boundary information into account. For both types of regions, the scattered field results from fictitious sources positioned along their boundaries. For the homogeneous regions, the fictitious sources emit as in a full space and the wave field is given by analytical Green's functions. For flat-layered regions, fictitious sources emit as in an unbounded flat-layered region and the wave field is given by Green's functions obtained from the discrete wavenumber (DWN) method. The new implementation thus allows reducing the length of the discretized boundaries, but DWN Green's functions require much more computation time than the full-space Green's functions. Several optimization steps are therefore implemented and commented on. Validations are presented for 2-D and 3-D problems. Higher efficiency is achieved in 3-D.
HOMPRA Europe - A gridded precipitation data set from European homogenized time series
NASA Astrophysics Data System (ADS)
Rustemeier, Elke; Kapala, Alice; Meyer-Christoffer, Anja; Finger, Peter; Schneider, Udo; Venema, Victor; Ziese, Markus; Simmer, Clemens; Becker, Andreas
2017-04-01
Reliable monitoring data are essential for robust analyses of climate variability and, in particular, long-term trends. In this regard, a gridded, homogenized data set of monthly precipitation totals, HOMPRA Europe (HOMogenized PRecipitation Analysis of European in-situ data), is presented. The data base consists of 5373 homogenized monthly time series, a carefully selected subset held by the Global Precipitation Climatology Centre (GPCC). The chosen series cover the period 1951-2005 and contain less than 10% missing values. Due to the large number of series, an automatic algorithm had to be developed for the homogenization of these precipitation series. In principle, the algorithm is based on three steps: * Selection of overlapping station networks in the same precipitation regime, based on rank correlation and Ward's method of minimal variance. Since the underlying time series should be as homogeneous as possible, the station selection is carried out on the deterministic first derivative in order to reduce artificial influences. * The natural variability and trends are temporarily removed by means of highly correlated neighboring time series in order to detect artificial break-points in the annual totals; this ensures that only artificial changes are detected. The detection is based on the algorithm of Caussinus and Mestre (2004). * In the last step, the detected breaks are corrected monthly by means of a multiple linear regression (Mestre, 2003). Due to the automation of the homogenization, validation of the algorithm is essential. Therefore, the method was tested on artificial data sets. Additionally, the sensitivity of the method was tested by varying the neighborhood series. Where available in digitized form, the station history was also used to search for systematic errors in the jump detection.
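The break-detection-and-correction core of such a pipeline can be sketched as a single-change-point, least-squares scan on the candidate-minus-reference difference series. This deliberately simplified illustration stands in for the penalized Caussinus-Mestre detection and the monthly regression-based correction actually used for HOMPRA Europe.

```python
def detect_and_correct(candidate, reference):
    """Locate the single break that best splits the difference series
    (candidate - reference) into two constant-mean segments, then shift
    the pre-break part of `candidate` so both segments agree."""
    d = [c - r for c, r in zip(candidate, reference)]
    n = len(d)
    total = sum(d)
    best_k, best_gain, left = 0, 0.0, 0.0
    for k in range(1, n):          # candidate break after index k-1
        left += d[k - 1]
        right = total - left
        # variance explained by the two-segment mean model
        gain = left ** 2 / k + right ** 2 / (n - k) - total ** 2 / n
        if gain > best_gain:
            best_k, best_gain = k, gain
    if best_k == 0:
        return list(candidate), None   # no shift found
    step = (sum(d[best_k:]) / (n - best_k)) - (sum(d[:best_k]) / best_k)
    corrected = [c + step if i < best_k else c
                 for i, c in enumerate(candidate)]
    return corrected, best_k

# a +3 shift after year 12, against a homogeneous neighbour composite
series, brk = detect_and_correct([12.0] * 12 + [15.0] * 12, [10.0] * 24)
```

Working on the difference series is what makes the test insensitive to the shared natural variability: a climatic anomaly appears in both candidate and reference and cancels, while a station relocation or instrument change appears only in the candidate.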
Finally, the actual HOMPRA Europe product is produced by interpolation of the homogenized series onto a 1° grid using one of the interpolation schemes run operationally at GPCC (Becker et al., 2013; Schamm et al., 2014). References: Caussinus, H., and O. Mestre, 2004: Detection and correction of artificial shifts in climate series. Journal of the Royal Statistical Society, Series C (Applied Statistics), 53(3), 405-425. Mestre, O., 2003: Correcting climate series using ANOVA technique. Proceedings of the fourth seminar. Willmott, C., Rowe, C., and Philpot, W., 1985: Small-scale climate maps: A sensitivity analysis of some common assumptions associated with grid-point interpolation and contouring. The American Cartographer, 12, 5-16. Becker, A., Finger, P., Meyer-Christoffer, A., Rudolf, B., Schamm, K., Schneider, U., and Ziese, M., 2013: A description of the global land-surface precipitation data products of the Global Precipitation Climatology Centre with sample applications including centennial (trend) analysis from 1901-present. Earth System Science Data, 5, 71-99. Schamm, K., Ziese, M., Becker, A., Finger, P., Meyer-Christoffer, A., Schneider, U., Schröder, M., and Stender, P., 2014: Global gridded precipitation over land: a description of the new GPCC First Guess Daily product. Earth System Science Data, 6, 49-60.
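The relative-homogenization core of the pipeline above (difference a candidate series against a highly correlated neighbor, then scan the residual for an artificial mean shift) can be sketched as follows. This is a minimal, hypothetical stand-in: the actual detection uses the Caussinus-Mestre (2004) penalized-likelihood procedure allowing multiple breaks, and `detect_break` is an illustrative name, not the published code.

```python
import numpy as np

def detect_break(candidate, reference):
    """Locate the single most likely artificial shift in a candidate series.

    Differencing against a highly correlated neighbor removes the shared
    natural variability, so a step in the residual series points to an
    artificial break. Returns (break index, estimated shift).
    """
    d = np.asarray(candidate) - np.asarray(reference)
    n = len(d)
    best_k, best_ss = None, np.inf
    for k in range(2, n - 2):  # try every admissible break position
        ss = ((d[:k] - d[:k].mean()) ** 2).sum() + ((d[k:] - d[k:].mean()) ** 2).sum()
        if ss < best_ss:  # keep the split with the smallest within-segment variance
            best_k, best_ss = k, ss
    shift = d[best_k:].mean() - d[:best_k].mean()
    return best_k, shift
```

Correction would then subtract the estimated shift from the segment before the break; in the real algorithm this adjustment is estimated month by month via multiple linear regression.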
Surface hardening of steels with a strip-shaped beam of a high-power CO{sub 2} laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubovskii, P.E.; Kovsh, I.B.; Strekalova, M.S.
1994-12-01
A comparative analysis was made of the surface hardening of steel 45 by high-power CO{sub 2} laser beams with a rectangular strip-like cross section and a traditional circular cross section. This was done under various conditions. The treatment with the strip-like beam ensured a higher homogeneity of the hardened layer and made it possible to increase the productivity by a factor of 2-4 compared with the treatment by a beam of the same power but with a circular cross section. 6 refs., 5 figs.
NASA Astrophysics Data System (ADS)
Dubovskii, P. E.; Kovsh, Ivan B.; Strekalova, M. S.; Sisakyan, I. N.
1994-12-01
A comparative analysis was made of the surface hardening of steel 45 by high-power CO2 laser beams with a rectangular strip-like cross section and a traditional circular cross section. This was done under various conditions. The treatment with the strip-like beam ensured a higher homogeneity of the hardened layer and made it possible to increase the productivity by a factor of 2-4 compared with the treatment by a beam of the same power but with a circular cross section.
Li, Mingyan; Zuo, Zhentao; Jin, Jin; Xue, Rong; Trakic, Adnan; Weber, Ewald; Liu, Feng; Crozier, Stuart
2014-03-01
Parallel imaging (PI) is widely used for imaging acceleration by means of the coil spatial sensitivities associated with phased array coils (PACs). By employing a time-division multiplexing technique, a single-channel rotating radiofrequency coil (RRFC) provides an alternative method to reduce scan time. Strategically combining these two concepts could provide enhanced acceleration and efficiency. In this work, the imaging acceleration ability and homogeneous image reconstruction strategy of a 4-element rotating radiofrequency coil array (RRFCA) were numerically investigated and experimentally validated at 7T with a homogeneous phantom. Each coil of the RRFCA was capable of acquiring a large number of sensitivity profiles, leading to better acceleration performance, illustrated by improved geometry-factor maps with lower maximum values and more uniform distributions compared to 4- and 8-element stationary arrays. A reconstruction algorithm, rotating SENSitivity Encoding (rotating SENSE), was proposed for image reconstruction. Additionally, by optimally choosing the angular sampling positions and transmit profiles under the rotating scheme, phantom images could be faithfully reconstructed. The results indicate that the proposed technique is able to provide homogeneous reconstructions with overall higher and more uniform signal-to-noise ratio (SNR) distributions at high reduction factors. It is hoped that, by exploiting the high imaging acceleration and homogeneous image reconstruction ability of the RRFCA, the proposed method will facilitate human imaging at ultra-high-field MRI.
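The unfolding step that rotating SENSE extends can be illustrated with a conventional Cartesian SENSE solve: for reduction factor R, each aliased pixel is a coil-sensitivity-weighted sum of R true pixels, recovered per location by least squares. This is a minimal sketch of standard SENSE under noiseless assumptions, not the authors' rotating variant; all names and dimensions are illustrative.

```python
import numpy as np

def sense_unfold(aliased, sens, R):
    """Unfold a Cartesian SENSE acquisition.

    aliased: (C, Ny//R, Nx) folded coil images for C coils.
    sens:    (C, Ny, Nx) complex coil sensitivity maps.
    Each folded pixel mixes R true pixels spaced Ny//R apart; solve the
    C-equation, R-unknown system by least squares at every location.
    """
    C, Nyr, Nx = aliased.shape
    img = np.zeros((Nyr * R, Nx), dtype=complex)
    for y in range(Nyr):
        rows = [y + r * Nyr for r in range(R)]  # pixels folded together
        for x in range(Nx):
            S = sens[:, rows, x]                # (C, R) encoding matrix
            b = aliased[:, y, x]                # (C,) folded measurement
            img[rows, x] = np.linalg.lstsq(S, b, rcond=None)[0]
    return img
```

With more coils than the reduction factor (C > R) and well-conditioned sensitivities, the per-pixel system is overdetermined and the least-squares solve recovers the unaliased image exactly in the noiseless case.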
Effect of Freezing Time on Macronutrients and Energy Content of Breastmilk
Escuder-Vieco, Diana; García-Algar, Oscar; De la Cruz, Javier; Lora, David; Pallás-Alonso, Carmen
2012-01-01
Abstract Background In neonatal units and human milk banks, freezing breastmilk at less than –20°C is the method of choice for preserving it. Scientific evidence on the loss of nutritional quality during freezing is scarce. Our main aim in this study is to determine the effect of freezing time, up to 3 months, on the content of fat, total nitrogen, lactose, and energy. Our secondary aim is to assess whether ultrasonic homogenization of samples enables a more suitable reading of breastmilk macronutrients with a human milk analyzer (HMA) (MIRIS®, Uppsala, Sweden). Methods Refrigerated breastmilk samples were collected. Each sample was divided into six pairs of aliquots. One pair was analyzed on day 0, and the remaining pairs were frozen and analyzed, one each at 7, 15, 30, 60, and 90 days later. For each pair, one aliquot was homogenized by stirring, and the other by applying ultrasound. Samples were analyzed with the HMA. Results By 3 months after freezing, with both homogenization methods, we observed a relevant and significant decline in the concentration of fat and in energy content. The changes in total nitrogen and lactose were not constant and were of lower magnitude. The absolute concentration of all macronutrients and calories was greater with ultrasonic homogenization. Conclusions After 3 months of freezing at –20°C, an important decrease in fat and caloric content is observed. Correct homogenization is fundamental for correct nutritional analysis. PMID:22047109
Homogenized modeling methodology for 18650 lithium-ion battery module under large deformation
Tang, Liang; Cheng, Pengle
2017-01-01
Effective lithium-ion battery module modeling has become a bottleneck for full-size electric vehicle crash safety numerical simulation. Modeling every single cell in detail would be costly. However, computational accuracy could be lost if the module is modeled using a simple bulk material or rigid body. To solve this critical engineering problem, a general method to establish a computational homogenized model for the cylindrical battery module is proposed. A single battery cell model is developed and validated through radial compression and bending experiments. To analyze the homogenized mechanical properties of the module, a representative unit cell (RUC) is extracted with the periodic boundary condition applied to it. An elastic–plastic constitutive model is established to describe the computational homogenized model for the module. Two typical packing modes, i.e., cubic dense packing and hexagonal packing, for the homogenized equivalent battery module (EBM) model are targeted for validation compression tests, alongside models with a detailed single-cell description. Further, the homogenized EBM model is confirmed to agree reasonably well with the detailed battery module (DBM) model for different packing modes, with a length scale of up to 15 × 15 cells and 12% deformation where the short circuit takes place. The suggested homogenized model for the battery module paves the way for battery module and pack safety evaluation in full-size electric vehicle crashworthiness analysis. PMID:28746390
Homogenized modeling methodology for 18650 lithium-ion battery module under large deformation.
Tang, Liang; Zhang, Jinjie; Cheng, Pengle
2017-01-01
Effective lithium-ion battery module modeling has become a bottleneck for full-size electric vehicle crash safety numerical simulation. Modeling every single cell in detail would be costly. However, computational accuracy could be lost if the module is modeled using a simple bulk material or rigid body. To solve this critical engineering problem, a general method to establish a computational homogenized model for the cylindrical battery module is proposed. A single battery cell model is developed and validated through radial compression and bending experiments. To analyze the homogenized mechanical properties of the module, a representative unit cell (RUC) is extracted with the periodic boundary condition applied to it. An elastic-plastic constitutive model is established to describe the computational homogenized model for the module. Two typical packing modes, i.e., cubic dense packing and hexagonal packing, for the homogenized equivalent battery module (EBM) model are targeted for validation compression tests, alongside models with a detailed single-cell description. Further, the homogenized EBM model is confirmed to agree reasonably well with the detailed battery module (DBM) model for different packing modes, with a length scale of up to 15 × 15 cells and 12% deformation where the short circuit takes place. The suggested homogenized model for the battery module paves the way for battery module and pack safety evaluation in full-size electric vehicle crashworthiness analysis.
Dong, Bing; Song, Yu; Fan, Wenjia; Zhu, Ying
2010-11-01
To study the homogeneity and stability of arsenic in quality-control cosmetic samples, arsenic was determined by an atomic fluorescence spectrophotometric method. The t-test and F-test were used to evaluate significant differences between the within-bottle and between-bottle results for three batches. The RSDs of arsenic obtained at different times were compared with the relative expanded uncertainties to evaluate stability. The averages and variances of within-bottle and between-bottle results for arsenic did not differ significantly. The RSDs of arsenic were less than the relative expanded uncertainties. The quality-control cosmetic samples containing arsenic were considered homogeneous and stable.
Arc melting and homogenization of ZrC and ZrC + B alloys
NASA Technical Reports Server (NTRS)
Darolia, R.; Archbold, T. F.
1973-01-01
A description is given of the methods used to arc-melt and to homogenize near-stoichiometric ZrC and ZrC-boron alloys, giving attention to the oxygen contamination problem. The starting material for the carbide preparation was ZrC powder with an average particle size of 4.6 micron. Pellets weighing approximately 3 g each were prepared at room temperature from the powder by the use of an isostatic press operated at 50,000 psi. These pellets were individually melted in an arc furnace containing a static atmosphere of purified argon. A graphite resistance furnace was used for the homogenization process.
Yi, Gihwan; Choi, Jun-Ho; Lee, Jong-Hee; Jeong, Unggi; Nam, Min-Hee; Yun, Doh-Won; Eun, Moo-Young
2005-01-01
We describe a rapid and simple procedure for homogenizing leaf samples suitable for mini/midi-scale DNA preparation in rice. The method uses tungsten carbide beads and a general-purpose vortexer for homogenizing leaf samples. In general, two samples can be ground completely within 11.3 ± 1.5 s at one time. Up to 20 samples can be ground at a time using a vortexer attachment. The DNA yields ranged from 2.2 to 7.6 μg from 25-150 mg of young fresh leaf tissue. The quality and quantity of the DNA were compatible with most PCR work and RFLP analysis.
NASA Astrophysics Data System (ADS)
Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish
2014-05-01
While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time/effort in distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated, where topographical and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e. length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest; soil type therefore needs to be weighted in a systematic manner when formulating equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e. different soil types near the centre of the river, the middle of the hillslope and the ridge line, three equivalent cross-sections (left bank, right bank and head water) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required based on the number of soil types. The equivalent cross-sections are formulated for a series of first-order sub-basins by implementing different weighting methods of topographic and physiographic variables of landforms within the entire or part of a hillslope. The formulated equivalent cross-sections are then simulated using a 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins. The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture.
To assess the accuracy of the equivalent cross-section approach, the sub-basins are also divided into equally spaced multiple hillslope cross-sections. These cross-sections are simulated in a fully distributed setting using the 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the contributing area of each cross-section to obtain the total fluxes from each sub-basin, referred to as reference fluxes. The equivalent cross-section approach is investigated for seven first-order sub-basins of the McLaughlin catchment of the Snowy River, NSW, Australia, and evaluated in the Wagga Wagga experimental catchment. Our results show that the simulated fluxes using the equivalent cross-section approach are very close to the reference fluxes, whereas computational time is reduced by a factor of ~4 to ~22 in comparison to the fully distributed setting. Transpiration and soil evaporation are the dominant fluxes and constitute ~85% of actual rainfall. Overall, the accuracy achieved in the dominant fluxes is higher than in the other fluxes. The simulated soil moistures from the equivalent cross-section approach are compared with in-situ soil moisture observations in the Wagga Wagga experimental catchment in NSW, and the results were found to be consistent. Our results illustrate that the equivalent cross-section approach reduces computational time significantly while maintaining the same order of accuracy in predicting the hydrological fluxes. As a result, this approach has great potential for the implementation of distributed hydrological models at regional scales.
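The aggregation step described above (per-cross-section fluxes scaled by their contributing or weighted areas and summed to sub-basin totals) can be sketched as follows; the function name and the numbers in the usage are illustrative only, and in the study the per-unit-area fluxes come from the 2-D Richards' equation model.

```python
import numpy as np

def total_flux(flux_per_unit_area, contributing_areas):
    """Sub-basin total = sum over cross-sections of flux density x area.

    The same weighting is used for the reference case (many hillslope
    cross-sections, each with its contributing area) and for the
    equivalent-cross-section estimate (few sections, weighted areas).
    """
    flux = np.asarray(flux_per_unit_area, dtype=float)
    area = np.asarray(contributing_areas, dtype=float)
    return float((flux * area).sum())
```

Note that if the representative flux were the exact area-weighted mean of the distributed fluxes, the two totals would coincide; in practice the equivalent cross-section introduces a small error because the simulated fluxes respond nonlinearly to the averaged topographic and soil properties.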
Rapid methods for extraction and concentration of poliovirus from oyster tissues.
Richards, G P; Goldmintz, D; Green, D L; Babinchak, J A
1982-12-01
A procedure is discussed for the extraction of poliovirus from oyster meats by modification of several enterovirus extraction techniques. The modified method uses meat extract and Cat-Floc, a polycationic electrolyte, for virus extraction and concentration. Virus recovery from inoculated oyster homogenates is 93-120%. Adsorption of viruses to oyster proteins by acidification of homogenates does not affect virus recovery. Elution of viruses from oyster proteins appears more efficient at pH 9.5 than at pH 8.0. This technique is relatively simple, economical and requires only 2.5 h to complete the combined extraction and concentration procedure.
Method of refining cracked oil by using metallic soaps. [desulfurization of cracked oils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masakichi, M.; Marunouchi, K.K.; Yoshimura, T.
1937-04-13
The method of refining cracked oil consists in dissolving oil-soluble heavy metallic soap of oleic acid in a volatile organic solvent which will disperse homogeneously in cracked oil; pouring the solution thus obtained slowly into cracked oil to effect dispersion naturally and homogeneously at room temperature in the cracked oil. This process serves to react the mercaptans in the cracked oil with the heavy metallic soap by a double decomposition reaction and to precipitate the mercaptans as insoluble metallic salts. The remaining liquid is distilled to separate it from the remaining solvent.
Nonlinear equations of motion for the elastic bending and torsion of twisted nonuniform rotor blades
NASA Technical Reports Server (NTRS)
Hodges, D. H.; Dowell, E. H.
1974-01-01
The equations of motion are developed by two complementary methods, Hamilton's principle and the Newtonian method. The resulting equations are valid to second order for long, straight, slender, homogeneous, isotropic beams undergoing moderate displacements. The ordering scheme is based on the restriction that squares of the bending slopes, the torsion deformation, and the chord/radius and thickness/radius ratios are negligible with respect to unity. All remaining nonlinear terms are retained. The equations are valid for beams with mass centroid axis and area centroid (tension) axis offsets from the elastic axis, nonuniform mass and stiffness section properties, variable pretwist, and a small precone angle. The strain-displacement relations are developed from an exact transformation between the deformed and undeformed coordinate systems. These nonlinear relations form an important contribution to the final equations. Several nonlinear structural and inertial terms in the final equations are identified that can substantially influence the aeroelastic stability and response of hingeless helicopter rotor blades.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, C; Zhong, Y; Wang, T
2015-06-15
Purpose: To investigate the accuracy of estimating the mean glandular dose (MGD) for homogeneous breast phantoms by converting from the average breast dose using the F-factor in cone beam breast CT. Methods: EGSnrc-based Monte Carlo codes were used to estimate the MGDs. Hemi-ellipsoids 13 cm in diameter and 10 cm high were used to simulate pendant-geometry breasts. Two different types of hemi-ellipsoidal models were employed: voxels in quasi-homogeneous phantoms were designated as either adipose or glandular tissue, while voxels in homogeneous phantoms were designated as a mixture of adipose and glandular tissues. Breast compositions of 25% and 50% volume glandular fractions (VGFs), defined as the ratio of glandular tissue voxels to entire breast voxels in the quasi-homogeneous phantoms, were studied. These VGFs were converted into glandular fractions by weight and used to construct the corresponding homogeneous phantoms. 80 kVp x-rays with a mean energy of 47 keV were used in the simulation. A total of 10⁹ photons were used to image the phantoms, and the energies deposited in the phantom voxels were tallied. Breast doses in homogeneous phantoms were averaged over all voxels and then used to calculate the MGDs using the F-factors evaluated at the mean energy of the x-rays. The MGDs for quasi-homogeneous phantoms were computed directly by averaging the doses over all glandular tissue voxels. The MGDs estimated for the two types of phantoms were normalized to the free-in-air dose at the iso-center and compared. Results: The normalized MGDs were 0.756 and 0.732 mGy/mGy for the 25% and 50% VGF homogeneous breasts and 0.761 and 0.733 mGy/mGy for the corresponding quasi-homogeneous breasts, respectively. The MGDs estimated for the two types of phantoms agreed within 1% in this study. Conclusion: MGDs for homogeneous breast models may be adequately estimated by converting from the average breast dose using the F-factor.
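The two estimation routes compared above can be sketched as follows. This is a simplified illustration: the F-factor is treated as a single conversion ratio at the mean beam energy, and the numbers in the usage are placeholders, not the study's phantom data.

```python
import numpy as np

def mgd_direct(dose, is_glandular):
    """Quasi-homogeneous route: average the tallied dose over the
    glandular-tissue voxels only."""
    return float(dose[is_glandular].mean())

def mgd_via_f_factor(dose, f_factor):
    """Homogeneous route: average the dose over ALL voxels of the
    mixture phantom, then convert to glandular dose with the F-factor
    evaluated at the mean x-ray energy."""
    return float(dose.mean()) * f_factor
```

In the study both estimates are further normalized to the free-in-air dose at the iso-center before comparison.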
Vitrification of ion exchange resins
Cicero-Herman, Connie A.; Workman, Rhonda Jackson
2001-01-01
The present invention relates to the vitrification of ion exchange resins that have become loaded with hazardous or radioactive wastes, in a way that produces a homogeneous and durable waste form and reduces the disposal volume of the resin. The methods of the present invention involve directly adding borosilicate glass formers and an oxidizer to the ion exchange resin and heating the mixture at a sufficient temperature to produce a homogeneous glass.
Processing of non-oxide ceramics from sol-gel methods
Landingham, Richard; Reibold, Robert A.; Satcher, Joe
2014-12-12
A general procedure, applied to a variety of sol-gel precursors and solvent systems, for preparing and controlling homogeneous dispersions of very small particles within each other. Fine homogeneous dispersions, processed at elevated temperatures under controlled atmospheres, yield a ceramic powder that can be consolidated into a component by standard commercial means: sintering, hot pressing, hot isostatic pressing (HIP), hot/cold extrusion, spark plasma sintering (SPS), etc.
Ensemble Learning Method for Hidden Markov Models
2014-12-01
Ensemble HMM landmine detector: mine signatures vary according to the mine type, mine size, and burial depth. Similarly, clutter signatures vary with soil ... We propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood (ML), the minimum ...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Accommodation spaces 0.95 Consumable liquid tanks 0.00 or 0.95—whichever results in the more disabling condition...: (1) The hoppers are full of seawater; (2) The permeability of flooded spaces is as provided by Table... the calculations required by this section: (1) Dredged spoil in the hopper is a homogeneous liquid...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Accommodation spaces 0.95 Consumable liquid tanks 0.00 or 0.95—whichever results in the more disabling condition...: (1) The hoppers are full of seawater; (2) The permeability of flooded spaces is as provided by Table... the calculations required by this section: (1) Dredged spoil in the hopper is a homogeneous liquid...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Accommodation spaces 0.95 Consumable liquid tanks 0.00 or 0.95—whichever results in the more disabling condition...: (1) The hoppers are full of seawater; (2) The permeability of flooded spaces is as provided by Table... the calculations required by this section: (1) Dredged spoil in the hopper is a homogeneous liquid...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Accommodation spaces 0.95 Consumable liquid tanks 0.00 or 0.95—whichever results in the more disabling condition...: (1) The hoppers are full of seawater; (2) The permeability of flooded spaces is as provided by Table... the calculations required by this section: (1) Dredged spoil in the hopper is a homogeneous liquid...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Accommodation spaces 0.95 Consumable liquid tanks 0.00 or 0.95—whichever results in the more disabling condition...: (1) The hoppers are full of seawater; (2) The permeability of flooded spaces is as provided by Table... the calculations required by this section: (1) Dredged spoil in the hopper is a homogeneous liquid...
Durability of Capped Wood Plastic Composites
Mark Mankowski; Mark J. Manning; Damien P. Slowik
2015-01-01
Manufacturers of wood plastic composites (WPCs) have recently introduced capped decking to their product lines. These new materials have begun to take market share from the previous generation of uncapped products that possessed a homogeneous composition throughout the thickness of their cross-section. These capped offerings have been introduced with claims that the...
Increasing Sensitivity In Continuous-Flow Electrophoresis
NASA Technical Reports Server (NTRS)
Sharnez, Rizwan; Sammons, David W.
1994-01-01
Sensitivity of continuous-flow electrophoresis (CFE) chamber increased by introducing lateral gradients in concentration of buffer solution and thickness of chamber. Such gradients, with resulting enhanced separation, achieved in CFE chamber with wedge-shaped cross section and collateral flow. Enables improved separations of homogeneous components of mixtures of variety of biologically important substances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valous, Nektarios A.; Lahrmann, Bernd; Halama, Niels
Purpose: The interactions of neoplastic cells with each other and the microenvironment are complex. To understand intratumoral heterogeneity, subtle differences should be quantified. Main factors contributing to heterogeneity include the gradient ischemic level within neoplasms, the action of the microenvironment, mechanisms of intercellular transfer of genetic information, and differential mechanisms of modification of genetic material/proteins. This may reflect on the expression of biomarkers in the context of prognosis/stratification. Hence, a rigorous approach for assessing the spatial intratumoral heterogeneity of histological biomarker expression with accuracy and reproducibility is required, since patterns in immunohistochemical images can be challenging to identify and describe. Methods: A quantitative method that is useful for characterizing complex irregular structures is lacunarity; it is a multiscale technique that exhaustively samples the image, while the decay of its index as a function of window size follows characteristic patterns for different spatial arrangements. In histological images, lacunarity provides a useful measure of the spatial organization of a biomarker when a sampling scheme is employed and relevant features are computed. The proposed approach quantifies the segmented proliferative cells and not the textural content of the histological slide, thus providing a more realistic measure of heterogeneity within the sample space of the tumor region. The aim is to investigate, in whole sections of primary pancreatic neuroendocrine neoplasms (pNENs), using whole-slide imaging and image analysis, the spatial intratumoral heterogeneity of Ki-67 immunostains. Unsupervised learning is employed to verify that the approach can partition the tissue sections according to distributional heterogeneity. Results: The architectural complexity of histological images has shown that single measurements are often insufficient.
The inhomogeneity of the distribution depends not only on the percentage content of the proliferation phase but also on how the phase fills the space. Lacunarity curves demonstrate variations in the sampled image sections. Since the spatial distribution of proliferation in each case is different, the width of the curves changes too. Image sections that have smaller numerical variations in the computed features correspond to neoplasms with spatially homogeneous proliferation, while larger variations correspond to cases where proliferation shows various degrees of clumping. Grade 1 (uniform/nonuniform: 74%/26%) and grade 3 (uniform: 100%) pNENs demonstrate a more homogeneous proliferation, with grade 1 neoplasms being more variant, while grade 2 tumor regions render a more diverse landscape (50%/50%). Hence, some cases show an increased degree of spatial heterogeneity compared to others of similar grade. Whether this is a sign of different tumor biology and an association with a more benign/malignant clinical course needs to be investigated further. The extent and range of spatial heterogeneity has the potential to be evaluated as a prognostic marker. Conclusions: The association with tumor grade, as well as the rationale that the methodology reflects true tumor architecture, supports the technical soundness of the method. This reflects a general approach which is relevant to other solid tumors and biomarkers. Drawing upon the merits of computational biomedicine, the approach uncovers salient features for use in future studies of clinical relevance.
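A gliding-box lacunarity index of the kind used above can be sketched as follows. This is a minimal sketch on a binary proliferation mask, using the standard definition Λ(r) = ⟨M²⟩/⟨M⟩² = variance/mean² + 1 of the box-mass distribution at window size r; it is not the authors' exact sampling scheme or feature set.

```python
import numpy as np

def lacunarity(mask, box):
    """Gliding-box lacunarity of a 2-D binary mask at window size `box`.

    Slides a box x box window over every position, records the 'mass'
    (number of positive pixels) in each window, and returns
    variance/mean**2 + 1 of that mass distribution. Homogeneous
    patterns give values near 1; clumped patterns give larger values.
    """
    H, W = mask.shape
    masses = [
        mask[i:i + box, j:j + box].sum()
        for i in range(H - box + 1)
        for j in range(W - box + 1)
    ]
    m = np.asarray(masses, dtype=float)
    return float(m.var() / m.mean() ** 2 + 1.0)
```

Evaluating this over a range of window sizes yields the lacunarity curves whose width and decay the abstract uses to separate spatially homogeneous from clumped proliferation.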
A new silica-infiltrated Y-TZP obtained by the sol-gel method.
Campos, T M B; Ramos, N C; Machado, J P B; Bottino, M A; Souza, R O A; Melo, R M
2016-05-01
The aim of this study was to evaluate silica infiltration into dental zirconia (VITA In-Ceram 2000 YZ, Vita Zahnfabrik) and its effects on the zirconia's surface characteristics, structural homogeneity and bonding to a resin cement. Infiltration was performed by immersion of the pre-sintered zirconia specimens in silica sols for five days (ZIn). Negative controls (pure zirconia specimens, ZCon-) and positive controls (specimens kept in water for 5 days, ZCon+) were also prepared. After sintering, the groups were evaluated by X-ray diffraction (XRD), grazing angle X-ray diffraction (DRXR), scanning electron microscopy (SEM), contact angle measurements, optical profilometry, a biaxial flexural test and a shear bonding test. Weibull analysis was used to determine the Weibull modulus (m) and characteristic strength (σ0) of all groups. There were no major changes in strength for the infiltrated group, and homogeneity (m) was increased. A layer of ZrSiO4 was formed on the surface. The bond strength to resin cement was improved after zirconia infiltration, acid conditioning and the use of an MDP primer. The sol-gel method is an efficient and simple way to increase the homogeneity of zirconia. Infiltration also improved bonding to resin cement. The performance of a zirconia infiltrated by silica gel improved in at least two ways: structural homogeneity and bonding to resin cement. The infiltration is simple to perform and can be easily managed in a prosthesis laboratory.
Cho, Seok-Cheol; Choi, Woon-Yong; Oh, Sung-Ho; Lee, Choon-Geun; Seo, Yong-Chang; Kim, Ji-Seon; Song, Chi-Ho; Kim, Ga-Vin; Lee, Shin-Young; Kang, Do-Hyung; Lee, Hyeon-Yong
2012-01-01
The marine microalga Scenedesmus sp., which is known to be suitable for biodiesel production because of its high lipid content, was subjected to the conventional Folch method of lipid extraction combined with a high-pressure homogenization pretreatment process at 1200 psi and 35°C. The algal lipid yield was about 24.9% through this process, whereas only 19.8% lipid could be obtained by following the conventional lipid extraction procedure using the solvent chloroform : methanol (2 : 1, v/v). The present approach requires a 30 min process time and a moderate working temperature of 35°C, compared to the conventional extraction method, which usually requires >5 h and a temperature of 65°C. It was found that this combined extraction process followed second-order reaction kinetics, meaning most of the cellular lipids were extracted during the initial period of extraction, mostly within 30 min. In contrast, during the conventional extraction process, the cellular lipids were slowly and continuously extracted for >5 h, following first-order kinetics. Confocal and scanning electron microscopy revealed the altered texture of algal biomass pretreated with high-pressure homogenization. These results clearly demonstrate that the Folch method coupled with high-pressure homogenization pretreatment can easily disrupt the rigid cell walls of microalgae and release the intact lipids, with minimized extraction time and temperature, both of which are essential for maintaining good quality of the lipids for biodiesel production. PMID:22969270
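The contrast between the two kinetic regimes can be written down directly as yield-versus-time models (a sketch with illustrative parameter values; y_max is the asymptotic lipid yield in percent and k a rate constant, neither taken from the paper).

```python
import numpy as np

def first_order_yield(t, y_max, k):
    """First-order extraction: yield approaches y_max gradually,
    y(t) = y_max * (1 - exp(-k t))."""
    return y_max * (1.0 - np.exp(-k * t))

def second_order_yield(t, y_max, k):
    """Second-order extraction: steep initial recovery then a plateau,
    y(t) = y_max**2 * k * t / (1 + y_max * k * t)."""
    return (y_max ** 2 * k * t) / (1.0 + y_max * k * t)
```

With matched parameters, the second-order curve has initial rate y_max²·k versus y_max·k for the first-order curve, mirroring the observation that most lipid is recovered within the first 30 min of the combined process while conventional extraction proceeds slowly for hours.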
Martena, Valentina; Shegokar, Ranjita; Di Martino, Piera; Müller, Rainer H
2014-09-01
Nicergoline, a poorly soluble active pharmaceutical ingredient, possesses vaso-active properties which cause peripheral and central vasodilatation. In this study, nanocrystals of nicergoline were prepared in an aqueous solution of polysorbate 80 (nanosuspension) using four different laboratory-scale size reduction techniques: high pressure homogenization (HPH), bead milling (BM) and combination techniques (high pressure homogenization followed by bead milling, HPH + BM, and bead milling followed by high pressure homogenization, BM + HPH). The nanocrystals were investigated with regard to their mean particle size, zeta potential and particle dissolution. A short-term physical stability study on nanocrystals stored at three different temperatures (4, 20 and 40 °C) was performed to evaluate the tendency toward changes in particle size, aggregation and zeta potential. The size reduction technique and process parameters such as milling time, number of homogenization cycles and pressure greatly affected the size of the nanocrystals. Among the techniques used, the combination techniques showed superior and consistent particle size reduction compared to the other two methods, with HPH + BM and BM + HPH giving nanocrystals with mean particle sizes of 260 and 353 nm, respectively. Particle dissolution was increased for all nanocrystal samples, but it was particularly increased by HPH and the combination techniques. Independently of the production method, nicergoline nanocrystals showed a slight increase in particle size over time, but remained below 500 nm at 20 °C and under refrigeration conditions.
Asymptotic quantum inelastic generalized Lorenz Mie theory
NASA Astrophysics Data System (ADS)
Gouesbet, G.
2007-10-01
The (electromagnetic) generalized Lorenz-Mie theory describes the interaction between an electromagnetic arbitrary shaped beam and a homogeneous sphere. It is a generalization of the Lorenz-Mie theory, which deals with the simpler case of plane wave illumination. In a recent paper, we considered (i) elastic cross-sections in electromagnetic generalized Lorenz-Mie theory and (ii) elastic cross-sections in an associated quantum generalized Lorenz-Mie theory, and demonstrated that the electromagnetic problem is equivalent to a superposition of two effective quantum problems. We now intend to generalize this result from elastic cross-sections to inelastic cross-sections. A prerequisite is to build an asymptotic quantum inelastic generalized Lorenz-Mie theory, which is presented in this paper.
Comparison of Hansen-Roach and ENDF/B-IV cross sections for ²³³U criticality calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNeany, S. R.; Jenkins, J. D.
A comparison is made between criticality calculations performed using ENDF/B-IV cross sections and the 16-group Hansen-Roach library at ORNL. The area investigated is homogeneous systems of highly enriched ²³³U in simple geometries. Calculations are compared with experimental data for a wide range of H/²³³U ratios. Results show that calculations of k_eff made with the Hansen-Roach cross sections agree within 1.5 percent for the experiments considered. Results using ENDF/B-IV cross sections were in good agreement for well-thermalized systems, but discrepancies of up to 7 percent in k_eff were observed in fast and epithermal systems. (auth)
Composite Beam Theory with Material Nonlinearities and Progressive Damage
NASA Astrophysics Data System (ADS)
Jiang, Fang
The beam has historically found broad application. Nowadays, many engineering constructions still rely on this type of structure, which may be made of anisotropic and heterogeneous materials. These applications motivate the development of beam theories in which the impact of material nonlinearities and damage on the global constitutive behavior has been a focus in recent years. Reliable predictions of these nonlinear beam responses depend not only on the quality of the material description but also on a comprehensively generalized multiscale methodology which fills the theoretical gaps between the scales in an efficient yet high-fidelity manner. Conventional beam modeling methodologies built upon ad hoc assumptions lack the required reliability. Therefore, the focus of this dissertation is to create a reliable yet efficient method, and the corresponding tool, for composite beam modeling. A nonlinear beam theory is developed based on the Mechanics of Structure Genome (MSG) using the variational asymptotic method (VAM). The three-dimensional (3D) nonlinear continuum problem is rigorously reduced to a one-dimensional (1D) beam model and a two-dimensional (2D) cross-sectional analysis featuring both geometric and material nonlinearities by exploiting the small geometric parameter which is an inherent geometric characteristic of the beam. The 2D nonlinear cross-sectional analysis utilizes the 3D material models to homogenize the beam cross-sectional constitutive responses considering nonlinear elasticity and progressive damage. The results of this homogenization enter as constitutive laws into the global nonlinear 1D beam analysis. The theoretical foundation is formulated without unnecessary kinematic assumptions.
Curvilinear coordinates and vector calculus are utilized to build the 3D deformation gradient tensor, of which the components are formulated in terms of cross-sectional coordinates, generalized beam strains, unknown warping functions, and the 3D spatial gradients of these warping functions. Asymptotic analysis of the extended Hamiltonian's principle suggests dropping the terms of axial gradients of the warping functions. As a result, the solid mechanics problem resolved into a 3D continuum is dimensionally reduced to a problem of solving the warping functions on a 2D cross-sectional field by minimizing the information loss. The present theory is implemented using the finite element method (FEM) in Variational Asymptotic Beam Sectional Analysis (VABS), a general-purpose cross-sectional analysis tool. An iterative method is applied to solve the finite warping field for the classical-type model in the form of the Euler-Bernoulli beam theory. The deformation gradient tensor is directly used to enable the capability of dealing with finite deformation, various strain definitions, and several types of material constitutive laws regarding the nonlinear elasticity and progressive damage. Analytical and numerical examples are given for various problems including the trapeze effect, Poynting effect, Brazier effect, extension-bending coupling effect, and free edge damage. By comparison with the predictions from 3D finite element analyses (FEA), 2D FEA based on plane stress assumptions, and experimental data, the structural and material responses are proven to be rigorously captured by the present theory and the computational cost is significantly reduced. Due to the semi-analytical feature of the code developed, the unrealistic numerical issues widely seen in the conventional FEA with strain softening material behaviors are prevented by VABS. 
In light of these intrinsic features, the nonlinear elastic and inelastic 3D material models can be economically calibrated by data-matching the VABS predictions directly with the experimental measurements from slender coupons. Furthermore, the global behavior of slender composite structures in meters can also be effectively characterized by VABS without unnecessary loss of important information of its local laminae in micrometers.
Hot zero power reactor calculations using the Insilico code
Hamilton, Steven P.; Evans, Thomas M.; Davidson, Gregory G.; ...
2016-03-18
In this paper we describe the reactor physics simulation capabilities of the insilico code. A description of the various capabilities of the code is provided, including detailed discussion of the geometry, meshing, cross section processing, and neutron transport options. Numerical results demonstrate that the insilico SP N solver with pin-homogenized cross section generation is capable of delivering highly accurate full-core simulation of various PWR problems. Comparison to both Monte Carlo calculations and measured plant data is provided.
NASA Astrophysics Data System (ADS)
Moon, J. W.; Paradis, C. J.; von Netzer, F.; Dixon, E.; Majumder, E.; Joyner, D.; Zane, G.; Fitzgerald, K.; Xiaoxuan, G.; Thorgersen, M. P.; Lui, L.; Adams, B.; Brewer, S. S.; Williams, D.; Lowe, K. A.; Rodriguez, M., Jr.; Mehlhorn, T. L.; Pfiffner, S. M.; Chakraborty, R.; Arkin, A. P.; Terry, A. Y.; Wall, J. D.; Stahl, D. A.; Elias, D. A.; Hazen, T. C.
2017-12-01
Conventional monitoring wells have produced useful long-term data about contaminants, carbon flux, microbial populations and their evolution. However, the averaged, homogenized groundwater matrix from these wells is insufficient to represent all media properties in the subsurface. This pilot study investigated the solid, liquid and gas phases of soil core samples from both uncontaminated and contaminated areas of the ENIGMA field research site at Oak Ridge, Tennessee. We focused on a site-specific assessment with a depth perspective that included soil structure, soil minerals, major and trace elements and biomass for the solid phase; centrifuged soil pore water, including cations, anions, organic acids, pH and conductivity, for the liquid phase; and gas (CO2, CH4, N2O) evolution over a 4-week incubation with soil and unfiltered groundwater. Pore water from soil core sections showed a correlation between contamination levels with depth and the potential abundance of sulfate- and nitrate-reducing bacteria, based on a two-order-of-magnitude decrease in concentration. A merged interpretation including mineralogical considerations revealed a more complicated correlation among contaminants, soil texture, clay minerals, groundwater levels, and biomass. This sampling campaign emphasized that subsurface microbial activity and metabolic reactions can be influenced by a variety of factors, but can be understood by considering the influence of multiple geochemical factors from all subsurface phases, including water, air, and solid, along depth, rather than from homogenized groundwater alone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10³-10⁵ times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
NASA Astrophysics Data System (ADS)
Guijarro, José A.; López, José A.; Aguilar, Enric; Domonkos, Peter; Venema, Victor; Sigró, Javier; Brunet, Manola
2017-04-01
After the successful inter-comparison of homogenization methods carried out in the COST Action ES0601 (HOME), many methods kept improving their algorithms, suggesting the need of performing new inter-comparison exercises. However, manual applications of the methodologies to a large number of testing networks cannot be afforded without involving the work of many researchers over an extended time. The alternative is to make the comparisons as automatic as possible, as in the MULTITEST project, which, funded by the Spanish Ministry of Economy and Competitiveness, tests homogenization methods by applying them to a large number of synthetic networks of monthly temperature and precipitation. One hundred networks of 10 series were sampled from different master networks containing 100 series of 720 values (60 years times 12 months). Three master temperature networks were built with different degree of cross-correlations between the series in order to simulate conditions of different station densities or climatic heterogeneity. Also three master synthetic networks were developed for precipitation, this time mimicking the characteristics of three different climates: Atlantic temperate, Mediterranean and monsoonal. Inhomogeneities were introduced in every network sampled from the master networks, and all publicly available homogenization methods that we could run in an automatic way were applied to them: ACMANT 3.0, Climatol 3.0, MASH 3.03, RHTestV4, USHCN v52d and HOMER 2.6. Most of them were tested with different settings, and their comparative results can be inspected in box-plot graphics of Root Mean Squared Errors and trend biases computed between the homogenized data and their original homogeneous series. 
In a first stage, inhomogeneities were applied to the synthetic homogeneous series with five different settings with increasing difficulty and realism: i) big shifts in half of the series; ii) the same with a strong seasonality; iii) short term platforms and local trends; iv) random number of shifts with random size and location in all series; and v) the same plus seasonality of random amplitude. The shifts were additive for temperature and multiplicative for precipitation. The second stage is dedicated to study the impact of the number of series in the networks, seasonalities other than sinusoidal, and the occurrence of simultaneous shifts in a high number of series. Finally, tests will be performed on a longer and more realistic benchmark, with varying number of missing data along time, similar to that used in the COST Action ES0601. These inter-comparisons will be valuable both to the users and to the developers of the tested packages, who can see how their algorithms behave under varied climate conditions.
NASA Astrophysics Data System (ADS)
Schindler, Stefan; Mergheim, Julia; Zimmermann, Marco; Aurich, Jan C.; Steinmann, Paul
2017-01-01
A two-scale material modeling approach is adopted in order to determine macroscopic thermal and elastic constitutive laws and the respective parameters for metal matrix composite (MMC). Since the common homogenization framework violates the thermodynamical consistency for non-constant temperature fields, i.e., the dissipation is not conserved through the scale transition, the respective error is calculated numerically in order to prove the applicability of the homogenization method. The thermomechanical homogenization is applied to compute the macroscopic mass density, thermal expansion, elasticity, heat capacity and thermal conductivity for two specific MMCs, i.e., aluminum alloy Al2024 reinforced with 17 or 30 % silicon carbide particles. The temperature dependency of the material properties has been considered in the range from 0 to 500°C, the melting temperature of the alloy. The numerically determined material properties are validated with experimental data from the literature as far as possible.
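The abstract above determines effective MMC properties by finite-element thermomechanical homogenization; as a sanity check, any such homogenized stiffness must fall between the classical Voigt (iso-strain) and Reuss (iso-stress) bounds. The sketch below computes those bounds for the two particle fractions mentioned; the room-temperature moduli are illustrative textbook-order values, not data from the study.

```python
def voigt(f, e_p, e_m):
    # Voigt (iso-strain) upper bound on the effective Young's modulus
    # for particle fraction f, particle modulus e_p, matrix modulus e_m
    return f * e_p + (1.0 - f) * e_m

def reuss(f, e_p, e_m):
    # Reuss (iso-stress) lower bound: harmonic mean of the moduli
    return 1.0 / (f / e_p + (1.0 - f) / e_m)

# Illustrative room-temperature moduli (GPa): Al2024 ~73, SiC ~410
E_M, E_P = 73.0, 410.0
for f in (0.17, 0.30):                 # the two particle fractions studied
    lo, hi = reuss(f, E_P, E_M), voigt(f, E_P, E_M)
    # any admissible homogenized modulus must satisfy lo <= E_eff <= hi
```

The FE homogenization in the paper refines this bracket by resolving the actual particle geometry; the bounds are only a cheap consistency check.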
Method of Mapping Anomalies in Homogenous Material
NASA Technical Reports Server (NTRS)
Taylor, Bryant D. (Inventor); Woodard, Stanley E. (Inventor)
2016-01-01
An electrical conductor and antenna are positioned in a fixed relationship to one another. Relative lateral movement is generated between the electrical conductor and a homogenous material while maintaining the electrical conductor at a fixed distance from the homogenous material. The antenna supplies a time-varying magnetic field that causes the electrical conductor to resonate and generate harmonic electric and magnetic field responses. Disruptions in at least one of the electric and magnetic field responses during this lateral movement are indicative of a lateral location of a subsurface anomaly. Next, relative out-of-plane movement is generated between the electrical conductor and the homogenous material in the vicinity of the anomaly's lateral location. Disruptions in at least one of the electric and magnetic field responses during this out-of-plane movement are indicative of a depth location of the subsurface anomaly. A recording of the disruptions provides a mapping of the anomaly.
Hoffmann, Brittany; Carlson, Christie; Rao, Deepa A
2014-01-01
The purpose of this work was to assess the use of food colors as a visual aid to determine homogeneous mixing in the extemporaneous preparation of capsules. Six different batches of progesterone slow-release 200-mg capsules were prepared by different mixing methods until visually determined as homogeneous based on the yellow food coloring distribution in the preparation by the Central Iowa Compounding Pharmacy, Des Moines, Iowa. UV-Vis spectrophotometry was used to extract and evaluate yellow food coloring content in each of these batches, compared to an in-house, small-batch geometric dilution preparation of progesterone slow-release 200-mg capsules. Of the 6 batches tested, only one, which followed the principles of additive dilution and an appropriate mixing time, was both visually and quantitatively homogeneous in the detection of yellow food coloring. The use of food coloring alone is not a valid quality-assurance tool in determining homogeneous mixing. Principles of geometric and/or additive dilution and appropriate mixing times, along with the food color, can serve as a quality-assurance tool.
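A quantitative homogeneity check of the kind described above often reduces to the coefficient of variation (CV) of the dye absorbance across sampled capsules. The sketch below is a minimal illustration of that idea; the absorbance readings and the acceptance thresholds are hypothetical, not values from the study.

```python
import statistics

def coefficient_of_variation(absorbances):
    # CV (%) of dye absorbance across sampled capsules; lower = more uniform
    return 100.0 * statistics.stdev(absorbances) / statistics.fmean(absorbances)

# Hypothetical UV-Vis absorbance readings from 10 capsules per batch
well_mixed   = [0.51, 0.50, 0.52, 0.49, 0.51, 0.50, 0.52, 0.50, 0.51, 0.49]
poorly_mixed = [0.30, 0.62, 0.45, 0.71, 0.25, 0.55, 0.40, 0.66, 0.35, 0.58]
```

A batch that looks uniformly yellow to the eye can still show a large CV, which is exactly the gap between visual and quantitative homogeneity the study reports.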
Avdievich, Nikolai I.; Oh, Suk-Hoon; Hetherington, Hoby P.; Collins, Christopher M.
2010-01-01
Purpose To improve the homogeneity of transmit volume coils at high magnetic fields (≥ 4 T). Due to RF field/tissue interactions at high fields, 4–8 T, the transmit profile from head-sized volume coils shows a distinctive pattern with relatively strong RF magnetic field B1 in the center of the brain. Materials and Methods In contrast to conventional volume coils at high field strengths, surface coil phased arrays can provide increased RF field strength peripherally. In theory, simultaneous transmission from these two devices could produce a more homogeneous transmission field. To minimize interactions between the phased array and the volume coil, counter rotating current (CRC) surface coils consisting of two parallel rings carrying opposite currents were used for the phased array. Results Numerical simulations and experimental data demonstrate that substantial improvements in transmit field homogeneity can be obtained. Conclusion We have demonstrated the feasibility of using simultaneous transmission with human head-sized volume coils and CRC phased arrays to improve homogeneity of the transmit RF B1 field for high-field MRI systems. PMID:20677280
Effect of homogenous-heterogeneous reactions on MHD Prandtl fluid flow over a stretching sheet
NASA Astrophysics Data System (ADS)
Khan, Imad; Malik, M. Y.; Hussain, Arif; Salahuddin, T.
An analysis is performed to explore the effects of homogenous-heterogeneous reactions on the two-dimensional flow of a Prandtl fluid over a stretching sheet. In the present analysis, we use the developed model of homogeneous-heterogeneous reactions in boundary layer flow. The mathematical formulation of the presented flow phenomenon yields nonlinear partial differential equations. Using scaling transformations, the governing partial differential equations (the momentum equation and the homogeneous-heterogeneous reaction equations) are transformed into nonlinear ordinary differential equations (ODEs). The resulting nonlinear ODEs are then solved by a computational scheme known as the shooting method. The quantitative and qualitative behavior of the physical quantities of interest (velocity, concentration and drag force coefficient) is examined under the prescribed physical constraints through figures and tables. It is observed that the velocity profile is enhanced with the fluid parameters α and β, while the Hartmann number reduces it. The homogeneous and heterogeneous reaction parameters have opposite effects on the concentration profile. The concentration profile shows retarding behavior for large values of the Schmidt number. The skin friction coefficient increases with the Hartmann number H and the fluid parameter α.
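The shooting method named above converts a boundary value problem into repeated initial value problems: guess the unknown initial slope, integrate forward, and adjust the guess until the far boundary condition is met. The paper applies this to the coupled Prandtl-fluid boundary-layer system; the sketch below demonstrates the same idea on a deliberately simple linear BVP, y'' = -y with y(0) = 0 and y(1) = 1, whose exact initial slope is 1/sin(1).

```python
import math

def rk4_step(f, x, y, h):
    # One classical 4th-order Runge-Kutta step for the system y' = f(x, y)
    k1 = f(x, y)
    k2 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(x + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def terminal_value(slope, n=200):
    # Integrate y'' = -y from x=0 with y(0)=0, y'(0)=slope; return y(1)
    f = lambda x, y: [y[1], -y[0]]
    y, h = [0.0, slope], 1.0 / n
    for i in range(n):
        y = rk4_step(f, i*h, y, h)
    return y[0]

def shoot(target=1.0, lo=0.0, hi=5.0, tol=1e-10):
    # Bisect on the unknown initial slope until y(1) hits the target
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if terminal_value(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

For the nonlinear stretching-sheet equations the same loop applies, except that several slopes (one per unknown boundary derivative) are adjusted together, typically by a Newton iteration rather than bisection.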
Ganesh, D; Nagarajan, G; Ganesan, S
2014-01-01
In parallel to the interest in renewable fuels, there has also been increased interest in homogeneous charge compression ignition (HCCI) combustion. HCCI engines are being actively developed because they have the potential to be highly efficient and to produce low emissions. Even though HCCI has been researched extensively, a few challenges remain. These include controlling the combustion at higher loads and the formation of a homogeneous mixture. To obtain better homogeneity, the present investigation adopted an external mixture formation method, in which a fuel vaporiser was used to achieve excellent HCCI combustion in a single-cylinder air-cooled direct injection diesel engine. In continuation of our previous work, in the current study vaporised jatropha methyl ester (JME) was mixed with air to form a homogeneous mixture and inducted into the cylinder during the intake stroke to analyze the combustion, emission and performance characteristics. To control early ignition of the JME vapor-air mixture, a cooled (30 °C) exhaust gas recirculation (EGR) technique was adopted. The experimental results show an 81% reduction in NOx and a 72% reduction in smoke emissions.
Benchmarking homogenization algorithms for monthly data
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2012-01-01
The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. 
Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.
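The performance metrics (i) and (ii) listed in the benchmarking study above can be made concrete with a small sketch. The function names and the ordinary-least-squares trend estimator are illustrative choices, not the benchmark's actual implementation; the centered RMSE removes each series' mean so that a constant offset, which is irrelevant for climate analysis, is not penalized.

```python
import statistics

def centered_rmse(homogenized, truth):
    # RMSE after subtracting each series' own mean
    hm, tm = statistics.fmean(homogenized), statistics.fmean(truth)
    return statistics.fmean(
        [((h - hm) - (t - tm)) ** 2 for h, t in zip(homogenized, truth)]
    ) ** 0.5

def linear_trend(series):
    # Ordinary least-squares slope, in units per time step
    n = len(series)
    xm = (n - 1) / 2.0
    ym = statistics.fmean(series)
    num = sum((i - xm) * (y - ym) for i, y in enumerate(series))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

def trend_bias(homogenized, truth):
    # Error in the linear trend estimate, metric (ii)
    return linear_trend(homogenized) - linear_trend(truth)
```

In the actual study these metrics were computed both per station and on the network-average regional series, and at several averaging scales.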
[Growth Factors and Interleukins in Amniotic Membrane Tissue Homogenate].
Stachon, T; Bischoff, M; Seitz, B; Huber, M; Zawada, M; Langenbucher, A; Szentmáry, N
2015-07-01
Application of amniotic membrane homogenate eye drops may be a potential treatment alternative for therapy-resistant corneal epithelial defects. The purpose of this study was to determine the concentrations of epidermal growth factor (EGF), fibroblast growth factor basic (bFGF), hepatocyte growth factor (HGF), keratinocyte growth factor (KGF), interleukin-6 (IL-6) and interleukin-8 (IL-8) in amniotic membrane homogenates. Amniotic membranes from 8 placentas were prepared and thereafter stored at -80 °C using the standard methods of the LIONS Cornea Bank Saar-Lor-Lux, Trier/Westpfalz. After thawing, the amniotic membranes were cut into two pieces and homogenized in liquid nitrogen. One part of the homogenate was prepared in cell-lysis buffer, the other part in PBS. The tissue homogenates were stored at -20 °C until enzyme-linked immunosorbent assay (ELISA) analysis of EGF, bFGF, HGF, KGF, IL-6 and IL-8 concentrations. Concentrations of KGF, IL-6 and IL-8 were below the detection limit with both preparation techniques. The EGF concentration in tissue homogenates treated with cell-lysis buffer (2412 pg/g tissue) was not significantly different from that of tissue homogenates treated with PBS (1586 pg/g tissue, p = 0.72). bFGF release was also not significantly different between cell-lysis buffer (3606 pg/g tissue) and PBS treated tissue homogenates (4649 pg/g tissue, p = 0.35). HGF release was significantly lower using cell-lysis buffer (23,555 pg/g tissue) compared to PBS treated tissue (47,766 pg/g tissue, p = 0.007). Containing EGF, bFGF and HGF, and lacking IL-6 and IL-8, amniotic membrane homogenate eye drops may be a potential treatment alternative for therapy-resistant corneal epithelial defects. Georg Thieme Verlag KG Stuttgart · New York.
SU-E-T-76: Comparing Homogeneity Between Gafchromic Film EBT2 and EBT3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizuno, H; Sumida, I; Ogawa, K
2014-06-01
Purpose: We found in a previous study that the homogeneity of EBT2 differed among lot numbers. Variation in local homogeneity of EBT3 among several lot numbers has not been reported. In this study, we investigated the film homogeneity of Gafchromic EBT3 films compared with EBT2 films. Methods: All sheets from five lots were cut into 12 pieces to investigate film homogeneity, and were irradiated at 0.5, 2, and 3 Gy. To investigate intra- and inter-sheet uniformity, five sheets from five lots were exposed to 2 Gy: intra-sheet uniformity was evaluated by the coefficient of variation of homogeneity over all pieces of a single sheet, and inter-sheet uniformity was evaluated by the coefficient of variation of homogeneity among the same piece numbers in the five sheets. To investigate the difference in ADC value at various doses, a single sheet from each of the five lots was irradiated at 0.5 Gy and 3 Gy in addition to 2 Gy. A scan resolution of 72 dots per inch (dpi) and color depth of 48-bit RGB were used. Films were analyzed with in-house software: the average ADC value in a central ROI and profiles along the X and Y axes were measured. Results and Conclusion: Intra-sheet uniformity of non-irradiated EBT2 films ranged from 0.1% to 0.4%, whereas that of irradiated EBT2 films ranged from 0.2% to 1.5%. On the other hand, intra-sheet uniformity of both irradiated and non-irradiated EBT3 films ranged from 0.2% to 0.6%. Inter-sheet uniformity of all films was less than 0.5%. Interestingly, the homogeneity of EBT3 was similar between non-irradiated and irradiated films, whereas EBT2 showed a dose dependence of homogeneity in the ADC value evaluation. These results suggest that EBT3 homogeneity is improved in this respect.
NASA Astrophysics Data System (ADS)
Guo, Yanhui; Zhou, Chuan; Chan, Heang-Ping; Wei, Jun; Chughtai, Aamer; Sundaram, Baskaran; Hadjiiski, Lubomir M.; Patel, Smita; Kazerooni, Ella A.
2013-04-01
A 3D multiscale intensity homogeneity transformation (MIHT) method was developed to reduce false positives (FPs) in our previously developed CAD system for pulmonary embolism (PE) detection. In MIHT, the voxel intensity of a PE candidate region was transformed to an intensity homogeneity value (IHV) with respect to the local median intensity. The IHVs were calculated in multiscales (MIHVs) to measure the intensity homogeneity, taking into account vessels of different sizes and different degrees of occlusion. Seven new features including the entropy, gradient, and moments that characterized the intensity distributions of the candidate regions were derived from the MIHVs and combined with the previously designed features that described the shape and intensity of PE candidates for the training of a linear classifier to reduce the FPs. 59 CTPA PE cases were collected from our patient files (UM set) with IRB approval and 69 cases from the PIOPED II data set with access permission. 595 and 800 PEs were identified as reference standard by experienced thoracic radiologists in the UM and PIOPED set, respectively. FROC analysis was used for performance evaluation. Compared with our previous CAD system, at a test sensitivity of 80%, the new method reduced the FP rate from 18.9 to 14.1/scan for the PIOPED set when the classifier was trained with the UM set and from 22.6 to 16.0/scan vice versa. The improvement was statistically significant (p<0.05) by JAFROC analysis. This study demonstrated that the MIHT method is effective in reducing FPs and improving the performance of the CAD system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favorite, Jeffrey A.
The Second-Level Adjoint Sensitivity System (2nd-LASS) that yields the second-order sensitivities of a response of uncollided particles with respect to isotope densities, cross sections, and source emission rates is derived in Refs. 1 and 2. In Ref. 2, we solved problems for the uncollided leakage from a homogeneous sphere and a multiregion cylinder using the PARTISN multigroup discrete-ordinates code. In this memo, we derive solutions of the 2nd-LASS for the particular case when the response is a flux or partial current density computed at a single point on the boundary, and the inner products are computed using ray-tracing. Both the PARTISN approach and the ray-tracing approach are implemented in a computer code, SENSPG. The next section of this report presents the equations of the 1st- and 2nd-LASS for uncollided particles and the first- and second-order sensitivities that use the solutions of the 1st- and 2nd-LASS. Section III presents solutions of the 1st- and 2nd-LASS equations for the case of ray-tracing from a detector point. Section IV presents specific solutions of the 2nd-LASS and derives the ray-trace form of the inner products needed for second-order sensitivities. Numerical results for the total leakage from a homogeneous sphere are presented in Sec. V and for the leakage from one side of a two-region slab in Sec. VI. Section VII is a summary and conclusions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, J; Hu, W; Xing, Y
Purpose: Different particle scanning beam delivery systems have different delivery accuracies. This study was performed to determine, for our particle treatment system, an appropriate ratio n = FWHM/GS of spot size (FWHM) to grid size (GS) that can provide homogeneous delivered dose distributions for both proton and heavy ion scanning beam radiotherapy. Methods: We analyzed the delivery errors of our beam delivery system using log files from the treatment of 28 patients. We used a homemade program to simulate square fields for different n values, with and without the delivery errors, and analyzed the homogeneity. All spots were located on a rectilinear grid with equal spacing in the x and y directions. After that, we selected 7 energy levels for both protons and carbon ions. For each energy level, we made 6 square field plans with different n values (1, 1.5, 2, 2.5, 3, 3.5). We then delivered those plans and used films to measure the homogeneity of each field. Results: For the program simulation without delivery errors, the homogeneity was within ±3% when n≥1.1. For both proton and carbon program simulations with delivery errors, and for the film measurements, the homogeneity was within ±3% when n≥2.5. Conclusion: For our facility, with its system errors, n≥2.5 is appropriate for maintaining homogeneity within ±3%.
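The error-free part of the simulation described above can be sketched in one dimension: sum Gaussian spots of FWHM = n × GS placed on a uniform grid and measure the dose ripple over the central region. This 1D simplification and all names below are assumptions for illustration (the study simulated 2D square fields and added measured delivery errors), but it reproduces the qualitative result that larger n gives flatter fields.

```python
import math

def ripple(n, grid=1.0, spots=41, samples=200):
    # 1D dose from Gaussian spots of FWHM = n * grid on a uniform grid;
    # ripple = (max - min) / mean, evaluated over the central region
    sigma = n * grid / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    centers = [(i - spots // 2) * grid for i in range(spots)]
    doses = []
    for j in range(samples):
        x = -2.0 * grid + 4.0 * grid * j / (samples - 1)
        doses.append(sum(math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))
                         for c in centers))
    mean = sum(doses) / len(doses)
    return (max(doses) - min(doses)) / mean
```

With n = 1 the ripple is on the order of 10%, while with n = 2.5 the field is flat to well below 0.1%; positional delivery errors push the practical threshold up, which is why the measured criterion is n ≥ 2.5 rather than the error-free n ≥ 1.1.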
NASA Astrophysics Data System (ADS)
Kondrat'ev, B. P.
1993-06-01
A method is developed for the representation of the potential energy of homogeneous gravitating, as well as electrically charged, bodies in the form of special series. These series contain members consisting of products of the corresponding coefficients appearing in the expansion of external and internal Newtonian potentials in Legendre polynomial series. Several versions of the representation of potential energy through these series are possible. A formula which expresses potential energy not as a volume integral, as is the convention, but as an integral over the body surface is derived. The method is tested for the particular cases of sphere and ellipsoid, and the convergence of the found series is shown.
Ryland, Bradford L; Stahl, Shannon S
2014-08-18
Oxidations of alcohols and amines are common reactions in the synthesis of organic molecules in the laboratory and industry. Aerobic oxidation methods have long been sought for these transformations, but few practical methods exist that offer advantages over traditional oxidation methods. Recently developed homogeneous Cu/TEMPO (TEMPO = 2,2,6,6-tetramethylpiperidinyl-N-oxyl) and related catalyst systems appear to fill this void. The reactions exhibit high levels of chemoselectivity and broad functional-group tolerance, and they often operate efficiently at room temperature with ambient air as the oxidant. These advances, together with their historical context and recent applications, are highlighted in this Minireview. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Zhang, Ruiying; Yao, Junjie; Maslov, Konstantin I.; Wang, Lihong V.
2013-08-01
We propose a method for photoacoustic flow measurement based on the Doppler effect from a flowing homogeneous medium. Excited by spatially modulated laser pulses, the flowing medium induces a Doppler frequency shift in the received photoacoustic signals. The frequency shift is proportional to the component of the flow speed projected onto the acoustic beam axis, and the sign of the shift reflects the flow direction. Unlike conventional flowmetry, this method does not rely on particle heterogeneity in the medium; thus, it can tolerate extremely high particle density. A red-ink phantom flowing in a tube immersed in water was used to validate the method in both the frequency and time domains. The phantom flow immersed in an intralipid solution was also measured.
Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy
NASA Astrophysics Data System (ADS)
Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas
2012-04-01
Model based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in the post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model and all tissue parameters significantly improved spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.
Global stabilisation of a class of generalised cascaded systems by homogeneous method
NASA Astrophysics Data System (ADS)
Ding, Shihong; Zheng, Wei Xing
2016-04-01
This paper considers the problem of global stabilisation of a class of generalised cascaded systems. By using the extended adding-a-power-integrator technique, a global controller is first constructed for the driving subsystem. Then, based on the homogeneous properties and a polynomial assumption, it is shown that stabilisation of the driving subsystem implies stabilisation of the overall cascaded system. Meanwhile, by properly choosing some control parameters, global finite-time stability of the closed-loop cascaded system is also established. The proposed control method has several new features. First, the nonlinear cascaded systems considered in the paper are more general than the conventional ones, since the powers in the nominal part of the driving subsystem are not required to be ratios of positive odd numbers. Second, the proposed method has some flexible parameters which make it possible to design continuously differentiable controllers for cascaded systems, while the existing controllers for this kind of cascaded system are only continuous. Third, the homogeneous and polynomial conditions adopted for the driven subsystem are easier to verify than the matching conditions that were widely used previously. Furthermore, the efficiency of the proposed control method is validated by its application to finite-time tracking control of a non-holonomic wheeled mobile robot.
NASA Astrophysics Data System (ADS)
Pipota, J.; Linhart, O.
The paper deals with a method of inactivating the fertility of fish spermatozoa by gamma radiation. Spermatozoa motility remained unchanged after irradiation. The irradiated sperm was used to induce gynogenesis by retention of the second polar body and by mitotic gynogenesis, realized in carp for the first time. The homogeneity of the gamma-ray field was ±1%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lei; Zuo, Chao; Idir, Mourad; ...
2015-04-21
A novel transport-of-intensity equation (TIE) based phase retrieval method is proposed in which an arbitrarily shaped aperture is placed in the optical wavefield. Within this aperture, the TIE can be solved under non-uniform illumination and even non-homogeneous boundary conditions by iterative discrete cosine transforms with a phase compensation mechanism. Simulations with arbitrary phase, arbitrary aperture shape, and non-uniform intensity distribution verify the effective compensation and high accuracy of the proposed method. An experiment was also carried out to check the feasibility of the proposed method in real measurements. Compared to existing methods, the proposed method is applicable to any type of phase distribution under non-uniform illumination and non-homogeneous boundary conditions within an arbitrarily shaped aperture, which makes TIE with a hard aperture a more flexible phase retrieval tool in practical measurements.
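The paper's iterative compensation and aperture handling are not reproduced here, but the core linear-algebra step behind DCT-based TIE solvers — a discrete Poisson solve with homogeneous Neumann boundary conditions — can be sketched with a numpy-only DCT (function names are illustrative):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix: row k is cos(pi*k*(i+0.5)/n), scaled
    # so that C @ C.T = I (inverse transform is simply C.T).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * k * (i + 0.5) / n) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def tie_poisson_neumann(f, dx=1.0):
    # Solve the discrete Poisson equation lap(u) = f with homogeneous
    # Neumann boundary conditions via a 2-D DCT: the DCT-II basis
    # diagonalizes the reflective-boundary Laplacian with eigenvalues
    # 2*cos(pi*k/n) - 2 per axis.
    n, m = f.shape
    Cn, Cm = dct_matrix(n), dct_matrix(m)
    F = Cn @ f @ Cm.T                                   # forward 2-D DCT
    lam = ((2.0 * np.cos(np.pi * np.arange(n) / n) - 2.0)[:, None]
           + (2.0 * np.cos(np.pi * np.arange(m) / m) - 2.0)[None, :]) / dx**2
    lam[0, 0] = 1.0                                     # avoid 0/0 at DC
    U = F / lam
    U[0, 0] = 0.0                                       # mean of u is arbitrary
    return Cn.T @ U @ Cm                                # inverse 2-D DCT
```

Feeding the solver the Laplacian of a known DCT basis function recovers that function up to its (arbitrary) mean, which is the property the iterative TIE scheme builds on.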
Tripathi, Rajnee; Mishra, Hradyesh Kumar
2016-01-01
In this communication, we describe the Homotopy Perturbation Method with Laplace Transform (LT-HPM), which is used to solve Lane-Emden type differential equations. Lane-Emden type equations are very difficult to solve numerically. Here we apply this method to two linear homogeneous, two linear nonhomogeneous, and four nonlinear homogeneous Lane-Emden type differential equations and compare the results with exact solutions. For the examples considered, LT-HPM yields power-series results closer to the exact solutions than other existing methods. The Laplace transform is used to accelerate the convergence of the power series, and the results, shown in tables and graphs, are in good agreement with other methods in the literature. The results show that LT-HPM is very effective and easy to implement.
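As a numerical reference point (not LT-HPM itself), the index-1 Lane-Emden equation y'' + (2/x)y' + yᵐ = 0, y(0) = 1, y'(0) = 0, has the exact solution sin(x)/x for m = 1, which a plain RK4 integration can reproduce; the 1/x singularity is sidestepped by starting a small step away from the origin:

```python
import numpy as np

def lane_emden(m, x_end, h=1e-3):
    # RK4 for y'' = -(2/x) y' - y^m with y(0) = 1, y'(0) = 0.
    # Start just off the singular point x = 0 (there y ~ 1 - x^2/6).
    x, y, dy = 1e-6, 1.0, 0.0

    def f(x, y, dy):
        return -2.0 * dy / x - np.sign(y) * abs(y)**m  # sign-safe power

    while x < x_end:
        k1y, k1d = dy, f(x, y, dy)
        k2y, k2d = dy + 0.5*h*k1d, f(x + 0.5*h, y + 0.5*h*k1y, dy + 0.5*h*k1d)
        k3y, k3d = dy + 0.5*h*k2d, f(x + 0.5*h, y + 0.5*h*k2y, dy + 0.5*h*k2d)
        k4y, k4d = dy + h*k3d,     f(x + h,     y + h*k3y,     dy + h*k3d)
        y  += h * (k1y + 2*k2y + 2*k3y + k4y) / 6.0
        dy += h * (k1d + 2*k2d + 2*k3d + k4d) / 6.0
        x  += h
    return y
```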
Wang, Dongqin; Li, Yanqun; Hu, Xueqiong; Su, Weimin; Zhong, Min
2015-01-01
Microalgal biodiesel is one of the most promising renewable fuels. The wet technique for lipids extraction has advantages over the dry method, such as energy-saving and shorter procedure. The cell disruption is a key factor in wet oil extraction to facilitate the intracellular oil release. Ultrasonication, high-pressure homogenization, enzymatic hydrolysis and the combination of enzymatic hydrolysis with high-pressure homogenization and ultrasonication were employed in this study to disrupt the cells of the microalga Neochloris oleoabundans. The cell disruption degree was investigated. The cell morphology before and after disruption was assessed with scanning and transmission electron microscopy. The energy requirements and the operation cost for wet cell disruption were also estimated. The highest disruption degree, up to 95.41%, assessed by accounting method was achieved by the combination of enzymatic hydrolysis and high-pressure homogenization. A lipid recovery of 92.6% was also obtained by the combined process. The combined process was found to be more efficient and economical compared with the individual process. PMID:25853267
Light emitting fabric technologies for photodynamic therapy.
Mordon, Serge; Cochrane, Cédric; Tylcz, Jean Baptiste; Betrouni, Nacim; Mortier, Laurent; Koncar, Vladan
2015-03-01
Photodynamic therapy (PDT) is considered to be a promising method for treating various types of cancer. A homogeneous and reproducible illumination during clinical PDT plays a determinant role in preventing under- or over-treatment. The development of flexible light sources would considerably improve the homogeneity of light delivery. The integration of optical fiber into flexible structures could offer an interesting alternative. This paper aims to describe different methods proposed to develop Side Emitting Optical Fibers (SEOF), and how these SEOF can be integrated into a flexible structure to improve light illumination of the skin during PDT. Four main techniques can be described: (i) light blanket integrating side-glowing optical fibers, (ii) light emitting panel composed of SEOF obtained by micro-perforations of the cladding, (iii) embroidery-based light emitting fabric, and (iv) woven-based light emitting fabric. Woven-based light emitting fabrics give the best performance: the highest fluence rate, the best homogeneity of light delivery, and good flexibility. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Khebbab, Mohamed; Feliachi, Mouloud; El Hadi Latreche, Mohamed
2018-03-01
In this paper, a simulation of eddy-current non-destructive testing (EC NDT) on a unidirectional carbon fiber reinforced polymer is performed; for this purpose, a magneto-dynamic formulation in terms of the magnetic vector potential is solved using the finite element heterogeneous multiscale method (FE-HMM). FE-HMM computes the homogenized solution without calculating the homogenized tensor explicitly; the solution is based only on the physical characteristics known in the micro domain. This feature is well suited to EC NDT for evaluating defects in carbon composite materials at the microscopic scale, where defect detection is performed by coil impedance measurement, whose value is intimately linked to the material characteristics at the microscopic level. On this basis, our model can handle different defects such as cracks, inclusions, internal electrical conductivity changes, and heterogeneities. The simulation results were compared with the solution obtained for a material homogenized using a mixture law, and good agreement was found.
Shields, T P; Mollova, E; Ste Marie, L; Hansen, M R; Pardi, A
1999-01-01
An improved method is presented for the preparation of milligram quantities of homogeneous-length RNAs suitable for nuclear magnetic resonance or X-ray crystallographic structural studies. Heterogeneous-length RNA transcripts are processed with a hammerhead ribozyme to yield homogeneous-length products that are then readily purified by anion-exchange high-performance liquid chromatography. This procedure eliminates the need for denaturing polyacrylamide gel electrophoresis, which is the most laborious step in the standard procedure for large-scale production of RNA by in vitro transcription. The hammerhead processing of the heterogeneous-length RNA transcripts also substantially improves the overall yield and purity of the desired RNA product. PMID:10496226
Mt-Insar Landslide Monitoring with the Aid of Homogeneous Pixels Filter
NASA Astrophysics Data System (ADS)
Liu, X. J.; Zhao, C. Y.; Wang, B. H.; Zhu, W. F.
2018-04-01
SAR interferograms are often contaminated by random noise related to temporal decorrelation, geometrical decorrelation and thermal noise, which obscures the fringes and greatly decreases the density of coherent targets and the accuracy of InSAR deformation results, especially for landslide monitoring in vegetated regions and in the rainy season. Two different SAR interferogram filtering methods, namely the Goldstein filter and the homogeneous pixels filter, are compared for one specific landslide. The results show that the homogeneous pixels filter outperforms the Goldstein filter for small-scale loess landslide monitoring and can increase the density of monitoring points. Moreover, the precision of the InSAR result reaches the millimeter level, as verified by comparison with GPS time series measurements.
TEST METHODS TO DETERMINE THE MERCURY EMISSIONS FROM SLUDGE INCINERATION PLANTS
Two test methods for mercury are described along with the laboratory and field studies done in developing and validating them. One method describes how to homogenize and analyze large quantities of sewage sludge. The other test method describes how to measure the mercury emission...
NASA Astrophysics Data System (ADS)
Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.
2017-09-01
Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no longer homogeneous stress redistribution among the surviving wires whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
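The ELS hypothesis admits a compact Monte Carlo sketch (Weibull-distributed fiber strengths, normalized units; this is not the authors' hierarchical model): since survivors share the strain equally, the expected normalized bundle stress is σ(ε) = ε·exp(-(ε/ε₀)ᵐ).

```python
import numpy as np

def els_stress_strain(strains, m, eps0=1.0, n_fibers=200_000, seed=0):
    # Equal Load Sharing: all surviving fibers carry the same strain;
    # a fiber fails permanently once the strain exceeds its
    # Weibull-distributed failure threshold (shape m, scale eps0).
    rng = np.random.default_rng(seed)
    thresholds = eps0 * rng.weibull(m, n_fibers)
    # Normalized bundle stress = strain * fraction of surviving fibers
    return np.array([e * (thresholds > e).mean() for e in strains])
```

The sampled curve converges to the analytic ELS prediction as the fiber count grows, which is the baseline the hybrid "Hierarchical Load Sharing" criterion departs from.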
103Rh NMR spectroscopy and its application to rhodium chemistry.
Ernsting, Jan Meine; Gaemers, Sander; Elsevier, Cornelis J
2004-09-01
Rhodium is used for a number of large processes that rely on homogeneous rhodium-catalyzed reactions, for instance rhodium-catalyzed hydroformylation of alkenes, carbonylation of methanol to acetic acid and hydrodesulfurization of thiophene derivatives (in crude oil). Many laboratory applications in organometallic chemistry and catalysis involve organorhodium chemistry and a wealth of rhodium coordination compounds is known. For these and other areas, 103Rh NMR spectroscopy appears to be a very useful analytical tool. In this review, most of the literature concerning 103Rh NMR spectroscopy published from 1989 up to and including 2003 has been covered. After an introduction to several experimental methods for the detection of the insensitive 103Rh nucleus, a discussion of factors affecting the transition metal chemical shift is given. Computational aspects and calculations of chemical shifts are also briefly addressed. Next, the application of 103Rh NMR in coordination and organometallic chemistry is elaborated in more detail by highlighting recent developments in measurement and interpretation of 103Rh NMR data, in relation to rhodium-assisted reactions and homogeneous catalysis. The dependence of the 103Rh chemical shift on the ligands at rhodium in the first coordination sphere, on the complex geometry, oxidation state, temperature, solvent and concentration is treated. Several classes of compounds and special cases such as chiral rhodium compounds are reviewed. Finally, a section on scalar coupling to rhodium is provided. 2004 John Wiley & Sons, Ltd.
An Active Patch Model for Real World Texture and Appearance Classification
Mao, Junhua; Zhu, Jun; Yuille, Alan L.
2014-01-01
This paper addresses the task of natural texture and appearance classification. Our goal is to develop a simple and intuitive method that performs at the state of the art on datasets ranging from homogeneous texture (e.g., material texture), to less homogeneous texture (e.g., the fur of animals), to inhomogeneous texture (the appearance patterns of vehicles). Our method uses a bag-of-words model where the features are based on a dictionary of active patches. Active patches are raw intensity patches which can undergo spatial transformations (e.g., rotation and scaling) and adjust themselves to best match the image regions. The dictionary of active patches is required to be compact and representative, in the sense that we can use it to approximately reconstruct the images that we want to classify. We propose a probabilistic model to quantify the quality of image reconstruction and design a greedy learning algorithm to obtain the dictionary. We classify images using the occurrence frequency of the active patches. Feature extraction is fast (about 100 ms per image) using the GPU. The experimental results show that our method improves the state of the art on a challenging material texture benchmark dataset (KTH-TIPS2). To test our method on less homogeneous or inhomogeneous images, we construct two new datasets consisting of appearance image patches of animals and vehicles cropped from the PASCAL VOC dataset. Our method outperforms competing methods on these datasets. PMID:25531013
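The occurrence-frequency feature can be sketched as a plain bag-of-words histogram (this omits the spatial transformations that make the patches "active"; the function name and toy data are illustrative):

```python
import numpy as np

def patch_histogram(patches, dictionary):
    # Hard-assign each flattened image patch to its nearest dictionary
    # element (L2 distance), then return the normalized
    # occurrence-frequency histogram used as the image feature vector.
    d2 = ((patches[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    counts = np.bincount(d2.argmin(1), minlength=len(dictionary))
    return counts / counts.sum()
```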
NASA Technical Reports Server (NTRS)
Neugebauer, G. T.; Wilcox, William R.
1992-01-01
Azulene-doped naphthalene was directionally solidified during the vertical Bridgman-Stockbarger technique. Doping homogeneity and convection were determined as a function of the temperature profile in the furnace and the freezing rate. Convection velocities were two orders of magnitude lower when the temperature increased with height. Rarely was the convection pattern axisymmetric, even though the temperature varied less than 0.1 K around the circumference of the growth ampoule. Correspondingly the cross sectional variation in azulene concentration tended to be asymmetric, especially when the temperature increased with height. This cross sectional variation changed dramatically along the ingot, reflecting changes in convection presumably due to the decreasing height of the melt. Although there was large scatter and irreproducibility in the cross sectional variation in doping, this variation tended to be least when the growth rate was low and the convection was vigorous. It is expected that compositional variations would also be small at high growth rates with weak convection and flat interfaces, although this was not investigated in the present experiments. Neither rotation of the ampoule nor deliberate introduction of thermal asymmetries during solidification had a significant influence on cross sectional variations in doping. It is predicted that slow directional solidification under microgravity conditions could produce greater inhomogeneities than on Earth. Combined use of microgravity and magnetic fields would be required to achieve homogeneity when it is necessary to freeze slowly in order to avoid constitutional supercooling.
NASA Technical Reports Server (NTRS)
Schmid, F.
1981-01-01
The crystallinity of large HEM silicon ingots as a function of heat flow conditions is investigated. A balanced heat flow at the bottom of the ingot restricts spurious nucleation to the edge of the melted-back seed in contact with the crucible. Homogeneous resistivity distribution over all the ingot has been achieved. The positioning of diamonds electroplated on wirepacks used to slice silicon crystals is considered. The electroplating of diamonds on only the cutting edge is described and the improved slicing performance of these wires evaluated. An economic analysis of value added costs of HEM ingot casting and band saw sectioning indicates the projected add on cost of HEM is well below the 1986 allocation.
Hybridization and classification of the white pines (Pinus section strobus)
William B. Critchfield
1986-01-01
Many North American and Eurasian white pines retain their ability to hybridize even after long isolation, and about half of all white pine hybrids from controlled pollinations are inter-hemisphere crosses. Within the morphologically homogeneous and otherwise highly crossable core group of white pines, an exception in crossing behavior is Pinus lambertiana...
21 CFR 131.130 - Evaporated milk.
Code of Federal Regulations, 2014 CFR
2014-04-01
... added vitamin D as prescribed by paragraph (b) of this section. It is homogenized. It is sealed in a container and so processed by heat, either before or after sealing, as to prevent spoilage. (b) Vitamin addition. (1) Vitamin D shall be present in such quantity that each fluid ounce of the food contains 25...
Radar signatures of snowflake riming: A modeling study.
Leinonen, Jussi; Szyrmer, Wanda
2015-08-01
The capability to detect the state of snowflake riming reliably from remote measurements would greatly expand the understanding of its global role in cloud-precipitation processes. To investigate the ability of multifrequency radars to detect riming, a three-dimensional model of snowflake growth was used to generate simulated aggregate and crystal snowflakes with various degrees of riming. Three different growth scenarios, representing different temporal relationships between aggregation and riming, were formulated. The discrete dipole approximation was then used to compute the radar backscattering properties of the snowflakes at frequencies of 9.7, 13.6, 35.6, and 94 GHz. In two of the three growth scenarios, the rimed snowflakes exhibit large differences between the backscattering cross sections of the detailed three-dimensional models and the equivalent homogeneous spheroidal models, similarly to earlier results for unrimed snowflakes. When three frequencies are used simultaneously, riming appears to be detectable in a robust manner across all three scenarios. In spite of the differences in backscattering cross sections, the triple-frequency signatures of heavily rimed particles resemble those of the homogeneous spheroids, thus explaining earlier observational results that were compatible with such spheroids.
Ab initio molecular dynamics in a finite homogeneous electric field.
Umari, P; Pasquarello, Alfredo
2002-10-07
We treat homogeneous electric fields within density functional calculations with periodic boundary conditions. A nonlocal energy functional depending on the applied field is used within an ab initio molecular dynamics scheme. The reliability of the method is demonstrated in the case of bulk MgO for the Born effective charges, and the high- and low-frequency dielectric constants. We evaluate the static dielectric constant by performing a damped molecular dynamics in an electric field and avoiding the calculation of the dynamical matrix. Application of this method to vitreous silica shows good agreement with experiment and illustrates its potential for systems of large size.
Indirect tissue electrophoresis: a new method for analyzing solid tissue protein.
Smith, A C
1988-01-01
1. The eye lens core (nucleus) has been a valuable source of molecular biologic information. 2. In these studies, lens nuclei are usually homogenized so that any protein information related to anatomical subdivisions, or layers, of the nucleus is lost. 3. The present report is of a new method, indirect tissue electrophoresis (ITE), which, when applied to fish lens nuclei, permitted (a) automatic correlation of protein information with anatomic layer, (b) production of large, clear electrophoretic patterns even from small tissue samples and (c) detection of more proteins than in liquid extracts of homogenized tissues. 4. ITE seems potentially applicable to a variety of solid tissues.
Cloaking of arbitrarily shaped objects with homogeneous coatings
NASA Astrophysics Data System (ADS)
Forestiere, Carlo; Dal Negro, Luca; Miano, Giovanni
2014-05-01
We present a theory for the cloaking of arbitrarily shaped objects and demonstrate electromagnetic scattering cancellation through designed homogeneous coatings. First, in the small-particle limit, we expand the dipole moment of a coated object in terms of its resonant modes. By zeroing the numerator of the resulting rational function, we accurately predict the permittivity values of the coating layer that abates the total scattered power. Then, we extend the applicability of the method beyond the small-particle limit, deriving the radiation corrections of the scattering-cancellation permittivity within a perturbation approach. Our method permits the design of invisibility cloaks for irregularly shaped devices such as complex sensors and detectors.
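The paper's modal expansion goes beyond this, but its small-particle starting point can be illustrated with the classic quasistatic polarizability of a coated sphere (Bohren & Huffman form); zeroing its numerator gives the scattering-cancellation condition:

```python
def coated_sphere_polarizability(eps1, eps2, eps_m, f):
    # Quasistatic dipole polarizability (in units of 4*pi*b^3) of a
    # coated sphere: core permittivity eps1, shell eps2, host eps_m,
    # f = (a/b)^3 the core volume fraction.
    num = (eps2 - eps_m) * (eps1 + 2*eps2) + f * (eps1 - eps2) * (eps_m + 2*eps2)
    den = (eps2 + 2*eps_m) * (eps1 + 2*eps2) + 2*f * (eps1 - eps2) * (eps2 - eps_m)
    return num / den

def cancelling_fraction(eps1, eps2, eps_m):
    # Core volume fraction f that zeroes the numerator above,
    # i.e. cancels the dipole scattering of the coated sphere.
    return (eps_m - eps2) * (eps1 + 2*eps2) / ((eps1 - eps2) * (eps_m + 2*eps2))
```

For a dielectric core (ε₁ = 3) in vacuum, a low-permittivity shell (ε₂ = 0.5) cancels the dipole at a physically realizable fraction f = 0.4; the radiation corrections derived in the paper shift this quasistatic value for larger particles.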
Single-shot optical sectioning using two-color probes in HiLo fluorescence microscopy.
Muro, Eleonora; Vermeulen, Pierre; Ioannou, Andriani; Skourides, Paris; Dubertret, Benoit; Fragola, Alexandra; Loriette, Vincent
2011-06-08
We describe a wide-field fluorescence microscope setup which combines the HiLo microscopy technique with the use of a two-color fluorescent probe. It allows single-shot fluorescence optical sectioning of thick, moving biological samples, which are illuminated simultaneously with a flat and a structured pattern at two different wavelengths. The homogeneous and structured fluorescence images are spectrally separated at detection and combined as in the HiLo microscopy technique. We present optically sectioned full-field images of Xenopus laevis embryos acquired at a 25 images/s frame rate. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Multigroup cross section library for GFR2400
NASA Astrophysics Data System (ADS)
Čerba, Štefan; Vrban, Branislav; Lüley, Jakub; Haščík, Ján; Nečas, Vladimír
2017-09-01
In this paper the development and optimization of the SBJ_E71 multigroup cross section library for GFR2400 applications is discussed. A cross section processing scheme merging Monte Carlo and deterministic codes was developed. Several fine and coarse group structures and two weighting-flux options were analysed using 18 benchmark experiments selected from the ICSBEP handbook on the basis of similarity assessments. The performance of the collapsed version of the SBJ_E71 library was compared with MCNP5 CE ENDF/B-VII.1 and the Korean KAFAX-E70 library. The comparison was based on integral parameters of calculations performed on full-core homogeneous models.
Radiolabel ratio method for measuring pulmonary clearance of intratracheal bacterial challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaForce, F.M.; Boose, D.S.
Calculation of bacterial clearance is a fundamental step in any study of in situ lung antibacterial defenses. A method is described whereby about 85% of a radiolabeled bacterial inoculum was consistently introduced into the bronchopulmonary tree of a mouse by the intratracheal route. Mice were then killed 1 and 4 hours later; their lungs were removed aseptically and homogenized, and viable bacteria and radiolabel counts were determined. Radiolabel counts fell slowly, and more than 80% of the original radiolabel was still present in homogenized lung samples from animals sacrificed 4 hours after challenge. Bacteria/isotope ratios for the bacterial inoculum and for homogenized lung samples from animals sacrificed immediately after challenge were very similar. Bacterial clearance values were the same whether computed from bacterial counts alone or by the radiolabel ratio method, whereby the change in the bacteria/isotope ratio in ground lung aliquots is divided by the same ratio for the bacteria used to inoculate the animals. Some contamination resulted from oral streptococci being swept into the bronchopulmonary tree during the aspiration process. This contamination was not a problem when penicillin was incorporated into the agar and penicillin-resistant strains were used for the bacterial challenges.
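The ratio method amounts to one line of arithmetic (variable names are illustrative; cpm denotes the radiolabel count):

```python
def clearance_fraction(cfu_lung, cpm_lung, cfu_inoc, cpm_inoc):
    # Radiolabel ratio method: the viable-count/radiolabel ratio of the
    # lung homogenate, normalized by the same ratio for the inoculum,
    # estimates the surviving fraction of deposited bacteria;
    # clearance is its complement.
    surviving = (cfu_lung / cpm_lung) / (cfu_inoc / cpm_inoc)
    return 1.0 - surviving
```

Because most of the label remains in the lung (>80% at 4 hours), dividing by the inoculum ratio corrects for animal-to-animal variation in the deposited dose without requiring an absolute count of delivered bacteria.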
Robinson, Eleanor M; Trumble, Stephen J; Subedi, Bikram; Sanders, Rebel; Usenko, Sascha
2013-12-06
Lipid-rich matrices are often sinks for lipophilic contaminants, such as pesticides, polychlorinated biphenyls (PCBs), and polybrominated diphenyl ethers (PBDEs). Methods for contaminant extraction and cleanup from lipid-rich matrices typically require multiple cleanup steps; however, a selective pressurized liquid extraction (SPLE) technique requiring no additional cleanup has been developed for the simultaneous extraction and cleanup of whale earwax (cerumen; a lipid-rich matrix). Whale earwax accumulates in select whale species over their lifetime to form wax earplugs. The earplug is typically used as an aging tool in cetaceans; the layers, or laminae, that comprise it are thought to be associated with annual or semiannual migration and feeding patterns. Whale earplugs therefore represent a unique matrix capable of recording and archiving a whale's lifetime contaminant profile. This study reports the first analytical method for identifying and quantifying lipophilic persistent organic pollutants (POPs) in a whale earplug, including organochlorine pesticides, PCBs, and PBDEs. The method uses SPLE to extract contaminants from ~0.25 to 0.5 g aliquots of each lamina of a sectioned earplug. The SPLE was optimized for cleanup adsorbents (basic alumina, silica gel, and Florisil®), adsorbent-to-sample ratio, and adsorbent order. In the optimized SPLE method, the earwax homogenate was placed within the extraction cell on top of basic alumina (5 g), silica gel (15 g), and Florisil® (10 g), and the target analytes were extracted from the homogenate using 1:1 (v/v) dichloromethane:hexane. POPs were analyzed using gas chromatography-mass spectrometry with electron capture negative ionization and electron impact ionization. The average percent recoveries for the POPs were 91% (±6% relative standard deviation), while limits of detection and quantification ranged from 0.00057 to 0.96 ng g⁻¹ and 0.0017 to 2.9 ng g⁻¹, respectively. Pesticides, PCBs, and PBDEs were measured in a single blue whale (Balaenoptera musculus) cerumen lamina at concentrations ranging from 0.11 to 150 ng g⁻¹. Copyright © 2013 Elsevier B.V. All rights reserved.
Nano-ceramics and method thereof
Satcher, Joe H., Jr. [Livermore, CA]; Gash, Alex [Livermore, CA]; Simpson, Randall [Livermore, CA]; Landingham, Richard [Livermore, CA]; Reibold, Robert A. [Salida, CA]
2006-08-08
Disclosed herein is a method to produce ceramic materials utilizing the sol-gel process. The methods enable the preparation of intimate homogeneous dispersions of materials while offering the ability to control the size of one component within another. The method also enables the preparation of materials that will densify at reduced temperature.
The generalized scattering coefficient method for plane wave scattering in layered structures
NASA Astrophysics Data System (ADS)
Liu, Yu; Li, Chao; Wang, Huai-Yu; Zhou, Yun-Song
2017-02-01
The generalized scattering coefficient (GSC) method is pedagogically derived and employed to study the scattering of plane waves in homogeneous and inhomogeneous layered structures. The numerical stabilities and accuracies of this method and other commonly used numerical methods are discussed and compared. For homogeneous layered structures, concise scattering formulas with clear physical interpretations and strong numerical stability are obtained by introducing the GSCs. For inhomogeneous layered structures, three numerical methods are employed: the staircase approximation method, the power series expansion method, and the differential equation method based on the GSCs. We investigate the accuracies and convergence behaviors of these methods by comparing their predictions to the exact results. The conclusions are as follows. The staircase approximation method converges slowly in spite of its simple and intuitive implementation, and a fine stratification within the inhomogeneous layer is required to obtain accurate results. The expansion method's results are sensitive to the expansion order, and the treatment becomes very complicated for relatively complex configurations, which restricts its applicability. By contrast, the GSC-based differential equation method has a simple implementation while providing fast and accurate results.
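The abstract above is not accompanied by code; as a point of reference, here is a minimal sketch of the standard characteristic-matrix (transfer-matrix) recursion for normal-incidence plane waves through homogeneous layers, the classic baseline against which GSC-type formulations are compared. All function names and parameter values are illustrative, not from the paper.

```python
# Hedged sketch: characteristic-matrix method for a stack of homogeneous
# layers at normal incidence (lossless, non-magnetic). Not the GSC method
# itself; a common textbook baseline for layered-structure scattering.
import numpy as np

def reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_out=1.0):
    """Intensity reflectance of a stack of homogeneous layers.

    n_layers : refractive index of each layer (top to bottom)
    d_layers : physical thickness of each layer (same units as wavelength)
    """
    k0 = 2 * np.pi / wavelength
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = k0 * n * d  # phase thickness of this layer
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    # Combine stack matrix with the exit medium admittance n_out
    B = M[0, 0] + M[0, 1] * n_out
    C = M[1, 0] + M[1, 1] * n_out
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2
```

For a single quarter-wave layer this reproduces the closed-form result R = ((n_in n_out - n^2) / (n_in n_out + n^2))^2, a convenient sanity check for any layered-structure solver.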
Luebker, Stephen A; Wojtkiewicz, Melinda; Koepsell, Scott A
2015-11-01
Formalin-fixed paraffin-embedded (FFPE) tissue is a rich source of clinically relevant material that can yield important translational biomarker discovery using proteomic analysis. Protocols for analyzing FFPE tissue by LC-MS/MS exist, but standardization of procedures and critical analysis of data quality is limited. This study compared and characterized data obtained from FFPE tissue using two methods: a urea in-solution digestion method (UISD) versus a commercially available Qproteome FFPE Tissue Kit method (Qkit). Each method was performed independently three times on serial sections of homogenous FFPE tissue to minimize pre-analytical variations and analyzed with three technical replicates by LC-MS/MS. Data were evaluated for reproducibility and physiochemical distribution, which highlighted differences in the ability of each method to identify proteins of different molecular weights and isoelectric points. Each method replicate resulted in a significant number of new protein identifications, and both methods identified significantly more proteins using three technical replicates as compared to only two. UISD was cheaper, required less time, and introduced significant protein modifications as compared to the Qkit method, which provided more precise and higher protein yields. These data highlight significant variability among method replicates and type of method used, despite minimizing pre-analytical variability. Utilization of only one method or too few replicates (both method and technical) may limit the subset of proteomic information obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
REPRESENTATIVE SAMPLING AND ANALYSIS OF HETEROGENEOUS SOILS
Standard sampling and analysis methods for hazardous substances in contaminated soils currently are available and routinely employed. Standard methods inherently assume a homogeneous soil matrix and contaminant distribution; therefore only small sample quantities typically are p...
Papliaka, Zoi Eirini; Vaccari, Lisa; Zanini, Franco; Sotiropoulou, Sophia
2015-07-01
Fourier transform infrared (FTIR) imaging in transmission mode, employing a bidimensional focal plane array (FPA) detector, was applied for the detection and spatially resolved chemical characterisation of organic compounds or their degradation products within the stratigraphy of a critical group of fragments, originating from prehistoric and Roman wall paintings, containing a very low concentration of surviving organic matter or its alteration products. Past analyses using attenuated total reflection (ATR) or reflection FTIR on polished cross sections failed to provide evidence of any organic material assignable as the binding medium of the original painting. To improve the method's performance, in the present study a new method of sample preparation in thin section was developed. The procedure is based on the use of cyclododecane (C12H24) as embedding material and a subsequent double-sided polishing of the specimen. This procedure allows samples to be studied in FTIR transmission mode without losing information on the spatial distribution of the detected materials in the paint stratigraphy. For comparison purposes, the same samples were also studied after opening their stratigraphy with a diamond anvil cell. Both preparation techniques offered high-quality chemical imaging of the decay products of an organic substance, giving clues to the painting technique. In addition, the thin sections resulting from the cyclododecane pre-treatment offered more layer-specific data, as the layer thickness and order remained unaffected, whereas the samples compressed within the diamond cell were slightly deformed; however, being thinner and more homogeneous, they provided higher spectral quality in terms of S/N ratio.
In summary, the present study illustrates the appropriateness of FTIR imaging in transmission mode associated with a new thin section preparation strategy to detect and localise very low-concentrated organic matter subjected to deterioration processes, when the application of FTIR in reflection mode or FTIR-ATR fails to give any relevant information.
NASA Astrophysics Data System (ADS)
Maloney, C.; Toon, B.; Bardeen, C.
2017-12-01
Recent studies indicate that heterogeneous nucleation may play a large role in cirrus cloud formation in the UT/LS, a region previously thought to be dominated primarily by homogeneous nucleation. As a result, it is important to ensure that general circulation models properly represent heterogeneous nucleation in ice cloud simulations. Our work addresses this issue in the NSF/DOE Community Earth System Model's atmospheric model, CAM; more specifically, we address the role of heterogeneous nucleation in the coupled sectional microphysics cloud model, CARMA. Currently, our CAM/CARMA cirrus model performs only homogeneous ice nucleation while ignoring heterogeneous nucleation. In this work, we couple the CAM/CARMA cirrus model with the Modal Aerosol Model (MAM). By combining the aerosol model with CAM/CARMA, we can both account for heterogeneous nucleation and directly link the sulfates used for homogeneous nucleation to computed fields instead of the static field currently used. Here we present our initial results and compare our findings to observations from the long-running CALIPSO and MODIS satellite missions.
Benchmarking monthly homogenization algorithms
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. 
Training was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can now perform as well as manual ones.
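Metric (i) above can be illustrated with a short sketch. This is one plausible reading of "centered root mean square error" (each series' mean removed before differencing, so constant offsets do not count); it is not the benchmark's official scoring code, and the function name is hypothetical.

```python
# Hedged sketch of a centered RMSE between a homogenized series and the
# true homogeneous series; illustrative only, not the HOME benchmark code.
import numpy as np

def centered_rmse(estimate, truth):
    """RMSE after removing each series' own mean (offsets don't count)."""
    e = np.asarray(estimate, dtype=float)
    t = np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean(((e - e.mean()) - (t - t.mean())) ** 2)))
```

Under this reading, a homogenized series that differs from the truth only by a constant shift scores a centered RMSE of zero.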
Chen, Zi-Qi; Du, Ming-Ying; Zhao, You-Jin; Huang, Xiao-Qi; Li, Jing; Lui, Su; Hu, Jun-Mei; Sun, Huai-Qiang; Liu, Jia; Kemp, Graham J; Gong, Qi-Yong
2015-11-01
Published meta-analyses of resting-state regional cerebral blood flow (rCBF) studies of major depressive disorder (MDD) have included patients receiving antidepressants, which might affect brain activity and thus bias the results. To our knowledge, no meta-analysis has investigated regional homogeneity changes in medication-free patients with MDD. Moreover, an association between regional homogeneity and rCBF has been demonstrated in some brain regions in healthy controls. We sought to explore to what extent resting-state rCBF and regional homogeneity changes co-occur in the depressed brain without the potential confound of medication. Using the effect-size signed differential mapping method, we conducted 2 meta-analyses of rCBF and regional homogeneity studies of medication-free patients with MDD. Our systematic search identified 14 rCBF studies and 9 regional homogeneity studies. We identified conjoint decreases in resting-state rCBF and regional homogeneity in the insula and superior temporal gyrus in medication-free patients with MDD compared with controls. Other changes included altered resting-state rCBF in the precuneus and in the frontal-limbic-thalamic-striatal neural circuit as well as altered regional homogeneity in the uncus and parahippocampal gyrus. Meta-regression revealed that the percentage of female patients with MDD was negatively associated with resting-state rCBF in the right anterior cingulate cortex and that the age of patients with MDD was negatively associated with rCBF in the left insula and with regional homogeneity in the left uncus. The analysis techniques, patient characteristics and clinical variables of the included studies were heterogeneous. The conjoint alterations of rCBF and regional homogeneity in the insula and superior temporal gyrus may be core neuropathological changes in medication-free patients with MDD and serve as a specific region of interest for further studies on MDD.
Testing the cosmic anisotropy with supernovae data: Hemisphere comparison and dipole fitting
NASA Astrophysics Data System (ADS)
Deng, Hua-Kai; Wei, Hao
2018-06-01
The cosmological principle is one of the cornerstones of modern cosmology. It assumes that the universe is homogeneous and isotropic on cosmic scales. Both the homogeneity and the isotropy of the universe should be tested carefully. In the present work, we are interested in probing a possible preferred direction in the distribution of type Ia supernovae (SNIa). To the best of our knowledge, two main methods have been used in almost all of the relevant works in the literature, namely the hemisphere comparison (HC) method and the dipole fitting (DF) method. However, the results from these two methods do not always approximately coincide with each other. In this work, we test the cosmic anisotropy by using these two methods with the joint light-curve analysis (JLA) and simulated SNIa data sets. In many cases, both methods work well, and their results are consistent with each other. However, in cases with two (or even more) preferred directions, the DF method fails while the HC method still works well. This might shed new light on our understanding of these two methods.
Analysis of messy data with heteroscedastic in mean models
NASA Astrophysics Data System (ADS)
Trianasari, Nurvita; Sumarni, Cucu
2016-02-01
In data analysis we often face data that do not meet some of the standard assumptions; such data are commonly called messy data. The problem arises from outliers that bias estimation. To analyze messy data, there are three approaches: standard analysis, data transformation, and non-standard analysis methods. Simulations were conducted to compare the performance of three test procedures for comparing means when the variances are not homogeneous. Each simulation scenario was replicated 500 times. The mean comparisons were carried out using three methods: the Welch test, mixed models, and the Welch test on ranks (Welch-r). Data generation was performed in R version 3.1.2. Based on the simulation results, all three methods can be used in both the normal and non-normal homoscedastic cases. All three methods perform very well on balanced or unbalanced data when there is no violation of the homogeneity-of-variance assumption. For balanced data, the three methods still showed excellent performance despite violation of the homogeneity assumption, even when the degree of heterogeneity is high, as shown by power above 90 percent, with the best results for the Welch method (98.4%) and the Welch-r method (97.8%). For unbalanced data, the Welch method performs very well in the case of positively paired heterogeneity, with 98.2% power, while the mixed models method performs very well in the case of high negatively paired heterogeneity. The Welch-r method works very well in both cases. However, if the degree of heterogeneity of variance is very high, the power of all methods decreases, especially for the mixed models method. The methods that still work well enough (power above 50%) in the balanced-data case are the Welch-r method (62.6%) and the Welch method (58.6%).
If the data are unbalanced, the Welch-r method works well enough in cases of highly heterogeneous positively or negatively paired variances, with power of 68.8% and 51%, respectively. The Welch method performs well enough only in the case of highly heterogeneous positively paired variances, with a power of 64.8%, while the mixed models method performs well in the case of highly heterogeneous negatively paired variances, with 54.6% power. In general, when variances are not homogeneous, the Welch method applied to ranked data (Welch-r) performs better than the other methods.
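The power-simulation loop described above can be sketched in a few lines. This is a minimal illustration (not the authors' R code, which used R 3.1.2): it estimates the power of Welch's test for unequal group variances by Monte Carlo. SciPy is assumed to be available, and all sample sizes, variances, and effect sizes are illustrative.

```python
# Hedged sketch: Monte Carlo power estimate for Welch's t-test under
# heteroscedasticity (unequal group variances). Parameter values are
# illustrative, not taken from the paper.
import numpy as np
from scipy import stats

def welch_power(n1=20, n2=40, mean_diff=1.5, sd1=1.0, sd2=3.0,
                alpha=0.05, n_sim=500, seed=0):
    """Fraction of simulated data sets in which Welch's t-test rejects H0."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(0.0, sd1, n1)
        y = rng.normal(mean_diff, sd2, n2)
        # equal_var=False selects Welch's t-test (no pooled variance)
        _, p = stats.ttest_ind(x, y, equal_var=False)
        rejections += p < alpha
    return rejections / n_sim
```

Setting mean_diff=0 turns the same loop into an empirical type I error check, which should land near the nominal alpha.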
Tonti, Marco; Salvatore, Sergio
2015-01-01
The problem of the measurement of emotion is a widely debated one. In this article we propose an instrument, the Homogenization of Classification Functions Measure (HOCFUN), designed for assessing the influence of emotional arousal on a rating task consisting of the evaluation of a sequence of images. The instrument defines an indicator (κ) that measures the degree of homogenization of the ratings given over 2 rating scales (pleasant-unpleasant and relevant-irrelevant). Such a degree of homogenization is interpreted as the effect of emotional arousal on thinking and therefore lends itself to be used as a marker of emotional arousal. A preliminary study of validation was implemented. The association of the κ indicator with 3 additional indicators was analyzed. Consistent with the hypotheses, the κ indicator proved to be associated, even if weakly and nonlinearly, with a marker of the homogenization of classification functions derived from a separate rating task and with 2 indirect indicators of emotional activation: the speed of performance on the HOCFUN task and an indicator of mood intensity. Taken as a whole, such results provide initial evidence supporting the HOCFUN construct validity.
Permian paleoclimate data from fluid inclusions in halite
Benison, K.C.; Goldstein, R.H.
1999-01-01
This study has yielded surface water paleotemperatures from primary fluid inclusions in mid-Permian Nippewalla Group halite from western Kansas. A 'cooling nucleation' method is used to generate vapor bubbles in originally all-liquid primary inclusions. Surface water paleotemperatures are then obtained by measuring temperatures of homogenization to liquid. Homogenization temperatures ranged from 21 °C to 50 °C and are consistent along individual fluid inclusion assemblages, indicating that the fluid inclusions have not been altered by thermal reequilibration. Homogenization temperatures show a range of up to 26 °C from base to top of individual cloudy chevron growth bands. Petrographic and fluid inclusion evidence indicates that no significant pressure correction is needed for the homogenization temperature data. We interpret these homogenization temperatures to represent shallow surface water paleotemperatures. The range in temperatures from base to top of single chevron bands may reflect daily temperature variations. These Permian surface water temperatures fall within the same range as some modern evaporative surface waters, suggesting that this Permian environment may have been relatively similar to its modern counterparts. Shallow surface water temperatures in evaporative settings correspond closely to local air temperatures. Therefore, the Permian surface water temperatures determined in this study may be considered proxies for local Permian air temperatures.
Automatic Control of the Concrete Mixture Homogeneity in Cycling Mixers
NASA Astrophysics Data System (ADS)
Tikhonov, Anatoly Fedorovich; Drozdov, Anatoly
2018-03-01
The article describes the factors affecting concrete mixture quality related to the moisture content of aggregates, since the effectiveness of concrete mixture production is largely determined by the availability of quality management tools at all stages of the technological process. It is established that unaccounted-for moisture in aggregates adversely affects the concrete mixture homogeneity and, accordingly, the strength of building structures. A new control method and an automatic control system for concrete mixture homogeneity during the mixing of components are proposed, in which the kneading-and-mixing machinery is operated under automatic, real-time control of homogeneity. Theoretical underpinnings of the control of mixture homogeneity are presented, based on changes in the frequency of vibrodynamic oscillations of the mixer body. The structure of the technical means of the automatic control system for regulating the supply of water is determined by changes in the concrete mixture homogeneity during continuous mixing of components. The following technical means were chosen for implementing automatic control: vibro-acoustic sensors, remote terminal units, electropneumatic control actuators, etc. To characterize the quality of automatic control, a block diagram with transfer functions is given that describes the operation of the system in the transient dynamic mode.
1949-08-24
Results of hardness surveys at Aberdeen Proving Ground and at Watertown Arsenal... Average surface hardness... of surface hardness determinations made at Aberdeen Proving Ground and cross-sectional hardness surveys made at Watertown Arsenal are listed in Table... Against 57 mm and 90 mm Armor-Piercing Ammunition. At the request of this Arsenal, Aberdeen Proving Ground provided 80 x 120" sections cut from a
Regolith irradiation stratigraphy at the Apollo 16 and 17 landing sites
NASA Technical Reports Server (NTRS)
Crozaz, G.
1978-01-01
Additional fossil track measurements in the Apollo 17 deep drill stem, as well as detailed track studies in section 3 of the Apollo 16 deep drill core are reported. Although the upper part of the Apollo 17 core seems to have accreted rapidly, no evidence for a rapid accretion of the lower part, as postulated by some authors, is found. Despite the apparent inhomogeneity of section 60003, its track record is unexpectedly homogeneous; all levels are heavily irradiated and emplacement of big slabs of material is not favored.
3-D Forward modeling of Induced Polarization Effects of Transient Electromagnetic Method
NASA Astrophysics Data System (ADS)
Wu, Y.; Ji, Y.; Guan, S.; Li, D.; Wang, A.
2017-12-01
In transient electromagnetic (TEM) detection, induced polarization (IP) effects are important enough that they cannot be ignored. The authors simulate three-dimensional (3-D) induced polarization effects directly in the time domain by applying the finite-difference time-domain (FDTD) method based on the Cole-Cole model. Because of the frequency dispersion of the electrical conductivity, the convolution computations in the generalized Ohm's law of the fractional-order system make the forward modeling particularly complicated. Firstly, we propose a method to approximate the fractional-order function of the Cole-Cole model by a lower-order rational transfer function, based on error minimization in the frequency domain. Here, two auxiliary variables are introduced to transform the nonlinear least-squares fitting problem of the fractional-order system into a linear programming problem, thus avoiding the need to solve a system of equations and nonlinear problems. Secondly, the time-domain expression of the Cole-Cole model is obtained by the inverse Laplace transform. Then, for the calculation of Ohm's law, we propose an exponential auxiliary equation for the conductivity to transform the convolution into a non-convolution integral; here, the trapezoid rule is applied to compute the integral. We then substitute the recursion equation into Maxwell's equations to derive the iterative equations of the electromagnetic field using the FDTD method. Finally, we complete the simulation of the 3-D model and evaluate the polarization parameters. The results are compared with those obtained from the digital filtering solution of the analytical equation in a homogeneous half-space, as well as with 3-D model results from the auxiliary ordinary differential equation (ADE) method. Good agreement is obtained across the three methods.
In terms of the 3-D model, the proposed method has higher efficiency and lower memory requirements: execution time and memory usage were reduced by 20% compared with the ADE method.
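The frequency-domain dispersion model at the heart of the scheme above can be written down compactly. The following is a minimal sketch of the Pelton-form Cole-Cole complex resistivity; the parameter values (chargeability, time constant, frequency exponent) are illustrative, and this is not the authors' FDTD or fitting code.

```python
# Hedged sketch: Pelton Cole-Cole complex resistivity,
#   rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (i*w*tau)**c)))
# with illustrative parameters: DC resistivity rho0, chargeability m,
# time constant tau, and frequency exponent c.
def cole_cole_resistivity(omega, rho0=100.0, m=0.3, tau=1e-3, c=0.5):
    """Complex resistivity at angular frequency omega (rad/s)."""
    iwt = (1j * omega * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))
```

Two limits make a quick sanity check: at zero frequency the model returns rho0, and at very high frequency its magnitude approaches rho0*(1 - m), which is the origin of the fractional-order behavior that the rational-transfer-function approximation targets.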
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, Steven C.; Williamson, Chatt C.; Doughty, David C.
This paper uses a mathematical model of fluorescent biological particles composed of bacteria and/or proteins (mostly as in Hill et al., 2013 [23]) to investigate the size dependence of the total fluorescence emitted in all directions. The model applies to particles with negligible reabsorption of fluorescence within the particle. The specific particles modeled here are composed of ovalbumin and of a generic Bacillus. The particles need not be spherical, and in some cases need not be homogeneous; however, the results calculated in this paper are for spherical homogeneous particles. Light-absorbing and fluorescing molecules included in the model are amino acids, nucleic acids, and several coenzymes. Here the excitation wavelength is 266 nm. The emission range, 300 to 370 nm, encompasses the fluorescence of tryptophan. The fluorescence cross section (C_F) is calculated and compared with one set of published measured values. We investigate power-law (A d^y) approximations to C_F, where d is diameter and A and y are parameters adjusted to fit the data, and examine how y varies with d and composition, including the water fraction. The particle's fluorescence efficiency (Q_F = C_F / geometric cross section) can be written for homogeneous particles as Q_abs R_F, where Q_abs is the absorption efficiency and R_F, the fraction of the absorbed light emitted as fluorescence, is independent of size and shape. When Q_F is plotted vs. m_i d or m_i (m_r - 1) d, where m = m_r + i m_i is the complex refractive index, the plots for different fractions of water in the particle tend to overlap.
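The power-law approximation C_F ≈ A d^y can be fit with a log-log linear least-squares regression. The sketch below uses synthetic data (not the paper's computed cross sections) purely to show the fitting step; the function name is hypothetical.

```python
# Hedged sketch: fitting the power law C_F = A * d**y by linear least
# squares in log-log space. The data here are synthetic, not the paper's.
import numpy as np

def fit_power_law(d, cf):
    """Return (A, y) minimizing least squares of log(cf) = log(A) + y*log(d)."""
    y_exp, log_a = np.polyfit(np.log(d), np.log(cf), 1)
    return np.exp(log_a), y_exp
```

Because the fit is linear in log space, data generated exactly from A d^y are recovered exactly, which makes this easy to verify before applying it to modeled cross sections.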
Grams, Michael P; Fong de Los Santos, Luis E; Antolak, John A; Brinkmann, Debra H; Clarke, Michelle J; Park, Sean S; Olivier, Kenneth R; Whitaker, Thomas J
2016-01-01
To assess the accuracy of the Eclipse Analytical Anisotropic Algorithm when calculating dose for spine stereotactic body radiation therapy treatments involving surgically implanted titanium hardware, a human spine was removed from a cadaver, cut sagittally along the midline, and then separated into thoracic and lumbar sections. The thoracic section was implanted with titanium stabilization hardware; the lumbar section was not implanted. Spine sections were secured in a water phantom and simulated for treatment planning using both standard and extended computed tomography (CT) scales. Target volumes were created on both spine sections. Dose calculations were performed using (1) the standard CT scale with relative electron density (RED) override of image artifacts and hardware, (2) the extended CT scale with RED override of image artifacts only, and (3) the standard CT scale with no RED overrides for hardware or artifacts. Plans were delivered with volumetric modulated arc therapy using a 6-MV beam with and without a flattening filter. A total of 3 measurements for each plan were made with Gafchromic film placed between the spine sections and compared with Eclipse dose calculations using gamma analysis with a 2%/2 mm passing criterion. A single measurement in a homogeneous phantom was made for each plan before actual delivery. Gamma passing rates for measurements in the homogeneous phantom were 99.6% or greater. Passing rates for measurements made in the lumbar spine section without hardware were 99.3% or greater; measurements made in the thoracic spine containing titanium were 98.6 to 99.5%. The Eclipse Analytical Anisotropic Algorithm can adequately model the effects of titanium implants for spine stereotactic body radiation therapy treatments using volumetric modulated arc therapy. Calculations with standard or extended CT scales give similarly accurate results. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
A Simple MO Treatment of Metal Clusters.
ERIC Educational Resources Information Center
Sahyun, M. R. V.
1980-01-01
Illustrates how a qualitative description of the geometry and electronic characteristics of homogeneous metal clusters can be obtained using semiempirical MO (molecular orbital theory) methods. Computer applications of MO methods to inorganic systems are also described. (CS)
NASA Astrophysics Data System (ADS)
He, Yu; Shen, Yuecheng; Feng, Xiaohua; Liu, Changjun; Wang, Lihong V.
2017-08-01
A circularly polarized antenna, providing more homogeneous illumination than a linearly polarized antenna, is more suitable for microwave-induced thermoacoustic tomography (TAT). Circular polarization is conventionally realized with a helical antenna, but that approach suffers from low efficiency, low power capacity, and limited aperture in TAT systems. Here, we report an implementation of circularly polarized illumination in TAT obtained by inserting a single-layer linear-to-circular polarizer based on frequency selective surfaces between a pyramidal horn antenna and the imaging object. The performance of the proposed method was validated by both simulations and experimental imaging of a breast tumor phantom. The results showed that circular polarization was achieved and that the resulting thermoacoustic signal-to-noise ratio was twice that obtained with the helical antenna. The proposed method is more desirable in a waveguide-based TAT system than the conventional method.
NASA Astrophysics Data System (ADS)
Guan, Qing-Qing; Zhou, Hua-Jing; Ning, Ping; Lian, Pei-Chao; Wang, Bo; He, Liang; Chai, Xin-Sheng
2018-05-01
We have developed an easy and efficient method for exfoliating few-layer sheets of black phosphorus (BP) in N-methyl-2-pyrrolidone using ultra-high-pressure homogenization (UPH). The BP was first exfoliated into sheets a few atomic layers thick using a homogenizer for only 30 min. Next, a double centrifugation procedure was used to separate the material into few-layer nanosheets that were examined by X-ray diffraction, atomic force microscopy (AFM), transmission electron microscopy (TEM), high-angle annular dark field scanning transmission electron microscopy (HAADF-STEM), and energy-dispersive X-ray (EDX) spectroscopy. The results show that the products are specimens of phosphorene only a few layers thick.
Some variance reduction methods for numerical stochastic homogenization
Blanc, X.; Le Bris, C.; Legoll, F.
2016-01-01
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
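The corrector problems discussed above are stochastic PDEs, but the variance reduction idea itself can be illustrated on a toy Monte Carlo average. The sketch below shows antithetic variates, one of the classic techniques borrowed from engineering contexts that the paper surveys; the integrand, sample sizes, and function names are illustrative, not from the paper.

```python
# Hedged sketch: antithetic variates on a toy problem. Each uniform draw U
# is paired with 1-U; averaging f(U) and f(1-U) cancels part of the noise
# when f is monotone, reducing the variance of the Monte Carlo estimator.
import numpy as np

def mc_plain(f, n, rng):
    """Plain Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1)."""
    u = rng.random(n)
    return f(u).mean()

def mc_antithetic(f, n, rng):
    """Antithetic estimate using n total evaluations (n // 2 pairs)."""
    u = rng.random(n // 2)
    return 0.5 * (f(u) + f(1.0 - u)).mean()
```

For f(u) = exp(u) the exact mean is e - 1, so both estimators can be checked against a known answer; the antithetic one typically lands much closer for the same budget.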
Bayesian analysis of non-homogeneous Markov chains: application to mental health data.
Sung, Minje; Soyer, Refik; Nhan, Nguyen
2007-07-10
In this paper we present a formal treatment of non-homogeneous Markov chains by introducing a hierarchical Bayesian framework. Our work is motivated by the analysis of correlated categorical data which arise in the assessment of psychiatric treatment programs. In our development, we introduce a Markovian structure to describe the non-homogeneity of transition patterns. In doing so, we introduce a logistic regression set-up for Markov chains and incorporate covariates in our model. We present a Bayesian model using Markov chain Monte Carlo methods and develop inference procedures to address issues encountered in the analysis of data from psychiatric treatment programs. Our model and inference procedures are applied to real data from a psychiatric treatment study. Copyright 2006 John Wiley & Sons, Ltd.
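The logistic set-up described above can be illustrated with a minimal forward simulation: a two-state chain whose transition probability varies over time through a logistic link. This is a hedged sketch, not the authors' hierarchical Bayesian model or their data; the coefficients and function names are hypothetical.

```python
# Hedged sketch: simulating a two-state non-homogeneous Markov chain.
# The probability of switching state at time t follows a logistic
# regression in t, so the transition pattern changes over time.
import math
import random

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate_chain(T, beta0=-1.0, beta1=0.05, seed=0):
    """Return a length-T path of states in {0, 1}.

    P(switch at time t) = logistic(beta0 + beta1 * t), so transitions
    become more likely as t grows (for beta1 > 0).
    """
    rng = random.Random(seed)
    state, path = 0, [0]
    for t in range(1, T):
        if rng.random() < logistic(beta0 + beta1 * t):
            state = 1 - state
        path.append(state)
    return path
```

Replacing t with subject-level covariates in the linear predictor gives the covariate-dependent transition probabilities that the paper's regression set-up formalizes.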
Homogenization theory for designing graded viscoelastic sonic crystals
NASA Astrophysics Data System (ADS)
Qu, Zhao-Liang; Ren, Chun-Yu; Pei, Yong-Mao; Fang, Dai-Ning
2015-02-01
In this paper, we propose a homogenization theory for designing graded viscoelastic sonic crystals (VSCs) which consist of periodic arrays of elastic scatterers embedded in a viscoelastic host material. We extend an elastic homogenization theory to VSC by using the elastic-viscoelastic correspondence principle and propose an analytical effective loss factor of VSC. The results of VSC and the equivalent structure calculated by using the finite element method are in good agreement. According to the relation of the effective loss factor to the filling fraction, a graded VSC plate is easily and quickly designed. Then, the graded VSC may have potential applications in the vibration absorption and noise reduction fields. Project supported by the National Basic Research Program of China (Grant No. 2011CB610301).
NASA Astrophysics Data System (ADS)
Capdeville, Yann; Métivier, Ludovic
2018-05-01
Seismic imaging is an efficient tool to investigate the Earth's interior. Many of the different imaging techniques currently used, including the so-called full waveform inversion (FWI), are based on limited frequency band data. Such data are not sensitive to the true Earth model, but to a smooth version of it. This smooth version can be related to the true model by the homogenization technique. Homogenization for wave propagation in deterministic media with no scale separation, such as geological media, has been recently developed. With such an asymptotic theory, it is possible to compute an effective medium valid for a given frequency band such that effective waveforms and true waveforms are the same up to a controlled error. In this work we make the link between limited frequency band inversion, mainly FWI, and homogenization. We establish the relation between a true model and an FWI result model. This relation is important for a proper interpretation of FWI images. We numerically illustrate, in the 2-D case, that an FWI result is at best the homogenized version of the true model. Moreover, it appears that the homogenized FWI model is quite independent of the FWI parametrization, as long as it has enough degrees of freedom. In particular, inverting for the full elastic tensor is, in each of our tests, always a good choice. We show how the homogenization can help to understand FWI behaviour and help to improve its robustness and convergence by efficiently constraining the solution space of the inverse problem.
Washburn, Kathryn E.; Birdwell, Justin E.; Foster, Michael; Gutierrez, Fernando
2015-01-01
Mineralogical and geochemical information on reservoir and source rocks is necessary to assess and produce from petroleum systems. The standard methods in the petroleum industry for obtaining these properties are bulk measurements on homogenized, generally crushed, and pulverized rock samples and can take from hours to days to perform. New methods using Fourier transform infrared (FTIR) spectroscopy have been developed to more rapidly obtain information on mineralogy and geochemistry. However, these methods are also typically performed on bulk, homogenized samples. We present a new approach to rock sample characterization incorporating multivariate analysis and FTIR microscopy to provide non-destructive, spatially resolved mineralogy and geochemistry on whole rock samples. We are able to predict bulk mineralogy and organic carbon content within the same margin of error as standard characterization techniques, including X-ray diffraction (XRD) and total organic carbon (TOC) analysis. Validation of the method was performed using two oil shale samples from the Green River Formation in the Piceance Basin with differing sedimentary structures. One sample represents laminated Green River oil shales, and the other is representative of oil shale breccia. The FTIR microscopy results on the oil shales agree with XRD and LECO TOC data from the homogenized samples but also give additional detail regarding sample heterogeneity by providing information on the distribution of mineral phases and organic content. While measurements for this study were performed on oil shales, the method could also be applied to other geological samples, such as other mudrocks, complex carbonates, and soils.
Pradhan, A S; Quast, U; Sharma, P K
1994-09-01
A simple and fast, but sensitive TLD method for the measurement of energy and homogeneity of therapeutically used electron beams has been developed and tested. This method is based on the fact that when small thicknesses of high-Z absorbers such as lead are interposed in high-energy electron beams, the transmitted radiation increases with the energy of the electron beam. Consequently, the ratio of readouts of TLDs held on the two sides of a lead plate varied sharply (by a factor of 70) with a change in energy of the electron beam from 5 MeV to 18 MeV, offering a very sensitive method for the measurement of the energy of electron beams. By using the ratio of TL readouts of two types of TLD ribbons with widely different sensitivities, LiF TLD-700 ribbons on the upstream side and highly sensitive CaF2:Dy TLD-200 ribbons on the downstream side, an electron energy discrimination of better than +/- 0.1 MeV could be achieved. The homogeneity of the electron beam energy and the absorbed dose was measured by using a jig in which the TLDs were held in the desired array on both sides of a 4 mm thick lead plate. The method takes minimal beam time and makes it possible to carry out measurements for the audit of the quality of electron beams as well as for intercomparison of beams by mail.
Optimization study on the magnetic field of superconducting Halbach Array magnet
NASA Astrophysics Data System (ADS)
Shen, Boyang; Geng, Jianzhao; Li, Chao; Zhang, Xiuchang; Fu, Lin; Zhang, Heng; Ma, Jun; Coombs, T. A.
2017-07-01
This paper presents the optimization of the strength and homogeneity of the magnetic field from a superconducting Halbach Array magnet. A conventional Halbach Array uses a special arrangement of permanent magnets to generate a homogeneous magnetic field. A superconducting Halbach Array instead uses High Temperature Superconductor (HTS) coils, operated below their critical temperature, to construct an electromagnet that performs equivalently to the permanent-magnet-based Halbach Array. The simulations of the superconducting Halbach Array were carried out using the H-formulation, based on a B-dependent critical current density and a bulk approximation, on the FEM platform COMSOL Multiphysics. The optimization focused on the location, geometry, and number of coils, while keeping the total amount of superconductor fixed. Results show that a Halbach-Array-based superconducting magnet can generate a magnetic field with intensity over 1 Tesla and improved homogeneity when properly optimized. A mathematical relation between these optimization parameters and the intensity and homogeneity of the magnetic field was developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lytkina, D. N., E-mail: darya-lytkina@yandex.ru; Shapovalova, Y. G., E-mail: elena.shapovalova@ro.ru; Rasskazova, L. A., E-mail: ly-2207@mail.ru
The relevance of this work is due to the need for new materials to be used in medicine (orthopedics, surgery, dentistry, and others) as substitutes for natural bone tissue in injuries, fractures, etc. The aim of the presented work is to develop a method of producing biocompatible materials based on polyesters of hydroxycarboxylic acids and calcium phosphate ceramic (hydroxyapatite, HA) with a homogeneous distribution of the inorganic component. Bioactive composites based on poly-L-lactide (PL) and hydroxyapatite with homogeneous distribution were prepared. The results of scanning electron microscopy confirm the homogeneous distribution of the inorganic filler in the polymer matrix. A positive effect of ultrasound on the homogeneity of the composites was determined. The rate of hydrolysis of the composites was evaluated: the rate of hydrolysis of polylactide as an individual substance is 7 times lower than that of polylactide as part of the composite. It was found that the HA-containing composite materials do not cause a negative response in the cells of the immune system, while contributing to anti-inflammatory cytokines released by cells.
2014-01-01
Abstract Pitfall traps were used to sample Carabidae in agricultural land of the Spercheios valley, Fthiotida, Central Greece. Four pairs of cultivated fields were sampled. One field of each pair was located in a heterogeneous area and the other in a more homogeneous area. Heterogeneous areas were composed of small fields. They had high percentages of non-cropped habitats and a high diversity of land use types. Homogeneous areas were composed of larger fields. They had lower percentages of non-cropped habitats and a lower diversity of land use types. One pair of fields had been planted with cotton, one with maize, one with olives and one with wheat. Altogether 28 carabid species were recorded. This paper describes the study areas, the sampling methods used and presents the data collected during the study. Neither heterogeneous nor homogeneous areas had consistently higher abundance levels, activity density levels, species richness levels or diversity levels. However, significant differences were seen in some of the comparisons between heterogeneous and homogeneous areas. PMID:24891833
Homogenization of a Directed Dispersal Model for Animal Movement in a Heterogeneous Environment.
Yurk, Brian P
2016-10-01
The dispersal patterns of animals moving through heterogeneous environments have important ecological and epidemiological consequences. In this work, we apply the method of homogenization to analyze an advection-diffusion (AD) model of directed movement in a one-dimensional environment in which the scale of the heterogeneity is small relative to the spatial scale of interest. We show that the large (slow) scale behavior is described by a constant-coefficient diffusion equation under certain assumptions about the fast-scale advection velocity, and we determine a formula for the slow-scale diffusion coefficient in terms of the fast-scale parameters. We extend the homogenization result to predict invasion speeds for an advection-diffusion-reaction (ADR) model with directed dispersal. For periodic environments, the homogenization approximation of the solution of the AD model compares favorably with numerical simulations. Invasion speed approximations for the ADR model also compare favorably with numerical simulations when the spatial period is sufficiently small.
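In the diffusion-only special case, the slow-scale coefficient referred to above reduces to the classical 1-D homogenization result, the harmonic mean of the fast-scale coefficient over one period; a small sketch with an assumed two-phase environment:

```python
import numpy as np

def effective_diffusion(d_fast):
    """Classical 1-D homogenization result for a rapidly oscillating
    diffusion coefficient d(x/eps): the slow-scale coefficient is the
    harmonic mean of the fast-scale values over one period. (The paper's
    formula also involves the fast-scale advection; this is the
    diffusion-only special case.)"""
    return 1.0 / np.mean(1.0 / d_fast)

# Assumed two-phase periodic environment: half slow (D = 0.2), half fast (D = 2.0).
d = np.where(np.arange(1000) % 100 < 50, 0.2, 2.0)

d_hom = effective_diffusion(d)
print(d_hom)  # harmonic mean ~0.364, well below the arithmetic mean 1.1
```

The harmonic mean is dominated by the slow patches, which is why small-scale heterogeneity can depress large-scale dispersal far below what the arithmetic average of the local coefficients would suggest.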
Yurk, Brian P
2018-07-01
Animal movement behaviors vary spatially in response to environmental heterogeneity. An important problem in spatial ecology is to determine how large-scale population growth and dispersal patterns emerge within highly variable landscapes. We apply the method of homogenization to study the large-scale behavior of a reaction-diffusion-advection model of population growth and dispersal. Our model includes small-scale variation in the directed and random components of movement and growth rates, as well as large-scale drift. Using the homogenized model we derive simple approximate formulas for persistence conditions and asymptotic invasion speeds, which are interpreted in terms of residence index. The homogenization results show good agreement with numerical solutions for environments with a high degree of fragmentation, both with and without periodicity at the fast scale. The simplicity of the formulas, and their connection to residence index make them appealing for studying the large-scale effects of a variety of small-scale movement behaviors.
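The asymptotic invasion speeds mentioned above take, in the classical Fisher-KPP setting, the form c = 2*sqrt(rD) evaluated with homogenized coefficients; a tiny sketch with made-up coefficient values (this is the textbook formula, not the paper's refined residence-index expressions):

```python
import math

def invasion_speed(r_eff, d_eff):
    """Classical Fisher-KPP asymptotic spreading speed, c = 2*sqrt(r*D),
    evaluated with homogenized (slow-scale) growth rate and diffusion
    coefficient."""
    return 2.0 * math.sqrt(r_eff * d_eff)

# Hypothetical homogenized values: growth rate 0.5 /yr, diffusion 0.36 km^2/yr.
print(invasion_speed(0.5, 0.36))  # ~0.85 km/yr
```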
Substrate specificity and pH dependence of homogeneous wheat germ acid phosphatase.
Van Etten, R L; Waymack, P P
1991-08-01
The broad substrate specificity of a homogeneous isoenzyme of wheat germ acid phosphatase (WGAP) was extensively investigated by chromatographic, electrophoretic, NMR, and kinetic procedures. WGAP exhibited no divalent metal ion requirement and was unaffected upon incubation with EDTA or o-phenanthroline. A comparison of two catalytically homogeneous isoenzymes revealed little difference in substrate specificity. The specificity of WGAP was established by determining the Michaelis constants for a wide variety of substrates. p-Nitrophenyl phosphate, pyrophosphate, tripolyphosphate, and ATP were preferred substrates while lesser activities were seen toward sugar phosphates, trimetaphosphate, phosphoproteins, and (much less) phosphodiesters. An extensive table of Km and Vmax values is given. The pathway for the hydrolysis of trimetaphosphate was examined by colorimetric and 31P NMR methods and it was found that linear tripolyphosphate is not a free intermediate in the enzymatic reaction. In contrast to literature reports, homogeneous wheat germ acid phosphatase exhibits no measurable carboxylesterase activity, nor does it hydrolyze phenyl phosphonothioate esters or phytic acid at significant rates.
NASA Astrophysics Data System (ADS)
Wu, Zhisheng; Tao, Ou; Cheng, Wei; Yu, Lu; Shi, Xinyuan; Qiao, Yanjiang
2012-02-01
This study demonstrated that near-infrared chemical imaging (NIR-CI) is a promising technology for visualizing the spatial distribution and homogeneity of Compound Liquorice Tablets. The starch distribution (and, indirectly, that of the plant extract) could be spatially determined using the basic analysis of correlation between analytes (BACRA) method; the correlation coefficients between the starch spectrum and the spectrum of each sample were greater than 0.95. Based on the accurate determination of the starch distribution, a method to assess homogeneity of distribution was proposed using histograms. The result demonstrated that the starch distribution in sample 3 was relatively heterogeneous according to four statistical parameters. Furthermore, the agglomerate domains in each tablet were detected using score image layers from principal component analysis (PCA). Finally, a novel method named Standard Deviation of Macropixel Texture (SDMT) was introduced to detect agglomerates and heterogeneity from a binary image. Each binary image was divided into macropixels of different side lengths, and the number of zero values in each macropixel was counted to calculate a standard deviation. A curve was then fitted to the relationship between the standard deviation and the macropixel side length. The result demonstrated inter-tablet heterogeneity of both the starch and total-compound distributions; at the same time, the similarity of the starch distribution and the inconsistency of the total-compound distribution within tablets were indicated by the slope and intercept of the fitted curve.
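A simplified reading of the SDMT computation described above can be sketched as follows (the tile sizes and test images are assumptions for illustration; the paper's exact normalization may differ):

```python
import numpy as np

def sdmt(binary_img, size):
    """Standard Deviation of Macropixel Texture, in simplified form:
    tile the binary image into size x size macropixels, count the
    zero-valued pixels in each tile, and return the standard deviation
    of those counts across tiles."""
    h, w = binary_img.shape
    h, w = h - h % size, w - w % size  # drop ragged edges
    tiles = (binary_img[:h, :w]
             .reshape(h // size, size, w // size, size)
             .swapaxes(1, 2))
    zero_counts = (tiles == 0).sum(axis=(2, 3))
    return float(zero_counts.std())

rng = np.random.default_rng(1)
uniform = (rng.random((64, 64)) > 0.5).astype(int)   # well-mixed component
clumped = np.zeros((64, 64), dtype=int)
clumped[:32, :] = 1                                  # one large agglomerate

for size in (8, 16):
    print(size, round(sdmt(uniform, size), 2), round(sdmt(clumped, size), 2))
```

The clumped image yields a much larger standard deviation than the well-mixed one at every macropixel size, which is the signal the SDMT-versus-size curve exploits to flag agglomerates and heterogeneity.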
Trucksess, Mary W; Brewer, Vickery A; Williams, Kristina M; Westphal, Carmen D; Heeres, James T
2004-01-01
Peanuts are one of the 8 most common allergenic foods and a large proportion of peanut-allergic individuals have severe reactions, some to minimal exposure. Specific protein constituents in the peanuts are the cause of the allergic reactions in sensitized individuals who ingest the peanuts. To avoid accidental ingestion of peanut-contaminated food, methods of analysis for the determination of the allergenic proteins in foods are important tools. Such methods could help identify foods inadvertently contaminated with peanuts, thereby reducing the incidence of allergic reactions to peanuts. Commercial immunoassay kits are available, but their method performance needs study, which requires reference materials for within- and between-laboratory validations. In this study, National Institute of Standards and Technology Standard Reference Material 2387 peanut butter was used. A Polytron homogenizer was used to prepare a homogeneous aqueous peanut butter suspension for evaluating the method performance of some commercially available immunoassay kits, such as Veratox for Peanut Allergen Test (Neogen Corp.), Ridascreen Peanut (R-Biopharm GmbH), and the Bio-Kit Peanut Protein Assay Kit (Tepnel). Each gram of the aqueous peanut butter suspension contained 20 mg carboxymethylcellulose sodium salt, 643 microg peanut, 0.5 mg thimerosal, and 2.5 mg bovine serum albumin. The suspension was homogeneous, stable, reproducible, and applicable for adding to ice cream, cookies, breakfast cereals, and chocolate for recovery studies at spike levels ranging from 12 to 90 microg/g.
Eggenreich, Britta; Rajamanickam, Vignesh; Wurm, David Johannes; Fricke, Jens; Herwig, Christoph; Spadiut, Oliver
2017-08-01
Cell disruption is a key unit operation to make valuable, intracellular target products accessible for further downstream unit operations. Independent of the applied cell disruption method, each cell disruption process must be evaluated with respect to disruption efficiency and potential product loss. Current state-of-the-art methods, like measuring the total amount of released protein and plating-out assays, are usually time-delayed and involve manual intervention making them error-prone. An automated method to monitor cell disruption efficiency at-line is not available to date. In the current study we implemented a methodology, which we had originally developed to monitor E. coli cell integrity during bioreactor cultivations, to automatically monitor and evaluate cell disruption of a recombinant E. coli strain by high-pressure homogenization. We compared our tool with a library of state-of-the-art methods, analyzed the effect of freezing the biomass before high-pressure homogenization and finally investigated this unit operation in more detail by a multivariate approach. A combination of HPLC and automated data analysis describes a valuable, novel tool to monitor and evaluate cell disruption processes. Our methodology, which can be used both in upstream (USP) and downstream processing (DSP), describes a valuable tool to evaluate cell disruption processes as it can be implemented at-line, gives results within minutes after sampling and does not need manual intervention.
Nuclear Forensics Applications of Principal Component Analysis on Micro X-ray Fluorescence Images
analysis on quantified micro X-ray fluorescence intensity values. This method is then applied to address goals of nuclear forensics. The first...researchers in the development and validation of nuclear forensics methods. A method for determining material homogeneity is developed and demonstrated
Control of the surface quality parameters of machine components during static pulsed treatment
NASA Astrophysics Data System (ADS)
Komkov, V. A.; Rabinskii, L. N.; Kokoreva, O. G.; Kuprikov, N. M.
2016-12-01
A technique is developed to determine the homogeneity of the structure in a surface layer subjected to strain hardening. Static pulsed treatment is found to be one of the most effective surface plastic deformation methods that can be used to control the uniformity of hardening a surface layer. This treatment makes it possible to create a hardened surface layer to a depth of 10 mm with a homogeneous or heterogeneous structure.
Esaulenko, E E; Khil'chuk, M A; Bykov, I M
2013-01-01
The results of a study of the activity of digestive proteases (pepsin, trypsin, chymotrypsin) in homogenates of the stomach, pancreas and duodenum of experimental animals are presented. Rats were exposed to intoxication with carbon tetrachloride (subcutaneous administration of a 50% oil solution of CCl4 at a dose of 0.5 ml per 100 g body weight) for three days and were then given the analysed oils (black nut, walnut and flax oil) intragastrically by gavage at a dose of 0.2 ml per day for 23 days. The pepsin level in gastric mucosa homogenates and chymotrypsin activity in pancreatic homogenates were determined by the method of N.P. Pyatnitskiy, based on the ability of the enzymes to coagulate a dairy-acetate mixture at 25 degrees C and 35 degrees C, respectively. Trypsin activity in pancreatic homogenates was determined colorimetrically by the Erlanger-Shaternikova method. It was established that intoxication with CCl4 decreased the synthesis of proteolytic enzymes of the stomach (by 51%) and pancreas (by 70-78%). Administration of the analysed vegetable oils to the animals contributed to the normalization of proteolytic enzyme synthesis. It is concluded that the analysed vegetable oils, which contain large quantities of polyunsaturated fatty acids (omega-3 and omega-6), are promising for the correction of the detected biochemical abnormalities.
Stability of cosmetic emulsion containing different amount of hemp oil.
Kowalska, M; Ziomek, M; Żbikowska, A
2015-08-01
The aim of the study was to determine the optimal conditions, that is the content of hemp oil and time of homogenization to obtain stable dispersion systems. For this purpose, six emulsions were prepared, their stability was examined empirically and the most correctly formulated emulsion composition was determined using a computer simulation. Variable parameters (oil content and homogenization time) were indicated by the optimization software based on Kleeman's method. Physical properties of the synthesized emulsions were studied by numerous techniques involving particle size analysis, optical microscopy, Turbiscan test and viscosity of emulsions. The emulsion containing 50 g of oil and being homogenized for 6 min had the highest stability. Empirically determined parameters proved to be consistent with the results obtained using the computer software. The computer simulation showed that the most stable emulsion should contain from 30 to 50 g of oil and should be homogenized for 2.5-6 min. The computer software based on Kleeman's method proved to be useful for quick optimization of the composition and production parameters of stable emulsion systems. Moreover, obtaining an emulsion system with proper stability justifies further research extended with sensory analysis, which will allow the application of such systems (containing hemp oil, beneficial for skin) in the cosmetic industry. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Development and Analysis of Models for Handling the Refrigerated Containerized Cargoes
NASA Astrophysics Data System (ADS)
Nyrkov, A.; Pavlova, L.; Nikiforov, V.; Sokolov, S.; Budnik, V.
2017-07-01
This paper considers an open multi-channel queuing system that receives an irregular flow of homogeneous or heterogeneous requests with unlimited waiting time. The system is regarded as a model of a container terminal having conditionally functional sections with a certain duty cycle, which receives an irregular, non-uniform flow of vessels with a resultant intensity.
Supported Dendrimer-Encapsulated Metal Clusters: Toward Heterogenizing Homogeneous Catalysts
Ye, Rong; Zhukhovitskiy, Aleksandr V.; Deraedt, Christophe V.; ...
2017-07-13
Recyclable catalysts, especially those that display selective reactivity, are vital for the development of sustainable chemical processes. Among available catalyst platforms, heterogeneous catalysts are particularly well-disposed toward separation from the reaction mixture via filtration methods, which renders them readily recyclable. Furthermore, heterogeneous catalysts offer numerous handles, some without homogeneous analogues, for performance and selectivity optimization. These handles include nanoparticle size, pore profile of porous supports, surface ligands and interface with oxide supports, and flow rate through a solid catalyst bed. Despite these available handles, however, conventional heterogeneous catalysts are themselves often structurally heterogeneous compared to homogeneous catalysts, which complicates efforts to optimize and expand the scope of their reactivity and selectivity. Ongoing efforts in our laboratories aim to address the above challenge by heterogenizing homogeneous catalysts, which can be defined as the modification of homogeneous catalysts to render them in a separable (solid) phase from the starting materials and products. Specifically, we grow small nanoclusters in dendrimers, a class of uniform polymers with the connectivity of fractal trees and generally radial symmetry. Thanks to their dense multivalency, shape persistence, and structural uniformity, dendrimers have proven to be versatile scaffolds for the synthesis and stabilization of small nanoclusters. These dendrimer-encapsulated metal clusters (DEMCs) are then adsorbed onto mesoporous silica. Through this method, we have achieved selective transformations that had been challenging to accomplish in a heterogeneous setting, e.g., π-bond activation and aldol reactions. Extensive investigation into the catalytic systems under reaction conditions allowed us to correlate the structural features (e.g., oxidation states) of the catalysts with their activity.
Moreover, we have demonstrated that supported DEMCs are also excellent catalysts for typical heterogeneous reactions, including hydrogenation and alkane isomerization. Critically, these investigations also confirmed that the supported DEMCs are heterogeneous and stable against leaching. Catalyst optimization is achieved through the modulation of various parameters. The clusters are oxidized (e.g., with PhICl2) or reduced (e.g., with H2) in situ. Changing the dendrimer properties (e.g., generation, terminal functional groups) is analogous to ligand modification in homogeneous catalysts and affects both catalytic activity and selectivity. Similarly, the pore size of the support is another factor determining product distribution. In a flow reactor, the flow rate is adjusted to control the residence time of the starting material and intermediates, and thus the final product selectivity. Our approach to heterogeneous catalysis affords various advantages: (1) the catalyst system can tap into reactivity typical of homogeneous catalysts, which conventional heterogeneous catalysts could not achieve; (2) unlike most homogeneous catalysts with comparable performance, the heterogenized homogeneous catalysts can be recycled; (3) improved activity or selectivity compared to conventional homogeneous catalysts is possible because of uniquely heterogeneous parameters for optimization. In this Account, we briefly introduce metal clusters and describe the synthesis and characterization of supported DEMCs. We present the catalysis studies of supported DEMCs in both batch and flow modes. Lastly, we summarize the current state of heterogenizing homogeneous catalysis and provide future directions for this area of research.
Increasing Inferential Leverage in the Comparative Method: Placebo Tests in Small-"n" Research
ERIC Educational Resources Information Center
Glynn, Adam N.; Ichino, Nahomi
2016-01-01
We delineate the underlying homogeneity assumption, procedural variants, and implications of the comparative method and distinguish this from Mill's method of difference. We demonstrate that additional units can provide "placebo" tests for the comparative method even if the scope of inference is limited to the two units under comparison.…
Mbao, V; Speybroeck, N; Berkvens, D; Dolan, T; Dorny, P; Madder, M; Mulumba, M; Duchateau, L; Brandt, J; Marcotty, T
2005-07-01
Theileria parva sporozoite stabilates are used in the infection and treatment method of immunization, a widely accepted control option for East Coast fever in cattle. T. parva sporozoites are extracted from infected adult Rhipicephalus appendiculatus ticks either manually, using a pestle and a mortar, or by use of an electric homogenizer. A comparison of the two methods as a function of stabilate infectivity has never been documented. This study was designed to provide a quantitative comparison of stabilates produced by the two methods. The approach was to prepare batches of stabilate by both methods and then subject them to in vitro titration. Equivalence testing was then performed on the average effective doses (ED). The ratio of infective sporozoites yielded by the two methods was found to be 1.14 in favour of the manually ground stabilate with an upper limit of the 95% confidence interval equal to 1.3. We conclude that the choice of method rests more on costs, available infrastructure and standardization than on which method produces a richer sporozoite stabilate.
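The equivalence conclusion above rests on a confidence interval for a ratio, which is commonly constructed on the log scale; a sketch of that standard construction follows. The point estimate 1.14 and upper limit of about 1.3 match the reported values, but the standard error and the equivalence margin of 1.5 are assumptions for the demo, not the study's data.

```python
import math

def ratio_ci(log_ratio, se, z=1.96):
    """95% confidence interval for a ratio (e.g., of effective doses),
    computed on the log scale and exponentiated back, a standard
    construction for ratio estimates."""
    return math.exp(log_ratio - z * se), math.exp(log_ratio + z * se)

# Illustrative inputs only: se chosen so the upper limit lands near 1.3.
lo, hi = ratio_ci(math.log(1.14), 0.067)
equivalent = (1 / 1.5) < lo and hi < 1.5
print(round(lo, 2), round(hi, 2), equivalent)
```

Equivalence is declared when the whole interval sits inside the pre-specified margin, which mirrors the paper's reasoning that the two extraction methods yield practically interchangeable stabilates.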
Li, Ting; Yan, Xu; Li, Yuan; Wang, Junjie; Li, Qiang; Li, Hong; Li, Junfeng
2017-01-01
There have been many neuroimaging studies of human personality traits, and they have already provided a glimpse into the neurobiology of complex traits. Most previous studies adopted voxel-based morphometry (VBM) analysis to explore brain-personality mechanisms at two levels (vertex-based and region-based); the findings are mixed, with great inconsistencies, and brain-personality relations are far from fully understood. Here, we used surface-based morphometry (SBM) analysis, which provides better alignment of cortical landmarks, to investigate the associations between cortical morphology and personality traits across 120 healthy individuals at both the vertex and regional levels. To further reveal local functional correlates of the morphology-personality relationships, we related surface-based functional homogeneity measures to the regions identified in the region-based SBM correlation. Vertex-wise analysis revealed that people with high agreeableness exhibited larger areas in the left superior temporal gyrus. Based on regional parcellation, we found that extroversion was negatively related to the volume of the left lateral occipito-temporal gyrus and agreeableness was negatively associated with the sulcus depth of the left superior parietal lobule. Moreover, increased regional homogeneity in the left lateral occipito-temporal gyrus was related to extroversion scores, and increased regional homogeneity in the left superior parietal lobule was related to agreeableness scores. These findings provide supporting evidence of a link between personality and brain structure obtained with SBM, and further suggest that the local functional homogeneity associated with personality traits has neurobiological relevance that is likely based on anatomical substrates.
NASA Astrophysics Data System (ADS)
Song, Dawei; Ponte Castañeda, P.
2018-06-01
We make use of the recently developed iterated second-order homogenization method to obtain finite-strain constitutive models for the macroscopic response of porous polycrystals consisting of large pores randomly distributed in a fine-grained polycrystalline matrix. The porous polycrystal is modeled as a three-scale composite, where the grains are described by single-crystal viscoplasticity and the pores are assumed to be large compared to the grain size. The method makes use of a linear comparison composite (LCC) with the same substructure as the actual nonlinear composite, but whose local properties are chosen optimally via a suitably designed variational statement. In turn, the effective properties of the resulting three-scale LCC are determined by means of a sequential homogenization procedure, utilizing the self-consistent estimates for the effective behavior of the polycrystalline matrix, and the Willis estimates for the effective behavior of the porous composite. The iterated homogenization procedure allows for a more accurate characterization of the properties of the matrix by means of a finer "discretization" of the properties of the LCC to obtain improved estimates, especially at low porosities, high nonlinearities and high triaxialities. In addition, consistent homogenization estimates for the average strain rate and spin fields in the pores and grains are used to develop evolution laws for the substructural variables, including the porosity, pore shape and orientation, as well as the "crystallographic" and "morphological" textures of the underlying matrix. In Part II of this work (Song and Ponte Castañeda, 2018b), the model is used to generate estimates for both the instantaneous effective response and the evolution of the microstructure for porous FCC and HCP polycrystals under various loading conditions.
Cao, Lijuan; Yan, Zhongwei; Zhao, Ping; ...
2017-05-26
Monthly mean instrumental surface air temperature (SAT) observations back to the nineteenth century in China are synthesized from different sources via specific quality-control, interpolation, and homogenization. Compared with the first homogenized long-term SAT dataset for China, which contained 18 stations mainly located in the middle and eastern part of China, the present dataset includes homogenized monthly SAT series at 32 stations, with an extended coverage especially towards western China. Missing values are interpolated by using observations at nearby stations, including those from neighboring countries. Cross validation shows that the mean bias error (MBE) is generally small and falls between 0.45 °C and –0.35 °C. Multiple homogenization methods and available metadata are applied to assess the consistency of the time series and to adjust inhomogeneity biases. The homogenized annual mean SAT series shows a range of trends between 1.1 °C and 4.0 °C/century in northeastern China, between 0.4 °C and 1.9 °C/century in southeastern China, and between 1.4 °C and 3.7 °C/century in western China to the west of 105° E (from the initial years of the stations to 2015). The unadjusted data include unusually warm records during the 1940s and hence tend to underestimate the warming trends at a number of stations. As a result, the mean SAT series for China based on the climate anomaly method shows a warming trend of 1.56 °C/century during 1901–2015, larger than those based on other currently available datasets.
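The climate anomaly method used above to form the regional mean can be sketched in a few lines: each station series is reduced to anomalies relative to its own base-period mean, and the anomalies are then averaged across stations, so stations with different record lengths and absolute temperature levels can be combined. This is only an illustrative sketch; the function name, the base period, and the synthetic data below are assumptions, not the paper's actual processing.

```python
import numpy as np

def climate_anomaly_mean(series, base):
    """Regional mean SAT anomaly by the climate anomaly method (sketch).

    series : 2-D array, shape (stations, years); NaN marks missing values
    base   : index array selecting the base-period years
    """
    series = np.asarray(series, dtype=float)
    # each station's own climatology over the base period
    clim = np.nanmean(series[:, base], axis=1, keepdims=True)
    anomalies = series - clim
    # average anomalies across stations, ignoring missing values
    return np.nanmean(anomalies, axis=0)
```

Because each station is referenced to its own climatology, a systematically warmer station contributes the same anomaly signal as a cooler one, which is what makes the method robust to an uneven station network.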
Colloidal synthesis of silicon nanoparticles in molten salts.
Shavel, A; Guerrini, L; Alvarez-Puebla, R A
2017-06-22
Silicon nanoparticles are unique materials with applications in a variety of fields, from electronics to catalysis and biomedical uses. Despite technological advancements in nanofabrication, the development of a simple and inexpensive route for the synthesis of homogeneous silicon nanoparticles remains highly challenging. Herein, we describe a new, simple and inexpensive colloidal synthetic method for the preparation, under normal pressure and mild temperature conditions, of relatively homogeneous spherical silicon nanoparticles of either ca. 4 or 6 nm diameter. The key features of this method are the selection of a eutectic salt mixture as a solvent, the identification of appropriate silicon alkoxide precursors, and the unconventional use of alkaline-earth metals as shape-controlling agents.
Rungarunlert, Sasitorn; Ferreira, Joao N; Dinnyes, Andras
2016-01-01
Generation of cardiomyocytes from pluripotent stem cells (PSCs) is a common and valuable approach to produce large quantities of cells for various applications, including assays and models for drug development, cell-based therapies, and tissue engineering. All these applications would benefit from a reliable bioreactor-based methodology to consistently generate homogeneous PSC-derived embryoid bodies (EBs) at a large scale, which can further undergo cardiomyogenic differentiation. The goal of this chapter is to describe a scalable method to consistently generate large quantities of homogeneous and synchronized EBs from PSCs. This method utilizes a slow-turning lateral vessel bioreactor to direct EB formation and subsequent cardiomyogenic lineage differentiation.
Numerical experiments in homogeneous turbulence
NASA Technical Reports Server (NTRS)
Rogallo, R. S.
1981-01-01
The direct simulation methods developed by Orszag and Patterson (1972) for isotropic turbulence were extended to homogeneous turbulence in an incompressible fluid subjected to uniform deformation or rotation. The results of simulations for irrotational strain (plane and axisymmetric), shear, rotation, and relaxation toward isotropy following axisymmetric strain are compared with linear theory and experimental data. Emphasis is placed on the shear flow because of its importance and because of the availability of accurate and detailed experimental data. The computed results are used to assess the accuracy of two popular models used in the closure of the Reynolds-stress equations. Data from a variety of the computed fields and the details of the numerical methods used in the simulation are also presented.
A biochemical method for assessing the neurotoxic effects of misonidazole in the rat.
Rose, G. P.; Dewar, A. J.; Stratford, I. J.
1980-01-01
A proven biochemical method for assessing chemically induced neurotoxicity has been applied to the study of the toxic effects of misonidazole (MISO) in the rat. This involves the fluorimetric measurement of beta-glucuronidase and beta-galactosidase activities in homogenates of rat nervous tissue. The tissues analysed were sciatic/posterior tibial nerve (SPTN) cut into 4 sections, trigeminal ganglia and cerebellum. MISO administered i.p. to Wistar rats in doses greater than 300 mg/kg/day for 7 consecutive days produced maximal increases in both beta-glucuronidase and beta-galactosidase activities in the SPTN at 4 weeks (140-180% of control values). The highest increases were associated with the most distal section of the nerve. Significant enzyme-activity changes were also found in the trigeminal ganglia and cerebellum of MISO-dosed rats. The greatest activity occurred 4-5 weeks after dosing, and was dose-related. It is concluded that, in the rat, MISO can produce biochemical changes consistent with a dying-back peripheral neuropathy, and biochemical changes suggestive of cerebellar damage. This biochemical approach would appear to offer a convenient quantitative method for the detection of neurotoxic effects of other potential radio-sensitizing drugs. PMID:7459223
[Mechanical Shimming Method and Implementation for Permanent Magnet of MRI System].
Xue, Tingqiang; Chen, Jinjun
2015-03-01
A mechanical shimming method and device for the permanent magnet of an MRI system have been developed to meet its stringent homogeneity requirement without time-consuming passive shimming on site; installation and adjustment efficiency has been increased accordingly.
Validation of Hansen-Roach library for highly enriched uranium metal systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenz, T.R.; Busch, R.D.
The Hansen-Roach 16-group cross-section library has been validated for use in pure uranium metal systems by modeling the Godiva critical assembly using the neutron transport theory code ONEDANT to perform effective multiplication factor (k_eff) calculations. The cross-section library used contains data for 118 isotopes (34 unique elements), including the revised cross sections for ²³⁵U and ²³⁸U. The Godiva critical assembly is a 17.4-cm sphere composed of 93.7 wt% ²³⁵U, 1.0 wt% ²³⁴U, and 5.3 wt% ²³⁸U with an effective homogeneous density of 18.7 g/cm³.
ERIC Educational Resources Information Center
Mattox, Daniel V., Jr.
Research compared conventional and experimental methods of instruction in a teacher education media course. The conventional method relied upon factual presentations to heterogeneous groups, while the experimental utilized homogeneous clusters of students and stressed individualized instruction. A pretest-posttest, experimental-control group…
Whole-angle spherical retroreflector using concentric layers of homogeneous optical media.
Oakley, John P
2007-03-01
Spherical retroreflectors have a much greater acceptance angle than conventional retroreflectors such as corner cubes. However, the optical performance of known spherical reflectors is limited by spherical aberration. It is shown that third-order spherical aberration may be corrected by using two or more layers of homogeneous optical media of different refractive indices. The performance of the retroreflector is characterized by the scattering (or radar) cross section, which is calculated by using optical design software. A practical spherical reflector is described that offers a significant increase in optical performance over existing devices. No gradient index components are required, and the device is constructed by using conventional optical materials and fabrication techniques. The experimental results confirm that the device operates correctly at the design wavelength of 690 nm.
Scattering and cloaking of binary hyper-particles in metamaterials.
Alexopoulos, A; Yau, K S B
2010-09-13
We derive the d-dimensional scattering cross section for homogeneous and composite hyper-particles inside a metamaterial. The polarizability of the hyper-particles is expressed in multi-dimensional form and is used in order to examine various scattering characteristics. We introduce scattering bounds that display interesting results when d → ∞ and in particular consider the special limit of hyper-particle cloaking in some detail. We demonstrate cloaking via resonance for homogeneous particles and show that composite hyper-particles can be used in order to obtain electromagnetic cloaking with either negative or all positive constitutive parameters respectively. Our approach not only considers cloaking of particles of integer dimension but also particles with non-integer dimension such as fractals. Theoretical results are compared to full-wave numerical simulations for two interacting hyper-particles in a medium.
NASA Astrophysics Data System (ADS)
Moraleda, Joaquín; Segurado, Javier; LLorca, Javier
2009-09-01
The in-plane finite deformation of incompressible fiber-reinforced elastomers was studied using computational micromechanics. Composite microstructure was made up of a random and homogeneous dispersion of aligned rigid fibers within a hyperelastic matrix. Different matrices (Neo-Hookean and Gent), fibers (monodisperse or polydisperse, circular or elliptical section) and reinforcement volume fractions (10-40%) were analyzed through the finite element simulation of a representative volume element of the microstructure. A successive remeshing strategy was employed when necessary to reach the large deformation regime in which the evolution of the microstructure influences the effective properties. The simulations provided for the first time "quasi-exact" results of the in-plane finite deformation for this class of composites, which were used to assess the accuracy of the available homogenization estimates for incompressible hyperelastic composites.
Non-homogeneous updates for the iterative coordinate descent algorithm
NASA Astrophysics Data System (ADS)
Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang
2007-02-01
Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain as a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call nonhomogeneous iterative coordinate descent (NH-ICD) uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
Physical-geometric optics method for large size faceted particles.
Sun, Bingqiang; Yang, Ping; Kattawar, George W; Zhang, Xiaodong
2017-10-02
A new physical-geometric optics method is developed to compute the single-scattering properties of faceted particles. It incorporates a general absorption vector to accurately account for inhomogeneous wave effects, and subsequently yields analytical formulas that are effective and computationally efficient for absorptive scattering particles. A bundle of rays incident on a certain facet can be traced as a single beam. For a beam incident on multiple facets, a systematic beam-splitting technique based on computer graphics is used to split the original beam into several sub-beams so that each sub-beam is incident only on an individual facet. The new beam-splitting technique significantly reduces the computational burden. The present physical-geometric optics method can be generalized to arbitrary faceted particles with either convex or concave shapes and with a homogeneous or an inhomogeneous (e.g., a particle with a core) composition. The single-scattering properties of irregular convex homogeneous and inhomogeneous hexahedra are simulated and compared to their counterparts from two other methods, including a numerically rigorous method.
Optimal regionalization of extreme value distributions for flood estimation
NASA Astrophysics Data System (ADS)
Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.
2018-01-01
Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged, stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured in terms of a hydrological distance measuring differences in physical and meteorological catchment attributes. We develop a statistical method for estimation of high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.
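The pooled-estimation step can be sketched as follows: annual maxima from the chosen homogeneous group are fitted with a generalized extreme value (GEV) distribution, and the T-year return level is read off as the upper 1/T quantile. The function name and synthetic data are illustrative; the paper's method additionally regionalizes the GEV parameters and optimizes the attribute weights of the hydrological distance when forming the group.

```python
import numpy as np
from scipy.stats import genextreme

def gev_return_level(pooled_maxima, T):
    """Fit a GEV to pooled annual maxima and return the T-year return
    level, i.e. the level exceeded with probability 1/T in any year."""
    shape, loc, scale = genextreme.fit(pooled_maxima)
    return genextreme.isf(1.0 / T, shape, loc=loc, scale=scale)
```

For a Gumbel-like sample (GEV shape near zero), the 100-year level is roughly loc + 4.6 × scale, which gives a quick sanity check on the fit.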
A stochastic vortex structure method for interacting particles in turbulent shear flows
NASA Astrophysics Data System (ADS)
Dizaji, Farzad F.; Marshall, Jeffrey S.; Grant, John R.
2018-01-01
In a recent study, we have proposed a new synthetic turbulence method based on stochastic vortex structures (SVSs), and we have demonstrated that this method can accurately predict particle transport, collision, and agglomeration in homogeneous, isotropic turbulence in comparison to direct numerical simulation results. The current paper extends the SVS method to non-homogeneous, anisotropic turbulence. The key element of this extension is a new inversion procedure, by which the vortex initial orientation can be set so as to generate a prescribed Reynolds stress field. After validating this inversion procedure for simple problems, we apply the SVS method to the problem of interacting particle transport by a turbulent planar jet. Measures of the turbulent flow and of particle dispersion, clustering, and collision obtained by the new SVS simulations are shown to compare well with direct numerical simulation results. The influence of different numerical parameters, such as number of vortices and vortex lifetime, on the accuracy of the SVS predictions is also examined.
Large-Eddy Simulation of Conductive Flows at Low Magnetic Reynolds Number
NASA Technical Reports Server (NTRS)
Knaepen, B.; Moin, P.
2003-01-01
In this paper we study the LES method with the dynamic procedure in the context of conductive flows subject to an applied external magnetic field at low magnetic Reynolds number R_m. These kinds of flows are encountered in many industrial applications. For example, in the steel industry, applied magnetic fields can be used to damp turbulence in the casting process. In nuclear fusion devices (tokamaks), liquid-lithium flows are used as coolant blankets and interact with the surrounding magnetic field that drives and confines the fusion plasma. Also, in experimental facilities investigating the dynamo effect, the flow consists of liquid sodium, for which the Prandtl number and, as a consequence, the magnetic Reynolds number are low. Our attention is focused here on the case of homogeneous (initially isotropic) decaying turbulence. The numerical simulations performed mimic the thought experiment described by Moffatt, in which an initially homogeneous isotropic conductive flow is suddenly subjected to an applied magnetic field and freely decays without any forcing. Note that this flow was first studied numerically by Schumann. It is well known that in that case extra damping of turbulence occurs due to the Joule effect and that the flow tends to become progressively independent of the coordinate along the direction of the magnetic field. Our comparison of filtered direct numerical simulation (DNS) predictions and LES predictions shows that the dynamic Smagorinsky model enables one to successfully capture the flow with LES, and that it automatically incorporates the effect of the magnetic field on the turbulence. Our paper is organized as follows. In the next section we summarize the LES approach in the case of MHD turbulence at low R_m and recall the definition of the dynamic Smagorinsky model. In Sec. 3 we describe the parameters of the numerical experiments performed and the code used. Section 4 is devoted to the comparison of filtered DNS results and LES results. Conclusions are presented in Sec. 5.
Capoor, Manu N; Ruzicka, Filip; Machackova, Tana; Jancalek, Radim; Smrcka, Martin; Schmitz, Jonathan E; Hermanova, Marketa; Sana, Jiri; Michu, Elleni; Baird, John C; Ahmed, Fahad S; Maca, Karel; Lipina, Radim; Alamin, Todd F; Coscia, Michael F; Stonemetz, Jerry L; Witham, Timothy; Ehrlich, Garth D; Gokaslan, Ziya L; Mavrommatis, Konstantinos; Birkenmaier, Christof; Fischetti, Vincent A; Slaby, Ondrej
2016-01-01
The relationship between intervertebral disc degeneration and chronic infection by Propionibacterium acnes is controversial, with contradictory evidence available in the literature. Previous studies investigating these relationships were under-powered and fraught with methodological differences; moreover, they have not taken into consideration P. acnes' ability to form biofilms or attempted to quantitate the bioburden with regard to determining bacterial counts/genome equivalents as criteria to differentiate true infection from contamination. The aim of this prospective cross-sectional study was to determine the prevalence of P. acnes in patients undergoing lumbar disc microdiscectomy. The sample consisted of 290 adult patients undergoing lumbar microdiscectomy for symptomatic lumbar disc herniation. An intraoperative biopsy and pre-operative clinical data were taken in all cases. One biopsy fragment was homogenized and used for quantitative anaerobic culture and a second was frozen and used for real-time PCR-based quantification of P. acnes genomes. P. acnes was identified in 115 cases (40%), coagulase-negative staphylococci in 31 cases (11%) and alpha-hemolytic streptococci in 8 cases (3%). P. acnes counts ranged from 100 to 9000 CFU/ml with a median of 400 CFU/ml. The prevalence of intervertebral discs with abundant P. acnes (≥ 1×10³ CFU/ml) was 11% (39 cases). There was significant correlation between the bacterial counts obtained by culture and the number of P. acnes genomes detected by real-time PCR (r = 0.4363, p < 0.0001). In this large series of patients, the prevalence of discs with abundant P. acnes was 11%. We believe disc tissue homogenization releases P. acnes from the biofilm so that they can then potentially be cultured, reducing the rate of false-negative cultures. Further, the quantification results, which reveal significant bioburden by both culture and real-time PCR, minimize the likelihood that the observed findings are due to contamination and support the hypothesis that P. acnes acts as a pathogen in these cases of degenerative disc disease.
NASA Astrophysics Data System (ADS)
Gallezot, M.; Treyssède, F.; Laguerre, L.
2018-03-01
This paper investigates the computation of the forced response of elastic open waveguides with a numerical modal approach based on perfectly matched layers (PML). With a PML of infinite thickness, the solution can theoretically be expanded as a discrete sum of trapped modes, a discrete sum of leaky modes and a continuous sum of radiation modes related to the PML branch cuts. Yet with numerical methods (e.g. finite elements), the waveguide cross-section is discretized and the PML must be truncated to a finite thickness. This truncation transforms the continuous sum into a discrete set of PML modes. To guarantee the uniqueness of the numerical solution of the forced response problem, an orthogonality relationship is proposed. This relationship is applicable to any type of modes (trapped, leaky and PML modes) and hence allows the numerical solution to be expanded on a discrete sum in a convenient manner. This also leads to an expression for the modal excitability valid for leaky modes. The physical relevance of each type of mode for the solution is clarified through two numerical test cases, a homogeneous medium and a circular bar waveguide example, excited by a point source. The former is favourably compared to a transient analytical solution, showing that PML modes reassemble the bulk wave contribution in a homogeneous medium. The latter shows that the PML mode contribution yields the long-term diffraction phenomenon whereas the leaky mode contribution prevails closer to the source. The leaky mode contribution is shown to remain accurate even with a relatively small PML thickness, hence reducing the computational cost. This is of particular interest for solving three-dimensional waveguide problems, involving two-dimensional cross-sections of arbitrary shapes. Such a problem is handled in a third numerical example by considering a buried square bar.
ZERODUR: progress in CTE characterization
NASA Astrophysics Data System (ADS)
Jedamzik, Ralf; Kunisch, Clemens; Westerhoff, Thomas
2013-09-01
In 2010, SCHOTT introduced a method for the modeling of the thermal expansion behavior of ZERODUR® under arbitrary temperature profiles for an optimized production of material for the upcoming Extremely Large Telescope (ELT) projects. In 2012 a new product was introduced based on this method called ZERODUR® TAILORED. ZERODUR® TAILORED provides an evolution in the specification of the absolute Coefficient of Thermal Expansion (CTE) value by including the individual customer requirements in this process. This paper presents examples showing the benefit of an application oriented approach in the design of specifications using ZERODUR®. Additionally it will be shown how the modeling approach has advanced during the last years to improve the prediction accuracy on long time scales. ZERODUR® is known not only for its lowest CTE but also for its excellent CTE homogeneity as shown in the past for disc shaped blanks typical for telescope mirror substrates. Additionally this paper presents recent results of CTE homogeneity measurements in the single digit ppb/K range for a rectangular cast plate proving that the excellent CTE homogeneity is independent of the production format.
Flux density calibration in diffuse optical tomographic systems.
Biswas, Samir Kumar; Rajan, Kanhirodan; Vasu, Ram M
2013-02-01
The solution of the forward equation that models the transport of light through a highly scattering tissue material in diffuse optical tomography (DOT) using the finite element method gives the flux density (Φ) at the nodal points of the mesh. The experimentally measured flux (U_measured) on the boundary over a finite surface area in a DOT system has to be corrected to account for the system transfer functions (R) of various building blocks of the measurement system. We present two methods to compensate for the perturbations caused by R and estimate the true flux density (Φ) from U_measured^cal. In the first approach, the measurement data with a homogeneous phantom (U_measured^homo) are used to calibrate the measurement system. The second scheme estimates the homogeneous phantom measurement using only the measurement from a heterogeneous phantom, thereby eliminating the necessity of a homogeneous phantom. This is done by statistically averaging the data (U_measured^hetero) and redistributing it to the corresponding detector positions. The experiments carried out on tissue-mimicking phantoms with single and multiple inhomogeneities, a human hand, and a pork tissue phantom demonstrate the robustness of the approach.
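The first calibration scheme can be sketched as a per-detector division: the transfer function R is estimated from the homogeneous-phantom measurement together with the model-predicted flux density for that phantom, and then removed from the heterogeneous measurement. This is a minimal sketch under the assumption of a purely multiplicative, detector-wise transfer function; the function and variable names are illustrative, not the paper's.

```python
import numpy as np

def calibrate_flux(u_measured, u_homo, phi_model_homo):
    """Estimate the true flux density at each boundary detector.

    u_measured     : measured flux, heterogeneous (unknown) phantom
    u_homo         : measured flux, homogeneous calibration phantom
    phi_model_homo : FEM-predicted flux density for the homogeneous phantom
    """
    # per-detector system transfer function R = U_homo / Phi_model
    R = np.asarray(u_homo, dtype=float) / np.asarray(phi_model_homo, dtype=float)
    return np.asarray(u_measured, dtype=float) / R
```

If R is detector-wise multiplicative, this recovers the underlying flux density exactly; real systems would also need noise handling and a model for any non-multiplicative effects.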
Homogeneity study of a corn flour laboratory reference material candidate for inorganic analysis.
Dos Santos, Ana Maria Pinto; Dos Santos, Liz Oliveira; Brandao, Geovani Cardoso; Leao, Danilo Junqueira; Bernedo, Alfredo Victor Bellido; Lopes, Ricardo Tadeu; Lemos, Valfredo Azevedo
2015-07-01
In this work, a homogeneity study of a corn flour reference material candidate for inorganic analysis is presented. Seven kilograms of corn flour were used to prepare the material, which was distributed among 100 bottles. The elements Ca, K, Mg, P, Zn, Cu, Fe, Mn and Mo were quantified by inductively coupled plasma optical emission spectrometry (ICP OES) after an acid digestion procedure. The method accuracy was confirmed by analyzing the rice flour certified reference material NIST 1568a. All results were evaluated by analysis of variance (ANOVA) and principal component analysis (PCA). In the study, a sample mass of 400 mg was established as the minimum mass required for analysis, according to the PCA. The between-bottle test was performed by analyzing 9 bottles of the material. Subsamples of a single bottle were analyzed for the within-bottle test. No significant differences were observed in the results obtained through the application of either statistical method. This demonstrates that the material is homogeneous for use as a laboratory reference material.
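The between-bottle test described above amounts to a one-way ANOVA across bottles for each element: a non-significant result gives no evidence of between-bottle inhomogeneity. A minimal sketch with scipy follows; the synthetic concentrations are illustrative, not the paper's data.

```python
import numpy as np
from scipy.stats import f_oneway

def between_bottle_test(bottles):
    """One-way ANOVA across bottles: a large p-value gives no evidence
    of between-bottle inhomogeneity for the measured element."""
    f_stat, p_value = f_oneway(*bottles)
    return f_stat, p_value

# nine bottles, five replicate (hypothetical) Fe concentrations each,
# all drawn from the same population, i.e. a homogeneous material
rng = np.random.default_rng(42)
bottles = [rng.normal(100.0, 2.0, size=5) for _ in range(9)]
f_stat, p_value = between_bottle_test(bottles)
```

In a real homogeneity study the same test is repeated element by element, and the within-bottle test uses subsamples of a single bottle in the same way.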
Automated object-based classification of topography from SRTM data
Drăguţ, Lucian; Eisank, Clemens
2012-01-01
We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation, respectively. Results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with visualization and download functionality. PMID:22485060
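The final partitioning step, thresholding by the mean of elevation and the mean of the standard deviation of elevation, can be illustrated directly on a raster grid. This sketch operates per cell rather than per segmented object and omits the multi-scale segmentation; the window size and the synthetic DEM are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def classify_topography(dem, window=5):
    """Label each cell with one of four domains by thresholding elevation
    and local standard deviation of elevation at their mean values
    (the two parameters the method relies on)."""
    dem = np.asarray(dem, dtype=float)
    local_mean = uniform_filter(dem, window)
    local_sq_mean = uniform_filter(dem * dem, window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean**2, 0.0))
    high = dem > dem.mean()                # above/below mean elevation
    rough = local_std > local_std.mean()   # above/below mean roughness
    return high.astype(int) * 2 + rough.astype(int)  # labels 0..3
```

The four labels correspond to low/flat, low/rough, high/flat, and high/rough terrain, i.e. the kind of sub-domains the object-based partitioning produces.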
Observation of force-detected nuclear magnetic resonance in a homogeneous field
Madsen, L. A.; Leskowitz, G. M.; Weitekamp, D. P.
2004-01-01
We report the experimental realization of BOOMERANG (better observation of magnetization, enhanced resolution, and no gradient), a sensitive and general method of magnetic resonance. The prototype millimeter-scale NMR spectrometer shows signal and noise levels in agreement with the design principles. We present 1H and 19F NMR in both solid and liquid samples, including time-domain Fourier transform NMR spectroscopy, multiple-pulse echoes, and heteronuclear J spectroscopy. By measuring a 1H-19F J coupling, this last experiment accomplishes chemically specific spectroscopy with force-detected NMR. In BOOMERANG, an assembly of permanent magnets provides a homogeneous field throughout the sample, while a harmonically suspended part of the assembly, a detector, is mechanically driven by spin-dependent forces. By placing the sample in a homogeneous field, signal dephasing by diffusion in a field gradient is made negligible, enabling application to liquids, in contrast to other force-detection methods. The design appears readily scalable to μm-scale samples where it should have sensitivity advantages over inductive detection with microcoils and where it holds great promise for application of magnetic resonance in biology, chemistry, physics, and surface science. We briefly discuss extensions of the BOOMERANG method to the μm and nm scales. PMID:15326302
Development of Cross Section Library and Application Programming Interface (API)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Marin-Lafleche, A.; Smith, M. A.
2014-04-09
The goal of NEAMS neutronics is to develop a high-fidelity deterministic neutron transport code termed PROTEUS for use on all reactor types of interest, but focused primarily on sodium-cooled fast reactors. While PROTEUS-SN has demonstrated good accuracy for homogeneous fast reactor problems and partially heterogeneous fast reactor problems, the simulation results were not satisfactory when applied on fully heterogeneous thermal problems like the Advanced Test Reactor (ATR). This is mainly attributed to the quality of cross section data for heterogeneous geometries since the conventional cross section generation approach does not work accurately for such irregular and complex geometries. Therefore, one of the NEAMS neutronics tasks since FY12 has been the development of a procedure to generate appropriate cross sections for a heterogeneous geometry core.
Warm and cold pasta phase in relativistic mean field theory
NASA Astrophysics Data System (ADS)
Avancini, S. S.; Menezes, D. P.; Alloy, M. D.; Marinelli, J. R.; Moraes, M. M. W.; Providência, C.
2008-07-01
In the present article we investigate the onset of the pasta phase with different parametrizations of the nonlinear Walecka model. At zero temperature two different methods are used, one based on coexistent phases and the other on the Thomas-Fermi approximation. At finite temperature only the coexistence phases method is used. npe matter with fixed proton fractions and in β equilibrium is studied. The pasta phase decreases with the increase of temperature. The internal pasta structure and the beginning of the homogeneous phase vary depending on the proton fraction (or the imposition of β equilibrium), on the method used, and on the chosen parametrization. It is shown that a good parametrization of the surface tension with dependence on the temperature, proton fraction, and geometry is essential to describe correctly large isospin asymmetries and the transition from pasta to homogeneous matter.
NASA Astrophysics Data System (ADS)
Hill, Laura E.; Gomes, Carmen L.
2014-12-01
The goal of this study was to develop an effective method to synthesize poly(N-isopropylacrylamide) (PNIPAAM) nanoparticles with entrapped cinnamon bark extract (CBE) to improve its delivery to foodborne pathogens and control its release with temperature stimuli. CBE was used as a model for hydrophobic natural antimicrobials. A top-down procedure using crosslinked PNIPAAM was compared to a bottom-up procedure using NIPAAM monomer. Both processes relied on self-assembly of the molecules into micelles around the CBE at 40 °C. Processing conditions were compared, including homogenization time of the polymer, hydration time prior to homogenization, lyophilization, and the effect of particle ultrafiltration. The top-down versus bottom-up synthesis methods yielded particles with significantly different characteristics, especially their release profiles and antimicrobial activities. The synthesis methods affected particle size, with the bottom-up procedure resulting in smaller (P < 0.05) diameters than the top-down procedure. The controlled release profile of CBE from nanoparticles was dependent on the release media temperature. A faster, burst release was observed at 40 °C and a slower, more sustained release was observed at lower temperatures. PNIPAAM particles containing CBE were analyzed for their antimicrobial activity against Salmonella enterica serovar Typhimurium LT2 and Listeria monocytogenes Scott A. The PNIPAAM particles synthesized via the top-down procedure had a much faster release, which led to a greater (P < 0.05) antimicrobial activity. Both top-down nanoparticle formulations performed similarly; therefore, the nanoparticles made with the 7 min homogenization time would be best for this application, as the process time is shorter and little improvement was seen with a slightly longer homogenization time.
Morris, Renée; Mehta, Prachi
2018-01-01
In mammals, the central nervous system (CNS) is constituted of various cellular elements, posing a challenge to isolating specific cell types to investigate their expression profile. As a result, tissue homogenization is not amenable to analyses of motor neurons profiling as these represent less than 10% of the total spinal cord cell population. One way to tackle the problem of tissue heterogeneity and obtain meaningful genomic, proteomic, and transcriptomic profiling is to use laser capture microdissection technology (LCM). In this chapter, we describe protocols for the capture of isolated populations of motor neurons from spinal cord tissue sections and for downstream transcriptomic analysis of motor neurons with RT-PCR. We have also included a protocol for the immunological confirmation that the captured neurons are indeed motor neurons. Although focused on spinal cord motor neurons, these protocols can be easily optimized for the isolation of any CNS neurons.
Validation of Blockage Interference Corrections in the National Transonic Facility
NASA Technical Reports Server (NTRS)
Walker, Eric L.
2007-01-01
A validation test has recently been constructed for wall interference methods as applied to the National Transonic Facility (NTF). The goal of this study was to begin to address the uncertainty of wall-induced blockage interference corrections, which will make it possible to address the overall quality of data generated by the facility. The validation test itself is not specific to any particular modeling. For this present effort, the Transonic Wall Interference Correction System (TWICS) as implemented at the NTF is the mathematical model being tested. TWICS uses linear, potential boundary conditions that must first be calibrated. These boundary conditions include three different classical, linear, homogeneous forms that have been historically used to approximate the physical behavior of longitudinally slotted test section walls. Results of the application of the calibrated wall boundary conditions are discussed in the context of the validation test.
Cyclooxygenase and lipoxygenase-like activity in Drosophila melanogaster.
Pagés, M; Roselló, J; Casas, J; Gelpí, E; Gualde, N; Rigaud, M
1986-11-01
To determine the possible activity of cyclooxygenase and lipoxygenase-like enzymes in Drosophila melanogaster, we have investigated whether fly homogenates can biosynthesize prostaglandins and HETEs. Incubation of fly extracts with AA yields a mixture of 15-, 12-, 9- and 8-HETE as detected by selected ion monitoring GC-MS. Also, the combination of HPLC-RIA using a PGE antibody shows the presence of endogenous PGE2 immunoreactivity in the extracts (405 pg/g in males and 165 pg/g in females). We have also detected the presence of lipoxygenase-like immunoreactivity in the male reproductive system by using immunocytochemical techniques in whole body sections of the fly, as well as reactivity in the digestive system of both males and females. Finally, we have not been able to detect endogenous AA in the fly by GC-MS methods. However, estimates by GC-MS of the total body fatty acids indicate substantial amounts of potential AA precursors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakamachi, Eiji; Yoshida, Takashi; Yamaguchi, Toshihiko
2014-10-06
We developed a two-scale FE analysis procedure based on the crystallographic homogenization method, considering the hierarchical structure of polycrystalline aluminium alloy metal. It can be characterized as the combination of a two-scale structure: the microscopic polycrystal structure and the macroscopic elastic-plastic continuum. The micro polycrystal structure can be modeled as a three-dimensional representative volume element (RVE). The RVE is discretized by 3×3×3 eight-node solid finite elements with 216 crystal orientations. This FE analysis code can predict the deformation, strain and stress evolutions in the wire drawing processes at the macro-scale, and further the crystal texture and hardening evolutions at the micro-scale. In this study, we analyzed the texture evolution in the wire drawing processes with our two-scale FE analysis code under various die drawing angles. We evaluate the texture evolution in the surface and center regions of the wire cross section and clarify the effects of processing conditions on the texture evolution.
NASA Astrophysics Data System (ADS)
Polilov, A. N.; Tatus’, N. A.
2018-04-01
The goal of this paper is the analysis of design methods for composite beams and plates with curvilinear fiber trajectories. The novelty of this approach stems from the fact that traditional composite materials are typically formed using prepregs with rectilinear fibers only. The results are applicable to the design of shaped composite structural elements using biomechanical principles. One related problem is evaluating the effect of fiber misorientation on the stiffness and load-carrying capacity of a shaped composite element with curvilinear fiber trajectories. An equistrong beam with constant cross-section area is considered as an example; it can be produced by forming a unidirectional fiber bunch impregnated with a polymer matrix. Methods for evaluating the effective elastic modulus of structures with curvilinear fiber trajectories are validated. The misorientation angle range (up to 5°) within which the material can, with the required accuracy, be considered homogeneous, neglecting fiber misorientation, is determined. It is shown that for beams with a sufficiently small height-to-width ratio it is possible to consider 2D misorientation only.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meredith, S.E.; Benjamin, J.F.
1993-07-13
A method is described for manufacturing corrosion resistant tubing from seam welded stock of a titanium or titanium based alloy, comprising: cold pilgering a seam welded tube hollow of titanium or titanium based alloy in a single pass to a final sized tubing, the tube hollow comprising a strip which has been bent and welded along opposed edges thereof to form the tube hollow, the tube hollow optionally being heat treated prior to the cold pilgering step provided the tube hollow is not heated to a temperature which would transform the titanium or titanium alloy into the beta phase, the cold pilgering effecting a reduction in cross sectional area of the tube hollow of at least 50% and a reduction of wall thickness of at least 50%, in order to achieve a radially oriented crystal structure; and annealing the final sized tubing at a temperature and time sufficient to effect complete recrystallization and reform grains in a weld area along the seam into smaller, homogeneous grains.
NASA Astrophysics Data System (ADS)
Nakamachi, Eiji; Yoshida, Takashi; Kuramae, Hiroyuki; Morimoto, Hideo; Yamaguchi, Toshihiko; Morita, Yusuke
2014-10-01
We developed a two-scale FE analysis procedure based on the crystallographic homogenization method, considering the hierarchical structure of polycrystalline aluminium alloy metal. It can be characterized as the combination of a two-scale structure: the microscopic polycrystal structure and the macroscopic elastic-plastic continuum. The micro polycrystal structure can be modeled as a three-dimensional representative volume element (RVE). The RVE is discretized by 3×3×3 eight-node solid finite elements with 216 crystal orientations. This FE analysis code can predict the deformation, strain and stress evolutions in the wire drawing processes at the macro-scale, and further the crystal texture and hardening evolutions at the micro-scale. In this study, we analyzed the texture evolution in the wire drawing processes with our two-scale FE analysis code under various die drawing angles. We evaluate the texture evolution in the surface and center regions of the wire cross section and clarify the effects of processing conditions on the texture evolution.
Travelling wave solutions of the homogeneous one-dimensional FREFLO model
NASA Astrophysics Data System (ADS)
Huang, B.; Hong, J. Y.; Jing, G. Q.; Niu, W.; Fang, L.
2018-01-01
Presently there are quite few analytical studies of traffic flows due to the non-linearity of the governing equations. In the present paper we introduce travelling wave solutions for the homogeneous one-dimensional FREFLO model, which are expressed in the form of series and describe the process by which vehicles/pedestrians move with a negative velocity and decelerate until rest, then accelerate inversely to positive velocities. This method is expected to be extended to more complex situations in the future.
NASA Technical Reports Server (NTRS)
Madnia, C. K.; Frankel, S. H.; Givi, P.
1992-01-01
The presently obtained closed-form analytical expressions, which predict the limiting rate of mean reactant conversion in homogeneous turbulent flows under the influence of a binary reaction, are derived via the single-point pdf method based on amplitude mapping closure. With this model, the maximum rate of the mean reactant's decay can be conveniently expressed in terms of definite integrals of the parabolic cylinder functions. The results obtained are shown to be in good agreement with data generated by direct numerical simulations.
NASA Astrophysics Data System (ADS)
Tie, Lin
2017-08-01
In this paper, the controllability problem of two-dimensional discrete-time multi-input bilinear systems is completely solved. The homogeneous and the inhomogeneous cases are studied separately and necessary and sufficient conditions for controllability are established by using a linear algebraic method, which are easy to apply. Moreover, for the uncontrollable systems, near-controllability is considered and similar necessary and sufficient conditions are also obtained. Finally, examples are provided to demonstrate the results of this paper.
Active electromagnetic invisibility cloaking and radiation force cancellation
NASA Astrophysics Data System (ADS)
Mitri, F. G.
2018-03-01
This investigation shows that an active emitting electromagnetic (EM) Dirichlet source (i.e., with axial polarization of the electric field) in a homogeneous non-dissipative/non-absorptive medium placed near a perfectly conducting boundary can render total invisibility (i.e. zero extinction cross-section or efficiency) in addition to a radiation force cancellation on its surface. Based upon the Poynting theorem, the mathematical expressions for the extinction, radiation and amplification cross-sections (or efficiencies) are derived using the partial-wave series expansion method in cylindrical coordinates. Moreover, the analysis is extended to compute the self-induced EM radiation force on the active source, resulting from the waves reflected by the boundary. The numerical results predict the generation of a zero extinction efficiency, achieving total invisibility, in addition to a radiation force cancellation, both of which depend on the source size, the distance from the boundary and the associated EM mode order of the active source. Furthermore, an attractive EM pushing force on the active source directed toward the boundary or a repulsive pulling one pointing away from it can arise accordingly. The numerical predictions and computational results find potential applications in the design and development of EM cloaking devices, invisibility and stealth technologies.
NASA Astrophysics Data System (ADS)
Maiz, Lotfi; Trzciński, Waldemar A.; Paszula, Józef
2017-01-01
Confined and semi-closed explosions of a new class of energetic composites, as well as of TNT and RDX charges, were investigated using optical spectroscopy. These composites are considered thermobaric when used in layered charges, or enhanced blast explosives when pressed. Two methods to estimate fireball temperature histories of both homogeneous and metallized explosives from the spectroscopic data are also presented, compared and analyzed. Fireball temperature results of the charges detonated in a small explosion chamber under air and argon atmospheres, and detonated in a semi-closed bunker, are presented and compared with theoretical ones calculated by a thermochemical code. Important conclusions about the fireball temperatures and the physical and chemical phenomena occurring after the detonation of homogeneous explosives and composite formulations are deduced.
Production of solid lipid nanoparticles (SLN): scaling up feasibilities.
Dingler, A; Gohla, S
2002-01-01
Solid lipid nanoparticles (SLN/Lipopearls) are widely discussed as a new colloidal drug carrier system. In contrast to polymeric systems, such as polylactic copolyol microcapsules, these systems show good biocompatibility when applied parenterally. The solid lipid matrices can be comprised of fats or waxes and allow protection of incorporated active ingredients against chemical and physical degradation. The SLN can be produced either by 'hot homogenization' of melted lipids at elevated temperatures or by a 'cold homogenization' process. This paper deals with production technologies for SLN formulations, based on non-ethoxylated fat components for topical application and high-pressure homogenization. Based on the chosen fat components, a novel and easy manufacturing and scaling-up method was developed to maintain the chemical and physical integrity of the encapsulated active ingredients in the carrier.
Homogeneous molybdenum disulfide tunnel diode formed via chemical doping
NASA Astrophysics Data System (ADS)
Liu, Xiaochi; Qu, Deshun; Choi, Min Sup; Lee, Changmin; Kim, Hyoungsub; Yoo, Won Jong
2018-04-01
We report on a simple, controllable chemical doping method to fabricate a lateral homogeneous MoS2 tunnel diode. MoS2 was doped to degenerate n- (1.6 × 1013 cm-2) and p-type (1.1 × 1013 cm-2) by benzyl viologen and AuCl3, respectively. The n- and p-doping can be patterned on the same MoS2 flake, and the high doping concentration can be maintained by Al2O3 masking together with vacuum annealing. A forward rectifying p-n diode and a band-to-band tunneling induced backward rectifying diode were realized by modulating the doping concentration of both the n- and p-sides. Our approach is a universal stratagem to fabricate diverse 2D homogeneous diodes with various functions.
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
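The Monte Carlo averaging described above is where variance reduction pays off. As a generic illustration (not the authors' corrector-problem setting), the sketch below applies antithetic variates, one of the techniques commonly borrowed from engineering contexts, to a toy scalar expectation; the integrand and sample sizes are arbitrary assumptions:

```python
import numpy as np

def f(u):
    """Toy stand-in for the output of one 'corrector problem' as a function
    of the random input; any monotone integrand benefits from pairing."""
    return np.exp(u)

def estimate(n, rng, antithetic=False):
    """Return (mean estimate, standard error) for E[f(U)], U ~ Uniform(0,1)."""
    if antithetic:
        u = rng.random(n // 2)
        samples = 0.5 * (f(u) + f(1.0 - u))  # negatively correlated pair
    else:
        samples = f(rng.random(n))
    return samples.mean(), samples.std(ddof=1) / np.sqrt(len(samples))

m_plain, se_plain = estimate(100_000, np.random.default_rng(0))
m_anti, se_anti = estimate(100_000, np.random.default_rng(0), antithetic=True)
# Both estimates target e - 1; the antithetic standard error is much smaller.
```

In stochastic homogenization the same pairing is applied to the random coefficient field entering each corrector problem, at essentially no extra cost per realization.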
Haas, H; Lange, A; Schlaak, M
1987-01-01
Using isoelectric focusing (IEF) with immunoblotting, we have analysed serum immunoglobulins of 15 lung cancer patients on cytotoxic chemotherapy. In five of the patients homogeneous immunoglobulins were found, which appeared between 9 and 18 months after the beginning of treatment and were monoclonal in two and oligoclonal in three cases. These abnormalities were only partially shown by zonal electrophoresis with immunofixation and not detected by immunoelectrophoresis. Examination of 10 normal and 10 myeloma sera by the three techniques in parallel confirmed the competence and sensitivity of IEF with immunoblotting in detecting homogeneous immunoglobulins. Thus, this method provides a valuable tool for investigating abnormal regulation of immunoglobulin synthesis. PMID:3325203
Performance of electrolyte measurements assessed by a trueness verification program.
Ge, Menglei; Zhao, Haijian; Yan, Ying; Zhang, Tianjiao; Zeng, Jie; Zhou, Weiyan; Wang, Yufei; Meng, Qinghui; Zhang, Chuanbao
2016-08-01
In this study, we analyzed frozen sera with known commutabilities for standardization of serum electrolyte measurements in China. Fresh frozen sera were sent to 187 clinical laboratories in China for measurement of four electrolytes (sodium, potassium, calcium, and magnesium). Target values were assigned by two reference laboratories. Precision (CV), trueness (bias), and accuracy [total error (TEa)] were used to evaluate measurement performance, and the tolerance limit derived from the biological variation was used as the evaluation criterion. About half of the laboratories used a homogeneous system (same manufacturer for instrument, reagent and calibrator) for calcium and magnesium measurement, and more than 80% of laboratories used a homogeneous system for sodium and potassium measurement. More laboratories met the tolerance limit of imprecision (coefficient of variation [CVa]) than the tolerance limits of trueness (biasa) and TEa. For sodium, calcium, and magnesium, the minimal performance criterion derived from biological variation was used, and the pass rates for total error were approximately equal to those for bias (<50%). For potassium, the pass rates for CV and TE were more than 90%. Compared with the non-homogeneous systems, the homogeneous systems were superior for all three quality specifications. The use of commutable proficiency testing/external quality assessment (PT/EQA) samples with values assigned by reference methods can monitor performance and provide reliable data for improving the performance of laboratory electrolyte measurement. The homogeneous systems were superior to the non-homogeneous systems, whereas the accuracy of assigned values of calibrators and assay stability remained challenges.
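Pass/fail decisions of the kind reported above are typically computed by combining bias and imprecision into a total error and comparing it against the allowable total error (TEa). A hedged sketch, where the z multiplier, the tolerance value, and the example numbers are illustrative assumptions rather than figures from the study:

```python
def total_error(bias_pct, cv_pct, z=1.96):
    """Total analytical error in percent as |bias| + z*CV; the multiplier z
    is an assumption here (some schemes use 1.65 instead of 1.96)."""
    return abs(bias_pct) + z * cv_pct

def passes(value_pct, tolerance_pct):
    """A laboratory meets a specification when its value is within tolerance."""
    return value_pct <= tolerance_pct

# Hypothetical potassium result: 1.0% bias, 1.5% CV, 5.8% TEa tolerance.
te = total_error(1.0, 1.5)   # 1.0 + 1.96 * 1.5 = 3.94
ok = passes(te, 5.8)         # within the allowable total error
```

The same comparison, repeated per laboratory and per analyte against biological-variation tolerance limits, yields the pass rates quoted in the abstract.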
Use of vertical temperature gradients for prediction of tidal flat sediment characteristics
Miselis, Jennifer L.; Holland, K. Todd; Reed, Allen H.; Abelev, Andrei
2012-01-01
Sediment characteristics largely govern tidal flat morphologic evolution; however, conventional methods of investigating spatial variability in lithology on tidal flats are difficult to employ in these highly dynamic regions. In response, a series of laboratory experiments was designed to investigate the use of temperature diffusion toward sediment characterization. A vertical thermistor array was used to quantify temperature gradients in simulated tidal flat sediments of varying compositions. Thermal conductivity estimates derived from these arrays were similar to measurements from a standard heated needle probe, which substantiates the thermistor methodology. While the thermal diffusivities of dry homogeneous sediments were similar, diffusivities for saturated homogeneous sediments ranged over approximately one order of magnitude. The thermal diffusivity of saturated sand was five times the thermal diffusivity of saturated kaolin and more than eight times the thermal diffusivity of saturated bentonite. This suggests that vertical temperature gradients can be used for distinguishing homogeneous saturated sands from homogeneous saturated clays and perhaps even between homogeneous saturated clay types. However, experiments with more realistic tidal flat mixtures were less discriminating. Relationships between thermal diffusivity and percent fines for saturated mixtures varied depending upon clay composition, indicating that clay hydration and/or water content controls thermal gradients. Furthermore, existing models for the bulk conductivity of sediment mixtures were improved only through the use of calibrated estimates of homogeneous end-member conductivity and water content values. Our findings suggest that remotely sensed observations of water content and thermal diffusivity could only be used to qualitatively estimate tidal flat sediment characteristics.
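The discrimination described above follows from how fast a surface temperature signal penetrates sediments of different thermal diffusivity. A toy 1-D explicit finite-difference sketch (grid, time step, and diffusivity values are illustrative assumptions; only the roughly 5x sand/kaolin ratio echoes the text):

```python
import numpy as np

def diffuse(alpha, t_surface=1.0, depth_cells=50, dz=0.01, dt=1.0, steps=3600):
    """Explicit 1-D heat diffusion with a fixed surface temperature.
    alpha: thermal diffusivity [m^2/s]; stability needs alpha*dt/dz**2 <= 0.5."""
    T = np.zeros(depth_cells)
    T[0] = t_surface
    r = alpha * dt / dz**2
    assert r <= 0.5, "explicit scheme unstable"
    for _ in range(steps):
        T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])  # interior update
        T[0] = t_surface                                # re-impose surface BC
    return T

# Order-of-magnitude diffusivities (illustrative, not the paper's values):
alpha_sand, alpha_clay = 1.0e-6, 2.0e-7   # saturated sand ~5x saturated kaolin
T_sand = diffuse(alpha_sand)
T_clay = diffuse(alpha_clay)
# After an hour, the warm front penetrates deeper in the sand-like medium.
```

At ten cells depth the signal is clearly stronger in the higher-diffusivity sediment, which is the contrast a vertical thermistor array exploits.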
Quantitative bioimaging of trace elements in the human lens by LA-ICP-MS.
Konz, Ioana; Fernández, Beatriz; Fernández, M Luisa; Pereiro, Rosario; González-Iglesias, Héctor; Coca-Prados, Miguel; Sanz-Medel, Alfredo
2014-04-01
Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) was used for the quantitative imaging of Fe, Cu and Zn in cryostat sections of human eye lenses and for depth profiling analysis in bovine lenses. To ensure a tight temperature control throughout the experiments, a new Peltier-cooled laser ablation cell was employed. For quantification purposes, matrix-matched laboratory standards were prepared from a pool of human lenses from eye donors and spiked with standard solutions containing different concentrations of natural abundance Fe, Cu and Zn. A normalisation strategy was also carried out to correct matrix effects, lack of tissue homogeneity and/or instrumental drifts using a thin gold film deposited on the sample surface. Quantitative images of cryo-sections of human eye lenses analysed by LA-ICP-MS revealed a homogeneous distribution of Fe, Cu and Zn in the nuclear region and a slight increase in Fe concentration in the outer cell layer (i.e. lens epithelium) at the anterior pole. These results were assessed also by isotope dilution mass spectrometry, and Fe, Cu and Zn concentrations determined by ID-ICP-MS in digested samples of lenses and lens capsules.
Cybulski, Olgierd; Jakiela, Slawomir; Garstecki, Piotr
2015-12-01
The simplest microfluidic network (a loop) comprises two parallel channels with a common inlet and a common outlet. Recent studies that assumed a constant cross section of the channels along their length have shown that the sequence of droplets entering the left (L) or right (R) arm of the loop can present either a uniform distribution of choices (e.g., RLRLRL...) or long sequences of repeated choices (RRR...LLL), with all the intermediate permutations being dynamically equivalent and virtually equally probable to be observed. We use experiments and computer simulations to show that even a small variation of the cross section along the channels completely shifts the dynamics either into a strong preference for highly grouped patterns (RRR...LLL) that generate system-size oscillations in flow, or just the opposite, to patterns that distribute the droplets homogeneously between the arms of the loop. We also show the importance of noise in the process of self-organization of the spatiotemporal patterns of droplets. Our results provide guidelines for rational design of systems that reproducibly produce either grouped or homogeneous sequences of droplets flowing in microfluidic networks.
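A toy sketch of the alternating (uniform-choice) regime for identical constant-cross-section arms: each droplet raises the hydrodynamic resistance of the arm it occupies, so an arriving droplet takes the currently less loaded arm. This caricature ignores droplet exit, noise, and the cross-section variation central to the paper; the function name and tie-breaking rule are assumptions:

```python
def route(n_droplets):
    """Send droplets one by one into a two-arm loop; each enters the arm
    currently holding fewer droplets (lower resistance), ties going left."""
    counts = [0, 0]          # droplets currently in the (left, right) arm
    choices = []
    for _ in range(n_droplets):
        arm = 0 if counts[0] <= counts[1] else 1
        counts[arm] += 1
        choices.append("LR"[arm])
    return "".join(choices)

sequence = route(8)   # identical arms give the uniform pattern LRLRLRLR
```

In the paper, small variations of the cross section along the channels, together with noise, tilt this balance and can push the system toward grouped RRR...LLL bursts instead.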
Atmospheric Gaseous Plasma with Large Dimensions
NASA Astrophysics Data System (ADS)
Korenev, Sergey
2012-10-01
Atmospheric plasma with large dimensions is typically formed by electrical discharge using the Dielectric Barrier Discharge (DBD). Studies of atmospheric DBD have revealed some problems related to homogeneous volume plasma. The volume of this plasma is determined by the cross section and the gas gap between the electrode and the dielectric. The use of CW relativistic electron beams for volume ionization of air molecules has shown the high efficiency of this process [1, 2]. The main advantage of this approach consists in the ionization of gas molecules by electrons in the longitudinal direction, determined by their kinetic energy. A novel method for forming atmospheric homogeneous plasma with large volume dimensions, using ionization of gas molecules by pulsed non-relativistic electron beams, is presented in the paper. The results of computer modeling of the doses delivered by electron beams in gases, and of the resulting ionization, are discussed. The structure of the experimental bench with plasma diagnostics is considered. Preliminary results on forming atmospheric plasma by ionization of gas molecules with a pulsed nanosecond non-relativistic electron beam are given. An analysis of potential applications for atmospheric volume plasma is presented. References: [1] S. Korenev. ``The ionization of air by scanning relativistic high power CW electron beam,'' 2002 IEEE International Conference on Plasma Science, May 2002, Alberta, Canada. [2] S. Korenev, I. Korenev. ``The propagation of high power CW scanning electron beam in air.'' BEAMS 2002: 14th International Conference on High-Power Particle Beams, Albuquerque, New Mexico (USA), June 2002, AIP Conference Proceedings Vol. 650(1), pp. 373-376.
Characterization of the silver coins of the Hoard of Beçin by X-ray based methods
NASA Astrophysics Data System (ADS)
Rodrigues, M.; Schreiner, M.; Melcher, M.; Guerra, M.; Salomon, J.; Radtke, M.; Alram, M.; Schindel, N.
2011-12-01
Four hundred sixteen silver coins from the Ottoman Empire (16th and 17th centuries) were analyzed in order to confirm the fineness of the coinage as well as to study the provenance of the alloy used for the coins. As most of the coins showed the typical green patina on their surfaces, due to corrosion processes which have led to the depletion of copper in the near-surface domains of the silver coins in comparison to their core composition, small samples had to be taken by cutting splinters from the coins, embedded in synthetic resin and cross-sectioned in order to investigate the true-heart metal composition. The type of the alloy was investigated, as well as whether coins minted in different locations demonstrated homogeneous traits concerning the predominant impurities, which could suggest a common ore. Several X-ray based techniques (μ-XRF, μ-SRXRF and μ-PIXE) were applied in order to determine the silver contents as well as the minor and trace elements. Finally, SEM/EDX was applied in order to study the homogeneity/heterogeneity of the coins and the presence of surface enrichments. In general, the silver content of the analyzed specimens varies between 90% and 95%. These outcomes have not supported the historical interpretations, which predict that during the period studied a debasement of approximately 44% of the silver content of the coins should have occurred.
The Fourier transforms for the spatially homogeneous Boltzmann equation and Landau equation
NASA Astrophysics Data System (ADS)
Meng, Fei; Liu, Fang
2018-03-01
In this paper, we study the Fourier transforms for two equations arising in the kinetic theory. The first equation is the spatially homogeneous Boltzmann equation. The Fourier transform of the spatially homogeneous Boltzmann equation has been first addressed by Bobylev (Sov Sci Rev C Math Phys 7:111-233, 1988) in the Maxwellian case. Alexandre et al. (Arch Ration Mech Anal 152(4):327-355, 2000) investigated the Fourier transform of the gain operator for the Boltzmann operator in the cut-off case. Recently, the Fourier transform of the Boltzmann equation is extended to hard or soft potential with cut-off by Kirsch and Rjasanow (J Stat Phys 129:483-492, 2007). We shall first establish the relation between the results in Alexandre et al. (2000) and Kirsch and Rjasanow (2007) for the Fourier transform of the Boltzmann operator in the cut-off case. Then we give the Fourier transform of the spatially homogeneous Boltzmann equation in the non cut-off case. It is shown that our results cover previous works (Bobylev 1988; Kirsch and Rjasanow 2007). The second equation is the spatially homogeneous Landau equation, which can be obtained as a limit of the Boltzmann equation when grazing collisions prevail. Following the method in Kirsch and Rjasanow (2007), we can also derive the Fourier transform for Landau equation.
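For orientation, the starting point of the Maxwellian-case analysis attributed to Bobylev is the closed form of the Fourier-transformed collision operator. The standard identity is quoted below from the general literature, not reconstructed from this paper:

```latex
% Bobylev's identity for Maxwellian molecules: the Fourier transform turns
% the collision operator into a bilinear expression in \hat{f} evaluated at
% the mixed frequencies \xi^{\pm}.
\widehat{Q(f,f)}(\xi)
  = \int_{S^{2}} b\!\left(\frac{\xi\cdot\sigma}{|\xi|}\right)
    \Bigl[\hat f(\xi^{+})\,\hat f(\xi^{-}) - \hat f(\xi)\,\hat f(0)\Bigr]
    \,\mathrm{d}\sigma,
  \qquad
  \xi^{\pm} = \tfrac{1}{2}\bigl(\xi \pm |\xi|\sigma\bigr).
```

The works surveyed in the abstract extend this structure to cut-off and non-cut-off kernels and, in the grazing-collision limit, to the Landau equation.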
Analysis of Leaf Openings and Leaf Width on Multileaf Collimators Using Gafchromic RTQA2 Film
NASA Astrophysics Data System (ADS)
Setiawati, Evi; Lailla Rachma, Assyifa; Hidayatullah, M.
2018-05-01
This research determined the leaf-opening correction for treatment and the dose distribution using Gafchromic RTQA2 film. The MLC correction was evaluated from the leaf movement and a uniform irradiated field. The method was to irradiate Gafchromic RTQA2 film according to the homogeneity-index planning philosophy for various leaf openings and leaf widths. The exposed films were then scanned, and the scanned images were imported into MATLAB, converted to grayscale, and analyzed for the dose-dependent darkening of the films. From this, a correlation between pixel value and dose was established, the homogeneity index was determined to verify that the film dosimetry was homogeneous, and the corrections for leaf opening and leaf width were obtained. The pixel-dose relation was linear, with y = (-0.6)x + 108 at low doses and y = (-0.28)x + 108 at high doses, and the homogeneity index ranged from 0.003 to 0.084. The dose-distribution correction at the leaf openings and leaf widths was around 5%, within the 10% tolerance suggested by ICRU Report No. 50.
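The reported linear pixel-to-dose calibration can be sketched in a few lines. This is a minimal illustration of the fitted equations quoted in the abstract; the homogeneity-index definition (dose range over mean dose) is an assumption, since the abstract does not state which definition was used:

```python
def pixel_to_dose(pixel, high_dose=False):
    """Convert a mean grayscale pixel value to dose [arbitrary units]
    using the linear fits reported in the abstract:
    y = -0.6x + 108 (low dose) and y = -0.28x + 108 (high dose)."""
    slope = -0.28 if high_dose else -0.6
    return slope * pixel + 108.0

def homogeneity_index(doses):
    """Assumed definition: (Dmax - Dmin) / Dmean over the field."""
    dmax, dmin = max(doses), min(doses)
    dmean = sum(doses) / len(doses)
    return (dmax - dmin) / dmean
```

With a calibration of this form, the darker (lower grayscale) a film region, the higher the inferred dose, and a near-zero homogeneity index indicates a uniform field.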
NASA Astrophysics Data System (ADS)
Gao, K.; van Dommelen, J. A. W.; Göransson, P.; Geers, M. G. D.
2015-09-01
In this paper, a homogenization method is proposed to obtain the parameters of Biot's poroelastic theory from a multiscale perspective. It is assumed that the behavior of a macroscopic material point can be captured through the response of a microscopic Representative Volume Element (RVE) consisting of both a solid skeleton and a gaseous fluid. The macroscopic governing equations are assumed to be Biot's poroelastic equations and the RVE is governed by the conservation of linear momentum and the adopted linear constitutive laws under the isothermal condition. With boundary conditions relying on the macroscopic solid displacement and fluid pressure, the homogenized solid stress and fluid displacement are obtained based on energy consistency. This homogenization framework offers an approach to obtain Biot's parameters directly through the response of the RVE in the regime of Darcy's flow where the pressure gradient is dominating. A numerical experiment is performed in the form of a sound absorption test on a porous material with an idealized partially open microstructure that is described by Biot's equations where the parameters are obtained through the proposed homogenization approach. The result is evaluated by comparison with Direct Numerical Simulations (DNS), showing a superior performance of this approach compared to an alternative semi-phenomenological model for estimating Biot's parameters of the studied porous material.
NASA Astrophysics Data System (ADS)
Sharifzadeh, M.; Hashemabadi, S. H.; Afarideh, H.; Khalafi, H.
2018-02-01
The problem of accurately measuring multiphase flow in the oil/gas industry has remained an important issue since the early 1980s; in particular, oil-water two-phase flow rate measurement is of major interest. Gamma-ray attenuation is one of the most commonly used methods for phase fraction measurement, but it is strongly dependent on flow regime variations. The strategy applied here to remove the regime-dependency problem is to use a homogenization system as a preconditioning tool, as this work demonstrates. First, TPFHL, a two-phase flow homogenizer loop, is introduced and verified by quantitative assessment. Following this, SEMPF, a static-equivalent multiphase flow system with an additional capability for preparing a uniform mixture, is described; the idea behind this system was verified by Monte Carlo simulations. Finally, different water-gas oil two-phase volume fractions were fed to the homogenizer loop and injected into the static-equivalent system. A comparison of the performance of the two systems using the gamma-ray attenuation technique showed not only an improved ability to prepare a homogenized mixture but also a remarkably increased measurement accuracy for the static-equivalent system.
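As background, gamma-ray attenuation infers phase fractions from the Beer-Lambert law: for a well-homogenized mixture the effective attenuation coefficient is the volume-fraction-weighted average of the phase coefficients. A minimal sketch, with attenuation coefficients and path length chosen purely for illustration (not values from the paper):

```python
import math

def volume_fraction(I, I0, mu_a, mu_b, path_len):
    """Invert I = I0 * exp(-(alpha*mu_a + (1-alpha)*mu_b) * d)
    for the volume fraction alpha of phase 'a' in a homogeneous
    mixture. mu_a, mu_b: linear attenuation coefficients [1/cm];
    path_len: beam path through the mixture [cm]."""
    mu_mix = -math.log(I / I0) / path_len  # effective coefficient
    return (mu_mix - mu_b) / (mu_a - mu_b)
```

This inversion is only valid when the mixture is uniform along the beam path, which is exactly why the homogenizer loop and static-equivalent system described above improve the measurement accuracy.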
NASA Astrophysics Data System (ADS)
Formetta, Giuseppe; Bell, Victoria; Stewart, Elizabeth
2018-02-01
Regional flood frequency analysis is one of the most commonly applied methods for estimating extreme flood events at ungauged sites or locations with short measurement records. It is based on: (i) the definition of a homogeneous group (pooling-group) of catchments, and on (ii) the use of the pooling-group data to estimate flood quantiles. Although many methods to define a pooling-group (pooling schemes, PS) are based on catchment physiographic similarity measures, in the last decade methods based on flood seasonality similarity have been contemplated. In this paper, two seasonality-based PS are proposed and tested both in terms of the homogeneity of the pooling-groups they generate and in terms of the accuracy in estimating extreme flood events. The method has been applied in 420 catchments in Great Britain (considered as both gauged and ungauged) and compared against the current Flood Estimation Handbook (FEH) PS. Results for gauged sites show that, compared to the current PS, the seasonality-based PS performs better both in terms of homogeneity of the pooling-group and in terms of the accuracy of flood quantile estimates. For ungauged locations, a national-scale hydrological model has been used for the first time to quantify flood seasonality. Results show that in 75% of the tested locations the seasonality-based PS provides an improvement in the accuracy of the flood quantile estimates. The remaining 25% were located in highly urbanized, groundwater-dependent catchments. The promising results support the aspiration that large-scale hydrological models complement traditional methods for estimating design floods.
Towards machine ecoregionalization of Earth's landmass using pattern segmentation method
NASA Astrophysics Data System (ADS)
Nowosad, Jakub; Stepinski, Tomasz F.
2018-07-01
We present and evaluate a quantitative method for delineation of ecophysiographic regions throughout the entire terrestrial landmass. The method uses a new pattern-based segmentation technique which attempts to emulate, in computer code, the qualitative, weight-of-evidence approach to delineating ecoregions. An ecophysiographic region is characterized by homogeneous physiography defined by the cohesiveness of patterns of four variables: land cover, soils, landforms, and climatic patterns. Homogeneous physiography is a necessary but not sufficient condition for a region to be an ecoregion, thus machine delineation of ecophysiographic regions is the first, important step toward global ecoregionalization. In this paper, we focus on the first-order approximation of the proposed method - delineation on the basis of the patterns of land cover alone. We justify this approximation by the existence of significant spatial associations between various physiographic variables. The resulting ecophysiographic regionalization (ECOR) is shown to be more physiographically homogeneous than existing global ecoregionalizations (Terrestrial Ecoregions of the World (TEW) and Bailey's Ecoregions of the Continents (BEC)). The presented quantitative method has the advantage of being transparent and objective. It can be verified, easily updated, modified and customized for specific applications. Each region in ECOR contains detailed, SQL-searchable information about the physiographic patterns within it. It also has a computer-generated label. To give a sense of how ECOR compares to TEW and, in the U.S., to EPA Level III ecoregions, we contrast these different delineations using two specific sites as examples. We conclude that ECOR yields a regionalization somewhat similar to EPA Level III ecoregions, but for the entire world, and by automatic means.
Friese, K C; Grobecker, K H; Wätjen, U
2001-07-01
A method has been developed for measurement of the homogeneity of analyte distribution in powdered materials by use of electrothermal vaporization with inductively coupled plasma mass spectrometric (ETV-ICP-MS) detection. The method enabled the simultaneous determination of As, Cd, Cu, Fe, Mn, Pb, and Zn in milligram amounts of samples of biological origin. The optimized conditions comprised a high plasma power of 1,500 W, reduced aerosol transport flow, and heating ramps below 300 °C/s. A temperature ramp to 550 °C ensured effective pyrolysis of approximately 70% of the organic compounds without losses of analyte. An additional hold stage at 700 °C led to separation of most of the analyte signals from the evaporation of carbonaceous matrix compounds. The effect of time resolution of signal acquisition on the precision of the ETV measurements was investigated. An increase in the number of masses monitored up to 20 is possible with not more than 1% additional relative standard deviation of results caused by limited temporal resolution of the transient signals. Recording of signals from the nebulization of aqueous standards in each sample run enabled correction for drift of the sensitivity of the ETV-ICP-MS instrument. The applicability of the developed method to homogeneity studies was assessed by use of four certified reference materials. According to the best repeatability observed in these sample runs, the maximum contribution of the method to the standard deviation is approximately 5% to 6% for all the elements investigated.
Eulerian formulation of the interacting particle representation model of homogeneous turbulence
Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca
2016-10-21
The Interacting Particle Representation Model (IPRM) of homogeneous turbulence incorporates information about the morphology of turbulent structures within the confines of a one-point model. In the original formulation [Kassinos & Reynolds, Center for Turbulence Research: Annual Research Briefs, 31-51 (1996)], the IPRM was developed in a Lagrangian setting by evolving second moments of velocity conditional on a given gradient vector. In the present work, the IPRM is reformulated in an Eulerian framework and evolution equations are developed for the marginal PDFs. Eulerian methods avoid the issues associated with statistical estimators used by Lagrangian approaches, such as slow convergence. A specific emphasis of this work is to use the IPRM to examine the long-time evolution of homogeneous turbulence. We first describe the derivation of the marginal PDF in spherical coordinates, which reduces the number of independent variables and the cost associated with Eulerian simulations of PDF models. Next, a numerical method based on radial basis functions over a spherical domain is adapted to the IPRM. Finally, results obtained with the new Eulerian solution method are thoroughly analyzed. The sensitivity of the Eulerian simulations to parameters of the numerical scheme, such as the size of the time step and the shape parameter of the radial basis functions, is examined. A comparison between Eulerian and Lagrangian simulations is performed to discern the capabilities of each method. Finally, a linear stability analysis based on the eigenvalues of the discrete differential operators is carried out for both the new Eulerian solution method and the original Lagrangian approach.
A Review of Web Information Seeking Research: Considerations of Method and Foci of Interest
ERIC Educational Resources Information Center
Martzoukou, Konstantina
2005-01-01
Introduction: This review shows that Web information seeking research suffers from inconsistencies in method and a lack of homogeneity in research foci. Background: Qualitative and quantitative methods are needed to produce a comprehensive view of information seeking. Studies also recommend observation as one of the most fundamental ways of…
Hasan, Mojeer; Azhar, Mohd; Nangia, Hina; Bhatt, Prakash Chandra; Panda, Bibhu Prasad
2016-01-01
In this study, astaxanthin production by Phaffia rhodozyma was enhanced by chemical mutation using ethyl methane sulfonate. The mutant produced a higher amount of astaxanthin than the wild-type yeast strain. In comparison to the supercritical fluid technique, high-pressure homogenization is better for extracting astaxanthin from yeast cells. Ultrasonication of dimethyl sulfoxide-, hexane-, and acetone-treated cells yielded less astaxanthin than β-glucanase enzyme-treated cells. The combination of ultrasonication with β-glucanase enzyme was found to be the most efficient of all the tested physical and chemical extraction methods, giving a maximum yield of 435.71 ± 6.55 µg free astaxanthin per gram of yeast cell mass.
Large eddy simulation of hydrodynamic cavitation
NASA Astrophysics Data System (ADS)
Bhatt, Mrugank; Mahesh, Krishnan
2017-11-01
Large eddy simulation is used to study sheet-to-cloud cavitation over a wedge. The mixture of water and water vapor is represented using a homogeneous mixture model. Compressible Navier-Stokes equations for the mixture quantities, along with a transport equation for the vapor mass fraction employing finite-rate mass transfer between the two phases, are solved using the numerical method of Gnanaskandan and Mahesh. The method is implemented on unstructured grids with parallel MPI capabilities. Flow over a wedge is simulated at Re = 200,000 and the performance of the homogeneous mixture model is analyzed in predicting the different regimes of sheet-to-cloud cavitation, namely incipient, transitory and periodic, as observed in the experimental investigation of Harish et al. This work is supported by the Office of Naval Research.
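In a homogeneous mixture model of this kind, the mixture properties follow directly from the vapor mass fraction. A minimal sketch of the generic mixture relations (not necessarily the exact closure of Gnanaskandan and Mahesh):

```python
def mixture_density(Y_v, rho_v, rho_l):
    """Mixture density from vapor mass fraction Y_v via the
    mass-weighted specific volume: 1/rho_m = Y_v/rho_v + (1-Y_v)/rho_l."""
    return 1.0 / (Y_v / rho_v + (1.0 - Y_v) / rho_l)

def vapor_volume_fraction(Y_v, rho_v, rho_l):
    """Void fraction alpha = Y_v * rho_m / rho_v; this is the quantity
    that visualizes the sheet and cloud cavities."""
    return Y_v * mixture_density(Y_v, rho_v, rho_l) / rho_v
```

Because the liquid/vapor density ratio is large, even a tiny vapor mass fraction produces a substantial void fraction, which is why cavity regions appear and collapse so abruptly in such simulations.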
ISOTOPE CONVERSION DEVICE AND METHOD
Wigner, E.P.; Ohlinger, L.A.
1958-11-11
Homogeneous nuclear reactors are discussed, and an apparatus and method of operation are described. The apparatus consists essentially of a reaction tank, a heat exchanger connected to the reaction tank, and two separate surge tanks connected to the heat exchanger. An oscillating differential pressure is applied to the surge tanks so that a portion of the homogeneous fissionable solution is circulated through the heat exchanger and reaction tank while maintaining sufficient solution in the reaction tank to sustain a controlled fission chain reaction. The reaction tank is disposed within another tank containing a neutron-absorbing material through which coolant fluid is circulated, the outer tank being provided with means to permit and cause rotation thereof due to the circulation of the coolant therethrough.
Zhang, Ruilin; Chen, Jian; Zhang, Xuewu
2018-01-01
Due to the rigid cell wall of Chlorella species, it remains challenging to extract significant amounts of protein effectively. Many methods, based on biological, mechanical and chemical approaches, have been used for the extraction of intracellular protein from microalgae. In this study, based on a comparison of different extraction methods, a new protocol involving ethanol soaking, enzyme digestion, ultrasonication and homogenization was established to maximize the amount of protein extracted. Under the optimized conditions, 72.4% of the protein was extracted from the microalga Chlorella pyrenoidosa, which should contribute to the research and development of Chlorella protein in functional food and medicine. Copyright © 2017 Elsevier Ltd. All rights reserved.
Volcanic hazard assessment for the Canary Islands (Spain) using extreme value theory
NASA Astrophysics Data System (ADS)
Sobradelo, R.; Martí, J.; Mendoza-Rosas, A. T.; Gómez, G.
2011-10-01
The Canary Islands are an active volcanic region, densely populated and visited by several million tourists every year. Nearly twenty eruptions have been reported through written chronicles in the last 600 yr, suggesting that the probability of a new eruption in the near future is far from zero. This shows the importance of assessing and monitoring the volcanic hazard of the region in order to reduce and manage its potential volcanic risk, and ultimately contribute to the design of appropriate preparedness plans. Hence, the probabilistic analysis of the volcanic eruption time series for the Canary Islands is an essential step for the assessment of volcanic hazard and risk in the area. Such a series describes complex processes involving different types of eruptions over different time scales. Here we propose a statistical method for calculating the probabilities of future eruptions which is most appropriate given the nature of the documented historical eruptive data. We first characterize the eruptions by their magnitudes, and then carry out a preliminary analysis of the data to establish the requirements for the statistical method. Past studies of eruptive time series used conventional statistics and treated the series as a homogeneous process. In this paper, we use a method that accounts for the time-dependence of the series and includes rare or extreme events, in the form of a few data points from large eruptions, since such data require special methods of analysis. Hence, we use a statistical method from extreme value theory. In particular, we apply a non-homogeneous Poisson process to the historical eruptive data of the Canary Islands to estimate the probability of having at least one volcanic event of a magnitude greater than one in the upcoming years. This is done in three steps: First, we analyze the historical eruptive series to assess independence and homogeneity of the process.
Second, we perform a Weibull analysis of the distribution of repose time between successive eruptions. Third, we analyze the non-homogeneous Poisson process with a generalized Pareto distribution as the intensity function.
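The final step rests on the Poisson property that the probability of at least one event in [0, t] is 1 - exp(-Λ(t)), where Λ(t) is the integrated intensity of the non-homogeneous process. A minimal sketch with a decaying power-law intensity chosen purely for illustration (not the generalized Pareto intensity fitted in the paper):

```python
import math

def prob_at_least_one(intensity, t, n=10000):
    """P(N(t) >= 1) = 1 - exp(-Lambda(t)) for a non-homogeneous
    Poisson process, with Lambda(t) = integral of intensity(s) over
    [0, t] computed by the trapezoidal rule."""
    h = t / n
    big_lambda = sum(
        0.5 * (intensity(i * h) + intensity((i + 1) * h)) * h
        for i in range(n)
    )
    return 1.0 - math.exp(-big_lambda)

# Illustrative intensity only -- not the fitted Canary Islands model.
lam = lambda s: 0.05 * (1.0 + s) ** -0.5
```

For a constant intensity this reduces to the familiar homogeneous result 1 - exp(-λt), which is a convenient sanity check on the integration.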
Hill, Steven C.; Williamson, Chatt C.; Doughty, David C.; ...
2015-02-02
This paper uses a mathematical model of fluorescent biological particles composed of bacteria and/or proteins (mostly as in Hill et al., 2013 [23]) to investigate the size dependence of the total fluorescence emitted in all directions. The model applies to particles which have negligible reabsorption of fluorescence within the particle. The specific particles modeled here are composed of ovalbumin and of a generic Bacillus. The particles need not be spherical, and in some cases need not be homogeneous; however, the results calculated in this paper are for spherical homogeneous particles. Light-absorbing and fluorescing molecules included in the model are amino acids, nucleic acids, and several coenzymes. Here the excitation wavelength is 266 nm. The emission range, 300 to 370 nm, encompasses the fluorescence of tryptophan. The fluorescence cross section (C_F) is calculated and compared with one set of published measured values. We investigate power-law (A d^y) approximations to C_F, where d is diameter, and A and y are parameters adjusted to fit the data, and examine how y varies with d and composition, including the fraction of water. The particle's fluorescence efficiency (Q_F = C_F / geometric cross section) can be written for homogeneous particles as Q_abs R_F, where Q_abs is the absorption efficiency and R_F, the fraction of the absorbed light emitted as fluorescence, is independent of size and shape. When Q_F is plotted vs. m_i d or m_i (m_r - 1) d, where m = m_r + i m_i is the complex refractive index, the plots for different fractions of water in the particle tend to overlap.
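A power-law approximation of the form C_F ≈ A d^y can be fitted by ordinary least squares in log space, since log C_F = log A + y log d. A minimal sketch on synthetic data; the particular A and y values below are illustrative, not results from the paper:

```python
import math

def fit_power_law(d, c):
    """Fit c = A * d**y by least squares on log c = log A + y log d.
    Returns (A, y)."""
    n = len(d)
    lx = [math.log(v) for v in d]
    ly = [math.log(v) for v in c]
    mx, my = sum(lx) / n, sum(ly) / n
    y = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    A = math.exp(my - y * mx)
    return A, y
```

Fitting over different diameter sub-ranges then shows how the effective exponent y drifts with size and composition, as investigated in the paper.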
Controlling depinning and propagation of single domain-walls in magnetic microwires
NASA Astrophysics Data System (ADS)
Jiménez, Alejandro; del Real, Rafael P.; Vázquez, Manuel
2013-03-01
The magnetization reversal in magnetostrictive amorphous microwires takes place by depinning and propagation of a single domain wall. This is a consequence of the particular domain structure determined by the strong uniaxial anisotropy arising from the reinforcement of magnetoelastic and shape contributions. In the present study, after an overview of the current state of the art on the topic, we introduce the general behaviour of single walls in 30 to 40 cm long Fe-based microwires propagating under a homogeneous field. Depending on the way the walls are generated, we distinguish three different walls: the standard wall, DWst, depinned and propagating from the wire's end under a homogeneous field, whose motion is the first to switch on; the reverse wall, DWrev, propagating from the opposite end under a non-homogeneous field; and the defect wall, DWdef, nucleated around a local defect. Both DWrev and DWdef are observed only under a sufficiently large applied field. In the subsequent section, we study the propagation of a wall under an applied field smaller than the switching field. There, we conclude that a minimum field, Hdep,0, is needed to depin the DWst, and that a minimum field, Hprop,0, is required for the wall to propagate long distances. In the last section, we analyse the shape of the signals induced in the pickup coils upon the crossing of the walls and their correlation to the domain-wall shape. We conclude that the length and shape of the wall are significantly distorted by the fact that the wall is typically as long as the measuring coils. Contribution to the Topical Issue "New Trends in Magnetism and Magnetic Materials", edited by Francesca Casoli, Massimo Solzi and Paola Tiberto.
Marcelino, Isabel; Lefrançois, Thierry; Martinez, Dominique; Giraud-Girard, Ken; Aprelon, Rosalie; Mandonnet, Nathalie; Gaucheron, Jérôme; Bertrand, François; Vachiéry, Nathalie
2015-01-29
The use of cheap and thermoresistant vaccines in poor tropical countries for the control of animal diseases is a key issue. Our work aimed at designing and validating a process for the large-scale production of a ready-to-use inactivated vaccine for ruminants. Our model was heartwater caused by the obligate intracellular bacterium Ehrlichia ruminantium (ER). The conventional inactivated vaccine against heartwater (based on whole bacteria inactivated with sodium azide) is prepared immediately before injection, using a syringe-extrusion method with Montanide ISA50. This is a fastidious time-consuming process and it limits the number of vaccine doses available. To overcome these issues, we tested three different techniques (syringe, vortex and homogenizer) and three Montanide ISA adjuvants (50, 70 and 70M). High-speed homogenizer was the optimal method to emulsify ER antigens with both ISA70 and 70M adjuvants. The emulsions displayed a good homogeneity (particle size below 1 μm and low phase separation), conductivity below 10 μS/cm and low antigen degradation at 4 °C for up to 1 year. The efficacy of the different formulations was then evaluated during vaccination trials on goats. The inactivated ER antigens emulsified with ISA70 and ISA70M in a homogenizer resulted in 80% and 100% survival rates, respectively. A cold-chain rupture assay using ISA70M+ER was performed to mimic possible field conditions exposing the vaccine at 37 °C for 4 days before delivery. Surprisingly, the animal survival rate was still high (80%). We also observed that the MAP-1B antibody response was very similar between animals vaccinated with ISA70+ER and ISA70M+ER emulsions, suggesting a more homogenous antigen distribution and presentation in these emulsions. Our work demonstrated that the combination of ISA70 or ISA70M and homogenizer is optimal for the production of an effective ready-to-use inactivated vaccine against heartwater, which could easily be produced on an industrial scale. 
Copyright © 2014 Elsevier Ltd. All rights reserved.
A Study on Regional Frequency Analysis using Artificial Neural Network - the Sumjin River Basin
NASA Astrophysics Data System (ADS)
Jeong, C.; Ahn, J.; Ahn, H.; Heo, J. H.
2017-12-01
Regional frequency analysis compensates for a shortcoming of at-site frequency analysis, namely the lack of sample size, through the regional concept. The regional rainfall quantile depends on the identification of hydrologically homogeneous regions, so regional classification based on the assumption of hydrological homogeneity is very important. For regional clustering of rainfall, multidimensional variables and factors related to geographical features and meteorological conditions are considered, such as mean annual precipitation, number of days with precipitation in a year, and average maximum daily precipitation in a month. The Self-Organizing Feature Map (SOM), an unsupervised artificial neural network algorithm, solves N-dimensional, nonlinear problems and presents the results simply as a data visualization. In this study, for the Sumjin river basin in South Korea, cluster analysis was performed based on the SOM method using high-dimensional geographical features and meteorological factors as input data. Then, to evaluate the homogeneity of the resulting regions, the L-moment-based discordancy and heterogeneity measures were used. Rainfall quantiles were estimated with the index flood method, a standard approach in regional rainfall frequency analysis. Clustering analysis using the SOM method and the consequent variation in rainfall quantiles were analyzed. This research was supported by a grant (2017-MPSS31-001) from the Supporting Technology Development Program for Disaster Management funded by the Ministry of Public Safety and Security (MPSS) of the Korean government.
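The L-moment-based screening mentioned above typically uses the Hosking-Wallis discordancy measure D_i = (N/3)(u_i - ū)^T A^{-1}(u_i - ū), where u_i is the vector of sample L-moment ratios (L-CV, L-skewness, L-kurtosis) at site i and A is the matrix of sums of squares and cross-products of the deviations. A minimal numpy sketch:

```python
import numpy as np

def discordancy(u):
    """Hosking-Wallis discordancy for an (N, 3) array of site
    L-moment ratios (t, t3, t4). Returns D_i for each site; sites
    with large D_i are flagged as discordant with the pooling group."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    dev = u - u.mean(axis=0)         # deviations from group mean
    A = dev.T @ dev                  # sums of squares and cross-products
    A_inv = np.linalg.inv(A)
    return np.array([n / 3.0 * d @ A_inv @ d for d in dev])
```

A useful check is that the D_i always average to exactly 1 over the group, so a site is typically flagged when its D_i exceeds a critical value around 3 (for large groups).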
Saboti, Denis; Maver, Uroš; Chan, Hak-Kim; Planinšek, Odon
2017-07-01
Budesonide (BDS) is a potent active pharmaceutical ingredient, often administered using respiratory devices such as metered dose inhalers, nebulizers, and dry powder inhalers. Inhalable drug particles are conventionally produced by crystallization followed by milling. This approach tends to generate partially amorphous materials that require post-processing to improve the formulations' stability. Other methods involve homogenization or precipitation and often require the use of stabilizers, mostly surfactants. The purpose of this study was therefore to develop a novel method for preparation of fine BDS particles using a microfluidic reactor coupled with ultrasonic spray freeze drying, and hence avoiding the need of additional homogenization or stabilizer use. A T-junction microfluidic reactor was employed to produce particle suspension (using an ethanol-water, methanol-water, and an acetone-water system), which was directly fed into an ultrasonic atomization probe, followed by direct feeding to liquid nitrogen. Freeze drying was the final preparation step. The result was fine crystalline BDS powders which, when blended with lactose and dispersed in an Aerolizer at 100 L/min, generated fine particle fraction in the range 47.6% ± 2.8% to 54.9% ± 1.8%, thus exhibiting a good aerosol performance. Subsequent sample analysis confirmed the suitability of the developed method to produce inhalable drug particles without additional homogenization or stabilizers. The developed method provides a viable solution for particle isolation in microfluidics in general. Copyright © 2017 American Pharmacists Association®. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Rong; Zhukhovitskiy, Aleksandr V.; Deraedt, Christophe V.
Recyclable catalysts, especially those that display selective reactivity, are vital for the development of sustainable chemical processes. Among available catalyst platforms, heterogeneous catalysts are particularly well-disposed toward separation from the reaction mixture via filtration methods, which renders them readily recyclable. Furthermore, heterogeneous catalysts offer numerous handles—some without homogeneous analogues—for performance and selectivity optimization. These handles include nanoparticle size, pore profile of porous supports, surface ligands and interface with oxide supports, and flow rate through a solid catalyst bed. Despite these available handles, however, conventional heterogeneous catalysts are themselves often structurally heterogeneous compared to homogeneous catalysts, which complicates efforts to optimize and expand the scope of their reactivity and selectivity. Ongoing efforts in our laboratories aim to address this challenge by heterogenizing homogeneous catalysts, which can be defined as the modification of homogeneous catalysts to render them in a separable (solid) phase from the starting materials and products. Specifically, we grow small nanoclusters in dendrimers, a class of uniform polymers with the connectivity of fractal trees and generally radial symmetry. Thanks to their dense multivalency, shape persistence, and structural uniformity, dendrimers have proven to be versatile scaffolds for the synthesis and stabilization of small nanoclusters. These dendrimer-encapsulated metal clusters (DEMCs) are then adsorbed onto mesoporous silica. Through this method, we have achieved selective transformations that had been challenging to accomplish in a heterogeneous setting, e.g., π-bond activation and aldol reactions. Extensive investigation into the catalytic systems under reaction conditions allowed us to correlate the structural features (e.g., oxidation states) of the catalysts and their activity.
Moreover, we have demonstrated that supported DEMCs are also excellent catalysts for typical heterogeneous reactions, including hydrogenation and alkane isomerization. Critically, these investigations also confirmed that the supported DEMCs are heterogeneous and stable against leaching. Catalyst optimization is achieved through the modulation of various parameters. The clusters are oxidized (e.g., with PhICl2) or reduced (e.g., with H2) in situ. Changing the dendrimer properties (e.g., generation, terminal functional groups) is analogous to ligand modification in homogeneous catalysts, and affects both catalytic activity and selectivity. Similarly, the pore size of the support is another factor in determining product distribution. In a flow reactor, the flow rate is adjusted to control the residence time of the starting material and intermediates, and thus the final product selectivity. Our approach to heterogeneous catalysis affords various advantages: (1) the catalyst system can tap into reactivity typical of homogeneous catalysts, which conventional heterogeneous catalysts could not achieve; (2) unlike most homogeneous catalysts with comparable performance, the heterogenized homogeneous catalysts can be recycled; (3) improved activity or selectivity compared to conventional homogeneous catalysts is possible because of the uniquely heterogeneous parameters available for optimization. In this Account, we briefly introduce metal clusters and describe the synthesis and characterization of supported DEMCs. We present catalysis studies of supported DEMCs in both batch and flow modes. Lastly, we summarize the current state of heterogenizing homogeneous catalysis and provide future directions for this area of research.
Results of the Simulation of the HTR-Proteus Core 4.2 Using PEBBED-COMBINE: FY10 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hans Gougar
2010-07-01
ABSTRACT The Idaho National Laboratory's deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. This report is a follow-on to INL/EXT-09-16620, in which the same calculation was performed but using earlier versions of the codes and less developed methods. In that report, results indicated that the cross sections generated using COMBINE-7.0 did not yield satisfactory estimates of keff, and it was concluded that the modeling of control rods was not satisfactory. In the past year, improvements to the homogenization capability in COMBINE have enabled the explicit modeling of TRISO particles, pebbles, and heterogeneous core zones including control rod regions, using a new multi-scale version of COMBINE into which the 1-dimensional discrete ordinates transport code ANISN has been integrated. The new COMBINE is shown to yield benchmark-quality results for pebble unit cell models, the first step in preparing few-group diffusion parameters for core simulations. In this report, the full critical core is modeled once again, but with cross sections generated using the capabilities and physics of the improved COMBINE code. The new PEBBED-COMBINE model enables the exact modeling of the pebbles and control rod region along with a better approximation to structures in the reflector. Initial results for the core multiplication factor indicate significant improvement in the INL's tools for modeling the neutronic properties of a pebble bed reactor. Errors on the order of 1.6-2.5% in keff are obtained, a significant improvement over the 5-6% error observed in the earlier report. This is acceptable for a code system and model in the early stages of development, but still too high for a production code. Analysis of a simpler core model indicates an over-prediction of the flux in the low end of the thermal spectrum. Causes of this discrepancy are under investigation.
New homogenization techniques and assumptions were used in this analysis and as such, they require further confirmation and validation. Further refinement and review of the complex Proteus core model are likely to reduce the errors even further.
Transceiver-Phased Arrays for Human Brain Studies at 7 T
2013-01-01
The paper describes technological advances in high-field (7 T) transceiver-phased arrays developed for magnetic resonance imaging of the human brain. The first part of this work describes an 8-element inductively decoupled split elliptical transceiver-phased array with selectable geometry, which provides an easy and efficient way of compensating for changes in mutual inductive coupling associated with differences in loading due to variability in head shape and size. The second part of the work describes a double-row 16-element (2 × 8) transceiver array that extends the homogeneous transmit B1 profile in the longitudinal direction. Multiplexing eight transmit channels between the two rows of the array provides homogeneous excitation over the entire volume. The final section describes the design and construction of a double-tuned 31P/1H 16-element (8 at each frequency) array. The array improves transmission efficiency and B1 homogeneity at the 1H frequency in comparison with a 31P/1H quadrature transverse electromagnetic volume coil. For 31P studies, the array also improves transmission efficiency (38%) and signal-to-noise ratio (SNR) for central brain locations (20%), and provides substantially greater SNR (up to 400%) for peripheral locations. PMID:23516332
Liba, Amir; Wanagat, Jonathan
2014-11-01
Complex diseases such as heart disease, stroke, cancer, and aging are the primary causes of death in the US. These diseases cause heterogeneous conditions among cells, conditions that cannot be measured in tissue homogenates and require single cell approaches. Understanding protein levels within tissues is currently assayed using various molecular biology techniques (e.g., Western blots) that rely on milligram to gram quantities of tissue homogenates or immunofluorescent (IF) techniques that are limited by spectral overlap. Tissue homogenate studies lack references to tissue structure and mask signals from individual or rare cellular events. Novel techniques are required to bring protein measurement sensitivity to the single cell level and offer spatiotemporal resolution and scalability. We are developing a novel approach to protein quantification by exploiting the inherently low concentration of rare earth elements (REE) in biological systems. By coupling REE-antibody immunolabeling of cells with laser capture microdissection (LCM) and ICP-QQQ, we are achieving multiplexed protein measurement in histological sections of single cells. This approach will add to evolving single cell techniques and our ability to understand cellular heterogeneity in complex biological systems and diseases.
Reconstituted asbestos matrix for fuel cells
NASA Technical Reports Server (NTRS)
Mcbryar, H.
1975-01-01
Method is described for reprocessing commercially available asbestos matrix stock to yield greater porosity and bubble pressure (due to increased surface tension), improved homogeneity, and greater uniformity.
Understanding homogeneous nucleation in solidification of aluminum by molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Mahata, Avik; Asle Zaeem, Mohsen; Baskes, Michael I.
2018-02-01
Homogeneous nucleation from aluminum (Al) melt was investigated by million-atom molecular dynamics simulations utilizing second nearest neighbor modified embedded atom method potentials. Natural spontaneous homogeneous nucleation from the Al melt was produced without any influence of pressure, free-surface effects, or impurities. Initially, isothermal crystal nucleation from undercooled melt was studied at different constant temperatures; later, superheated Al melt was quenched at different cooling rates. The crystal structure of nuclei, critical nucleus size, critical temperature for homogeneous nucleation, induction time, and nucleation rate were determined. The quenching simulations clearly revealed three temperature regimes: sub-critical nucleation, super-critical nucleation, and solid-state grain growth. The main crystalline phase was identified as face-centered cubic, but a hexagonal close-packed (hcp) phase and an amorphous solid phase were also detected. The hcp phase was created by the formation of stacking faults during solidification of the Al melt. Slowing the cooling rate decreased the volume fraction of the hcp and amorphous phases. After the box was completely solid, grain growth was simulated and the grain growth exponent was determined for different annealing temperatures.
Sebe, István; Bodai, Zsolt; Eke, Zsuzsanna; Kállai-Szabó, Barnabás; Szabó, Péter; Zelkó, Romána
2015-01-01
Fiber-based dosage forms are potential alternatives to conventional dosage forms in terms of the improved extent and rate of drug dissolution. Rotary-spun polymer fibers and cast films were prepared and micronized for direct compression after homogenization with tabletting excipients. The particle size distributions of powder mixtures of micronized fibers and films homogenized with tabletting excipients were determined by a laser scattering particle size distribution analyzer. The powder rheological behavior of the mixtures containing micronized fibers and cast films was also compared. Positron annihilation lifetime spectroscopy was applied for the microstructural characterization of the micronized fibers and films. The release of water-soluble vitamin B12 from the compressed tablets was determined. It was confirmed that the rotary spinning method resulted in a homogeneous, supramolecularly ordered powder mixture, which was successfully compressed after homogenization with conventional tabletting excipients. The obtained directly compressed tablets showed uniform drug release with low variation. The results highlight the novel application of micronized rotary-spun fibers as an intermediate for further processing that preserves the original favorable powder characteristics of fibrous systems.
Homogeneity tests of clustered diagnostic markers with applications to the BioCycle Study
Tang, Liansheng Larry; Liu, Aiyi; Schisterman, Enrique F.; Zhou, Xiao-Hua; Liu, Catherine Chun-ling
2014-01-01
Diagnostic trials often require the use of a homogeneity test among several markers. Such a test may be necessary to determine the power both during the design phase and in the initial analysis stage. However, no formal method is available for the power and sample size calculation when the number of markers is greater than two and marker measurements are clustered in subjects. This article presents two procedures for testing the accuracy among clustered diagnostic markers. The first procedure is a test of homogeneity among continuous markers based on a global null hypothesis of the same accuracy. The result under the alternative provides the explicit distribution for the power and sample size calculation. The second procedure is a simultaneous pairwise comparison test based on weighted areas under the receiver operating characteristic curves. This test is particularly useful if a global difference among markers is found by the homogeneity test. We apply our procedures to the BioCycle Study designed to assess and compare the accuracy of hormone and oxidative stress markers in distinguishing women with ovulatory menstrual cycles from those without. PMID:22733707
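The global homogeneity test described in this abstract can be illustrated with a simple sketch: a Wald-type chi-square statistic on contrasts of empirical AUCs, with a subject-level bootstrap covariance to respect the clustering of markers within subjects. This is not the paper's exact procedure; the sample sizes, marker model, and bootstrap size below are all hypothetical.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

def auc(x_dis, x_non):
    # Empirical AUC: P(diseased score > non-diseased score), ties count 1/2.
    d = x_dis[:, None] - x_non[None, :]
    return (d > 0).mean() + 0.5 * (d == 0).mean()

# Hypothetical data: p markers measured on the same subjects (clustered).
n1, n0, p = 60, 60, 3
dis = rng.normal(0.8, 1.0, (n1, p))   # diseased group
non = rng.normal(0.0, 1.0, (n0, p))   # non-diseased group

aucs = np.array([auc(dis[:, j], non[:, j]) for j in range(p)])

# Bootstrap covariance of the AUC vector: resample subjects, keep markers paired.
B = 500
boot = np.empty((B, p))
for b in range(B):
    i1 = rng.integers(0, n1, n1)
    i0 = rng.integers(0, n0, n0)
    boot[b] = [auc(dis[i1, j], non[i0, j]) for j in range(p)]
cov = np.cov(boot, rowvar=False)

# Global null H0: all AUCs equal, tested via successive-difference contrasts.
C = np.diff(np.eye(p), axis=0)                      # (p-1) x p contrast matrix
d = C @ aucs
stat = d @ np.linalg.solve(C @ cov @ C.T, d)        # Wald chi-square statistic
pval = chi2.sf(stat, df=p - 1)
```

If the global test rejects, pairwise comparisons (the paper's second procedure) can then identify which markers differ.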
Parallel Excitation for B-Field Insensitive Fat-Saturation Preparation
Heilman, Jeremiah A.; Derakhshan, Jamal D.; Riffe, Matthew J.; Gudino, Natalia; Tkach, Jean; Flask, Chris A.; Duerk, Jeffrey L.; Griswold, Mark A.
2016-01-01
Multichannel transmission has the potential to improve many aspects of MRI through a new paradigm in excitation. In this study, multichannel transmission is used to address the effects that variations in B0 homogeneity have on fat-saturation preparation through the use of the frequency, phase, and amplitude degrees of freedom afforded by independent transmission channels. B1 homogeneity is intrinsically included via use of coil sensitivities in calculations. A new method, parallel excitation for B-field insensitive fat-saturation preparation, can achieve fat saturation in 89% of voxels with Mz ≤ 0.1 in the presence of ±4 ppm B0 variation, where traditional CHESS methods achieve only 40% in the same conditions. While there has been much progress to apply multichannel transmission at high field strengths, particular focus is given here to application of these methods at 1.5 T. PMID:22247080
NASA Astrophysics Data System (ADS)
Gou, Ming-Jiang; Yang, Ming-Lin; Sheng, Xin-Qing
2016-10-01
Mature red blood cells (RBCs) contain no nuclei or complex organelles, which allows them to be approximately regarded as homogeneous medium particles. To compute the radiation pressure force (RPF) exerted by multiple laser beams on such arbitrarily shaped homogeneous nano-particles, a fast electromagnetic-optics method is demonstrated. In general, the matrix equation formed by the method of moments (MoM), based on Maxwell's equations, has many right-hand sides (RHSs) corresponding to the different laser beams. To accelerate the solution of the matrix equation, the algorithm performs a low-rank decomposition of the excitation matrix consisting of all RHSs, identifying so-called skeleton laser beams via interpolative decomposition (ID). Once the solutions corresponding to the skeletons are obtained, the desired responses can be reconstructed efficiently. Numerical results are presented to validate the developed method.
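The skeletonization idea can be sketched in a few lines: pick a small set of "skeleton" columns of the excitation matrix via an interpolative decomposition (here built from a pivoted QR), solve only for those, and reconstruct all responses. The matrix sizes, the stand-in system matrix, and the rank threshold below are hypothetical, not the paper's actual MoM operator.

```python
import numpy as np
from scipy.linalg import qr, solve

rng = np.random.default_rng(0)
n, m, true_rank = 200, 50, 5
# Well-conditioned stand-in for the MoM system matrix (hypothetical).
A = rng.standard_normal((n, n)) + n * np.eye(n)
# Excitation matrix: m right-hand sides that are numerically low-rank.
B = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, m))

# Interpolative decomposition via column-pivoted QR: choose k skeleton columns.
Q, R, piv = qr(B, pivoting=True, mode='economic')
k = int(np.sum(np.abs(np.diag(R)) > 1e-10 * abs(R[0, 0])))   # numerical rank
skel = piv[:k]
# Interpolation matrix P with B ~= B[:, skel] @ P.
P = np.linalg.lstsq(B[:, skel], B, rcond=None)[0]

# Solve only for the k skeleton excitations, then reconstruct all m responses.
X_skel = solve(A, B[:, skel])
X = X_skel @ P
```

The payoff is that the expensive solves scale with the numerical rank k rather than with the number of laser beams m.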
Temperature Profile in Fuel and Tie-Tubes for Nuclear Thermal Propulsion Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishal Patel
A finite element method to calculate temperature profiles in heterogeneous geometries of tie-tube-moderated LEU nuclear thermal propulsion systems and HEU designs with tie-tubes is developed and implemented in MATLAB. This new method is compared to previous methods to demonstrate their shortcomings. Typical methods for analyzing peak fuel centerline temperature in hexagonal geometries rely on spatial homogenization to derive an analytical expression. These methods are not applicable to cores with tie-tube elements because conduction to the tie-tubes cannot be accurately modeled with the homogenized models. The fuel centerline temperature directly impacts safety and performance, so it must be predicted carefully. The temperature profile in the tie-tubes is also important when high temperatures are expected in the fuel, because conduction to the tie-tubes may cause melting in them, which may set the maximum allowable performance. Estimates of the maximum tie-tube temperature can be obtained from equivalent-tube methods; however, these tend to be approximate and overly conservative. A finite element model of heat conduction on a unit cell can capture spatial dependence and non-linear conductivity for fuel and tie-tube systems, allowing for higher design fidelity of nuclear thermal propulsion.
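The finite element approach can be illustrated with a minimal 1-D sketch of steady heat conduction with uniform volumetric heating, which the analytic centerline-temperature formula checks exactly. The slab geometry, constant conductivity, and material values below are hypothetical stand-ins; the report's model is multidimensional with nonlinear conductivity.

```python
import numpy as np

# -k T'' = q on (0, L), T(0) = T(L) = T_wall, linear finite elements.
L, k, q, T_wall, n = 0.01, 30.0, 5e8, 500.0, 100   # hypothetical fuel-like SI values
h = L / n
K = np.zeros((n + 1, n + 1))
f = np.zeros(n + 1)
for e in range(n):
    # Element stiffness (k/h)[[1,-1],[-1,1]] and consistent load q*h/2 per node.
    K[e:e + 2, e:e + 2] += (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f[e:e + 2] += q * h / 2.0

# Dirichlet boundary conditions via row replacement.
for i in (0, n):
    K[i, :] = 0.0
    K[i, i] = 1.0
    f[i] = T_wall

T = np.linalg.solve(K, f)
# Analytic peak (centerline) temperature: T_wall + q*L^2/(8k).
T_exact = T_wall + q * L**2 / (8.0 * k)
```

For this 1-D problem, linear elements with the consistent load vector reproduce the exact solution at the nodes, so the computed peak matches the analytic value to round-off.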
Liu, Mingyang; Qin, Chaoran; Zhang, Zheng; Ma, Shuai; Cai, Xiuru; Li, Xueqian
2018-01-01
The electrodeposition of graphene has drawn considerable attention due to its appealing applications in sensors, supercapacitors and lithium-ion batteries. However, there are still some limitations in the current electrodeposition methods for graphene. Here, we present a novel electrodeposition method for the direct deposition of reduced graphene oxide (rGO) with chitosan. In this method, a 2-hydroxypropyltrimethylammonium chloride-based chitosan-modified rGO material was prepared. This material disperses homogeneously in the chitosan solution, forming a deposition solution with good dispersion stability. Subsequently, the modified rGO material was deposited on an electrode through codeposition with chitosan, based on the coordination deposition method. After electrodeposition, homogeneous deposited rGO/chitosan films can be generated on copper or silver electrodes or substrates. The electrodeposition method allows for the convenient and controlled creation of rGO/chitosan nanocomposite coatings and films of different shapes and thicknesses. It also introduces a new method of creating films, as they can be peeled completely from the electrodes. Moreover, this method allows for a rGO/chitosan film to be deposited directly onto an electrode, which can then be used for electrochemical detection. PMID:29765797
A Homogeneous Time-Resolved Fluorescence Immunoassay Method for the Measurement of Compound W
Huang, Biao; Yu, Huixin; Bao, Jiandong; Zhang, Manda; Green, William L; Wu, Sing-Yung
2018-01-01
Objective: Using compound W (a 3,3′-diiodothyronine sulfate [T2S] immuno-crossreactive material)-specific polyclonal antibodies and homogeneous time-resolved fluorescence immunoassay techniques (AlphaLISA) to establish an indirect competitive compound W (ICW) quantitative detection method. Method: Photosensitive particles (donor beads) coated with compound W or T2S and rabbit anti-W antibody were incubated with biotinylated goat anti-rabbit antibody. Together with streptavidin-coated acceptor particles, this constitutes the detection system. We optimized the test conditions and evaluated the detection performance. Results: The sensitivity of the method was 5 pg/mL, and the detection range was 5 to 10 000 pg/mL. The intra-assay coefficient of variation averaged <10% with stable reproducibility. Conclusions: The ICW-AlphaLISA shows good stability and high sensitivity and can measure a wide range of compound W levels in extracts of maternal serum samples. This may have clinical application in screening for congenital hypothyroidism in utero. PMID:29449777
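Quantitation in a competitive immunoassay like this one is typically done by fitting a four-parameter logistic (4PL) calibration curve to the standards and inverting it to read off sample concentrations. The sketch below uses synthetic signal counts over the stated 5-10 000 pg/mL range; the curve parameters and noise level are hypothetical, not the assay's actual calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: signal at zero analyte, d: signal at saturation, c: IC50, b: slope.
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical calibration standards (pg/mL) and synthetic AlphaLISA counts.
conc = np.array([5., 20., 80., 320., 1280., 5120., 10000.])
rng = np.random.default_rng(2)
signal = four_pl(conc, 1e5, 1.0, 300.0, 2e3)
signal *= 1.0 + 0.01 * rng.standard_normal(conc.size)   # 1% measurement noise

popt, _ = curve_fit(four_pl, conc, signal,
                    p0=[1e5, 1.0, 100.0, 1e3], maxfev=10000)

def invert_4pl(y, a, b, c, d):
    # Invert the fitted curve to convert a sample's signal to concentration.
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# A hypothetical sample whose true concentration is 150 pg/mL.
y_sample = four_pl(150.0, 1e5, 1.0, 300.0, 2e3)
est = invert_4pl(y_sample, *popt)
```

Because the assay is competitive, the fitted curve decreases with concentration, and the inversion recovers the sample concentration to within the fit error.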