Daxini, S D; Prajapati, J M
2014-01-01
Meshfree methods are viewed as next-generation computational techniques. Given the evident limitations of conventional grid-based methods such as FEM in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have attracted considerable attention from researchers. A number of meshfree methods have been proposed to date for analyzing complex problems in various fields of engineering. The present work reviews recent developments and some earlier applications of well-known meshfree methods such as EFG and MLPG to structural and fracture mechanics problems, including bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single- and mixed-mode crack problems, fatigue crack growth, and dynamic crack analysis, as well as some typical applications such as vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Owing to the complex nature of meshfree shape functions and the evaluation of domain integrals, meshless methods are computationally expensive compared with conventional mesh-based methods. Improved versions of the original meshfree methods and other techniques suggested by researchers to improve their computational efficiency are also reviewed here.
Meshfree truncated hierarchical refinement for isogeometric analysis
NASA Astrophysics Data System (ADS)
Atri, H. R.; Shojaee, S.
2018-05-01
In this paper, truncated hierarchical B-splines (THB-splines) are coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, isogeometric B-spline and NURBS basis functions can be represented exactly by reproducing kernel meshfree shape functions, the recursive process of constructing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides an authentic meshfree approach to refining the model locally in isogeometric analysis. This can be accomplished by using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method provides efficient approximation schemes for numerical simulation and promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach to adaptive local refinement is presented in detail, and its effectiveness is investigated through well-known benchmark examples.
Efficient searching in meshfree methods
NASA Astrophysics Data System (ADS)
Olliff, James; Alford, Brad; Simkins, Daniel C.
2018-04-01
Meshfree methods such as the Reproducing Kernel Particle Method and the Element Free Galerkin method have proven to be excellent choices for problems involving complex geometry, evolving topology, and large deformation, owing to their ability to model the problem domain without the constraints imposed on Finite Element Method (FEM) meshes. However, meshfree methods carry an added computational cost over FEM that comes from at least two sources: increased cost of shape function evaluation and the determination of adjacency or connectivity. The focus of this paper is to formally address the types of adjacency information that arise in various uses of meshfree methods, discuss available techniques for computing the various adjacency graphs, propose a new search algorithm and data structure, and finally compare the memory and run-time performance of the methods.
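The adjacency determination the abstract describes amounts to a fixed-radius neighbor search over the node cloud. A minimal sketch of one standard technique (a kd-tree query via SciPy's cKDTree; the node cloud and support radius are illustrative assumptions, not data from the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical node cloud: 1000 points scattered in the unit square.
rng = np.random.default_rng(0)
nodes = rng.random((1000, 2))
support_radius = 0.05  # assumed uniform kernel support size

# Build the tree once, then query the adjacency graph for every node:
# all nodes falling inside each node's support domain.
tree = cKDTree(nodes)
adjacency = tree.query_ball_point(nodes, r=support_radius)

# Each entry lists the indices of nodes whose positions fall within the
# support of node i, including node i itself.
print(len(adjacency), min(len(nbrs) for nbrs in adjacency))
```

Building the tree costs O(N log N) and each radius query is roughly logarithmic in N, which is the advantage over brute-force all-pairs distance checks.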
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yuzhou, E-mail: yuzhousun@126.com; Chen, Gensheng; Li, Dongxia
2016-06-08
This paper studies the application of mesh-free methods to the numerical simulation of higher-order continuum structures. A higher-order bending beam accounts for the effect of the third-order derivative of the deflection and can be viewed as a one-dimensional higher-order continuum structure. The moving least-squares method is used to construct shape functions with the higher-order continuum property; the curvature and the third-order derivative of the deflection are interpolated directly from the nodal variables using the second- and third-order derivatives of the shape functions, and a mesh-free computational scheme is established for beams. Couple stress theory is introduced to describe the special constitutive response of the layered rock mass, in which the bending effect of thin layers is considered. The strain and the curvature are interpolated directly from the nodal variables, and the mesh-free method is established for the layered rock mass. Good computational efficiency is achieved with the developed mesh-free method, and some key issues are discussed.
Convergence studies in meshfree peridynamic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seleson, Pablo; Littlewood, David J.
2016-04-15
Meshfree methods are commonly applied to discretize peridynamic models, particularly in numerical simulations of engineering problems. Such methods discretize peridynamic bodies using a set of nodes with characteristic volumes, leading to particle-based descriptions of systems. In this article, we perform convergence studies of static peridynamic problems. We show that meshfree methods commonly used in peridynamics suffer from accuracy and convergence issues, due to a rough approximation of the contribution to the internal force density from nodes near the boundary of the neighborhood of a given node. We propose two methods to improve meshfree peridynamic simulations. The first uses accurate computations of the volumes of intersection between neighbor cells and the neighborhood of a given node, referred to as partial volumes. The second employs smooth influence functions with finite support within peridynamic kernels. Numerical results demonstrate great improvements in the accuracy and convergence of peridynamic numerical solutions when the proposed methods are used.
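The partial-volume idea can be illustrated in one dimension: instead of assigning each neighbor its full cell volume, the quadrature weight is the measure of the overlap between the neighbor's cell and the node's neighborhood. A hedged 1D sketch (the uniform grid spacing and horizon ratio are assumptions for illustration, not the paper's test cases):

```python
import numpy as np

def partial_volume_weights(x, i, dx, horizon):
    """1D sketch of the partial-volume quadrature correction: the weight
    of neighbor j is the overlap between its cell [x_j - dx/2, x_j + dx/2]
    and the neighborhood [x_i - horizon, x_i + horizon] of node i,
    rather than the full cell volume dx."""
    lo, hi = x[i] - horizon, x[i] + horizon
    left = np.maximum(x - dx / 2, lo)    # left edge of each overlap
    right = np.minimum(x + dx / 2, hi)   # right edge of each overlap
    w = np.clip(right - left, 0.0, dx)   # negative overlap -> weight 0
    w[i] = 0.0                           # a node does not interact with itself
    return w

# Uniform grid with a horizon of 1.505*dx (a non-integer ratio, so the
# outermost neighbor cells are only partially covered by the horizon).
dx = 0.1
x = np.arange(0.0, 1.0 + dx / 2, dx)
w = partial_volume_weights(x, i=5, dx=dx, horizon=1.505 * dx)
# Cells fully inside the horizon get weight dx; the outermost neighbors
# get only their partial overlap.
print(np.round(w[3:9], 4))
```

With a naive "full volume" rule, nodes 3 and 7 here would contribute either 0 or the full dx, which is exactly the rough boundary approximation the article identifies.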
NASA Technical Reports Server (NTRS)
Contreras, Michael T.; Peng, Chia-Yen; Wang, Dongdong; Chen, Jiun-Shyan
2012-01-01
A wheel experiencing sinkage and slippage events poses a high risk to rover missions, as evidenced by recent mobility challenges on the Mars Exploration Rover (MER) project. Because several factors contribute to wheel sinkage and slippage conditions, such as soil composition, large-deformation soil behavior, wheel geometry, nonlinear contact forces, and terrain irregularity, there are significant benefits to modeling these events to a sufficient degree of complexity. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study demonstrates some of the large-deformation modeling capability of meshfree methods and the realistic solutions obtained by accounting for the soil material properties. A benchmark wheel-soil interaction problem is developed and analyzed using a specific class of meshfree methods called the Reproducing Kernel Particle Method (RKPM). The benchmark problem is also analyzed using a commercially available finite element approach with Lagrangian meshing for comparison. RKPM results are comparable to the classical pressure-sinkage terramechanics relationships proposed by Bekker and Wong. Pending experimental calibration in future work, the meshfree modeling technique will be a viable simulation tool for trade studies assisting rover wheel design.
NASA Astrophysics Data System (ADS)
Nguyen-Thanh, Nhon; Li, Weidong; Zhou, Kun
2018-03-01
This paper develops a coupling approach which integrates the meshfree method and isogeometric analysis (IGA) for static and free-vibration analyses of cracks in thin-shell structures. In this approach, the domain surrounding the cracks is represented by the meshfree method, while the remaining domain is meshed by IGA. The approach preserves the geometric exactness and high continuity of IGA. Local refinement is achieved by adding nodes along the background cells in the meshfree domain. Moreover, the equivalent domain integral technique for three-dimensional problems is derived from the Kirchhoff-Love theory to compute the J-integral for the thin-shell model. The proposed approach is able to address problems involving through-the-thickness cracks without using additional rotational degrees of freedom, which facilitates the enrichment strategy for crack tips. The crack-tip enrichment effects and the stress distribution and displacements around the crack tips are investigated. Free vibrations of cracked thin shells are also analyzed. Numerical examples are presented to demonstrate the accuracy and computational efficiency of the coupling approach.
Slattery, Stuart R.
2015-12-02
In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features, and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary, with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership-class computing facility using a set of basic scaling studies. Finally, these scaling studies show that, for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
Efficient Meshfree Large Deformation Simulation of Rainfall Induced Soil Slope Failure
NASA Astrophysics Data System (ADS)
Wang, Dongdong; Li, Ling
2010-05-01
An efficient Lagrangian Galerkin meshfree framework is presented for large deformation simulation of rainfall-induced soil slope failure. Detailed coupled soil-rainfall seepage equations are given for the proposed formulation. The nonlinear meshfree formulation features the Lagrangian stabilized conforming nodal integration method, in which the low cost of the nodal integration approach is kept while numerical stability is maintained. The initiation and evolution of progressive failure in the soil slope is modeled by the coupled constitutive equations of isotropic damage and Drucker-Prager pressure-dependent plasticity. The gradient smoothing in the stabilized conforming integration also serves as a non-local regularization of material instability, and consequently the present method is capable of effectively capturing shear band failure. The efficacy of the present method is demonstrated by simulating the rainfall-induced failure of two typical soil slopes.
A fast object-oriented Matlab implementation of the Reproducing Kernel Particle Method
NASA Astrophysics Data System (ADS)
Barbieri, Ettore; Meo, Michele
2012-05-01
Novel numerical methods, known as Meshless Methods or Meshfree Methods and, in a wider perspective, Partition of Unity Methods, promise to overcome most of the disadvantages of traditional finite element techniques. The absence of a mesh makes meshfree methods very attractive for problems involving large deformations, moving boundaries and crack propagation. However, meshfree methods still have a significant limitation that hinders their acceptance among researchers and engineers, namely their computational cost. This paper presents an in-depth analysis of computational techniques to speed up the computation of the shape functions in the Reproducing Kernel Particle Method and Moving Least Squares, with particular focus on their bottlenecks: the neighbour search, the inversion of the moment matrix and the assembly of the stiffness matrix. The paper presents numerous computational solutions aimed at a considerable reduction of the computational time: the use of kd-trees for the neighbour search, sparse indexing of the nodes-points connectivity and, most importantly, the explicit and vectorized inversion of the moment matrix without using loops or numerical routines.
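The moment matrix that dominates MLS/RKPM shape function cost can be seen in a compact 1D moving least-squares evaluation. The linear basis and quartic weight below are illustrative assumptions, not the paper's Matlab implementation; the sketch shows the moment matrix A and the reproduction properties these shape functions satisfy:

```python
import numpy as np

def mls_shape_functions(x_eval, nodes, support):
    """Sketch of 1D Moving Least Squares shape functions with a linear
    basis p(x) = [1, x] and an assumed quartic weight of compact support.
    Returns phi, where phi[a] is the influence of node a at x_eval."""
    r = np.abs(x_eval - nodes) / support
    w = np.where(r < 1.0, (1.0 - r**2)**2, 0.0)       # compact-support weight
    P = np.column_stack([np.ones_like(nodes), nodes])  # basis evaluated at nodes
    A = (P * w[:, None]).T @ P                         # 2x2 moment matrix
    p = np.array([1.0, x_eval])
    # phi_a = w_a * p(x_a)^T A^{-1} p(x); solve once instead of inverting A.
    phi = w * (P @ np.linalg.solve(A, p))
    return phi

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape_functions(0.37, nodes, support=0.25)
# MLS with a linear basis reproduces constants and linears exactly:
print(phi.sum(), phi @ nodes)  # ~1.0 and ~0.37
```

Evaluating many points at once turns the per-point 2x2 (or 3x3, 6x6 in 2D/3D) solve into the batched, loop-free inversion the paper vectorizes.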
Simulating incompressible flow on moving meshfree grids using General Finite Differences (GFD)
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2016-11-01
We simulate incompressible flow around an oscillating cylinder at different Reynolds numbers using General Finite Differences (GFD) on a meshfree grid. We evolve the meshfree grid by treating each grid node as a particle. To compute velocities and accelerations, we consider the particles at a particular instance as Eulerian observation points. The incompressible Navier-Stokes equations are directly discretized using GFD with boundary conditions enforced using a sharp interface treatment. Cloud sizes are set such that the local approximations use only 16 neighbors. To enforce incompressibility, we apply a semi-implicit approximate projection method. To prevent overlapping particles and formation of voids in the grid, we propose a particle regularization scheme based on a local minimization principle. We validate the GFD results for an oscillating cylinder against the lattice Boltzmann method and find good agreement. Financial support provided by National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
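A GFD stencil of the kind described above can be sketched as a weighted least-squares fit of a Taylor expansion over a node's neighbor cloud. The weighting and neighbor layout below are illustrative assumptions, not the authors' scheme; the 16-neighbor cloud size matches the abstract's stated choice:

```python
import numpy as np

def gfd_derivatives(x0, nbrs, f_nbrs, f0):
    """Sketch of a 2D General Finite Difference stencil: a weighted
    least-squares fit of a second-order Taylor expansion about x0,
    recovering [fx, fy, fxx, fyy, fxy] from scattered neighbor values."""
    d = nbrs - x0
    dx, dy = d[:, 0], d[:, 1]
    # Monomials multiplying each unknown Taylor coefficient.
    M = np.column_stack([dx, dy, 0.5 * dx**2, 0.5 * dy**2, dx * dy])
    # Assumed inverse-distance-squared weighting, applied as sqrt in lstsq.
    w = np.sqrt(1.0 / np.sum(d**2, axis=1))
    coeffs, *_ = np.linalg.lstsq(M * w[:, None], (f_nbrs - f0) * w, rcond=None)
    return coeffs

# Verify on the quadratic f(x, y) = x^2 + 3y, which the fit recovers exactly.
rng = np.random.default_rng(1)
x0 = np.array([0.5, 0.5])
nbrs = x0 + 0.1 * (rng.random((16, 2)) - 0.5)
f = lambda p: p[:, 0]**2 + 3.0 * p[:, 1]
fx, fy, fxx, fyy, fxy = gfd_derivatives(x0, nbrs, f(nbrs), 0.5**2 + 1.5)
print(round(fx, 6), round(fy, 6), round(fxx, 6))
```

Discretizing the Navier-Stokes operators then reduces to assembling these per-node stencil weights into a sparse system.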
PowderSim: Lagrangian Discrete and Mesh-Free Continuum Simulation Code for Cohesive Soils
NASA Technical Reports Server (NTRS)
Johnson, Scott; Walton, Otis; Settgast, Randolph
2013-01-01
PowderSim is a calculation tool that combines a discrete-element method (DEM) module, including calibrated interparticle-interaction relationships, with a mesh-free, continuum, SPH (smoothed-particle hydrodynamics) based module that utilizes enhanced, calibrated, constitutive models capable of mimicking both large deformations and the flow behavior of regolith simulants and lunar regolith under conditions anticipated during in situ resource utilization (ISRU) operations. The major innovation introduced in PowderSim is to use a mesh-free method (SPH-based) with a calibrated and slightly modified critical-state soil mechanics constitutive model to extend the ability of the simulation tool to also address full-scale engineering systems in the continuum sense. The PowderSim software maintains the ability to address particle-scale problems, like size segregation, in selected regions with a traditional DEM module, which has improved contact physics and electrostatic interaction models.
NASA Astrophysics Data System (ADS)
Nguyen Van Do, Vuong
2018-04-01
In this paper, a modified Kirchhoff theory is presented for free vibration analysis of functionally graded material (FGM) plates based on a modified radial point interpolation method (RPIM). Shear deformation effects are taken into account in the modified theory to avoid the locking phenomenon in thin plates. With the proposed refined plate theory, the number of independent unknowns is reduced by one, leaving four degrees of freedom per node. The free vibration results computed with the modified RPIM are compared with other analytical solutions to verify the effectiveness and accuracy of the developed mesh-free method. Detailed parametric studies of the proposed method are then conducted, including the effects of thickness ratio, boundary conditions, and material inhomogeneity for sample problems of square plates. The results illustrate that the modified mesh-free RPIM agrees well with exact solutions and provides stable, accurate predictions relative to other published analyses.
Pixel-based meshfree modelling of skeletal muscles.
Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu
2016-01-01
This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows a simulation model to be constructed directly from pixel data obtained from medical images. The material properties and muscle fiber directions obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transitions. A multiphase, multichannel level-set-based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods to modeling the human lower leg is demonstrated.
NASA Astrophysics Data System (ADS)
Shcherbakov, V.; Ahlkrona, J.
2016-12-01
In this work we develop a highly efficient meshfree approach to ice sheet modeling. Traditionally, mesh-based methods such as finite element methods are employed to simulate glacier and ice sheet dynamics. These methods are mature and well developed, but despite their numerous advantages they suffer from drawbacks such as the need to remesh the computational domain every time it changes shape, which significantly complicates implementation on moving domains, and a costly assembly procedure for nonlinear problems. We introduce a novel meshfree approach that frees us from these issues. The approach is built upon a radial basis function (RBF) method that, thanks to its meshfree nature, allows for efficient handling of moving margins and the free ice surface. RBF methods are also accurate and easy to implement. Since the formulation is stated in strong form, it allows a substantial reduction of the computational cost associated with linear system assembly inside the nonlinear solver. We implement a global RBF method that defines an approximation on the entire computational domain. This method exhibits high accuracy, but suffers from the disadvantage that the coefficient matrix is dense, which decreases computational efficiency. To overcome this issue we also implement a localized RBF method that rests upon a partition of unity approach to subdivide the domain into several smaller subdomains. The radial basis function partition of unity method (RBF-PUM) inherits the high approximation characteristics of the global RBF method while resulting in a sparse system of equations, which substantially increases computational efficiency. To demonstrate the usefulness of the RBF methods we model the velocity field of ice flow in the Haut Glacier d'Arolla. We assume that the flow is governed by the nonlinear Blatter-Pattyn equations.
We test the methods for different basal conditions and for a free moving surface. Both RBF methods are compared with a classical finite element method in terms of accuracy and efficiency. We find that the RBF methods are more efficient than the finite element method and well suited for ice dynamics modeling, especially the partition of unity approach.
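The contrast between the global RBF method's dense system and RBF-PUM's sparse one can be seen in miniature in a 1D global RBF sketch. The Gaussian basis, shape parameter, and node count below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Global RBF interpolation sketch: approximate f on a set of nodes with
# Gaussian basis functions phi(r) = exp(-(eps*r)^2). The collocation
# matrix A is dense N x N, which is the efficiency drawback the abstract
# notes; RBF-PUM's subdomain decomposition yields a sparse system instead.
nodes = np.linspace(0.0, 1.0, 15)
eps = 10.0
f = np.sin(2 * np.pi * nodes)

A = np.exp(-(eps * (nodes[:, None] - nodes[None, :]))**2)  # dense collocation matrix
lam = np.linalg.solve(A, f)

def rbf_interp(x):
    """Evaluate the RBF interpolant at a point x."""
    return np.exp(-(eps * (x - nodes))**2) @ lam

# The interpolant reproduces the data at the nodes; between nodes it
# gives the RBF approximation of sin(2*pi*x).
print(np.max(np.abs(A @ lam - f)), abs(rbf_interp(0.53) - np.sin(2 * np.pi * 0.53)))
```

In a strong-form PDE solver the same expansion is differentiated analytically and collocated at the nodes, which is what avoids the costly Galerkin assembly the abstract mentions.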
Incompressible flow simulations on regularized moving meshfree grids
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2017-11-01
A moving grid meshfree solver for incompressible flows is presented. To solve for the flow field, a semi-implicit approximate projection method is directly discretized on meshfree grids using General Finite Differences (GFD) with sharp interface stencil modifications. To maintain a regular grid, an explicit shift is used to relax compressed pseudosprings connecting a star node to its cloud of neighbors. The following test cases are used for validation: the Taylor-Green vortex decay, the analytic and modified lid-driven cavities, and an oscillating cylinder enclosed in a container for a range of Reynolds number values. We demonstrate that 1) the grid regularization does not impede the second order spatial convergence rate, 2) the Courant condition can be used for time marching but the projection splitting error reduces the convergence rate to first order, and 3) moving boundaries and arbitrary grid distortions can readily be handled. Financial support provided by the National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
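The regularization idea, relaxing compressed pseudosprings between a node and its neighbors, can be sketched as an explicit pairwise shift. The spring law, step factor, and point cloud below are illustrative assumptions, not the authors' scheme:

```python
import numpy as np
from scipy.spatial import cKDTree

def relax_compressed_springs(nodes, r0, alpha=0.5):
    """One explicit relaxation step: every pair of nodes closer than the
    rest length r0 acts as a compressed spring pushing the pair apart,
    counteracting particle clustering. (Spring law and step factor alpha
    are assumed for illustration.)"""
    tree = cKDTree(nodes)
    shift = np.zeros_like(nodes)
    for i, j in tree.query_pairs(r=r0):
        d = nodes[i] - nodes[j]
        dist = np.linalg.norm(d)
        push = alpha * (r0 - dist) * d / dist  # proportional to compression
        shift[i] += 0.5 * push
        shift[j] -= 0.5 * push
    return nodes + shift

rng = np.random.default_rng(3)
pts = rng.random((200, 2))          # hypothetical disordered grid
out = relax_compressed_springs(pts, r0=0.05)
```

Iterating such shifts (with the field re-interpolated onto the shifted nodes) keeps the grid regular without remeshing, at the cost of the interpolation step.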
Meshfree simulation of avalanches with the Finite Pointset Method (FPM)
NASA Astrophysics Data System (ADS)
Michel, Isabel; Kuhnert, Jörg; Kolymbas, Dimitrios
2017-04-01
Meshfree methods are the numerical method of choice for applications characterized by strong deformations in conjunction with free surfaces or phase boundaries. In the past, the meshfree Finite Pointset Method (FPM) developed by Fraunhofer ITWM (Kaiserslautern, Germany) has been successfully applied to problems in computational fluid dynamics such as water crossing of cars, water turbines, and hydraulic valves. Most recently, the simulation of granular flows, e.g. soil interaction with cars (rollover), has also been tackled. This advancement is the basis for the simulation of avalanches. Due to the generalized finite difference formulation in FPM, the implementation of different material models is quite simple. We will demonstrate 3D simulations of avalanches based on the Drucker-Prager yield criterion as well as the nonlinear barodesy model. The barodesy model (Division of Geotechnical and Tunnel Engineering, University of Innsbruck, Austria) describes the mechanical behavior of soil by an evolution equation for the stress tensor. The key feature of successful and realistic simulations of avalanches - apart from the numerical approximation of the occurring differential operators - is the choice of the boundary conditions (slip, no-slip, friction) between the different phases of the flow as well as the geometry. We will discuss their influence for simplified one- and two-phase flow examples. This research is funded by the German Research Foundation (DFG) and the FWF Austrian Science Fund.
A Lagrangian meshfree method applied to linear and nonlinear elasticity.
Walker, Wade A
2017-01-01
The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM and apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods, such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. Finally, to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.
Meshfree Modeling of Munitions Penetration in Soils
2017-04-01
[Report front matter: list of figures and acronyms. Acronyms include DEM: discrete element methods; FEM: finite element methods; MSNNI: modified stabilized nonconforming nodal integration; RK]
CRKSPH: A new meshfree hydrodynamics method with applications to astrophysics
NASA Astrophysics Data System (ADS)
Owen, John Michael; Raskin, Cody; Frontiere, Nicholas
2018-01-01
The study of astrophysical phenomena such as supernovae, accretion disks, galaxy formation, and large-scale structure formation requires computational modeling of, at a minimum, hydrodynamics and gravity. Developing numerical methods appropriate for these kinds of problems requires a number of properties: shock-capturing hydrodynamics benefits from rigorous conservation of invariants such as total energy, linear momentum, and mass; lack of obvious symmetries or a simplified spatial geometry to exploit necessitate 3D methods that ideally are Galilean invariant; the dynamic range of mass and spatial scales that need to be resolved can span many orders of magnitude, requiring methods that are highly adaptable in their space and time resolution. We have developed a new Lagrangian meshfree hydrodynamics method called Conservative Reproducing Kernel Smoothed Particle Hydrodynamics, or CRKSPH, in order to meet these goals. CRKSPH is a conservative generalization of the meshfree reproducing kernel method, combining the high-order accuracy of reproducing kernels with the explicit conservation of mass, linear momentum, and energy necessary to study shock-driven hydrodynamics in compressible fluids. CRKSPH's Lagrangian, particle-like nature makes it simple to combine with well-known N-body methods for modeling gravitation, similar to the older Smoothed Particle Hydrodynamics (SPH) method. Indeed, CRKSPH can be substituted for SPH in existing SPH codes due to these similarities. In comparison to SPH, CRKSPH is able to achieve substantially higher accuracy for a given number of points due to the explicitly consistent (and higher-order) interpolation theory of reproducing kernels, while maintaining the same conservation principles (and therefore applicability) as SPH. There are currently two coded implementations of CRKSPH available: one in the open-source research code Spheral, and the other in the high-performance cosmological code HACC. 
Using these codes we have applied CRKSPH to a number of astrophysical scenarios, such as rotating gaseous disks, supernova remnants, and large-scale cosmological structure formation. In this poster we present an overview of CRKSPH and show examples of these astrophysical applications.
Anisotropic diffusion in mesh-free numerical magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2017-04-01
We extend recently developed mesh-free Lagrangian methods for numerical magnetohydrodynamics (MHD) to arbitrary anisotropic diffusion equations, including: passive scalar diffusion, Spitzer-Braginskii conduction and viscosity, cosmic ray diffusion/streaming, anisotropic radiation transport, non-ideal MHD (Ohmic resistivity, ambipolar diffusion, the Hall effect) and turbulent 'eddy diffusion'. We study these as implemented in the code GIZMO for both new meshless finite-volume Godunov schemes (MFM/MFV). We show that the MFM/MFV methods are accurate and stable even with noisy fields and irregular particle arrangements, and recover the correct behaviour even in arbitrarily anisotropic cases. They are competitive with state-of-the-art AMR/moving-mesh methods, and can correctly treat anisotropic diffusion-driven instabilities (e.g. the MTI and HBI, Hall MRI). We also develop a new scheme for stabilizing anisotropic tensor-valued fluxes with high-order gradient estimators and non-linear flux limiters, which is trivially generalized to AMR/moving-mesh codes. We also present applications of some of these improvements for SPH, in the form of a new integral-Godunov SPH formulation that adopts a moving-least squares gradient estimator and introduces a flux-limited Riemann problem between particles.
Methods to Prescribe Particle Motion to Minimize Quadrature Error in Meshfree Methods
NASA Astrophysics Data System (ADS)
Templeton, Jeremy; Erickson, Lindsay; Morris, Karla; Poliakoff, David
2015-11-01
Meshfree methods are an attractive approach for simulating material systems undergoing large-scale deformation, such as spray break-up, free surface flows, and droplets. Particles, which can be easily moved, are used as nodes and/or quadrature points rather than relying on a fixed mesh. Most methods move particles according to the local fluid velocity, which allows the convection terms in the Navier-Stokes equations to be accounted for easily. However, this is a trade-off against numerical accuracy, as the flow can often move particles into configurations with high quadrature error, and artificial compressibility is often required to prevent particles from forming undesirable regions of high and low concentration. In this work, we consider the other side of the trade-off: moving particles based on reducing numerical error. Methods derived from molecular dynamics show that particles can be moved to minimize a surrogate for the solution error, resulting in substantially more accurate simulations at a fixed cost. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Technical Reports Server (NTRS)
Contreras, Michael T.; Trease, Brian P.; Bojanowski, Cezary; Kulak, Ronald F.
2013-01-01
A wheel experiencing sinkage and slippage events poses a high risk to planetary rover missions, as evidenced by the mobility challenges endured by the Mars Exploration Rover (MER) project. Current wheel design practice utilizes loads derived from a series of events in the life cycle of the rover which do not include (1) failure metrics related to wheel sinkage and slippage and (2) performance trade-offs based on grouser placement/orientation. Wheel designs are rigorously tested experimentally through a variety of drive scenarios and simulated soil environments; however, a robust simulation capability is still in development due to the myriad of complex interaction phenomena that contribute to wheel sinkage and slippage conditions, such as soil composition, large-deformation soil behavior, wheel geometry, nonlinear contact forces, terrain irregularity, etc. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study implements the JPL wheel-soil benchmark problem in a commercial code environment utilizing the large-deformation modeling capability of the Smoothed Particle Hydrodynamics (SPH) meshfree method. The nominal, benchmark wheel-soil interaction model that produces numerically stable and physically realistic results is presented, and simulations are shown for both wheel traverse and wheel sinkage cases. A sensitivity analysis developing the capability and framework for future flight applications is conducted to illustrate the importance of perturbations to critical material properties and parameters. Implementation of the proposed soil-wheel interaction simulation capability and associated sensitivity framework has the potential to reduce experimentation cost and improve the early-stage wheel design process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rasouli, C.; Abbasi Davani, F.; Rokrok, B.
Plasma confinement using an external magnetic field is one of the successful routes to controlled nuclear fusion. Development and validation of the solution process for plasma equilibrium in experimental toroidal fusion devices is the main subject of this work. Solution of the nonlinear 2D stationary problem posed by the Grad-Shafranov equation gives quantitative information about plasma equilibrium inside the vacuum chamber of hot fusion devices. This study suggests solving the plasma equilibrium equation, which is essential in toroidal nuclear fusion devices, using a mesh-free method under the condition that the plasma boundary is unknown. The Grad-Shafranov equation has been solved numerically by the point interpolation collocation mesh-free method. Important features of this approach include a truly mesh-free formulation, simple mathematical relationships between points, and acceptable precision in comparison with the parametric results. The calculation process has been carried out using regular and irregular nodal distributions and support domains with different numbers of points. The relative error between the numerical and analytical solutions is discussed for several test examples, such as the small-size Damavand tokamak, an ITER-like equilibrium, an NSTX-like equilibrium, and a typical spheromak.
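The point-collocation idea can be sketched in one dimension with a Kansa-style radial basis function solve of a Poisson problem (a generic illustration under assumed basis and parameters, not the paper's Grad-Shafranov solver):

```python
import numpy as np

# Scattered collocation nodes on [0, 1]; endpoints included for the boundary rows
rng = np.random.default_rng(1)
n = 15
interior = np.linspace(0.05, 0.95, n - 2) + 0.01 * rng.standard_normal(n - 2)
x = np.sort(np.concatenate(([0.0, 1.0], interior)))

eps = 3.0                                        # multiquadric shape parameter (assumed)
phi = lambda x, c: np.sqrt(1.0 + (eps * (x - c)) ** 2)
d2phi = lambda x, c: eps**2 / (1.0 + (eps * (x - c)) ** 2) ** 1.5

f_rhs = lambda t: -np.pi**2 * np.sin(np.pi * t)  # manufactured so u(x) = sin(pi x)

A = np.empty((n, n))
b = np.empty(n)
for i in range(n):
    if i in (0, n - 1):
        A[i] = phi(x[i], x); b[i] = 0.0            # Dirichlet rows: u = 0 at the ends
    else:
        A[i] = d2phi(x[i], x); b[i] = f_rhs(x[i])  # PDE rows: u''(x_i) = f(x_i)

coef = np.linalg.solve(A, b)
xe = np.linspace(0.0, 1.0, 101)
u = phi(xe[:, None], x) @ coef                     # evaluate the RBF expansion
err = np.max(np.abs(u - np.sin(np.pi * xe)))
```

No mesh or connectivity enters anywhere: the linear system couples points purely through pairwise distances, which is what "truly mesh-free" means in the abstract above.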
NASA Astrophysics Data System (ADS)
Farquharson, C.; Long, J.; Lu, X.; Lelievre, P. G.
2017-12-01
Real-life geology is complex, and so, even when allowing for the diffusive, low resolution nature of geophysical electromagnetic methods, we need Earth models that can accurately represent this complexity when modelling and inverting electromagnetic data. This is particularly the case for the scales, detail and conductivity contrasts involved in mineral and hydrocarbon exploration and development, but also for the larger scale of lithospheric studies. Unstructured tetrahedral meshes provide a flexible means of discretizing a general, arbitrary Earth model. This is important when wanting to integrate a geophysical Earth model with a geological Earth model parameterized in terms of surfaces. Finite-element and finite-volume methods can be derived for computing the electric and magnetic fields in a model parameterized using an unstructured tetrahedral mesh. A number of such variants have been proposed and have proven successful. However, the efficiency and accuracy of these methods can be affected by the "quality" of the tetrahedral discretization, that is, how many of the tetrahedral cells in the mesh are long, narrow and pointy. This is particularly the case if one wants to use an iterative technique to solve the resulting linear system of equations. One approach to deal with this issue is to develop sophisticated model and mesh building and manipulation capabilities in order to ensure that any mesh built from geological information is of sufficient quality for the electromagnetic modelling. Another approach is to investigate other methods of synthesizing the electromagnetic fields. One such example is a "meshfree" approach in which the electromagnetic fields are synthesized using a mesh that is distinct from the mesh used to parameterize the Earth model. There are then two meshes, one describing the Earth model and one used for the numerical mathematics of computing the fields.
This means that there are no longer any quality requirements on the model mesh, which makes the process of building a geophysical Earth model from a geological model much simpler. In this presentation we will explore the issues that arise when working with realistic Earth models and when synthesizing geophysical electromagnetic data for them. We briefly consider meshfree methods as a possible means of alleviating some of these issues.
NASA Astrophysics Data System (ADS)
Navas, Pedro; Sanavia, Lorenzo; López-Querol, Susana; Yu, Rena C.
2017-12-01
Solving dynamic problems for fluid-saturated porous media in the large deformation regime is an interesting but complex issue. An implicit time integration scheme is herein developed within the framework of the u-w (solid displacement-relative fluid displacement) formulation for Biot's equations. In particular, liquid-water-saturated porous media are considered, and the linearization of the linear momentum equations taking into account all the inertia terms for both solid and fluid phases is presented for the first time. The spatial discretization is carried out through a meshfree method, in which the shape functions are based on the principle of local maximum entropy (LME). The current methodology is first validated with the dynamic consolidation of a soil column and the plastic shear band formation of a square domain loaded by a rigid footing. The feasibility of this new numerical approach for solving large deformation dynamic problems is finally demonstrated through the application to an embankment problem subjected to an earthquake.
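For context, the u-w form of Biot's momentum balance with all inertia terms retained is commonly written as follows (a textbook statement of the formulation, after Zienkiewicz and co-workers, not copied from the paper):

```latex
\rho\, \ddot{\mathbf{u}} + \rho_f\, \ddot{\mathbf{w}}
  = \nabla \cdot \boldsymbol{\sigma} + \rho\, \mathbf{b},
\qquad
\rho_f\, \ddot{\mathbf{u}} + \frac{\rho_f}{n}\, \ddot{\mathbf{w}}
  + \frac{\rho_f g}{k}\, \dot{\mathbf{w}}
  = -\nabla p + \rho_f\, \mathbf{b}
```

where \(\mathbf{u}\) is the solid displacement, \(\mathbf{w}\) the relative fluid displacement, \(\boldsymbol{\sigma}\) the total stress, \(p\) the pore pressure, \(n\) the porosity, \(k\) the hydraulic conductivity, and \(\mathbf{b}\) the body force. Dropping the \(\ddot{\mathbf{w}}\) terms recovers the more common u-p approximation; the abstract's point is that they are kept here.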
2016-01-01
The problem of multi-scale modelling of damage development in a SiC ceramic fibre-reinforced SiC matrix ceramic composite tube is addressed, with the objective of demonstrating the ability of the finite-element microstructure meshfree (FEMME) model to introduce important aspects of the microstructure into a larger scale model of the component. These are particularly the location, orientation and geometry of significant porosity and the load-carrying capability and quasi-brittle failure behaviour of the fibre tows. The FEMME model uses finite-element and cellular automata layers, connected by a meshfree layer, to efficiently couple the damage in the microstructure with the strain field at the component level. Comparison is made with experimental observations of damage development in an axially loaded composite tube, studied by X-ray computed tomography and digital volume correlation. Recommendations are made for further development of the model to achieve greater fidelity to the microstructure. This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242308
Pricing and simulation for real estate index options: Radial basis point interpolation
NASA Astrophysics Data System (ADS)
Gong, Pu; Zou, Dong; Wang, Jiayue
2018-06-01
This study employs the meshfree radial basis point interpolation (RBPI) method for pricing real estate derivatives contingent on a real estate index. The method combines radial and polynomial basis functions, which guarantees the Kronecker delta property of the interpolation scheme and effectively improves accuracy. An exponential change of variables, a mesh refinement algorithm, and Richardson extrapolation are employed in this study to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of our method.
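Of the ingredients listed, Richardson extrapolation is the easiest to illustrate in isolation: combining two step sizes of a second-order rule cancels the leading error term (a generic numerical example, unrelated to the option-pricing discretization itself):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals (error O(h^2))."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

f = np.exp
exact = np.e - 1.0                    # integral of exp over [0, 1]

T_h  = trapezoid(f, 0.0, 1.0, 8)      # step h
T_h2 = trapezoid(f, 0.0, 1.0, 16)     # step h/2
R = (4.0 * T_h2 - T_h) / 3.0          # Richardson: O(h^2) terms cancel, leaving O(h^4)
```

Here R is accurate to roughly 1e-7 while T_h2 alone is only accurate to about 5e-4; the same two-grid combination applied to a second-order option-price discretization raises its effective order in the same way.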
The Repeated Replacement Method: A Pure Lagrangian Meshfree Method for Computational Fluid Dynamics
Walker, Wade A.
2012-01-01
In this paper we describe the repeated replacement method (RRM), a new meshfree method for computational fluid dynamics (CFD). RRM simulates fluid flow by modeling compressible fluids’ tendency to evolve towards a state of constant density, velocity, and pressure. To evolve a fluid flow simulation forward in time, RRM repeatedly “chops out” fluid from active areas and replaces it with new “flattened” fluid cells with the same mass, momentum, and energy. We call the new cells “flattened” because we give them constant density, velocity, and pressure, even though the chopped-out fluid may have had gradients in these primitive variables. RRM adaptively chooses the sizes and locations of the areas it chops out and replaces. It creates more and smaller new cells in areas of high gradient, and fewer and larger new cells in areas of lower gradient. This naturally leads to an adaptive level of accuracy, where more computational effort is spent on active areas of the fluid, and less effort is spent on inactive areas. We show that for common test problems, RRM produces results similar to other high-resolution CFD methods, while using a very different mathematical framework. RRM does not use Riemann solvers, flux or slope limiters, a mesh, or a stencil, and it operates in a purely Lagrangian mode. RRM also does not evaluate numerical derivatives, does not integrate equations of motion, and does not solve systems of equations. PMID:22866175
NASA Astrophysics Data System (ADS)
Bazilevs, Y.; Moutsanidis, G.; Bueno, J.; Kamran, K.; Kamensky, D.; Hillman, M. C.; Gomez, H.; Chen, J. S.
2017-07-01
In this two-part paper we begin the development of a new class of methods for modeling fluid-structure interaction (FSI) phenomena for air blast. We aim to develop accurate, robust, and practical computational methodology, which is capable of modeling the dynamics of air blast coupled with the structure response, where the latter involves large, inelastic deformations and disintegration into fragments. An immersed approach is adopted, which leads to an a priori monolithic FSI formulation with intrinsic contact detection between solid objects, and without formal restrictions on the solid motions. In Part I of this paper, the core air-blast FSI methodology suitable for a variety of discretizations is presented and tested using standard finite elements. Part II of this paper focuses on a particular instantiation of the proposed framework, which couples isogeometric analysis (IGA) based on non-uniform rational B-splines and a reproducing-kernel particle method (RKPM), which is a meshfree technique. The combination of IGA and RKPM is felt to be particularly attractive for the problem class of interest due to the higher-order accuracy and smoothness of both discretizations, and relative simplicity of RKPM in handling fragmentation scenarios. A collection of mostly 2D numerical examples is presented in each of the parts to illustrate the good performance of the proposed air-blast FSI framework.
NASA Astrophysics Data System (ADS)
Khayyer, Abbas; Gotoh, Hitoshi; Falahaty, Hosein; Shimizu, Yuma
2018-02-01
Simulation of incompressible fluid flow-elastic structure interactions is targeted by using fully Lagrangian mesh-free computational methods. A projection-based fluid model (moving particle semi-implicit (MPS)) is coupled with either a Newtonian or a Hamiltonian Lagrangian structure model (MPS or HMPS) in a mathematically and physically consistent manner. The fluid model is founded on the solution of the Navier-Stokes and continuity equations. The structure models are configured either in the framework of Newtonian mechanics, on the basis of conservation of linear and angular momenta, or Hamiltonian mechanics, on the basis of the variational principle for incompressible elastodynamics. A set of enhanced schemes is incorporated in the projection-based fluid model (Enhanced MPS); thus, the developed coupled solvers for fluid-structure interaction (FSI) are referred to as Enhanced MPS-MPS and Enhanced MPS-HMPS. In addition, two smoothed particle hydrodynamics (SPH)-based FSI solvers developed by the authors are considered, and their potential applicability and comparable performance are briefly discussed in comparison with the MPS-based FSI solvers. The SPH-based FSI solvers are established through coupling of a projection-based incompressible SPH (ISPH) fluid model and SPH-based Newtonian/Hamiltonian structure models, leading to Enhanced ISPH-SPH and Enhanced ISPH-HSPH. A comparative study is carried out on the performances of the FSI solvers through a set of benchmark tests, including a hydrostatic water column on an elastic plate, high-speed impact of an elastic aluminum beam, hydroelastic slamming of a marine panel, and dam break with an elastic gate.
A particle-particle hybrid method for kinetic and continuum equations
NASA Astrophysics Data System (ADS)
Tiwari, Sudarshan; Klar, Axel; Hardt, Steffen
2009-10-01
We present a coupling procedure for two different types of particle methods for the Boltzmann and Navier-Stokes equations. A variant of the DSMC method is applied to simulate the Boltzmann equation, whereas a meshfree Lagrangian particle method, similar to SPH, is used for simulations of the Navier-Stokes equations. An automatic domain decomposition approach is used with the help of a continuum breakdown criterion. We apply adaptive spatial and time meshes. The classical Sod 1D shock-tube problem is solved for a large range of Knudsen numbers. Results from the Boltzmann, Navier-Stokes, and hybrid solvers are compared. The hybrid solver is 3-4 times faster than the Boltzmann solver in terms of CPU time.
Numerical and Experimental Investigations of the Flow in a Stationary Pelton Bucket
NASA Astrophysics Data System (ADS)
Nakanishi, Yuji; Fujii, Tsuneaki; Kawaguchi, Sho
A numerical code based on one of the mesh-free particle methods, the Moving-Particle Semi-implicit (MPS) method, has so far been used for the simulation of free surface flows in a bucket of Pelton turbines. In this study, the flow in a stationary bucket is investigated by MPS simulation and experiment to validate the numerical code. The free surface flow dependent on the angular position of the bucket and the corresponding pressure distribution on the bucket computed by the numerical code are compared with those obtained experimentally. The comparison shows that the numerical code based on the MPS method is useful as a tool to gain insight into the free surface flows in Pelton turbines.
NASA Astrophysics Data System (ADS)
Van Liedekerke, P.; Ghysels, P.; Tijskens, E.; Samaey, G.; Smeedts, B.; Roose, D.; Ramon, H.
2010-06-01
This paper is concerned with addressing how plant tissue mechanics is related to the micromechanics of cells. To this end, we propose a mesh-free particle method to simulate the mechanics of both individual plant cells (parenchyma) and cell aggregates in response to external stresses. The model considers two important features of the plant cell: (1) the cell protoplasm, the interior liquid phase inducing hydrodynamic phenomena, and (2) the cell wall material, a viscoelastic solid material that contains the protoplasm. In this particle framework, the cell fluid is modeled by smoothed particle hydrodynamics (SPH), a mesh-free method typically used to address problems in gas and fluid dynamics. In the solid phase (cell wall), on the other hand, the particles are connected by pairwise interactions holding them together and preventing the fluid from penetrating the cell wall. The cell wall hydraulic conductivity (permeability) is built in as well through the SPH formulation. Although this model is also meant to be able to deal with dynamic and even violent situations (leading to cell wall rupture or cell-cell debonding), we have concentrated on quasi-static conditions. The results of single-cell compression simulations show that the conclusions found by analytical models and experiments can be reproduced at least qualitatively. Relaxation tests revealed that plant cells have short relaxation times (1 µs-10 µs) compared to mammalian cells. Simulations performed on cell aggregates indicated an influence of the cellular organization on the tissue response, as was also observed in experiments done on tissues with a similar structure.
An RBF-PSO based approach for modeling prostate cancer
NASA Astrophysics Data System (ADS)
Perracchione, Emma; Stura, Ilaria
2016-06-01
Prostate cancer is one of the most common cancers in men; it grows slowly and can be diagnosed at an early stage by dosing the Prostate Specific Antigen (PSA). However, a relapse after the primary therapy can arise in 25-30% of cases, and different growth characteristics of the new tumor are observed. In order to get a better understanding of the phenomenon, a two-parameter growth model is considered. To estimate the parameter values identifying the disease risk level, a novel approach based on combining Particle Swarm Optimization (PSO) with meshfree interpolation methods is proposed.
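A minimal sketch of the parameter-estimation step, assuming a simple two-parameter exponential growth law y(t) = a·exp(b·t) and synthetic data (the actual growth model, data, and PSO variant in the paper may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic PSA-like measurements from an assumed two-parameter growth law
a_true, b_true = 2.0, 0.35
t = np.linspace(0.0, 6.0, 13)
y = a_true * np.exp(b_true * t) * (1.0 + 0.01 * rng.standard_normal(t.size))

def sse(p):
    """Sum of squared residuals of the growth law against the data."""
    a, b = p
    return np.sum((a * np.exp(b * t) - y) ** 2)

# Minimal global-best PSO over the box a in [0.1, 10], b in [0, 1]
n_part, n_iter = 30, 200
pos = rng.uniform([0.1, 0.0], [10.0, 1.0], (n_part, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([sse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.array([sse(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

a_fit, b_fit = gbest
```

With the swarm settings above (inertia 0.7, cognitive/social weights 1.5, both common defaults) the swarm recovers the generating parameters to within a few percent despite the measurement noise.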
Fast RBF OGr for solving PDEs on arbitrary surfaces
NASA Astrophysics Data System (ADS)
Piret, Cécile; Dunn, Jarrett
2016-10-01
The Radial Basis Functions Orthogonal Gradients method (RBF-OGr) was introduced in [1] to discretize differential operators defined on arbitrary manifolds described only by a point cloud. We take advantage of the meshfree character of RBFs, which gives high accuracy and the flexibility to represent complex geometries in any spatial dimension. A major limitation of the RBF-OGr method was its large computational complexity, which greatly restricted the size of the point cloud. In this paper, we apply the RBF-Finite Difference (RBF-FD) technique to the RBF-OGr method to build sparse differentiation matrices discretizing continuous differential operators such as the Laplace-Beltrami operator. This method can be applied to solving PDEs on arbitrary surfaces embedded in ℝ³. We illustrate the accuracy of our new method by solving the heat equation on the unit sphere.
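The RBF-FD idea — local stencil weights obtained by requiring exactness on an RBF basis, assembled into sparse differentiation matrices — can be sketched in 1-D (a generic illustration with an assumed Gaussian basis and shape parameter, not the RBF-OGr implementation):

```python
import numpy as np

def rbf_fd_second_deriv_weights(xc, xs, eps=1.0):
    """Weights w with sum_j w_j f(xs_j) ~ f''(xc), derived by requiring the
    formula to be exact for Gaussian RBFs centred at the stencil nodes."""
    phi = lambda r: np.exp(-(eps * r) ** 2)
    d2phi = lambda r: (4 * eps**4 * r**2 - 2 * eps**2) * np.exp(-(eps * r) ** 2)
    A = phi(xs[:, None] - xs[None, :])     # interpolation (Gram) matrix
    b = d2phi(xc - xs)                     # second derivative of each basis at xc
    return np.linalg.solve(A, b)

h = 0.1
xs = h * np.arange(-2.0, 3.0)              # 5-node local stencil around xc = 0
w = rbf_fd_second_deriv_weights(0.0, xs, eps=1.0)

d2_sin = w @ np.sin(xs)                    # ~ -sin(0) = 0
d2_cos = w @ np.cos(xs)                    # ~ -cos(0) = -1
```

Each row of the global differentiation matrix is one such small dense solve over a node's nearest neighbours, which is what restores sparsity relative to the original global RBF-OGr system.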
Explicitly represented polygon wall boundary model for the explicit MPS method
NASA Astrophysics Data System (ADS)
Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori
2015-05-01
This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong-form partial differential equations. The ERP model expresses wall boundaries as polygons, which are represented explicitly without using a distance function. The polygons are derived so that, for viscous fluids and at lower computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, results obtained by other models, and experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the ERP model within the E-MPS method.
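For background, the standard MPS ingredients that E-MPS builds on — the kernel weight w(r) = r_e/r − 1 and the particle number density used, e.g., for free-surface detection — look as follows (a textbook sketch, not the ERP model itself; the spacing and coefficients are assumptions):

```python
import numpy as np

re = 2.1 * 1.0   # influence radius, 2.1x the particle spacing (a common MPS choice)

def w(r):
    """Standard MPS weight: w(r) = re/r - 1 inside the radius, 0 outside (and at r=0)."""
    return np.where((r > 0) & (r < re), re / np.maximum(r, 1e-12) - 1.0, 0.0)

# Square lattice of particles with unit spacing
xs, ys = np.meshgrid(np.arange(10), np.arange(10))
pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

def number_density(i):
    """Particle number density n_i = sum of weights over all other particles."""
    r = np.linalg.norm(pts - pts[i], axis=1)
    return w(r).sum()

n0 = number_density(55)          # interior particle at (5, 5): reference density
n_surf = number_density(5)       # edge particle at (5, 0): neighbour deficit

is_free_surface = n_surf < 0.97 * n0   # classic MPS free-surface criterion
```

The neighbour deficit at the boundary particle triggers the free-surface flag; wall models such as ERP exist precisely to supply the missing boundary contribution to quantities like this without seeding wall dummy particles.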
Particle-based solid for nonsmooth multidomain dynamics
NASA Astrophysics Data System (ADS)
Nordberg, John; Servin, Martin
2018-04-01
A method for simulation of elastoplastic solids in multibody systems with nonsmooth and multidomain dynamics is developed. The solid is discretised into pseudo-particles using the meshfree moving least squares method for computing the strain tensor. Each particle's strain and stress tensor variables are mapped to a compliant deformation constraint. The discretised solid model thus fits a unified framework for nonsmooth multidomain dynamics simulations, including rigid multibodies with complex kinematic constraints such as articulation joints, unilateral contacts with dry friction, drivelines, and hydraulics. The nonsmooth formulation allows impact impulses to propagate instantly between the rigid multibody and the solid. Plasticity is introduced through an associative perfectly plastic modified Drucker-Prager model. The elastic and plastic dynamics are verified for simple test systems, and the capability of simulating tracked terrain vehicles driving on deformable terrain is demonstrated.
A Multiscale Meshfree Approach for Modeling Fragment Penetration into Ultra High-Strength Concrete
2011-09-01
[Extraction residue from the report's lists of figures and tables; only fragments survive: a velocity-history figure (p. 62), "Figure 53. Yield stress versus strain rate for steel", "Spherical steel projectile properties" (p. 54), and a J2 material parameter set with Young's modulus E = 10000, Poisson's ratio v = 0, and density 1, where the Poisson effect is purposely removed so that the wave propagates only in the axial direction.]
Effect of microstructure on the elasto-viscoplastic deformation of dual phase titanium structures
NASA Astrophysics Data System (ADS)
Ozturk, Tugce; Rollett, Anthony D.
2018-02-01
The present study is devoted to the creation of a process-structure-property database for dual-phase titanium alloys, through a synthetic microstructure generation method and a mesh-free fast Fourier transform based micromechanical model that operates on a discretized image of the microstructure. A sensitivity analysis is performed as a precursor to determine the statistically representative volume element size for creating 3D synthetic microstructures based on additively manufactured Ti-6Al-4V characteristics, which are further modified to expand the database for features of interest, e.g., lath thickness. Sets of titanium hardening parameters are extracted from the literature, and the relative effect of the chosen microstructural features is quantified through comparisons of average and local field distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Bradley, E-mail: brma7253@colorado.edu; Fornberg, Bengt, E-mail: Fornberg@colorado.edu
In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, Alex H.; Betcke, Timo; School of Mathematics, University of Manchester, Manchester, M13 9PL
2007-12-15
We report the first large-scale statistical study of very high-lying eigenmodes (quantum states) of the mushroom billiard proposed by L. A. Bunimovich [Chaos 11, 802 (2001)]. The phase space of this mixed system is unusual in that it has a single regular region and a single chaotic region, and no KAM hierarchy. We verify Percival's conjecture to high accuracy (1.7%). We propose a model for dynamical tunneling and show that it predicts well the chaotic components of predominantly regular modes. Our model explains our observed density of such superpositions dying as E^(-1/3) (E is the eigenvalue). We compare eigenvalue spacing distributions against Random Matrix Theory expectations, using 16 000 odd modes (an order of magnitude more than any existing study). We outline new variants of mesh-free boundary collocation methods which enable us to achieve high accuracy and high mode numbers (≈10^5) orders of magnitude faster than with competing methods.
A class of renormalised meshless Laplacians for boundary value problems
NASA Astrophysics Data System (ADS)
Basic, Josip; Degiuli, Nastia; Ban, Dario
2018-02-01
A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three different derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and a renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite difference method on a regular grid. Finally, the strong forms of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions are solved in two and three dimensions by making use of the introduced operators, in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh and mesh-free numerical methods that require frequent movement of the grid or point cloud.
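The Taylor-series/least-squares construction can be sketched as follows: fit a local quadratic to scattered neighbours and read the Laplacian off the fitted second-derivative coefficients (a generic moving-least-squares sketch, not the paper's renormalised operators):

```python
import numpy as np

def mls_laplacian(x0, pts, vals, f0):
    """Least-squares Taylor fit over scattered neighbours; returns fxx + fyy at x0.

    Model: f(x0 + d) - f(x0) ~ dx*fx + dy*fy + dx^2/2*fxx + dx*dy*fxy + dy^2/2*fyy.
    """
    d = pts - x0                                    # offsets (dx, dy) to neighbours
    M = np.column_stack([d[:, 0], d[:, 1],
                         0.5 * d[:, 0]**2, d[:, 0] * d[:, 1], 0.5 * d[:, 1]**2])
    coef, *_ = np.linalg.lstsq(M, vals - f0, rcond=None)
    return coef[2] + coef[4]                        # fxx + fyy

rng = np.random.default_rng(3)
x0 = np.array([0.0, 0.0])
pts = rng.uniform(-0.2, 0.2, (12, 2))               # irregular neighbour cloud

f = lambda p: p[:, 0]**2 + p[:, 1]**2               # f = x^2 + y^2, Laplacian = 4
lap = mls_laplacian(x0, pts, f(pts), 0.0)
```

Because the test function is itself quadratic, the least-squares fit reproduces it exactly and the computed Laplacian equals 4 regardless of the neighbour arrangement; on general functions the accuracy degrades with point irregularity, which is what the paper's renormalisation addresses.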
Meshfree and efficient modeling of swimming cells
NASA Astrophysics Data System (ADS)
Gallagher, Meurig T.; Smith, David J.
2018-05-01
Locomotion in Stokes flow is an intensively studied problem because it describes important biological phenomena such as the motility of many species' sperm, bacteria, algae, and protozoa. Numerical computations can be challenging, particularly in three dimensions, due to the presence of moving boundaries and complex geometries; methods which combine ease of implementation and computational efficiency are therefore needed. A recently proposed method to discretize the regularized Stokeslet boundary integral equation without the need for a connected mesh is applied to the inertialess locomotion problem in Stokes flow. The mathematical formulation and key aspects of the computational implementation in matlab® or GNU Octave are described, followed by numerical experiments with biflagellate algae and multiple uniflagellate sperm swimming between no-slip surfaces, for which both swimming trajectories and flow fields are calculated. These computational experiments required minutes of time on modest hardware; an extensible implementation is provided in a GitHub repository. The nearest-neighbor discretization dramatically improves convergence and robustness, a key challenge in extending the regularized Stokeslet method to complicated three-dimensional biological fluid problems.
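The regularised-Stokeslet kernel at the heart of the method has a closed form; the version below is the commonly used Cortez blob in three dimensions (written from the standard literature formula, not extracted from this paper's code):

```python
import numpy as np

def reg_stokeslet(x, x0, f, eps, mu=1.0):
    """Velocity at x due to a regularised point force f at x0 (Cortez blob).

    u = [ f*(r^2 + 2*eps^2) + (f . d)*d ] / (8*pi*mu*R^3),  d = x - x0,
    with R = sqrt(r^2 + eps^2); eps -> 0 recovers the singular Stokeslet.
    """
    d = x - x0
    r2 = np.dot(d, d)
    R = np.sqrt(r2 + eps**2)
    return (f * (r2 + 2 * eps**2) / R**3 + np.dot(f, d) * d / R**3) / (8 * np.pi * mu)

f = np.array([1.0, 0.0, 0.0])       # unit force along x
x0 = np.zeros(3)
u1 = reg_stokeslet(np.array([1.0, 0.0, 0.0]), x0, f, eps=0.01)
u2 = reg_stokeslet(np.array([2.0, 0.0, 0.0]), x0, f, eps=0.01)
ratio = u1[0] / u2[0]               # Stokeslet velocity decays like 1/r, so ~ 2
```

The boundary integral method sums many such kernels over surface points; the nearest-neighbour discretization discussed in the abstract concerns how the quadrature weights for that sum are chosen, not the kernel itself.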
An adaptively refined XFEM with virtual node polygonal elements for dynamic crack problems
NASA Astrophysics Data System (ADS)
Teng, Z. H.; Sun, F.; Wu, S. C.; Zhang, Z. B.; Chen, T.; Liao, D. M.
2018-02-01
By introducing the shape functions of virtual node polygonal (VP) elements into the standard extended finite element method (XFEM), a conforming elemental mesh can be created for the cracking process. Moreover, an adaptively refined meshing with the quadtree structure only at a growing crack tip is proposed, without inserting hanging nodes into the transition region. A novel dynamic crack growth method, termed VP-XFEM, is thus formulated in the framework of fracture mechanics. To verify the newly proposed VP-XFEM, both quasi-static and dynamic crack problems are investigated in terms of computational accuracy, convergence, and efficiency. The research results show that the present VP-XFEM can achieve good agreement in stress intensity factor and crack growth path with the exact solutions or experiments. Furthermore, better accuracy, convergence, and efficiency can be achieved for different models, in contrast to standard XFEM and mesh-free methods. Therefore, VP-XFEM provides a suitable alternative to XFEM for engineering applications.
Hesford, Andrew J; Astheimer, Jeffrey P; Greengard, Leslie F; Waag, Robert C
2010-02-01
A multiple-scattering approach is presented to compute the solution of the Helmholtz equation when a number of spherical scatterers are nested in the interior of an acoustically large enclosing sphere. The solution is represented in terms of partial-wave expansions, and a linear system of equations is derived to enforce continuity of pressure and normal particle velocity across all material interfaces. This approach yields high-order accuracy and avoids some of the difficulties encountered when using integral equations that apply to surfaces of arbitrary shape. Calculations are accelerated by using diagonal translation operators to compute the interactions between spheres when the operators are numerically stable. Numerical results are presented to demonstrate the accuracy and efficiency of the method.
Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processing Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam-break flow impacting an obstacle, where good agreement with the experimental results is observed. Both the achieved speedups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185
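As a rough CPU-only sketch of the particle summation at the heart of such SPH codes (not the CUDA implementation described above), a 1-D summation density with the standard cubic spline kernel can be written as:

```python
import numpy as np

def cubic_spline_1d(q, h):
    """Standard 1-D cubic spline SPH kernel with support radius 2h;
    q = |x_i - x_j| / h, normalization sigma = 2/(3h)."""
    sigma = 2.0 / (3.0 * h)
    w = np.zeros_like(q)
    m1 = q < 1.0
    m2 = (q >= 1.0) & (q < 2.0)
    w[m1] = 1.0 - 1.5 * q[m1] ** 2 + 0.75 * q[m1] ** 3
    w[m2] = 0.25 * (2.0 - q[m2]) ** 3
    return sigma * w

def sph_density(x, m, h):
    """Summation density rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    q = np.abs(x[:, None] - x[None, :]) / h
    return (m * cubic_spline_1d(q, h)).sum(axis=1)

dx = 0.01
x = np.arange(0.0, 1.0, dx)   # uniformly spaced particles
m = 1.0 * dx                  # particle mass for unit rest density
rho = sph_density(x, m, 1.2 * dx)
```

Interior particles recover the rest density to within about 1%; the O(N²) pairwise loop here is exactly the kind of work that neighbour lists and GPU parallelisation accelerate in production codes.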
Nonlinear Meshfree Analysis Program (NMAP) Version 1.0 (User’s Manual)
2012-12-01
divided by the number of time increments used in the analysis. In addition to prescribing total nodal displacements in the neutral file, users are... conditions, the user must define material properties, initial conditions, and a variety of control parameters for the NMAP analysis. These data are provided... a script file. Restart: A restart function is provided in the NMAP code, where the user may restart an analysis using a set of restart files. In
Evaluating simulant materials for understanding cranial backspatter from a ballistic projectile.
Das, Raj; Collins, Alistair; Verma, Anurag; Fernandez, Justin; Taylor, Michael
2015-05-01
In cranial wounds resulting from a gunshot, the study of backspatter patterns can provide information about the actual incident by linking material to surrounding objects. This study investigates the physics of backspatter from a high-speed projectile impact and evaluates a range of simulant materials using impact tests. Next, we evaluate a mesh-free method called smoothed particle hydrodynamics (SPH) to model the splashing mechanism during backspatter. The study has shown that a projectile impact causes fragmentation at the impact site while transferring momentum to the fragmented particles. The particles travel along the path of least resistance, leading to partial material movement in the direction opposite to the projectile motion, causing backspatter. Medium-density fiberboard is a better simulant for the human skull than polycarbonate, and lorica leather is a better simulant for human skin than natural rubber. SPH is an effective numerical method for modeling high-speed impact fracture and fragmentation. © 2015 American Academy of Forensic Sciences.
Writing analytic element programs in Python.
Bakker, Mark; Kelson, Victor A
2009-01-01
The analytic element method is a mesh-free approach for modeling ground water flow at both the local and the regional scale. With the advent of the Python object-oriented programming language, it has become relatively easy to write analytic element programs. In this article, an introduction is given of the basic principles of the analytic element method and of the Python programming language. A simple, yet flexible, object-oriented design is presented for analytic element codes using multiple inheritance. New types of analytic elements may be added without the need for any changes in the existing part of the code. The presented code may be used to model flow to wells (with either a specified discharge or drawdown) and streams (with a specified head). The code may be extended by any hydrogeologist with a healthy appetite for writing computer code to solve more complicated ground water flow problems. Copyright © 2009 The Author(s). Journal Compilation © 2009 National Ground Water Association.
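The object-oriented design described in the article is not reproduced here, but its core idea, elements that each contribute a complex potential and a model that superposes them, can be sketched as follows (class and parameter names are illustrative, not those of the published code):

```python
import numpy as np

class Element:
    """Base analytic element: contributes a complex potential omega(z)."""
    def omega(self, z):
        raise NotImplementedError

class Well(Element):
    """Discharge-specified well: omega = Q/(2*pi) * ln(z - zw)."""
    def __init__(self, zw, Q):
        self.zw, self.Q = zw, Q
    def omega(self, z):
        return self.Q / (2 * np.pi) * np.log(z - self.zw)

class UniformFlow(Element):
    """Uniform background flow of strength Qx in the x direction."""
    def __init__(self, Qx):
        self.Qx = Qx
    def omega(self, z):
        return -self.Qx * z

class Model:
    """Superposition of analytic elements; the discharge potential
    is the real part of the summed complex potentials."""
    def __init__(self, elements):
        self.elements = elements
    def potential(self, z):
        return sum(e.omega(z) for e in self.elements).real

m = Model([Well(0 + 0j, Q=100.0), UniformFlow(Qx=2.0)])
phi = m.potential(10.0 + 5.0j)
```

Adding a new element type requires only a new subclass with its own `omega`, mirroring the article's point that elements can be added without changing existing code.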
Wada, Yuji; Kundu, Tribikram; Nakamura, Kentaro
2014-08-01
The distributed point source method (DPSM) is extended to model wave propagation in viscous fluids. Appropriate estimation of attenuation and boundary layer formation due to fluid viscosity is necessary for ultrasonic devices used for acoustic streaming or ultrasonic levitation. The equations for DPSM modeling in viscous fluids are derived in this paper by decomposing the linearized viscous fluid equations into two components: dilatational and rotational. By considering complex P- and S-wave numbers, the acoustic fields in viscous fluids can be calculated following calculation steps similar to those used for wave propagation modeling in solids. From the calculations reported, the precision of DPSM is found to be comparable to that of the finite element method (FEM) for a fundamental ultrasonic field problem. The particle velocity parallel to the two bounding surfaces of the viscous fluid layer between two rigid plates (one in motion and one stationary) is calculated. The finite element results agree well with the DPSM results, which were generated faster than the transient FEM results.
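The basic superposition idea behind DPSM (here restricted to the lossless acoustic case, without the viscous dilatational/rotational decomposition derived in the paper) can be sketched by summing free-space Helmholtz Green's functions over point sources with given strengths:

```python
import numpy as np

def helmholtz_green(r, k):
    """Free-space Green's function of the 3-D Helmholtz equation."""
    return np.exp(1j * k * r) / (4 * np.pi * r)

def dpsm_field(targets, sources, strengths, k):
    """Pressure field as a superposition of point sources:
    p(x) = sum_j A_j * G(|x - x_j|, k)."""
    p = np.zeros(len(targets), dtype=complex)
    for src, A in zip(sources, strengths):
        r = np.linalg.norm(targets - src, axis=1)
        p += A * helmholtz_green(r, k)
    return p

k = 2 * np.pi / 0.01                       # wavenumber for a 10 mm wavelength
sources = np.array([[0.0, 0.0, 0.0],       # illustrative source positions (m)
                    [0.002, 0.0, 0.0]])
strengths = np.array([1.0 + 0j, 0.5 + 0j]) # illustrative source strengths
targets = np.array([[0.0, 0.0, 0.05]])
p = dpsm_field(targets, sources, strengths, k)
```

In a full DPSM model the strengths A_j are solved from boundary conditions on transducer and interface surfaces; viscous media additionally require complex P- and S-wavenumbers as described above.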
SPH-based numerical simulations of flow slides in municipal solid waste landfills.
Huang, Yu; Dai, Zili; Zhang, Weijie; Huang, Maosong
2013-03-01
Most municipal solid waste (MSW) is disposed of in landfills. Over the past few decades, catastrophic flow slides have occurred in MSW landfills around the world, causing substantial economic damage and occasionally resulting in human victims. It is therefore important to predict the run-out, velocity and depth of such slides in order to provide adequate mitigation and protection measures. To overcome the limitations of traditional numerical methods for modelling flow slides, a mesh-free particle method termed smoothed particle hydrodynamics (SPH) is introduced in this paper. The Navier-Stokes equations were adopted as the governing equations, and a Bingham model was adopted to describe the relationship between material stress and the rate of deformation. The accuracy of the model is assessed using a series of verifications, and flow slides that occurred in landfills located in Sarajevo and Bandung were then simulated to extend its applications. The simulated results match the field data well and highlight the capability of the proposed SPH modelling method to simulate complex phenomena such as flow slides in MSW landfills.
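A regularized Bingham closure of the kind used in such SPH flow-slide models can be sketched as an effective viscosity; the yield stress and viscosity values below are illustrative, not those calibrated for MSW:

```python
import numpy as np

def bingham_effective_viscosity(gamma_dot, tau_y, mu, eps=1e-6):
    """Regularized Bingham rheology: tau = tau_y + mu * gamma_dot for
    material that has yielded, expressed as an effective viscosity
    mu_eff = mu + tau_y / (gamma_dot + eps) so it can be plugged into
    a Newtonian-style SPH stress update (eps avoids division by zero)."""
    return mu + tau_y / (gamma_dot + eps)

gamma_dot = np.array([0.1, 1.0, 10.0])   # shear rates, 1/s (illustrative)
mu_eff = bingham_effective_viscosity(gamma_dot, tau_y=50.0, mu=2.0)
tau = mu_eff * gamma_dot                 # recovered shear stress, Pa
```

The regularization makes the unyielded region behave as a very viscous fluid rather than a rigid body, which is the standard trick for embedding a yield-stress material in a particle hydrodynamics code.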
A nonlinear generalized continuum approach for electro-elasticity including scale effects
NASA Astrophysics Data System (ADS)
Skatulla, S.; Arockiarajan, A.; Sansour, C.
2009-01-01
Materials characterized by an electro-mechanically coupled behaviour fall into the category of so-called smart materials. In particular, electro-active polymers (EAP) have recently attracted much interest because, upon electrical loading, EAP exhibit a large amount of deformation while sustaining large forces. This property can be utilized for actuators in electro-mechanical systems, artificial muscles and so forth. When it comes to smaller structures, it is a well-known fact that the mechanical response deviates from the prediction of classical mechanics theory. These scale effects are due to the fact that the size of the microscopic material constituents of such structures can no longer be considered negligibly small compared to the structure's overall dimensions. In this context, so-called generalized continuum formulations have been proven to account for the micro-structural influence on the macroscopic material response. Here, we adopt a strain gradient approach based on a generalized continuum framework [Sansour, C., 1998. A unified concept of elastic-viscoplastic Cosserat and micromorphic continua. J. Phys. IV Proc. 8, 341-348; Sansour, C., Skatulla, S., 2007. A higher gradient formulation and meshfree-based computation for elastic rock. Geomech. Geoeng. 2, 3-15] and extend it to also encompass the electro-mechanically coupled behaviour of EAP. The approach introduces new strain and stress measures which lead to the formulation of a corresponding generalized variational principle. The theory is completed by Dirichlet boundary conditions for the displacement field and its derivatives normal to the boundary, as well as the electric potential. The basic idea behind this generalized continuum theory is the consideration of a micro- and a macro-space which together span the generalized space.
As all quantities are defined in this generalized space, the constitutive law, which in this work is conventional electro-mechanically coupled nonlinear hyperelasticity, is also embedded in the generalized continuum. In this way, material information from the micro-space, which here consists only of the geometrical specifications of the micro-continuum, can naturally enter the constitutive law. Several applications with moving least squares-based approximations (MLS) demonstrate the potential of the proposed method. This particular meshfree method is chosen because it has been proven to be highly flexible with regard to the continuity and consistency required by this generalized approach.
Deformation of Soft Tissue and Force Feedback Using the Smoothed Particle Hydrodynamics
Liu, Xuemei; Wang, Ruiyi; Li, Yunhua; Song, Dongdong
2015-01-01
We study the deformation and haptic feedback of soft tissue in virtual surgery based on a liver model by using a force feedback device named PHANTOM OMNI developed by SensAble Company in the USA. Although a significant amount of research effort has been dedicated to simulating the behavior of soft tissue and implementing force feedback, it is still a challenging problem. This paper introduces a meshfree method for deformation simulation of soft tissue and force computation based on a viscoelastic mechanical model and smoothed particle hydrodynamics (SPH). Firstly, the viscoelastic model can represent the mechanical characteristics of soft tissue, which greatly promotes realism. Secondly, SPH has the features of a meshless technique and self-adaption, which provide higher precision than mesh-based methods for force feedback computation. Finally, an SPH method based on a dynamic interaction area is proposed to improve the real-time performance of the simulation. The results reveal that the SPH methodology is suitable for simulating soft tissue deformation and force feedback calculation, and that SPH based on a dynamic local interaction area has significantly higher computational efficiency than standard SPH. Our algorithm has a bright prospect in the area of virtual surgery. PMID:26417380
Rathnayaka, C M; Karunasena, H C P; Senadeera, W; Gu, Y T
2018-03-14
Numerical modelling has gained popularity in many science and engineering streams due to its economic feasibility and advanced analytical features compared to conventional experimental and theoretical models. Food drying is one of the areas where numerical modelling is increasingly applied to improve drying process performance and product quality. This investigation applies a three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) and Coarse-Grained (CG) numerical approach to predict the morphological changes of different categories of food-plant cells such as apple, grape, potato and carrot during drying. To validate the model predictions, experimental findings from in-house experimental procedures (for apple) and sources in the literature (for grape, potato and carrot) have been utilised. The subsequent comparison indicates that the model predictions demonstrate a reasonable agreement with the experimental findings, both qualitatively and quantitatively. In this numerical model, a higher computational accuracy has been maintained by limiting the consistency error to below 1% for all four cell types. The proposed meshfree-based approach is well equipped to predict the morphological changes of plant cellular structure over a wide range of moisture contents (10% to 100% dry basis). Compared to the previous 2-D meshfree-based models developed for plant cell drying, the proposed model can draw more useful insights into the morphological behaviour due to its 3-D nature. In addition, the proposed computational modelling approach has high potential to be used as a comprehensive tool in many other tissue-morphology-related investigations.
Liquid-Gas-Like Phase Transition in Sand Flow Under Microgravity
NASA Astrophysics Data System (ADS)
Huang, Yu; Zhu, Chongqiang; Xiang, Xiang; Mao, Wuwei
2015-06-01
In previous studies of granular flow, it has been found that gravity plays a compacting role, causing convection and stratification by density. However, there is a lack of research on and analysis of the characteristics of particle motion under microgravity as opposed to normal gravity. In this paper, we conduct model experiments on sand flow using a drop-tower-based test system under microgravity, in which the characteristics and development processes of granular flow are captured by high-speed cameras. The configurations of granular flow are simulated using a modified MPS (moving particle simulation) method, which is a mesh-free, pure Lagrangian method. Moreover, liquid-gas-like phase transitions in the sand flow under microgravity, including the transitions to "escaped", "jumping", and "scattered" particles, are highlighted, and their effects on the weakening of shear resistance, enhancement of fluidization, and changes in particle-wall and particle-particle contact mode are analyzed. This study could help explain the surface geology evolution of small solar system bodies and elucidate the nature of granular interaction.
Meshless Method for Simulation of Compressible Flow
NASA Astrophysics Data System (ADS)
Nabizadeh Shahrebabak, Ebrahim
In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means of analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes are available for such simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these are mesh-based techniques, for which mesh generation is an essential preprocessing step to discretize the computational domain. However, when dealing with complex geometries, these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust yet simple numerical approach is used to simulate problems more easily, even in complex cases. The meshless, or meshfree, method is one such development, and it has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have been developed to make this method more popular and accessible. These algorithms have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process.
This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as the shocks that frequently occur in high-speed compressible flow problems. To address this, this research deals with the implementation of a conservative meshless method and its applications in computational fluid dynamics (CFD). One of the most common collocating meshless methods, the RBF-DQ, is used to approximate the spatial derivatives. The issue with meshless methods in highly convective cases is that they cannot distinguish the influence of fluid flow from upstream or downstream, so some methodology is needed to make the scheme stable. Therefore, an upwinding scheme similar to one used in the finite volume method is added to capture steep gradients or shocks. This scheme creates a flexible algorithm within which a wide range of numerical flux schemes, such as those commonly used in the finite volume method, can be employed. In addition, a blended RBF is used to decrease the dissipation resulting from the use of a low shape parameter. All of these steps are formulated for the Euler equations, and a series of test problems is used to confirm convergence of the algorithm. The present scheme was first employed on several incompressible benchmarks to validate the framework; its application is illustrated by solving a set of incompressible Navier-Stokes problems. Results for the compressible problem are compared with the exact solution for flow over a ramp and with solutions from finite volume discretization and the discontinuous Galerkin method, both of which require a mesh. The applicability and robustness of the algorithm are demonstrated on complex problems.
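The RBF-DQ idea, derivative weights obtained by requiring exactness for each radial basis function, can be sketched in 1-D with multiquadrics (the node count and shape parameter below are illustrative; the thesis's blended RBF and upwinding are not included):

```python
import numpy as np

def rbfdq_weights(nodes, xi, c):
    """RBF-DQ derivative weights w such that f'(xi) ~ sum_j w_j f(x_j),
    obtained by requiring the rule to be exact for every multiquadric
    basis function phi_k(x) = sqrt((x - x_k)^2 + c^2)."""
    X = nodes[:, None] - nodes[None, :]
    A = np.sqrt(X ** 2 + c ** 2)                             # A[k, j] = phi_k(x_j)
    d = (xi - nodes) / np.sqrt((xi - nodes) ** 2 + c ** 2)   # d[k] = phi_k'(xi)
    return np.linalg.solve(A, d)

nodes = np.linspace(0.0, 1.0, 11)       # illustrative collocation nodes
w = rbfdq_weights(nodes, xi=0.5, c=0.2) # c is the shape parameter
deriv = w @ np.sin(nodes)               # approximates d/dx sin(x) at x = 0.5
```

The same weights, computed once per node, turn spatial derivatives in the governing equations into weighted sums over scattered neighbours, which is what makes the scheme mesh-free; the shape parameter c controls the accuracy/conditioning trade-off noted in the text.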
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Violette, Daniel M.; Rathbun, Pamela
This chapter focuses on the methods used to estimate net energy savings in evaluation, measurement, and verification (EM and V) studies for energy efficiency (EE) programs. The chapter provides a definition of net savings, which remains an unsettled topic both within the EE evaluation community and across the broader public policy evaluation community, particularly in the context of attribution of savings to a program. The chapter differs from the measure-specific Uniform Methods Project (UMP) chapters in both its approach and work product. Unlike other UMP resources that provide recommended protocols for determining gross energy savings, this chapter describes and compares the current industry practices for determining net energy savings but does not prescribe methods.
Methods for collection and analysis of aquatic biological and microbiological samples
Britton, L.J.; Greeson, P.E.
1989-01-01
The series of chapters on techniques describes methods used by the U.S. Geological Survey for planning and conducting water-resources investigations. The material is arranged under major subject headings called books and is further subdivided into sections and chapters. Book 5 is on laboratory analysis. Section A is on water. The unit of publication, the chapter, is limited to a narrow field of subject matter. "Methods for Collection and Analysis of Aquatic Biological and Microbiological Samples" is the fourth chapter to be published under Section A of Book 5. The chapter number includes the letter of the section.This chapter was prepared by several aquatic biologists and microbiologists of the U.S. Geological Survey to provide accurate and precise methods for the collection and analysis of aquatic biological and microbiological samples.Use of brand, firm, and trade names in this chapter is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey.This chapter supersedes "Methods for Collection and Analysis of Aquatic Biological and Microbiological Samples" edited by P.E. Greeson, T.A. Ehlke, G.A. Irwin, B.W. Lium, and K.V. Slack (U.S. Geological Survey Techniques of Water-Resources Investigations, Book 5, Chapter A4, 1977) and also supersedes "A Supplement to-Methods for Collection and Analysis of Aquatic Biological and Microbiological Samples" by P.E. Greeson (U.S. Geological Survey Techniques of Water-Resources Investigations, Book 5, Chapter A4), Open-File Report 79-1279, 1979.
NASA Technical Reports Server (NTRS)
2004-01-01
The grant closure report is organized into the following chapters: the first chapter describes the two research areas, design optimization and solid mechanics. Ten journal publications are listed in the second chapter. Five highlights are the subject matter of chapter three. CHAPTER 1. The Design Optimization Test Bed CometBoards. CHAPTER 2. Solid Mechanics: Integrated Force Method of Analysis. CHAPTER 3. Five Highlights: Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft. Neural Network and Regression Soft Model Extended for PX-300 Aircraft Engine. Engine with Regression and Neural Network Approximators Designed. Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design. Neural Network and Regression Approximations Used in Aircraft Design.
NASA Astrophysics Data System (ADS)
Katsaounis, T. D.
2005-02-01
The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. 
Examples solving in parallel simple PDEs using Diffpack and MPI are also presented. Chapter 2 presents the overlapping domain decomposition method for solving PDEs. It is well known that these methods are suitable for parallel processing. The first part of the chapter covers the mathematical formulation of the method as well as algorithmic and implementational issues. The second part presents a serial and a parallel implementational framework within the programming environment of Diffpack. The chapter closes by showing how to solve two application examples with the overlapping domain decomposition method using Diffpack. Chapter 3 is a tutorial about how to incorporate the multigrid solver in Diffpack. The method is illustrated by examples such as a Poisson solver, a general elliptic problem with various types of boundary conditions and a nonlinear Poisson type problem. In chapter 4 the mixed finite element is introduced. Technical issues concerning the practical implementation of the method are also presented. The main difficulties of the efficient implementation of the method, especially in two and three space dimensions on unstructured grids, are presented and addressed in the framework of Diffpack. The implementational process is illustrated by two examples, namely the system formulation of the Poisson problem and the Stokes problem. Chapter 5 is closely related to chapter 4 and addresses the problem of how to solve efficiently the linear systems arising by the application of the mixed finite element method. The proposed method is block preconditioning. Efficient techniques for implementing the method within Diffpack are presented. Optimal block preconditioners are used to solve the system formulation of the Poisson problem, the Stokes problem and the bidomain model for the electrical activity in the heart. The subject of chapter 6 is systems of PDEs. Linear and nonlinear systems are discussed. Fully implicit and operator splitting methods are presented. 
Special attention is paid to how existing solvers for scalar equations in Diffpack can be used to derive fully implicit solvers for systems. The proposed techniques are illustrated in terms of two applications, namely a system of PDEs modelling pipeflow and a two-phase porous media flow. Stochastic PDEs is the topic of chapter 7. The first part of the chapter is a simple introduction to stochastic PDEs; basic analytical properties are presented for simple models like transport phenomena and viscous drag forces. The second part considers the numerical solution of stochastic PDEs. Two basic techniques are presented, namely Monte Carlo and perturbation methods. The last part explains how to implement and incorporate these solvers into Diffpack. Chapter 8 describes how to operate Diffpack from Python scripts. The main goal here is to provide all the programming and technical details in order to glue the programming environment of Diffpack with visualization packages through Python and in general take advantage of the Python interfaces. Chapter 9 attempts to show how to use numerical experiments to measure the performance of various PDE solvers. The authors gathered a rather impressive list, a total of 14 PDE solvers. Solvers for problems like Poisson, Navier--Stokes, elasticity, two-phase flows and methods such as finite difference, finite element, multigrid, and gradient type methods are presented. The authors provide a series of numerical results combining various solvers with various methods in order to gain insight into their computational performance and efficiency. In Chapter 10 the authors consider a computationally challenging problem, namely the computation of the electrical activity of the human heart. After a brief introduction on the biology of the problem the authors present the mathematical models involved and a numerical method for solving them within the framework of Diffpack. 
Chapters 11 and 12 are closely related; in fact, they could have been combined into a single chapter. Chapter 11 introduces several mathematical models used in finance, based on the Black--Scholes equation. Chapter 12 considers several numerical methods like Monte Carlo, lattice methods, finite difference and finite element methods. Implementation of these methods within Diffpack is presented in the last part of the chapter. Chapter 13 presents how the finite element method is used for the modelling and analysis of elastic structures. The authors describe the structural elements of Diffpack, which include popular elements such as beams and plates, and examples are presented on how to use them to simulate elastic structures. Chapter 14 describes an application problem, namely the extrusion of aluminum. This is a rather complicated process which involves non-Newtonian flow, heat transfer and elasticity. The authors describe the systems of PDEs modelling the underlying process and use a finite element method to obtain a numerical solution. The implementation of the numerical method in Diffpack is presented along with some applications. The last chapter, chapter 15, focuses on mathematical and numerical models of systems of PDEs governing geological processes in sedimentary basins. The underlying mathematical model is solved using the finite element method within a fully implicit scheme. The authors discuss the implementational issues involved within Diffpack and present results from several examples. In summary, the book focuses on the computational and implementational issues involved in solving partial differential equations. The potential reader should have a basic knowledge of PDEs and the finite difference and finite element methods. The examples presented are solved within the programming framework of Diffpack, and the reader should have prior experience with the particular software in order to take full advantage of the book.
Overall the book is well written, the subject of each chapter is well presented and can serve as a reference for graduate students, researchers and engineers who are interested in the numerical solution of partial differential equations modelling various applications.
Multiobjective Decision Analysis With Engineering and Business Applications
NASA Astrophysics Data System (ADS)
Wood, Eric
The last 15 years have witnessed the development of a large number of multiobjective decision techniques. Applying these techniques to environmental, engineering, and business problems has become well accepted. Multiobjective Decision Analysis With Engineering and Business Applications attempts to cover the main multiobjective techniques both in their mathematical treatment and in their application to real-world problems. The book is divided into 12 chapters plus three appendices. The main portion of the book is represented by chapters 3-6, where the various approaches are identified, classified, and reviewed. Chapter 3 covers methods for generating nondominated solutions; chapter 4, continuous methods with prior preference articulation; chapter 5, discrete methods with prior preference articulation; and chapter 6, methods of progressive articulation of preferences. In these four chapters, close to 20 techniques are discussed with over 20 illustrative examples. This is both a strength and a weakness; the breadth of techniques and examples provides comprehensive coverage, but in a style too mathematically compact for most readers. By my count, the presentation of the 20 techniques in chapters 3-6 covers 85 pages, an average of about 4.5 pages each; therefore, a sound basis in linear algebra and linear programming is required if the reader hopes to follow the material. Chapter 2, "Concepts in Multiobjective Analysis," also assumes such a background.
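The topic of chapter 3, generating nondominated solutions, reduces to a simple dominance test. A minimal Pareto filter for minimization problems (a generic sketch, not a technique from the book) can be written as:

```python
def nondominated(points):
    """Return the nondominated (Pareto-optimal) subset for minimization:
    a point is dominated if some other point is <= in every objective
    and differs from it (hence strictly better in at least one)."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Two objectives to minimize; (3, 3) and (5, 5) are dominated.
pts = [(1, 5), (2, 3), (4, 1), (3, 3), (5, 5)]
front = nondominated(pts)
```

The O(n²) pairwise scan suffices for small decision sets; the book's generating methods instead construct nondominated solutions directly, e.g. by weighting or constraining the objectives.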
Lu, Zhao; Sun, Jing; Butts, Kenneth
2016-02-03
A giant leap has been made in the past couple of decades with the introduction of kernel-based learning as a mainstay for designing effective nonlinear computational learning algorithms. In view of the geometric interpretation of conditional expectation and the ubiquity of multiscale characteristics in highly complex nonlinear dynamic systems [1]-[3], this paper presents a new orthogonal projection operator wavelet kernel, aiming at developing an efficient computational learning approach for nonlinear dynamical system identification. In the framework of multiresolution analysis, the proposed projection operator wavelet kernel can fulfill the multiscale, multidimensional learning to estimate complex dependencies. The special advantage of the projection operator wavelet kernel developed in this paper lies in the fact that it has a closed-form expression, which greatly facilitates its application in kernel learning. To the best of our knowledge, it is the first closed-form orthogonal projection wavelet kernel reported in the literature. It provides a link between grid-based wavelets and mesh-free kernel-based methods. Simulation studies for identifying the parallel models of two benchmark nonlinear dynamical systems confirm its superiority in model accuracy and sparsity.
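The closed-form projection operator wavelet kernel itself is not reproduced here; as a generic illustration of wavelet-kernel learning, kernel ridge regression with the (positive definite) Mexican hat product kernel can be sketched as follows (the bandwidth, regularization, and test function are illustrative assumptions):

```python
import numpy as np

def mexican_hat_kernel(X, Y, a=1.0):
    """Translation-invariant wavelet-style kernel built from the Mexican hat
    mother wavelet h(u) = (1 - u^2) * exp(-u^2 / 2), applied per dimension
    and multiplied across dimensions (positive definite by Bochner's theorem,
    since the Mexican hat has a nonnegative Fourier transform)."""
    U = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod((1.0 - U ** 2) * np.exp(-U ** 2 / 2.0), axis=2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))   # training inputs
y = np.sin(X[:, 0])                    # target dependency to learn

K = mexican_hat_kernel(X, X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)   # kernel ridge regression

Xt = np.array([[0.5]])
y_pred = mexican_hat_kernel(Xt, X) @ alpha
```

Any kernel with a closed-form expression can be dropped into the same ridge solve, which is exactly the practical advantage the paper claims for its closed-form projection operator wavelet kernel.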
A constrained-gradient method to control divergence errors in numerical MHD
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2016-10-01
In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, `divergence-cleaning' schemes reduce the ∇·B errors; however, they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike `locally divergence free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems, we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or `8-wave' cleaning can produce order-of-magnitude errors.
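As a rough illustration of the quantity being controlled (not the constrained-gradient scheme itself, nor GIZMO's meshless discretization), the dimensionless error measure h·|∇·B|/|B| can be evaluated with finite differences on a uniform grid; the field below is invented and analytically divergence-free.

```python
import numpy as np

def divergence_error(Bx, By, h):
    """Dimensionless divergence error h*|div B|/|B| on a uniform
    2-D grid, using central differences via np.gradient."""
    dBx_dx = np.gradient(Bx, h, axis=0)
    dBy_dy = np.gradient(By, h, axis=1)
    div = dBx_dx + dBy_dy
    Bmag = np.sqrt(Bx**2 + By**2) + 1e-30  # avoid division by zero
    return np.abs(div) * h / Bmag

# The rotational field (Bx, By) = (-y, x) is exactly divergence-free,
# so the measured error should vanish to machine precision.
n, h = 32, 1.0 / 32
x, y = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
err = divergence_error(-y, x, h)
print(err.max())
```

A cleaning or projection scheme would act on a field whose measured error is nonzero, driving this diagnostic down toward the level the abstract quotes for the CG method.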
Identifying finite-time coherent sets from limited quantities of Lagrangian data.
Williams, Matthew O; Rypina, Irina I; Rowley, Clarence W
2015-08-01
A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that "leak" from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, "data rich" test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or "mesh-free" methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.
Banerjee, Sourav; Kundu, Tribikram
2008-03-01
Multilayered solid structures made of isotropic, transversely isotropic, or general anisotropic materials are frequently used in aerospace, mechanical, and civil structures. Ultrasonic fields developed in such structures by finite size transducers simulating actual experiments in laboratories or in the field have not been rigorously studied. Several attempts to compute the ultrasonic field inside solid media have been made based on approximate paraxial methods like the classical ray tracing and multi-Gaussian beam models. These approximate methods have several limitations. A new semianalytical method is adopted in this article to model the elastic wave field in multilayered solid structures with planar or nonplanar interfaces generated by finite size transducers. A general formulation good for both isotropic and anisotropic solids is presented in this article. A variety of conditions have been incorporated in the formulation, including irregularities at the interfaces. The method presented here requires frequency domain displacement and stress Green's functions. Due to the presence of different materials in the problem geometry, various elastodynamic Green's functions for different materials are used in the formulation. Expressions of displacement and stress Green's functions for isotropic and anisotropic solids as well as for fluid media are presented. Computed results are verified by checking the stress and displacement continuity conditions across the interface of two different solids of a bimetal plate and by investigating whether the results for a corrugated plate with very small corrugation match the flat-plate results.
A new class of accurate, mesh-free hydrodynamic simulation methods
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2015-06-01
We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume `overlap'. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code GIZMO; this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require `artificial diffusion' terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced `particle noise' and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of `grid alignment' effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize `grid noise'. These differences are important for a range of astrophysical problems.
Analysis options for estimating status and trends in long-term monitoring
Bart, Jonathan; Beyer, Hawthorne L.
2012-01-01
This chapter describes methods for estimating long-term trends in ecological parameters. Other chapters in this volume discuss more advanced methods for analyzing monitoring data, but these methods may be relatively inaccessible to some readers. Therefore, this chapter provides an introduction to trend analysis for managers and biologists while also discussing general issues relevant to trend assessment in any long-term monitoring program. For simplicity, we focus on temporal trends in population size across years. We refer to the survey results for each year as the “annual means” (e.g. mean per transect, per plot, per time period). The methods apply with little or no modification, however, to formal estimates of population size, to other temporal units (e.g. a month), to spatial or other dimensions such as elevation or a north–south gradient, and to other quantities such as chemical or geological parameters. The chapter primarily discusses methods for estimating population-wide parameters rather than studying variation in trend within the population, which can be examined using methods presented in other chapters (e.g. Chapters 7, 12, 20). We begin by reviewing key concepts related to trend analysis. We then describe how to evaluate potential bias in trend estimates. An overview of the statistical models used to quantify trends is then presented. We conclude by showing ways to estimate trends using simple methods that can be implemented with spreadsheets.
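The simple spreadsheet-friendly methods the chapter closes with amount, in the basic case, to an ordinary least-squares line through the annual means. A minimal sketch, with invented counts:

```python
def linear_trend(years, annual_means):
    """Ordinary least-squares slope and intercept of the annual
    means regressed on year (the basic linear trend estimate)."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(annual_means) / n
    sxx = sum((x - mean_x) ** 2 for x in years)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(years, annual_means))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Invented annual means (e.g. mean count per transect).
years = [2005, 2006, 2007, 2008, 2009]
means = [14.2, 13.8, 13.1, 12.9, 12.4]
slope, intercept = linear_trend(years, means)
print(round(slope, 3))  # → -0.45 (annual change in the mean)
```

The same arithmetic is what a spreadsheet's SLOPE/INTERCEPT functions compute; the chapter's further material on bias and statistical models addresses when this simple estimate can be trusted.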
An analysis of temperature-induced errors for an ultrasound distance measuring system. M. S. Thesis
NASA Technical Reports Server (NTRS)
Wenger, David Paul
1991-01-01
The presentation of research is provided in the following five chapters. Chapter 2 presents the necessary background information and definitions for general work with ultrasound and acoustics. It also discusses the basis for errors in the slant range measurements. Chapter 3 presents a method of problem solution and an analysis of the sensitivity of the equations to slant range measurement errors. It also presents various methods by which the error in the slant range measurements can be reduced to improve overall measurement accuracy. Chapter 4 provides a description of a type of experiment used to test the analytical solution and provides a discussion of its results. Chapter 5 discusses the setup of a prototype collision avoidance system, discusses its accuracy, and demonstrates various methods of improving the accuracy along with the improvements' ramifications. Finally, Chapter 6 provides a summary of the work and a discussion of conclusions drawn from it. Additionally, suggestions for further research are made to improve upon what has been presented here.
40 CFR 75.22 - Reference test methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... appendix A to part 60 of this chapter, except for Methods 2B and 2E, are the reference methods for... provided in appendix A to part 60 of this chapter, except for Methods 2B and 2E, for determining volumetric...
40 CFR 75.22 - Reference test methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
... appendix A to part 60 of this chapter, except for Methods 2B and 2E, are the reference methods for... provided in appendix A to part 60 of this chapter, except for Methods 2B and 2E, for determining volumetric...
NASA Astrophysics Data System (ADS)
Serra, Reviewed By Martin J.
2000-01-01
Genomics is one of the most rapidly expanding areas of science. This book is an outgrowth of a series of lectures given by one of the former heads (CRC) of the Human Genome Initiative. The book is designed to reach a wide audience, from biologists with little chemical or physical science background through engineers, computer scientists, and physicists with little current exposure to the chemical or biological principles of genetics. The text starts with a basic review of the chemical and biological properties of DNA. However, without either a biochemistry background or a supplemental biochemistry text, this chapter and much of the rest of the text would be difficult to digest. The second chapter is designed to put DNA into the context of the larger chromosomal unit. Specialized chromosomal structures and sequences (centromeres, telomeres) are introduced, leading to a section on chromosome organization and purification. The next four chapters cover the physical (hybridization, electrophoresis), chemical (polymerase chain reaction), and biological (genetic) techniques that provide the backbone of genomic analysis. These chapters cover in significant detail the fundamental principles underlying each technique and provide a firm background for the remainder of the text. Chapters 7-9 consider the need and methods for the development of physical maps. Chapter 7 primarily discusses chromosomal localization techniques, including in situ hybridization, FISH, and chromosome painting. The next two chapters focus on the development of libraries and clones. In particular, Chapter 9 considers the limitations of current mapping and clone production. The current state and future of DNA sequencing are covered in the next three chapters. The first considers the current methods of DNA sequencing, especially gel-based methods of analysis, although other possible approaches (mass spectrometry) are introduced.
Much of the chapter addresses the limitations of current methods, including analysis of error in sequencing and current bottlenecks in the sequencing effort. The next chapter describes the steps necessary to scale current technologies for the sequencing of entire genomes. Chapter 12 examines alternate methods for DNA sequencing. Initially, methods of single-molecule sequencing and sequencing by microscopy are introduced; the majority of the chapter is devoted to the development of DNA sequencing methods using chip microarrays and hybridization. The remaining chapters (13-15) consider the uses and analysis of DNA sequence information. The initial focus is on the identification of genes. Several examples are given of the use of DNA sequence information for diagnosis of inherited or infectious diseases. The sequence-specific manipulation of DNA is discussed in Chapter 14. The final chapter deals with the implications of large-scale sequencing, including methods for identifying genes and finding errors in DNA sequences, to the development of computer algorithms for the interpretation of DNA sequence information. The text figures are black and white line drawings that, although clearly done, seem a bit primitive for 1999. While I appreciated the simplicity of the drawings, many students accustomed to more colorful presentations will find them wanting. The four color figures in the center of the text seem an afterthought and add little to the text's clarity. Each chapter has a set of additional reading sources, mostly primary sources. Often, specialized topics are offset into boxes that provide clarification and amplification without cluttering the text. An appendix includes a list of the Web-based database resources. As an undergraduate instructor who has previously taught biochemistry, molecular biology, and a course on the human genome, I found many interesting tidbits and amplifications throughout the text. 
I would recommend this book as a text for an advanced undergraduate or beginning graduate course in genomics. Although the text works through several examples of genetic and genome analysis, additional problem/homework sets would need to be developed to ensure student comprehension. The text steers clear of the ethical implications of the Human Genome Initiative and remains true to its subtitle, The Science and Technology.
NASA Astrophysics Data System (ADS)
Roland, Caroline; de Resseguier, Thibaut; Sollier, Arnaud; Lescoute, Emilien; Tangiang, Diouwel; Toulminet, Marc; Soulard, Laurent
2017-06-01
The interaction of a shock wave with a rough free surface may lead to the ejection of micrometric material at high velocity (of order km/s). This microjetting phenomenon is a key issue for many applications, such as industrial safety, pyrotechnics or inertial confinement fusion experiments. We have studied this process from single V-shaped grooves of various angles in copper and tin samples shock-loaded by a high energy laser. Experimental details are presented elsewhere in this conference [T. de Rességuier, C. Roland et al., abstract #000154]. As the Smoothed Particle Hydrodynamics formulation is well suited to the high strains involved in jet expansion and to subsequent fragmentation, this mesh-free method was chosen to simulate microjetting. Computed predictions are compared to experimental results including jet tip and planar surface velocities, spall fracture, and size distribution of the fragments inferred from both fast shadowgraphy and post-recovery observations. Special focus is made on the dependence of the ballistic properties (velocity and mass distributions) of the ejecta on numerical parameters such as the initial inter-particle distance, the smoothing length and a random noise introduced to simulate inner irregularities of the material.
NASA Astrophysics Data System (ADS)
Yan, J. W.; Tong, L. H.; Xiang, Ping
2017-12-01
Free vibration behaviors of single-walled boron nitride nanotubes are investigated using a computational mechanics approach. The Tersoff-Brenner potential is used to describe the atomic interaction between boron and nitrogen atoms. The higher-order Cauchy-Born rule is employed to establish the constitutive relationship for single-walled boron nitride nanotubes on the basis of higher-order gradient continuum theory. It bridges the gap between nanoscale lattice structures and a continuum body. A mesh-free modeling framework is constructed, using the moving Kriging interpolation which automatically satisfies the higher-order continuity, to implement numerical simulation in order to match the higher-order constitutive model. In comparison with conventional atomistic simulation methods, the established atomistic-continuum multi-scale approach possesses advantages in tackling atomic structures with high accuracy and high efficiency. Free vibration characteristics of single-walled boron nitride nanotubes with different boundary conditions, tube chiralities, lengths and radii are examined in case studies. It is pointed out that a critical radius exists for the evaluation of fundamental vibration frequencies of boron nitride nanotubes; opposite trends can be observed prior to and beyond the critical radius. Simulation results are presented and discussed.
40 CFR Table 4 to Subpart Jjjjjj... - Performance (Stack) Testing Requirements
Code of Federal Regulations, 2012 CFR
2012-07-01
...-factor methodology in appendix A-7 to part 60 of this chapter. 3. Carbon Monoxide a. Select the sampling... carbon monoxide emission concentration Method 10, 10A, or 10B in appendix A-4 to part 60 of this chapter... location and the number of traverse points Method 1 in appendix A-1 to part 60 of this chapter. b...
40 CFR Table 4 to Subpart Jjjjjj... - Performance (Stack) Testing Requirements
Code of Federal Regulations, 2014 CFR
2014-07-01
...-factor methodology in appendix A-7 to part 60 of this chapter. 3. Carbon Monoxide a. Select the sampling... carbon monoxide emission concentration Method 10, 10A, or 10B in appendix A-4 to part 60 of this chapter... location and the number of traverse points Method 1 in appendix A-1 to part 60 of this chapter. b...
40 CFR Table 4 to Subpart Jjjjjj... - Performance (Stack) Testing Requirements
Code of Federal Regulations, 2013 CFR
2013-07-01
...-factor methodology in appendix A-7 to part 60 of this chapter. 3. Carbon Monoxide a. Select the sampling... carbon monoxide emission concentration Method 10, 10A, or 10B in appendix A-4 to part 60 of this chapter... location and the number of traverse points Method 1 in appendix A-1 to part 60 of this chapter. b...
40 CFR Table 4 to Subpart Jjjjjj... - Performance (Stack) Testing Requirements
Code of Federal Regulations, 2011 CFR
2011-07-01
...-factor methodology in appendix A-7 to part 60 of this chapter. 3. Carbon Monoxide a. Select the sampling... carbon monoxide emission concentration Method 10, 10A, or 10B in appendix A-4 to part 60 of this chapter... location and the number of traverse points Method 1 in appendix A-1 to part 60 of this chapter. b...
Quantum Monte Carlo Calculations Applied to Magnetic Molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelhardt, Larry
2006-01-01
We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules, and, upon comparing the results of these calculations with experimental data, have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space, while still avoiding any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method; nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems that are studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work.
With these ideas in hand, we then provide a detailed explanation of the current QMC method in Chapter 4. The remainder of the thesis is devoted to presenting specific results: Chapters 5 and 6 contain articles in which this method has been used to answer general questions that are relevant to broad classes of systems. Then, in Chapter 7, we provide an analysis of four different species of magnetic molecules that have recently been synthesized and studied. In all cases, comparisons between QMC calculations and experimental data allow us to distinguish a viable microscopic model and make predictions for future experiments. In Chapter 8, the infamous "negative sign problem" is described in detail, and we clearly indicate the limitations on QMC that are imposed by this obstacle. Finally, Chapter 9 contains a summary of the present work and the expected directions for future research.
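As a toy illustration of the role importance sampling plays here (far simpler than the QMC algorithm of Chapter 4, and invented for this sketch), the snippet below estimates an expectation value and, as the thesis describes, obtains its statistical error from the sample variance accumulated during the run.

```python
import random

def estimate(f, sampler, weight, n):
    """Monte Carlo estimate of E[f] under the target distribution,
    drawing from `sampler` and reweighting by `weight`; returns
    (mean, standard_error), the error being purely statistical."""
    vals = [f(x) * weight(x) for x in (sampler() for _ in range(n))]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, (var / n) ** 0.5

random.seed(0)
# Estimate the mean of x^2 under the uniform density on [0, 1]
# (exact value 1/3); sampling uniformly makes the weight 1.
mean, err = estimate(lambda x: x * x, random.random,
                     lambda x: 1.0, 20000)
print(mean, err)
```

The key feature mirrored from the abstract: no systematic approximation is made, and the reported error bar comes directly from the spread of the sampled values.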
NASA Astrophysics Data System (ADS)
Bazilevs, Y.; Kamran, K.; Moutsanidis, G.; Benson, D. J.; Oñate, E.
2017-07-01
In this two-part paper we begin the development of a new class of methods for modeling fluid-structure interaction (FSI) phenomena for air blast. We aim to develop accurate, robust, and practical computational methodology which is capable of modeling the dynamics of air blast coupled with the structure response, where the latter involves large, inelastic deformations and disintegration into fragments. An immersed approach is adopted, which leads to an a priori monolithic FSI formulation with intrinsic contact detection between solid objects, and without formal restrictions on the solid motions. In Part I of this paper, the core air-blast FSI methodology suitable for a variety of discretizations is presented and tested using standard finite elements. Part II of this paper focuses on a particular instantiation of the proposed framework, which couples isogeometric analysis (IGA) based on non-uniform rational B-splines and a reproducing-kernel particle method (RKPM), which is a meshfree technique. The combination of IGA and RKPM is felt to be particularly attractive for the problem class of interest due to the higher-order accuracy and smoothness of both discretizations, and the relative simplicity of RKPM in handling fragmentation scenarios. A collection of mostly 2D numerical examples is presented in each of the parts to illustrate the good performance of the proposed air-blast FSI framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Baumgartner, Robert
This chapter presents an overview of best practices for designing and executing survey research to estimate gross energy savings in energy efficiency evaluations. A detailed description of the specific techniques and strategies for designing questions, implementing a survey, and analyzing and reporting the survey procedures and results is beyond the scope of this chapter. So for each topic covered below, readers are encouraged to consult articles and books cited in References, as well as other sources that cover the specific topics in greater depth. This chapter focuses on the use of survey methods to collect data for estimating gross savings from energy efficiency programs.
Kashyap, Kanchan L; Bajpai, Manish K; Khanna, Pritee; Giakos, George
2018-01-01
Automatic segmentation of abnormal regions is a crucial task in computer-aided detection systems using mammograms. In this work, an automatic abnormality detection algorithm using mammographic images is proposed. In the preprocessing step, a partial differential equation-based variational level set method is used for breast region extraction. The evolution of the level set method is carried out by applying a mesh-free radial basis function (RBF) method, which removes the limitation of the mesh-based approach. For comparison, the evolution of the variational level set function is also carried out with a mesh-based finite difference method. Unsharp masking and median filtering are used for mammogram enhancement. Suspicious abnormal regions are segmented by applying fuzzy c-means clustering. Texture features are extracted from the segmented suspicious regions by computing the local binary pattern and the dominant rotated local binary pattern (DRLBP). Finally, suspicious regions are classified as normal or abnormal by means of a support vector machine with linear, multilayer perceptron, radial basis, and polynomial kernel functions. The algorithm is validated on 322 sample mammograms from the Mammographic Image Analysis Society (MIAS) dataset and 500 mammograms from the Digital Database for Screening Mammography (DDSM) dataset. Proficiency of the algorithm is quantified using sensitivity, specificity, and accuracy. The highest sensitivity, specificity, and accuracy of 93.96%, 95.01%, and 94.48%, respectively, are obtained on the MIAS dataset using the DRLBP feature with the RBF kernel function, while the highest 92.31% sensitivity, 98.45% specificity, and 96.21% accuracy are achieved on the DDSM dataset with the same feature and kernel. Copyright © 2017 John Wiley & Sons, Ltd.
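A hedged sketch of the plain local binary pattern underlying the texture features: the 8-neighbour, radius-1 variant on an invented pixel patch, not the authors' DRLBP implementation.

```python
def lbp_code(image, r, c):
    """8-bit LBP code for pixel (r, c): each of the 8 neighbours
    contributes a 1 if it is >= the centre value, read clockwise
    starting from the top-left neighbour (bit 0)."""
    centre = image[r][c]
    neighbours = [image[r-1][c-1], image[r-1][c], image[r-1][c+1],
                  image[r][c+1],   image[r+1][c+1], image[r+1][c],
                  image[r+1][c-1], image[r][c-1]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code

# Invented 3x3 grey-level patch; neighbours >= 5 set bits 0, 1, 6, 7.
patch = [[9, 8, 1],
         [7, 5, 2],
         [6, 4, 3]]
print(lbp_code(patch, 1, 1))  # → 195
```

A histogram of such codes over a segmented region gives the texture feature vector that is then fed to the SVM classifier; DRLBP additionally makes the codes rotation-dominant, which is not shown here.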
Ecological Census Techniques - 2nd Edition
NASA Astrophysics Data System (ADS)
Sutherland, Edited By William J.
2006-08-01
This is an updated version of the best-selling first edition, Ecological Census Techniques, revised throughout with some new chapters and authors. Almost all ecological and conservation work involves carrying out a census or survey. This practically focussed book describes how to plan a census, covers the practical details, and shows with worked examples how to analyse the results. The first three chapters describe planning, sampling and the basic theory necessary for carrying out a census. In the subsequent chapters, international experts describe the appropriate methods for counting plants, insects, fish, amphibians, reptiles, mammals and birds. As many censuses also relate the results to environmental variability, there is a chapter explaining the main methods. Finally, there is a list of the most common mistakes encountered when carrying out a census. The book gives worked examples and describes practical details; the chapter on research planning provides an approach for planning any research, not just research relating to census techniques. This latest edition of a very highly regarded book includes new authors, an updated version of each chapter, and additional chapters on sampling and designing research programmes.
Aerodynamic Design of Axial-flow Compressors. Volume III
NASA Technical Reports Server (NTRS)
Johnson, Irving A; Bullock, Robert O; Graham, Robert W; Costilow, Eleanor L; Huppert, Merle C; Benser, William A; Herzig, Howard Z; Hansen, Arthur G; Jackson, Robert J; Yohner, Peggy L;
1956-01-01
Chapters XI to XIII concern the unsteady compressor operation arising when compressor blade elements stall. The fields of compressor stall and surge are reviewed in Chapters XI and XII, respectively. The part-speed operating problem in high-pressure-ratio multistage axial-flow compressors is analyzed in Chapter XIII. Chapter XIV summarizes design methods and theories that extend beyond the simplified two-dimensional approach used previously in the report. Chapter XV extends this three-dimensional treatment by summarizing the literature on secondary flows and boundary layer effects. Charts for determining the effects of errors in design parameters and experimental measurements on compressor performance are given in Chapter XVI. Chapter XVII reviews existing literature on compressor and turbine matching techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Richard O.
The application of statistics to environmental pollution monitoring studies requires a knowledge of statistical analysis methods particularly well suited to pollution data. This book fills that need by providing sampling plans, statistical tests, parameter estimation techniques, and references to pertinent publications. Most of the statistical techniques are relatively simple, and examples, exercises, and case studies are provided to illustrate procedures. The book is logically divided into three parts. Chapters 1, 2, and 3 are introductory chapters. Chapters 4 through 10 discuss field sampling designs and Chapters 11 through 18 deal with a broad range of statistical analysis procedures. Some statistical techniques given here are not commonly seen in statistics books. For example, see methods for handling correlated data (Sections 4.5 and 11.12), for detecting hot spots (Chapter 10), and for estimating a confidence interval for the mean of a lognormal distribution (Section 13.2). Also, Appendix B lists a computer code that estimates and tests for trends over time at one or more monitoring stations using nonparametric methods (Chapters 16 and 17). Unfortunately, some important topics could not be included because of their complexity and the need to limit the length of the book. For example, only brief mention could be made of time series analysis using Box-Jenkins methods and of kriging techniques for estimating spatial and spatial-time patterns of pollution, although multiple references on these topics are provided. Also, no discussion of methods for assessing risks from environmental pollution could be included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuo, Ye
2011-01-01
In this thesis, we theoretically study electromagnetic wave propagation in several passive and active optical components and devices, including 2-D photonic crystals, straight and curved waveguides, and organic light emitting diodes (OLEDs). Several optical designs are also presented, such as organic photovoltaic (OPV) cells and solar concentrators. The first part of the thesis focuses on theoretical investigation. First, the plane-wave-based transfer (scattering) matrix method (TMM) is briefly described, with a short review of photonic crystals and other numerical methods to study them (chapters 1 and 2). Next, the numerical method itself is investigated in detail and extended to deal with more complex optical systems. In chapter 3, TMM is extended in curvilinear coordinates to study curved nanoribbon waveguides. The problem of a curved structure is transformed into an equivalent one of a straight structure with spatially dependent tensors of dielectric constant and magnetic permeability. In chapter 4, a new set of localized basis orbitals is introduced to locally represent the electromagnetic field in photonic crystals as an alternative to the plane-wave basis. The second part of the thesis focuses on the design of optical devices. First, two examples of TMM applications are given. The first example is the design of metal grating structures as a replacement for ITO to enhance the optical absorption in OPV cells (chapter 6). The second is the design of the same structure to enhance the light extraction of OLEDs (chapter 7). Next, two design examples by the ray tracing method are given: applying a microlens array to enhance the light extraction of OLEDs (chapter 5) and an all-angle, wide-wavelength design of a solar concentrator (chapter 8). In summary, this dissertation has extended TMM, making it capable of treating complex optical systems.
Several optical designs by TMM and ray tracing method are also given as a full complement of this work.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Huang, Robert; Masanet, Eric
This chapter focuses on IT measures in the data center and examines the techniques and analysis methods used to verify savings that result from improving the efficiency of two specific pieces of IT equipment: servers and data storage.
Substructures in Clusters of Galaxies
NASA Astrophysics Data System (ADS)
Lehodey, Brigitte Tome
2000-01-01
This dissertation presents two methods for the detection of substructures in clusters of galaxies and the results of their application to a group of four clusters. In Chapters 2 and 3, we review the main properties of clusters of galaxies and give the definition of substructures. We also try to show why the study of substructures in clusters of galaxies is so important for cosmology. Chapters 4 and 5 describe the two methods: the first, the adaptive kernel, is applied to the study of the spatial and kinematical distribution of the cluster galaxies; the second, the MVM (Multiscale Vision Model), is applied to analyse the cluster diffuse X-ray emission, i.e., the intracluster gas distribution. At the end of these two chapters, we also present the results of applying these methods to our sample of clusters. In Chapter 6, we draw conclusions from the comparison of the results obtained with each method. In the last chapter, we present the main conclusions of this work and point out possible future developments. We close with two appendices in which we detail some questions raised in this work not directly linked to the problem of substructure detection.
Analysis of heavy oils: Method development and application to Cerro Negro heavy petroleum
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1989-12-01
On March 6, 1980, the US Department of Energy (DOE) and the Ministry of Energy and Mines of Venezuela (MEMV) entered into a joint agreement which included analysis of heavy crude oils from the Venezuelan Orinoco oil belt. The purpose of this report is to present compositional data and describe new analytical methods obtained from work on the Cerro Negro Orinoco belt crude oil since 1980. Most of the chapters focus on the methods rather than the resulting data on Cerro Negro oil, and results from other oils obtained during verification of the methods are included. In addition, published work on analysis of heavy oils, tar sand bitumens, and like materials is reviewed, and the overall state of the art in analytical methodology for heavy fossil liquids is assessed. The various phases of the work included: distillation and determination of "routine" physical/chemical properties (Chapter 1); preliminary separation of >200 °C distillates and the residue into acid, base, neutral, saturated-hydrocarbon, and neutral-aromatic concentrates (Chapter 2); further separation of the acid, base, and neutral concentrates into subtypes (Chapters 3-5); and determination of the distribution of metal-containing compounds in all fractions (Chapter 6).
Electromagnetic Inverse Methods and Applications for Inhomogeneous Media Probing and Synthesis.
NASA Astrophysics Data System (ADS)
Xia, Jake Jiqing
The electromagnetic inverse scattering problems concerned in this thesis are to find unknown inhomogeneous permittivity and conductivity profiles in a medium from scattering data. Both analytical and numerical methods are studied in the thesis. The inverse methods can be applied to geophysical medium probing, non-destructive testing, medical imaging, optical waveguide synthesis, and material characterization. An introduction is given in Chapter 1. The first part of the thesis presents inhomogeneous media probing. The Riccati equation approach is discussed in Chapter 2 for a one-dimensional planar profile inversion problem. Two types of the Riccati equation are derived and distinguished. New renormalized formulae based on inverting one specific type of the Riccati equation are derived. Relations between the inverse methods of Green's function, the Riccati equation, and the Gel'fand-Levitan-Marchenko (GLM) theory are studied. In Chapter 3, the renormalized source-type integral equation (STIE) approach is formulated for inversion of cylindrically inhomogeneous permittivity and conductivity profiles. The advantages of the renormalized STIE approach are demonstrated in numerical examples. The cylindrical profile inversion problem has an application in borehole inversion. In Chapter 4 the renormalized STIE approach is extended to a planar case where the two background media are different. Numerical results have shown fast convergence. This formulation is applied to inversion of underground soil moisture profiles in remote sensing. The second part of the thesis presents the synthesis problem of inhomogeneous dielectric waveguides using the electromagnetic inverse methods. As a particular example, the rational function representation of reflection coefficients in the GLM theory is used. The GLM method is reviewed in Chapter 5. Relations between modal structures and transverse reflection coefficients of an inhomogeneous medium are established in Chapter 6.
A stratified medium model is used to derive the guidance condition and the reflection coefficient. Results obtained in Chapter 6 provide the physical foundation for applying the inverse methods for the waveguide design problem. In Chapter 7, a global guidance condition for continuously varying medium is derived using the Riccati equation. It is further shown that the discrete modes in an inhomogeneous medium have the same wave vectors as the poles of the transverse reflection coefficient. An example of synthesizing an inhomogeneous dielectric waveguide using a rational reflection coefficient is presented. A summary of the thesis is given in Chapter 8. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.).
This chapter provides a brief introduction to whole effluent toxicity (WET) testing and describes the regulatory background and context of WET testing. This chapter also describes the purpose of this document and outlines the issues addressed in each chapter.
Man Over Methods: Images of Educational Leadership.
ERIC Educational Resources Information Center
Bergsma, Harold; And Others
Papers written by education graduate students for a class on instructional leadership are organized into four chapters. Chapter 1, "History of the Nature of Leadership: 1900-1981," examines the nature of leadership in education according to major historical events and their effects on educational supervision. Chapter 2, "Dynamic…
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
A water budget is an accounting of water movement into and out of, and storage change within, some control volume. Universal and adaptable are adjectives that reflect key features of water-budget methods for estimating recharge. The universal concept of mass conservation of water implies that water-budget methods are applicable over any space and time scales (Healy et al., 2007). The water budget of a soil column in a laboratory can be studied at scales of millimeters and seconds. A water-budget equation is also an integral component of atmospheric general circulation models used to predict global climates over periods of decades or more. Water-budget equations can be easily customized by adding or removing terms to accurately portray the peculiarities of any hydrologic system. The equations are generally not bound by assumptions on mechanisms by which water moves into, through, and out of the control volume of interest. So water-budget methods can be used to estimate both diffuse and focused recharge, and recharge estimates are unaffected by phenomena such as preferential flow paths within the unsaturated zone. Water-budget methods represent the largest class of techniques for estimating recharge. Most hydrologic models are derived from a water-budget equation and can therefore be classified as water-budget models. It is not feasible to address all water-budget methods in a single chapter. This chapter is limited to discussion of the “residual” water-budget approach, whereby all variables in a water-budget equation, except for recharge, are independently measured or estimated and recharge is set equal to the residual. This chapter is closely linked with Chapter 3, on modeling methods, because the equations presented here form the basis of many models and because models are often used to estimate individual components in water-budget studies. Water budgets for streams and other surface-water bodies are addressed in Chapter 4.
The use of soil-water budgets and lysimeters for determining potential recharge and evapotranspiration from changes in water storage is discussed in Chapter 5. Aquifer water-budget methods based on the measurement of groundwater levels are described in Chapter 6.
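The residual approach described above closes the budget algebraically: every term except recharge is measured or estimated independently, and recharge is whatever remains. The sketch below is illustrative only (the function and variable names are ours, not from the chapter), assuming all terms are expressed in consistent units such as millimeters per accounting period.

```python
# Residual water-budget recharge estimate (illustrative sketch).
# Recharge R is computed as the residual of precipitation P,
# evapotranspiration ET, surface runoff Q, and storage change dS.
def residual_recharge(precip_mm, et_mm, runoff_mm, storage_change_mm):
    """R = P - ET - Q - dS, all terms in the same units (e.g., mm/year)."""
    return precip_mm - et_mm - runoff_mm - storage_change_mm

# Example: 900 mm precipitation, 550 mm ET, 200 mm runoff, 25 mm storage gain
print(residual_recharge(900.0, 550.0, 200.0, 25.0))  # 125.0 mm of recharge
```

Note that any measurement error in the independently estimated terms accumulates directly in the residual, which is the principal caveat of this class of methods.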
What are hierarchical models and how do we analyze them?
Royle, Andy
2016-01-01
In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference, including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10).
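As a hedged illustration of the site-occupancy model named above (our own sketch, not code from the book): the likelihood must account for the fact that an all-zero detection history can arise either from an occupied site where every visit missed the species, or from a genuinely unoccupied site. All names below are illustrative.

```python
import math

def site_occupancy_loglik(histories, psi, p):
    """Log-likelihood under a basic site-occupancy model.
    histories: list of per-site detection histories, e.g. (1, 0, 1);
    psi: probability a site is occupied;
    p: per-visit detection probability given occupancy."""
    ll = 0.0
    for y in histories:
        detected = sum(y)
        n_visits = len(y)
        # Occupied-site contribution: Bernoulli detections across visits.
        occ_term = psi * (p ** detected) * ((1 - p) ** (n_visits - detected))
        # An all-zero history can also come from an unoccupied site.
        empty_term = (1 - psi) if detected == 0 else 0.0
        ll += math.log(occ_term + empty_term)
    return ll

# Two sites, two visits each: one site detected once, one never detected.
print(site_occupancy_loglik([(1, 0), (0, 0)], psi=0.5, p=0.5))
```

In practice one would maximize this function over (psi, p) or embed it in an MCMC sampler, which is the mechanics the chapter walks through.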
Saving Money Through Energy Conservation.
ERIC Educational Resources Information Center
Presley, Michael H.; And Others
This publication is an introduction to personal energy conservation. The first chapter presents a rationale for conserving energy and points out that private citizens control about one third of this country's energy consumption. Chapters two and three show how to save money by saving energy. Chapter two discusses energy conservation methods in the…
A measurement of the mass of the top quark using the ideogram technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houben, Pieter Willem Huib
2009-06-03
This thesis describes a measurement of the mass of the top quark using data collected with the D0 detector at the Tevatron collider in the period from 2002 until 2006. The first chapter describes the Standard Model and the prominent role of the top quark mass. The second chapter gives a description of the D0 detector, which is used for this measurement. After the p$\bar{p}$ collisions have been recorded, reconstruction of physics objects is required, which is described in Chapter 3. Chapter 4 describes how the interesting collisions, in which top quarks are produced, are separated from the 'uninteresting' ones with a set of selection criteria. The method to extract the top quark mass from the sample of selected collisions (also called events), which is based on the ideogram technique, is explained in Chapter 5, followed in Chapter 6 by a description of the calibration of the method using simulation of our most precise knowledge of nature. Chapter 7 shows the result of the measurement together with some cross-checks and an estimation of the uncertainty on this measurement. The thesis concludes with a constraint on the Higgs boson mass.
Thermodynamics fundamentals of energy conversion
NASA Astrophysics Data System (ADS)
Dan, Nicolae
The work reported in Chapters 1-5 focuses on the fundamentals of heat transfer, fluid dynamics, thermodynamics, and electrical phenomena related to the conversion of one form of energy to another. Chapter 6 is a re-examination of the fundamental heat transfer problem of how to connect a finite-size heat-generating volume to a concentrated sink. Chapter 1 extends to electrical machines the combined thermodynamics and heat transfer optimization approach that has been developed for heat engines. The conversion efficiency at maximum power is 1/2. When, as in specific applications, the operating temperature of windings must not exceed a specified level, the power output is lower and the efficiency higher. Chapter 2 addresses the fundamental problem of determining the optimal history (regime of operation) of a battery so that the work output is maximum. Chapters 3 and 4 report the energy conversion aspects of an expanding mixture of hot particles, steam, and liquid water. At the elemental level, steam annuli develop around the spherical drops as time increases. At the mixture level, the density decreases while the pressure and velocity increase. Chapter 4 describes numerically, based on the finite element method, the time evolution of the expanding mixture of hot spherical particles, steam, and water. The fluid particles are moved in time in a Lagrangian manner to simulate the change of the domain configuration. Chapter 5 describes the process of thermal interaction between the molten material and water. In the second part of the chapter, the model accounts for the irreversibility due to the flow of the mixture through the cracks of the mixing vessel. The approach presented in this chapter is based on exergy analysis and represents a departure from the line of inquiry that was followed in Chapters 3 and 4. Chapter 6 shows that the geometry of the heat flow path between a volume and one point can be optimized in two fundamentally different ways.
In the "growth" method the structure is optimized starting from the smallest volume element of fixed size. In the "design" method the overall volume is fixed, and the designer works "inward" by increasing the internal complexity of the paths for heat flow.
1980-03-31
Comparison with the method of Papper and Moler (1974): The method of calculation described in Chapter 3 and applied in this chapter was ... digitization of the profiles. Using their method, Papper and Moler (private communication) have kindly performed calculations corresponding to those presented
Homeland Security Collaboration: Catch Phrase or Preeminent Organizational Construct?
2009-09-01
collaborative effort? C. RESEARCH METHODOLOGY: This research project utilized a modified case study methodology. The traditional case study method ... discussing the research method, offering smart practices, and culminating with findings and recommendations. Chapter II, Homeland Security Collaboration ... Chapter III, Research Methodology: Modified Case Study Method ...
P. B. Woodbury; D. A. Weinstein
2010-01-01
We reviewed probabilistic regional risk assessment methodologies to identify the methods that are currently in use and are capable of estimating threats to ecosystems from fire and fuels, invasive species, and their interactions with stressors. In a companion chapter, we highlight methods useful for evaluating risks from fire. In this chapter, we highlight methods...
NASA Astrophysics Data System (ADS)
Prasanna Kumar, S. S.; Patnaik, B. S. V.; Ramamurthi, K.
2018-04-01
The mitigation of blast waves propagating in air and interacting with rigid barriers and obstacles is numerically investigated using the mesh-free smoothed particle hydrodynamics method. A novel virtual boundary particle procedure with a skewed gradient wall boundary treatment is applied at the interfaces between air and rigid bodies. This procedure is validated with closed-form solutions for strong and weak shock reflection from rigid surfaces, supersonic flows over a wedge, formation of reflected, transverse, and Mach stem shocks, and also earlier experiments on interaction of a blast wave with concrete blocks. The mitigation of the overpressure and impulse transmitted to the protected structure due to an array of rigid obstacles of different shapes placed in the path of the blast wave is thereafter determined and discussed in the context of the existing experimental and numerical studies. It is shown that blockages having the shape of a right facing triangle or square placed in tandem or staggered provide better mitigation. The influence of the distance between the blockage array and protected structure is assessed, and the incorporation of a gap in the blockages is shown to improve the mitigation. The mechanisms responsible for the attenuation of air blast are identified through the simulations.
Bayesian Methods for the Physical Sciences. Learning from Examples in Astronomy and Physics.
NASA Astrophysics Data System (ADS)
Andreon, Stefano; Weaver, Brian
2015-05-01
Chapter 1: This chapter presents some basic steps for performing a good statistical analysis, all summarized in about one page. Chapter 2: This short chapter introduces the basics of probability theory in an intuitive fashion using simple examples. It also illustrates, again with examples, how to propagate errors and the difference between marginal and profile likelihoods. Chapter 3: This chapter introduces the computational tools and methods that we use for sampling from the posterior distribution. Since all numerical computations, and Bayesian ones are no exception, may end in errors, we also provide a few tips to check that the numerical computation is sampling from the posterior distribution. Chapter 4: Many of the concepts of building, running, and summarizing the results of a Bayesian analysis are described with this step-by-step guide using a basic (Gaussian) model. The chapter also introduces examples using Poisson and Binomial likelihoods, and how to combine repeated independent measurements. Chapter 5: All statistical analyses make assumptions, and Bayesian analyses are no exception. This chapter emphasizes that results depend on data and priors (assumptions). We illustrate this concept with examples where the prior plays greatly different roles, from major to negligible. We also provide some advice on how to look for information useful for sculpting the prior. Chapter 6: In this chapter we consider examples for which we want to estimate more than a single parameter. These common problems include estimating location and spread. We also consider examples that require the modeling of two populations (one we are interested in and a nuisance population) or averaging incompatible measurements. We also introduce quite complex examples dealing with upper limits and with a larger-than-expected scatter. Chapter 7: Rarely is a sample randomly selected from the population we wish to study.
Often, samples are affected by selection effects, e.g., easier-to-collect events or objects are over-represented in samples and difficult-to-collect ones are under-represented, if not missing altogether. In this chapter we show how to account for non-random data collection to infer the properties of the population from the studied sample. Chapter 8: In this chapter we introduce regression models, i.e., how to fit (regress) one or more quantities against each other through a functional relationship and estimate any unknown parameters that dictate this relationship. Questions of interest include: how to deal with samples affected by selection effects? How does a rich data structure influence the fitted parameters? And what about non-linear multiple-predictor fits, upper/lower limits, measurement errors of different amplitudes, and an intrinsic variety in the studied populations or an extra source of variability? A number of examples illustrate how to answer these questions and how to predict the value of an unavailable quantity by exploiting the existence of a trend with another, available, quantity. Chapter 9: This chapter provides some advice on how the careful scientist should perform model checking and sensitivity analysis, i.e., how to answer the following questions: is the considered model at odds with the currently available data (the fitted data), for example because it is over-simplified compared to some specific complexity pointed out by the data? Furthermore, are the data informative about the quantity being measured, or are results sensibly dependent on details of the fitted model? And, finally, what if assumptions are uncertain? A number of examples illustrate how to answer these questions.
Chapter 10: This chapter compares the performance of Bayesian methods against simple, non-Bayesian alternatives, such as maximum likelihood, minimal chi square, ordinary and weighted least square, bivariate correlated errors and intrinsic scatter, and robust estimates of location and scale. Performances are evaluated in terms of quality of the prediction, accuracy of the estimates, and fairness and noisiness of the quoted errors. We also focus on three failures of maximum likelihood methods occurring with small samples, with mixtures, and with regressions with errors in the predictor quantity.
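The posterior-sampling machinery referred to in Chapter 3 can be illustrated with a minimal random-walk Metropolis sampler for the mean of a Normal model with known unit variance and a flat prior. This is a generic sketch under those stated assumptions, using only the Python standard library; it is not code from the book, and all names are ours.

```python
import math
import random

def metropolis_mean(data, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis sampler for mu in a Normal(mu, 1) model
    with a flat prior. Returns the chain of posterior draws of mu."""
    random.seed(seed)
    mu = 0.0  # arbitrary starting value

    def loglik(m):
        # Log-likelihood up to an additive constant.
        return -0.5 * sum((x - m) ** 2 for x in data)

    draws = []
    for _ in range(n_iter):
        proposal = mu + random.gauss(0.0, step)
        # Accept with probability min(1, likelihood ratio).
        if math.log(random.random()) < loglik(proposal) - loglik(mu):
            mu = proposal
        draws.append(mu)
    return draws

# After discarding burn-in, the draws approximate the posterior of mu,
# which here is centered on the sample mean with sd 1/sqrt(n).
chain = metropolis_mean([1.8, 2.1, 2.4, 1.9, 2.3], n_iter=20000)
print(sum(chain[5000:]) / len(chain[5000:]))
```

The book's own checks (trace plots, comparing independent chains) are exactly the kind of diagnostics one would apply to `chain` before trusting its summaries.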
Sandia Simple Particle Tracking (Sandia SPT) v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony, Stephen M.
2015-06-15
Sandia SPT is designed as software to accompany a methods book chapter that provides an introduction on how to label and track individual proteins. The Sandia Simple Particle Tracking code uses techniques common to the image processing community; its value is that it facilitates implementing the methods described in the book chapter by providing the necessary open-source code. The code performs single-particle spot detection (or segmentation and localization) followed by tracking (or connecting the detected particles into trajectories). The book chapter, which along with the headers in each file constitutes the documentation for the code, is: Anthony, S.M.; Carroll-Portillo, A.; Timlon, J.A., Dynamics and Interactions of Individual Proteins in the Membrane of Living Cells. In Anup K. Singh (Ed.), Single Cell Protein Analysis, Methods in Molecular Biology. Springer.
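The detect-then-link pipeline described here can be illustrated with a greedy nearest-neighbor frame-to-frame linker. This is a generic sketch of the linking step only, with our own names and logic; it is not the Sandia SPT implementation, which readers should consult directly.

```python
def link_frames(frame_a, frame_b, max_disp):
    """Greedily link detected spots between two consecutive frames.
    frame_a, frame_b: lists of (x, y) spot positions;
    max_disp: maximum allowed displacement between frames.
    Returns a list of (index_in_a, index_in_b) pairs."""
    links, used = [], set()
    for i, (xa, ya) in enumerate(frame_a):
        best_j, best_d2 = None, max_disp ** 2
        for j, (xb, yb) in enumerate(frame_b):
            if j in used:
                continue  # each spot in frame_b may be claimed once
            d2 = (xa - xb) ** 2 + (ya - yb) ** 2
            if d2 <= best_d2:
                best_j, best_d2 = j, d2
        if best_j is not None:
            links.append((i, best_j))
            used.add(best_j)
    return links

# Two spots that each moved less than one pixel between frames:
print(link_frames([(0, 0), (5, 5)], [(0.4, 0.0), (5.2, 5.1)], max_disp=1.0))
```

Production trackers typically solve the assignment globally (e.g., via the Hungarian algorithm) and handle appearing/disappearing particles; the greedy version above is only the simplest workable form of the idea.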
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Violette, Daniel M.
Addressing other evaluation issues that have been raised in the context of energy efficiency programs, this chapter focuses on methods used to address the persistence of energy savings, which is an important input to the benefit/cost analysis of energy efficiency programs and portfolios. In addition to discussing 'persistence' (which refers to the stream of benefits over time from an energy efficiency measure or program), this chapter provides a summary treatment of related issues: synergies across programs, rebound, dual baselines, and errors in variables (the measurement and/or accuracy of input variables to the evaluation).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Khawaja, M. Sami; Rushton, Josh
Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.
Development and application of QM/MM methods to study the solvation effects and surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dibya, Pooja Arora
2010-01-01
Quantum mechanical (QM) calculations have the advantage of attaining high-level accuracy; however, QM calculations become computationally inefficient as the size of the system grows. Solving complex molecular problems on large systems and ensembles by using quantum mechanics still poses a challenge in terms of the computational cost. Methods that are based on classical mechanics are an inexpensive alternative, but they lack accuracy. A good trade-off between accuracy and efficiency is achieved by combining QM methods with molecular mechanics (MM) methods, using the robustness of the QM methods in terms of accuracy and the MM methods to minimize the computational cost. Two types of QM combined with MM (QM/MM) methods are the main focus of the present dissertation: the application and development of QM/MM methods for solvation studies and for reactions on the Si(100) surface. The solvation studies were performed using a discrete solvation model that is largely based on first principles, called the effective fragment potential (EFP) method. The main idea of combining the EFP method with quantum mechanics is to accurately treat the solute-solvent and solvent-solvent interactions, such as electrostatics, polarization, dispersion, and charge transfer, that are important in correctly calculating solvent effects on systems of interest. A second QM/MM method, SIMOMM (surface integrated molecular orbital molecular mechanics), is a hybrid QM/MM embedded cluster model that mimics the real surface. This method was employed to calculate the potential energy surfaces for reactions of atomic O on the Si(100) surface. The hybrid QM/MM method is a computationally inexpensive approach for studying reactions on larger surfaces in a reasonably accurate and efficient manner.
This thesis comprises six chapters. Chapter 1 describes the general overview and motivation of the dissertation and gives a broad background of the computational methods that have been employed in this work. Chapter 2 illustrates the methodology of interfacing the EFP method with the configuration interaction with single excitations (CIS) method to study solvent effects in excited states. Chapter 3 discusses the study of the adiabatic electron affinity of the hydroxyl radical in aqueous solution and in micro-solvated clusters using a QM/EFP method. Chapter 4 describes the study of etching and diffusion of an oxygen atom on a reconstructed Si(100)-2 x 1 surface using a hybrid QM/MM embedded cluster model (SIMOMM). Chapter 5 elucidates the application of the EFP method towards understanding the aqueous ionization potential of the Na atom. Finally, a general conclusion of this dissertation work and prospective future directions are presented in Chapter 6.
Improving Reading In Every Class. Abridged Edition.
ERIC Educational Resources Information Center
Thomas, Ellen Lamar; Robinson, H. Alan
This book suggests procedures not only for teaching the fundamental processes in reading but also for teaching reading in high school subject areas. Four chapters present methods for teaching vocabulary, comprehension, rate, and problem solving. Nine chapters are devoted to practical classroom methods for teaching mathematics, science, industrial…
Chapter 6. Landscape Analysis for Habitat Monitoring
Samuel A. Cushman; Kevin McGarigal; Kevin S. McKelvey; Christina D. Vojta; Claudia M. Regan
2013-01-01
The primary objective of this chapter is to describe standardized methods for measuring and monitoring attributes of landscape pattern in support of habitat monitoring. This chapter describes the process of monitoring categorical landscape maps in which either selected habitat attributes or different classes of habitat quality are represented as different patch types...
Past, Present, and Future of Critical Quantitative Research in Higher Education
ERIC Educational Resources Information Center
Wells, Ryan S.; Stage, Frances K.
2014-01-01
This chapter discusses the evolution of the critical quantitative paradigm with an emphasis on extending this approach to new populations and new methods. Along with this extension of critical quantitative work, however, come continued challenges and tensions for researchers. This chapter recaps and responds to each chapter in the volume, and…
Chapter 5: Quantifying greenhouse gas sources and sinks in animal production systems
USDA-ARS?s Scientific Manuscript database
The purpose of this publication is to develop methods to quantify greenhouse gas emissions (GHG) from U.S. agriculture and forestry. This chapter provides guidance for reporting GHG emissions from animal production systems. In particular, it focuses on methods for estimating emissions from beef cat...
Using Mixed Methods to Assess Initiatives with Broad-Based Goals
ERIC Educational Resources Information Center
Inkelas, Karen Kurotsuchi
2017-01-01
This chapter describes a process for assessing programmatic initiatives with broad-ranging goals with the use of a mixed-methods design. Using an example of a day-long teaching development conference, this chapter provides practitioners step-by-step guidance on how to implement this assessment process.
Quasi-one dimensional (Q1D) nanostructures: Synthesis, integration and device application
NASA Astrophysics Data System (ADS)
Chien, Chung-Jen
Quasi-one-dimensional (Q1D) nanostructures such as nanotubes and nanowires have been widely regarded as potential building blocks for nanoscale electronic, optoelectronic, and sensing devices. In this work, the content can be divided into three categories: nano-material synthesis and characterization, alignment and integration, and physical properties and applications. The dissertation consists of seven chapters, as follows. Chapter 1 gives an introduction to low-dimensional nano-materials. Chapter 2 explains the mechanisms by which Q1D nanostructures grow. Chapter 3 describes the methods by which we align Q1D nanostructures horizontally and vertically. Chapters 4 and 5 cover electrical and optical device characterization, respectively. Chapter 6 demonstrates the integration of Q1D nanostructures and device applications. The last chapter discusses future work and the conclusions of the thesis.
Spatial Dimension as a Variable in Quantum Mechanics
NASA Astrophysics Data System (ADS)
Doren, Douglas James
Several approximation methods potentially useful in electronic structure calculations are developed. These methods all treat the spatial dimension, D, as a variable. In an Introduction, the motivations for these methods are described, with special attention to the semiclassical 1/D expansion. Several terms in this expansion have been calculated for two-electron atoms. The results have qualitative appeal but poor convergence properties when D = 3. Chapter 1 shows that this convergence problem is due to singularities in the energy at D = 1 and a method of removing their effects is demonstrated. Chapter 2 treats several model problems, showing how to identify special dimensions at which the energy becomes singular or the Hamiltonian simplifies. Expansions are developed about these special finite values of D which are quite accurate at low order, regardless of the physical parameters of the Hamiltonian. In Chapter 3, expansions about singular points in the energy at finite values of D are used to resum the 1/D series in cases where its leading orders are not sufficient. This leads to a hybrid expansion which typically improves on both the 1/D and the finite D series. These methods are applied in Chapter 4 to two-electron atoms. The ground state energy of few-electron systems is dominated by the presence of a pole when D = 1. The residue of this pole is determined by the eigenvalue of a simple limiting Schrodinger equation. The limit and first order correction are determined for both unapproximated nonrelativistic two-electron atoms and the Hartree-Fock approximation to them. The hybrid expansion using only the first few terms in the 1/D series determines the energy at arbitrary D, providing estimates accurate to four or five figures when D = 3. Degeneracies between D = 3 states and those in nonphysical dimensions are developed in Chapter 5 which provide additional applications for this series.
Chapter 6 illustrates these methods in an application to the H⁻ ion, an especially stringent test case. Proposals for future work in this field are described in the final chapter.
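The pole-plus-series structure described above can be written schematically. The symbols below (R for the residue, e_n for the series coefficients) are illustrative labels, not the author's notation:

```latex
% Schematic only: the energy has a simple pole at D = 1 whose residue R
% dominates the ground state, while the semiclassical series is in 1/D.
E(D) \;\sim\; \frac{R}{D-1} + \text{(terms regular at } D = 1\text{)}, \qquad D \to 1,
\qquad\qquad
E(D) \;=\; \sum_{n=0}^{\infty} \frac{e_n}{D^{\,n}} .
```

The hybrid expansion combines the two: the known singular behavior at D = 1 is built into the resummation of the truncated 1/D series, so that a few computed coefficients e_n suffice to reproduce the energy accurately at D = 3.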
Bases of Radio Direction Finding, Part II
1977-12-22
of the H-shaped system. Fundamental and equivalent diagrams of the pair of antennas are given in Fig. 7.12. For calculation the frequency is assigned... Contents: Geographic Names Transliteration System; Preface; Chapter 1. Problems of Radio Traffic; Chapter 2. Principles and Methods of Radio Traffic; Chapter 3. Antenna Systems of Radio Direction Finders
Limited Democracy: Voice and Choice in the Language Methods Class.
ERIC Educational Resources Information Center
Andrews, Sharon Vincz
This paper discusses "Doing Theory in the Methods Class: Focused Reflection," a chapter from the book "Whole Language Voices in Teacher Education" edited by Kathryn F. Whitmore and Yetta Goodman. The book chapter focuses on guided classroom reflection as a key element in the education of preservice teachers, exploring whether…
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
Qualitative methods in quantum theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Migdal, A.B.
The author feels that the solution of most problems in theoretical physics begins with the application of qualitative methods - dimensional estimates and estimates made from simple models, the investigation of limiting cases, the use of the analytic properties of physical quantities, etc. This book proceeds in this spirit, rather than in a formal, mathematical way with no traces of the sweat involved in the original work left to show. The chapters are entitled Dimensional and model approximations, Various types of perturbation theory, The quasi-classical approximation, Analytic properties of physical quantities, Methods in the many-body problem, and Qualitative methods in quantum field theory. Each chapter begins with a detailed introduction, in which the physical meaning of the results obtained in that chapter is explained in a simple way. 61 figures. (RWR)
NASA Astrophysics Data System (ADS)
Alyami, Saeed
Installation of photovoltaic (PV) units can pose significant challenges to existing electrical systems. Issues such as voltage rise, protection coordination, islanding detection, harmonics, and increased or changed short-circuit levels need to be carefully addressed before we can see a wide adoption of this environmentally friendly technology. Voltage rise or overvoltage issues are of particular importance to address for deploying more PV systems in distribution networks. This dissertation proposes a comprehensive solution to deal with voltage violations in distribution networks, from controlling PV power outputs and the electricity consumption of smart appliances in real time to optimal placement of PVs at the planning stage. The dissertation is composed of three parts: the literature review, the work that has already been done, and the future research tasks. An overview of renewable energy generation and its challenges is given in Chapter 1. The overall literature survey, motivation, and scope of the study are also outlined in that chapter. Detailed literature reviews are given in the remaining chapters. The overvoltage and undervoltage phenomena in typical distribution networks with integration of PVs are further explained in Chapter 2. Possible approaches for voltage quality control are also discussed in this chapter, followed by a discussion of the importance of load management for PHEVs and appliances and its benefits to electric utilities and end users. A new real power capping method is presented in Chapter 3 to prevent overvoltage by adaptively setting the power caps for PV inverters in real time. The proposed method can maintain voltage profiles below a pre-set upper limit while maximizing the PV generation and fairly distributing the real power curtailments among all the PV systems in the network.
As a result, each of the PV systems in the network has an equal opportunity to generate electricity and shares the responsibility of voltage regulation. The method does not require global information and can be implemented either under a centralized supervisory control scheme or in a distributed way via consensus control. Chapter 4 investigates autonomous operation schedules for three types of intelligent appliances (or residential controllable loads) that require no external signals, for cost saving and for assisting the management of possible photovoltaic generation systems installed in the same distribution network. The three types of controllable loads studied in the chapter are electric water heaters, refrigerator deicing loads, and dishwashers. Chapter 5 investigates a method to mitigate overvoltage issues at the planning stage. A probabilistic method is presented in the chapter to evaluate the overvoltage risk in a distribution network with different PV capacity sizes under different load levels. The Kolmogorov-Smirnov test (K-S test) is used to identify the most appropriate probability distributions for solar irradiance in different months. To increase accuracy, an iterative process is used to obtain the maximum allowable injection of active power from PVs. Conclusions and a discussion of future work are given in Chapter 6.
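The consensus-control idea mentioned for the distributed implementation can be sketched as follows. This is a generic linear-consensus toy, not the dissertation's algorithm; the function name, the three-inverter path topology, the step size, and the initial ratios are all assumptions. Each inverter repeatedly nudges its local curtailment ratio toward those of its neighbors, so all inverters converge to a common, fair ratio without any global information.

```python
# Illustrative sketch (not the dissertation's method): distributed consensus
# averaging of PV curtailment ratios. The unnormalized update preserves the
# network-wide mean, so all inverters converge to the average ratio.

def consensus_curtailment(ratios, neighbors, steps=200, eps=0.3):
    """Iterate a linear consensus update until local ratios agree."""
    r = list(ratios)
    for _ in range(steps):
        # each node moves toward its neighbors' values
        r = [ri + eps * sum(r[j] - ri for j in neighbors[i])
             for i, ri in enumerate(r)]
    return r

# Three inverters on a line 0-1-2 with different locally measured ratios.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
final = consensus_curtailment([0.30, 0.10, 0.20], neighbors)
# each ratio converges to the network average, 0.20 here
```

Each inverter would then cap its real power output at its rated value scaled by the agreed ratio; the point of the sketch is only that agreement is reached using neighbor-to-neighbor communication.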
Steel Fibre Reinforced Concrete Simulation with the SPH Method
NASA Astrophysics Data System (ADS)
Hušek, Martin; Kala, Jiří; Král, Petr; Hokeš, Filip
2017-10-01
Steel fibre reinforced concrete (SFRC) is very popular in many branches of civil engineering. Thanks to its increased ductility, it is able to resist various types of loading. When designing a structure, the mechanical behaviour of SFRC can be described by currently available material models (with an equivalent material, for example), and therefore no problems arise with numerical simulations. But in many scenarios, e.g. high-speed loading, it would be a mistake to use such an equivalent material. Physical modelling of the steel fibres used in concrete is usually problematic, though. It is necessary to consider the fact that mesh-based methods are poorly suited to high-speed simulations because of the issues caused by excessive mesh deformation. So-called meshfree methods are much more suitable for this purpose, and the Smoothed Particle Hydrodynamics (SPH) method is currently the best choice among them. However, a numerical defect known as tensile instability may appear when the SPH method is used. It causes the development of numerical (false) cracks, making simulations of ductile types of failure significantly more difficult to perform. This contribution therefore describes a procedure for avoiding this defect and successfully simulating the behaviour of SFRC with the SPH method. The essence of the problem lies in the choice of coordinates and the description of the integration domain derived from them: spatial coordinates (Eulerian kernel) or material coordinates (Lagrangian kernel). The contribution describes the behaviour of both formulations. Conclusions are drawn from the fundamental tasks, and the contribution additionally demonstrates the functionality of SFRC simulations. The random generation of steel fibres and their inclusion in simulations are also discussed.
The functionality of the method is supported by the results of pressure test simulations which compare various levels of fibre reinforcement of SFRC specimens.
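As a rough illustration of the SPH interpolation underlying this discussion (a generic textbook construction, not the authors' SFRC model), a one-dimensional density summation with the standard cubic-spline kernel looks like the sketch below; the particle spacing, smoothing length, and masses are assumed example values.

```python
# Minimal 1-D SPH density summation: rho(x_i) = sum_j m_j W(x_i - x_j, h).
# Generic illustration only; not the SFRC simulation from the paper.

def cubic_spline_1d(r, h):
    """Standard cubic-spline smoothing kernel W(r, h) in one dimension."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalization so the kernel integrates to 1
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0               # compact support: W = 0 beyond r = 2h

def density(x_i, positions, masses, h):
    """SPH-interpolated density at position x_i."""
    return sum(m * cubic_spline_1d(x_i - x, h) for x, m in zip(positions, masses))

# Uniformly spaced particles of mass dx * rho0 should recover rho0 inside the domain.
dx, rho0, h = 0.1, 1.0, 0.13
xs = [i * dx for i in range(41)]
ms = [dx * rho0] * len(xs)
rho_mid = density(xs[20], xs, ms, h)  # close to rho0 away from the edges
```

The Eulerian-versus-Lagrangian kernel distinction in the abstract concerns whether the distances fed to `W` are measured in current (spatial) or reference (material) coordinates; the summation itself has the same form in both cases.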
An Analysis of U.S. Sex Education Programs and Evaluation Methods. Volume I.
ERIC Educational Resources Information Center
Kirby, Douglas; And Others
The volume, first in a series of five, presents an analysis of sex education programs in the United States. It is presented in six chapters. Chapter I provides a brief overview of sex education in the public schools and summarizes goals, forms, and prevalence of sex education. Chapter II reviews literature on the effects of school sex education…
Data on distribution and abundance: Monitoring for research and management [Chapter 6
Samuel A. Cushman; Kevin S. McKelvey
2010-01-01
In the first chapter of this book we identified the interdependence of method, data and theory as an important influence on the progress of science. The first several chapters focused mostly on progress in theory, in the areas of integrating spatial and temporal complexity into ecological analysis, the emergence of landscape ecology and its transformation into a multi-...
Field Techniques for Estimating Water Fluxes Between Surface Water and Ground Water
Rosenberry, Donald O.; LaBaugh, James W.
2008-01-01
This report focuses on measuring the flow of water across the interface between surface water and ground water, rather than the hydrogeological or geochemical processes that occur at or near this interface. The methods, however, that use hydrogeological and geochemical evidence to quantify water fluxes are described herein. This material is presented as a guide for those who have to examine the interaction of surface water and ground water. The intent here is that both the overview of the many available methods and the in-depth presentation of specific methods will enable the reader to choose those study approaches that will best meet the requirements of the environments and processes they are investigating, as well as to recognize the merits of using more than one approach. This report is designed to make the reader aware of the breadth of approaches available for the study of the exchange between surface and ground water. To accomplish this, the report is divided into four chapters. Chapter 1 describes many well-documented approaches for defining the flow between surface and ground waters. Subsequent chapters provide an in-depth presentation of particular methods. Chapter 2 focuses on three of the most commonly used methods to either calculate or directly measure flow of water between surface-water bodies and the ground-water domain: (1) measurement of water levels in well networks in combination with measurement of water level in nearby surface water to determine water-level gradients and flow; (2) use of portable piezometers (wells) or hydraulic potentiomanometers to measure hydraulic gradients; and (3) use of seepage meters to measure flow directly. Chapter 3 focuses on describing the techniques involved in conducting water-tracer tests using fluorescent dyes, a method commonly used in the hydrogeologic investigation and characterization of karst aquifers, and in the study of water fluxes in karst terranes. 
Chapter 4 focuses on heat as a tracer in hydrological investigations of the near-surface environment.
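The water-level-gradient approach described for Chapter 2 reduces, in its simplest form, to an application of Darcy's law. The sketch below uses assumed example values for the sediment conductivity, head difference, and seepage-meter footprint; it is a back-of-the-envelope illustration, not a protocol from the report.

```python
# Sketch of the hydraulic-gradient flux estimate: a piezometer gives the head
# below the streambed, the stream stage gives the head at the bed, and Darcy's
# law converts the gradient into a flux. All numbers are assumed examples.

def darcy_flux(k, head_diff, length):
    """Darcy flux q = K * (dh / dL); positive dh here means upward (gaining) flow."""
    return k * head_diff / length

K = 1e-5   # hydraulic conductivity of the streambed sediments, m/s (assumed)
dh = 0.05  # head in the piezometer minus stream stage, m (assumed)
dL = 0.5   # vertical distance from piezometer screen to streambed, m (assumed)

q = darcy_flux(K, dh, dL)  # flux per unit bed area, m/s
Q = q * 2.0                # discharge through an assumed 2 m^2 footprint, m^3/s
```

A seepage meter, by contrast, measures Q directly; comparing the two estimates is one way to exploit the report's advice to use more than one approach.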
The study of volatile organic compounds in urban and indoor air
NASA Astrophysics Data System (ADS)
Clarkson, Paul Jonathan
Chapter 1 is a review of the literature concerning the study of volatile organic compounds in the atmosphere. It examines the basic chemistry of the atmosphere and the roles that organic compounds play in it. Also investigated are the methods of sampling and analysing volatile organic compounds in the air, paying particular attention to the role of solid-phase sampling. Chapter 1 also examines the effect of volatile organic compounds on air quality. Chapter 2 describes the experimental procedures that were employed during the course of this research project. Chapter 3 examines a multi-method approach to the study of volatile organic compounds in urban and indoor air. The methods employed were capillary electrophoresis, high-performance liquid chromatography, and gas chromatography. Although good results were obtained for the various methods that were investigated, Chapter 3 concludes that a more unified analytical approach to the study of air is needed. Chapter 4 investigates the possibilities of using a unified approach to the study of VOCs. This is achieved by the development of an air sampling method that uses solid-phase extraction cartridges. By investigating many aspects of air sampling mechanisms, the results show that a simple yet efficient method for the sampling of VOCs in air has been developed. The SPE method is a reusable and reliable method that, by using sequential solvent desorption, has been shown to exhibit some degree of selectivity. The solid phase that gave the best results was styrene-divinylbenzene; however, other phases were also investigated. The use of a single gas chromatography method was also investigated for the purpose of confirmatory identification of the VOCs. Various detection systems were used, including MS and AED. It was shown that by optimising the GCs it was possible to obtain complementary results.
Also investigated was the possibility of compound tagging in an attempt to confirm the identity of several of the compounds found in the air. Chapter 5 is a theoretical discussion of ways of presenting the experimentally obtained data in an easy-to-understand form. Instead of targeting 7 or 8 compounds as being representative of air quality, it is argued that by using a technique such as Air Fingerprinting it is possible to show data that is indicative of the whole air sample. Using actual data, it is possible to show the origin of the air sample in a simple yet effective way using air fingerprints. Also discussed is the Individual Component Air Quality Index (ICAQI), a method of quantifying air quality. By taking into account compound toxicity, atmospheric lifetime, and UV exposure, the ICAQI, it is argued, presents a more accurate picture of air quality. Chapter 6 concludes the thesis by drawing together the themes and issues that were raised.
Lakes and reservoirs—Guidelines for study design and sampling
2015-09-29
The “National Field Manual for the Collection of Water-Quality Data” (NFM) is an online report with separately published chapters that provides the protocols and guidelines by which U.S. Geological Survey personnel obtain the data used to assess the quality of the Nation’s surface-water and groundwater resources. Chapter A10 reviews limnological principles, describes the characteristics that distinguish lakes from reservoirs, and provides guidance for developing temporal and spatial sampling strategies and data-collection approaches to be used in lake and reservoir environmental investigations. Within this chapter are references to other chapters of the NFM that provide more detailed guidelines related to specific topics and more detailed protocols for the quality assurance and assessment of the lake and reservoir data. Protocols and procedures to address and document the quality of lake and reservoir investigations are adapted from, or referenced to, the protocols and standard operating procedures contained in related chapters of this NFM. Before 2017, the U.S. Geological Survey (USGS) NFM chapters were released in the USGS Techniques of Water-Resources Investigations series. Effective in 2018, new and revised NFM chapters are being released in the USGS Techniques and Methods series; this series change does not affect the content and format of the NFM. More information is in the general introduction to the NFM (USGS Techniques and Methods, book 9, chapter A0, 2018) at https://doi.org/10.3133/tm9A0. The authoritative current versions of NFM chapters are available in the USGS Publications Warehouse at https://pubs.er.usgs.gov. Comments, questions, and suggestions related to the NFM can be addressed to nfm-owq@usgs.gov.
ERIC Educational Resources Information Center
Danilova, L. A.
This four-chapter monograph, translated from a 1977 book written in Russian for Russian readers, describes the methodology and results of a study of cognitive activity in children with cerebral palsy. An initial chapter reviews research on impairments in cognitive activity and speech defects in such children and on methods of…
Chapter 14: Evaluating the Leaching of Biocides from Preservative-Treated Wood Products
Stan T. Lebow
2014-01-01
Leaching of biocides is an important consideration in the long term durability and any potential for environmental impact of treated wood products. This chapter discusses factors affecting biocide leaching, as well as methods of evaluating rate and quantity of biocide released. The extent of leaching is a function of preservative formulation, treatment methods, wood...
An, Hong; He, Ri-Hui; Zheng, Yun-Rong; Tao, Ran
2017-01-01
Cognitive-behavioral therapy (CBT) is the psychotherapy method most widely accepted in the fields of substance addiction and non-substance addiction. This chapter mainly introduces the methods and techniques of cognitive-behavioral therapy for substance addiction, especially for relapse prevention. For the cognitive-behavioral treatment of non-substance addiction, this chapter mainly covers gambling addiction and food addiction.
Participatory Visual Methods: Revisioning the Future of Adult Education
ERIC Educational Resources Information Center
Lawrence, Randee Lipson
2017-01-01
This chapter brings together significant themes in the previous chapters, including collaborative research partnerships, voice and agency, self-image, relationships, multiple ways of knowing, difficult conversations, social change, and alternative adult education.
Complex network problems in physics, computer science and biology
NASA Astrophysics Data System (ADS)
Cojocaru, Radu Ionut
There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no comparably close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases, and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that there are some problems which are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses.
We extend this model to the study of a specific case of a spin glass on the Bethe lattice at zero temperature and then apply this formalism to the K-SAT problem defined in Chapter 1. The phase transition which physicists study often corresponds to a change in the computational complexity of the corresponding computer science problem. Chapter 3 presents phase transitions which are specific to the problems discussed in Chapter 1, as well as known results for the K-SAT problem. We discuss the replica method and experimental evidence of replica symmetry breaking. The physics approach to hard problems is based on replica methods, which are difficult to understand. In Chapter 4 we develop novel methods for studying hard problems using methods similar to the message passing techniques that were discussed in Chapter 2. Although we concentrated on the symmetric case, cavity methods show promise for generalizing our methods to the asymmetric case. As has been highlighted by John Hopfield, several key features of biological systems are not shared by physical systems. Although living entities follow the laws of physics and chemistry, the fact that organisms adapt and reproduce introduces an essential ingredient that is missing in the physical sciences. In order to extract information from networks, many algorithms have been developed. In Chapter 5 we apply polynomial algorithms such as minimum spanning tree in order to study and construct gene regulatory networks from experimental data. As future work we propose the use of algorithms such as min-cut/max-flow and Dijkstra's algorithm for understanding key properties of these networks.
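The minimum-spanning-tree construction mentioned for Chapter 5 can be sketched with Prim's algorithm on a full pairwise-dissimilarity matrix. The 4-gene distance matrix below is invented for the example; in practice the entries would come from, e.g., correlation distances between expression profiles.

```python
# Illustrative MST skeleton of a network from pairwise dissimilarities.
# Generic Prim's algorithm; the distance matrix is made up for the demo.

def prim_mst(dist):
    """Prim's algorithm on a symmetric distance matrix; returns tree edges."""
    n = len(dist)
    in_tree = {0}           # grow the tree from node 0
    edges = []
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((u, v, dist[u][v]))
        in_tree.add(v)
    return edges

dist = [[0.0, 0.2, 0.9, 0.8],
        [0.2, 0.0, 0.3, 0.7],
        [0.9, 0.3, 0.0, 0.4],
        [0.8, 0.7, 0.4, 0.0]]
mst = prim_mst(dist)  # keeps edges (0,1), (1,2), (2,3); total weight 0.9
```

The MST keeps only the n-1 lightest edges needed to connect all nodes, which is one simple way to extract a sparse backbone from dense experimental similarity data.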
NASA Astrophysics Data System (ADS)
Yan, Zheng
Graphene, a two-dimensional sp2-bonded carbon material, has attracted enormous attention due to its excellent electrical, optical, and mechanical properties. Recently developed chemical vapor deposition (CVD) methods can produce large-size and uniform polycrystalline graphene films, but they are limited to gaseous carbon sources and metal catalyst substrates, and the films suffer degraded properties induced by grain boundaries. Meanwhile, pristine monolayer graphene exhibits a standard ambipolar behavior with a zero neutrality point in field-effect transistors (FETs), limiting its future electronic applications. This thesis starts with the investigation of CVD synthesis of pristine and N-doped graphene with controlled thickness using solid carbon sources on metal catalyst substrates (Chapter 1), and then discusses the direct growth of bilayer graphene on insulating substrates, including SiO2, h-BN, Si3N4 and Al2O3, without the need for a further transfer process (Chapter 2). Chapter 3 discusses the synthesis of high-quality graphene single crystals and hexagonal onion-ring-like graphene domains, and also explores the basic growth mechanism of graphene on Cu substrates. To extend graphene's potential applications, both vertical and planar graphene-carbon nanotube hybrids are fabricated using the CVD method and their interesting properties are investigated (Chapter 4). Chapter 5 discusses how to use chemical methods to modulate graphene's electronic behaviors.
Image Coding Based on Address Vector Quantization.
NASA Astrophysics Data System (ADS)
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in Chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in Chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in Chapter 4. This approach gives the same performance as the normal VQ scheme but at a bit rate about 1/2 to 1/3 of that of the normal VQ method. In Chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix used to select the best subcodebook to encode the image, is developed.
In Chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to Chapters 2 through 6, which report on new work, this dissertation includes one chapter (Chapter 1) and part of Chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.
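The basic VQ encode/decode cycle described above (nearest codeword, index to channel, table lookup at the decoder) can be sketched as follows; the toy 2-D codebook and "image" vectors are assumed values for the demonstration, and codebook design (K-means / generalized Lloyd) is omitted.

```python
# Minimal vector-quantization encode/decode sketch. The codebook would
# normally be trained with K-means or the generalized Lloyd algorithm;
# here it is fixed by hand to keep the example self-contained.

def nearest_index(vec, codebook):
    """Index of the codeword with the smallest squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(vec, codebook[k])))

def vq_encode(vectors, codebook):
    """Replace each vector by the index of its best-matching codeword."""
    return [nearest_index(v, codebook) for v in vectors]

def vq_decode(indices, codebook):
    """Reconstruction is a pure table lookup: label -> representative vector."""
    return [codebook[i] for i in indices]

codebook = [(0, 0), (10, 10), (20, 20)]  # toy 2-D codewords
blocks = [(1, 2), (9, 11), (19, 18)]     # "image" vectors to encode
labels = vq_encode(blocks, codebook)     # indices sent to the channel
recon = vq_decode(labels, codebook)      # decoder's table-lookup reconstruction
```

Compression comes from sending only the indices: with a codebook of 256 codewords, each image block costs 8 bits regardless of the block's dimensionality.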
Aiding USAF/UPT (Undergraduate Pilot Training) Aircrew Scheduling Using Network Flow Models.
1986-06-01
3.4 Heuristic Modifications. Chapter 4. Student Scheduling Problem (Level 2): 4.0 Introduction; 4.01 Constraints; ... "Covering"; Complete Enumeration; 4.14 Heuristics; 4.2 Heuristic Method for the Level 2 Problem; 4.21 Step 1; 4.22 Step 2; 4.23 Advantages of the Heuristic Method; 4.24 Problems with the Heuristic Method; Chapter 5
Conceptual Chemical Process Design for Sustainability.
This chapter examines the sustainable design of chemical processes, with a focus on conceptual design, hierarchical and short-cut methods, and analyses of process sustainability for alternatives. The chapter describes a methodology for incorporating process sustainability analyse...
Overcoming Hurdles Implementing Multi-skilling Policies
2015-03-26
skilled workforce? Chapter II will communicate important concepts found in the literature on skill proficiency topics. These topics include skill...training methods that might improve learning and retention during the acquisition phase. The active interlock modeling (AIM) protocol is a dyadic ...retention, as found in Chapter 2. These techniques include dyadic training methods, overlearning, feedback, peer support, and managerial support
Chapter 6: quantifying greenhouse gas sources and sinks in managed forest systems
Coeli Hoover; Richard Birdsey; Bruce Goines; Peter Lahm; Yongming Fan; David Nowak; Stephen Prisley; Elizabeth Reinhardt; Ken Skog; David Skole; James Smith; Carl Trettin; Christopher Woodall
2014-01-01
This chapter provides guidance for reporting greenhouse gas (GHG) emissions associated with entity-level fluxes from the forestry sector. In particular, it focuses on methods for estimating carbon stocks and stock change from managed forest systems. Section 6.1 provides an overview of the sector. Section 6.2 describes the methods for forest carbon stock accounting....
NASA Astrophysics Data System (ADS)
McKechan, David J. A.
2010-11-01
This thesis concerns the use, in gravitational wave data analysis, of higher order waveform models of the gravitational radiation emitted by compact binary coalescences. We begin with an introductory chapter that includes an overview of the theory of general relativity, gravitational radiation, and ground-based interferometric gravitational wave detectors. We then discuss, in Chapter 2, the gravitational waves emitted by compact binary coalescences, with an explanation of higher order waveforms and how they differ from leading order waveforms; we also introduce the post-Newtonian formalism. In Chapter 3, the method and results of a gravitational wave search for low mass compact binary coalescences using a subset of LIGO's 5th science run data are presented, and in the subsequent chapter we examine how one could use higher order waveforms in such analyses. We follow the development of a new search algorithm that incorporates higher order waveforms, with promising results for detection efficiency and parameter estimation. In Chapter 5, a new method of windowing time-domain waveforms that offers benefits to gravitational wave searches is presented. The final chapter covers the development of a game designed as an outreach project to raise public awareness and understanding of the search for gravitational waves.
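Windowing a time-domain waveform, as motivated by Chapter 5, is commonly done by tapering the ends so abrupt start/end discontinuities do not leak across the spectrum. The sketch below is a standard Tukey (tapered-cosine) window, shown only as a generic illustration; it is not the thesis's proposed windowing method, and the waveform and parameters are assumed.

```python
import math

# Generic Tukey window: cosine tapers over a fraction alpha of the samples
# at each combined end, flat (= 1) in the middle. Illustration only.

def tukey(n, alpha=0.25):
    """Tukey window of length n; alpha is the total tapered fraction."""
    w = []
    edge = alpha * (n - 1) / 2.0
    for i in range(n):
        if i < edge:                       # rising cosine taper
            w.append(0.5 * (1 + math.cos(math.pi * (i / edge - 1))))
        elif i > (n - 1) - edge:           # falling cosine taper
            w.append(0.5 * (1 + math.cos(math.pi * ((i - (n - 1) + edge) / edge))))
        else:                              # flat centre
            w.append(1.0)
    return w

win = tukey(1000)
waveform = [math.sin(0.3 * i) for i in range(1000)]      # toy time series
tapered = [a * b for a, b in zip(waveform, win)]
# endpoints are driven smoothly to zero; the centre of the signal is untouched
```

The trade-off a search cares about is that a gentler taper (smaller alpha) preserves more signal power while a harsher one suppresses more spectral leakage; the thesis's contribution concerns how to make that choice well.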
First-principles study of complex material systems
NASA Astrophysics Data System (ADS)
He, Lixin
This thesis covers several topics concerning the study of complex material systems by first-principles methods. It contains four chapters. A brief, introductory motivation of this work will be given in Chapter 1. In Chapter 2, I will give a short overview of the first-principles methods, including density-functional theory (DFT), planewave pseudopotential methods, and the Berry-phase theory of polarization in crystalline insulators. I then discuss in detail the locality and exponential decay properties of Wannier functions and of related quantities such as the density matrix, and their application in linear-scaling algorithms. In Chapter 3, I investigate the interaction of oxygen vacancies and 180° domain walls in tetragonal PbTiO3 using first-principles methods. Our calculations indicate that the oxygen vacancies have a lower formation energy in the domain wall than in the bulk, thereby confirming the tendency of these defects to migrate to, and pin, the domain walls. The pinning energies are reported for each of the three possible orientations of the original Ti--O--Ti bonds, and attempts to model the results with simple continuum models are discussed. CaCu3Ti4O12 (CCTO) has attracted a lot of attention recently because it was found to have an enormous dielectric response over a very wide temperature range. In Chapter 4, I study the electronic and lattice structure, and the lattice dynamical properties, of this system. Our first-principles calculations together with experimental results point towards an extrinsic mechanism as the origin of the unusual dielectric response.
The metallic thread in a patchwork thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hull, Emily A.
This thesis contains research that is being prepared for publication. Chapter 2 presents research on water- and THF-solvated macrocyclic Rh and Co compounds and the effects of different axial ligands (NO2, NO, Cl, CH3) on their optical activity. Chapter 3 involves the study of gas-phase Nb mono- and dications with CO and CO2. Chapter 4 is a study of reactions of CO and CO2 with Ta mono- and dications. Chapter 5 is a study of virtual orbitals, their usefulness, the use of basis sets in modeling them, and the inclusion of transition metals into the QUasi-Atomic Minimal Basis (QUAMBO) method. Chapter 6 presents the conclusions drawn from the work presented in this dissertation.
BOOK REVIEW: Introduction to 3+1 Numerical Relativity
NASA Astrophysics Data System (ADS)
Gundlach, Carsten
2008-11-01
This is the first major textbook on the methods of numerical relativity. The selection of material is based on what is known to work reliably in astrophysical applications and would therefore be considered by many as the 'mainstream' of the field. This means spacelike slices, the BSSNOK or harmonic formulation of the Einstein equations, finite differencing for the spacetime variables, and high-resolution shock capturing methods for perfect fluid matter. (Arguably, pseudo-spectral methods also belong in this category, at least for elliptic equations, but are not covered in this book.) The account is self-contained, and comprehensive within its chosen scope. It could serve as a primer for the growing number of review papers on aspects of numerical relativity published in Living Reviews in Relativity (LRR). I will now discuss the contents by chapter. Chapter 1, an introduction to general relativity, is clearly written, but may be a little too concise to be used as a first text on this subject at postgraduate level, compared to the textbook by Schutz or the first half of Wald's book. Chapter 2 contains a good introduction to the 3+1 split of the field equations in the form mainly given by York. York's pedagogical presentation (in a 1979 conference volume) is still up to date, but Alcubierre makes a clearer distinction between the geometric split and its form in adapted coordinates, as well as filling in some derivations. Chapter 3 on initial data is close to Cook's 2001 LRR, but is beautifully unified by an emphasis on how different choices of conformal weights suit different purposes. Chapter 4 on gauge conditions covers a topic on which no review paper exists, and which is spread thinly over many papers. The presentation is both detailed and unified, making this an excellent resource also for experts. The chapter reflects the author's research interests while remaining canonical. Chapter 5 covers hyperbolic reductions of the field equations. 
Alcubierre's excellent presentation is less technical than Reula's 1998 LRR or the 1995 book by Gustafsson, Kreiss and Oliger, but covers the key ideas in application to the Einstein equations. The reviewer (admittedly riding a hobby-horse) would argue that the hyperbolicity of the ADM and BSSNOK equations should have been investigated without introducing a specific first-order reduction. Chapter 6 covers gauge problems in numerical black hole spacetimes, black hole excision, and apparent horizons. Like chapter 4 it is both exhaustive and pedagogical. Perhaps more space than necessary is given here to work the author was involved in, while the section on slice stretching could have been more detailed, given that there is no good overview in the literature. Chapter 7 on relativistic hydrodynamics is, quite simply, excellent. Among many other useful things it contains some elementary material on equations of state that is not written up at this level elsewhere, a good mini-introduction to weak solutions of conservation laws, and a brief review of imperfect fluids in GR (Israel--Stewart theory). This chapter complements Font's 2008 LRR. Chapter 8 on gravitational wave extraction provides a welcome pedagogical introduction to a topic in which the original research papers are less than inviting and where notation is not uniform. The mathematical techniques described here are in constant use in numerical relativity codes, but are never fully described in research papers. Chapter 9 on numerical methods covers finite difference and high-resolution shock capturing methods. It is similar in presentation to Leveque's 1992 book and Kreiss and Busenhart's 2001 book, but gives a good selection of that material, concisely presented. It certainly impresses the importance of convergence testing on the reader. Chapter 10 covers methods for spherically symmetric and axisymmetric spacetimes. The former is excellent, reflecting the author's recent research work. 
The axisymmetry section would have been better if it had been based on a formal Geroch reduction, the method that has been the key to recent progress. This book is bound to become a standard text for beginning graduate students. In an overview for this audience, I would have liked to see a little more detail on null slicings and on the conformal field equations, and brief introductions to the theory of elliptic equations and to pseudo-spectral and finite element methods. One may also regret the many typographical errors. Nevertheless, this excellent book fills a real gap, and will be hard to better.
Statistical problems in measuring surface ozone and modelling its patterns
NASA Astrophysics Data System (ADS)
Hutchison, Paul Stewart
The thesis examines ground-level air pollution data supplied by ITE Bush, Penicuik, Midlothian, Scotland. There is a brief examination of sulphur dioxide concentration data, but the thesis is primarily concerned with ozone. The diurnal behaviour of ozone is the major topic, and a new methodology for classifying 'ozone days' is introduced and discussed. In chapter 2, the inverse Gaussian distribution is considered and rejected as a possible alternative to the standard approach of using the lognormal as a model for the frequency distribution of observed sulphur dioxide concentrations. In chapter 3, the behaviour of digital gas pollution analysers is investigated using data obtained from two such machines operating side by side. A time series model of the differences between the readings obtained from the two machines is considered, and possible effects on modelling are discussed. In chapter 4, the changes in the diurnal behaviour of ozone over a year are examined. A new approach involving a distortion of the time axis is shown to give diurnal ozone curves more homogeneous properties and to have beneficial effects for modelling purposes. Chapter 5 extends the analysis of the diurnal behaviour of ozone begun in chapter 4 by considering individual 'ozone days' and attempting to classify them as one of several typical 'types' of day. The time distortion method introduced in chapter 4 is used, and a new classification methodology is introduced for considering data of this type. The statistical properties of this method are discussed in chapter 6.
NASA Astrophysics Data System (ADS)
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are well suited to problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, which is useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods enable effective numerical applications to geophysical flows and tectonic processes, such as tsunamis with free surfaces and floating bodies, magma intrusion with fracture of rock, and shear-zone pattern generation in granular deformation. Realistic simulations of such geodynamical problems with particle-based methods require millions to billions of particles, so parallel computing is essential for handling the computational cost. An efficient parallel implementation of the SPH and DEM methods is, however, known to be difficult, especially on distributed-memory architectures. Lagrangian methods inherently suffer from workload imbalance when parallelized over fixed spatial domains, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods that utilizes dynamic load-balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time across MPI processes as the nonlinear residual of the parallel domain decomposition and minimizes it with a Newton-like iteration. To allow flexible domain decomposition in space, the slice-grid algorithm is used.
Numerical tests show that our approach handles particles with different calculation costs (e.g. boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (K computer, Earth Simulator 3, etc.).
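The slice-grid rebalancing idea can be sketched in one dimension. This is an illustrative simplification, not the authors' implementation: measured per-process execution times are stood in for by a per-particle cost array, and the Newton-like minimization is reduced to a damped fixed-point update that shifts each slice boundary toward its more heavily loaded neighbour. All names and parameters here are assumptions.

```python
import numpy as np

def balance_slices(x, cost, n_slices, iters=300, lr=0.3):
    """Adjust interior slice boundaries on [0, 1] so that the summed
    per-slice cost (a stand-in for measured wall time) is roughly equal."""
    bounds = np.linspace(0.0, 1.0, n_slices + 1)[1:-1]  # start with equal widths
    for _ in range(iters):
        bounds = np.sort(np.clip(bounds, 1e-6, 1.0 - 1e-6))
        edges = np.concatenate(([0.0], bounds, [1.0]))
        # "measured" workload of each slice
        loads = np.array([cost[(x >= edges[i]) & (x < edges[i + 1])].sum()
                          for i in range(n_slices)])
        widths = np.diff(edges)
        for i in range(n_slices - 1):
            # local cost density near the boundary plays the role of the Jacobian
            rho = (loads[i] + loads[i + 1]) / (widths[i] + widths[i + 1])
            if rho > 0:
                # damped Newton-like step toward equal neighbouring loads
                bounds[i] += lr * (loads[i + 1] - loads[i]) / (2.0 * rho)
    return np.sort(bounds)
```

With particles clustered near one end of the domain, the boundaries migrate until each slice carries a comparable load, mimicking how slices would track moving particles between time steps.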
Code of Federal Regulations, 2013 CFR
2013-01-01
... Test Procedure,” and Chapter 6, “Definitions and Acronyms,” of the EPA's “ENERGY STAR Testing Facility Guidance Manual: Building a Testing Facility and Performing the Solid State Test Method for ENERGY STAR... specified in Chapter 4, “Equipment Setup and Test Procedure,” of the EPA's “ENERGY STAR Testing Facility...
40 CFR Table 5 to Subpart Ddddd of... - Performance Testing Requirements
Code of Federal Regulations, 2010 CFR
2010-07-01
... 2G in appendix A to part 60 of this chapter. c. Determine oxygen and carbon dioxide concentrations of...) (IBR, see § 63.14(i)). d. Measure the moisture content of the stack gas Method 4 in appendix A to part... stack gas Method 2, 2F, or 2G in appendix A to part 60 of this chapter. c. Determine oxygen and carbon...
Wavelet transform analysis of transient signals: the seismogram and the electrocardiogram
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anant, K.S.
1997-06-01
In this dissertation I quantitatively demonstrate how the wavelet transform can be an effective mathematical tool for the analysis of transient signals. The two key signal processing applications of the wavelet transform, namely feature identification and representation (i.e., compression), are shown by solving important problems involving the seismogram and the electrocardiogram. The seismic feature identification problem involved locating in time the P and S phase arrivals. Locating these arrivals accurately (particularly the S phase) has been a constant issue in seismic signal processing. In Chapter 3, I show that the wavelet transform can be used to locate both the P as well as the S phase using only information from single-station three-component seismograms. This is accomplished by using the basis function (wavelet) of the wavelet transform as a matching filter and by processing information across scales of the wavelet domain decomposition. The 'pick' time results are quite promising as compared to analyst picks. The representation application involved the compression of the electrocardiogram, which is a recording of the electrical activity of the heart. Compression of the electrocardiogram is an important problem in biomedical signal processing due to transmission and storage limitations. In Chapter 4, I develop an electrocardiogram compression method that applies vector quantization to the wavelet transform coefficients. The best compression results were obtained by using orthogonal wavelets, due to their ability to represent a signal efficiently. Throughout this thesis the importance of choosing wavelets based on the problem at hand is stressed. In Chapter 5, I introduce a wavelet design method that uses linear prediction in order to design wavelets that are geared to the signal or feature being analyzed. The use of these designed wavelets in a test feature identification application led to positive results. 
The methods developed in this thesis (the feature identification methods of Chapter 3, the compression methods of Chapter 4, and the wavelet design methods of Chapter 5) are general enough to be easily applied to other transient signals.
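The matched-filter idea (correlating the trace with the analyzing wavelet and combining evidence across scales) can be sketched as follows. This is an illustrative toy, not the dissertation's algorithm: a Ricker wavelet stands in for the analyzing wavelet, and the cross-scale processing is reduced to multiplying normalized correlation envelopes; `ricker` and `pick_arrival` are hypothetical helper names.

```python
import numpy as np

def ricker(n, a):
    """Ricker ('Mexican hat') wavelet with width parameter a, sampled at n points."""
    t = np.arange(n) - n // 2
    return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def pick_arrival(sig, scales=(4, 8, 16)):
    """Crude single-trace phase pick: a real arrival produces correlation
    peaks that align across scales, while noise peaks do not."""
    prod = np.ones(len(sig))
    for a in scales:
        w = ricker(10 * a, a)
        c = np.abs(np.correlate(sig, w, mode="same"))  # matched-filter output
        prod *= c / (c.max() + 1e-12)                  # normalize, then combine
    return int(np.argmax(prod))
```

Burying a wavelet-shaped pulse in noise and running `pick_arrival` recovers the onset to within a few samples, because only a true arrival survives the product across scales.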
ERIC Educational Resources Information Center
Chamberlain, Ed
A cost benefit study was conducted to determine the effectiveness of a computer assisted instruction/computer management system (CAI/CMS) as an alternative to conventional methods of teaching reading within Chapter 1 and DPPF funded programs of the Columbus (Ohio) Public Schools. The Chapter 1 funded Compensatory Language Experiences and Reading…
Transporte electronico en nanoestructuras de carbono
NASA Astrophysics Data System (ADS)
Jodar Ferrandez, Esther
The aim of this work is the study of electronic transport properties in several structures made of carbon nanotubes. This dissertation is divided into four chapters: (1) Chapter 1: Carbon Nanotubes. This chapter is a brief review of the foundations of carbon nanotubes (CNTs). The main properties of CNTs are explained; the material developed here is important for understanding the results obtained in the bulk of this thesis. In the first part of this chapter we give a historical review of the discovery of CNTs, including the discovery of fullerenes, the predecessors of carbon nanotubes. Afterwards, we review the different methods for synthesizing nanotubes. The main part of this chapter describes the geometry, properties, and electronic structure of CNTs. Many equations derived here will be used later. Finally, we discuss some research lines related to carbon nanotubes. (2) Chapter 2: Theoretical and numerical method. In this chapter we describe the numerical method we have developed to obtain the results presented in this work. For this purpose it is necessary first to describe the theoretical method on which our calculations are based. We explain the Green's function (GF) and its properties at length; a large part of our calculations rests on obtaining the GF of the system under study. This chapter finishes by applying the equations described to obtain electronic properties of pure carbon nanotubes as a worked example. These preliminary results will also be used later. (3) Chapter 3: Cavities made of nanotubes. We call a cavity the structure formed by a carbon nanotube sandwiched between two other carbon nanotubes (the contacts), provided that the central region is wider than the contacts. 
In this chapter we calculate properties associated with electronic transport in cavities, such as the local density of states and the transmission function. We analyze the influence of the width of the cavities and the distance between them (in the case of multiple cavities). Some interesting results obtained in these calculations have been published in international journals (Jodar et al. 2006, Jodar and Perez-Garrido 2007). We highlight the presence of quasi-localized states in the cavities, which affects the transmission function; the quantum-dot-like behaviour of some cavities formed with semiconducting nanotubes; and the evolution of the system with multiple cavities toward the limit of infinitely many cavities. (4) Chapter 4: Bloch Oscillations. In this chapter we investigate the properties of carbon nanotubes under a constant electric field. This configuration shows Bloch oscillations, in line with the work of Bloch and Zener. We study the dynamics of these oscillations for different geometries as a function of the applied electric field, in particular the behaviour of the occupation probability and the averaged quadratic displacement as functions of time. We have found no literature dealing with this phenomenon in carbon nanotubes, which motivates this chapter. We first study the behaviour of electrons in pure carbon nanotubes in a constant electric field, for different lengths of the CNT and different values of the applied field. We show that the wavefunctions oscillate with a period that coincides with the theoretical Bloch-oscillation period for linear chains of atoms. We also show the different behaviour of localized and extended waves. In the final part of this chapter we apply a constant electric field to the structure studied in Chapter 3, i.e., the cavity. 
We show in this case that, besides Bloch oscillations, electrons can be confined in certain regions simply by placing the nanotube in a constant electric field.
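The period check described for the linear chain can be reproduced with a minimal tight-binding sketch (illustrative, not the thesis code; hbar and the lattice spacing are set to 1, and the parameter values are arbitrary). For a state far from the chain ends, the Wannier-Stark spectrum is nearly an exact ladder with spacing F, so the wavefunction revives after one Bloch period T_B = 2*pi/F:

```python
import numpy as np

# Tight-binding chain of N sites in a constant field F:
#   H = -J * sum(|n><n+1| + h.c.) + F * sum(n * |n><n|)
N, J, F = 41, 1.0, 1.0
H = np.diag(F * np.arange(N, dtype=float))
H += -J * (np.eye(N, k=1) + np.eye(N, k=-1))
E, V = np.linalg.eigh(H)             # Wannier-Stark eigenstates

psi0 = np.zeros(N, dtype=complex)
psi0[N // 2] = 1.0                   # electron localized at the chain centre

TB = 2.0 * np.pi / F                 # Bloch period (hbar = a = 1)
c = V.T @ psi0                       # expand in the eigenbasis (V is real)
psiT = V @ (np.exp(-1j * E * TB) * c)
fidelity = abs(np.vdot(psi0, psiT))  # overlap after one Bloch period
```

Because the interior eigenvalues are exponentially close to the ladder F*m, the fidelity after one period is essentially 1: the localized state oscillates in space and returns to itself.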
Leonard, Annemarie K; Loughran, Elizabeth A; Klymenko, Yuliya; Liu, Yueying; Kim, Oleg; Asem, Marwa; McAbee, Kevin; Ravosa, Matthew J; Stack, M Sharon
2018-01-01
This chapter highlights methods for visualization and analysis of extracellular matrix (ECM) proteins, with particular emphasis on collagen type I, the most abundant protein in mammals. Protocols described range from advanced imaging of complex in vivo matrices to simple biochemical analysis of individual ECM proteins. The first section of this chapter describes common methods to image ECM components and includes protocols for second harmonic generation, scanning electron microscopy, and several histological methods of ECM localization and degradation analysis, including immunohistochemistry, Trichrome staining, and in situ zymography. The second section of this chapter details both a common transwell invasion assay and a novel live imaging method to investigate cellular behavior with respect to collagen and other ECM proteins of interest. The final section consists of common electrophoresis-based biochemical methods that are used in analysis of ECM proteins. Use of the methods described herein will enable researchers to gain a greater understanding of the role of ECM structure and degradation in development and matrix-related diseases such as cancer and connective tissue disorders. © 2018 Elsevier Inc. All rights reserved.
DeBeer, Serena
2018-01-01
In this chapter, a brief overview of X-ray spectroscopic methods that may be utilized to obtain insight into the geometric and electronic structure of iron-sulfur proteins is provided. These methods include conventional methods, such as metal and ligand K-edge X-ray absorption, as well as more advanced methods including nonresonant and resonant X-ray emission. In each section, the basic information content of the spectra is highlighted and important experimental considerations are discussed. Throughout the chapter, recent applications to iron-sulfur-containing models and proteins are highlighted. © 2018 Elsevier Inc. All rights reserved.
Chemical and Physical Signatures for Microbial Forensics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cliff, John B.; Kreuzer, Helen W.; Ehrhardt, Christopher J.
Chemical and physical signatures for microbial forensics. John Cliff and Helen Kreuzer-Martin, eds. Humana Press.
Chapter 1. Introduction: Review of history and statement of need. Randy Murch, Virginia Tech
Chapter 2. The Microbe: Structure, morphology, and physiology of the microbe as they relate to potential signatures of growth conditions. Joany Jackman, Johns Hopkins University
Chapter 3. Science for Forensics: Special considerations for the forensic arena - quality control, sample integrity, etc. Mark Wilson (retired FBI), Western Carolina University
Chapter 4. Physical signatures: Light and electron microscopy, atomic force microscopy, gravimetry, etc. Joseph Michael, Sandia National Laboratory
Chapter 5. Lipids: FAME, PLFA, steroids, LPS, etc. James Robertson, Federal Bureau of Investigation
Chapter 6. Carbohydrates: Cell wall components, cytoplasm components, methods. Alvin Fox, University of South Carolina School of Medicine; David Wunschel, Pacific Northwest National Laboratory
Chapter 7. Peptides: Peptides, proteins, lipoproteins. David Wunschel, Pacific Northwest National Laboratory
Chapter 8. Elemental content: CNOHPS (treated in passing), metals, prospective cell types. John Cliff, International Atomic Energy Agency
Chapter 9. Isotopic signatures: Stable isotopes C, N, H, O, S; 14C dating; potential for heavy elements. Helen Kreuzer-Martin, Pacific Northwest National Laboratory; Michaele Kashgarian, Lawrence Livermore National Laboratory
Chapter 10. Extracellular signatures: Cellular debris, heme, agar, headspace, spent media, etc. Karen Wahl, Pacific Northwest National Laboratory
Chapter 11. Data Reduction and Integrated Microbial Forensics: Statistical concepts, parametric and multivariate statistics, integrating signatures. Kristin Jarman, Pacific Northwest National Laboratory
Visualizing, Approximating, and Understanding Black-Hole Binaries
NASA Astrophysics Data System (ADS)
Nichols, David A.
Numerical-relativity simulations of black-hole binaries and advancements in gravitational-wave detectors now make it possible to learn more about the collisions of compact astrophysical bodies. To be able to infer more about the dynamical behavior of these objects requires a fuller analysis of the connection between the dynamics of pairs of black holes and their emitted gravitational waves. The chapters of this thesis describe three approaches to learn more about the relationship between the dynamics of black-hole binaries and their gravitational waves: modeling momentum flow in binaries with the Landau-Lifshitz formalism, approximating binary dynamics near the time of merger with post-Newtonian and black-hole-perturbation theories, and visualizing spacetime curvature with tidal tendexes and frame-drag vortexes. In Chapters 2--4, my collaborators and I present a method to quantify the flow of momentum in black-hole binaries using the Landau-Lifshitz formalism. Chapter 2 reviews an intuitive version of the formalism in the first-post-Newtonian approximation that bears a strong resemblance to Maxwell's theory of electromagnetism. Chapter 3 applies this approximation to relate the simultaneous bobbing motion of rotating black holes in the superkick configuration---equal-mass black holes with their spins anti-aligned and in the orbital plane---to the flow of momentum in the spacetime, prior to the black holes' merger. Chapter 4 then uses the Landau-Lifshitz formalism to explain the dynamics of a head-on merger of spinning black holes, whose spins are anti-aligned and transverse to the infalling motion. Before they merge, the black holes move with a large, transverse, velocity, which we can explain using the post-Newtonian approximation; as the holes merge and form a single black hole, we can use the Landau-Lifshitz formalism without any approximations to connect the slowing of the final black hole to its absorbing momentum density during the merger. 
In Chapters 5--7, we discuss using analytical approximations, such as post-Newtonian and black-hole-perturbation theories, to gain further understanding into how gravitational waves are generated by black-hole binaries. Chapter 5 presents a way of combining post-Newtonian and black-hole-perturbation theories---which we call the hybrid method---for head-on mergers of black holes. It was able to produce gravitational waveforms and gravitational recoils that agreed well with comparable results from numerical-relativity simulations. Chapter 6 discusses a development of the hybrid model to include a radiation-reaction force, which is better suited for studying inspiralling black-hole binaries. The gravitational waveform from the hybrid method for inspiralling mergers agreed qualitatively with that from numerical-relativity simulations; when applied to the superkick configuration, it gave a simplified picture of the formation of the large black-hole kick. Chapter 7 describes an approximate method of calculating the frequencies of the ringdown gravitational waveforms of rotating black holes (quasinormal modes). The method generalizes a geometric interpretation of black-hole quasinormal modes and explains a degeneracy in the spectrum of these modes. In Chapters 8--11, we describe a new way of visualizing spacetime curvature using tools called tidal tendexes and frame-drag vortexes. This relies upon a time-space split of spacetime, which allows one to break the vacuum Riemann curvature tensor into electric and magnetic parts (symmetric, trace-free tensors that have simple physical interpretations). The regions where the eigenvalues of these tensors are large form the tendexes and vortexes of a spacetime, and the integral curves of their eigenvectors are its tendex and vortex lines, for the electric and magnetic parts, respectively. Chapter 8 provides an overview of these visualization tools and presents initial results from numerical-relativity simulations. 
Chapter 9 uses topological properties of vortex and tendex lines to classify properties of gravitational waves far from a source. Chapter 10 describes the formalism in more detail, and discusses the vortexes and tendexes of multipolar spacetimes in linearized gravity about flat space. The chapter helps to explain how near-zone vortexes and tendexes become gravitational waves far from a weakly gravitating, time-varying source. Chapter 11 is a detailed investigation of the vortexes and tendexes of stationary and perturbed black holes. It develops insight into how perturbations of (strongly gravitating) black holes extend from near the horizon to become gravitational waves.
Bacterial molecular networks: bridging the gap between functional genomics and dynamical modelling.
van Helden, Jacques; Toussaint, Ariane; Thieffry, Denis
2012-01-01
This introductory review synthesizes the contents of the volume Bacterial Molecular Networks of the series Methods in Molecular Biology. This volume gathers 9 reviews and 16 method chapters describing computational protocols for the analysis of metabolic pathways, protein interaction networks, and regulatory networks. Each protocol is documented by concrete case studies dedicated to model bacteria or interacting populations. Altogether, the chapters provide a representative overview of state-of-the-art methods for data integration and retrieval, network visualization, graph analysis, and dynamical modelling.
Flight and Analytical Methods for Determining the Coupled Vibration Response of Tandem Helicopters
NASA Technical Reports Server (NTRS)
Yeates, John E., Jr.; Brooks, George W.; Houbolt, John C.
1957-01-01
Chapter one presents a discussion of flight-test and analysis methods for some selected helicopter vibration studies. The use of a mechanical shaker in flight to determine the structural response is reported. A method for the analytical determination of the natural coupled frequencies and mode shapes of vibrations in the vertical plane of tandem helicopters is presented in Chapter two. The coupled mode shapes and frequencies are then used to calculate the response of the helicopter to applied oscillating forces.
NASA Astrophysics Data System (ADS)
Kong, Jing
This thesis includes four pieces of work. In Chapter 1, we present a method for examining mortality as it is seen to run in families, together with lifestyle factors that also run in families, in the subpopulation of the Beaver Dam Eye Study that had died by 2011. We find significant distance correlations between death ages, lifestyle factors, and family relationships. Considering only sib pairs compared to unrelated persons, the distance correlation between siblings and mortality is, not surprisingly, stronger than that between more distantly related family members and mortality. Chapter 2 introduces a feature screening procedure based on distance correlation and distance covariance. We demonstrate a property of distance covariance, which we incorporate into a novel feature screening procedure that uses distance correlation as a stopping criterion. The approach is applied to two real examples, namely the well-known small round blue cell tumors data and the Cancer Genome Atlas ovarian cancer data. Chapter 3 turns to right-censored human longevity data and the estimation of life expectancy. We propose a general framework of backward multiple imputation for estimating the conditional life expectancy function and the variance of the estimator in the right-censoring setting, and we prove the properties of the estimator. In addition, we apply the method to the Beaver Dam Eye Study data to study human longevity, where expected human lifetimes are modeled with smoothing spline ANOVA based on covariates including baseline age, gender, lifestyle factors, and disease variables. Chapter 4 compares two imputation methods for right-censored data, namely the well-known Buckley-James estimator and the backward imputation method proposed in Chapter 3, and shows that the backward imputation method is less biased and more robust under heterogeneity.
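The screening statistic used in the first two chapters is the sample distance correlation of Szekely and Rizzo. A compact V-statistic implementation (a sketch for illustration, not the thesis code) is:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two equal-length samples:
    0 in the limit for independence, 1 for exact linear dependence."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)  # pairwise distances
    b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    # double-center each distance matrix
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = max((A * B).mean(), 0.0)          # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / denom) if denom > 0 else 0.0
```

In a screening procedure of the kind described, each candidate feature would be ranked by its distance correlation with the response, and features kept from the top of the ranking until a stopping criterion fires.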
Aerobic and Electrochemical Oxidations with N-Oxyl Reagents
NASA Astrophysics Data System (ADS)
Miles, Kelsey C.
Selective oxidation of organic compounds represents a significant challenge for chemical transformations. Oxidation methods that utilize nitroxyl catalysts have become increasingly attractive and include Cu/nitroxyl and nitroxyl/NOx co-catalyst systems. Electrochemical activation of nitroxyls is also well known and offers an appealing alternative to the use of chemical co-oxidants. However, academic and industrial organic synthetic communities have not widely adopted electrochemical methods. Nitroxyl catalysts facilitate effective and selective oxidation of alcohols and aldehydes to ketones and carboxylic acids. Selective benzylic, allylic, and alpha-heteroatom C-H abstraction can also be achieved with nitroxyls and provides access to oxygenated products when used in combination with molecular oxygen as a radical trap. This thesis reports various chemical and electrochemical oxidation methods that were developed using nitroxyl mediators. Chapter 1 provides a short review on practical aerobic alcohol oxidation with Cu/nitroxyl and nitroxyl/NOx systems and emphasizes the utility of bicyclic nitroxyls as co-catalysts. In Chapter 2, the combination of these bicyclic nitroxyls with NOx is explored for development of a mild oxidation of alpha-chiral aryl aldehydes and showcases a sequential asymmetric hydroformylation/oxidation method. Chapter 3 reports the synthesis and characterization of two novel Cu/bicyclic nitroxyl complexes and the electronic structure analysis of these complexes. Chapter 4 highlights the electrochemical activation of various nitroxyls and reports an in-depth study on electrochemical alcohol oxidation and compares the reactivity of nitroxyls under electrochemical or chemical activation. N-oxyls can also participate in selective C-H abstraction, and Chapter 5 reports the chemical and electrochemical activation of N-oxyls for radical-mediated C-H oxygenation of (hetero)arylmethanes. 
For these electrochemical transformations, the development of user-friendly methods and analysis techniques is emphasized.
TOTAL CULTURABLE VIRUS QUANTAL ASSAY
This chapter describes a quantal method for assaying culturable human enteric viruses from water matrices. The assay differs from the plaque assay described in Chapter 10 (December 1987 Revision) in that it is based upon the direct microscopic viewing of cells for virus-induced ...
Magnetic Minerals in Soils and Paleosols as Recorders of Paleoclimate
NASA Astrophysics Data System (ADS)
Maxbauer, Daniel P.
It is a fundamental challenge for geologists to create quantitative estimates of rainfall and temperature in past climates. Yet, records of past climates are integral for understanding the complexities of earth system dynamics. The research presented in this dissertation begins to establish a framework for reconstructing paleoclimates using the magnetic properties of fossilized soils. Magnetic minerals are ubiquitous in soils, and their composition, grain size, and concentration are often directly related to the ambient climatic conditions that were present during soil formation. Using rock magnetic methods, it is possible to sensitively characterize the magnetic mineral assemblages in natural materials - including soils and paleosols. The fundamentals of rock magnetism and many of the common methods used in rock magnetic applications are presented in Chapters 2 and 3, respectively. Chapter 4 reviews the physical, chemical, and biological factors that affect magnetic mineral assemblages in soils, the magnetic methods we use to characterize them, and the known relationships between magnetic minerals in soils and climate. A critical component of developing replicable tools for reconstructing paleoclimate is making analytical and statistical tools accessible to the greater community. Chapter 5 introduces a new model, MAX UnMix, that was developed as an open-source, online tool for rock magnetic data processing that is designed to be user-friendly and accessible. Two case studies, on both fossil (Chapter 7) and modern (Chapter 6) soils, are presented, discussing many issues related to applying magnetic paleoprecipitation proxies in deep time. Chapter 7 discusses difficulties in disentangling the effects of pedogenesis, diagenesis, and recent surficial weathering in Paleocene-Eocene (56-55 Ma) paleosols. Chapter 6 explores the relative influence of soil forming factors (vegetation vs. 
climate) on controlling the pedogenic formation of magnetic minerals in soils developing across the forest-to-prairie ecotone in NW Minnesota. The body of research presented in this dissertation provides many challenges to future workers, while at the same time highlighting that rock magnetism should be a useful tool for researchers interested in deep time paleoclimates moving forward.
Islamic values in the Kuwaiti curriculum
NASA Astrophysics Data System (ADS)
Alshahen, Ghanim A.
This study investigated the influence of Islamic values on the curriculum, in particular the Islamic studies and science curricula. Three questionnaires were developed, validated, and used to investigate teachers' and pupils' attitudes toward Islamic values in the curriculum. Four main sections deal with Islamic values in the Islamic studies and science curricula, namely: Islamic values in the textbook, teaching Islamic values, the relationship between Islamic values and the science curriculum, and the Islamic values model. Two instruments were used in this study: questionnaires and interviews. Both qualitative and quantitative data were generated from the sample, which consisted of Islamic studies and science teachers and supervisors in intermediate schools, and pupils studying in the eighth grade in intermediate schools. In the last case, the data were gathered by questionnaire only. The interviews and questionnaires provided explanatory data. The research was carried out in three phases, considering respectively 55 Islamic studies teachers, 55 science teachers who teach the eighth grade in intermediate schools, and 786 pupils who study in the eighth grade in 20 schools. In each school, the researcher selected two classes. This thesis consists of eight chapters. Chapter One provides a general introduction and highlights the general framework of this study. Chapter Two is concerned with the development of the education system in Kuwait and the objectives of the Islamic studies and science curricula in the intermediate stage. Chapter Three presents the conceptions of values, the Islamic values model, and Islamic values in the curriculum. Chapter Four describes the objectives of the study and its research design, along with the methods and procedures used to develop the instruments. The sampling procedure, the data collection procedures, and the statistical methods used to analyse the data are also described. Chapter Five presents and interprets the findings of this study. 
Data analysis in this chapter deals with the Islamic studies and science teachers' questionnaires and both the teachers' and supervisors' interviews. The interview findings are dealt with according to the key themes. Chapter Seven discusses the main findings related to Islamic values in both curricula. Chapter Eight reflects on the main themes of the investigation as a whole. It gives a brief description of the aims and methods of the study and sets out the major findings, their importance, and limitations. Finally, the study concludes with several recommendations and suggestions for developing Islamic values in the curriculum.
Automatic Synthesis of Implementations for Abstract Data Types from Algebraic Specifications.
1982-06-01
second is to expect the user to furnish more information about the desired properties of the program to guide the synthesis procedure. A third ... of the fourth and the fifth chapters. The sixth chapter describes the second stage. The last chapter gives the concluding remarks. 2. An Overview ... second section gives a summary of the synthesis procedure. It points out the nontrivial issues involved in the method employed by the procedure for
NASA Astrophysics Data System (ADS)
Mishchenko, Michael I.
2017-01-01
The second - revised and enlarged - edition of this popular monograph is co-authored by Michael Kahnert and is published as Volume 145 of the Springer Series in Optical Sciences. As in the first edition, the main emphasis is on the mathematics of electromagnetic scattering and on numerically exact computer solutions of the frequency-domain macroscopic Maxwell equations for particles with complex shapes. The book is largely centered on Green-function solution of relevant boundary value problems and the T-matrix methodology, although other techniques (the method of lines, integral equation methods, and Lippmann-Schwinger equations) are also covered. The first four chapters serve as a thorough overview of key theoretical aspects of electromagnetic scattering intelligible to readers with undergraduate training in mathematics. A separate chapter provides an instructive analysis of the Rayleigh hypothesis which is still viewed by many as a highly controversial aspect of electromagnetic scattering by nonspherical objects. Another dedicated chapter introduces basic quantities serving as optical observables in practical applications. A welcome extension of the first edition is the new chapter on group theoretical aspects of electromagnetic scattering by particles with discrete symmetries. An essential part of the book is the penultimate chapter describing in detail popular public-domain computer programs mieschka and Tsym which can be applied to a wide range of particle shapes. The final chapter provides a general overview of available literature on electromagnetic scattering by particles and gives useful reading advice.
2010-06-01
C. Conservation of Momentum; 1. Gravity Effects ... describe the high-order spectral element method used to discretize the problem in space (up to 16th-order polynomials) in Chapter IV. Chapter V discusses ... inertial frame. Body forces are those acting on the fluid volume that are proportional to the mass. The body forces considered here are gravity and
Ordinary differential equations.
Lebl, Jiří
2013-01-01
In this chapter we provide an overview of the basic theory of ordinary differential equations (ODE). We give the basics of analytical methods for their solutions and also review numerical methods. The chapter should serve as a primer for the basic application of ODEs and systems of ODEs in practice. As an example, we work out the equations arising in Michaelis-Menten kinetics and give a short introduction to using Matlab for their numerical solution.
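The Michaelis-Menten example the chapter works out (there in Matlab) translates directly to Python; a minimal sketch using the reduced rate law dS/dt = -Vmax·S/(Km+S), with illustrative parameter values not taken from the chapter:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduced Michaelis-Menten rate law: dS/dt = -Vmax * S / (Km + S).
# Vmax and Km are illustrative values chosen for demonstration only.
Vmax, Km = 1.0, 0.5

def rhs(t, y):
    S = y[0]
    return [-Vmax * S / (Km + S)]

# Integrate the substrate concentration from S(0) = 2.0 over t in [0, 10]
sol = solve_ivp(rhs, (0.0, 10.0), [2.0], rtol=1e-8, atol=1e-10)
S = sol.y[0]
```

At small S the rate law reduces to first-order decay with rate Vmax/Km, and at large S to zero-order consumption at rate Vmax; the numerical solution interpolates between the two regimes.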
[Fresh water macroinvertebrates of Costa Rica I].
Springer, Monika; Ramirez, Alonso; Hanson, Paul
2010-12-01
This is the first in a series of three volumes on the freshwater macroinvertebrates of Costa Rica. The present volume includes an introductory chapter summarizing the major types of freshwater environments, the biology of freshwater macroinvertebrates (habitats, food, respiration, osmoregulation, etc.), ecological and economic importance, conservation and a synopsis of the major groups, followed by a simplified key. The next two chapters discuss collecting methods and biomonitoring. These are followed by chapters on mayflies (Ephemeroptera: 10 families), dragonflies (Odonata: 13 families), stoneflies (Plecoptera: 1 family) and caddisflies (Trichoptera: 15 families). Both in this volume and in those to follow, the chapters treating individual taxa include a summary of the natural history, importance, taxonomy, collecting methods, morphology and an illustrated key to the families; each family is discussed separately and an illustrated key to genera is provided; each chapter ends with a bibliography and a table listing all the genera with information on number of species, distribution, habitat and tolerance to water pollution. While the emphasis is on families and genera known from Costa Rica, additional taxa occurring elsewhere in Central America are mentioned. The present volume also includes numerous color plates of aquatic macroinvertebrates.
MPACT Theory Manual, Version 2.2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downar, Thomas; Collins, Benjamin S.; Gehin, Jess C.
2016-06-09
This theory manual describes the three-dimensional (3-D) whole-core, pin-resolved transport calculation methodology employed in the MPACT code. To provide sub-pin level power distributions with sufficient accuracy, MPACT employs the method of characteristics (MOC) solutions in the framework of a 3-D coarse mesh finite difference (CMFD) formulation. MPACT provides a 3D MOC solution, but also a 2D/1D solution in which the 2D planar solution is provided by MOC and the axial coupling is resolved by one-dimensional (1-D) lower order (diffusion or P3) solutions. In Chapter 2 of the manual, the MOC methodology is described for calculating the regional angular and scalar fluxes from the Boltzmann transport equation. In Chapter 3, the 2D/1D methodology is described, together with the description of the CMFD iteration process involving dynamic homogenization and solution of the multigroup CMFD linear system. A description of the MPACT depletion algorithm is given in Chapter 4, followed by a discussion of the subgroup and ESSM resonance processing methods in Chapter 5. The final Chapter 6 describes a simplified thermal hydraulics model in MPACT.
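As a much-reduced illustration of the kind of eigenvalue problem such a code solves (not MPACT's actual MOC or 2D/1D machinery), a one-group, one-dimensional slab diffusion k-eigenvalue problem can be discretized by finite differences and solved by power iteration; all cross sections below are illustrative, not MPACT data:

```python
import numpy as np

# Toy one-group, 1-D slab diffusion k-eigenvalue problem with zero-flux
# boundaries, solved by finite differences and power iteration.
n, h = 50, 1.0                          # mesh cells, cell width (cm)
D, sig_a, nu_sig_f = 1.0, 0.01, 0.012   # diffusion coeff, absorption, production

# Loss operator M = -D d^2/dx^2 + sig_a (3-point stencil, phi = 0 beyond ends)
M = np.zeros((n, n))
for i in range(n):
    M[i, i] = 2.0 * D / h**2 + sig_a
    if i > 0:
        M[i, i - 1] = -D / h**2
    if i < n - 1:
        M[i, i + 1] = -D / h**2

# Power iteration on M phi = (1/k) * nu_sig_f * phi
phi, k = np.ones(n), 1.0
for _ in range(200):
    phi_new = np.linalg.solve(M, nu_sig_f * phi / k)
    k *= phi_new.sum() / phi.sum()        # update eigenvalue estimate
    phi = phi_new / np.linalg.norm(phi_new)
```

For this slab the converged k agrees with the analytic one-group result nu_sig_f / (sig_a + D·B²), with B the fundamental buckling of the discrete Laplacian.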
Explanatory chapter: introducing exogenous DNA into cells.
Koontz, Laura
2013-01-01
The ability to efficiently introduce DNA into cells is essential for many experiments in biology. This is an explanatory chapter providing an overview of the various methods for introducing DNA into bacteria, yeast, and mammalian cells. Copyright © 2013 Elsevier Inc. All rights reserved.
Evaluation Methods Sourcebook.
ERIC Educational Resources Information Center
Love, Arnold J., Ed.
The chapters commissioned for this book describe key aspects of evaluation methodology as they are practiced in a Canadian context, providing representative illustrations of recent developments in evaluation methodology as it is currently applied. The following chapters are included: (1) "Program Evaluation with Limited Fiscal and Human…
NASA Astrophysics Data System (ADS)
Ng, J.; Kingsbury, N. G.
2004-02-01
This book provides an overview of the theory and practice of continuous and discrete wavelet transforms. Divided into seven chapters, the first three chapters of the book are introductory, describing the various forms of the wavelet transform and their computation, while the remaining chapters are devoted to applications in fluids, engineering, medicine and miscellaneous areas. Each chapter is well introduced, with suitable examples to demonstrate key concepts. Illustrations are included where appropriate, thus adding a visual dimension to the text. A noteworthy feature is the inclusion, at the end of each chapter, of a list of further resources from the academic literature which the interested reader can consult. The first chapter is purely an introduction to the text. The treatment of wavelet transforms begins in the second chapter, with the definition of what a wavelet is. The chapter continues by defining the continuous wavelet transform and its inverse and a description of how it may be used to interrogate signals. The continuous wavelet transform is then compared to the short-time Fourier transform. Energy and power spectra with respect to scale are also discussed and linked to their frequency counterparts. Towards the end of the chapter, the two-dimensional continuous wavelet transform is introduced. Examples of how the continuous wavelet transform is computed using the Mexican hat and Morlet wavelets are provided throughout. The third chapter introduces the discrete wavelet transform, with its distinction from the discretized continuous wavelet transform having been made clear at the end of the second chapter. In the first half of the chapter, the logarithmic discretization of the wavelet function is described, leading to a discussion of dyadic grid scaling, frames, orthogonal and orthonormal bases, scaling functions and multiresolution representation. 
The fast wavelet transform is introduced and its computation is illustrated with an example using the Haar wavelet. The second half of the chapter groups together miscellaneous points about the discrete wavelet transform, including coefficient manipulation for signal denoising and smoothing, a description of Daubechies’ wavelets, the properties of translation invariance and biorthogonality, the two-dimensional discrete wavelet transforms and wavelet packets. The fourth chapter is dedicated to wavelet transform methods in the author’s own specialty, fluid mechanics. Beginning with a definition of wavelet-based statistical measures for turbulence, the text proceeds to describe wavelet thresholding in the analysis of fluid flows. The remainder of the chapter describes wavelet analysis of engineering flows, in particular jets, wakes, turbulence and coherent structures, and geophysical flows, including atmospheric and oceanic processes. The fifth chapter describes the application of wavelet methods in various branches of engineering, including machining, materials, dynamics and information engineering. Unlike previous chapters, this (and subsequent) chapters are styled more as literature reviews that describe the findings of other authors. The areas addressed in this chapter include: the monitoring of machining processes, the monitoring of rotating machinery, dynamical systems, chaotic systems, non-destructive testing, surface characterization and data compression. The sixth chapter continues in this vein with the attention now turned to wavelets in the analysis of medical signals. Most of the chapter is devoted to the analysis of one-dimensional signals (electrocardiogram, neural waveforms, acoustic signals etc.), although there is a small section on the analysis of two-dimensional medical images. The seventh and final chapter of the book focuses on the application of wavelets in three seemingly unrelated application areas: fractals, finance and geophysics. 
The treatment of wavelet methods in fractals focuses on stochastic fractals with a short section on multifractals. The treatment of finance touches on the use of wavelets by other authors in studying stock prices, commodity behaviour, market dynamics and foreign exchange rates. The treatment of geophysics covers what was omitted from the fourth chapter, namely, seismology, well logging, topographic feature analysis and the analysis of climatic data. The text concludes with an assortment of other application areas which could only be mentioned in passing. Unlike most other publications on the subject, this book does not treat wavelet transforms in a mathematically rigorous manner but rather aims to explain the mechanics of the wavelet transform in a way that is easy to understand. Consequently, it serves as an excellent overview of the subject rather than as a reference text. Keeping the mathematics to a minimum and omitting cumbersome and detailed proofs from the text, the book is best suited to those who are new to wavelets or who want an intuitive understanding of the subject. Such an audience may include graduate students in engineering and professionals and researchers in engineering and the applied sciences.
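The Haar example the third chapter uses to illustrate the fast wavelet transform is easy to reproduce; a minimal single-level analysis/synthesis pair with orthonormal Haar filters (the input signal below is arbitrary) might look like:

```python
import numpy as np

def haar_dwt(x):
    """One level of the fast wavelet transform with orthonormal Haar filters."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (scaling) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (wavelet) coefficients
    return s, d

def haar_idwt(s, d):
    """Inverse of one Haar level: interleave the reconstructed pairs."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
s, d = haar_dwt(x)
```

Because the transform is orthonormal, reconstruction is exact and the signal energy splits exactly between the approximation and detail coefficients, mirroring the energy/power discussion in the second chapter.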
NASA Astrophysics Data System (ADS)
Russell, Greg
The work described in this dissertation was motivated by a desire to better understand the cellular pathology of ischemic stroke. Two of the three bodies of research presented herein address issues directly related to the investigation of ischemic stroke through the use of diffusion weighted magnetic resonance imaging (DWMRI) methods. The first topic concerns the development of a computationally efficient finite difference method, designed to evaluate the impact of microscopic tissue properties on the formation of DWMRI signal. For the second body of work, the effect of changing the intrinsic diffusion coefficient of a restricted sample on clinical DWMRI experiments is explored. The final body of work, while motivated by the desire to understand stroke, addresses the issue of acquiring large amounts of MRI data well suited for quantitative analysis in reduced scan time. In theory, the method could be used to generate quantitative parametric maps, including those depicting information gleaned through the use of DWMRI methods. Chapter 1 provides an introduction to several topics. The use of DWMRI methods in the study of ischemic stroke is covered. An introduction to the fundamental physical principles at work in MRI is also provided. In this section the means by which magnetization is created in MRI experiments, how MRI signal is induced, as well as the influence of spin-spin and spin-lattice relaxation are discussed. Attention is also given to describing how MRI measurements can be sensitized to diffusion through the use of qualitative and quantitative descriptions of the process. Finally, the reader is given a brief introduction to the use of numerical methods for solving partial differential equations. In Chapters 2, 3 and 4, three related bodies of research are presented in terms of research papers. In Chapter 2, a novel computational method is described. The method reduces the computational resources required to simulate DWMRI experiments. 
In Chapter 3, a detailed study on how changes in the intrinsic intracellular diffusion coefficient may influence clinical DWMRI experiments is described. In Chapter 4, a novel, non-steady state quantitative MRI method is described.
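The dissertation's finite-difference machinery is specialized, but the basic building block it rests on can be sketched as a toy: an explicit (FTCS) update for the 1-D diffusion equation, subject to the standard stability constraint D·dt/dx² ≤ 1/2. All parameter values here are illustrative, not taken from the dissertation:

```python
import numpy as np

# Explicit finite-difference (FTCS) stepping for 1-D diffusion,
# the kind of kernel a diffusion-signal simulation builds on.
D = 1.0e-3                 # diffusion coefficient (mm^2/s), illustrative
dx, dt = 1.0e-3, 2.0e-4    # grid spacing (mm), time step (s)
r = D * dt / dx**2
assert r <= 0.5, "FTCS stability requires D*dt/dx^2 <= 1/2"

n = 201
c = np.zeros(n)
c[n // 2] = 1.0 / dx       # approximate point source (unit total mass)

for _ in range(500):       # advance to t = 0.1 s
    c[1:-1] = c[1:-1] + r * (c[2:] - 2.0 * c[1:-1] + c[:-2])

mass = c.sum() * dx        # total mass, conserved away from the boundaries
```

After 500 steps the profile is close to the analytic Gaussian exp(-x²/4Dt)/√(4πDt), whose peak at t = 0.1 s is about 28 per mm for these parameters.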
Handbook of Special Education.
ERIC Educational Resources Information Center
Kauffman, James M., Ed.; Hallahan, Daniel P., Ed.
Intended to serve as a basic reference work for students and professionals in special education, the book contains 34 author contributed chapters concerned with the conceptual foundations of special education, service delivery systems, curriculum and methods, and child and child/environmental management. Chapters have the following titles and…
Standard methods for tracheal mite research
USDA-ARS?s Scientific Manuscript database
This chapter, for the COLOSS Beebook from the Bee Research Center in Switzerland, summarizes all the current information about the tracheal mite (Acarapis woodi) infesting honey bees (Apis mellifera). The chapter covers the effects on bees, its life history, and its range, as well as the identifica...
DOT National Transportation Integrated Search
1994-02-01
This report describes the data collection procedures, the data analysis methods, and the results gained from the on-site evaluations. The content of the report is as follows: Chapter 2 - State Profiles. This chapter includes descriptions of the organ...
Simulation of phase equilibria
NASA Astrophysics Data System (ADS)
Martin, Marcus Gary
The focus of this thesis is on the use of configurational bias Monte Carlo in the Gibbs ensemble. Unlike Metropolis Monte Carlo, which is reviewed in Chapter I, configurational bias Monte Carlo uses an underlying Markov chain transition matrix that is asymmetric in such a way that it is more likely to attempt a move to a molecular conformation with lower energy than to one with higher energy. Chapter II explains how this enables efficient simulation of molecules with complex architectures (long chains and branched molecules) for coexisting fluid phases (liquid, vapor, or supercritical), and also presents several of our recent extensions to this method. In Chapter III we discuss the development of the Transferable Potentials for Phase Equilibria United Atom (TraPPE-UA) force field which accurately describes the fluid phase coexistence for linear and branched alkanes. Finally, in the fourth chapter the methods and the force field are applied to systems ranging from supercritical extraction to gas chromatography to illustrate the power and versatility of our approach.
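The Metropolis scheme the thesis reviews in Chapter I is simple to sketch; a minimal sampler for a single particle in a harmonic potential U(x) = x²/2, in units where kT = 1/beta (everything below is an illustrative toy, not the thesis's molecular code):

```python
import math
import random

def metropolis(n_steps, beta=1.0, step=0.5, seed=42):
    """Minimal Metropolis Monte Carlo for a particle in U(x) = x^2 / 2.

    Trial moves are symmetric, so detailed balance requires accepting a
    move with probability min(1, exp(-beta * dU)).
    """
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        dU = 0.5 * x_new**2 - 0.5 * x**2
        if dU <= 0.0 or rng.random() < math.exp(-beta * dU):
            x = x_new                       # accept the trial move
        samples.append(x)                   # rejected moves recount old state
    return samples

samples = metropolis(200_000)
mean_x2 = sum(x * x for x in samples) / len(samples)
```

For beta = 1 the Boltzmann distribution is a unit Gaussian, so the sample average of x² should approach 1; configurational bias methods replace the symmetric trial move with a biased one and correct for it in the acceptance rule.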
Yakima River Species Interactions Studies, Annual Report 1993.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pearsons, Todd N.
Species interactions research was initiated in 1989 to investigate ecological interactions among fish in response to proposed supplementation of salmon and steelhead in the upper Yakima River basin. Data have been collected prior to supplementation to characterize the rainbow trout population, predict the potential interactions that may occur as a result of supplementation, and develop methods to monitor interactions. Major topics of this report are associated with the life history of rainbow trout, interactions experimentation, and methods for sampling. This report is organized into nine chapters with a general introduction preceding the first chapter and a general discussion following the last chapter. This annual report summarizes data collected primarily by the Washington Department of Fish and Wildlife (WDFW) between January 1 and December 31, 1993 in the upper Yakima basin above Roza Dam; however, these data were compared to data from previous years to identify preliminary trends and patterns. Major preliminary findings from each of the chapters included in this report are described.
VIII. THE PAST, PRESENT, AND FUTURE OF DEVELOPMENTAL METHODOLOGY.
Little, Todd D; Wang, Eugene W; Gorrall, Britt K
2017-06-01
This chapter selectively reviews the evolution of quantitative practices in the field of developmental methodology. The chapter begins with an overview of the past in developmental methodology, discussing the implementation and dissemination of latent variable modeling and, in particular, longitudinal structural equation modeling. It then turns to the present state of developmental methodology, highlighting current methodological advances in the field. Additionally, this section summarizes ample quantitative resources, ranging from key quantitative methods journal articles to the various quantitative methods training programs and institutes. The chapter concludes with the future of developmental methodology and puts forth seven future innovations in the field. The innovations discussed span the topics of measurement, modeling, temporal design, and planned missing data designs. Lastly, the chapter closes with a brief overview of advanced modeling techniques such as continuous time models, state space models, and the application of Bayesian estimation in the field of developmental methodology. © 2017 The Society for Research in Child Development, Inc.
Software Safety Analysis of a Flight Guidance System
NASA Technical Reports Server (NTRS)
Butler, Ricky W. (Technical Monitor); Tribble, Alan C.; Miller, Steven P.; Lempia, David L.
2004-01-01
This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of model checking, the formal methods analysis technique used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.
Molecular Dynamics Studies of Self-Assembling Biomolecules and DNA-functionalized Gold Nanoparticles
NASA Astrophysics Data System (ADS)
Cho, Vince Y.
This thesis is organized as follows. In Chapter 2, we use fully atomistic MD simulations to study the conformation of DNA molecules that link gold nanoparticles to form nanoparticle superlattice crystals. In Chapter 3, we study the self-assembly of peptide amphiphiles (PAs) into a cylindrical micelle fiber by using CGMD simulations. Compared to fully atomistic MD simulations, CGMD simulations prove to be computationally cost-efficient and reasonably accurate for exploring self-assembly, and are used in all subsequent chapters. In Chapter 4, we apply CGMD methods to study the self-assembly of small molecule-DNA hybrid (SMDH) building blocks into well-defined cage-like dimers, and reveal the role of kinetics and thermodynamics in this process. In Chapter 5, we extend the CGMD model for this system and find that the assembly of SMDHs can be fine-tuned by changing parameters. In Chapter 6, we explore superlattice crystal structures of DNA-functionalized gold nanoparticles (DNA-AuNP) with the CGMD model and compare the hybridization.
Grid sensitivity for aerodynamic optimization and flow analysis
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1993-01-01
A review of the relevant literature makes it apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluation would introduce gradient errors into the sensitivity module, thereby infecting the overall optimization process. Development of an efficient and reliable grid sensitivity module, with special emphasis on aerodynamic applications, therefore appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.
Department of Defense Suicide Event Report Calendar Year 2013 Annual Report
2014-07-24
suicide attempt DoDSERs, the most common method was drug/alcohol overdose. Prescription and over-the- ... MARINE CORPS DoDSER RESULTS: The DoDSER system ... Chapter 1: Suicide Rates ... Chapter 2: DoDSER
2013-12-01
Programming code in the Python language used in AIS data preprocessing is contained in Appendix A. The MATLAB programming code used to apply the Hough ... described in Chapter III is applied to archived AIS data in this chapter. The implementation of the method, including programming techniques used, is ... is contained in the second. To provide a proof of concept for the algorithm described in Chapter III, the Python programming language was used for
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Gowans, Dakers; Telarico, Chad
The Commercial and Industrial Lighting Evaluation Protocol (the protocol) describes methods to account for gross energy savings resulting from the programmatic installation of efficient lighting equipment in large populations of commercial, industrial, and other nonresidential facilities. This protocol does not address savings resulting from changes in codes and standards, or from education and training activities. A separate Uniform Methods Project (UMP) protocol, Chapter 3: Commercial and Industrial Lighting Controls Evaluation Protocol, addresses methods for evaluating savings resulting from lighting control measures such as adding time clocks, tuning energy management system commands, and adding occupancy sensors.
Robust Hybrid Finite Element Methods for Antennas and Microwave Circuits
NASA Technical Reports Server (NTRS)
Gong, J.; Volakis, John L.
1996-01-01
One of the primary goals of this dissertation is the development of robust hybrid finite element-boundary integral (FE-BI) techniques for modeling and design of conformal antennas of arbitrary shape. Both the finite element and integral equation methods are first overviewed in this chapter, with an emphasis on recently developed hybrid FE-BI methodologies for antennas, microwave and millimeter wave applications. The structure of the dissertation is then outlined. We conclude the chapter with discussions of certain fundamental concepts and methods in electromagnetics, which are important to this study.
Determining if an mRNA is a Substrate of Nonsense-Mediated mRNA Decay in Saccharomyces cerevisiae.
Johansson, Marcus J O
2017-01-01
Nonsense-mediated mRNA decay (NMD) is a conserved eukaryotic quality control mechanism which triggers decay of mRNAs harboring premature translation termination codons. In this chapter, I describe methods for monitoring the influence of NMD on mRNA abundance and decay rates in Saccharomyces cerevisiae. The descriptions include detailed methods for growing yeast cells, total RNA isolation, and Northern blotting. Although the chapter focuses on NMD, the methods can be easily adapted to assess the effect of other mRNA decay pathways.
Galerkin Method for Nonlinear Dynamics
NASA Astrophysics Data System (ADS)
Noack, Bernd R.; Schlegel, Michael; Morzynski, Marek; Tadmor, Gilead
A Galerkin method is presented for control-oriented reduced-order models (ROM). This method generalizes linear approaches elaborated by M. Morzyński et al. for the nonlinear Navier-Stokes equation. These ROM are used as plants for control design in the chapters by G. Tadmor et al., S. Siegel, and R. King in this volume. Focus is placed on empirical ROM which compress flow data in the proper orthogonal decomposition (POD). The chapter shall provide a complete description for construction of straight-forward ROM as well as the physical understanding and teste
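The POD compression step the chapter builds on reduces to plain linear algebra: the POD modes are the left singular vectors of a snapshot matrix, and the Galerkin projection restricts the dynamics to the span of the leading modes. A minimal, self-contained sketch on synthetic snapshot data (all values illustrative, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: two dominant spatial modes plus small noise
n_space, n_snap = 100, 40
xs = np.linspace(0.0, 2.0 * np.pi, n_space)
modes = np.stack([np.sin(xs), np.cos(2.0 * xs)], axis=1)      # (n_space, 2)
coeffs = rng.normal(size=(2, n_snap))
X = modes @ coeffs + 1e-3 * rng.normal(size=(n_space, n_snap))

# POD: left singular vectors of the snapshot matrix are the POD modes
U, svals, _ = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(svals**2) / np.sum(svals**2)
r = int(np.searchsorted(energy, 0.999)) + 1    # modes for 99.9% of the energy

# Galerkin-style projection of the data onto the first r POD modes
Phi = U[:, :r]
a = Phi.T @ X              # reduced (modal) coordinates
X_rom = Phi @ a            # rank-r reconstruction
```

In a flow-control ROM the same basis Phi would be used to project the Navier-Stokes operator, yielding a small ODE system for the modal coordinates a(t); here the projection is only applied to the data to show the compression.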
Empirical Force Fields for Mechanistic Studies of Chemical Reactions in Proteins.
Das, A K; Meuwly, M
2016-01-01
Following chemical reactions in atomistic detail is one of the most challenging aspects of current computational approaches to chemistry. In this chapter the application of adiabatic reactive MD (ARMD) and its multistate version (MS-ARMD) is discussed. Both methods allow the study of bond-breaking and bond-forming processes in chemical and biological systems. Particular emphasis is put on practical aspects of applying the methods to investigate the dynamics of chemical reactions. The chapter closes with an outlook on possible generalizations of the methods discussed. © 2016 Elsevier Inc. All rights reserved.
Fundraising for Early Childhood Programs: Getting Started and Getting Results.
ERIC Educational Resources Information Center
Finn, Matia
Designed to assist practitioners serving young children and their families, this book contains information about methods of raising money and managing nonprofit organizations. Following the first chapter's introductory definition of important terms associated with the fundraising process, chapter 2 discusses some prerequisite steps required before…
Educational Evaluation: Analysis and Responsibility.
ERIC Educational Resources Information Center
Apple, Michael W., Ed.; And Others
This book presents controversial aspects of evaluation and aims at broadening perspectives and insights in the evaluation field. Chapter 1 criticizes modes of evaluation and the basic rationality behind them and focuses on assumptions that have problematic consequences. Chapter 2 introduces concepts of evaluation and examines methods of grading…
ERIC Educational Resources Information Center
Wu, Yenna
1991-01-01
Exploration of and comparisons between structural, stylistic, and linguistic similarities and differences in two modern Chinese semiautobiographical texts points out both authors' methods for depicting the ironies within their socio-political and ideological conditions. (19 references) (CB)
Chapter 4 of Assessing the Multiple Benefits of Clean Energy helps states understand the methods, models, opportunities, and issues associated with assessing the GHG, air pollution, air quality, and human health benefits of clean energy options.
Guide to Federal Resources for the Developmentally Disabled.
ERIC Educational Resources Information Center
Russem, Wendy, Ed.; And Others
The guide presents information on available federal resources to improve services for developmentally disabled persons. An introductory chapter provides an overview of the creation and evolution of the Developmental Disabilities Program. Chapter two focuses on federal funding and appropriations, including methods of awarding grants and contracts.…
Design and Implementation of an Innovative Residential PV System
NASA Astrophysics Data System (ADS)
Najm, Elie Michel
This work focuses on the design and implementation of an innovative residential PV system. In chapter one, after an introduction related to the rapid growth of solar system installations, the most commonly used state-of-the-art solar power electronics configurations are discussed, which leads to introducing the proposed DC/DC parallel configuration. The advantages and disadvantages of each of the power electronics configurations are deliberated. The scope of work in the power electronics is defined in this chapter as related to the panel-side DC/DC converter. System integration and mechanical proposals are also within the scope of work and are discussed in later chapters. The operating principle of a novel low-cost PV converter is proposed in chapter 2. The proposal is based on an innovative, simplified analog implementation of a master/slave methodology, resulting in efficient, soft-switched, interleaved variable-frequency flybacks operating in the boundary conduction mode (BCM). The scheme concept and circuit configuration, operating principle and theoretical waveforms, design equations, and design considerations are presented. Furthermore, design examples are given, illustrating the significance of the newly derived frequency equation for flybacks operating in BCM. In chapters 3, 4, and 5, the design, implementation, and optimization of the novel DC/DC converter introduced in chapter 2 are discussed. In chapter 3, a detailed variable-frequency BCM flyback design model, leading to optimized component selection and transformer design (detailed in chapter 4), is presented. Furthermore, in chapter 4, the method enabling the use of lower-voltage-rating switching devices is also discussed. In chapter 5, circuitry related to start-up, drive for the main switching devices, zero-voltage switching (ZVS), turn-off soft switching, and interleaving control is fully detailed.
The experimental results of the proposed DC/DC converter are presented in chapter 6. In chapter 7, a novel integration method is proposed for the residential PV solar system. The proposal presents solutions to challenges encountered in the implementation of today's approaches. Faster installation, easier system grounding, and integration of the power electronics to reduce the number of connectors and the system cost are detailed. Neither specially skilled installers nor special tools are required for implementing the proposed system integration. Photos of the experimental results related to the installation of a 3 kW system, which was fully completed in less than an hour and a half, are also presented.
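For orientation, the textbook idealized BCM flyback relation below shows how input power, voltages, and magnetizing inductance set the switching frequency. This is a generic lossless approximation that neglects the resonant valley time; it is not the dissertation's newly derived frequency equation, and all component values are hypothetical:

```python
def bcm_flyback_frequency(p_in, v_in, v_reflected, l_m):
    """Idealized switching frequency of a flyback in boundary conduction
    mode (lossless triangle current, resonant valley time neglected).

    p_in        : input power (W)
    v_in        : DC input voltage (V)
    v_reflected : output voltage reflected to the primary, n*Vout (V)
    l_m         : magnetizing inductance (H)
    """
    # From P = 0.5*L*Ipk^2*f with period T = L*Ipk/Vin + L*Ipk/Vr:
    i_pk = 2.0 * p_in * (1.0 / v_in + 1.0 / v_reflected)   # peak current (A)
    t_period = l_m * i_pk * (1.0 / v_in + 1.0 / v_reflected)
    return 1.0 / t_period

# Hypothetical 250 W panel-side stage, 40 V input, 120 V reflected, 40 uH:
f = bcm_flyback_frequency(250.0, 40.0, 120.0, 40e-6)
print(round(f / 1e3, 1), "kHz")   # → 45.0 kHz
```

The inverse dependence on power and inductance is the reason BCM converters are variable-frequency: light load pushes the frequency up, which drives the interleaving control detailed in chapter 5.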
Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.
Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan
2013-01-01
In this chapter, an overview of experimental designs to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, two phases can often be distinguished: a screening phase and an optimization phase. In method validation, the method is evaluated on its fitness for purpose. A validation item that also applies experimental designs is robustness testing. In the screening phase and in robustness testing, screening designs are applied. During the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated by examples of chiral CE and CEC methods.
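A two-level screening design of the kind used in the screening phase can be generated mechanically. The sketch below builds a full factorial design for three hypothetical chiral-CE factors (for many factors a fractional factorial or Plackett-Burman design would be substituted to keep the run count manageable):

```python
from itertools import product

def full_factorial(factors):
    """Two-level full factorial screening design.
    `factors` maps factor name -> (low, high); returns a list of runs."""
    names = list(factors)
    levels = [factors[n] for n in names]
    return [dict(zip(names, combo)) for combo in product(*levels)]

# Hypothetical factors: buffer pH, cyclodextrin concentration (mM), voltage (kV)
design = full_factorial({"pH": (2.5, 3.5), "CD_mM": (5, 15), "kV": (15, 25)})
print(len(design))    # → 8 runs (2^3)
print(design[0])
```

Each run dictionary maps directly onto one experiment; responses (e.g., enantioresolution) measured at each run feed the effect estimates used to pick the significant factors.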
Asteroseismology: Data Analysis Methods and Interpretation for Space and Ground-based Facilities
NASA Astrophysics Data System (ADS)
Campante, T. L.
2012-06-01
This dissertation has been submitted to the Faculdade de Ciências da Universidade do Porto in partial fulfillment of the requirements for the PhD degree in Astronomy. The scientific results presented herein follow from the research activity performed under the supervision of Dr. Mário João Monteiro at the Centro de Astrofísica da Universidade do Porto and Dr. Hans Kjeldsen at the Institut for Fysik og Astronomi, Aarhus Universitet. The dissertation is composed of three chapters and a list of appendices. Chapter 1 serves as an unpretentious and rather general introduction to the field of asteroseismology of solar-like stars. It starts with a historical account of the field of asteroseismology, followed by a general review of the basic physics and properties of stellar pulsations. Emphasis is then naturally placed on the stochastic excitation of stellar oscillations and on the potential of asteroseismic inference. The chapter closes with a discussion of observational techniques and the observational status of the field. Chapter 2 is devoted to the subject of data analysis in asteroseismology. This is an extensive subject; therefore, a compilation is presented of the relevant data analysis methods and techniques employed contemporarily in the asteroseismology of solar-like stars. Special attention is drawn to the subject of statistical inference, from both the competing Bayesian and frequentist perspectives. The chapter ends with a description of the implementation of a pipeline for mode parameter analysis of Kepler data. In the course of these first two chapters, reference is made to a series of articles led by the author (or otherwise having greatly benefited from his contribution) that can be found in Appendices A to E. Chapter 3 then goes on to present a series of additional published results.
Baryons, universe and everything in between
NASA Astrophysics Data System (ADS)
Ho, Shirley
2008-06-01
This thesis is a tour of topics in cosmology, unified, despite their diversity, by the pursuit of a better understanding of our Universe. The first chapter measures the integrated Sachs-Wolfe (ISW) effect as a function of redshift, utilizing a large range of large-scale structure observations and the cosmic microwave background (CMB). We combine the ISW likelihood function with weak lensing of the CMB (described in Chapter 2) and the CMB power spectrum to constrain the equation of state of dark energy and the curvature of the Universe. The second chapter investigates the correlation of gravitational lensing of the CMB with several tracers of large-scale structure, and we find evidence for a positive cross-correlation at the 2.5σ level. The third chapter explores the statistical properties of Luminous Red Galaxies (LRGs) in a sample of X-ray selected galaxy clusters, including the halo occupation distribution, the degree to which the satellite distribution of LRGs is Poissonian, and the radial profile of LRGs within clusters. The fourth chapter explores the idea of using the multiplicity of galaxies to understand their merging timescales. Using the multiplicity function of LRGs from Chapter 3, we find, for example, that massive halos (~10^14 M⊙) at low redshift have been bombarded by several ~10^13 M⊙ halos throughout their history, and that these accreted LRGs merge on relatively short timescales (~2 Gyr). The fifth chapter presents a new method for generating a template for the kinematic Sunyaev-Zel'dovich (kSZ) effect that can be used to detect the missing baryons. We assessed the feasibility of the method by investigating combinations of differing galaxy surveys and CMB observations, and find that we can detect the gas-momentum kSZ correlation, and thus the ionized gas, at a significant signal-to-noise level.
NASA Astrophysics Data System (ADS)
Grandner, Jessica Marie
Computational methods were used to determine the mechanisms and selectivities of organometallic-catalyzed reactions. The first half of the dissertation focuses on the study of metathesis catalysts in collaboration with the Grubbs group at Caltech. Chapter 1 describes studies of the decomposition modes of several ruthenium-based metathesis catalysts. These studies were performed to better understand the decomposition of such catalysts in order to prevent decomposition (Chapter 1.2) or utilize decomposed catalysts for alternative reactions (Chapter 1.1). Chapter 2.1 describes the computational investigation of the origins of stereoretentive metathesis with ruthenium-based metathesis catalysts. These findings were then used to computationally design E-selective metathesis catalysts (Chapter 2.2). While the first half of the dissertation centers on ruthenium catalysts, the second half pertains to iron-catalyzed reactions, in particular those catalyzed by P450 enzymes. Chapter 3 concentrates on the stereo- and chemo-selectivity of P450-catalyzed C-H hydroxylations. By combining multiple computational methods, the inherent activity of the iron-oxo catalyst and the influence of the active site on such reactions were illuminated. These discoveries allow for the engineering of new substrates and mutant enzymes for tailored C-H hydroxylation. While the mechanism of C-H hydroxylation catalyzed by P450 enzymes has been well studied, there are several P450-catalyzed transformations for which the mechanism is unknown. Chapter 4 describes the use of computations to determine the mechanisms of complex, multi-step reactions catalyzed by P450s. The determination of these mechanisms elucidates how these enzymes react with various functional groups and substrate architectures and allows for a better understanding of how drug-like compounds may be broken down by human P450s.
The Kelvin-Helmholtz instability of boundary-layer plasmas in the kinetic regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinbusch, Benedikt, E-mail: b.steinbusch@fz-juelich.de; Gibbon, Paul, E-mail: p.gibbon@fz-juelich.de; Department of Mathematics, Centre for Mathematical Plasma Astrophysics, Katholieke Universiteit Leuven
2016-05-15
The dynamics of the Kelvin-Helmholtz instability are investigated in the kinetic, high-frequency regime with a novel, two-dimensional, mesh-free tree code. In contrast to earlier studies, which focused on specially prepared equilibrium configurations in order to compare with fluid theory, a more naturally occurring plasma-vacuum boundary layer is considered here, with relevance to both space plasmas and linear plasma devices. Quantitative comparisons of the linear phase are made between the fluid and kinetic models. After establishing the validity of this technique via comparison to linear theory and conventional particle-in-cell simulation for classical benchmark problems, a quantitative analysis of the more complex magnetized plasma-vacuum layer is presented and discussed. It is found that in this scenario the finite Larmor orbits of the ions result in significant departures from the effective shear velocity and width underlying the instability growth, leading to generally slower development and stronger nonlinear coupling between fast-growing short-wavelength modes and longer wavelengths.
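For context, the classical hydrodynamic vortex-sheet growth rate often used as a linear-theory reference point can be evaluated directly. This is the ideal incompressible, unmagnetized formula, not the kinetic model studied in the paper:

```python
import math

def kh_growth_rate(k, delta_u, rho1, rho2):
    """Linear growth rate of the ideal incompressible vortex-sheet
    Kelvin-Helmholtz instability (no gravity, no surface tension):
        gamma = k * |dU| * sqrt(rho1*rho2) / (rho1 + rho2)
    k is the perturbation wavenumber, delta_u the velocity jump."""
    return k * abs(delta_u) * math.sqrt(rho1 * rho2) / (rho1 + rho2)

# For equal densities the rate reduces to k*|dU|/2:
print(kh_growth_rate(2.0, 1.0, 1.0, 1.0))   # → 1.0
```

The formula's linear scaling with k (fastest growth at the shortest wavelengths) is what a finite shear-layer width, and kinetically the finite ion Larmor radius, regularizes.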
Code of Federal Regulations, 2014 CFR
2014-04-01
... be identified from the universe of all NMS securities as defined in § 242.600 of this chapter that... identified from the universe of all NMS securities as defined in § 242.600 of this chapter that are common...
Code of Federal Regulations, 2011 CFR
2011-04-01
... be identified from the universe of all NMS securities as defined in § 242.600 of this chapter that... identified from the universe of all NMS securities as defined in § 242.600 of this chapter that are common...
Code of Federal Regulations, 2013 CFR
2013-04-01
... be identified from the universe of all NMS securities as defined in § 242.600 of this chapter that... identified from the universe of all NMS securities as defined in § 242.600 of this chapter that are common...
Code of Federal Regulations, 2012 CFR
2012-04-01
... be identified from the universe of all NMS securities as defined in § 242.600 of this chapter that... identified from the universe of all NMS securities as defined in § 242.600 of this chapter that are common...
Code of Federal Regulations, 2010 CFR
2010-04-01
... be identified from the universe of all NMS securities as defined in § 242.600 of this chapter that... identified from the universe of all NMS securities as defined in § 242.600 of this chapter that are common...
Fostering Sustainable Behavior: An Introduction to Community-Based Social Marketing.
ERIC Educational Resources Information Center
McKenzie-Mohr, Doug; Smith, William
This book discusses incorporating community-based social marketing techniques programs. The first chapter explains why programs that rely heavily on conventional methods to promote behavior change are often ineffective, and introduces community-based social marketing as an attractive alternative for the delivery of programs. Chapter 2 describes…
Transition Literature Review: Educational, Employment, and Independent Living Outcomes. Volume 3.
ERIC Educational Resources Information Center
Harnisch, Delwyn L.; Fisher, Adrian T.
This review focuses on both published and unpublished literature in the areas of education, employment, and independent living outcomes across 13 handicapping conditions. Preliminary chapters describe the database system used to manage the literature identified, and discuss research methods in transition literature. Subsequent chapters then review…
Political Education for Teenagers: Aims, Content and Methods.
ERIC Educational Resources Information Center
Langeveld, Willem
The problems, practices, objectives, and desirability of political education in the secondary school social studies curriculum is evaluated. The author suggests that political education should be a compulsory subject in junior and senior high schools. The document is presented in eight chapters. Chapter I explores the relationship between…
How Children Learn Mathematics, Teaching Implications of Piaget's Research.
ERIC Educational Resources Information Center
Copeland, Richard W.
Included are the standard topics presented in the undergraduate and/or graduate course on methods of teaching mathematics in elementary education. Chapter 1 describes the historical development of learning theories, including Piaget's. Chapter 2 contains a biographical sketch of Piaget and an explanation of his theory of cognitive development.…
Conducting On-Farm Animal Research: Procedures & Economic Analysis.
ERIC Educational Resources Information Center
Amir, Pervaiz; Knipscheer, Hendrik C.
This book is intended to give animal scientists elementary tools to perform on-farm livestock analysis and to provide crop-oriented farming systems researchers with methods for conducting animal research. Chapter 1 describes farming systems research as a systems approach to on-farm animal research. Chapter 2 outlines some important…
Soft Paths: How To Enjoy the Wilderness without Harming It.
ERIC Educational Resources Information Center
Hampton, Bruce; Cole, David
This outdoor-education book describes methods of hiking and camping that minimize the human impact upon the natural environment. Each chapter offers the rationale behind recommended practices, based on the best scientific research on recreational impact. The first chapter, "The Case for Minimum Impact," describes increasing tourist use…
Opportunities in Training & Development Careers. VGM Opportunities Series.
ERIC Educational Resources Information Center
Gordon, Edward E.; Petrini, Catherine M.; Campagna, Ann P.
This volume is a resource for those who want to explore opportunities in training and development careers. Chapter 1 covers the evolution of training and the future of education at work. Chapter 2 considers trainers' roles; program design and development; needs assessment; development of program objectives; program content, training methods,…
47 CFR 90.165 - Procedures for mutually exclusive applications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... grant, pursuant to § 1.935 of this chapter. (1) Selection methods. In selecting the application to grant, the Commission may use competitive bidding, random selection, or comparative hearings, depending on... chapter, either before or after employing selection procedures. (3) Type of filing group used. Except as...
17 CFR 45.4 - Swap data reporting: continuation data.
Code of Federal Regulations, 2014 CFR
2014-04-01
... swap data repository as set forth in this section. This obligation commences on the applicable... swap data set forth in part 43 of this chapter; and, where applicable, swap dealers, major swap... traders set forth in parts 17 and 18 of this chapter. (a) Continuation data reporting method. For each...
17 CFR 45.4 - Swap data reporting: continuation data.
Code of Federal Regulations, 2013 CFR
2013-04-01
... swap data repository as set forth in this section. This obligation commences on the applicable... swap data set forth in part 43 of this chapter; and, where applicable, swap dealers, major swap... traders set forth in parts 17 and 18 of this chapter. (a) Continuation data reporting method. For each...
17 CFR 45.4 - Swap data reporting: continuation data.
Code of Federal Regulations, 2012 CFR
2012-04-01
... swap data repository as set forth in this section. This obligation commences on the applicable... swap data set forth in part 43 of this chapter; and, where applicable, swap dealers, major swap... traders set forth in parts 17 and 18 of this chapter. (a) Continuation data reporting method. For each...
18 CFR 385.2003 - Specifications (Rule 2003).
Code of Federal Regulations, 2010 CFR
2010-04-01
... paper. (c) Filing via the Internet. (1) All documents filed under this Chapter may be filed via the Internet except those listed by the Secretary. Except as otherwise specifically provided in this Chapter, filing via the Internet is in lieu of other methods of filing. Internet filings must be made in...
Step by Step: A Guide to Stepfamily Living.
ERIC Educational Resources Information Center
Martin, Don; Martin, Maggie
This book describes the difficulties from past marriages that people bring into the stepfamily, and explores methods of preparing for remarriage, and for involving children effectively in these new relationships. The first chapter focuses on understanding the stepfamily. The second chapter provides information about coping with divorce, and the…
BOOK REVIEW Handbook of Physics in Medicine and Biology Handbook of Physics in Medicine and Biology
NASA Astrophysics Data System (ADS)
Tabakov, Slavik
2010-11-01
This is a multi-author handbook (66 authors) aiming to describe various applications of physics to medicine and biology, from anatomy and physiology to medical equipment. This unusual reference book has 44 chapters organized in seven sections: 1. Anatomical physics; 2. Physics of perception; 3. Biomechanics; 4. Electrical physics; 5. Diagnostic physics; 6. Physics of accessory medicine; 7. Physics of bioengineering. Each chapter has separate page numbering, which is inconvenient but understandable with the number of authors. Similarly there is some variation in the emphasis of chapters: for some the emphasis is more technical and for others clinical. Each chapter has a separate list of references. The handbook includes hundreds of diagrams, images and tables, making it a useful tool for both medical physicists/engineers and other medical/biology specialists. The first section (about 40 pages) includes five chapters on physics of the cell membrane; protein signaling; cell biology and biophysics of the cell membrane; cellular thermodynamics; action potential transmission and volume conduction. The physics of these is well explained and illustrated with clear diagrams and formulae, so it could be a suitable reference for physicists/engineers. The chapters on cellular thermodynamics and action potential transmission have a very good balance of technical/clinical content. The second section (about 85 pages) includes six chapters on medical decision making; senses; somatic senses: touch and pain; hearing; vision; electroreception. Again these are well illustrated and a suitable reference for physicists/engineers. The chapter on hearing stands out with good balance and treatment of material, but some other chapters contain less physics and are close to typical physiological explanations. One could query the inclusion of the chapter on medical decision making, which also needs more detail. 
The third section (about 80 pages) includes eight chapters on biomechanics; artificial muscle; cardiovascular system; control of cardiac output and arterial blood pressure regulation; fluid dynamics of the cardiovascular system; fluid dynamics; modeling and simulation of the cardiovascular system to determine work using bond graphs; anatomy and physics of respiration. The diagrams and data in this section could be used as reference material, but some chapters (such as that on the cardiovascular system) again take the form of physiological explanations. The best chapters in this section are on fluid dynamics and modeling. The fourth section (about 30 pages) includes two chapters on electrodes and recording of bioelectrical signals: theory and practice. Both chapters deal with electrodes and are well written and illustrated reference materials. This section could have been larger but the equipment associated with bioelectrical signals (such as ECG and EEG) is described in the next section. The fifth section (about 210 pages) includes 19 chapters on medical sensing and imaging; electrocardiogram: electrical information retrieval and diagnostics from the beating heart; electroencephalography: basic concepts and brain applications; bioelectric impedance analysis; x-ray and computed tomography; confocal microscopy; magnetic resonance imaging; positron emission tomography; in vivo fluorescence imaging and spectroscopy; optical coherence tomography; ultrasonic imaging; near-field imaging; atomic force microscopy; scanning ion conductance microscopy; quantitative thermographic imaging; intracoronary thermography; schlieren imaging: optical techniques to visualize thermal interactions with biological tissues; helium ion microscopy; electron microscopy: SEM/TEM. This is by far the largest section covering various methods and medical equipment and the variation in emphasis/quality is more prominent. 
The chapters on ECG and EEG are again more physiological with less physics, but the chapter on bioelectric impedance analysis is a good interdisciplinary article. The imaging chapters also vary in style and quality: while those on MRI and ultrasound provide a suitable introduction to the methods, the chapters on x-ray and PET need more detail. However this section includes some methods/equipment rarely featured in medical physics/engineering books (such as OCT or HIM). From this point of view the last eight chapters in the section will be a very useful reference for various specialists. The sixth section (about 30 pages) includes three chapters on lab-on-a-chip; the biophysics of DNA microarrays; nuclear medicine. While the first two could provide an interesting reference, the chapter on nuclear medicine needs much more detail. The last (seventh) section (15 pages) has only one chapter on biophysics of regenerative medicine, which is a good introduction, emphasizing biochemical factors important for improving/replacing tissues or tissue structures. The book ends with an index covering about 1400 terms. The handbook will be useful for the preparation of teaching materials and for undergraduate students, but should be complemented with more detailed/specific reference materials such as the Encyclopedia of Medical Devices and Instrumentation, the Encyclopedia of Medical Physics Emitel, or others. Parts of the handbook would be less suitable for more demanding readers (such as trainee medical physicists or radiologists, for example). In conclusion, the Handbook of Physics in Medicine and Biology includes materials that are rarely combined together, which strengthens its interdisciplinary approach and makes it an additional reference for a departmental library.
Planning and simulation of medical robot tasks.
Raczkowsky, J; Bohner, P; Burghart, C; Grabowski, H
1998-01-01
Complex techniques for planning and performing surgery are revolutionizing medical interventions. In former times, preoperative planning of interventions usually took place in the surgeon's mind. Today's computer techniques allow the surgeon to evaluate various operation methods for a patient and to visualize them three-dimensionally. The use of computer-assisted surgical planning helps to achieve better treatment results and supports the surgeon before and during the surgical intervention. In this paper we present our planning and simulation system for operations in maxillo-facial surgery. All phases of a surgical intervention are supported. Chapter 1 gives a description of the medical motivation for our planning system and its environment. In Chapter 2 the basic components are presented. The planning system is depicted in Chapter 3, and a simulation of a robot-assisted surgery can be found in Chapter 4. Chapter 5 concludes the paper and gives a survey of our future work.
Clinical guide to periodontology: part 3. Multidisciplinary integrated treatment.
Palmer, R M; Ide, M; Floyd, P D
2014-05-01
The establishment of periodontal health should be a primary aim in all treatment plans. The methods by which this can be achieved have been dealt with in previous chapters, but there are a number of situations where integration of these treatment methods with other dental disciplines needs to be clarified. To simplify matters this chapter will consider periodontal implications in three main areas: treatment of drifted anterior teeth, pre-restorative procedures and replacement of missing teeth.
NASA Astrophysics Data System (ADS)
Rowell, Eric Martin
The primary goal of this research is to advance methods for deriving fine-grained, scalable, wildland fuels attributes in three dimensions using terrestrial and airborne laser scanning technology. It is fundamentally a remote sensing research endeavor applied to the problem of fuels characterization. Advancements in laser scanning are beginning to have significant impacts on a range of modeling frameworks in fire research, especially those utilizing 3-dimensional data and benefiting from efficient data scaling. The pairing of laser scanning and fire modeling is enabling advances in understanding how fuels variability modulates fire behavior and effects. This dissertation details the development of methods and techniques to characterize and quantify surface fuelbeds using both terrestrial and airborne laser scanning. The primary study site is Eglin Air Force Base, Florida, USA, which provides a range of fuel types and conditions in a fire-adapted landscape, along with the multi-disciplinary expertise, logistical support, and prescribed fire necessary for detailed characterization of fire as a physical process. Chapter 1 provides a research overview and discusses the state of fuels science and the related needs for highly resolved fuels data in the southeastern United States. Chapter 2 describes the use of terrestrial laser scanning for sampling fuels at multiple scales and provides analysis of the spatial accuracy of fuelbed models in 3-D. Chapter 3 describes the development of a voxel-based occupied-volume method for predicting fuel mass. Results are used to inform prediction of landscape-scale fuel load using airborne laser scanning metrics as well as to predict post-fire fuel consumption. Chapter 4 introduces a novel fuel simulation approach which produces spatially explicit, statistically defensible estimates of fuel properties and demonstrates a pathway for resampling observed data.
This method also can be directly compared to terrestrial laser scanning data to assess how energy interception of the laser pulse affects characterization of the fuelbed. Chapter 5 discusses the contribution of this work to fire science and describes ongoing and future research derived from this work. Chapters 2 and 4 have been published in International Journal of Wildland Fire and Canadian Journal of Remote Sensing, respectively, and Chapter 3 is in preparation for publication.
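The occupied-volume idea of Chapter 3 can be caricatured as voxel counting on a point cloud. The sketch below uses a hypothetical voxel size and synthetic points; a mass estimate would follow by multiplying the occupied volume by an assumed bulk density, not shown here:

```python
import numpy as np

def occupied_volume(points, voxel=0.1):
    """Occupied volume (m^3) of a point cloud: count the distinct voxels
    containing at least one laser return, times the voxel volume."""
    idx = np.floor(points / voxel).astype(np.int64)
    n_occupied = len({tuple(v) for v in idx})
    return n_occupied * voxel ** 3

# Synthetic TLS returns: 1000 points confined to two separated 0.1 m cells.
rng = np.random.default_rng(1)
pts = np.vstack([rng.uniform(0.0, 0.1, size=(500, 3)),
                 rng.uniform(1.0, 1.1, size=(500, 3))])
vol = occupied_volume(pts, voxel=0.1)
print(round(vol, 6))   # → 0.002 (two occupied 0.1 m voxels)
```

The choice of voxel size trades off occlusion artifacts against over-smoothing, which is why the dissertation evaluates spatial accuracy at multiple scales.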
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driskell, Jeremy Daniel
2006-08-09
Immunoassays have been utilized for the detection of biological analytes for several decades. Many formats and detection strategies have been explored, each having unique advantages and disadvantages. More recently, surface-enhanced Raman scattering (SERS) has been introduced as a readout method for immunoassays and has shown great potential to meet many key analytical figures of merit. This technology is in its infancy, and this dissertation explores the diversity of this method as well as the mechanism responsible for surface enhancement. Approaches to reduce assay times are also investigated. Implementing the knowledge gained from these studies will lead to a more sensitive immunoassay requiring less time than its predecessors. This dissertation is organized into six sections. The first section includes a literature review of the previous work that led to this dissertation. A general overview of the different approaches to immunoassays is given, outlining the strengths and weaknesses of each. Included is a detailed review of binding kinetics, which is central to decreasing assay times. Next, the theoretical underpinnings of SERS are reviewed at the current level of understanding. Past work has argued that the surface plasmon resonance (SPR) of the enhancing substrate influences the SERS signal; therefore, the SPR of the extrinsic Raman labels (ERLs) utilized in our SERS-based immunoassay is discussed. Four original research chapters follow the introduction, each presented as a separate manuscript. Chapter 2 modifies a SERS-based immunoassay previously developed in our group, extending it to the low-level detection of viral pathogens and demonstrating its versatility in terms of analyte type. Chapter 3 investigates the influence of ERL size, material composition, and separation distance between the ERLs and capture substrate on the SERS signal.
This chapter links SPR with SERS enhancement factors and is consistent with many of the results from theoretical treatments of SPR and SERS. Chapter 4 introduces a novel method of reducing sample incubation time via capture substrate rotation. Moreover, this work led to a method of virus quantification without the use of standards. Chapter 5 extends the methodology developed in Chapter 4 to both the antigen and ERL labeling step to perform assays with improved analytical performance in less time than can be accomplished in diffusion controlled assays. This dissertation concludes with a general summary and speculates on the future of this exciting approach to carrying out immunoassays.
Open quantum systems and error correction
NASA Astrophysics Data System (ADS)
Shabani Barzegar, Alireza
Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracy in control forces. Engineering methods to combat errors in quantum devices is therefore in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods, where a realistic formulation is one that incorporates experimental challenges. This thesis is presented in two parts: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory; it is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely-positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction part comprises chapters 4 through 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version).
Chapter 6 is devoted to a theory of quantum error correction (QEC) that applies to any linear map, in particular maps that are not completely positive (CP). This complements the second chapter and is published in [Shabani and Lidar, 2007]. In chapter 7, the last before the conclusion, a formulation for evaluating the performance of quantum error correcting codes for a general error model is presented, also published in [Shabani, 2005]. In this formulation, the correlation between errors is quantified by a Hamiltonian description of the noise process. In particular, we consider Calderbank-Shor-Steane codes and observe better performance in the presence of correlated errors, depending on the timing of the error recovery.
New Methods for Tracking Galaxy and Black Hole Evolution Using Post-Starburst Galaxies
NASA Astrophysics Data System (ADS)
French, Katheryn Decker
2017-08-01
Galaxies in transition from star formation to quiescence are a natural laboratory for exploring the processes responsible for this evolution. Using a sample of post-starburst galaxies identified as having experienced a recent burst of star formation that has now ended, I explore both the fate of the molecular gas that drives star formation and the increased rate of stars disrupted by the central supermassive black hole. Chapter 1 provides an introduction to galaxy evolution through the post-starburst phase and to tidal disruption events (TDEs), which surprisingly favor post-starburst galaxy hosts. In Chapter 2, I present a survey of the molecular gas properties of 32 post-starburst galaxies traced by CO (1-0) and CO (2-1). In order to accurately place galaxies on an evolutionary sequence, we must select likely progenitors and descendants. We do this by identifying galaxies with similar starburst properties, such as the amount of mass produced in the burst and the burst duration. In Chapter 3, I describe a method to determine the starburst properties and the time elapsed since the starburst ended, and discuss trends in the molecular gas properties of these galaxies with time. In Chapter 4, I present the results of follow-up observations with ALMA of HCN (1-0) and HCO+ (1-0) in two post-starburst galaxies. CO (1-0) is detected in over half (17/32) of the post-starburst sample, and the molecular gas mass traced by CO declines on ˜100 Myr timescales after the starburst has ended. HCN (1-0) is not detected in either galaxy targeted, indicating the post-starbursts are now quiescent because of a lack of the denser molecular gas traced by HCN. In Chapter 5, I quantify the increase in the TDE rate in quiescent galaxies with strong Balmer absorption to be 30-200 times higher than in normal galaxies. Using the stellar population fitting method from Chapter 3, I examine possible reasons for the increased TDE rate in post-starburst galaxies in Chapter 6.
The TDE rate could be boosted due to a binary supermassive black hole coalescing after a major merger or an increased density of stars or gas remaining near the nucleus after the starburst has ended. In Chapter 7, I present a summary of the findings of this dissertation and an outlook for future work.
Computational Nuclear Physics and Post Hartree-Fock Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lietz, Justin; Sam, Novario; Hjorth-Jensen, M.
We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory, and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10, and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10, and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, thereby allowing the reader to start writing her/his own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.
Estimating abundance: Chapter 27
Royle, J. Andrew
2016-01-01
This chapter provides a non-technical overview of ‘closed population capture–recapture’ models, a class of well-established models that are widely applied in ecology, together with related approaches such as removal sampling, covariate models, and distance sampling. These methods are regularly adopted in studies of reptiles to estimate abundance from counts of marked individuals while accounting for imperfect detection. The chapter describes some classic closed population models for estimating abundance, along with some recent extensions that provide a spatial context for the estimation of abundance, and therefore density. Finally, the chapter suggests some software for use in data analysis, such as the Windows-based program MARK, and provides an example of estimating abundance and density of reptiles using an artificial cover object survey of Slow Worms (Anguis fragilis).
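The flavor of the closed-population estimators this chapter surveys can be conveyed with a minimal sketch. The Chapman (bias-corrected Lincoln-Petersen) two-sample estimator below is an assumed illustrative example, not the chapter's own code, and the counts in the usage example are hypothetical:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator.

    n1: number of animals captured and marked in the first sample
    n2: number of animals captured in the second sample
    m2: number of marked animals recaptured in the second sample
    Returns the estimated population size N-hat.
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical survey: 50 marked, 60 caught later, 12 recaptures.
N_hat = chapman_estimate(50, 60, 12)
```

The +1 terms keep the estimator finite even when no marked animals are recaptured, which is why this variant is often preferred over the raw Lincoln-Petersen ratio n1*n2/m2 for small samples. Real analyses (e.g., in program MARK) would additionally model detection probability rather than assume it equal across animals and occasions.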
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popov, V.N.
This book is the first of its kind to be published in the Soviet Union. It consists of two parts. The first part contains three chapters: the first chapter discusses the sources of radioactive elements contained in water; the second chapter deals with the various types of natural radioactive water; the third chapter is devoted to hydrogeological conditions which lead to the formation of uranium deposits. The second part consists of six chapters dealing with radiohydrogeological methods of investigation. The book contains both theoretical material and a large number of experimental data, selected by the authors on the strength of their many years of experience. It is a drawback of this book that the text was not sufficiently well revised and corrected. (TCO)
Current Status of the Polyamine Research Field
Pegg, Anthony E.; Casero, Robert A.
2013-01-01
This chapter provides an overview of the polyamine field and introduces the 32 other chapters that make up this volume. These chapters provide a wide range of methods, advice, and background relevant to studies of the function of polyamines, the regulation of their content, their role in disease, and the therapeutic potential of drugs targeting polyamine content and function. The methodology provided in this new volume will enable laboratories already working in this area to expand their experimental techniques and facilitate the entry of additional workers into this rapidly expanding field. PMID:21318864
Development of a Carbon Nanotube-Based Micro-CT and its Applications in Preclinical Research
NASA Astrophysics Data System (ADS)
Burk, Laurel May
Due to the dependence of researchers on mouse models for the study of human disease, diagnostic tools available in the clinic must be modified for use on these much smaller subjects. In addition to requiring high spatial resolution, cardiac and lung imaging of mice presents extreme temporal challenges, and physiological gating methods must be developed in order to image these organs without motion blur. Commercially available micro-CT imaging devices are equipped with conventional thermionic x-ray sources, have a limited temporal response, and are not ideal for in vivo small animal studies. Recent development of a field-emission x-ray source with a carbon nanotube (CNT) cathode in our lab presented the opportunity to create a micro-CT device well-suited for in vivo lung and cardiac imaging of murine models of human disease. The goal of this thesis work was to present such a device, to develop and refine protocols that allow high-resolution in vivo imaging of free-breathing mice, and to demonstrate the use of this new imaging tool for the study of many different disease models. In Chapter 1, I provide background information about x-rays, CT imaging, and small animal micro-CT. In Chapter 2, CNT-based x-ray sources are explained, and details of a micro-focus x-ray tube specialized for micro-CT imaging are presented. In Chapter 3, the first and second generation CNT micro-CT devices are characterized, and successful respiratory- and cardiac-gated live animal imaging of normal, wild-type mice is achieved. In Chapter 4, respiratory-gated imaging of mouse disease models is demonstrated, limitations of the method are discussed, and a new contactless respiration sensor is presented which addresses many of these limitations. In Chapter 5, cardiac-gated imaging of disease models is demonstrated, including studies of aortic calcification, left ventricular hypertrophy, and myocardial infarction.
In Chapter 6, several methods for image and system improvement are explored, and radiation therapy-related micro-CT imaging is presented. Finally, in Chapter 7 I discuss future directions for this research and for the CNT micro-CT.
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Koppen, Sandra V.; Ely, Jay J.; Williams, Reuben A.; Smith, Laura J.; Salud, Maria Theresa P.
2004-01-01
This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified, and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model-based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of the formal-methods analysis technique of model checking, which was used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.
NASA Astrophysics Data System (ADS)
Moomaw, Ronald L.
According to its abstract, this book attempts ‘an assessment of various water conservation measures aimed at reducing residential water usage.’ Its intent is to develop a research program whose ‘ultimate goal is to engender a conservation ethic among water users and managers and develop a predictable array of conservation methodologies. …’ Professor Flack indeed has presented an excellent assessment of conservation methodologies, but I believe that the proposed research program is too limited. Following a brief introductory chapter, chapter II presents an extensive review of the water conservation literature published in the 1970s and earlier. It and chapter III, which describes Flack's systematic comparison of the technical, economic, and political aspects of each conservation methodology, are the heart of the book. Chapter IV is a brief discussion and analysis of conservation programs (with examples) that a water utility might adopt. Chapter V is essentially a pilot study of methods of assessing political and social feasibility. Finally, a set of recommendations is presented in chapter VI. All in all, this book is a nice blend of literature review and original research that deals with an important issue.
Nielsen, Henrik
2017-01-01
Many computational methods are available for predicting protein sorting in bacteria. When comparing them, it is important to know that they can be grouped into three fundamentally different approaches: signal-based, global-property-based, and homology-based prediction. In this chapter, the strengths and drawbacks of each of these approaches are described through many examples of methods that predict secretion, integration into membranes, or subcellular location in general. The aim of this chapter is to provide a user-level introduction to the field with a minimum of computational theory.
A Mixed Method Case Study on Learner Engagement in e-Learning Professional Training
ERIC Educational Resources Information Center
Zhao, Jane Yanfeng
2014-01-01
Previous research showed that learners' reluctance in participating in e-Learning training is a major obstacle in achieving training objectives. This study focused on learners' e-Learning engagement in professional training in the chapter of the American Society for Training and Development (ASTD). The participants were 21 chapter members.…
From Serrano to Serrano. Report No. FA.
ERIC Educational Resources Information Center
Education Commission of the States, Denver, CO. Dept. of Research and Information Services.
This report examines various school finance issues raised by the California case of Serrano v. Priest. Chapter 1 focuses on the issue of local control; it discusses four methods of providing state aid to education in terms of how they affect local control of schools. Chapter 2 analyzes different remedies for inequitable distribution of funds and…
Developing Academic Skills through Multigenre Autobiography
ERIC Educational Resources Information Center
Bickens, Sarah; Bittman, Franny; Connor, David J.
2013-01-01
This article provides an overview of the Autobiography Project, listing the topics of the ten chapters and the targeted skills that accompany them. The authors discuss the purposes of each chapter and describe the methods incorporated to promote the four broad components of literacy. This unit also addresses almost all components of the Common…
The Writing Laboratory: Organization, Management, and Methods.
ERIC Educational Resources Information Center
Steward, Joyce S.; Croft, Mary K.
The four chapters of this book move from the history, philosophy, and approaches that writing laboratories encompass to a look at the many facets of their organization before treating in detail the actual teaching process and the practical elements of writing laboratory management. Chapter one notes the growth of writing labs and discusses…
Newton Methods for Large Scale Problems in Machine Learning
ERIC Educational Resources Information Center
Hansen, Samantha Leigh
2014-01-01
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
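As a hedged illustration of the classical full-batch Newton iteration that underlies such methods (a minimal sketch only; the thesis itself concerns large-scale and machine-learning variants, and the quadratic test function below is hypothetical):

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Minimize a smooth function given callables for its gradient and Hessian.

    Each iteration solves the Newton system H d = -g and steps x <- x + d.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break  # gradient is (numerically) zero: stationary point found
        x = x - np.linalg.solve(hess(x), g)
    return x

# Example: minimize f(x) = (x0 - 1)^2 + 10*(x1 + 2)^2, minimum at (1, -2).
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
hess = lambda x: np.diag([2.0, 20.0])
x_star = newton_minimize(grad, hess, [0.0, 0.0])
```

For a quadratic objective the iteration converges in a single step; the large-scale methods the abstract refers to replace the exact Hessian solve with subsampled or iterative (e.g., conjugate-gradient) approximations.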
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korte, Andrew R
This thesis presents efforts to improve the methodology of matrix-assisted laser desorption ionization-mass spectrometry imaging (MALDI-MSI) as a method for analysis of metabolites from plant tissue samples. The first chapter consists of a general introduction to the technique of MALDI-MSI, and the sixth and final chapter provides a brief summary and an outlook on future work.
Schools As Post-Disaster Shelters: Planning and Management Guidelines for Districts and Sites.
ERIC Educational Resources Information Center
California State Office of Emergency Services, Sacramento.
This guidebook outlines a method for preparing school facilities and personnel in the event that schools are needed for disaster shelters. It serves as a blueprint for planning and preparedness. Chapter 1 provides descriptions of actual incidents in which California schools served as emergency shelters. Chapter 2 describes schools' legal…
Gene sequence analyses and other DNA-based methods for yeast species recognition
USDA-ARS?s Scientific Manuscript database
DNA sequence analyses, as well as other DNA-based methodologies, have transformed the way in which yeasts are identified. The focus of this chapter will be on the resolution of species using various types of DNA comparisons. In other chapters in this book, Rozpedowska, Piškur and Wolfe discuss mul...
Custodial Staffing Guidelines for Educational Facilities, Second Edition.
ERIC Educational Resources Information Center
APPA: Association of Higher Education Facilities Officers, Alexandria, VA.
The 20 chapters of this guide to custodial staffing in educational facilities are grouped into five parts addressing: (1) staffing, (2) evaluation, (3) special considerations, (4) staff development tools, and (5) case studies. The five chapters on staffing are all by Jack C. Dudley and are titled: "General Methods"; "The Mathematics of Change";…
47 CFR 22.131 - Procedures for mutually exclusive applications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... excluded by that grant, pursuant to § 1.945 of this chapter. (1) Selection methods. In selecting the... under § 1.945 of this chapter, either before or after employing selection procedures. (3) Type of filing... Commission may attempt to resolve the mutual exclusivity by facilitating a settlement between the applicants...
Doing Quantitative Research in Education with SPSS
ERIC Educational Resources Information Center
Muijs, Daniel
2004-01-01
This book looks at quantitative research methods in education. The book is structured to start with chapters on conceptual issues and designing quantitative research studies before going on to data analysis. While each chapter can be studied separately, a better understanding will be reached by reading the book sequentially. This book is intended…
Synthesis of new nanocrystal materials
NASA Astrophysics Data System (ADS)
Hassan, Yasser Hassan Abd El-Fattah
Colloidal semiconductor nanocrystals (NCs) have sparked great excitement in the scientific community in the last two decades. NCs are useful for both fundamental research and technical applications in various fields owing to their size- and shape-dependent properties and their potentially inexpensive and excellent chemical processability. These NCs are versatile fluorescence probes with unique optical properties, including tunable luminescence, high extinction coefficients, broad absorption with narrow photoluminescence, and photobleaching resistance. In the past few years, much attention has been given to nanotechnology that uses these materials as building blocks to design light harvesting assemblies. For instance, pioneering applications of NCs include light-emitting diodes, lasers, and photovoltaic devices. Synthesis of colloidally stable semiconductor NCs by wet-chemical pyrolysis of organometallic and chalcogenide precursors, known as the hot-injection approach, is the leading preparation method in terms of producing high-quality, monodisperse NCs. Advancement in the synthesis of these artificial materials is the core step toward their application in a broad range of technologies. This dissertation focuses on exploring various innovative and novel synthetic methods for different types of colloidal nanocrystals, both inorganic semiconductor NCs, also known as quantum dots (QDs), and organic-inorganic metal halide perovskite materials, known as perovskites. The work presented in this thesis pursues a fundamental understanding of the synthesis, material properties, photophysics, and spectroscopy of these nanostructured semiconductor materials. This thesis contains six chapters and conclusions. Chapters 1-3 introduce the theories and background of the materials synthesized in the thesis.
Chapter 4 demonstrates our synthesis of colloidal linker-free TiO2/CdSe nanorod (NR) heterostructures, with CdSe QDs grown in the presence of TiO2 NRs using a seeded-growth colloidal injection approach. Chapter 5 explores a novel approach of directly synthesizing CdSe NCs with electroactive ligands. The last chapter focuses on a new class of perovskites. I describe my discovery of a simple bottom-up method to synthesize colloidally stable methylammonium lead halide perovskite nanocrystals seeded from high-quality PbX2 NCs with a pre-targeted size. This chapter reports advances in the preparation of both of these materials (PbX2 and lead halide perovskite NCs).
An investigation of the vortex method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pryor, Jr., Duaine Wright
The vortex method is a numerical scheme for solving the vorticity transport equation. Chorin introduced modern vortex methods. The vortex method is a Lagrangian, grid-free method which has less intrinsic diffusion than many grid schemes. It is adaptive in the sense that elements are needed only where the vorticity is non-zero. Our description of vortex methods begins with the point vortex method of Rosenhead for two-dimensional inviscid flow and builds upon it to eventually cover the case of three-dimensional slightly viscous flow with boundaries. This section gives an introduction to the fundamentals of the vortex method, in order to give a basic impression of the previous work and its line of development, as well as to develop some notation and concepts which will be used later. The purpose here is not to give a full review of vortex methods or the contributions made by all the researchers in the field; please refer to the excellent review papers in Sethian and Gustafson, whose chapters 1 (Sethian), 2 (Hald), 3 (Sethian), and 8 (Chorin) provide a solid introduction to vortex methods, including convergence theory, application in two dimensions, and connections to statistical mechanics and polymers. Much of the information in this review is taken from those chapters and from Chorin and Marsden and Batchelor; the chapters are also useful for their extensive bibliographies.
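The point vortex idea can be sketched in a few lines. The following is an assumed illustrative form (standard 2D Biot-Savart kernel, forward-Euler time stepping), not the report's own formulation: each vortex is advected by the velocity induced by all the others.

```python
import numpy as np

def induced_velocity(pos, gamma):
    """2D Biot-Savart sum for point vortices.

    pos: (N, 2) array of vortex positions; gamma: (N,) circulations.
    Returns (N, 2) velocities; the self-interaction term is skipped.
    """
    vel = np.zeros_like(pos)
    for i in range(len(gamma)):
        for j in range(len(gamma)):
            if i == j:
                continue
            dx = pos[i] - pos[j]
            r2 = dx @ dx
            # velocity induced by vortex j: Gamma_j/(2*pi) * (-dy, dx) / r^2
            vel[i] += gamma[j] / (2.0 * np.pi * r2) * np.array([-dx[1], dx[0]])
    return vel

def step(pos, gamma, dt):
    """One forward-Euler advection step of all vortices."""
    return pos + dt * induced_velocity(pos, gamma)
```

Two equal-circulation vortices, for example, co-rotate about their midpoint; since the induced velocities are equal and opposite, the circulation-weighted centroid is conserved exactly at each Euler step, a discrete analogue of a conserved quantity of the continuous dynamics. Practical vortex methods replace the singular kernel with a smoothed (vortex blob) kernel and use higher-order time integration.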
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The literature review and empirical analyses presented in this report were undertaken, for the most part, between August and October 1983. They are not comprehensive. No primary data were gathered, nor were any formal surveys conducted. Additionally, because construction of a repository at Yucca Mountain, if that site is selected for a repository, is not scheduled to begin until 1993, engineering design and planned physical appearance of the repository are very preliminary. Therefore, specific design features or visual appearance were not addressed in the analyses. Finally, because actual transportation routes have not been designated, impacts on tourism generated specifically by transportation activities are not considered separately. Chapter 2 briefly discusses possible means by which a repository could impact tourism in the Las Vegas area. Chapter 3 presents a review of previous research on alternative methods for predicting the response of people to potential hazards. A review of several published studies where these methods have been applied to facilities and activities associated with radioactive materials is included in Chapter 3. Chapter 4 contains five case studies of tourism impacts associated with past events that were perceived by the public to represent safety hazards. These perceptions of safety hazards were evidenced by news media coverage. These case studies were conducted specifically for this report. Conclusions of this preliminary analysis regarding the potential impact on tourism in the Las Vegas area of a repository at Yucca Mountain are in Chapter 5. Recommendations for further research are contained in Chapter 6.
Minerals Yearbook, volume II, Area Reports—Domestic
,
2018-01-01
The U.S. Geological Survey (USGS) Minerals Yearbook discusses the performance of the worldwide minerals and materials industries and provides background information to assist in interpreting that performance. Content of the individual Minerals Yearbook volumes follows:Volume I, Metals and Minerals, contains chapters about virtually all metallic and industrial mineral commodities important to the U.S. economy. Chapters on survey methods, summary statistics for domestic nonfuel minerals, and trends in mining and quarrying in the metals and industrial mineral industries in the United States are also included.Volume II, Area Reports: Domestic, contains a chapter on the mineral industry of each of the 50 States and Puerto Rico and the Administered Islands. This volume also has chapters on survey methods and summary statistics of domestic nonfuel minerals.Volume III, Area Reports: International, is published as four separate reports. These regional reports contain the latest available minerals data on more than 180 foreign countries and discuss the importance of minerals to the economies of these nations and the United States. Each report begins with an overview of the region’s mineral industries during the year. It continues with individual country chapters that examine the mining, refining, processing, and use of minerals in each country of the region and how each country’s mineral industry relates to U.S. industry. Most chapters include production tables and industry structure tables, information about Government policies and programs that affect the country’s mineral industry, and an outlook section.The USGS continually strives to improve the value of its publications to users. Constructive comments and suggestions by readers of the Minerals Yearbook are welcomed.
Minerals Yearbook, volume I, Metals and Minerals
,
2018-01-01
Minerals Yearbook, volume III, Area Reports—International
,
2018-01-01
Theory and modeling of particles with DNA-mediated interactions
NASA Astrophysics Data System (ADS)
Licata, Nicholas A.
2008-05-01
In recent years, significant attention has been drawn to proposals that utilize DNA for nanotechnological applications. Potential applications of these ideas range from the programmable self-assembly of colloidal crystals to biosensors and nanoparticle-based drug delivery platforms. In Chapter I we introduce the system, which generically consists of colloidal particles functionalized with specially designed DNA markers. The sequence of bases on the DNA markers determines the particle type. Due to hybridization between complementary single-stranded DNA, specific, type-dependent interactions can be introduced between particles by choosing the appropriate DNA marker sequences. In Chapter II we develop a statistical mechanical description of the aggregation and melting behavior of particles with DNA-mediated interactions. In Chapter III a model is proposed to describe the dynamical departure and diffusion of particles which form reversible key-lock connections. In Chapter IV we propose a method to self-assemble nanoparticle clusters using DNA scaffolds. A natural extension is discussed in Chapter V: the programmable self-assembly of nanoparticle clusters where the desired cluster geometry is encoded using DNA-mediated interactions. In Chapter VI we consider a nanoparticle-based drug delivery platform for targeted, cell-specific chemotherapy. In Chapter VII we present prospects for future research: the connection between DNA-mediated colloidal crystallization and jamming, and the inverse problem in self-assembly.
Bioorganic Chemistry: Peptides and Proteins (edited by Sidney M. Hecht)
NASA Astrophysics Data System (ADS)
Anthony-Cahill, Spencer
1999-07-01
Sidney M. Hecht, Ed. Oxford University Press: New York, 1998. 532 pp. ISBN 0-19-508468-3. $75.00. The second volume in the Bioorganic Chemistry series edited by Sidney Hecht is an outstanding addition to the collections of all scientists who teach and/or do research in the field of protein chemistry. The coverage of current research is up to date and thus the book is of great relevance to all chemists with interest in proteins, not just to academicians. As an instructor I found numerous references to current research, which I have included in my lecture notes for the undergraduate Biochemistry course and a senior-level Protein Engineering course taught at WWU. In addition to the chapters covering a broad spectrum of protein chemistry, there are two chapters (protein structural analysis, site-directed mutagenesis) which are excellent introductions to laboratory procedures in protein chemistry and molecular biology. The first chapter is an overview of basic protein biochemistry and serves as an introduction to the rest of the book. This chapter is dispensable for readers familiar with introductory biochemistry. The chapter on chemical synthesis of peptides is an exhaustive review of solution and solid-phase methods, with numerous references. I was struck by the abundance of figures showing structures of reactants but the general lack of organic chemical mechanisms. This is true for the rest of the book as well. Presumably the chemistry is known to the intended reader (grad students, advanced undergrads); however, as a devoted pusher of electrons, I was expecting to see more mechanisms in this and subsequent chapters. Instructors will have to present this aspect of the chemistry in lecture. The relevance of peptide chemistry is underscored by accompanying chapters on peptide hormones and peptidomimetics. Taken together these three chapters provide an excellent introduction to pharmaceutical peptide chemistry. 
The chapter on total synthesis of proteins is one of my favorites. It outlines elegant synthetic approaches to the formidable problem of generating long peptides and is very readable. Complementing the chemical synthetic strategies is a chapter on recombinant methods for protein synthesis. Again, I found this to be an excellent review of methods that have become the sine qua non of protein structure-function studies. The application of site-directed mutagenesis to support protein biophysical studies is illustrated with relevant examples from the author's laboratory. The chapter Structural Analysis of Proteins is an informative review of lab procedures for analyzing primary sequence and posttranslational modifications. It might well serve as a lab manual, as in many cases recipes for a particular procedure are given in the text. At 70 pages the chapter on protein structure is the longest in the book. It is impressive in its level of detail while maintaining readability. This chapter not only provides an excellent introduction to protein structure in general but also highlights the interplay between computational methods (modeling, refinement) and classification of structural motifs that supports structure prediction. Four chapters further illustrate the diversity of research in the protein field. These topics include antibody catalysis, DNA-binding proteins that require zinc, the use of enzymes in organic synthesis, and protein-based materials research. Finally, two chapters deserve special mention as outstanding treatments of important theoretical concepts. The chapters on protein folding and proton transfer to and from carbon by enzymes stand out in my mind as excellent qualitative introductions to complex topics. Both are succinct, lucid presentations of the relevant theoretical considerations, with ample references to the primary literature for those seeking more quantitative development of the topics. This is an outstanding collection of reviews. 
If you are a peptide or protein chemist or a reader with a general interest in proteins, you will benefit from reading all or most of this book. Each chapter stands on its own, so the order of coverage during an academic term depends on the preference of the instructor. I have only minor suggestions for improvement. I found roughly a dozen typos in the figures and in the text. I prefer references at the end of each chapter rather than all together at the back of the book. The book would be enhanced by the inclusion of mechanisms for many of the cited reactions. Cofactor chemistry, metabolic pathway elucidation (xenobiotic biosynthesis), and enzyme mimics (other than antibodies) are not covered in this volume. It is debatable whether they should be. In the final analysis the editor had to make choices about what to include and he made very good ones. Perhaps some of the elegant synthetic chemistry being developed to elucidate biosynthetic pathways and enzyme mechanisms will appear in subsequent volumes. In my mind that is classical bioorganic chemistry and worthy of inclusion. In the meantime, Professor Hecht is to be congratulated for assembling yet another fine edition of readable and relevant Bioorganic Chemistry.
Axial Crushing of Thin-Walled Columns with Octagonal Section: Modeling and Design
NASA Astrophysics Data System (ADS)
Liu, Yucheng; Day, Michael L.
This chapter focuses on numerical crashworthiness analysis of straight thin-walled columns with octagonal cross sections. Two important issues in this analysis are demonstrated here: computer modeling and crashworthiness design. In the first part, this chapter introduces a method of developing simplified finite element (FE) models for the straight thin-walled octagonal columns, which can be used for the numerical crashworthiness analysis. Next, this chapter performs a crashworthiness design for such thin-walled columns in order to maximize their energy absorption capability. Specific energy absorption (SEA) is set as the design objective, the side length of the octagonal cross section and the wall thickness are selected as design variables, and the maximum crushing force (Pm) that occurs during crashes is set as the design constraint. The response surface method (RSM) is employed to formulate functions for both SEA and Pm.
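An RSM fit of this kind amounts to least-squares regression of a low-order polynomial over sampled design points. The sketch below uses entirely hypothetical design points and a made-up SEA response (not values from this chapter) to show a full quadratic response surface in side length b and wall thickness t:

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Fit a full quadratic response surface
    y ~ c0 + c1*b + c2*t + c3*b^2 + c4*t^2 + c5*b*t by least squares."""
    b, t = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(b), b, t, b**2, t**2, b * t])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def eval_rs(coef, b, t):
    """Evaluate the fitted surface at a design point (b, t)."""
    return (coef[0] + coef[1]*b + coef[2]*t
            + coef[3]*b**2 + coef[4]*t**2 + coef[5]*b*t)

# Hypothetical design grid: side length b [mm], wall thickness t [mm]
rng = np.random.default_rng(0)
B, T = np.meshgrid(np.linspace(20, 60, 5), np.linspace(1.0, 3.0, 5))
X = np.column_stack([B.ravel(), T.ravel()])
# Made-up SEA response, used only to exercise the fit
y = 5.0 + 0.1 * X[:, 0] + 8.0 * X[:, 1] - 1.2 * X[:, 1]**2
coef = fit_quadratic_rs(X, y + rng.normal(0.0, 0.01, y.size))
```

In a real workflow the design points would come from a design of experiments and the responses from crash simulations; the fitted surrogate is then handed to an optimizer subject to the Pm constraint.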
Marketing in nursing organizations.
Chambers, S B
1989-05-01
The purpose of chapter 3 is to provide a conceptual framework for understanding marketing. Although it is often considered to be one, marketing is not really a new activity for nursing organizations. What is perhaps new to most nursing organizations is the conduct of marketing activities as a series of interrelated events that are part of a strategic marketing process. The increasingly volatile nursing environment requires a comprehensive approach to marketing. This chapter presents definitions of marketing, the marketing mix, the characteristics of nonprofit marketing, the relationship of strategic planning and strategic marketing, portfolio analysis, and a detailed description of the strategic marketing process. While this chapter focuses on marketing concepts, essential components, and presentation of the strategic marketing process, chapter 4 presents specific methods and techniques for implementing the strategic marketing process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W.; Agnew, Ken; Goldberg, Mimi
Whole-building retrofits involve the installation of multiple measures. Whole-building retrofit programs take many forms. With a focus on overall building performance, these programs usually begin with an energy audit to identify cost-effective energy efficiency measures for the home. Measures are then installed, either at no cost to the homeowner or partially paid for by rebates and/or financing. The methods described here may also be applied to evaluation of single-measure retrofit programs. Related methods exist for replace-on-failure programs and for new construction, but are not the subject of this chapter.
Quantum Approximate Methods for the Atomistic Modeling of Multicomponent Alloys. Chapter 7
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge; Mosca, Hugo; Gargano, Pablo; Noebe, Ronald D.; Abel, Phillip
2007-01-01
This chapter describes the role of quantum approximate methods in the understanding of complex multicomponent alloys at the atomic level. The need to accelerate materials design programs based on economical and efficient modeling techniques provides the framework for the introduction of approximations and simplifications in otherwise rigorous theoretical schemes. As a promising example of the role that such approximate methods might have in the development of complex systems, the BFS method for alloys is presented and applied to Ru-rich Ni-base superalloys and also to the NiAl(Ti,Cu) system, highlighting the benefits that can be obtained from introducing simple modeling techniques to the investigation of such complex systems.
ERIC Educational Resources Information Center
Employment and Training Administration (DOL), Washington, DC.
This report presents a final assessment of the early implementation of the School-to-Work (STW)/Youth Apprenticeship Demonstration programs and participants. Chapter I describes the evolution of STW policy. Chapter II discusses marketing methods, the student selection process and selection criteria, reasons for student participation, and number…
Chapter 2 - An overview of the LANDFIRE Prototype Project
Matthew G. Rollins; Robert E. Keane; Zhiliang Zhu; James P. Menakis
2006-01-01
This chapter describes the background and design of the Landscape Fire and Resource Management Planning Tools Prototype Project, or LANDFIRE Prototype Project, which was a sub-regional, proof-of-concept effort designed to develop methods and applications for providing the high-resolution data (30-m pixel) needed to support wildland fire management and to implement the...
Memory, Meaning, & Method: A View of Language Teaching. Second Edition.
ERIC Educational Resources Information Center
Stevick, Earl W.
The revised second edition of a 1976 book explores the literature of research on memory, creation of meaning in language learning, and second language teaching methodology, incorporating results of recent work in those areas. Each of the 12 chapters begins with a series of questions to be addressed and ends with further questions. Chapter topics…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-29
... analyze sugar, sugar syrups and confectionary products under Chapter 17 of the Harmonized Tariff Schedule... to analyze sugar, sugar syrups and confectionary products under Chapter 17 of the Harmonized Tariff... methods only: (1) Polarization of Raw Sugar, ICUMSA GS 1/2/3-1; (2) The Determination of the Polarization...
ERIC Educational Resources Information Center
Lindsay, Jeanne Warren; McCullough, Sally
Written for teenage parents, this book is designed to help them use appropriate methods of discipline for their infants and toddlers. Chapter 1, "Discipline Is Important!" defines discipline and discusses the importance of setting limits. Chapter 2, "Infants and Discipline," concerns the importance of parents disciplining…
Drawing from the Well. Oral History and Folk Arts in the Classroom and Community.
ERIC Educational Resources Information Center
Silnutzer, Randi, Ed.; Watrous, Beth Eildin, Ed.
Each chapter of this document describes a different project and approach for introducing students (elementary to high school) to oral history and folk arts. All chapters use a standard format in which a general overview of the project, describing themes, philosophies, and methods are followed by sample lesson plans, teacher guidelines, and student…
The Notional-Functional Approach: Teaching the Real Language in Its Natural Context.
ERIC Educational Resources Information Center
Laine, Elaine
This study of the notional-functional approach to second language teaching reviews the history and theoretical background of the method, current issues, and implementation of a notional-functional syllabus. Chapter 1 discusses the history and theory of the approach and the organization and advantages of the notional-functional syllabus. Chapter 2…
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Gallagher, Suzy; Granier, Martin
1984-01-01
A project is described which has as its goal the production of a set of system-independent, discipline-independent, transportable college level courses to educate science and engineering students in the use of large-scale information storage and retrieval systems. This project is being conducted with the cooperation and sponsorship of NASA by R and D teams at the University of Southwest Louisiana and Southern University. Chapter 1 is an introduction, providing an overview and a listing of the management phases. Chapter 2 furnishes general information regarding accomplishments in areas under development. Chapter 3 deals with the development of the course materials by presenting a series of diagrams and keys to depict the progress and interrelationships of various tasks and sub-tasks. Chapter 4 presents plans for activities to be conducted to complete and deliver course materials. The final chapter is a summary of project objectives, methods, plans, and accomplishments.
Measurement and modeling of unsaturated hydraulic conductivity: Chapter 21
Perkins, Kim S.; Elango, Lakshmanan
2011-01-01
This chapter will discuss, by way of examples, various techniques used to measure and model hydraulic conductivity as a function of water content, K(θ). The parameters that describe the K(θ) curve obtained by different methods are used directly in Richards’ equation-based numerical models, which have some degree of sensitivity to those parameters. This chapter will explore the complications of using laboratory-measured or estimated properties for field-scale investigations to shed light on how adequately the processes are represented. Additionally, some more recent concepts for representing unsaturated-zone flow processes will be discussed.
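One widely used closed form for K(θ) in Richards'-equation models (not necessarily the one emphasized in this chapter) is the van Genuchten-Mualem model, K = Ks·Se^(1/2)·[1 − (1 − Se^(1/m))^m]² with effective saturation Se = (θ − θr)/(θs − θr) and m = 1 − 1/n. A minimal sketch, with all parameter values illustrative:

```python
def k_unsat(theta, theta_r, theta_s, K_s, n):
    """Van Genuchten-Mualem unsaturated hydraulic conductivity K(theta).

    theta_r, theta_s: residual and saturated water contents [-]
    K_s: saturated hydraulic conductivity (any consistent units)
    n: van Genuchten shape parameter (> 1); m = 1 - 1/n
    """
    m = 1.0 - 1.0 / n
    se = (theta - theta_r) / (theta_s - theta_r)
    se = min(max(se, 0.0), 1.0)          # clamp to physical range
    return K_s * se**0.5 * (1.0 - (1.0 - se**(1.0 / m))**m) ** 2
```

By construction K equals Ks at saturation and drops to zero at the residual water content, which is the behavior a Richards'-equation solver expects from the constitutive relation.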
NASA Astrophysics Data System (ADS)
Cowan, James
This chapter summarizes and explains key concepts of building acoustics. These issues include the behavior of sound waves in rooms, the most commonly used rating systems for sound and sound control in buildings, the most common noise sources found in buildings, practical noise control methods for these sources, and the specific topic of office acoustics. Common noise issues for multi-dwelling units can be derived from most of the sections of this chapter. Books can be and have been written on each of these topics, so the purpose of this chapter is to summarize this information and provide appropriate resources for further exploration of each topic.
Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries
NASA Astrophysics Data System (ADS)
Perez, Hector Eduardo
This dissertation focuses on developing and experimentally validating model-based control techniques to safely enhance the operation of lithium-ion batteries. An overview of the contributions that address the challenges that arise is provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first-principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial-differential-equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. 
Theoretically, this chapter extends the notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight into battery charging control using electrochemistry models. Directly using full-order complex multi-partial-differential-equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced-order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum-time charge protocols are analyzed in detail subject to solid- and electrolyte-phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and insight into battery design for fast charging is provided. Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. 
A multi-objective optimal control problem is mathematically formulated via a coupled electro-thermal-aging battery model, where electrical and aging sub-models depend upon the core temperature captured by a two-state thermal sub-model. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting highly nonlinear six-state optimal control problem. Charge time and health degradation are therefore optimally traded off, subject to both electrical and thermal constraints. Minimum-time, minimum-aging, and balanced charge scenarios are examined in detail. Sensitivities to the upper voltage bound, ambient temperature, and cooling convection resistance are investigated as well. Experimental results are provided to compare the tradeoffs between a balanced and traditional charge protocol. Chapter 6: This chapter provides concluding remarks on the findings of this dissertation and a discussion of future work.
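The reference-governor idea of Chapter 2 can be illustrated, far from the electrochemical PDE setting, on a toy scalar linear system: at each step the governor admits the largest step toward the requested reference whose predicted state trajectory satisfies the constraint. A minimal sketch with assumed numbers (not the battery model):

```python
def rg_step(x, v_prev, r, a, b, x_max, horizon=50):
    """One reference-governor update for x+ = a*x + b*v, constraint x <= x_max.

    Searches kappa in [0, 1] (coarse grid, largest first) and returns
    v = v_prev + kappa*(r - v_prev) for the largest admissible kappa.
    """
    for kappa in (i / 100.0 for i in range(100, -1, -1)):
        v = v_prev + kappa * (r - v_prev)
        xp, ok = x, True
        for _ in range(horizon):           # forward-simulate under constant v
            xp = a * xp + b * v
            if xp > x_max:
                ok = False
                break
        if ok:
            return v
    return v_prev                          # no admissible step: hold last value

# Toy plant: x+ = 0.9*x + 0.1*v (steady state x = v), constraint x <= 1.0.
# The operator requests r = 2.0; the governor should clip the applied
# reference so the state never exceeds 1.0.
x, v = 0.0, 0.0
for _ in range(200):
    v = rg_step(x, v, r=2.0, a=0.9, b=0.1, x_max=1.0)
    x = 0.9 * x + 0.1 * v
```

The electrochemical RG replaces the scalar prediction with a (reduced) battery model and the box constraint with plating, depletion, and temperature limits, but the governing logic is the same.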
Vibrations of double-nanotube systems with mislocation via a newly developed van der Waals model
NASA Astrophysics Data System (ADS)
Kiani, Keivan
2015-06-01
This study deals with the transverse vibrations of two adjacent, parallel, mislocated single-walled carbon nanotubes (SWCNTs) under various end conditions. These tubes interact with each other and their surrounding medium through the intertube van der Waals (vdW) forces and the existing bonds between their atoms and those of the elastic medium. The elastic energy of such forces due to the deflections of the nanotubes is appropriately modeled by defining a vdW force density function. In previous works, vdW forces between two identical tubes were idealized by a uniform form of this function. The newly introduced function enables us to investigate the influences of both the intertube free distance and the longitudinal mislocation on the natural transverse frequencies of a nanosystem consisting of two dissimilar tubes. Such crucial issues have not been addressed yet, even for simply supported tubes. Using nonlocal Timoshenko and higher-order beam theories as well as Hamilton's principle, the strong form of the equations of motion is established. Since finding an explicit solution to these integro-partial differential equations is very problematic, an energy-based method in conjunction with an efficient meshfree method is proposed, and the nonlocal frequencies of the elastically embedded nanosystem are determined. For simply supported nanosystems, the first five frequencies predicted by the proposed model are checked against those of the assumed mode method, and reasonably good agreement is achieved. Through various studies, the roles of the tubes' length ratio, intertube free space, mislocation, small-scale effect, slenderness ratio, radius of the SWCNTs, and elastic constants of the elastic matrix on the natural frequencies of the nanosystem with various end conditions are explained. The limitations of the nonlocal Timoshenko beam theory are also addressed. 
This work can be considered a vital step toward a better understanding of more complex systems consisting of vertically aligned SWCNTs of various lengths.
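For orientation only: a single simply supported beam under classical local Euler-Bernoulli theory (the limit that nonlocal beam models approach as the small-scale parameter vanishes) has the closed-form frequencies omega_n = (n*pi/L)^2 * sqrt(E*I/(rho*A)). A sketch with illustrative properties, not the paper's nonlocal two-tube model:

```python
import math

def ss_beam_freqs_hz(E, I, rho, A, L, n_modes=5):
    """First n_modes natural frequencies (Hz) of a simply supported
    Euler-Bernoulli beam: omega_n = (n*pi/L)^2 * sqrt(E*I/(rho*A))."""
    c = math.sqrt(E * I / (rho * A))
    return [(n * math.pi / L) ** 2 * c / (2.0 * math.pi)
            for n in range(1, n_modes + 1)]

# Illustrative (assumed) properties of a macroscopic steel beam
freqs = ss_beam_freqs_hz(E=200e9, I=1e-8, rho=7800.0, A=1e-4, L=1.0)
```

Note the characteristic n² spacing of the modes; nonlocal and Timoshenko corrections both pull the higher modes below this pattern, which is one way the small-scale effect discussed above manifests itself.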
NASA Astrophysics Data System (ADS)
Etcheverry, Jose R.
This dissertation explores the potential of renewable energy and efficiency strategies to solve the energy challenges faced by the people living in the biosphere reserve of El Vizcaino, which is located in the North Pacific region of the Mexican state of Baja California Sur. This research setting provides a practical analytical milieu to better understand the multiple problems faced by practitioners and agencies trying to implement sustainable energy solutions in Mexico. The thesis starts with a literature review (chapter two) that examines accumulated international experience regarding the development of renewable energy projects as a prelude to identifying the most salient implementation barriers impeding this type of initiative. Two particularly salient findings from the literature review are the importance of considering gender issues in energy analysis and the value of using participatory research methods. These findings informed the fieldwork design and the analytical framework of the dissertation. Chapter three surveys electricity generation as well as residential and commercial electricity use in nine coastal communities located in El Vizcaino. Chapter three summarizes the fieldwork methodology used, which relies on a mix of qualitative and quantitative research methods aimed at enabling a gender-disaggregated analysis to describe more accurately local energy uses, needs, and barriers. Chapter four describes the current plans of the state government, which are focused on expanding one of the state's diesel-powered electricity grids to El Vizcaino. The chapter also examines the potential for replacing diesel generators with a combination of renewable energy systems and efficiency measures in the coastal communities sampled. Chapter five analyzes strategies to enable the implementation of sustainable energy approaches in El Vizcaino. 
Chapter five highlights several international examples that could be useful to inform organizational changes at the federal and state level aimed at fostering renewable energy and efficiency initiatives that enhance energy security, protect the environment, and also increase economic opportunities in El Vizcaino and elsewhere in Mexico. Chapter six concludes the thesis by providing: a summary of all key findings, a broad analysis of the implications of the research, and an overview of future lines of inquiry.
Walsh, Daniel P.
2012-01-01
The purpose of this document is to provide wildlife management agencies with the foundation upon which they can build scientifically rigorous and cost-effective surveillance and monitoring programs for chronic wasting disease (CWD) or refine their existing programs. The first chapter provides an overview of potential demographic and spatial risk factors of susceptible wildlife populations that may be exploited for CWD surveillance and monitoring. The information contained in this chapter explores historic as well as recent developments in our understanding of CWD disease dynamics. It also contains many literature references for readers who may desire a more thorough review of the topics or CWD in general. The second chapter examines methods for enhancing efforts to detect CWD on the landscape where it is not presently known to exist and focuses on the efficiency and cost-effectiveness of the surveillance program. Specifically, it describes the means of exploiting current knowledge of demographic and spatial risk factors, as described in the first chapter, through a two-stage surveillance scheme that utilizes traditional design-based sampling approaches and novel statistical methods to incorporate information about the attributes of the landscape, environment, populations and individual animals into CWD surveillance activities. By accounting for these attributes, efficiencies can be gained and cost-savings can be realized. The final chapter is unique in relation to the first two chapters. Its focus is on designing programs to monitor CWD once it is discovered within a jurisdiction. Unlike the prior chapters that are more detailed or prescriptive, this chapter by design is considerably more general because providing comprehensive direction for creating monitoring programs for jurisdictions without consideration of their monitoring goals, sociopolitical constraints, or their biological systems, is not possible. 
Therefore, the authors draw upon their collective experiences implementing disease-monitoring programs to present the important questions to consider, potential tools, and various strategies for those wildlife management agencies endeavoring to create or maintain a CWD monitoring program. Its intent is to aid readers in creating efficient and cost-effective monitoring programs, while avoiding potential pitfalls. It is hoped that these three chapters will be useful tools for wildlife managers struggling to implement efficient and effective CWD disease management programs.
Molecular Simulation of Adsorption in Zeolites
NASA Astrophysics Data System (ADS)
Bai, Peng
Zeolites are a class of crystalline nanoporous materials that are widely used as catalysts, sorbents, and ion-exchangers. Zeolites have revolutionized the petroleum industry and have fueled the 20th-century automobile culture by enabling numerous highly efficient transformations and separations in oil refineries. They are also poised to play an important role in many processes of biomass conversion. One of the fundamental principles in the field of zeolites involves the understanding and tuning of the selectivity for different guest molecules that results from the wide variety of pore architectures. The primary goal of my dissertation research is to gain such understanding via computer simulations and eventually to reach the level of predictive modeling. The dissertation starts with a brief introduction to the applications of zeolites and the computer modeling techniques useful for the study of zeolitic systems. Chapter 2 then describes an effort to improve simulation efficiency, which is essential for many challenging adsorption systems. Chapter 3 studies a model system to demonstrate the applicability and capability of the method used for the majority of this work, configurational-bias Monte Carlo simulations in the Gibbs ensemble (CBMC-GE). After these methodological developments, Chapters 4 and 5 report a systematic parametrization of a new transferable force field for all-silica zeolites, TraPPE-zeo, and a subsequent, relatively ad hoc extension to cation-exchanged aluminosilicates. The CBMC-GE method and the TraPPE-zeo force field are then combined to investigate some complex adsorption systems, such as linear and branched C6-C9 alkanes in a hierarchical microporous/mesoporous material (Chapter 6), and the multi-component adsorption of aqueous alcohol solutions (Chapter 7) and glucose solutions (Chapter 8). 
Finally, Chapter 9 describes an endeavor to screen a large number of zeolites with the purpose of finding better materials for two energy-related applications, ethanol/water separation and hydrocarbon iso-dewaxing.
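The configurational-bias Gibbs-ensemble machinery is well beyond a snippet, but the Metropolis acceptance rule at the heart of such adsorption simulations can be illustrated with a toy grand-canonical lattice gas: non-interacting sites whose average coverage should approach the Langmuir-like form 1/(1 + exp(beta*(eps - mu))). All parameters below are illustrative, not zeolite force-field values:

```python
import math
import random

def gcmc_lattice_gas(M, beta, mu, eps, sweeps, seed=1):
    """Toy grand-canonical Monte Carlo for a non-interacting lattice gas.

    Each of M sites is empty or holds one adsorbate with energy eps.
    Occupations are flipped with Metropolis acceptance on the grand
    potential E - mu*N; returns the average fractional coverage,
    discarding the first half of the sweeps as equilibration.
    """
    rng = random.Random(seed)
    occ = [0] * M
    coverage_sum, samples = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(M):
            i = rng.randrange(M)
            # Change in (E - mu*N) when flipping site i
            dE = (eps - mu) * (1 - 2 * occ[i])
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                occ[i] = 1 - occ[i]
        if sweep >= sweeps // 2:
            coverage_sum += sum(occ) / M
            samples += 1
    return coverage_sum / samples

# Favorable adsorption (eps < mu) gives coverage above one half
theta = gcmc_lattice_gas(M=200, beta=1.0, mu=0.0, eps=-1.0, sweeps=2000)
```

Real CBMC-GE simulations add particle exchange between coupled boxes and biased chain growth for articulated molecules, but the accept/reject bookkeeping generalizes this same pattern.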
NASA Astrophysics Data System (ADS)
Soligo, Riccardo
In this work, the insight provided by our sophisticated Full Band Monte Carlo simulator is used to analyze the behavior of state-of-the-art devices like GaN High Electron Mobility Transistors and Hot Electron Transistors. Chapter 1 is dedicated to the description of the simulation tool used to obtain the results shown in this work. Moreover, a separate section is dedicated to setting up a procedure to validate the tunneling algorithm recently implemented in the simulator. Chapter 2 introduces High Electron Mobility Transistors (HEMTs), state-of-the-art devices characterized by highly nonlinear transport phenomena that require the use of advanced simulation methods. The techniques for device modeling are described and applied to a recent GaN HEMT, and they are validated with experimental measurements. The main characterization techniques are also described, including the original contribution provided by this work. Chapter 3 focuses on a popular technique to enhance HEMT performance: the down-scaling of the device dimensions. In particular, this chapter is dedicated to lateral scaling and the calculation of a limiting cutoff frequency for a device of vanishing length. Finally, Chapter 4 and Chapter 5 describe the modeling of Hot Electron Transistors (HETs). The simulation approach is validated by matching the simulated current characteristics with the experimental ones before variations of the layouts are proposed to increase the current gain to values suitable for amplification. The frequency response of these layouts is calculated and modeled by a small-signal circuit. For this purpose, a method to directly calculate the capacitance is developed, which provides a graphical picture of the capacitive phenomena that limit the frequency response of devices. In Chapter 5 the properties of the hot electrons are investigated for different injection energies, which are obtained by changing the layout of the emitter barrier. 
Moreover, the large-signal characterization of the HET is shown for different layouts in which the collector barrier was scaled.
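The cutoff frequency analyzed in the scaling study is conventionally estimated from the quasi-static small-signal relation f_T = g_m / (2*pi*(C_gs + C_gd)). A minimal sketch with illustrative (assumed) device values, not the simulated GaN HEMT parameters:

```python
import math

def cutoff_frequency(gm, c_gs, c_gd):
    """Quasi-static unity-current-gain cutoff frequency:
    f_T = gm / (2*pi*(C_gs + C_gd)), with gm in siemens and C in farads."""
    return gm / (2.0 * math.pi * (c_gs + c_gd))

# Illustrative values: gm = 30 mS, C_gs = 40 fF, C_gd = 8 fF
f_t = cutoff_frequency(gm=0.030, c_gs=40e-15, c_gd=8e-15)   # ~100 GHz
```

This relation also makes the lateral-scaling limit intuitive: as gate length shrinks, gm and the capacitances both change, so f_T does not grow without bound, consistent with the limiting cutoff frequency computed in Chapter 3 for a device of vanishing length.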
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W.; Romberger, Jeff
The HVAC Controls Evaluation Protocol is designed to address evaluation issues for direct digital controls/energy management systems/building automation systems (DDC/EMS/BAS) that are installed to control heating, ventilation, and air-conditioning (HVAC) equipment in commercial and institutional buildings. (This chapter refers to the DDC/EMS/BAS measure as HVAC controls.) This protocol may also be applicable to industrial facilities such as clean rooms and labs, which have either significant HVAC equipment or spaces requiring special environmental conditions.
U3Si2 Fabrication and Testing for Implementation into the BISON Fuel Performance Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knight, Travis W.
A creep test stand was designed and constructed for compressive creep testing of U3Si2 pellets. This is described in Chapter 3. Creep testing of U3Si2 pellets was completed; in total, 13 compressive creep tests of U3Si2 pellets were successfully completed. This is reported in Chapter 3. A secondary creep model of U3Si2 was developed and implemented in BISON. This is described in Chapter 4. Properties of U3Si2 were implemented in BISON. This is described in Chapter 4. A resonant frequency and damping analyzer (RFDA) using the impulse excitation technique (IET) was set up, tested, and used to analyze U3Si2 samples to measure the Young's and shear moduli, which were then used to calculate the Poisson ratio for U3Si2. This is described in Chapter 5. Characterization of U3Si2 samples was completed. Samples were prepared and analyzed by XRD, SEM, and optical microscopy. Grain size analysis was conducted on images. SEM with EDS was used to analyze second-phase precipitates. The impulse excitation technique was used to determine the Young's and shear moduli of a tile specimen, which allowed for the determination of the Poisson ratio. Helium pycnometry and mercury intrusion porosimetry were performed and used with image analysis to determine the porosity size distribution. The Vickers microindentation characterization method was used to evaluate the mechanical properties of U3Si2, including toughness, hardness, and Vickers hardness. Electrical resistivity measurement was done using the four-point probe method. This is reported in Chapter 5.
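The Poisson ratio mentioned above follows from the IET-measured Young's and shear moduli through the isotropic elasticity relation E = 2G(1 + ν), i.e. ν = E/(2G) − 1. A small sketch; the moduli below are illustrative placeholders, not the values measured in this work:

```python
def poisson_ratio(E, G):
    """Isotropic elasticity: E = 2*G*(1 + nu)  =>  nu = E/(2*G) - 1."""
    return E / (2.0 * G) - 1.0

# Illustrative moduli (GPa), placeholders only, NOT measured values from this work:
E, G = 140.0, 55.0
print(f"nu = {poisson_ratio(E, G):.3f}")  # 0.273
```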
Minerals Yearbook, volume III, Area Reports—International—Africa and the Middle East
2018-01-01
The U.S. Geological Survey (USGS) Minerals Yearbook discusses the performance of the worldwide minerals and materials industries and provides background information to assist in interpreting that performance. Content of the individual Minerals Yearbook volumes follows: Volume I, Metals and Minerals, contains chapters about virtually all metallic and industrial mineral commodities important to the U.S. economy. Chapters on survey methods, summary statistics for domestic nonfuel minerals, and trends in mining and quarrying in the metals and industrial mineral industries in the United States are also included. Volume II, Area Reports: Domestic, contains a chapter on the mineral industry of each of the 50 States and Puerto Rico and the Administered Islands. This volume also has chapters on survey methods and summary statistics of domestic nonfuel minerals. Volume III, Area Reports: International, is published as four separate reports. These regional reports contain the latest available minerals data on more than 180 foreign countries and discuss the importance of minerals to the economies of these nations and the United States. Each report begins with an overview of the region's mineral industries during the year. It continues with individual country chapters that examine the mining, refining, processing, and use of minerals in each country of the region and how each country's mineral industry relates to U.S. industry. Most chapters include production tables and industry structure tables, information about Government policies and programs that affect the country's mineral industry, and an outlook section. The USGS continually strives to improve the value of its publications to users. Constructive comments and suggestions by readers of the Minerals Yearbook are welcomed.
Minerals Yearbook, volume III, Area Reports—International—Asia and the Pacific
Geological Survey, U.S.
2018-01-01
Minerals Yearbook, volume III, Area Reports—International—Latin America and Canada
2018-01-01
Minerals Yearbook, volume III, Area Reports—International—Europe and Central Eurasia
Geological Survey, U.S.
2018-01-01
Magnetic spectroscopy and microscopy of functional materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, Catherine Ann
2011-05-01
Heusler intermetallics Mn2YGa and X2MnGa (X, Y = Fe, Co, Ni) undergo tetragonal magnetostructural transitions that can result in half metallicity, magnetic shape memory, or the magnetocaloric effect. Understanding the magnetism and magnetic behavior in functional materials is often the most direct route to being able to optimize current materials for today's applications and to design novel ones for tomorrow. Synchrotron soft x-ray magnetic spectromicroscopy techniques are well suited to explore the competing effects of the magnetization and the lattice parameters in these materials, as they provide detailed element-, valence-, and site-specific information on the coupling of crystallographic ordering and electronic structure, as well as the influence of external parameters like temperature and pressure on the bonding and exchange. Fundamental work preparing the model systems of spintronic, multiferroic, and energy-related compositions is presented for context. The methodology of synchrotron spectroscopy is presented and applied not only to magnetic characterization but also to developing a systematic screening method for future examples of materials exhibiting any of the above effects. The chapter progression is as follows: an introduction to the concepts and materials under consideration (Chapter 1); an overview of sample preparation techniques and results, and the kinds of characterization methods employed (Chapter 2); spectro- and microscopic explorations of X2MnGa/Ge (Chapter 3); spectroscopic investigations of the composition series Mn2YGa to the logical Mn3Ga endpoint (Chapter 4); and a summary and overview of upcoming work (Chapter 5). Appendices include the results of a Think Tank for the Graduate School of Excellence MAINZ (Appendix A) and details of an imaging project now in progress on magnetic reversal and domain wall observation in the classical Heusler material Co2FeSi (Appendix B).
Hassan, Ghada S
2013-01-01
This chapter covers the various aspects of menadione (vitamin K). The drug is synthesized by the use of itaconic acid obtained through Friedel-Crafts condensation or by direct oxidation of 2-methyl-1,4-naphthoquinone. Vitamin K generally maintains healthy blood clotting and prevents excessive bleeding and hemorrhage; it is also important for maintaining healthy bone structure and for carbohydrate storage in the body. In addition, it is given to newborn babies born in hospitals to prevent the development of life-threatening bleeding caused by low prothrombin levels. The chapter discusses the drug's metabolism and pharmacokinetics and presents various methods of analysis of this drug, such as compendial tests, electrochemical analysis, spectroscopic analysis, and chromatographic techniques of separation. It also discusses its physical properties, such as solubility characteristics, X-ray powder diffraction pattern, and thermal methods of analysis. The chapter concludes with a discussion of its biological properties, such as activity, toxicity, and safety. Copyright © 2013 Elsevier Inc. All rights reserved.
Reactivity of lignin and lignans: Correlation with molecular orbital calculations
Thomas Elder
2010-01-01
To date, and as can be seen from the other chapters of this text, the structure and chemistry of lignin have been described in terms of results from a wide range of chemical or spectroscopic methods to construct a mosaic picture of the polymer. The current chapter continues this process by describing past, present and potential applications of electronic structure...
ERIC Educational Resources Information Center
Silard, John; And Others
In this study, focus is upon the question of the standard for educational expenditure rather than on the alternative taxing methods for securing school district funding equalization. Chapter I begins by examining the major issues vital to urban education which the "Serrano" principle leaves unresolved. Then in Chapter II, particular elements of…
Alternative Methods of Base Level Demand Forecasting for Economic Order Quantity Items,
1975-12-01
Note ... 21; Adaptive Single Exponential Smoothing ... 21; Choosing the Smoothing Constant ... methodology used in the study, an analysis of results, and a detailed summary. Chapter I, Methodology, contains a description of the data, a ... Chapter IV, Detailed Summary, presents a detailed summary of the findings, lists the limitations inherent in the research methodology, and
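Single exponential smoothing, the forecasting method named in the fragment above, can be sketched in a few lines. The demand series and smoothing constant below are hypothetical, and the adaptive variant the report mentions (which adjusts the smoothing constant from the forecast error) is omitted:

```python
def single_exponential_smoothing(demand, alpha):
    """One-step-ahead forecasts F[t+1] = alpha*D[t] + (1 - alpha)*F[t],
    initialized with the first observation."""
    forecast = demand[0]
    fits = []                      # in-sample one-step-ahead forecasts
    for d in demand:
        fits.append(forecast)
        forecast = alpha * d + (1 - alpha) * forecast
    return fits, forecast          # fits and the next-period forecast

# Hypothetical base-level monthly demand for an EOQ item:
demand = [10, 12, 11, 13]
fits, next_forecast = single_exponential_smoothing(demand, alpha=0.2)
print(f"next-period forecast: {next_forecast:.3f}")  # 11.016
```

A small alpha smooths out noise; a large alpha tracks level shifts faster, which is the trade-off the "choosing the smoothing constant" section of the report addresses.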
Chapter 7: Selecting tree species for reforestation of Appalachian mined lands
V. Davis; J.A. Burger; R. Rathfon; C.E. Zipper
2017-01-01
The Forestry Reclamation Approach (FRA) is a method for reclaiming coal-mined land to forested postmining land uses under the federal Surface Mining Control and Reclamation Act of 1977 (SMCRA) (Chapter 2, this volume). Step 4 of the FRA is to plant native trees for commercial timber value, wildlife habitat, soil stability, watershed protection, and other environmental...
ERIC Educational Resources Information Center
Smith, Ruth S.
This guide provides guidelines for promoting and attracting users to a church library. The first of six chapters, "Promoting Library Use," discusses stages of promotion, methods of attracting the attention and interest of prospective users, and how to involve members of the congregation in library activities; the second chapter, "Publicizing the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This volume contains chapter 4 and Appendix 4A which include descriptions of use of adjacent land and water (within miles of the proposed site), baseline ecology, air quality, meteorology, noise, hydrology, water quality, geology, soils and socio-economic factors. Appendix 4A includes detailed ecological surveys made in the area including the methods used. (LTN)
ERIC Educational Resources Information Center
National Planning Association, Washington, DC.
Focusing on data needs and methods for manpower planning and manpower projections, this document is one in a series of six volumes reporting the results of the National Manpower Survey (NMS) of the Criminal Justice System. Chapter 1 of five chapters discusses the role and objectives of criminal justice manpower planning at different levels of…
Moment Method Solutions for Radiation and Scattering from Arbitrarily Shaped Surfaces.
1981-02-01
IBM 370/168. A. Monopole Antenna on a Disk: The study of the monopole antenna on a circular disk is of interest since it leads to the understanding... CHAPTER V. ANALYSIS OF MICROSTRIP ANTENNAS: This chapter will present an analysis of the microstrip antenna. Surface-patch dipole modes are used to
He, Ri-Hui; Tao, Ran
2017-01-01
This chapter focuses on the psychotherapy of substance and non-substance addiction (see Cognitive Behavioral Therapy in Chap. 16) and introduces the latest advances, mainly in mindfulness-based relapse prevention and PITDH, and points out that the complete elimination of psychological addiction is hoped to become the target and core of psychotherapy for addiction disorders. This chapter also introduces methods for, and progress in treating, various types of substance and non-substance addiction.
ERIC Educational Resources Information Center
Scribner, Jay D., Ed.
This book consists of 11 chapters that discuss various concerns of importance in the field of the politics of education and describe some of the current research efforts in the field. The individual chapters include "The Politics of Education: An Introduction," by Jay Scribner and Richard Englert; "Methods and Conceptualizations of…
Chapter A5. Processing of Water Samples
Wilde, Franceska D.; Radtke, Dean B.; Gibs, Jacob; Iwatsubo, Rick T.
1999-01-01
The National Field Manual for the Collection of Water-Quality Data (National Field Manual) describes protocols and provides guidelines for U.S. Geological Survey (USGS) personnel who collect data used to assess the quality of the Nation's surface-water and ground-water resources. This chapter addresses methods to be used in processing water samples to be analyzed for inorganic and organic chemical substances, including the bottling of composite, pumped, and bailed samples and subsamples; sample filtration; solid-phase extraction for pesticide analyses; sample preservation; and sample handling and shipping. Each chapter of the National Field Manual is published separately and revised periodically. Newly published and revised chapters will be announced on the USGS Home Page on the World Wide Web under 'New Publications of the U.S. Geological Survey.' The URL for this page is http://water.usgs.gov/lookup/get?newpubs.
Synthesis and Application of Graphene Based Nanomaterials
NASA Astrophysics Data System (ADS)
Peng, Zhiwei
Graphene, a two-dimensional sp2-bonded carbon material, has recently attracted major attention due to its excellent electrical, optical and mechanical properties. Depending on the application, graphene and its derived hybrid nanomaterials can be synthesized either by bottom-up chemical vapor deposition (CVD) methods for electronics, or by various top-down chemical reaction methods for energy generation and storage devices. My thesis begins with the investigation of CVD synthesis of graphene thin films in Chapter 1, including the direct growth of bilayer graphene on insulating substrates and the synthesis of "rebar graphene": a hybrid structure of graphene and carbon or boron nitride nanotubes. Chapter 2 discusses the synthesis of nanoribbon-shaped materials and their applications, including the splitting of vertically aligned multi-walled carbon nanotube carpets for supercapacitors, the synthesis of dispersible ferromagnetic graphene nanoribbon stacks with enhanced electrical percolation properties in a magnetic field, a graphene nanoribbon/SnO2 nanocomposite for lithium-ion batteries, and enhanced electrocatalysis of hydrogen evolution reactions by WS2 nanoribbons. Next, Chapter 3 discusses graphene-coated iron oxide nanomaterials and their use in energy storage applications. Finally, Chapter 4 introduces the development, characterization, and fabrication of laser-induced graphene and its application in supercapacitors.
Bit patterned media with composite structure for microwave assisted magnetic recording
NASA Astrophysics Data System (ADS)
Eibagi, Nasim
Patterned magnetic nano-structures are under extensive research due to their interesting emergent physics and promising applications ranging from high-density magnetic data storage, through magnetic logic, to bio-magnetic functionality. Bit-patterned media are an example of such structures and a leading candidate for reaching magnetic densities that cannot be achieved with conventional magnetic media. Patterned arrays of complex heterostructures such as exchange-coupled composites are studied in this thesis as candidates for the next generation of magnetic recording media. Exchange-coupled composites have shown new functionality and performance advantages in magnetic recording, and bit-patterned media provide a unique capability to implement such architectures. Due to the unique resonant properties of such structures, their possible application in spin-transfer-torque memory and microwave-assisted switching is also studied. This dissertation is divided into seven chapters. The first chapter covers the history of magnetic recording, the need to increase magnetic storage density, and the challenges in the field. The second chapter introduces basic concepts of magnetism. The third chapter explains the fabrication methods for thin films and the various lithographic techniques that were used to pattern the devices under study for this thesis. The fourth chapter introduces the exchange-coupled system with the structure [Co/Pd]/Fe/[Co/Pd], where the thickness of Fe is varied, and presents the magnetic properties of such structures measured with conventional magnetometers. The fifth chapter goes beyond what is learned in the fourth chapter and utilizes polarized neutron reflectometry to study the vertical exchange coupling and reversal mechanism in patterned arrays of this structure. The sixth chapter explores the dynamic properties of the patterned samples and their reversal mechanism under a microwave field.
The final chapter summarizes the results and describes the prospects for future applications of these structures.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement maximum likelihood decoding (MLD) of a code with reduced decoding complexity. The best-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis representations of linear block codes, by contrast, remained inactive for a long period. There are two major reasons for this inactive period of research. First, most coding theorists at that time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two beliefs seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications, and led to a general belief that block codes were inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters.
Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well-known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well-known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computational complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
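As a toy illustration of trellis-based MLD for a block code, the sketch below runs hard-decision Viterbi decoding on the syndrome (Wolf) trellis of the [7,4] Hamming code. It is a minimal sketch of the general technique only: the report itself also treats soft-decision metrics and sectionalized trellises, which this example omits.

```python
def viterbi_block_decode(H, r):
    """Hard-decision ML decoding of a binary [n, k] block code on its
    syndrome (Wolf) trellis: states are partial syndromes, and the path
    metric is the Hamming distance to the received word r."""
    m, n = len(H), len(H[0])
    cols = [tuple(H[i][j] for i in range(m)) for j in range(n)]
    zero = (0,) * m
    survivors = {zero: (0, [])}          # state -> (metric, survivor path)
    for j in range(n):
        nxt = {}
        for state, (metric, path) in survivors.items():
            for b in (0, 1):             # hypothesize transmitted bit b
                s2 = tuple(s ^ (b & c) for s, c in zip(state, cols[j]))
                m2 = metric + (b != r[j])
                if s2 not in nxt or m2 < nxt[s2][0]:
                    nxt[s2] = (m2, path + [b])
        survivors = nxt
    return survivors[zero][1]            # only zero-syndrome paths are codewords

# Parity-check matrix of the [7,4] Hamming code:
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
# A single bit flip in the all-zero codeword is corrected:
print(viterbi_block_decode(H, [0, 0, 1, 0, 0, 0, 0]))  # [0, 0, 0, 0, 0, 0, 0]
```

The Wolf trellis has at most 2^(n-k) states per section, which grows quickly for long high-rate codes; that state-space blow-up is exactly what the sectionalization and structural results of Chapters 3 through 6 address.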
NASA Astrophysics Data System (ADS)
Gourgoulhon, Eric
2011-04-01
Numerical relativity is one of the major fields of contemporary general relativity and is developing continually. Yet three years ago, no textbook was available on this subject. The first textbook devoted to numerical relativity, by Alcubierre, appeared in 2008 [1] (cf the CQG review [2]). Now comes the second book, by Baumgarte and Shapiro, two well known players in the field. Inevitably, the two books have some common aspects (otherwise they would not deal with the same topic!). For instance the titles of the first four chapters of Baumgarte and Shapiro are very similar to those of Alcubierre. This arises from some logic inherent to the subject: chapter 1 recaps basic GR, chapter 2 introduces the 3+1 formalism, chapter 3 focuses on the initial data and chapter 4 on the choice of coordinates for the evolution. But there are also many differences between the two books, which actually make them complementary. At first glance the differences are the size (720 pages for Baumgarte and Shapiro vs 464 pages for Alcubierre) and the colour figures in Baumgarte and Shapiro. Regarding the content, Baumgarte and Shapiro address many topics which are not present in Alcubierre's book, such as magnetohydrodynamics, radiative transfer, collisionless matter, spectral methods, rotating stars and post-Newtonian approximation. The main difference regards binary systems: virtually absent from Alcubierre's book (except for binary black hole initial data), they occupy not less than five chapters in Baumgarte and Shapiro's book. In contrast, gravitational wave extraction, various hyperbolic formulations of Einstein's equations and the high-resolution shock-capturing schemes are treated in more depth by Alcubierre. 
In the first four chapters mentioned above, some distinctive features of Baumgarte and Shapiro's book are the beautiful treatment of Oppenheimer-Snyder collapse in chapter 1, the analogy with Maxwell's equations when discussing the constraints and the evolution equations in chapter 2 and the nice illustration of the 3+1 formalism by different slicings of Schwarzschild spacetime. Chapter 3, devoted to initial data, presents the York-Lichnerowicz conformal method with many details and examples, along with its descendants (extended conformal thin-sandwich). A very instructive illustration is provided by a boosted black hole. This chapter also introduces the recent waveless approximation and presents a rather detailed discussion of mass, momentum and angular momentum in the initial data. Chapter 4 contains a very pedagogical discussion of the choice of coordinates, via the lapse and shift functions, again with many examples. In particular, it provides the derivation of all maximal slicings of Schwarzschild spacetime, which is hardly found in any textbook. Chapter 5, devoted to matter sources, goes well beyond the ideal fluid generally discussed in the context of 3+1 numerical relativity: it also covers dissipative fluids, radiation hydrodynamics, collisionless matter and scalar fields. Chapter 6 provides a self-consistent introduction to the two main numerical methods used in numerical relativity: finite differences and spectral methods. It is followed by a very nice chapter about the various horizons involved in black hole spacetimes: event and apparent horizons, as well as dynamical and isolated horizons. One may, however, regret that there is no mention of Hayward's trapping horizons, which embody both dynamical and isolated horizons. Chapter 8 discusses in depth spherical spacetimes, including dynamical slicings of Schwarzschild, gravitational collapse of collisionless matter (26 pages!), collapse of fluid stars and scalar fields and critical phenomena. 
Gravitational waves, the main outcome of numerical relativity, are introduced in a very pedagogical way in chapter 9, with the basic theory and a review of the astrophysical sources and detectors. Chapter 10, entirely devoted to the axisymmetric collapse of collisionless clusters, reflects clearly the research work of one of the authors, but it is also an opportunity to discuss the Cosmic Censorship conjecture and the Hoop conjecture. Chapter 11 presents the basics of hyperbolic systems and focuses on the famous BSSN formalism employed in most numerical codes. The electromagnetism analogy introduced in chapter 2 is developed, providing some very useful insight. The remainder of the book is devoted to the collapse of rotating stars (chapter 14) and to the coalescence of binary systems of compact objects, either neutron stars or black holes (chapters 12, 13, 15, 16 and 17). This is a unique introduction and review of results about the expected main sources of gravitational radiation. It includes a detailed presentation of the major triumph of numerical relativity: the successful computation of binary black hole merger. I think that Baumgarte and Shapiro have accomplished a genuine tour de force by writing such a comprehensive and self-contained textbook on a highly evolving subject. The primary value of the book is to be extremely pedagogical. The style is definitively at the textbook level and not that of a review article. One may point out the use of boxes to recap important results and the very instructive aspect of many figures, some of them in colour. There are also numerous exercises in the main text, to encourage the reader to find some useful results by himself. The pedagogical trend is manifest up to the book cover, with the subtitle explaining what the title means! Another great value of the book is indisputably its encyclopedic aspect, making it a very good starting point for research on many topics of modern relativity.
I have no doubt that Baumgarte and Shapiro's monograph will quicken considerably the learning phase of any master or PhD student beginning numerical relativity. It will also prove to be very valuable for all researchers of the field and should become a major reference. Beyond numerical relativity, the richness and variety of examples are such that the reading of the book will be highly profitable to any person interested in black hole physics or relativistic astrophysics. This is not the least among all the merits of this superb book. References [1] Alcubierre M 2008 Introduction to 3+1 Numerical Relativity (Oxford: Oxford University Press) [2] Gundlach C 2008 Review of Introduction to 3+1 Numerical Relativity Class. Quantum Grav. 278 1270
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mort, Dan
Estimated energy savings are calculated as the difference between the energy use during the baseline period and the energy use during the post-installation period of the EEM. This chapter describes the physical properties measured in the process of evaluating EEMs and the specific metering methods for several types of measurements. Skill-level requirements and other operating considerations are discussed, including where, when, and how often measurements should be made. The subsequent section identifies metering equipment types and their respective measurement accuracies. This is followed by sections containing suggestions regarding proper data-handling procedures and the categorization and definition of several load types.
Chapter 9: Planting hardwood tree seedlings on reclaimed mine land in the Appalachian region
V. Davis; J. Franklin; C. Zipper; P. Angel
2017-01-01
The Forestry Reclamation Approach (FRA) is a method of reclaiming surface coal mines to forested postmining land use (Chapter 2, this volume). "Use proper tree planting techniques" is Step 5 of the FRA; when used with the other FRA steps, proper tree planting can help to ensure successful reforestation. Proper care and planting of tree seedlings is essential...
Physical Test Validation for Job Selection. Chapter 5
2000-09-21
Borg's perceived exertion and pain scaling method. Champaign: Human Kinetics. 17. Brooks, G., & Fahey, T. (1984). Exercise physiology: Human bioenergetics...(Eds.), Measurement concepts in physical education and exercise science. Champaign: Human Kinetics. 44. Jackson, A. S., Blair, S. N., Mahar, M. T...Chapter 5: Physical Test Evaluation for Job Selection 94. Wilmore, J. H., & Costill, D. L. (1994). Physiology of sport and exercise. Champaign, IL: Human
Chapter 5: Application of state-and-transition models to evaluate wildlife habitat
Anita T. Morzillo; Pamela Comeleo; Blair Csuti; Stephanie Lee
2014-01-01
Wildlife habitat analysis is often a central focus of natural resources management and policy. State-and-transition models (STMs) allow for simulation of landscape-level ecological processes, and let managers test "what if" scenarios of how those processes may affect wildlife habitat. This chapter describes the methods used to link STM output to wildlife habitat to...
ERIC Educational Resources Information Center
Hannan, Michael T.; Tuma, Nancy Brandon
This document is part of a series of chapters described in SO 011 759. Working from the premise that temporal analysis is indispensable for the study of change, the document examines major alternatives in research design of this nature. Five sections focus on the features, advantages, and limitations of temporal analysis. Four designs which…
NASA Technical Reports Server (NTRS)
1972-01-01
A survey of nondestructive evaluation (NDE) technology, which is discussed in terms of popular demands for a greater degree of quality, reliability, and safety in industrial products, is presented as an overview of the NDE field to serve the needs of middle management. Three NDE methods are presented: acoustic emission, the use of coherent (laser) light, and ultrasonic holography.
Estimating the number of animals in wildlife populations
Lancia, R.A.; Kendall, W.L.; Pollock, K.H.; Nichols, J.D.; Braun, Clait E.
2005-01-01
INTRODUCTION In 1938, Howard M. Wight devoted 9 pages, which was an entire chapter in the first wildlife management techniques manual, to what he termed 'census' methods. As books and chapters such as this attest, the volume of literature on this subject has grown tremendously. Abundance estimation remains an active area of biometrical research, as reflected in the many differences between this chapter and the similar contribution in the previous manual. Our intent in this chapter is to present an overview of the basic and most widely used population estimation techniques and to provide an entree to the relevant literature. Several possible approaches could be taken in writing a chapter dealing with population estimation. For example, we could provide a detailed treatment focusing on statistical models and on derivation of estimators based on these models. Although a chapter using this approach might provide a valuable reference for quantitative biologists and biometricians, it would be of limited use to many field biologists and wildlife managers. Another approach would be to focus on details of actually applying different population estimation techniques. This approach would include both field application (e.g., how to set out a trapping grid or conduct an aerial survey) and detailed instructions on how to use the resulting data with appropriate estimation equations. We are reluctant to attempt such an approach, however, because of the tremendous diversity of real-world field situations defined by factors such as the animal being studied, habitat, available resources, and because of our resultant inability to provide detailed instructions for all possible cases. We believe it is more useful to provide the reader with the conceptual basis underlying estimation methods. Thus, we have tried to provide intuitive explanations for how basic methods work. 
In doing so, we present relevant estimation equations for many methods and provide citations of more detailed treatments covering both statistical considerations and field applications. We have chosen to present methods that are representative of classes of estimators, rather than address every available method. Our hope is that this chapter will provide the reader with enough background to make an informed decision about what general method(s) will likely perform well in any particular field situation. Readers with a more quantitative background may then be able to consult detailed references and tailor the selected method to suit their particular needs. Less quantitative readers should consult a biometrician, preferably one with experience in wildlife studies, for this 'tailoring,' with the hope they will be able to do so with a basic understanding of the general method, thereby permitting useful interaction and discussion with the biometrician. SUMMARY Estimating the abundance or density of animals in wild populations is not a trivial matter. Virtually all techniques involve the basic problem of estimating the probability of seeing, capturing, or otherwise detecting animals during some type of survey and, in many cases, sampling concerns as well. In the case of indices, the detection probability is assumed to be constant (but unknown). We caution against use of indices unless this assumption can be verified for the comparison(s) of interest. In the case of population estimation, many methods have been developed over the years to estimate the probability of detection associated with various kinds of count statistics. Techniques range from complete counts, where sampling concerns often dominate, to incomplete counts where detection probabilities are also important. Some examples of the latter are multiple observers, removal methods, and capture-recapture. 
Before embarking on a survey to estimate the size of a population, one must understand clearly what information is needed and for what purpose the information will be used. The key to derivin
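As a minimal illustration of the capture-recapture approach named in the summary above, the classic Lincoln-Petersen estimator (and Chapman's bias-corrected variant) can be sketched in a few lines; the sample counts below are invented for illustration, not taken from the chapter:

```python
def lincoln_petersen(n1, n2, m2):
    """Lincoln-Petersen abundance estimate: n1 animals marked in a first
    sample, n2 caught in a second sample, of which m2 were already
    marked.  N_hat = n1 * n2 / m2."""
    if m2 == 0:
        raise ValueError("no recaptures: estimate undefined")
    return n1 * n2 / m2

def chapman(n1, n2, m2):
    """Chapman's bias-corrected variant, defined even when m2 == 0."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Example: 100 marked, 80 caught later, 20 of them recaptured
print(lincoln_petersen(100, 80, 20))  # 400.0
print(chapman(100, 80, 20))           # ~388.57
```

Both estimators rest on the assumption, emphasized in the summary, that the detection (recapture) probability is estimated rather than assumed constant.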
AGR-1 Thermocouple Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Einerson
2012-05-01
This report documents an effort to analyze measured and simulated data obtained in the Advanced Gas Reactor (AGR) fuel irradiation test program conducted in the INL's Advanced Test Reactor (ATR) to support the Next Generation Nuclear Plant (NGNP) R&D program. The work follows up on a previous study (Pham and Einerson, 2010), in which statistical analysis methods were applied for AGR-1 thermocouple data qualification. The present work exercises the idea that, while recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, results of the numerical simulations can be used in combination with the statistical analysis methods to further improve qualification of measured data. Additionally, the combined analysis of measured and simulation data can generate insights about simulation model uncertainty that can be useful for model improvement. This report also describes an experimental control procedure to maintain fuel target temperature in the future AGR tests using regression relationships that include simulation results. The report is organized into four chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program, AGR-1 test configuration and test procedure, overview of AGR-1 measured data, and overview of physics and thermal simulation, including modeling assumptions and uncertainties. A brief summary of statistical analysis methods developed in (Pham and Einerson 2010) for AGR-1 measured data qualification within NGNP Data Management and Analysis System (NDMAS) is also included for completeness. Chapters 2-3 describe and discuss cases in which the combined use of experimental and simulation data is realized. A set of issues associated with measurement and modeling uncertainties that resulted from the combined analysis is identified.
This includes a demonstration that such a combined analysis led to important insights for reducing uncertainty in the presentation of AGR-1 measured data (Chapter 2) and the interpretation of simulation results (Chapter 3). The statistics-based, simulation-aided experimental control procedure for the future AGR tests is developed and demonstrated in Chapter 4. The procedure for controlling the target fuel temperature (capsule peak or average) is based on regression functions of thermocouple readings and other relevant parameters, and accounts for possible changes in both physical and thermal conditions and in instrument performance.
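The regression-based control idea described in this abstract can be sketched generically: fit the unmeasurable target temperature (from simulation) against measurable thermocouple readings, then use the fit to predict the target during operation. All readings, coefficients, and temperatures below are invented for illustration and are not AGR-1 data:

```python
import numpy as np

# Hypothetical thermocouple readings (two channels, deg C) paired with
# simulated capsule peak fuel temperatures for the same conditions.
tc = np.array([[610., 595.], [620., 604.], [632., 615.], [641., 622.]])
fuel_peak = np.array([1050., 1068., 1090., 1107.])  # simulated, deg C

# Least-squares regression: fuel_peak ~ b0 + b1*TC1 + b2*TC2
X = np.column_stack([np.ones(len(tc)), tc])
coef, *_ = np.linalg.lstsq(X, fuel_peak, rcond=None)

def predict_peak(readings):
    """Estimate the (unmeasured) peak fuel temperature from live
    thermocouple readings using the fitted regression."""
    return float(np.concatenate([[1.0], readings]) @ coef)

estimate = predict_peak(np.array([625., 608.]))
```

In operation, the estimate would be compared against the target temperature and the test conditions adjusted accordingly, which is the role Chapter 4's procedure plays.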
Dynamics of Marine Microbial Metabolism and Physiology at Station ALOHA
NASA Astrophysics Data System (ADS)
Casey, John R.
Marine microbial communities influence global biogeochemical cycles by coupling the transduction of free energy to the transformation of Earth's essential bio-elements: H, C, N, O, P, and S. The web of interactions between these processes is extraordinarily complex, though fundamental physical and thermodynamic principles should describe its dynamics. In this collection of 5 studies, aspects of the complexity of marine microbial metabolism and physiology were investigated as they interact with biogeochemical cycles and direct the flow of energy within the Station ALOHA surface layer microbial community. In Chapter 1, and at the broadest level of complexity discussed, a method to relate cell size to metabolic activity was developed to evaluate allometric power laws at fine scales within picoplankton populations. Although size was predictive of metabolic rates, within-population power laws deviated from the broader size spectrum, suggesting metabolic diversity as a key determinant of microbial activity. In Chapter 2, a set of guidelines was proposed by which organic substrates are selected and utilized by the heterotrophic community based on their nitrogen content, carbon content, and energy content. A hierarchical experimental design suggested that the heterotrophic microbial community prefers high nitrogen content but low energy density substrates, while carbon content was not important. In Chapter 3, a closer look at the light-dependent dynamics of growth on a single organic substrate, glycolate, suggested that growth yields were improved by photoheterotrophy. The remaining chapters were based on the development of a genome-scale metabolic network reconstruction of the cyanobacterium Prochlorococcus to probe its metabolic capabilities and quantify metabolic fluxes. Findings described in Chapter 4 pointed to evolution of the Prochlorococcus metabolic network to optimize growth at low phosphate concentrations. 
Finally, in Chapter 5 and at the finest scale of complexity, a method was developed to predict hourly changes in both physiology and metabolic fluxes in Prochlorococcus by incorporating gene expression time-series data within the metabolic network model. Growth rates predicted by this method more closely matched experimental data, and diel changes in elemental composition and the energy content of biomass were predicted. Collectively, these studies identify and quantify the potential impact of variations in metabolic and physiological traits on the melee of microbial community interactions.
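The allometric power-law evaluation mentioned in the Chapter 1 summary is, at its core, a linear regression in log-log space: a power law R = a * V^b becomes log R = log a + b log V. A sketch with synthetic numbers (not the dissertation's data):

```python
import numpy as np

# Hypothetical cell volumes (um^3) and metabolic rates following an
# exact 3/4-power allometric law, R = 2.0 * V**0.75, for illustration.
volumes = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
rates = 2.0 * volumes ** 0.75

# Fit the exponent b and prefactor a by least squares in log-log space.
b, log_a = np.polyfit(np.log(volumes), np.log(rates), 1)
print(round(b, 3), round(float(np.exp(log_a)), 3))  # 0.75 2.0
```

Deviations of the fitted within-population exponent from the broad-spectrum value are the kind of signal the chapter interprets as metabolic diversity.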
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kambeitz, Manuel
This thesis presents an analysis of excited states of B0, B+ and B0s mesons, decaying to B mesons while emitting a pion or kaon. They are reconstructed from their decay products, and a selection is performed to discard wrongly reconstructed B(s) mesons with the multivariate analysis software NeuroBayes, as described in chapter 5. In the training process, the sPlot method and measured and simulated data are used. Chapter 6 describes how the properties of excited B(s) mesons are determined by an unbinned maximum likelihood fit to their mass spectra. The systematic uncertainties determined in this analysis are described in chapter 7. The results of this thesis are presented in chapter 8 and a conclusion is given in chapter 9. The results shown in this thesis have been published before in [1].
Chapter A7. Biological Indicators
Myers, Donna N.; Wilde, Franceska D.
2003-01-01
The National Field Manual for the Collection of Water-Quality Data (National Field Manual) provides guidelines and standard procedures for U.S. Geological Survey (USGS) personnel who collect data used to assess the quality of the Nation's surface-water and ground-water resources. This chapter of the manual includes procedures for the (1) determination of biochemical oxygen demand using a 5-day bioassay test; (2) collection, identification, and enumeration of fecal indicator bacteria; (3) collection of samples and information on two laboratory methods for fecal indicator viruses (coliphages); and (4) collection of samples for protozoan pathogens. Each chapter of the National Field Manual is published separately and revised periodically. Newly published and revised chapters are posted on the World Wide Web on the USGS page 'National Field Manual for the Collection of Water-Quality Data.' The URL for this page is http://pubs.water.usgs.gov/twri9A/ (accessed November 25, 2003).
Grammar, Punctuation, and Capitalization: a Handbook for Technical Writers and Editors
NASA Technical Reports Server (NTRS)
Mccaskill, Mary K.
1990-01-01
Writing problems often encountered in technical documents are addressed, and preferences (Langley's) are indicated when authorities do not agree. The handbook is directed toward professional writers, editors, and proofreaders. Those whose profession lies in other areas (for example, research or management), but who have occasion to write or review others' writing, will also find this information useful. A functional attitude toward grammar and punctuation is presented. Chapter 1 on grammar presents grammatical problems related to each part of speech. Chapter 2 on sentence structure concerns syntax, that is, the effective arrangement of words, with emphasis on methods of revision to improve writing effectiveness. Chapter 3 addresses punctuation marks, presenting their function, situations in which they are required or incorrect, and situations in which they are appropriate but optional. Chapter 4 presents capitalization, which is mostly a matter of editorial style and preference rather than of generally accepted rules. An index and glossary are included.
Wetland Hydrology | Science Inventory | US EPA
This chapter discusses the state of the science in wetland hydrology by touching upon the major hydraulic and hydrologic processes in these complex ecosystems, their measurement/estimation techniques, and modeling methods. It starts with the definition of wetlands, their benefits and types, and explains the role and importance of hydrology in wetland functioning. The chapter continues with the description of wetland hydrologic terms and related estimation and modeling techniques. The chapter provides quick but valuable information regarding hydraulics of surface and subsurface flow, groundwater seepage/discharge, and modeling groundwater/surface water interactions in wetlands. Because of the aggregated effects of wetlands at larger scales and their ecosystem services, wetland hydrology at the watershed scale is also discussed, in which we elaborate on the proficiencies of some of the well-known watershed models in modeling wetland hydrology. This chapter can serve as a useful reference for eco-hydrologists, wetland researchers, and decision makers as well as watershed hydrology modelers. In this chapter, the importance of hydrology for wetlands and their functional role are discussed. Wetland hydrologic terms and the major components of the water budget in wetlands, and how they can be estimated/modeled, are also presented. Although this chapter does not provide a comprehensive coverage of wetland hydrology, it provides a quick understanding of the basic co
Wynn, Jeff; Orris, Greta J.; Dunlap, Pamela; Cocker, Mark D.; Bliss, James D.
2016-03-23
Chapter 1 of this report provides an overview of the history of the CASB and summarizes evaporite potash deposition, halokinesis, and dissolution processes that have affected the current distribution of potash-bearing salt in the CASB. Chapter 2 describes the Gissar tract, an uplifted region that contains a mix of stratabound and halokinetic potash deposits and all of the discovered and exploited potash deposits of the CASB. Chapter 3 describes the Amu Darya tract, where evaporite deposits remain flat-lying and undeformed since their original deposition. Chapter 4 describes the highly deformed and compressed Afghan-Tajik tract and what is known of the deeply-buried Jurassic salt. Chapter 5 describes the spatial databases included with this report, which contain a collection of CASB potash information. Appendixes A and B summarize descriptive models for stratabound and halokinetic potash-bearing salt deposits, respectively. Appendix C summarizes the AGE method used to evaluate the Gissar and Amu Darya tracts. Appendixes D and E contain grade and thickness data for the Gissar and Amu Darya tracts. Appendix F provides the SYSTAT script used to estimate undiscovered K2O in a CASB tract. Appendix G provides a potash glossary, and appendix H provides biographies of assessment participants.
NASA Astrophysics Data System (ADS)
Kumagai, Takashi
2015-08-01
Hydrogen(H)-bond dynamics are involved in many elementary processes in chemistry and biology. Because of their fundamental importance, a variety of experimental and theoretical approaches have been employed to study these dynamics in the gas, liquid, and solid phases and at their interfaces. This review describes recent progress in the direct observation and control of H-bond dynamics in several model systems on a metal surface using low-temperature scanning tunneling microscopy (STM). General aspects of H-bond dynamics and the experimental methods are briefly described in chapters 1 and 2. In the subsequent four chapters, I present direct observation of an H-bond exchange reaction within a single water dimer (chapter 3), a symmetric H bond (chapter 4) and H-atom relay reactions (chapter 5) within water-hydroxyl complexes, and an intramolecular H-atom transfer reaction (tautomerization) within a single porphycene molecule (chapter 6). These results provide novel microscopic insights into H-bond dynamics at the single-molecule level and highlight the significant impact on these processes of quantum effects, namely tunneling and zero-point vibration, which result from the small mass of the H atom. Additionally, the effect of the local environment on H-bond dynamics is examined using atom/molecule manipulation with the STM.
Optical and Photothermal Behaviors of Colloidal and Self-Assembled Magnetic-Plasmonic Nanostructures
NASA Astrophysics Data System (ADS)
Liu, Kai
This dissertation is based on numerous efforts in exploring the capabilities of numerical simulation for investigating novel optical phenomena in different colloidal plasmonic systems. The dissertation includes five chapters. Chapter 1 contains a general introduction to the fundamentals of plasmonic behaviors in colloidal clusters and to bottom-up self-assembly methods for manufacturing colloidal clusters, including magnetic-based and DNA-assisted pathways. Chapter 2 presents a systematic comparison of the optical and thermodynamic properties of near-infrared colloidal nanoparticles, including SiO2-Au core-shell particles, Au nanocages, and Au nanorods, and an example of a nanobubble-based photothermal therapy application. In Chapter 3, an optical phenomenon named Fano resonance is demonstrated in a colloidal heptamer design consisting of seven Fe3O4-Au core-shell nanoparticles. The incorporation of the magnetic core enables a magnetic-assisted self-assembly process, which is discussed after the photonic analysis. In Chapter 4, the optical behaviors of a 1D magnetic-plasmonic chain are explored, and a demonstration of the magnetic-based self-assembly of this 1D chain is given. Chapter 5 is focused on the study of chiral optical responses in a nanoscale system that follows a 3D helical arrangement of Fe3O4-Au core-shell nanoparticles.
DTK C/Fortran Interface Development for NEAMS FSI Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slattery, Stuart R.; Lebrun-Grandie, Damien T.
This report documents the development of DataTransferKit (DTK) C and Fortran interfaces for fluid-structure-interaction (FSI) simulations in NEAMS. In these simulations, the codes Nek5000 and Diablo are being coupled within the SHARP framework to study flow-induced vibration (FIV) in reactor steam generators. We will review the current Nek5000/Diablo coupling algorithm in SHARP and the current state of the solution transfer scheme used in this implementation. We will then present existing DTK algorithms which may be used instead to provide an improvement in both flexibility and scalability of the current SHARP implementation. We will show how these can be used within the current FSI scheme using a new set of interfaces to the algorithms developed by this work. These new interfaces currently expose the mesh-free solution transfer algorithms in DTK, a C++ library, and are written in C and Fortran to enable coupling of both Nek5000 and Diablo in their native Fortran language. They have been compiled and tested on Cooley, the test-bed machine for Mira at ALCF.
SeaWiFS Postlaunch Calibration and Validation Analyses
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); McClain, Charles R.; Ainsworth, Ewa J.; Barnes, Robert A.; Eplee, Robert E., Jr.; Patt, Frederick S.; Robinson, Wayne D.; Wang, Menghua; Bailey, Sean W.
2000-01-01
The effort to resolve data quality issues and improve on the initial data evaluation methodologies of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project was an extensive one. These evaluations have resulted, to date, in three major reprocessings of the entire data set, where each reprocessing addressed the data quality issues that could be identified up to the time of each reprocessing. The number of chapters (21) needed to document this extensive work in the SeaWiFS Postlaunch Technical Report Series requires three volumes. The chapters in Volumes 9, 10, and 11 are in a logical order, sequencing through sensor calibration, atmospheric correction, masks and flags, product evaluations, and bio-optical algorithms. The first chapter of Volume 9 is an overview of the calibration and validation program, including a table of activities from the inception of the SeaWiFS Project. Chapter 2 describes the fine adjustments of sensor detector knee radiances, i.e., radiance levels where three of the four detectors in each SeaWiFS band saturate. Chapters 3 and 4 describe the analyses of the lunar and solar calibration time series, respectively, which are used to track the temporal changes in radiometric sensitivity in each band. Chapter 5 outlines the procedure used to adjust band 7 relative to band 8 to derive reasonable aerosol radiances in band 7 as compared to those in band 8 in the vicinity of Lanai, Hawaii, the vicarious calibration site. Chapter 6 presents the procedure used to estimate the vicarious calibration gain adjustment factors for bands 1-6 using the water-leaving radiances from the Marine Optical Buoy (MOBY) offshore of Lanai. Chapter 7 provides the adjustments to the coccolithophore flag algorithm which were required for improved performance over the prelaunch version. Chapter 8 is an overview of the numerous modifications to the atmospheric correction algorithm that have been implemented.
Chapter 9 describes the methodology used to remove artifacts of sun glint contamination for portions of the imagery outside the sun glint mask. Finally, Chapter 10 explains a modification to the ozone interpolation method to account for actual time differences between the SeaWiFS and Total Ozone Mapping Spectrometer (TOMS) orbits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kondo, Shinichiro
The format of this dissertation is as follows. In the remainder of Chapter 1, brief introductions and reviews are given of the topics of frustration, heavy fermions, and spinels, including the precedent work on LiV2O4. In Chapter 2, as a general overview of this work, the important publication in Physical Review Letters by the author of this dissertation and collaborators regarding the discovery of heavy fermion behavior in LiV2O4 is introduced. The preparation methods employed by the author for nine LiV2O4 and two Li1+xTi2-xO4 (x = 0 and 1/3) polycrystalline samples are introduced in Chapter 3. The subsequent structural characterization of the LiV2O4 and Li1+xTi2-xO4 samples was done by the author using thermogravimetric analysis (TGA), x-ray diffraction measurements, and their structural refinements by Rietveld analysis. The results of the characterization are detailed in Chapter 3. In Chapter 4, magnetization measurements carried out by the author are detailed. In Chapter 5, the resistivity measurement results, including the single-crystal work by Rogers et al., are briefly discussed. For a clear characterization of LiV2O4, it is of great importance to introduce in the following chapters the experiments and subsequent data analyses done by the author's collaborators. Heat capacity measurements (Chapter 6) were carried out and analyzed by Dr. C.A. Swenson, and modeled theoretically by Dr. D.C. Johnston. In Chapter 7, a thermal expansion study using neutron diffraction by Dr. O. Chmaissem et al. and capacitance dilatometry measurements by Dr. C.A. Swenson are introduced. The data analyses for the thermal expansion study were mainly done by Dr. O. Chmaissem (for neutron diffraction) and Dr. C.A. Swenson (for dilatometry), with assistance from Dr. J.D. Jorgensen, Dr. D.C. Johnston, and S. Kondo, the author of this dissertation.
Chapter 8 describes nuclear magnetic resonance (NMR) measurements and analyses by Dr. A.V. Mahajan, R. Sala, E. Lee and Dr. F. Borsa. In the final chapter, a summary and discussion are given.
V. S. Lebedev and I. L. Beigman, Physics of Highly Excited Atoms and Ions
NASA Astrophysics Data System (ADS)
Mewe, R.
1999-07-01
This book contains a comprehensive description of the basic principles of the theoretical spectroscopy and experimental spectroscopic diagnostics of Rydberg atoms and ions, i.e., atoms in highly excited states with a very large principal quantum number (n ≫ 1). Rydberg atoms are characterized by a number of peculiar physical properties as compared to atoms in the ground or a low excited state. They have a very small ionization potential (∝ 1/n^2), the highly excited electron has a small orbital velocity (∝ 1/n), the radius (∝ n^2) is very large, the excited electron has a long orbital period (∝ n^3), and the radiative lifetime is very long (∝ n^3-5). At the same time the Rydberg atom is very sensitive to perturbations from external fields and to collisions with charged and neutral targets. In recent years, Rydberg atoms have been observed in laboratory and cosmic conditions for n up to ~1000, which means that the size amounts to about 0.1 mm, ~10^6 times that of an atom in the ground state. The scope of this monograph is to familiarize the reader with today's approaches and methods for describing isolated Rydberg atoms and ions, radiative transitions between highly excited states, and photoionization and photorecombination processes. The authors present a number of efficient methods for describing the structure and properties of Rydberg atoms and for calculating processes of collisions with neutral and charged particles as well as spectral-line broadening and shifts of Rydberg atomic series in gases and in cool and hot plasmas in laboratories and in astrophysical sources. Particular attention is paid to a comparison of theoretical results with available experimental data. The book contains 9 chapters. Chapter 1 gives an introduction to the basic properties of Rydberg atoms (ions), and Chapter 2 is devoted to an account of general methods describing an isolated Rydberg atom.
Chapter 3 is focussed on recent achievements in calculations of form factors and dipole matrix elements of different types of bound-bound and bound-free radiative transitions. Chapter 4 concentrates on the formulation of basic theoretical methods and physical approaches to collisions involving Rydberg atoms. Chapters 5 to 8 contain a systematic description of the major directions and modern techniques of the collision theory of Rydberg atoms and ions with atoms, molecules, electrons, and ions. Finally, Chapter 9 deals with the spectral-line broadening and shift of Rydberg atomic series induced by collisions with neutral and charged particles. A four-page subject index and 250 references are given. This monograph will be a basic tool and reference for all scientists working in the fields of plasma physics, spectroscopy, and the physics of electronic and atomic collisions, as well as astrophysics, radio astronomy, and space physics.
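The hydrogenic scaling laws quoted in this review (ionization potential ∝ 1/n^2, radius ∝ n^2) are easy to evaluate numerically; the following sketch reproduces the ~0.1 mm size quoted for n ≈ 1000, using standard hydrogen constants:

```python
RYDBERG_EV = 13.605693       # hydrogen ionization energy, eV
BOHR_RADIUS_M = 5.29177e-11  # Bohr radius, m

def ionization_energy_ev(n):
    """Binding energy of a hydrogenic Rydberg level: E_n = Ry / n^2."""
    return RYDBERG_EV / n ** 2

def orbit_radius_m(n):
    """Characteristic orbital radius: r_n = a0 * n^2."""
    return BOHR_RADIUS_M * n ** 2

# For n = 1000 the radius is ~0.05 mm, i.e. a diameter of ~0.1 mm,
# about 10^6 times the size of a ground-state atom, as the review notes.
print(orbit_radius_m(1000) * 1e3)   # radius in mm, ~0.053
print(ionization_energy_ev(1000))   # binding energy in eV, ~1.4e-5
```

The tiny binding energy at n = 1000 makes clear why such atoms are so sensitive to external fields and collisions.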
Turner, Richard; Joseph, Adrian; Titchener-Hooker, Nigel; Bender, Jean
2017-08-04
Cell harvesting is the separation or retention of cells and cellular debris from the supernatant containing the target molecule. Selection of the harvest method strongly depends on the type of cells, mode of bioreactor operation, process scale, and characteristics of the product and cell culture fluid. Most traditional harvesting methods use some form of filtration, centrifugation, or a combination of both for cell separation and/or retention. Filtration methods include normal flow depth filtration and tangential flow microfiltration. The ability to scale down the selected harvest method predictably helps to ensure successful production and is critical for conducting small-scale characterization studies for confirming parameter targets and ranges. In this chapter we describe centrifugation and depth filtration harvesting methods, share strategies for harvest optimization, present recent developments in centrifugation scale-down models, and review alternative harvesting technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Michael; Haeri, Hossein; Reynolds, Arlis
This chapter provides a set of model protocols for determining energy and demand savings that result from specific energy efficiency measures implemented through state and utility efficiency programs. The methods described here are among the most commonly used and accepted in the energy efficiency industry for certain measures or programs. As such, they draw from the existing body of research and best practices for energy efficiency program evaluation, measurement, and verification (EM&V). These protocols were developed as part of the Uniform Methods Project (UMP), funded by the U.S. Department of Energy (DOE). The principal objective for the project was to establish easy-to-follow protocols based on commonly accepted methods for a core set of widely deployed energy efficiency measures.
NASA Astrophysics Data System (ADS)
Sherman, Christopher Scott
Naturally occurring geologic heterogeneity is an important, but often overlooked, aspect of seismic wave propagation. This dissertation presents a strategy for modeling the effects of heterogeneity using a combination of geostatistics and Finite Difference simulation. In the first chapter, I discuss my motivations for studying geologic heterogeneity and seismic wave propagation. Models based upon fractal statistics are powerful tools in geophysics for modeling heterogeneity. The important features of these fractal models are illustrated using borehole log data from an oil well and geomorphological observations from a site in Death Valley, California. A large part of the computational work presented in this dissertation was completed using the Finite Difference code E3D. I discuss the Python-based user interface for E3D and the computational strategies for working with heterogeneous models developed over the course of this research. The second chapter explores a phenomenon observed for wave propagation in heterogeneous media: the generation of unexpected shear wave phases in the near-source region. In spite of their popularity amongst seismic researchers, approximate methods for modeling wave propagation in these media, such as the Born and Rytov methods or Radiative Transfer Theory, are incapable of explaining these shear waves. This is primarily due to these methods' assumptions regarding the coupling of near-source terms with the heterogeneities and mode conversion. To determine the source of these shear waves, I generate a suite of 3D synthetic heterogeneous fractal geologic models and use E3D to simulate the wave propagation for a vertical point force on the surface of the models. I also present a methodology for calculating the effective source radiation patterns from the models.
The numerical results show that, due to a combination of mode conversion and coupling with near-source heterogeneity, shear wave energy on the order of 10% of the compressional wave energy may be generated within the shear radiation node of the source. Interestingly, in some cases this shear wave may arise as a coherent pulse, which may be used to improve seismic imaging efforts. In the third and fourth chapters, I discuss the results of a numerical analysis and field study of seismic near-surface tunnel detection methods. Detecting unknown tunnels and voids, such as old mine workings or solution cavities in karst terrain, is a challenging problem in geophysics and has implications for geotechnical design, public safety, and domestic security. Over the years, a number of different geophysical methods have been developed to locate these objects (microgravity, resistivity, seismic diffraction, etc.), each with varying results. One of the major challenges facing these methods is understanding the influence of geologic heterogeneity on their results, which makes this problem a natural extension of the modeling work discussed in previous chapters. In the third chapter, I present the results of a numerical study of surface-wave based tunnel detection methods. The results of this analysis show that these methods are capable of detecting a void buried within one wavelength of the surface, with size potentially much less than one wavelength. In addition, seismic surface-wave based detection methods are effective in media with moderate heterogeneity (epsilon < 5%), and in fact, this heterogeneity may serve to increase the resolution of these methods. In the fourth chapter, I discuss the results of a field study of tunnel detection methods at a site within the Black Diamond Mines Regional Preserve, near Antioch, California.
I use a combination of surface wave backscattering, 1D surface wave attenuation, and 2D attenuation tomography to locate and determine the condition of two tunnels at this site. These results complement the numerical study in Chapter 3 and highlight the usefulness of these methods for detecting tunnels at other sites.
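Fractal heterogeneity models of the kind described above are commonly generated as random fields with a power-law (e.g. von Kármán type) spectrum. The following is a minimal, hedged sketch of that idea, not the author's E3D workflow: the function name, the fixed Hurst-like exponent, and the RMS scaling convention are illustrative assumptions.

```python
import numpy as np

def fractal_velocity_model(n, dx, correlation_length, epsilon, seed=0):
    """Generate a 2D random velocity-perturbation field with a von Karman-like
    power-law spectrum, a common parameterization of fractal geologic
    heterogeneity. epsilon is the fractional RMS perturbation."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    # von Karman-like amplitude spectrum; the exponent here fixes an
    # illustrative Hurst-type parameter and is an assumption
    amplitude = (1.0 + k2 * correlation_length**2) ** (-0.75)
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    field = np.real(np.fft.ifft2(amplitude * phase))
    # scale to the requested RMS perturbation
    field *= epsilon / field.std()
    return field
```

A field generated this way can be added multiplicatively to a smooth background velocity model before finite-difference simulation; the `epsilon` parameter corresponds to the heterogeneity strength quoted in the abstract (e.g. epsilon < 5%).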
NASA Astrophysics Data System (ADS)
Qin, Zhengtao
Molecular imaging is the visualization and measurement of in vivo biological processes at the molecular or cellular level using specific imaging probes. As an emerging technology, biocompatible macromolecular or nanoparticle based targeted imaging probes have gained increasing popularity. These probes consist of a carrier, an imaging reporter, and a targeting ligand. Active targeting dramatically increases specificity, and the multivalency effect may further reduce the dose while still providing a strong signal. In this thesis, sentinel lymph node (SLN) mapping and cancer imaging are the two research topics. The focus is to develop molecular imaging probes with high specificity and sensitivity for Positron Emission Tomography (PET) and optical imaging. The objective of this thesis is to explore dextran radiopharmaceuticals and porous silicon nanoparticle based molecular imaging agents. Dextran polymers are excellent carriers for delivering imaging reporters or therapeutic agents due to their well-established safety profile and oligosaccharide conjugation chemistry. There is also a wide selection of dextran polymers with different lengths. Silicon nanoparticles represent another class of biodegradable materials for imaging and drug delivery; successes in fluorescence lifetime imaging and in enhancing immune activation potency are briefly discussed. Chapter 1 begins with an overview of current molecular imaging techniques and imaging probes. Chapter 2 presents a near-IR dye conjugated probe, IRDye 800CW-tilmanocept. Fluorophore density was optimized to generate the maximum brightness. It was labeled with 68Ga and 99mTc, and in vivo SLN mapping was successfully performed in different animals, such as mice, rabbits, dogs, and pigs. With 99mTc-labeled IRDye 800CW-tilmanocept, Chapter 3 introduces a two-day imaging protocol with a hand-held imager.
Chapter 4 proposes a method to dual-radiolabel IRDye 800CW-tilmanocept with both 68Ga and 99mTc. Chapter 5 introduces a 68Ga metal-chelating bioorthogonal tetrazine dextran probe for multistep imaging of a colon cancer. Chapter 6 presents the synthesis and in vivo evaluation of a hepatocellular carcinoma targeting PET probe, 68Ga-Insulin-Dextran. Chapter 7 discusses a novel method to prepare silicon nanoparticles with high yield and size control. Chapter 8 concludes with a summary of all probes developed in this thesis and their clinical relevance.
Investigations of Sayre's Equation.
NASA Astrophysics Data System (ADS)
Shiono, Masaaki
Available from UMI in association with The British Library. Since the discovery of X-ray diffraction, various methods of using it to solve crystal structures have been developed. The major methods can be divided into two categories: (1) Patterson function based methods; (2) direct phase-determination methods. In the early days of structure determination from X-ray diffraction, Patterson methods played the leading role. Direct phase-determining methods ('direct methods' for short) were introduced by D. Harker and J. S. Kasper in the form of inequality relationships in 1948. A significant development of direct methods was produced by Sayre (1952). The equation he introduced, generally called Sayre's equation, gives exact relationships between structure factors for equal-atom structures. Later, Cochran (1955) derived the so-called triple phase relationship, the main means by which it has become possible to find structure factor phases automatically by computer. Although the background theory of direct methods is very mathematical, the user of direct-methods computer programs needs no detailed knowledge of these automatic processes in order to solve structures. Recently introduced direct methods are based on Sayre's equation, so it is important to investigate its properties thoroughly. One such new method involves the Sayre equation tangent formula (SETF), which attempts to minimise the least-squares residual of Sayre's equations (Debaerdemaeker, Tate and Woolfson, 1985). Chapters I-III describe the principles and developments of direct methods, and Chapters IV-VI discuss the properties of Sayre's equation and its modifications. Finally, Chapter VII investigates the possible use of an equation, similar in type to Sayre's equation, derived from the characteristics of the Patterson function.
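For reference, Sayre's equation for an equal-atom structure is usually written as follows (standard textbook notation, not taken from this thesis; conventions for the scaling factor vary between texts):

```latex
F_{\mathbf{h}} \;=\; \frac{\theta_{\mathbf{h}}}{V} \sum_{\mathbf{k}} F_{\mathbf{k}}\, F_{\mathbf{h}-\mathbf{k}},
\qquad\qquad
\varphi_{\mathbf{h}} \;\approx\; \varphi_{\mathbf{k}} + \varphi_{\mathbf{h}-\mathbf{k}}
```

where $\theta_{\mathbf{h}}$ relates the scattering factor of an atom to that of the "squared" atom and $V$ is the unit cell volume. The second relation is Cochran's triple phase relationship, which holds most reliably when the product of the corresponding normalized structure factor magnitudes is large.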
ERIC Educational Resources Information Center
Cameto, Renee; Bergland, Frances; Knokey, Anne-Marie; Nagle, Katherine M.; Sanford, Christopher; Kalb, Sara C.; Blackorby, Jose; Sinclair, Beth; Riley, Derek L.; Ortega, Moreica
2010-01-01
The report is organized to provide information on the school-level implementation of alternate assessments for students with significant cognitive disabilities. Following the Introduction in Chapter 1, Chapter 2 describes the study design and methods, including the development of the teacher survey and data collection procedures and analyses.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandola, Varun; Schryver, Jack C; Sukumar, Sreenivas R
We discuss the problem of fraud detection in healthcare in this chapter. Given the recent scrutiny of the inefficiencies in the US healthcare system, identifying fraud has been at the forefront of efforts toward reducing healthcare costs. In this chapter we focus on understanding the issue of healthcare fraud in detail and review methods that have been proposed in the literature to combat it using data-driven approaches.
Jeff Skousen; Carl Zipper; Jim Burger; Christopher Barton; Patrick. Angel
2017-01-01
The Forestry Reclamation Approach (FRA), a method for reclaiming coal-mined land to forest (Chapter 2, this volume), is based on research, knowledge, and experience of forest soil scientists and reclamation practitioners. Step 1 of the FRA is to create a suitable rooting medium for good tree growth that is no less than 4 feet deep and consists of topsoil, weathered...
Conservative Diffusions: a Constructive Approach to Nelson's Stochastic Mechanics.
NASA Astrophysics Data System (ADS)
Carlen, Eric Anders
In Nelson's stochastic mechanics, quantum phenomena are described in terms of diffusions instead of wave functions; this thesis is a study of that description. We emphasize that we are concerned here with the possibility of describing, as opposed to explaining, quantum phenomena in terms of diffusions. In this direction, the following questions arise: "Do the diffusions of stochastic mechanics--which are formally given by stochastic differential equations with extremely singular coefficients--really exist?" Given that they exist, one can ask, "Do these diffusions have physically reasonable sample path behavior, and can we use information about sample paths to study the behavior of physical systems?" These are the questions we treat in this thesis. In Chapter I we review stochastic mechanics and diffusion theory, using the Guerra-Morato variational principle to establish the connection with the Schroedinger equation. This chapter is largely expository; however, there are some novel features and proofs. In Chapter II we settle the first of the questions raised above. Using PDE methods, we construct the diffusions of stochastic mechanics. Our result is sufficiently general to be of independent mathematical interest. In Chapter III we treat potential scattering in stochastic mechanics and discuss direct probabilistic methods of studying quantum scattering problems. Our results provide a solid "Yes" in answer to the second question raised above.
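As a sketch of the objects under discussion (standard notation from the stochastic mechanics literature, not quoted from the thesis): writing the wave function as $\psi = e^{R+iS}$, the diffusions of stochastic mechanics are formally solutions of

```latex
dX_t \;=\; b(X_t, t)\,dt + dW_t,
\qquad
b \;=\; \frac{\hbar}{m}\,\nabla\!\left(R + S\right),
```

where $W_t$ is a Wiener process with diffusion coefficient $\hbar/2m$. Since $b$ becomes singular at the nodes of $\psi$, the existence question addressed in Chapter II is genuinely nontrivial.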
NASA Technical Reports Server (NTRS)
Wieland, P. O.
1998-01-01
The International Space Station (ISS) incorporates elements designed and developed by an international consortium led by the United States (U.S.), and by Russia. For this cooperative effort to succeed, it is crucial that the designs and methods of design of the other partners are understood sufficiently to ensure compatibility. Environmental Control and Life Support (ECLS) is one system in which functions are performed independently on the Russian Segment (RS) and on the U.S./international segments. This document describes, in two volumes, the design and operation of the ECLS Systems (ECLSS) on board the ISS. This current volume, Volume 1, is divided into three chapters. Chapter 1 is a general overview of the ISS, describing the configuration, general requirements, and distribution of systems as related to the ECLSS, and includes discussion of the design philosophies of the partners and methods of verification of equipment. Chapter 2 describes the U.S. ECLSS and technologies in greater detail. Chapter 3 describes the ECLSS in the European Attached Pressurized Module (APM), Japanese Experiment Module (JEM), and Italian Mini-Pressurized Logistics Module (MPLM). Volume II describes the Russian ECLSS and technologies in greater detail. These documents present thorough, yet concise, descriptions of the ISS ECLSS.
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report demonstrates function approximation concepts and their applicability in reliability analysis and design. In particular, approximations in the calculation of the safety index, failure probability, and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided; definitions relevant to the stated objectives have been taken from standard textbooks. The idea of function approximation is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. Two approximations arise in calculating the failure probability of a limit state function. The first, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this is a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or most probable failure point (MPP), is identified: for iteratively finding this point, the limit state is again approximated.
The accuracy and efficiency of the approximations make the search process quite practical for analysis-intensive approaches such as finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations, including higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods, and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard textbooks; probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for general application in engineering analysis. Various forms of function representation and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definitions of the safety index and the most probable point of failure are introduced. Efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the probability of failure prediction is presented using first-order, second-order, and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes to improve structural reliability. The report also contains several appendices on probability parameters.
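The iterative MPP search described above can be illustrated in a few lines. This is a hedged, minimal sketch of the classic Hasofer-Lind/Rackwitz-Fiessler iteration, not the report's own implementation: it assumes the random variables have already been transformed to independent standard normals, and the function names are illustrative.

```python
import numpy as np

def form_safety_index(g, grad_g, u0, tol=1e-8, max_iter=100):
    """FORM safety index via the Hasofer-Lind/Rackwitz-Fiessler iteration.
    g is the limit state function and grad_g its gradient, both expressed
    in independent standard normal variables u."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        grad = np.asarray(grad_g(u), dtype=float)
        # linearize g at u and project onto the approximated failure plane
        u_new = (grad @ u - g(u)) * grad / (grad @ grad)
        converged = np.linalg.norm(u_new - u) < tol
        u = u_new
        if converged:
            break
    beta = np.linalg.norm(u)  # distance from origin to the MPP
    return beta, u
```

For a linear limit state the iteration converges in one step, and the failure probability is then approximated as Phi(-beta), the standard normal tail evaluated at the safety index; SORM adds a curvature correction to this estimate.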
Cation Exchange Reactions for Improved Quality and Diversity of Semiconductor Nanocrystals
NASA Astrophysics Data System (ADS)
Beberwyck, Brandon James
Observing the size and shape dependent physical properties of semiconductor nanocrystals requires synthetic methods capable of not only composition and crystalline phase control but also molecular-scale uniformity for a particle consisting of tens to hundreds of thousands of atoms. The desire for synthetic methods that produce uniform nanocrystals of complex morphologies continues to increase as nanocrystals find roles in commercial applications, such as biolabeling and display technologies, that simultaneously restrict material compositions. With these constraints, new synthetic strategies that decouple the nanocrystal's chemical composition from its morphology are necessary. This dissertation explores the cation exchange reaction of colloidal semiconductor nanocrystals, a template-based chemical transformation that enables the interconversion of nanocrystals between a variety of compositions while maintaining their size dispersity and morphology. Chapter 1 provides an introduction to the versatility of this replacement reaction as a synthetic method for semiconductor nanocrystals. An overview of the fundamentals of the cation exchange reaction and the diversity of achievable products is presented. Chapter 2 examines the optical properties of nanocrystal heterostructures produced through cation exchange reactions. The deleterious impact of exchange on the photoluminescence is correlated with residual impurities, and a simple annealing protocol is demonstrated to achieve photoluminescence yields comparable to samples produced by conventional methods. Chapter 3 investigates the extension of the cation exchange reaction beyond ionic nanocrystals. Covalent III-V nanocrystals of high crystallinity and low size dispersity are synthesized by the cation exchange of cadmium pnictide nanocrystals with group 13 ions.
Lastly, Chapter 4 highlights future studies to probe cation exchange reactions in colloidal semiconductor nanocrystals and progress that needs to be made for its adoption as a routine synthetic approach.
Frequency Response Function Based Damage Identification for Aerospace Structures
NASA Astrophysics Data System (ADS)
Oliver, Joseph Acton
Structural health monitoring technologies continue to be pursued for aerospace structures in the interests of increased safety and, when combined with health prognosis, efficiency in life-cycle management. The current dissertation develops and validates damage identification technology as a critical component for structural health monitoring of aerospace structures and, in particular, composite unmanned aerial vehicles. The primary innovation is a statistical least-squares damage identification algorithm based in concepts of parameter estimation and model update. The algorithm uses frequency response function based residual force vectors derived from distributed vibration measurements to update a structural finite element model through statistically weighted least-squares minimization producing location and quantification of the damage, estimation uncertainty, and an updated model. Advantages compared to other approaches include robust applicability to systems which are heavily damped, large, and noisy, with a relatively low number of distributed measurement points compared to the number of analytical degrees-of-freedom of an associated analytical structural model (e.g., modal finite element model). Motivation, research objectives, and a dissertation summary are discussed in Chapter 1 followed by a literature review in Chapter 2. Chapter 3 gives background theory and the damage identification algorithm derivation followed by a study of fundamental algorithm behavior on a two degree-of-freedom mass-spring system with generalized damping. Chapter 4 investigates the impact of noise then successfully proves the algorithm against competing methods using an analytical eight degree-of-freedom mass-spring system with non-proportional structural damping. 
Chapter 5 extends use of the algorithm to finite element models, including solutions for numerical issues, approaches for modeling damping approximately in reduced coordinates, and analytical validation using a composite sandwich plate model. Chapter 6 presents the final extension to experimental systems, including methods for initial baseline correlation and data reduction, and validates the algorithm on an experimental composite plate with impact damage. The final chapter deviates from development and validation of the primary algorithm to discuss development of an experimental scaled-wing test bed as part of a collaborative effort for developing structural health monitoring and prognosis technology. The dissertation concludes with an overview of technical conclusions and recommendations for future work.
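The statistically weighted least-squares step at the heart of such model-update algorithms has a compact closed form. The sketch below is a generic illustration under assumed names (the dissertation's actual frequency-response residual-force formulation is more involved): given a sensitivity matrix J of the residuals with respect to the damage parameters, a residual vector r, and a statistical weighting matrix W, the parameter update and its covariance follow directly.

```python
import numpy as np

def weighted_ls_update(J, r, W):
    """One statistically weighted least-squares update step.
    J: Jacobian of residuals w.r.t. damage parameters,
    r: measured-minus-model residual vector,
    W: weighting matrix (e.g. inverse measurement-noise covariance).
    Returns the parameter-change estimate and its covariance."""
    A = J.T @ W @ J
    dtheta = np.linalg.solve(A, J.T @ W @ r)  # normal equations solve
    covariance = np.linalg.inv(A)             # estimation uncertainty
    return dtheta, covariance
```

The returned covariance is what allows the algorithm to report estimation uncertainty alongside the damage location and magnitude; iterating this step with an updated model is the essence of parameter-estimation-based model update.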
The 3DGRAPE book: Theory, users' manual, examples
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.
1989-01-01
A users' manual for a new three-dimensional grid generator called 3DGRAPE is presented. The program, written in FORTRAN, is capable of making zonal (blocked) computational grids in or about almost any shape. Grids are generated by the solution of Poisson's differential equations in three dimensions. The program automatically finds its own values for inhomogeneous terms which give near-orthogonality and controlled grid cell height at boundaries. Grids generated by 3DGRAPE have been applied to both viscous and inviscid aerodynamic problems, and to problems in other fluid-dynamic areas. The smoothness for which elliptic methods are known is seen here, including smoothness across zonal boundaries. An introduction giving the history, motivation, capabilities, and philosophy of 3DGRAPE is presented first. Then follows a chapter on the program itself. The input is then described in detail. A chapter on reading the output and debugging follows. Three examples are then described, including sample input data and plots of output. Last is a chapter on the theoretical development of the method.
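The elliptic approach can be illustrated in two dimensions. The sketch below shows only the homogeneous (Laplace) special case with fixed boundaries, omitting the inhomogeneous control terms that 3DGRAPE finds automatically for near-orthogonality and cell-height control; the function name and iteration scheme are illustrative assumptions.

```python
import numpy as np

def laplace_grid(x, y, iterations=500):
    """Smooth interior grid-node coordinates by Jacobi iteration of
    Laplace's equation (the homogeneous special case of the Poisson
    system solved by elliptic grid generators), holding boundary
    nodes fixed. x, y are 2D arrays of node coordinates."""
    x, y = x.copy(), y.copy()
    for _ in range(iterations):
        # each interior node moves to the average of its four neighbors
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y
```

The smoothness the abstract attributes to elliptic methods comes from exactly this averaging property; the full Poisson system adds right-hand-side source terms whose values the program tunes to pull grid lines toward orthogonality and prescribed spacing at boundaries.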
New Layered Materials and Functional Nanoelectronic Devices
NASA Astrophysics Data System (ADS)
Yu, Jaeeun
This thesis introduces functional nanomaterials including superatoms and carbon nanotubes (CNTs) for new layered solids and molecular devices. Chapters 1-3 present how we incorporate superatoms into two-dimensional (2D) materials. Chapter 1 describes a new and simple approach to dope transition metal dichalcogenides (TMDCs) using the superatom Co6Se8(PEt3)6 as the electron dopant. Doping is an effective method to modulate the electrical properties of materials, and we demonstrate an electron-rich cluster can be used as a tunable and controllable surface dopant for semiconducting TMDCs via charge transfer. As a demonstration of the concept, we make a p-n junction by patterning on specific areas of TMDC films. Chapter 2 and Chapter 3 introduce new 2D materials by molecular design of superatoms. Traditional atomic van der Waals materials such as graphene, hexagonal boron-nitride, and TMDCs have received widespread attention due to the wealth of unusual physical and chemical behaviors that arise when charges, spins, and vibrations are confined to a plane. Though not as widespread as their atomic counterparts, molecule-based layered solids offer significant benefits; their structural flexibility will enable the development of materials with tunable properties. Chapter 2 describes a layered van der Waals solid self-assembled from a structure-directing building block and C60 fullerene. The resulting crystalline solid contains a corrugated monolayer of neutral fullerenes and can be mechanically exfoliated. Chapter 3 describes a new method to functionalize electroactive superatoms with groups that can direct their assembly into covalent and non-covalent multi-dimensional frameworks. We synthesized Co6Se8[PEt2(4-C6H4COOH)]6 and found that it forms two types of crystalline assemblies with Zn(NO3)2, one is a three-dimensional solid and the other consists of stacked layers of two-dimensional sheets. The dimensionality is controlled by subtle changes in reaction conditions. 
CNT-based field-effect transistors (FETs), in which a single molecule spans an oxidatively cut gap in the CNT, provide a versatile, ground-state platform with well-defined electrical contacts. For statistical studies of a variety of small-molecule bridges, Chapter 4 presents a novel fabrication method to produce hundreds of FETs on a single carbon nanotube. A large number of devices allows us to study the stability and uniformity of CNT FET properties. Moreover, the new platform also enables quantitative analysis of molecular devices. In particular, we used CNT FETs to study DNA-mediated charge transport: DNA conductance was measured by connecting DNA molecules of varying lengths to lithographically cut CNT FETs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donnelly, H.; Fullwood, R.; Glancy, J.
This is the second volume of a two-volume report on the VISA method for evaluating safeguards at fixed-site facilities. This volume contains appendices that support the description of the VISA concept and the initial working version of the method, VISA-1, presented in Volume I. The information is separated into four appendices, each describing details of one of the four analysis modules that comprise the analysis sections of the method. The first appendix discusses Path Analysis methodology, applies it to a Model Fuel Facility, and describes the computer codes that are being used; introductory material on Path Analysis is given in Chapters 3.2.1 and 4.2.1 of Volume I. The second appendix deals with Detection Analysis, specifically the schemes used in VISA-1 for classifying adversaries and the methods proposed for evaluating individual detection mechanisms in order to build the data base required for detection analysis. Examples of evaluations of identity-access systems, SNM portal monitors, and intrusion devices are provided. The third appendix describes the Containment Analysis overt-segment path ranking, the Monte Carlo engagement model, the network simulation code, the delay mechanism data base, and the results of a sensitivity analysis. The last appendix presents general equations used in Interruption Analysis for combining covert-overt segments and compares them with equations given in Volume I, Chapter 3.
Continuum Electrostatics Approaches to Calculating pKas and Ems in Proteins
Gunner, MR; Baker, Nathan A.
2017-01-01
Proteins change their charge state through protonation and redox reactions as well as through binding charged ligands. The free energy of these reactions is dominated by solvation and electrostatic energies and modulated by protein conformational relaxation in response to the ionization state changes. Although computational methods for calculating these interactions can provide very powerful tools for predicting protein charge states, they include several critical approximations of which users should be aware. This chapter discusses the strengths, weaknesses, and approximations of popular computational methods for predicting charge states and understanding their underlying electrostatic interactions. The goal of this chapter is to inform users about applications and potential caveats of these methods as well as to outline directions for future theoretical and computational research. PMID:27497160
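The central quantity in such calculations is the shift of a residue's pKa relative to a model compound in water. In the usual continuum electrostatics formulation (standard textbook form, not quoted from this chapter; sign conventions vary between codes):

```latex
\mathrm{p}K_a^{\text{protein}}
\;=\;
\mathrm{p}K_a^{\text{model}}
\;+\;
\frac{\Delta\Delta G_{\text{deprot}}}{2.303\,RT},
```

where $\Delta\Delta G_{\text{deprot}}$ is the extra electrostatic free energy cost of deprotonation in the protein environment relative to the model compound, typically obtained by solving the Poisson-Boltzmann equation for the protonated and deprotonated states in both environments.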
The characteristics of a new negative metal ion beam source and its applications
NASA Astrophysics Data System (ADS)
Paik, Namwoong
2001-10-01
Numerous efforts at energetic thin film deposition processes using ion beams have been made to meet the demands of today's thin film industry. As one of these efforts, a new Magnetron Sputter Negative Ion Source (MSNIS) was developed. In this study, the development and characterization of the MSNIS were investigated. Amorphous carbon films were used as a sample coating medium to evaluate the effect of ion beam energy. A review of energetic Physical Vapor Deposition (PVD) techniques is presented in Chapter 1. The energetic PVD methods can be classified into two major categories: the indirect ion beam method, Ion Beam Assisted Deposition (IBAD), and the direct ion beam method, Direct Ion Beam Deposition (DIBD). In this chapter, currently available DIBD processes such as Cathodic Arc, Laser Ablation, Ionized Physical Vapor Deposition (I-PVD), and the Magnetron Sputter Negative Ion Source (MSNIS) are individually reviewed. The design and construction of the MSNIS are presented in Chapter 2. The MSNIS is a hybrid of the conventional magnetron sputter configuration and the cesium surface ionizer; the negative sputtered ions are produced directly from the sputter target by surface ionization. In Chapter 3, the ion beam and plasma characteristics of an 8″ diameter MSNIS are investigated using a retarding field analyzer and a cylindrical Langmuir probe. The measured electron temperature is approximately 2-5 eV, while the plasma density and plasma potential were of the order of 10^11-10^12 cm^-3 and 5-20 V, respectively, depending on the pressure and power. In Chapter 4, in order to evaluate the effect of the ion beam on the resultant films, amorphous carbon films were deposited under various conditions. The structure of the carbon films was investigated using Raman spectroscopy and X-ray photoelectron spectroscopy (XPS). The results suggest the fraction of sp3 bonding is more than 70% in some samples prepared by MSNIS, while magnetron sputtered samples showed less than 30%.
(Abstract shortened by UMI.)
This chapter describes in detail methods for detecting viruses of bacteria and humans in soil. Methods also are presented for the assay of these viruses. Reference sources are provided for information on viruses of plants.
Advanced Methods of Protein Crystallization.
Moreno, Abel
2017-01-01
This chapter provides a review of different advanced methods that help to increase the success rate of a crystallization project, by producing larger and higher quality single crystals for determination of macromolecular structures by crystallographic methods. For this purpose, the chapter is divided into three parts. The first part deals with the fundamentals for understanding the crystallization process through different strategies based on physical and chemical approaches. The second part presents new approaches involved in more sophisticated methods not only for growing protein crystals but also for controlling the size and orientation of crystals through utilization of electromagnetic fields and other advanced techniques. The last section deals with three different aspects: the importance of microgravity, the use of ligands to stabilize proteins, and the use of microfluidics to obtain protein crystals. All these advanced methods will allow the readers to obtain suitable crystalline samples for high-resolution X-ray and neutron crystallography.
Gaia DR2 documentation Chapter 7: Variability
NASA Astrophysics Data System (ADS)
Eyer, L.; Guy, L.; Distefano, E.; Clementini, G.; Mowlavi, N.; Rimoldini, L.; Roelens, M.; Audard, M.; Holl, B.; Lanzafame, A.; Lebzelter, T.; Lecoeur-Taïbi, I.; Molnár, L.; Ripepi, V.; Sarro, L.; Jevardat de Fombelle, G.; Nienartowicz, K.; De Ridder, J.; Juhász, Á.; Molinaro, R.; Plachy, E.; Regibo, S.
2018-04-01
This chapter of the Gaia DR2 documentation describes the models and methods used on the 22 months of data to produce the Gaia variable star results for Gaia DR2. The variability processing and analysis was based mostly on the calibrated G and integrated BP and RP photometry. The variability analysis approach to the Gaia data has been described in Eyer et al. (2017), and the Gaia DR2 results are presented in Holl et al. (2018). Detailed methods on specific topics will be published in a number of separate articles. Variability behaviour in the colour magnitude diagram is presented in Gaia Collaboration et al. (2018c).
Advances in Collaborative Filtering
NASA Astrophysics Data System (ADS)
Koren, Yehuda; Bell, Robert
The collaborative filtering (CF) approach to recommenders has recently enjoyed much interest and progress. The fact that it played a central role within the recently completed Netflix competition has contributed to its popularity. This chapter surveys the recent progress in the field. Matrix factorization techniques, which became a first choice for implementing CF, are described together with recent innovations. We also describe several extensions that bring competitive accuracy into neighborhood methods, which used to dominate the field. The chapter demonstrates how to utilize temporal models and implicit feedback to extend model accuracy. In passing, we include detailed descriptions of some of the central methods developed for tackling the challenge of the Netflix Prize competition.
Sultana, Hameeda
2016-01-01
West Nile virus (WNV) is a neurotropic virus that causes inflammation and neuronal loss in the Central Nervous System leading to encephalitis and death. In this chapter, detailed methods to detect WNV in the murine brain tissue by quantitative real-time polymerase chain reaction and viral plaque assays are described. Determination of WNV neuropathogenesis by Hematoxylin and Eosin staining and immunohistochemical procedures are provided. In addition, TUNEL assays to determine neuronal loss during WNV neuropathogenesis are discussed in detail. Collectively, the methods mentioned in this chapter provide an overview to understand neuroinvasion and neuropathogenesis in a murine model of WNV infection.
Surface Plasmon Resonance: New Biointerface Designs and High-Throughput Affinity Screening
NASA Astrophysics Data System (ADS)
Linman, Matthew J.; Cheng, Quan Jason
Surface plasmon resonance (SPR) is a surface optical technique that measures minute changes in refractive index at a metal-coated surface. It has become increasingly popular in the study of biological and chemical analytes because of its label-free measurement feature. In addition, SPR allows for both quantitative and qualitative assessment of binding interactions in real time, making it ideally suited for probing weak interactions that are often difficult to study with other methods. This chapter presents the biosensor development in the last 3 years or so utilizing SPR as the principal analytical technique, along with a concise background of the technique itself. While SPR has demonstrated many advantages, it is a nonselective method and so, building reproducible and functional interfaces is vital to sensing applications. This chapter, therefore, focuses mainly on unique surface chemistries and assay approaches to examine biological interactions with SPR. In addition, SPR imaging for high-throughput screening based on microarrays and novel hyphenated techniques involving the coupling of SPR to other analytical methods is discussed. The chapter concludes with a commentary on the current state of SPR biosensing technology and the general direction of future biosensor research.
Conceptual Chemical Process Design for Sustainability
This chapter examines the sustainable design of chemical processes, with a focus on conceptual design, hierarchical and short-cut methods, and analyses of process sustainability for alternatives. The chapter describes a methodology for incorporating process sustainability analyses throughout the conceptual design. Hierarchical and short-cut decision-making methods will be used to approach sustainability. An example showing a sustainability-based evaluation of chlor-alkali production processes is presented with economic analysis and five pollutants described as emissions. These emissions are analyzed according to their human toxicity potential by ingestion using the Waste Reduction Algorithm and a method based on US Environmental Protection Agency reference doses, with the addition of biodegradation for suitable components. Among the emissions, mercury as an element will not biodegrade, and results show the importance of this pollutant to the potential toxicity results and therefore the sustainability of the process design. The dominance of mercury in determining the long-term toxicity results when energy use is included suggests that all process system evaluations should (re)consider the role of mercury and other non-/slow-degrading pollutants in sustainability analyses. The cycling of nondegrading pollutants through the biosphere suggests the need for a complete analysis based on the economic, environmental, and social aspects of sustainability.
Chapter reviews
NASA Astrophysics Data System (ADS)
Cui, Chenxuan
When a cognitive radio (CR) operates, it starts by sensing the spectrum and looking for idle bandwidth. There are several methods by which a CR can decide whether a channel is occupied or idle, for example the energy detection scheme, the cyclostationary detection scheme, and the matched filtering detection scheme [1]. Among them, the most common method is energy detection because of its algorithmic and implementation simplicity [2]. There are two major sensing approaches: the first is to sense a single channel slot with varying bandwidth, whereas the second is to sense multiple channels, each with the same bandwidth. After the sensing period, samples are compared with a preset detection threshold and a decision is made on whether the primary user (PU) is transmitting. The sensing and decision results can be erroneous; for example, false alarm and misdetection errors may occur. In order to better control error probabilities and improve CR network performance (i.e., energy efficiency), we introduce cooperative sensing, in which several CRs within a certain range jointly detect and decide on channel availability. The decisions are transmitted to and analyzed by a data fusion center (DFC), which makes a final decision on channel availability. After the final decision has been made, the DFC sends it back to the CRs to tell them either to stay idle or to start transmitting data to a secondary receiver (SR) within a preset transmission time. After the transmission, a new cycle starts again with sensing. This thesis report is organized as follows: Chapter II reviews some of the literature on optimizing CR energy efficiency. In Chapter III, we study how to achieve maximal energy efficiency when the CR senses a single channel with varying bandwidth, under a misdetection-threshold constraint that protects the PU; furthermore, a case study is given and the energy efficiency is calculated.
In Chapter IV, we study how to achieve maximal energy efficiency when the CR senses multiple channels, each with the same bandwidth; again, a misdetection threshold is preset and the energy efficiency is calculated. A comparison between the two sensing methods is shown at the end of the chapter. Finally, Chapter V concludes this thesis.
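The energy detection decision rule this abstract refers to is simple enough to sketch. The following is a minimal illustration only, not code from the thesis; the sample count, noise variance, signal level, and threshold are all invented for the example (in practice the threshold is chosen to meet target false-alarm and misdetection probabilities).

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def energy_detect(samples, threshold):
    """Declare the channel occupied when the average sample energy
    exceeds a preset detection threshold."""
    energy = np.mean(np.abs(samples) ** 2)
    return energy > threshold  # True -> primary user assumed present

n = 1000                                  # samples per sensing period
noise = rng.normal(0.0, 1.0, n)           # noise-only slot (unit variance)
busy = noise + rng.normal(0.0, 2.0, n)    # slot also carrying a PU signal

# Threshold sits between the expected noise-only energy (~1) and the
# expected noise+signal energy (~5).
threshold = 2.0
print(energy_detect(noise, threshold))    # channel idle
print(energy_detect(busy, threshold))     # channel occupied
```

With 1000 samples the averaged energies concentrate tightly around their expectations, so the two cases separate cleanly at this threshold.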
Maxwell, M; Howie, J G; Pryde, C J
1998-01-01
BACKGROUND: Prescribing matters (particularly budget setting and research into prescribing variation between doctors) have been handicapped by the absence of credible measures of the volume of drugs prescribed. AIM: To use the defined daily dose (DDD) method to study variation in the volume and cost of drugs prescribed across the seven main British National Formulary (BNF) chapters with a view to comparing different methods of setting prescribing budgets. METHOD: Study of one year of prescribing statistics from all 129 general practices in Lothian, covering 808,059 patients: analyses of prescribing statistics for 1995 to define volume and cost/volume of prescribing for one year for 10 groups of practices defined by the age and deprivation status of their patients, for seven BNF chapters; creation of prescribing budgets for 1996 for each individual practice based on the use of target volume and cost statistics; comparison of 1996 DDD-based budgets with those set using the conventional historical approach; and comparison of DDD-based budgets with budgets set using a capitation-based formula derived from local cost/patient information. RESULTS: The volume of drugs prescribed was affected by the age structure of the practices in BNF Chapters 1 (gastrointestinal), 2 (cardiovascular), and 6 (endocrine), and by deprivation structure for BNF Chapters 3 (respiratory) and 4 (central nervous system). Costs per DDD in the major BNF chapters were largely independent of age, deprivation structure, or fundholding status. Capitation and DDD-based budgets were similar to each other, but both differed substantially from historic budgets. One practice in seven gained or lost more than 100,000 Pounds per annum using DDD or capitation budgets compared with historic budgets. The DDD-based budget, but not the capitation-based budget, can be used to set volume-specific prescribing targets. 
CONCLUSIONS: DDD-based and capitation-based prescribing budgets can be set using a simple explanatory model and generalizable methods. In this study, both differed substantially from historic budgets. DDD budgets could be created to accommodate new prescribing strategies and raised or lowered to reflect local intentions to alter overall prescribing volume or cost targets. We recommend that future work on setting budgets and researching prescribing variations should be based on DDD statistics. PMID:10024703
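The DDD arithmetic behind such budgets is straightforward: volume is total quantity dispensed divided by the WHO-assigned defined daily dose, and a volume-based budget is target volume times local cost per DDD. A hypothetical sketch (the drug quantities, DDD, and price below are invented, not taken from the Lothian data):

```python
def ddds_dispensed(total_mg, ddd_mg):
    """Prescribing volume in defined daily doses (DDDs):
    total quantity dispensed divided by the drug's DDD."""
    return total_mg / ddd_mg

def ddd_budget(target_ddds, cost_per_ddd):
    """A volume-based budget: target volume times local cost per DDD."""
    return target_ddds * cost_per_ddd

# Hypothetical example: 1,460,000 mg of a drug whose DDD is 40 mg,
# at an assumed local cost of 0.12 GBP per DDD.
volume = ddds_dispensed(1_460_000, 40)   # 36,500 DDDs
budget = ddd_budget(volume, 0.12)        # roughly 4,380 GBP
print(volume, budget)
```

Unlike a cost-only historic budget, the volume term here can be set directly, which is why DDD-based budgets support volume-specific prescribing targets.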
Analytical chemistry at the interface between materials science and biology
NASA Astrophysics Data System (ADS)
O'Brien, Janese Christine
This work describes several research efforts that lie at the new interfaces between analytical chemistry and other disciplines, namely materials science and biology. In the materials science realm, the search for new materials that may have useful or unique chromatographic properties motivated the synthesis and characterization of electrically conductive sol-gels. In the biology realm, the search for new surface fabrication schemes that would permit or even improve the detection of specific biological reactions motivated the design of miniaturized biological arrays. Collectively, this work represents some of analytical chemistry's newest forays into these disciplines. This dissertation is divided into six chapters. Chapter 1 is an introductory chapter that provides background information pertinent to several key aspects of the work contained in this dissertation. Chapter 2 describes the synthesis and characterization of electrically conductive sol-gels derived from the acid-catalyzed hydrolysis of a vanadium alkoxide. Specifically, this chapter describes our attempts to increase the conductivity of vanadium sol-gels by optimizing the acidic and drying conditions used during synthesis. Chapter 3 reports the construction of novel antigenic immunosensing platforms of increased epitope density using Fab'-SH antibody fragments on gold. Here, X-ray photoelectron spectroscopy (XPS), thin-layer cell (TLC) and confocal fluorescence spectroscopies, and scanning force microscopy (SFM) are employed to characterize the fragment-substrate interaction, to quantify epitope density, and to demonstrate fragment viability and specificity. Chapter 4 presents a novel method for creating and interrogating double-stranded DNA (dsDNA) microarrays suitable for screening protein:dsDNA interactions. 
Using the restriction enzyme EcoRI, we demonstrate the ability of the atomic force microscope (AFM) to detect changes in topography that result from the enzymatic cleavage of dsDNA microarrays containing the correct recognition sequence. Chapter 5 explores more fully the microarray fabrication process described in Chapter 4. Specifically, experiments characterizing the effect of deposition conditions on oligonucleotide topography, as well as those describing array density optimization, are presented. Chapter 6 presents general conclusions from the work recorded in this dissertation and speculates on its extension.
NASA Technical Reports Server (NTRS)
Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.
2013-01-01
Remote sensing methods used to generate base maps to analyze the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within each pixel. They range from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process.
The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post-classification steps. Within this chapter, each of the four approaches is described in terms of scale and accuracy in classifying urban land use and urban land cover, and for its range of urban applications. We present an overview of the four main classification groups in Figure 1, while Table 1 details the approaches with respect to classification requirements and procedures (e.g., reflectance conversion, steps before training sample selection, training samples, spatial approaches commonly used, classifiers, primary inputs for classification, output structures, number of output layers, and accuracy assessment). The chapter concludes with a brief summary of the methods reviewed and the challenges that remain in developing new classification methods for improving the efficiency and accuracy of mapping urban areas.
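As an illustration of the per-pixel style of classification the abstract mentions, the sketch below computes NDVI per pixel from red and near-infrared bands and thresholds it. The band values and the 0.3 vegetation threshold are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, computed per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# Tiny 2x2 "scene": invented reflectances in the red and NIR bands.
red = np.array([[0.10, 0.40], [0.05, 0.30]])
nir = np.array([[0.50, 0.45], [0.60, 0.32]])

index = ndvi(nir, red)
vegetation = index > 0.3   # simple per-pixel decision rule
print(vegetation)          # True where the pixel looks vegetated
```

Each pixel is labeled independently of its neighbors, which is exactly the property that sub-pixel, object-based, and geospatial methods later relax.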
ERIC Educational Resources Information Center
Loos, Roland
This report provides specialist information and application-oriented recommendations to implement innovative environmental vocational education and training (VET) measures and practices. Chapter 1 explains the study method and structure. Chapter 2 provides an overview of the current state of environmental VET in Austria, Denmark, Finland, Germany,…
Introduction: Aims and Requirements of Future Aerospace Vehicles. Chapter 1
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.; Smeltzer, Stanley S., III; McConnaughey, Paul (Technical Monitor)
2001-01-01
The goals and system-level requirements for the next generation aerospace vehicles emphasize safety, reliability, low-cost, and robustness rather than performance. Technologies, including new materials, design and analysis approaches, manufacturing and testing methods, operations and maintenance, and multidisciplinary systems-level vehicle development are key to increasing the safety and reducing the cost of aerospace launch systems. This chapter identifies the goals and needs of the next generation or advanced aerospace vehicle systems.
19 CFR 146.93 - Inventory control and recordkeeping system.
Code of Federal Regulations, 2010 CFR
2010-04-01
... (first-in, first-out) method of accounting (see § 191.22(c) of this chapter). The use of this method is... method, measurement (weight or volume), and the price of product consistently (see § 146.92(g) of this...
NORMALIZATION, GROUPING, AND WEIGHTING IN LIFE CYCLE IMPACT ASSESSMENT
This chapter includes a comprehensive overview of weighting methods and principles. The authors propose a very interesting and useful system of criteria for the evaluation of weighting methods, and provide a structured way to discuss the characteristics of weighting methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez, Barbara Alvarez
In this thesis a direct search for Standard Model Higgs boson production in association with a W boson at the CDF detector at the Tevatron is presented. This search contributes predominantly in the low-mass Higgs region, where the mass of the Higgs boson is less than about 135 GeV. The search is performed in a final state where the Higgs boson decays into two b quarks, and the W boson decays leptonically, to a charged lepton (an electron or a muon) and a neutrino. This work is organized as follows. Chapter 2 gives an overview of the Standard Model theory of particle physics and presents the SM Higgs boson search results at the LEP and Tevatron colliders, as well as the prospects for the SM Higgs boson searches at the LHC. The dataset used in this analysis corresponds to 4.8 fb^-1 of integrated luminosity of $p\bar{p}$ collisions at a center of mass energy of 1.96 TeV; that is the luminosity acquired between the beginning of the CDF Run II experiment, in February 2002, and May 2009. The relevant aspects, for this analysis, of the Tevatron accelerator and the CDF detector are described in Chapter 3. In Chapter 4 the particles and observables that make up the WH final state (electrons, muons, missing transverse energy E_T, and jets) are presented. The CDF standard b-tagging algorithms used to identify b jets, and the neural network flavor separator used to distinguish them from other-flavor jets, are also described in Chapter 4. The main background contributions are those coming from heavy flavor production processes, such as Wbb, Wcc or Wc, and tt. The signal and background signatures are discussed in Chapter 5, together with the Monte Carlo generators that have been used to simulate almost all the events used in this thesis. WH candidate events have a high-p_T lepton (electron or muon), high missing transverse energy, and two or more jets in the final state.
Chapter 6 describes the event selection applied in this analysis and the method used to estimate the background contribution. The Matrix Element method, which was successfully used in the single top discovery analysis and many other analyses within the CDF collaboration, is the multivariate technique used in this thesis to discriminate signal from background events. With this technique it is possible to calculate a probability for an event to be classified as signal or background. These probabilities are then combined into a discriminant function called the Event Probability Discriminant (EPD), which increases the sensitivity to the WH process. This method is described in detail in Chapter 7. As no evidence for the signal has been found, the results obtained in this work are presented in Chapter 8 in terms of exclusion regions as a function of the mass of the Higgs boson, taking into account the full systematics. The conclusions of this thesis are presented in Chapter 9.
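The combination step described in this abstract, turning per-event signal and background probabilities into a single discriminant, is a normalized likelihood ratio. A schematic sketch only; the probability values below are invented for illustration, whereas the real analysis computes them from matrix elements:

```python
def event_probability_discriminant(p_signal, p_background):
    """EPD: fraction of the total per-event probability assigned to
    the signal hypothesis; near 1 for signal-like events and near 0
    for background-like events."""
    return p_signal / (p_signal + p_background)

# Invented per-event probabilities, for illustration only.
signal_like = event_probability_discriminant(8e-4, 2e-4)      # ~0.8
background_like = event_probability_discriminant(1e-5, 9e-5)  # ~0.1
print(signal_like, background_like)
```

A histogram of this discriminant over all candidate events is what ultimately feeds the limit-setting in Chapter 8.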
Sichani, Mehrdad Mohammadi; Mobarakeh, Shadi Reissizadeh; Omid, Athar
2018-01-01
Recently, medical education has made significant progress, and medical teachers are trying to find the methods that have the greatest impact on learning. One useful learning method is active student participation, and one helpful teaching aid for this method is mobile technology. The present study aimed to determine the effect of sending educational questions through short message service (SMS) on the academic achievement and satisfaction of medical students, and to compare it with lecture teaching. In a semi-experimental study, two chapters of a urology reference book, Smith's General Urology, 17th edition, were taught to 47 medical students of Isfahan University of Medical Sciences in a urology course in the 2013 academic year. The kidney tumors chapter was taught by sending questions through SMS, and the bladder tumors chapter was taught in a lecture session. For each method, a pretest and a posttest were held, each consisting of thirty multiple choice questions. To examine knowledge retention, a test was held on the same terms for each chapter one month later. At the end, survey forms were distributed to assess student satisfaction with the SMS learning method. Data were analyzed using SPSS 20. The findings demonstrated a statistically significant difference between the two learning methods in the retention test scores. Evaluation of satisfaction showed that 78.72% of participants were not satisfied. The results of the study showed that distance learning through SMS for medical students could increase knowledge; however, it was not effective for their satisfaction.
7 CFR 1410.43 - Method of payment.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 10 2012-01-01 2012-01-01 false Method of payment. 1410.43 Section 1410.43... OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS CONSERVATION RESERVE PROGRAM § 1410.43 Method... or other methods of payment in accordance with part 1401 of this chapter, unless otherwise specified...
NASA Astrophysics Data System (ADS)
Hemingway, Melinda Graham
This research focuses on hydrogel nanoparticle formation using miniemulsion polymerization and supercritical carbon dioxide. Hydrogel nanopowder is produced by a novel combination of inverse miniemulsion polymerization and supercritical drying (MPSD) methods. Three methods of drying miniemulsions are examined: (1) a conventional freeze drying technique, and (2) two supercritical drying techniques: (2a) supercritical fluid injection into the miniemulsion, and (2b) injection of the polymerized miniemulsion into the supercritical fluid. Method 2b can produce non-agglomerated hydrogel nanoparticles that are free of solvent and surfactant (Chapter 2). The optimized MPSD method was applied to produce an extended release drug formulation with mucoadhesive properties. Drug nanoparticles of mesalamine were produced using supercritical antisolvent technology and encapsulated within two hydrogels, polyacrylamide and poly(acrylic acid-co-acrylamide). The encapsulation efficiency and release profile of the drug nanoparticles are compared with those of commercial ground mesalamine particles; the loading efficiency is influenced by morphological compatibility (Chapter 3). The MPSD method was extended to the encapsulation of zinc oxide nanoparticles for UV protection in sunscreens (Chapter 4). ZnO was incorporated into the inverse miniemulsion during polymerization, and the effects of process parameters on the absorbance of ultraviolet light and the transparency to visible light are examined. For use of hydrogel nanoparticles in a seismological application, delayed hydration is needed. Supercritical methods extend MPSD so that a hydrophobic coating can be applied on the particle surface (Chapter 5). Multiple analysis methods and coating materials were investigated to elucidate the compatibility of coating materials with the polyacrylamide hydrogel.
Coating materials of poly(lactide), poly(sulphone), poly(vinyl acetate), poly(hydroxybutyrate), Gelucire 50/13, Span 80, octadecyltrichlorosilane, and perfluorobutane sulfonate (PFBS) were tested, of which Gelucire, PFBS, and poly(vinyl acetate) were able to provide some coating, and PFBS, poly(lactide), and poly(vinyl acetate) delayed hydration of the hydrogel particles, but not to a sufficient extent. The interactions of the different materials with the hydrogel are examined based on phenomena observed during the production processes and characterization of the particles generated. This work provides insight into the interactions of polyacrylamide hydrogel particles both internally by encapsulation and externally by coating.
Sutherland, Chris; Royle, Andy
2016-01-01
This chapter provides a non-technical overview of ‘closed population capture–recapture’ models, a class of well-established models that are widely applied in ecology, including removal sampling, covariate models, and distance sampling. These methods are regularly adopted for studies of reptiles in order to estimate abundance from counts of marked individuals while accounting for imperfect detection. The chapter describes some classic closed population models for estimating abundance, along with some recent extensions that provide a spatial context for the estimation of abundance, and therefore density. Finally, the chapter suggests software for data analysis, such as the Windows-based program MARK, and provides an example of estimating the abundance and density of reptiles using an artificial cover object survey of Slow Worms (Anguis fragilis).
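The simplest member of the closed-population family surveyed in this chapter is the two-sample Lincoln-Petersen estimator, shown here in Chapman's bias-corrected form. The counts below are invented for illustration and are not from the slow worm survey in the chapter.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of
    closed-population abundance:
      n1 = animals captured and marked in the first sample,
      n2 = animals captured in the second sample,
      m2 = marked animals recaptured in the second sample."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Invented survey: mark 100 animals; later catch 80, of which 20
# carry marks, so roughly a quarter of the population was marked.
abundance = chapman_estimate(100, 80, 20)
print(round(abundance, 1))
```

The logic is the closure assumption at work: the marked fraction of the second sample estimates the marked fraction of the whole population, with the +1 terms correcting the small-sample bias of the raw ratio.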
Incorporating Spatial Data into Enterprise Applications
NASA Astrophysics Data System (ADS)
Akiki, Pierre; Maalouf, Hoda
The main goal of this chapter is to discuss the usage of spatial data within enterprise as well as smaller line-of-business applications. In particular, this chapter proposes new methodologies for storing and manipulating vague spatial data and provides methods for visualizing both crisp and vague spatial data. It also provides a comparison between different types of spatial data, mainly 2D crisp and vague spatial data, and their respective fields of application. Additionally, it compares existing commercial relational database management systems, which are the most widely used with enterprise applications, and discusses their deficiencies in terms of spatial data support. A new spatial extension package called Spatial Extensions (SPEX) is provided in this chapter and is tested on a software prototype.
Jenkins, Jill A.; Tiersch, Terrence R.
2011-01-01
Although cryopreservation of sperm has become an accepted technique for selective breeding and genetic improvement in livestock industries, no systematic approach is available for banking germplasm of aquatic species (i.e. embryos, semen and ova). The intent of this chapter is not to provide recommendations for specific measures to eliminate particular pathogens and subsequent diseases, but rather to develop a general framework and strategies for facing the new and unexpected. This chapter presents microbiological and quality assurance concerns for a cryopreservation program. In particular, the chapter identifies organisms transmittable in semen of animals, microorganisms and diseases of importance to aquatic species, pathogen detection issues, methods for prevention and control and how sperm quality can be assessed.
Chapter 16: Tracing Nitrogen Sources and Cycling in Catchments
Kendall, Carol
1998-01-01
This chapter focuses on the uses of isotopes to understand water chemistry. Isotopic compositions generally cannot be interpreted successfully in the absence of other chemical and hydrologic data. The chapter focuses on the uses of isotopes in tracing the sources and cycling of nitrogen in the water components of forested catchments, and on dissolved nitrate in shallow waters, nutrient uptake studies in agricultural areas, large-scale tracer experiments, groundwater contamination studies, food-web investigations, and uses of compound-specific stable isotope techniques. Shallow waters moving along a flowpath through a relatively uniform material and reacting with minerals probably do not achieve equilibrium but gradually approach some steady-state composition. The chapter also discusses the use of isotopic techniques to assess the impacts of changes in land-management practices and land use on water quality. The analysis of individual molecular components for isotopic composition has much potential as a method for tracing the source, biogeochemistry, and degradation of organic liquids and gases, because different materials have characteristic isotope spectra or biomarkers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milligan, Michael; Bloom, Aaron P; Townsend, Aaron
Variable generation (VG) can reduce market prices over time and also the energy that other suppliers can sell in the market. The suppliers that are needed to provide capacity and flexibility to meet long-term reliability requirements may, therefore, earn less revenue. This chapter discusses the topics of resource adequacy and revenue sufficiency - that is, determining and acquiring the quantity of capacity that will be needed at some future date and ensuring that the suppliers that offer the capacity receive sufficient revenue to recover their costs. The focus is on the investment time horizon and the installation of sufficient generation capability. First, the chapter discusses resource adequacy, including newer methods of determining adequacy metrics. The chapter then focuses on revenue sufficiency and how suppliers have sufficient opportunity to recover their total costs. The chapter closes with a description of the mechanisms traditionally adopted by electricity markets to mitigate the issues of resource adequacy and revenue sufficiency and discusses the most recent market design changes to address these issues.
BOOK REVIEW: NMR Imaging of Materials
NASA Astrophysics Data System (ADS)
Blümich, Bernhard
2003-09-01
Magnetic resonance imaging (MRI) of materials is a field of increasing importance. Applications extend from fundamental science, like the characterization of fluid transport in porous rock, catalyst pellets and hemodialysers, into various fields of engineering for process optimization and product quality control. While the results of materials MRI are being appreciated by a growing community, the methods of imaging are far more diverse for materials applications than for medical imaging of human beings. Blümich has delivered the first book in this field. It was published in hardback three years ago and is now offered as a paperback for nearly half the price. The text provides an introduction to MRI of materials covering solid-state NMR spectroscopy, imaging methods for liquid and solid samples, and unusual MRI in terms of specialized approaches to spatial resolution such as an MRI surface scanner. The book represents an excellent and thorough treatment which will help to grow research in materials MRI. Blümich developed the treatise over many years for his research students, graduates in chemistry, physics and engineering. But it may also be useful for medical students looking for a less formal discussion of solid-state NMR spectroscopy. The structure of this book is easy to perceive. The first three chapters cover an introduction, the fundamentals and methods of solid-state NMR spectroscopy. The book starts at ground level, where no previous knowledge about NMR is assumed. Chapter 4 discusses a wide variety of transformations beyond the Fourier transformation. In particular, the Hadamard transformation and the 'wavelet' transformation are missing from most related books. This chapter also includes a description of noise-correlation spectroscopy, which promises the imaging of large objects without the need for extremely powerful radio-frequency transmitters. Chapters 5 and 6 cover basic imaging methods.
The following chapter, about the use of relaxation and spectroscopic methods to weight or filter the spin signals, represents the core of the book. This is a subject in which Blümich is deeply involved, with substantial contributions. The chapter includes many ideas for providing MR contrast between different regions based on their mobility, diffusion, spin couplings or NMR spectra. After describing NMR imaging methods for solids with broad lines, Blümich turns to applications in the last two chapters of the book. This part is really fun to read. It underlines the effort to bring NMR into many kinds of manufacturing. Car tyres and high-voltage cables are just two such areas. Elastomeric materials, green-state ceramics and food science represent other interesting fields of application. This part of the book represents a personal but nevertheless extensive compilation of modern applications. As a matter of course the MOUSE is presented, a portable permanent-magnet-based NMR device developed by Blümich and his co-workers. Thus the book is of interest not only to NMR spectroscopists but also to people in materials science and chemical engineering. The bibliography and indexing are excellent and may serve as an attractive reference source for NMR spectroscopists. The book is the first on the subject and likely to become the standard text for NMR imaging of materials, as the books by Abragam, Slichter and Ernst et al are for NMR spectroscopy. The purchase of this beautiful book is highly recommended for people dealing with NMR spectroscopy or medical MRI. Ralf Ludwig
USEPA MANUAL OF METHODS FOR VIROLOGY
This chapter describes procedures for the detection of coliphages in water matrices. These procedures are based on those presented in the Supplement to the 20th Edition of Standard Methods for the Examination of Water and Wastewater and EPA Methods 1601 and 1602. Two quantitati...
Towards Silk Fiber Optics: Refractive Index Characterization, Fiber Spinning, and Spinneret Analysis
NASA Astrophysics Data System (ADS)
Spitzberg, Joshua David
Of the many biologically derived materials whose historical record of use by humans underscores an ex vivo utility, silk is interesting for its contemporary repurposing from textile to biocompatible substrate. And while even within this category silk is one of several materials studied for novel repurposing, it has the unique character of having been evolutionarily developed specifically for fiber spinning in vivo. The work discussed here is inspired by taking what nature has given, to explore the in vitro spinning of silk towards biocompatible fiber optics applications. A common formulation of silk used in biomedical studies for re-forming it into various structures begins with the silkworm cocoon, which is degummed and dissolved into an aqueous solution of its miscible protein, fibroin, then post-treated to fabricate solid structures. In the first aim, the optical refractive index (RI) resulting from various post-treatment methods is discussed towards determining RI design techniques. The methods considered in this work for re-forming a solid fiber from the reconstituted silk fibroin (RSF) solution borrow from the industrial techniques of gel spinning and dry spinning. In the second aim, these methods are applied to RSF and the quality of the spun fibers is discussed. A feature common to spinning techniques is passing the (silk) material through a spinneret of specific shape. In the third aim, fluid flow through a simplified native silkworm spinneret is modeled towards bio-inspired lessons in design. In chapter 1, the history and reconstitution of silk are discussed towards understanding the fabrication of several optical device examples. Chapter 2 then prefaces the experiments and measurements in fiber optics by reviewing the electromagnetic theory of waveguide function, and the loss factors to be considered in actual device fabrication. Chapter 3 presents results and discussion for the first aim, understanding design principles for the refractive index of RSF.
From this point, industrial fiber-spinning approaches are reviewed from a theoretical and methodological perspective in chapter 4. Thus, chapter 5 presents results for the second aim, efforts to apply these techniques using RSF. Chapter 6 discusses the third aim, understanding the design of the silkworm spinneret by an idealized model of natural and reconstituted silk fibroin flow. While the ultimate goal of a structurally and optically smooth and uniform fiber remains elusive, this work serves as a guide for future efforts.
NASA Astrophysics Data System (ADS)
Gomez de Arco, Lewis Mortimer
Graphene and carbon nanotubes have outstanding electrical and thermal conductivity. These characteristics make them exciting materials with high potential to replace silicon and surpass its performance in the next generation of semiconductor devices, which ought to be considerably smaller and faster than the ones used in present technology. Despite the excellent electrical and thermal conduction properties of graphene and carbon nanotubes, the advance of nanoelectronics based on them has been hampered by fundamental limitations of the current synthesis and integration technologies for these carbon nanomaterials. Therefore, there is a strong need for research at fundamental and applied levels to help find the roadmap that these materials need to follow in order to become a real alternative to silicon in future technologies. This dissertation presents our approach to overcoming some of the most critical problems that hinder the implementation of graphene and carbon nanotubes as important components in real-life macro- and nanoelectronic devices. Towards this end, we systematically studied synthesis methods for scalable, high-quality graphene and evaluated our large-scale synthesized graphene as transparent electrodes in functional energy conversion devices. In addition, we explored scalable methods to obtain carbon nanotube field-effect transistors with only semiconducting nanotube channels and studied the substrate influence on the structure and metal-to-semiconductor ratio of aligned nanotubes. Although we have successfully tackled some of the most important challenges of the above-mentioned one- and two-dimensional carbon nanostructures, more remains to be done to integrate them as functional components in electronic devices and to reach the goal of transferring them from the laboratory to the manufacturing industry, and ultimately to society.
In chapter 1, a general introduction to carbon nanomaterials is presented, followed by a more focused discussion of the structure and properties of graphene and carbon nanotubes. Chapter 2 presents the development of a chemical vapor deposition method for scalable graphene synthesis and the evaluation of its electrical properties as the active channel in field-effect transistors and as a transparent conductor. Chapter 3 presents further work on graphene synthesis on single-crystal nickel and the influence of the substrate atomic arrangement on the synthesized graphene. Chapter 4 presents the implementation of the highly scalable graphene synthesized by CVD as the transparent electrode in flexible organic photovoltaic cells. Chapter 5 evaluates the influence of substrate/nanotube interactions during aligned nanotube growth on the Raman signature of the resulting aligned nanotubes, the nanotube structure, and the metal-to-semiconductor ratio. Chapter 6 presents our findings on a scalable method that can be used at wafer scale to achieve metal-to-semiconductor conversion of carbon nanotubes by light irradiation and its application to achieve semiconducting CNTFETs. Finally, in chapter 7, future research directions in related areas of science and technology are proposed.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES Ch. 301, App. C Appendix C... Description Transportation Payment Method employee used to purchase transportation tickets Method Indicator...
The effect of distance learning via SMS on academic achievement and satisfaction of medical students
Sichani, Mehrdad Mohammadi; Mobarakeh, Shadi Reissizadeh; Omid, Athar
2018-01-01
INTRODUCTION: Recently, medical education has made significant progress, and medical teachers are trying to find methods that have the most impressive effects on learning. One of the useful learning methods is active student participation, and one of the helpful teaching aids in this method is mobile technology. The present study aimed to determine the effect of sending educational questions through short message service (SMS) on the academic achievement and satisfaction of medical students and to compare that with lecture teaching. SUBJECTS AND METHODS: In a semi-experimental study, two chapters of a urology reference book, Smith's General Urology, 17th edition, were taught to 47 medical students of Isfahan University of Medical Sciences in a urology course in the 2013 academic year. The kidney tumors chapter was taught by sending questions through SMS, and the bladder tumors part was taught in a lecture session. For each method, a pretest and posttest were held, each consisting of thirty multiple-choice questions. To examine knowledge retention, a test session was held on the same terms for each chapter 1 month later. At the end, survey forms were distributed to assess student satisfaction with the SMS learning method. Data were analyzed using SPSS 20. RESULTS: The findings demonstrated a statistically significant difference between the two learning methods in the test scores. Evaluation of satisfaction showed that 78.72% of participants were not satisfied. CONCLUSIONS: The results of the study showed that distance learning through SMS could increase medical students' knowledge; however, it did not improve their satisfaction. PMID:29629390
26 CFR 1.446-1 - General rule for methods of accounting.
Code of Federal Regulations, 2011 CFR
2011-04-01
... books. For requirement respecting the adoption or change of accounting method, see section 446(e) and... taxpayer to adopt or change to a method of accounting permitted by this chapter although the method is not..., which require the prior approval of the Commissioner in the case of changes in accounting method. (iii...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z.; Liu, C.; Botterud, A.
Renewable energy resources have been rapidly integrated into power systems in many parts of the world, contributing to a cleaner and more sustainable supply of electricity. Wind and solar resources also introduce new challenges for system operations and planning in terms of economics and reliability because of their variability and uncertainty. Operational strategies based on stochastic optimization have been developed recently to address these challenges. In general terms, these stochastic strategies either embed uncertainties into the scheduling formulations (e.g., the unit commitment [UC] problem) in probabilistic forms or develop more appropriate operating reserve strategies to take advantage of advanced forecasting techniques. Other approaches to address uncertainty are also proposed, where operational feasibility is ensured within an uncertainty set of forecasting intervals. In this report, a comprehensive review is conducted to present the state of the art through Spring 2015 in the area of stochastic methods applied to power system operations with high penetration of renewable energy. Chapters 1 and 2 give a brief introduction and overview of power system and electricity market operations, as well as the impact of renewable energy and how this impact is typically considered in modeling tools. Chapter 3 reviews relevant literature on operating reserves and specifically probabilistic methods to estimate the need for system reserve requirements. Chapter 4 looks at stochastic programming formulations of the UC and economic dispatch (ED) problems, highlighting benefits reported in the literature as well as recent industry developments. Chapter 5 briefly introduces alternative formulations of UC under uncertainty, such as robust, chance-constrained, and interval programming. Finally, in Chapter 6, we conclude with the main observations from our review and important directions for future work.
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); McClain, Charles R.; Darzi, Michael; Barnes, Robert A.; Eplee, Robert E.; Firestone, James K.; Patt, Frederick S.; Robinson, Wayne D.; Schieber, Brian D.;
1996-01-01
This document provides five brief reports that address several quality control procedures under the auspices of the Calibration and Validation Element (CVE) within the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project. Chapter 1 describes analyses of the 32 sensor engineering telemetry streams. Anomalies in any of the values may impact sensor performance in direct or indirect ways. The analyses are primarily examinations of parameter time series combined with statistical methods such as auto- and cross-correlation functions. Chapter 2 describes how the various onboard (solar and lunar) and vicarious (in situ) calibration data will be analyzed to quantify sensor degradation, if present. The analyses also include methods for detecting the influence of charged particles on sensor performance such as might be expected in the South Atlantic Anomaly (SAA). Chapter 3 discusses the quality control of the ancillary environmental data that are routinely received from other agencies or projects which are used in the atmospheric correction algorithm (total ozone, surface wind velocity, and surface pressure; surface relative humidity is also obtained, but is not used in the initial operational algorithm). Chapter 4 explains the procedures for screening level-1, level-2, and level-3 products. These quality control operations incorporate both automated and interactive procedures which check for file format errors (all levels), navigation offsets (level-1), mask and flag performance (level-2), and product anomalies (all levels). Finally, Chapter 5 discusses the match-up data set development for comparing SeaWiFS level-2 derived products with in situ observations, as well as the subsequent outlier analyses that will be used for evaluating error sources.
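The telemetry screening described in Chapter 1 rests on standard auto- and cross-correlation estimates. A minimal sketch of the idea on synthetic series (the channels, seed, and thresholds here are invented for illustration and are not taken from the SeaWiFS processing system):

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation of a 1-D series for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / var for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)              # healthy channel: white noise
drift = noise + np.linspace(0.0, 20.0, 1000)   # anomalous channel: slow drift

# A well-behaved parameter decorrelates almost immediately; a drift or
# oscillation shows up as large coefficients at nonzero lags.
print(autocorr(noise, 3))  # lag 0 is 1 by construction; later lags near 0
print(autocorr(drift, 3))  # later lags stay near 1, flagging the channel
```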
Study and development of label-free optical biosensors for biomedical applications
NASA Astrophysics Data System (ADS)
Choi, Charles J.
For the majority of assays currently performed, fluorescent or colorimetric chemical labels are commonly attached to the molecules under study so that they may be readily visualized. Methods that use labels to track biomolecular binding events are very sensitive and effective, and are employed as standardized assay protocols across research labs worldwide. However, using labels introduces experimental uncertainties due to the effect of the label on molecular conformation or active binding sites, or the inability to find an appropriate label that functions equivalently for all molecules in an experiment. Therefore, the ability to perform highly sensitive biochemical detection without fluorescent labels would further simplify assay protocols and would provide quantitative kinetic data, while removing experimental artifacts from fluorescence quenching, shelf life, and background fluorescence phenomena. In view of these advantages, the study and development of label-free optical sensor technologies have been undertaken here. Specifically, label-free photonic crystal (PC) biosensors and metal nanodome array surface-enhanced Raman scattering (SERS) substrates, both fabricated by a nanoreplica molding process, have been used to attack the problem. Chapter 1 presents work on a PC label-free biosensor incorporated into a microfluidic network for bioassay performance enhancement and determination of kinetic reaction rate constants. Chapter 2 describes a theoretical and experimental comparison of label-free biosensing in microplate, microfluidic, and spot-based affinity capture assays. Chapter 3 presents the integration of a PC biosensor with an actuate-to-open-valve microfluidic chip for pL-volume combinatorial mixing and screening applications. In Chapter 4, the development and characterization of the SERS nanodome array is shown.
Lastly, Chapter 5 describes SERS nanodome sensors incorporated into tubing for point-of-care monitoring of intravenous drugs and metabolites.
Lattice Methods and the Nuclear Few- and Many-Body Problem
NASA Astrophysics Data System (ADS)
Lee, Dean
This chapter builds upon the review of lattice methods and effective field theory of the previous chapter. We begin with a brief overview of lattice calculations using chiral effective field theory and some recent applications. We then describe several methods for computing scattering on the lattice. After that we focus on the main goal, explaining the theory and algorithms relevant to lattice simulations of nuclear few- and many-body systems. We discuss the exact equivalence of four different lattice formalisms, the Grassmann path integral, transfer matrix operator, Grassmann path integral with auxiliary fields, and transfer matrix operator with auxiliary fields. Along with our analysis we include several coding examples and a number of exercises for the calculations of few- and many-body systems at leading order in chiral effective field theory.
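As a toy illustration of the transfer-matrix formalism discussed in the chapter, the sketch below projects out the ground state of a single free particle on a one-dimensional periodic lattice by repeated application of exp(-a_t H); the lattice size, hopping strength, and time step are arbitrary illustrative choices, not the chapter's chiral-EFT setup.

```python
import numpy as np

# Toy system: one free particle on an N-site periodic lattice. The hopping
# Hamiltonian has exact eigenvalues 2*t*(1 - cos(2*pi*k/N)), so the
# ground-state energy is exactly 0 (the k = 0 plane wave).
N, t = 16, 0.5
H = np.zeros((N, N))
for i in range(N):
    H[i, i] = 2.0 * t
    H[i, (i + 1) % N] = H[i, (i - 1) % N] = -t

# Euclidean transfer matrix T = exp(-a_t * H), built from the eigenbasis of H.
a_t = 0.2
w, V = np.linalg.eigh(H)
T = V @ np.diag(np.exp(-a_t * w)) @ V.T

# Repeated application of T filters out excited states; the energy estimator
# for a normalized state is E = -log(<psi|T|psi>) / a_t.
rng = np.random.default_rng(1)
psi = rng.standard_normal(N)
for _ in range(500):
    psi = T @ psi
    psi /= np.linalg.norm(psi)
E0 = -np.log(psi @ (T @ psi)) / a_t
print(E0)  # approaches the exact ground-state energy, 0
```

The same projection idea underlies the auxiliary-field Monte Carlo formalisms of the chapter, where T is sampled stochastically instead of applied exactly.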
Overview of artificial neural networks.
Zou, Jinming; Han, Yi; So, Sung-Sau
2008-01-01
The artificial neural network (ANN), or simply neural network, is a machine learning method evolved from the idea of simulating the human brain. The data explosion in modern drug discovery research requires sophisticated analysis methods to uncover the hidden causal relationships between single or multiple responses and a large set of properties. The ANN is one of many versatile tools to meet the demand in drug discovery modeling. Compared to a traditional regression approach, the ANN is capable of modeling complex nonlinear relationships. The ANN also has excellent fault tolerance and is fast and highly scalable with parallel processing. This chapter introduces the background of ANN development and outlines the basic concepts crucially important for understanding more sophisticated ANNs. Several commonly used learning methods and network setups are discussed briefly at the end of the chapter.
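As a minimal concrete illustration of the nonlinear modeling capability described above, the sketch below trains a tiny feed-forward network by backpropagation on the XOR problem, a relationship no linear regression can fit. The architecture, learning rate, and seed are arbitrary illustrative choices, not taken from the chapter.

```python
import numpy as np

# XOR: the classic nonlinear toy problem.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 tanh units, one sigmoid output.
W1 = rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return h, out

lr = 0.5
for _ in range(10_000):
    h, out = forward(X)
    # Backpropagate the squared-error loss through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
print(np.round(out.ravel(), 2))  # should approach the XOR targets 0, 1, 1, 0
```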
Continuum Electrostatics Approaches to Calculating pKas and Ems in Proteins.
Gunner, M R; Baker, N A
2016-01-01
Proteins change their charge state through protonation and redox reactions as well as through binding charged ligands. The free energy of these reactions is dominated by solvation and electrostatic energies and modulated by protein conformational relaxation in response to the ionization state changes. Although computational methods for calculating these interactions can provide very powerful tools for predicting protein charge states, they include several critical approximations of which users should be aware. This chapter discusses the strengths, weaknesses, and approximations of popular computational methods for predicting charge states and understanding the underlying electrostatic interactions. The goal of this chapter is to inform users about applications and potential caveats of these methods as well as outline directions for future theoretical and computational research. © 2016 Elsevier Inc. All rights reserved.
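The link between the computed electrostatic free energies and the predicted charge states can be sketched in a few lines. In this illustration the 11.4 kJ/mol shift and the buried-Asp example are invented values, and sign conventions differ between codes:

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 298.15     # temperature, K

def pka_shift(ddG_kJ):
    """pKa shift produced by a free-energy change ddG (kJ/mol) that
    penalizes the deprotonated (charged, for Asp/Glu) form."""
    return ddG_kJ / (math.log(10) * R * T)

def frac_protonated(pH, pKa):
    """Henderson-Hasselbalch fraction of the protonated species."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

# A buried Asp whose charged form is destabilized by ~11.4 kJ/mol relative
# to solution shifts its pKa up by about 2 units.
print(pka_shift(11.4))            # ~2.0
print(frac_protonated(7.0, 4.0))  # ~0.001: a solvent-exposed Asp at pH 7
```

Continuum-electrostatics codes effectively compute the ddG term above from solvation and charge-charge interaction energies, then report the shifted pKa.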
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The purpose of this Strategic Petroleum Reserve (SPR) Plan Amendment is to provide a Distribution Plan setting forth the method of drawdown and distribution of the Reserve. Chapter VII of the SPR Plan contained a Distribution Plan which identified and discussed the major objectives, criteria, and other factors that will be considered in developing the detailed plan. This Amendment replaces Chapter VII of the SPR Plan in its entirety.
The Unexplored Mechanisms and Regulatory Functions of Ribosomal Translocation
NASA Astrophysics Data System (ADS)
Alejo, Jose Luis
In every cell, protein synthesis is carried out by the ribosome, a complex macromolecular RNA-protein assembly. Decades of structural and kinetic studies have increased our understanding of ribosome initiation, decoding, translocation, and termination. Yet the underlying mechanisms of these fundamental processes have yet to be fully delineated, and hence the molecular basis of their regulation remains obscure. Here, single-molecule fluorescence methods are applied to decipher the mechanism and regulatory roles of the multi-step process of directional substrate translocation on the ribosome that accompanies every round of protein synthesis. In Chapter 1, single-molecule fluorescence resonance energy transfer (smFRET) is introduced as a tool for studying bacterial ribosome translocation. Chapter 2 details the experimental methods. In Chapter 3, the elongation factor G (EF-G)-catalyzed movement of substrates through the ribosome is examined from several perspectives, using signals reporting on various degrees of freedom of ribosome dynamics. Two ribosomal states interconvert in the presence of EF-G(GDP), displaying novel head domain motions, until relocking takes place. In Chapter 4, to test whether the fluctuations leading to relocking are correlated with engagement of the P site by the peptidyl-tRNA, the translocation of miscoded tRNAs is studied. Severe defects in the relocking stages of translocation reveal the correlation between this new stage of translocation and P-site tRNA engagement.
NASA Astrophysics Data System (ADS)
Stroock, Abraham Duncan
This thesis presents the use of patterned surfaces for controlling fluid dynamics on a sub-millimeter scale, and for fabricating a new class of polymeric materials. In chapters 1--4, chemical and mechanical structures were used to control the form of flows of fluids in microchannels. This work was done in the context of the development of microfluidic technology for performing chemical tasks in portable, integrated devices. Chapter 1 reviews this work for an audience of chemists who are potential users of these techniques in the development of micro-analytical and micro-synthetic devices. Appendix 1 contains a more general review of microfluidics. Chapter 2 presents experimental results on the use of patterned surface charge density to create new electroosmotic (EO) flows in microchannels; the chapter includes a successful model of the observed flows. In Chapter 3, patterns of topography on the wall of a microchannel were used to generate recirculation in pressure-driven flows. The design and characterization of an efficient mixer based on these flows is presented. A theoretical treatment of these flows is given in Appendix 2. The experimental methods used for the work with both EO and pressure-driven flows are presented in Chapter 4. In Chapter 5, a pattern of asymmetrical grooves in a heated plate was used to perturb Marangoni-Bénard (M-B) convection, a dynamic system that spontaneously forms patterned flows. The interaction of the imposed pattern and the inherent pattern of the M-B convection led to a net flow in the plane of the convecting layer of fluid. The direction of this flow depended on the orientation of the asymmetrical grooves, the temperature difference across the layer, and the thickness of the layer. A phenomenological model is presented to explain this ratchet effect, in which local recirculation was coupled into a global flow.
In Chapter 6, surfaces patterned by microcontact printing were used as a workbench on which to build molecularly thin polymer films of well-defined lateral size and shape for subsequent release into solution; the released structures are referred to as two-dimensional (2D) polymers. This type of structure has been a theoretical curiosity and an experimental challenge for several decades. The key element of this method was the use of hydrophobic interactions as a "switchable" adhesive that attached the films to the surface during growth in water and then allowed the completed films to be removed in air. The structure and chemical composition of the films was characterized.
Environmental Chemical Analysis (by B. B. Kebbekus and S. Mitra)
NASA Astrophysics Data System (ADS)
Bower, Reviewed By Nathan W.
1999-11-01
This text helps to fill a void in the market, as there are relatively few undergraduate instrumental analysis texts designed specifically for the expanding population of environmental science students. R. N. Reeve's introductory, open-learning Environmental Analysis (Wiley, 1994) is one of the few, and it is aimed at a lower level and is less appropriate for traditional classroom study. Kebbekus and Mitra's book appears to be an update of I. Marr and M. Cresser's excellent 1983 text by the same name (and also published under the Chapman and Hall imprint). It assumes no background in instrumental methods of analysis but it does depend upon a good general chemistry background in kinetic and equilibrium calculations and the standard laboratory techniques found in a classical introduction to analytical chemistry. The slant taken by the authors is aimed more toward engineers, not only in the choice of topics, but also in how they are presented. For example, the statistical significance tests presented follow an engineering format rather than the standard used in analytical chemistry. This approach does not detract from the book's clarity. The writing style is concise and the book is generally well written. The earlier text, which has become somewhat of a classic, took the unusual step of teaching the instruments in the context of their environmental application. It was divided into sections on the "atmosphere", the "hydrosphere", the "lithosphere", and the "biosphere". This text takes a similar approach in the second half, with chapters on methods for air, water, and solid samples. Users who intend to use the book as a text instead of a reference will appreciate the addition of chapters in the first half of the book on spectroscopic, chromatographic, and mass spectrometric methods. 
The six chapters in these two parts of the book, along with four chapters scattered throughout on environmental measurements, sampling, sample preparation, and quality assurance, make a nice package overall, although I might personally prefer a chapter on environmental chemometrics as well. Most of the major instrumental methods actively employed in environmental analysis are treated either in the theoretical chapters or in the later application chapters. These include introductions to UV-vis, FTIR, SFC, HPLC, IC (but not CE), GC, GC-MS, ISEs, anodic stripping, FAA, GFAA, XRF, ICP, and ICP-MS, and even two pages on the basics of immunoassays. Although this text provides an update of the earlier book, its greatest failing is a particular strength of the first text: it fails to provide any detailed references within the text, relying on an average of five generic "suggested readings" at the end of each chapter. Even tables such as "Some US drinking water quality standards" give no references, setting a bad example for students who have to write research papers of their own. As it also does not provide the detailed procedures or fine-quality figures that were available in the earlier text, it is less valuable as a reference book or for library acquisitions. In the first book the detailed procedures served as a "lab manual within the text," and this increased its pedagogic value tremendously. Still, this text does make use of generalized procedures to step through many of the standard methods encountered by practicing environmental scientists, and the tables are in most cases superior to those in similar texts, lacking only the references to make them as useful as they might be. A second weakness of note comes from the organization. Having two different parts of the book covering material that relates to each of the instrumental methods means that it is not always clear where the reader should go to find information that relates to a particular method.
For example, specifics on sampling equipment for water and soils appear in the chapter on sampling, but for air they appear in the applications section. Similarly, the sample preparation chapter would make more logical sense if it appeared before the instrumental methods that make use of it, and the F-test should be discussed before it is called upon to tell whether two populations have the same variance. The various discussions rarely refer the reader to related material located in other parts of the text, so occasionally one is left wondering about the lack of coverage. However, in the end the authors do introduce all the topics fairly well, and the text seems to have a good index. In summary, this text provides a very readable introduction to instrumental environmental analysis that is appropriate for a one-semester course designed for advanced undergraduate environmental engineering and environmental science students. If the instructor is careful to read the text beforehand so as to guide the students appropriately, supplying additional references when experimental work is to be undertaken, it should also work satisfactorily in courses that have a laboratory component.
Eliciting Spontaneous Speech in Bilingual Students: Methods & Techniques.
ERIC Educational Resources Information Center
Cornejo, Ricardo J.; And Others
Intended to provide practical information pertaining to methods and techniques for speech elicitation and production, the monograph offers specific methods and techniques to elicit spontaneous speech in bilingual students. Chapter 1, "Traditional Methodologies for Language Production and Recording," presents an overview of studies using…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendes, J.; Bessa, R.J.; Keko, H.
Wind power forecasting (WPF) provides important inputs to power system operators and electricity market participants. It is therefore not surprising that WPF has attracted increasing interest within the electric power industry. In this report, we document our research on improving statistical WPF algorithms for point, uncertainty, and ramp forecasting. Below, we provide a brief introduction to the research presented in the following chapters. For a detailed overview of the state-of-the-art in wind power forecasting, we refer to [1]. Our related work on the application of WPF in operational decisions is documented in [2]. Point forecasts of wind power are highly dependent on the training criteria used in the statistical algorithms that are used to convert weather forecasts and observational data to a power forecast. In Chapter 2, we explore the application of information theoretic learning (ITL) as opposed to the classical minimum square error (MSE) criterion for point forecasting. In contrast to the MSE criterion, ITL criteria do not assume a Gaussian distribution of the forecasting errors. We investigate to what extent ITL criteria yield better results. In addition, we analyze time-adaptive training algorithms and how they enable WPF algorithms to cope with non-stationary data and, thus, to adapt to new situations without requiring additional offline training of the model. We test the new point forecasting algorithms on two wind farms located in the U.S. Midwest. Although there have been advancements in deterministic WPF, a single-valued forecast cannot provide information on the dispersion of observations around the predicted value. We argue that it is essential to generate, together with (or as an alternative to) point forecasts, a representation of the wind power uncertainty.
Wind power uncertainty representation can take the form of probabilistic forecasts (e.g., probability density function, quantiles), risk indices (e.g., prediction risk index) or scenarios (with spatial and/or temporal dependence). Statistical approaches to uncertainty forecasting basically consist of estimating the uncertainty based on observed forecasting errors. Quantile regression (QR) is currently a commonly used approach in uncertainty forecasting. In Chapter 3, we propose new statistical approaches to the uncertainty estimation problem by employing kernel density forecast (KDF) methods. We use two estimators in both offline and time-adaptive modes, namely, the Nadaraya-Watson (NW) and Quantile-copula (QC) estimators. We conduct detailed tests of the new approaches using QR as a benchmark. One of the major issues in wind power generation is sudden and large changes of wind power output over a short period of time, namely ramping events. In Chapter 4, we perform a comparative study of existing definitions and methodologies for ramp forecasting. We also introduce a new probabilistic method for ramp event detection. The method starts with a stochastic algorithm that generates wind power scenarios, which are passed through a high-pass filter for ramp detection and estimation of the likelihood of ramp events to happen. The report is organized as follows: Chapter 2 presents the results of the application of ITL training criteria to deterministic WPF; Chapter 3 reports the study on probabilistic WPF, including new contributions to wind power uncertainty forecasting; Chapter 4 presents a new method to predict and visualize ramp events, comparing it with state-of-the-art methodologies; Chapter 5 briefly summarizes the main findings and contributions of this report.
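The quantile regression benchmark of Chapter 3 rests on the pinball (check) loss, whose minimizer over a constant predictor is the empirical quantile. A minimal sketch on synthetic data (the skewed error distribution and the grid search are illustrative only, not the report's method):

```python
import numpy as np

def pinball(y, q_hat, tau):
    """Pinball (check) loss for quantile level tau."""
    e = y - q_hat
    return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

rng = np.random.default_rng(0)
# Synthetic skewed "forecast errors", loosely mimicking wind power residuals.
y = rng.gamma(shape=2.0, scale=1.0, size=10_000)

# Minimizing the pinball loss over a constant recovers the tau-quantile.
tau = 0.9
grid = np.linspace(y.min(), y.max(), 2001)
best = grid[int(np.argmin([pinball(y, g, tau) for g in grid]))]

print(best, np.quantile(y, tau))  # the two values agree closely
```

In full quantile regression the constant is replaced by a model of the explanatory variables, fitted by minimizing the same loss.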
Gallegos, Tanya J.; Bern, Carleton R.; Birdwell, Justin E.; Haines, Seth S.; Engle, Mark A.
2015-01-01
Global trends toward developing new energy resources from lower grade, larger tonnage deposits that are not generally accessible using “conventional” extraction methods involve variations of subsurface in situ extraction techniques including in situ oil-shale retorting, hydraulic fracturing of petroleum reservoirs, and in situ recovery (ISR) of uranium. Although these methods are economically feasible and perhaps result in a smaller above-ground land-use footprint, there remain uncertainties regarding potential subsurface impacts to groundwater. This chapter provides an overview of the role of water in these technologies and the opportunities and challenges for water reuse and recycling.
Effective Lagrangians and Current Algebra in Three Dimensions
NASA Astrophysics Data System (ADS)
Ferretti, Gabriele
In this thesis we study three dimensional field theories that arise as effective Lagrangians of quantum chromodynamics in Minkowski space with signature (2,1) (QCD3). In the first chapter, we explain the method of effective Lagrangians and the relevance of current algebra techniques to field theory. We also provide the physical motivations for the study of QCD3 as a toy model for confinement and as a theory of quantum antiferromagnets (QAF). In chapter two, we derive the relevant effective Lagrangian by studying the low energy behavior of QCD3, paying particular attention to how the global symmetries are realized at the quantum level. In chapter three, we show how baryons arise as topological solitons of the effective Lagrangian and also show that their statistics depend on the number of colors as predicted by the quark model. We calculate mass splittings and magnetic moments of the soliton and find logarithmic corrections to the naive quark model predictions. In chapter four, we derive the current algebra of the theory. We find that the current algebra is a cohomologically non-trivial generalization of Kac-Moody algebras to three dimensions. This fact may provide a new, non-perturbative way to quantize the theory. In chapter five, we discuss the renormalizability of the model in the large-N expansion. We prove the validity of the non-renormalization theorem and compute the critical exponents in a specific limiting case, the CP^{N-1} model with a Chern-Simons term. Finally, chapter six contains some brief concluding remarks.
Toroidal Optical Microresonators as Single-Particle Absorption Spectrometers
NASA Astrophysics Data System (ADS)
Heylman, Kevin D.
Single-particle and single-molecule measurements are invaluable tools for characterizing structural and energetic properties of molecules and nanomaterials. Photothermal microscopy in particular is an ultrasensitive technique capable of single-molecule resolution. In this thesis I introduce a new form of photothermal spectroscopy involving toroidal optical microresonators as detectors and a pair of non-interacting lasers as pump and probe for performing single-target absorption spectroscopy. The first three chapters will discuss the motivation, design principles, underlying theory, and fabrication process for the microresonator absorption spectrometer. With an early version of the spectrometer, I demonstrate photothermal mapping and all-optical tuning with toroids of different geometries in Chapter 4. In Chapter 5, I discuss photothermal mapping and measurement of the absolute absorption cross-sections of individual carbon nanotubes. For the next generation of measurements I incorporate all of the advances described in Chapter 2, including a double-modulation technique to improve detection limits and a tunable pump laser for spectral measurements on single gold nanoparticles. In Chapter 6 I observe sharp Fano resonances in the spectra of gold nanoparticles and describe them with a theoretical model. In Chapter 7, I continue the study of this photonic-plasmonic hybrid system and explore the thermal tuning of the Fano resonance phase while quantifying the Fisher information. The new method of photothermal single-particle absorption spectroscopy that I will discuss in this thesis has reached record detection limits for microresonator sensing and is within striking distance of becoming the first single-molecule room-temperature absorption spectrometer.
Design with constructal theory: Steam generators, turbines and heat exchangers
NASA Astrophysics Data System (ADS)
Kim, Yong Sung
This dissertation shows that the architecture of steam generators, steam turbines and heat exchangers for power plants can be predicted on the basis of the constructal law. According to constructal theory, the flow architecture emerges such that it provides progressively greater access to its currents. Each chapter shows how constructal theory guides the generation of designs in pursuit of higher performance. Chapter two shows how the tube diameters, the number of riser tubes, the water circulation rate and the rate of steam production are determined by maximizing the heat transfer rate from hot gases to riser tubes and minimizing the global flow resistance under the fixed volume constraint. Chapter three shows how the optimal spacing between adjacent tubes, the number of tubes for the downcomer and the riser and the location of the flow reversal for the continuous steam generator are determined by the intersection of asymptotes method, and by minimizing the flow resistance under the fixed volume constraints. Chapter four shows that the mass inventory for steam turbines can be distributed between high pressure and low pressure turbines such that the global performance of the power plant is maximal under the total mass constraint. Chapter five presents the more general configuration of a two-stream heat exchanger with forced convection on the hot side and natural circulation on the cold side. Chapter six demonstrates that segmenting a tube with condensation on the outer surface leads to a smaller thermal resistance, and generates design criteria for the performance of multi-tube designs.
All-optical image processing with nonlinear liquid crystals
NASA Astrophysics Data System (ADS)
Hong, Kuan-Lun
Liquid crystals are fascinating materials because of several advantages such as large optical birefringence, dielectric anisotropy, and easy compatibility with most kinds of materials. Beyond the electro-optical properties of liquid crystals widely applied in display and switching applications, transparency across most of the spectrum also makes liquid crystals a good candidate for all-optical processing. The fast response of liquid crystals resulting from multiple nonlinear effects, such as thermal and density effects, can even enable real-time processing. In addition, blue phase liquid crystals, with spontaneously self-assembled three-dimensional cubic structures, have attracted academic attention. In my dissertation, I divide the contents into six parts. In Chapter 1, a brief introduction to liquid crystals is presented, including current progress and the classification of liquid crystals. Anisotropy and laser-induced director axis reorientation are presented in Chapter 2. In Chapter 3, I solve the electrostrictive coupled equations and analyze laser-induced thermal and density effects in both static and dynamic regimes. Furthermore, a dynamic simulation of laser-induced density fluctuation is proposed by applying the finite element method. In Chapter 4, two image processing setups are presented. One is the intensity inversion experiment, in which intensity-dependent phase modulation is the mechanism. The other is the wavelength conversion experiment, in which an invisible image is read with a visible probe beam. Both experiments are accompanied by simulations to verify the agreement between theory and experimental results. In Chapter 5, optical properties of blue phase liquid crystals are introduced and discussed. The results of grating diffraction and thermal refractive index gradient measurements are presented in this chapter. In addition, fiber array imaging and switching with BPLCs are included in this chapter.
Finally, I give a brief summary and outline future research directions in Chapter 6.
Structure, Mechanics and Synthesis of Nanoscale Carbon and Boron Nitride
NASA Astrophysics Data System (ADS)
Rinaldo, Steven G.
This thesis is divided into two parts. In Part I, we examine the properties of thin sheets of carbon and boron nitride. We begin with an introduction to the theory of elastic sheets, where the stretching and bending modes are considered in detail. The coupling between stretching and bending modes is thought to play a crucial role in the thermodynamic stability of atomically-thin 2D sheets such as graphene. In Chapter 2, we begin by looking at the fabrication of suspended, atomically thin sheets of graphene. We then study their mechanical resonances, which are read out via an optical transduction technique. The frequency of the resonators was found to depend on their temperature, as was their quality factor. We conclude by offering some interpretations of the data in terms of the stretching and bending modes of graphene. In Chapter 3, we look briefly at the fabrication of thin sheets of carbon and boron nitride nanotubes. We examine the structure of the sheets using transmission and scanning electron microscopy (TEM and SEM, respectively). We then show a technique by which one can make sheets suspended over a trench with adjustable supports. Finally, DC measurements of the resistivity of the sheets in the temperature range 600–1400 °C are presented. In Chapter 4, we study the folding of few-layer graphene oxide, graphene and boron nitride into 3D aerogel monoliths. The properties of graphene oxide are first considered, after which the structure of graphene and boron nitride aerogels is examined using TEM and SEM. Some models for their structure are proposed. In Part II, we look at synthesis techniques for boron nitride (BN). In Chapter 5, we study the conversion of carbon structures to boron nitride via carbothermal reduction of boron oxide followed by nitridation. We apply the conversion to a wide variety of morphologies, including aerogels, carbon fibers and nanotubes, and highly oriented pyrolytic graphite.
In the latter chapters, we look at the formation of boron nitride nanotubes (BNNTs). In Chapter 6, we look at various methods of producing BNNTs from boron droplets, and introduce a new method involving injection of boron powder into an induction furnace. In Chapter 7 we consider another useful process, where ammonia is reacted with boron vapor generated in situ, either through the reaction of boron with metal oxides or through the decomposition of metal borides.
Solution Synthesis of Atomically Precise Graphene Nanoribbons
NASA Astrophysics Data System (ADS)
Shekhirev, Mikhail; Sinitskii, Alexander
2017-05-01
Bottom-up fabrication of narrow strips of graphene, also known as graphene nanoribbons or GNRs, is an attractive way to open a bandgap in semimetallic graphene. In this chapter, we review recent progress in solution-based synthesis of GNRs with atomically precise structures. We discuss a variety of atomically precise GNRs and highlight theoretical and practical aspects of their structural design and solution synthesis. These GNRs are typically synthesized through a polymerization of rationally designed molecular precursors followed by a planarization through a cyclodehydrogenation reaction. We discuss various synthetic techniques for polymerization and planarization steps, possible approaches for chemical modification of GNRs, and compare the properties of GNRs that could be achieved by different synthetic methods. We also discuss the importance of the rational design of molecular precursors to avoid isomerization during the synthesis and achieve GNRs that have only one possible structure. Significant attention in this chapter is paid to the methods of material characterization of solution-synthesized GNRs. The chapter is concluded with the discussion of the most significant challenges in the field and the future outlook.
Thomas P. Holmes; Wiktor L. Adamowicz
2003-01-01
Stated preference methods of environmental valuation have been used by economists for decades where behavioral data have limitations. The contingent valuation method (Chapter 5) is the oldest stated preference approach, and hundreds of contingent valuation studies have been conducted. More recently, and especially over the last decade, a class of stated preference...
In silico prediction of post-translational modifications.
Liu, Chunmei; Li, Hui
2011-01-01
Methods for predicting protein post-translational modifications have been developed extensively. In this chapter, we review major post-translational modification prediction strategies, with a particular focus on statistical and machine learning approaches. We present the workflow of the methods and summarize their advantages and disadvantages.
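As an illustration of the machine-learning strategy surveyed in this chapter, a minimal predictor over one-hot encoded residue windows can be sketched as follows. This is a toy, not any specific published tool; the "center residue is S/T" labeling rule stands in for real phosphorylation annotations:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot_window(seq):
    """Encode a fixed-length residue window as a flat one-hot vector."""
    v = np.zeros(len(seq) * len(AMINO_ACIDS))
    for i, aa in enumerate(seq):
        v[i * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
    return v

def train_logistic(X, y, lr=0.5, steps=1000):
    """Plain gradient-descent logistic regression; returns (weights, bias)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# toy data: label a 5-residue window positive when its centre residue is S or T
rng = np.random.default_rng(1)
windows = ["".join(rng.choice(list(AMINO_ACIDS), 5)) for _ in range(400)]
labels = np.array([1.0 if w[2] in "ST" else 0.0 for w in windows])
X = np.stack([one_hot_window(w) for w in windows])
w, b = train_logistic(X, labels)
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (preds == labels).mean()
```

Real predictors replace the toy rule with curated modification-site databases and the plain one-hot encoding with richer sequence and structure features, but the window-encode-classify workflow is the same.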
Applications of one-dimensional structured nanomaterials as biosensors and transparent electronics
NASA Astrophysics Data System (ADS)
Ishikawa, Fumiaki
This dissertation presents applications of one-dimensional structured nanomaterials, carbon nanotubes and In2O3 nanowires, for biosensors and transparent electronics. Chapter 1 gives the motivation to study applications of one-dimensional structured nanomaterials, and also a brief introduction to the structure, synthesis, and electronic properties of carbon nanotubes and In2O3 nanowires. In Chapter 2, an introduction to and motivation for biosensors using nanotubes/nanowires is given, followed by an overview of important background knowledge and concepts in biosensing. In Chapter 3, the application of carbon nanotube biosensors to brown tide algae detection is presented. Our devices successfully detected a brown tide marker selectively with real-time response. In Chapter 4, we demonstrate that In2O3 nanowire biosensors coupled with an antibody mimic protein (Fibronectin, Fn) can be used to detect nucleocapsid (N) protein, a biomarker for severe acute respiratory syndrome (SARS), at concentrations down to the sub-nanomolar range. In Chapter 5, we develop an analytical method to calibrate nanowire biosensor responses that can suppress the device-to-device variation in sensing response significantly. In Chapter 6, we investigate the effect of nanotube density on biosensor performance, and prove through systematic studies that it plays an important role. In Chapter 7, I propose a future direction for nanobiosensor research, and show preliminary results along the proposed direction. I first present a concept of an ideal bioassay system with a list of requirements for the system, and propose the strategy of multi-integration to establish a system based on nanobiosensors that satisfies all of the requirements. In Chapter 8, we demonstrate high-performance fully transparent transistors based on transfer-printed aligned carbon nanotubes on both rigid and flexible substrates.
We achieved device mobility as high as 1,300 cm²V⁻¹s⁻¹ on glass substrates, which is the highest among transparent transistors reported so far. We also demonstrated fully transparent PMOS inverters on flexible substrates, and successfully controlled commercial GaN light-emitting diodes (LEDs) with a light intensity modulation of 10³. Lastly, a brief summary of this thesis is given in Chapter 9.
Landslides and engineering geology of the Seattle, Washington, area
Baum, Rex L.; Godt, Jonathan W.; Highland, Lynn M.
2008-01-01
This volume brings together case studies and summary papers describing the application of state-of-the-art engineering geologic methods to landslide hazard analysis for the Seattle, Washington, area. An introductory chapter provides a thorough description of the Quaternary and bedrock geology of Seattle. Nine additional chapters review the history of landslide mapping in Seattle, present case studies of individual landslides, describe the results of spatial assessments of landslide hazard, discuss hydrologic controls on landsliding, and outline an early warning system for rainfall-induced landslides.
ERIC Educational Resources Information Center
Jimenez Lozano, Blanca; And Others
This document is an English-language abstract (approximately 1,500 words) of a study on educational research in Mexico. Chapter one discusses the importance of educational research in terms of its role in both scientific and technical development; it should use scientific methods so that it will have solid foundations. Chapter two is a survey of…
New Passivation Methods of GaAs.
1980-01-01
Table of contents fragment: Fabrication of Thin Nitride Layers on GaAs; Chapter 7, Passivation of InGaAsP; Chapter 8, Emulsions on GaAs Surfaces; Appendix. ...not yet given any useful results. The deposition of SiO2 using emulsions is pursued, and first results on the possibility of GaAs doping are... A glycol-tartaric acid based aqueous solution was used to anodically oxidise the gate notch after the source and drain ohmic contacts were formed.
Data Mining for Financial Applications
NASA Astrophysics Data System (ADS)
Kovalerchuk, Boris; Vityaev, Evgenii
This chapter describes Data Mining in finance by discussing financial tasks, specifics of methodologies and techniques in this Data Mining area. It includes time dependence, data selection, forecast horizon, measures of success, quality of patterns, hypothesis evaluation, problem ID, method profile, attribute-based and relational methodologies. The second part of the chapter discusses Data Mining models and practice in finance. It covers use of neural networks in portfolio management, design of interpretable trading rules and discovering money laundering schemes using decision rules and relational Data Mining methodology.
Non-commutative methods in quantum mechanics
NASA Astrophysics Data System (ADS)
Millard, Andrew Clive
1997-09-01
Non-commutativity appears in physics almost hand in hand with quantum mechanics. Non-commuting operators corresponding to observables lead to Heisenberg's Uncertainty Principle, which is often used as a prime example of how quantum mechanics transcends 'common sense', while the operators that generate a symmetry group are usually given in terms of their commutation relations. This thesis discusses a number of new developments which go beyond the usual stopping point of non-commuting quantities as matrices with complex elements. Chapter 2 shows how certain generalisations of quantum mechanics, from using complex numbers to using other (often non-commutative) algebras, can still be written as linear systems with symplectic phase flows. Chapter 3 deals with Adler's trace dynamics, a non-linear graded generalisation of Hamiltonian dynamics with supersymmetry applications, where the phase space coordinates are (generally non-commuting) operators, and reports on aspects of a demonstration that the statistical averages of the dynamical variables obey the rules of complex quantum field theory. The last two chapters discuss specific aspects of quaternionic quantum mechanics. Chapter 4 reports a generalised projective representation theory and presents a structure theorem that categorises quaternionic projective representations. Chapter 5 deals with a generalisation of the coherent states formalism and examines how it may be applied to two commonly used groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morello, Michael Joseph
2007-12-19
The thesis is organized as follows: Chapter 1 describes the theoretical framework of non-leptonic B⁰ₛ → h⁺h′⁻ decays, with a simple overview of the CP violation mechanism within the Standard Model and of the most used phenomenological approaches in the evaluation of strong interaction contributions. The chapter also contains a review of the theoretical expectations and the current experimental measurements, along with a discussion of the importance of studying such decays. Chapter 2 contains a general description of the Tevatron collider and of the CDF II detector. Chapter 3 is devoted to the description of the data sample used for the measurement and the method used in extracting the signal from the background. Particular attention is dedicated to the on-line trigger selection, which is crucial to collect a sample enriched in B⁰ₛ → h⁺h′⁻ decays. Chapter 4 shows how the information from kinematics and particle identification was used to achieve a statistical discrimination amongst modes to extract individual measurements. The available resolutions in mass or in particle identification are separately insufficient for an event-by-event separation of B⁰ₛ → h⁺h′⁻ modes. The choice of observables and the technique used to combine them is an important and innovative aspect of the analysis described in this thesis. Chapter 5 is devoted to the accurate determination of the invariant mass lineshape. This is a crucial ingredient for resolving overlapping mass peaks. This chapter details all resolution effects, with particular attention to the tails due to the emission of low-energy photons from charged kaons and pions in the final state (FSR). For the first time the effect of FSR has been accurately accounted for in a CDF analysis. Chapter 6 describes how kinematic and PID information, discussed in chap. 4 and chap.
5 were combined in a maximum Likelihood fit to statistically determine the composition of the B⁰ₛ → h⁺h′⁻ sample. This kinematics-PID combined fit was developed and performed for the first time at CDF in the analysis presented in this thesis, and this methodology was later inherited by several other analyses. Chapter 7 is devoted to the study of the isolation variable, which is a crucial handle to enhance the signal-to-background ratio in the off-line selection. It exploits the property that b-hadrons tend to carry a larger fraction of the transverse momentum of the particles produced in the fragmentation, with respect to lighter hadrons. Since the simulators do not accurately reproduce the fragmentation processes, this chapter is devoted to the study of the control data sample of B⁰ₛ → J/ΨX decays to probe the characteristics of this variable. Chapter 8 describes an innovative procedure used to optimize the selection to minimize the statistical uncertainty on the quantities one wishes to measure. The procedure is based on the fit of composition described in chap. 6. Chapter 9 reports the results of the fit of composition described in chap. 6 and the cross-checks performed to verify the goodness of the fit. In order to translate the parameters returned by the fit into physics measurements, the relative efficiency corrections between the various decay modes need to be applied. Chapter 10 is devoted to the description of these corrections. Chapter 11 describes the measurement of the detector-induced charge asymmetry between positively and negatively charged kaons and pions, due to their different probability of strong interaction in the tracker material, using real data. This allows extracting the acceptance correction factor for the CP asymmetries measurement without any external inputs from the simulation, and performing a powerful check of the whole analysis.
Chapter 12 describes the main sources of systematic uncertainties and the method used to evaluate the significance of the results on rare modes. The final results of the measurements and their interpretation are discussed in chap. 13.
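The statistical (rather than event-by-event) separation underlying the composition fit can be caricatured with a much simpler sketch: an unbinned maximum-likelihood scan for the fraction of one component in a mixture of two overlapping mass peaks. The shapes, masses, and numbers below are toy assumptions, not the CDF analysis:

```python
import numpy as np

def nll(f, data, mu1, mu2, sigma):
    """Negative log-likelihood for a two-Gaussian mixture with fraction f.

    Equal widths mean the normalization constants cancel in the fit for f.
    """
    p1 = np.exp(-0.5 * ((data - mu1) / sigma) ** 2)
    p2 = np.exp(-0.5 * ((data - mu2) / sigma) ** 2)
    return -np.sum(np.log(f * p1 + (1 - f) * p2 + 1e-300))

# toy sample: 70% of events in a peak at mass 5.28, 30% at 5.37, with
# resolution 0.04, so the peaks overlap and no event can be classified alone
rng = np.random.default_rng(2)
n = 5000
data = np.where(rng.uniform(size=n) < 0.7,
                rng.normal(5.28, 0.04, n),
                rng.normal(5.37, 0.04, n))

# scan the mixture fraction and keep the maximum-likelihood value
fs = np.linspace(0.01, 0.99, 491)
f_hat = fs[np.argmin([nll(f, data, 5.28, 5.37, 0.04) for f in fs])]
```

The thesis extends this idea by floating many components and by multiplying in particle-identification densities, which is what makes the statistical separation of the overlapping modes possible.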
Essays on inference in economics, competition, and the rate of profit
NASA Astrophysics Data System (ADS)
Scharfenaker, Ellis S.
This dissertation is comprised of three papers that demonstrate the role of Bayesian methods of inference and Shannon's information theory in classical political economy. The first chapter explores the empirical distribution of profit rate data from North American firms from 1962-2012. This chapter addresses the fact that existing methods for sample selection from noisy profit rate data in the industrial organization field of economics tend to condition on a covariate's value, which risks discarding information. Conditioning sample selection instead on the profit rate data's structure, by means of a two-component (signal and noise) Bayesian mixture model, we find the profit rate sample to be time-stationary Laplace distributed, corroborating earlier estimates of cross-section distributions. The second chapter compares alternative probabilistic approaches to discrete (quantal) choice analysis and examines the various ways in which they overlap. In particular, the work on individual choice behavior by Duncan Luce and the extension of this work to quantal response problems by game theoreticians is shown to be related both to the rational inattention work of Christopher Sims, through Shannon's information theory, and to the maximum entropy principle of inference proposed by the physicist Edwin T. Jaynes. In the third chapter I propose a model of "classically" competitive firms facing informational entropy constraints in their decisions to enter or exit markets based on profit rate differentials. The result is a three-parameter logit quantal response distribution for firm entry and exit decisions. Bayesian methods are used for inference into the distribution of entry and exit decisions conditional on profit rate deviations, and firm-level data from Compustat is used to test these predictions.
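The two-component (signal and noise) mixture idea from the first chapter can be sketched with a simple EM fit of a Laplace "signal" plus a uniform "noise" component. This is an illustrative stand-in with invented parameters, not the dissertation's Bayesian implementation:

```python
import numpy as np

def weighted_median(x, w):
    """Median of x under weights w (the ML location estimate for a Laplace)."""
    order = np.argsort(x)
    cw = np.cumsum(w[order])
    return x[order][np.searchsorted(cw, 0.5 * cw[-1])]

def em_signal_noise(x, noise_halfwidth, steps=200):
    """EM for a Laplace signal + uniform noise mixture.

    Returns the Laplace location m, scale b, and signal weight pi; the
    uniform noise has known support [-noise_halfwidth, +noise_halfwidth].
    """
    m, b, pi = np.median(x), x.std(), 0.5
    for _ in range(steps):
        sig = pi * np.exp(-np.abs(x - m) / b) / (2.0 * b)
        noi = (1.0 - pi) / (2.0 * noise_halfwidth)
        r = sig / (sig + noi)          # E-step: posterior P(signal | x)
        pi = r.mean()                  # M-step updates
        m = weighted_median(x, r)
        b = np.sum(r * np.abs(x - m)) / np.sum(r)
    return m, b, pi

# toy data: 80% Laplace "signal" around 0.1, 20% uniform "noise" on [-1, 1]
rng = np.random.default_rng(3)
x = np.concatenate([rng.laplace(0.1, 0.05, 3200), rng.uniform(-1, 1, 800)])
m, b, pi = em_signal_noise(x, noise_halfwidth=1.0)
```

The posterior responsibilities `r` play the role of a soft sample selection: observations are down-weighted rather than discarded, which is the point made against covariate-threshold selection rules.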
Sustaining Financial Support through Workforce Development Grants and Contracts
ERIC Educational Resources Information Center
Brumbach, Mary A.
2005-01-01
Workforce development grants and contracts are important methods for sustaining financial support for community colleges. This chapter details decision factors, college issues, possible pitfalls, and methods for procuring and handling government contracts and grants for workforce training.
NASA Astrophysics Data System (ADS)
Wetzel, Andrew R.; Hopkins, Philip F.; Kim, Ji-hoon; Faucher-Giguère, Claude-André; Kereš, Dušan; Quataert, Eliot
2016-08-01
Low-mass “dwarf” galaxies represent the most significant challenges to the cold dark matter (CDM) model of cosmological structure formation. Because these faint galaxies are (best) observed within the Local Group (LG) of the Milky Way (MW) and Andromeda (M31), understanding their formation in such an environment is critical. We present first results from the Latte Project: the Milky Way on Feedback in Realistic Environments (FIRE). This simulation models the formation of an MW-mass galaxy to z = 0 within ΛCDM cosmology, including dark matter, gas, and stars at unprecedented resolution: baryon particle mass of 7070 M⊙ with gas kernel/softening that adapts down to 1 pc (with a median of 25-60 pc at z = 0). Latte was simulated using the GIZMO code with a mesh-free method for accurate hydrodynamics and the FIRE-2 model for star formation and explicit feedback within a multi-phase interstellar medium. For the first time, Latte self-consistently resolves the spatial scales corresponding to half-light radii of dwarf galaxies that form around an MW-mass host down to M⋆ ≳ 10⁵ M⊙. Latte’s population of dwarf galaxies agrees with the LG across a broad range of properties: (1) distributions of stellar masses and stellar velocity dispersions (dynamical masses), including their joint relation; (2) the mass-metallicity relation; and (3) a diverse range of star formation histories, including their mass dependence. Thus, Latte produces a realistic population of dwarf galaxies at M⋆ ≳ 10⁵ M⊙ that does not suffer from the “missing satellites” or “too big to fail” problems of small-scale structure formation. We conclude that baryonic physics can reconcile observed dwarf galaxies with standard ΛCDM cosmology.
NASA Astrophysics Data System (ADS)
Premaratne, Pavithra Dhanuka
Disruption and fragmentation of an asteroid using nuclear explosive devices (NEDs) is a highly complex yet practical solution to mitigating the impact threat of asteroids with short warning time. A Hypervelocity Asteroid Intercept Vehicle (HAIV) concept, developed at the Asteroid Deflection Research Center (ADRC), consists of a primary vehicle that acts as a kinetic impactor and a secondary vehicle that houses NEDs. The kinetic impactor (lead vehicle) strikes the asteroid, creating a crater. The secondary vehicle immediately enters the crater and detonates its nuclear payload, creating a blast wave powerful enough to fragment the asteroid. Modeling and hydrodynamic simulation of the nuclear subsurface explosion has been a challenging research goal that yields an array of mission-critical information. A mesh-free hydrodynamic simulation method, Smoothed Particle Hydrodynamics (SPH), was utilized to obtain both qualitative and quantitative solutions for explosion efficiency. Commercial fluid dynamics packages such as AUTODYN, along with in-house GPU-accelerated SPH algorithms, were used to validate and optimize high-energy explosion dynamics for a variety of test cases. Energy coupling from the NED to the target body was also examined to determine the effectiveness of nuclear subsurface explosions. Success of a disruption mission also depends on the survivability of the nuclear payload when the secondary vehicle approaches the newly formed crater at a velocity of 10 km/s or higher. The vehicle may come into contact with debris ejected from the crater, which required the conceptual development of a Whipple shield. As the vehicle closes on the crater, its skin may also experience extreme temperatures due to heat radiated from the crater bottom. To address this thermal problem, a simple metallic thermal shield design was implemented utilizing a radiative heat transfer algorithm and nodal solutions obtained from hydrodynamic simulations.
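The SPH method named above rests on kernel-weighted sums over neighboring particles. A minimal density-summation sketch (standard 3-D cubic spline kernel, direct O(N²) summation on a toy lattice; nothing here comes from AUTODYN or the in-house GPU code) looks like:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard SPH cubic spline kernel in 3-D with smoothing length h."""
    q = r / h
    sigma = 1.0 / (np.pi * h ** 3)  # 3-D normalization so the kernel integrates to 1
    w = np.where(q < 1, 1 - 1.5 * q ** 2 + 0.75 * q ** 3,
        np.where(q < 2, 0.25 * (2 - q) ** 3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Density at each particle by direct kernel summation (O(N^2) sketch)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# sanity check: particles on a uniform lattice in a unit box of total mass 1
# should recover a near-uniform density of 1 away from the boundaries
side = 8
g = (np.arange(side) + 0.5) / side
positions = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)
masses = np.full(len(positions), 1.0 / len(positions))
rho = sph_density(positions, masses, h=1.5 / side)
```

Production SPH codes replace the O(N²) sum with tree or cell neighbor search and add momentum and energy equations with an equation of state, which is where the explosion physics enters.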
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kvita, J
2009-04-01
The analysis presented in this thesis focuses on kinematic distributions in the tt̄ system and studies in detail selected differential cross sections of top quarks as well as of the reconstructed tt̄ pair, namely the top quark transverse momentum and the tt̄ system mass. The structure of the thesis is organized as follows: first, the Standard Model of particle physics is briefly introduced in Chapter 1, with relevant aspects of electroweak and strong interactions discussed. The physics of the top quark and its properties are then outlined in Chapter 2, together with the motivation for measuring the transverse top quark momentum and other kinematic variables of the tt̄ system. The concepts of present-day high energy physics collider experiments and the explicit example of the Fermilab Tevatron collider and the D0 detector in Chapters 3 and 4 are followed by the description of basic detector-level objects, i.e. tracks, leptons and jets, in Chapter 5; their identification and calibration follow in the next chapter, with the emphasis on the jet energy scale in Chapter 6 and jet identification at D0. The analysis itself is outlined in Chapter 7 and is structured so that first the data and simulation samples and the basic preselection are described in Chapters 8 and 9, followed by the kinematic reconstruction part in Chapter 10. Chapter 11 on background normalization and Chapter 12 with raw reconstructed spectra results (at the detector-smeared level) are followed by the purity-based background subtraction method and examples of signal-level corrected spectra in Chapter 13. Next, the procedure of correcting measured spectra for detector effects (unfolding) is described in Chapters 14-15, including migration matrix studies, acceptance correction determination as well as the regularized unfolding procedure itself. Final differential cross sections are presented in Chapter 16 with the main results in Figures 16.19-16.20.
Summary and discussion close the main analysis part in Chapter 17, supplemented by appendices on the wealth of analysis control plots of the t$$\\bar{t}$$ → ℓ + jets channel, selected D0 event displays and finally the list of publications and references. Preliminary results of this analysis have been documented in D0 internal notes [UnfoldTop], [p17Top], [p14Top], as well as presented at conferences [APS08], [APS05]. The author has also been a co-author of more than 135 D0 collaboration publications since 2005. The author has taken part in the jet energy scale calibration efforts, performing final closure tests and deriving a correction to the jet energy offset due to the suppression of the calorimeter signal. The author has also co-performed the Φ-intercalibration of the hadronic calorimeter and co-supervised the electromagnetic Φ-intercalibration; recently he has also been involved in maintaining the jet identification efficiency measurement as a JetID convener. During the years at Fermilab, many events have taken place in the course of pursuing the analysis, including more than 170 shifts served for the D0 detector with or without the beam, 168 talks presented with mixed results and reactions, and tens of thousands of code lines in C (and sometimes perhaps even really C++) written while terabytes of data were processed, analyzed, and sometimes also lost. It has been a long but profoundly enriching chapter of my life.
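The regularized unfolding step described for Chapters 14-15 can be illustrated with a minimal sketch. This is an illustrative toy with Tikhonov regularization and a curvature penalty; the 5-bin migration matrix, spectrum, and regularization strength are invented values, not the D0 analysis code.

```python
import numpy as np

# Minimal Tikhonov-regularized unfolding sketch (illustrative toy, not the D0
# analysis code; the migration matrix, spectrum and tau are invented values).
# Model: measured = R @ true + noise, with R the migration (response) matrix.
def unfold(R, d, tau):
    """Solve min ||R t - d||^2 + tau ||L t||^2, L = second-difference operator."""
    n_bins = R.shape[1]
    L = np.diff(np.eye(n_bins), n=2, axis=0)     # curvature (smoothness) penalty
    A = R.T @ R + tau * (L.T @ L)
    return np.linalg.solve(A, R.T @ d)

# Toy 5-bin spectrum and a response that leaks events into neighboring bins.
true = np.array([10.0, 30.0, 50.0, 30.0, 10.0])
R = 0.7 * np.eye(5) + 0.15 * (np.eye(5, k=1) + np.eye(5, k=-1))
measured = R @ true                              # noiseless for simplicity
est = unfold(R, measured, tau=1e-4)
print(np.round(est, 1))
```

With noiseless input the regularized solution recovers the true spectrum up to a small smoothing bias; in a real analysis the regularization strength trades statistical noise against that bias.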
Minimal model for the secondary structures and conformational conversions in proteins
NASA Astrophysics Data System (ADS)
Imamura, Hideo
Better understanding of the protein folding process can provide physical insights into the function of proteins and makes it possible to benefit from the genetic information accumulated so far. Protein folding normally takes place in less than seconds, but even seconds are beyond reach of current computational power for simulations of a system in all-atom detail. Hence, to model and explore the protein folding process it is crucial to construct a proper model that can adequately describe the physical process and mechanism for the relevant time scale. We discuss a reduced off-lattice model that can express α-helix and β-hairpin conformations defined solely by a given sequence, in order to investigate the folding mechanism of conformations such as a β-hairpin and to investigate conformational conversions in proteins. The first two chapters introduce and review essential concepts in protein folding, modelling physical interactions in proteins and various simple models, and also review computational methods, in particular the Metropolis Monte Carlo method, its dynamic interpretation and thermodynamic Monte Carlo algorithms. Chapter 3 describes the minimalist model that represents both α-helix and β-sheet conformations using simple potentials. The native conformation can be specified by the sequence without particular conformational biases to a reference state. In Chapter 4, the model is used to investigate the folding mechanism of β-hairpins exhaustively using the dynamic Monte Carlo method and a thermodynamic Monte Carlo method, an efficient combination of the multicanonical Monte Carlo and the weighted histogram analysis method. We show that the major folding pathways and the folding rate depend on the location of a hydrophobic pair. The conformational conversions between α-helix and β-sheet conformations are examined in Chapters 5 and 6. 
First, the conformational conversion due to mutation in a non-hydrophobic system is examined, and then the conversion due to mutation with a hydrophobic pair at a different position is examined at various temperatures.
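The Metropolis Monte Carlo method reviewed in the early chapters can be sketched in a few lines. This is a toy double-well energy standing in for two conformational basins; the energy function, step size, and temperature are assumptions for illustration, not the thesis model.

```python
import math
import random

# Minimal Metropolis Monte Carlo sketch (toy model, not the thesis code).
# Samples a single coordinate from exp(-E(x)/kT) for a double-well energy,
# a stand-in for hopping between two conformational basins.
def energy(x):
    return (x**2 - 1.0)**2          # minima at x = -1 and x = +1, barrier at 0

def metropolis(steps=50_000, kT=0.3, seed=1):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(steps):
        x_new = x + rng.uniform(-0.5, 0.5)            # symmetric trial move
        dE = energy(x_new) - energy(x)
        if dE <= 0 or rng.random() < math.exp(-dE / kT):
            x = x_new                                 # Metropolis acceptance
        samples.append(x)
    return samples

s = metropolis()
# At this temperature the walker should visit both wells.
print(min(s) < -0.5 < 0.5 < max(s))
```

The dynamic interpretation mentioned in the abstract treats each accepted move as a unit of time, so barrier-crossing statistics of such a chain can be read as (coarse) kinetics.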
Developing a Passive Acoustic Monitoring Network for Harbor Porpoise in California
NASA Astrophysics Data System (ADS)
Jacobson, Eiren Kate
Assessing the abundance of and trends in whale, dolphin, and porpoise (cetacean) populations using traditional visual methods can be challenging due primarily to their limited availability at the surface of the ocean. As a result, researchers are increasingly interested in incorporating non-visual and remote observations to improve cetacean population assessments. Passive acoustic monitoring (PAM) can complement or replace visual surveys for cetaceans that produce echolocation clicks, whistles, and other vocalizations. My doctoral dissertation is focused on developing methods to improve PAM of cetaceans. I used the Monterey Bay population of harbor porpoise (Phocoena phocoena ) as a case study for methods development. In Chapter 2, I used passive acoustic data to document that harbor porpoises avoid bottlenose dolphins (Tursiops truncatus) in nearshore Monterey Bay. In Chapter 3, I investigated whether different passive acoustic instruments could be used to monitor harbor porpoise. I recorded harbor porpoise echolocation clicks simultaneously on two different passive acoustic instruments and compared the number and peak frequency of echolocation signals recorded on the two instruments. I found that the number of echolocation clicks was highly correlated between instruments but that the peak frequency of echolocation clicks was not well-correlated, suggesting that some instruments may not be capable of discriminating harbor porpoise echolocation clicks in regions where multiple species with similar echolocation signals are present. In Chapter 4, I used paired visual and passive acoustic surveys to estimate the effective detection area of the passive acoustic sensors in a Bayesian framework. This approach resulted in a posterior distribution of the effective detection area that was consistent with previously published values. 
In Chapter 5, I used aerial survey and passive acoustic data in a simulation framework to investigate the statistical power of different passive acoustic network designs and hypothetical changes in harbor porpoise abundance. As a whole, this dissertation used an applied approach to methods development to advance the use of PAM for cetaceans.
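The simulation-based power analysis of Chapter 5 can be sketched under simple assumptions. Everything here is hypothetical: a small sensor network with yearly detection counts declining at a fixed rate, proportional Gaussian noise, and a crude one-sided trend test; none of the parameters or the decision rule come from the dissertation.

```python
import random
import statistics

# Illustrative power-analysis sketch (assumed design, not the dissertation's code):
# estimate the probability that a hypothetical passive acoustic network detects
# a fixed yearly decline in detection counts, using a simple trend test.
def simulate_power(n_sensors=5, years=10, mean0=100.0, decline=0.05,
                   cv=0.3, n_sims=500, seed=7):
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_sims):
        slopes = []
        for _ in range(n_sensors):
            # yearly counts: exponentially declining mean with proportional noise
            counts = [rng.gauss(mean0 * (1 - decline) ** t,
                                cv * mean0 * (1 - decline) ** t)
                      for t in range(years)]
            # least-squares slope of counts vs. year
            tbar = (years - 1) / 2
            cbar = statistics.fmean(counts)
            num = sum((t - tbar) * (c - cbar) for t, c in enumerate(counts))
            den = sum((t - tbar) ** 2 for t in range(years))
            slopes.append(num / den)
        # crude one-sided rule: mean slope clearly negative across sensors
        m = statistics.fmean(slopes)
        se = statistics.stdev(slopes) / n_sensors ** 0.5
        if m + 1.645 * se < 0:
            detected += 1
    return detected / n_sims

power = simulate_power()
print(power)    # fraction of simulated surveys in which the decline is detected
```

Rerunning the simulation over a grid of network sizes and decline rates is how such a framework maps statistical power against design choices.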
Methods in Molecular Biology Mouse Genetics: Methods and Protocols | Center for Cancer Research
Mouse Genetics: Methods and Protocols provides selected mouse genetic techniques and their application in modeling varieties of human diseases. The chapters are mainly focused on the generation of different transgenic mice to accomplish the manipulation of genes of interest, tracing cell lineages, and modeling human diseases.
A HANDBOOK FOR LITERACY TEACHERS.
ERIC Educational Resources Information Center
MCKILLIAM, K.R.
THE METHODS DESCRIBED IN THIS HANDBOOK CAN BE ADAPTED FOR USE IN ANY LANGUAGE WHICH CAN BE WRITTEN PHONETICALLY. CHAPTERS COVER THE VALUE OF ADULT LITERACY, HISTORY OF THE ALPHABET, HISTORY OF METHODS OF TEACHING READING AND WRITING, PRINCIPLES OF TEACHING, SOUNDS AS SYMBOLS, LESSON CONSTRUCTION, LETTER CONSTRUCTION, THE METHOD OF TEACHING…
Somogyi, Endre; Glazier, James A.
2017-01-01
Biological cells are the prototypical example of active matter. Cells sense and respond to mechanical, chemical and electrical environmental stimuli with a range of behaviors, including dynamic changes in morphology and mechanical properties, chemical uptake and secretion, cell differentiation, proliferation, death, and migration. Modeling and simulation of such dynamic phenomena poses a number of computational challenges. A modeling language describing cellular dynamics must naturally represent complex intra and extra-cellular spatial structures and coupled mechanical, chemical and electrical processes. Domain experts will find a modeling language most useful when it is based on concepts, terms and principles native to the problem domain. A compiler must then be able to generate an executable model from this physically motivated description. Finally, an executable model must efficiently calculate the time evolution of such dynamic and inhomogeneous phenomena. We present a spatial hybrid systems modeling language, compiler and mesh-free Lagrangian based simulation engine which will enable domain experts to define models using natural, biologically motivated constructs and to simulate time evolution of coupled cellular, mechanical and chemical processes acting on a time varying number of cells and their environment. PMID:29303160
Integrated Optofluidic Multimaterial Fibers
NASA Astrophysics Data System (ADS)
Stolyarov, Alexander Mark
The creation of integrated microphotonic devices requires a challenging assembly of optically and electrically disparate materials into complex geometries with nanometer-scale precision. These challenges are typically addressed by mature wafer-based fabrication methods, which, while versatile, are limited to low-aspect-ratio structures and by the inherent complexity of sequential processing steps. Multimaterial preform-to-fiber drawing methods, on the other hand, present unique opportunities for realizing optical and optoelectronic devices of extended length. Importantly, these methods allow for monolithic integration of all the constituent device components into complex architectures. My research has focused on addressing the challenges and opportunities associated with microfluidic multimaterial fiber structures and devices. Specifically: (1) A photonic bandgap (PBG) fiber is demonstrated for single mode transmission at 1.55 μm with 4 dB/m losses. This fiber transmits laser pulses with peak powers of 13.5 MW. (Chapter 2) (2) A microfluidic fiber laser, characterized by purely radial emission, is demonstrated. The laser cavity is formed by an axially invariant, 17-period annular PBG structure with a unit cell thickness of 160 nm. This laser is distinct from traditional lasers with cylindrically symmetric emission, which rely almost exclusively on whispering gallery modes, characterized by tangential wavevectors. (Chapter 4) (3) An array of independently-controlled liquid-crystal microchannels flanked by viscous conductors is integrated in the fiber cladding and encircles the PBG laser cavity in (2). The interplay between the radially-emitting laser and these liquid-crystal modulators enables controlled directional emission over a full azimuthal angular range. (Chapter 4) (4) The electric potential profile along the length of the electrodes in (3) is characterized and found to depend on frequency. 
This frequency dependence presents a new means to tune the transversely-directed transmission at a given location along the fiber axis. (Chapter 5) (5) A chemical sensing system is created within a fiber. By integrating a chemiluminescent peroxide-sensing material into the hollow core of a PBG fiber, a limit-of-detection of 300 ppb for peroxide vapors is achieved. (Chapter 3)
NASA Astrophysics Data System (ADS)
Li, Ming
In this dissertation, a set of numerical simulation tools is developed, building on previous work, to efficiently and accurately study one-dimensional (1D), two-dimensional (2D), 2D slab and three-dimensional (3D) photonic crystal structures and their defect effects by means of spectra (transmission, reflection, absorption), band structure (dispersion relation), and electric and/or magnetic field distributions (mode profiles). Furthermore, the lasing properties and spontaneous emission behaviors are studied when active gain materials are present in the photonic crystal structures. First, the planewave based transfer (scattering) matrix method (TMM) is described in full detail along with a brief review of photonic crystal history (Chapters 1 and 2). As a frequency domain method, TMM has the following major advantages over other numerical methods: (1) The planewave basis turns Maxwell's equations into a linear algebra problem, and there are mature numerical packages for linear algebra such as LAPACK and ScaLAPACK (for parallel computation). (2) The transfer (scattering) matrix method divides a 3D problem into 2D slices and links all slices together via the scattering matrix (S matrix), which reduces computation time and memory usage dramatically and makes the design of real 3D photonic crystal devices possible; it also imposes no length limitation on the simulated domain along the propagation direction (ideal for waveguide simulation). (3) It is a frequency domain method and the calculated results are all for steady state, without the influence of finite-time-span convolution effects and/or transient effects. (4) TMM can treat dispersive materials (such as metals at visible wavelengths) naturally without introducing any additional computation; meanwhile, TMM can also deal with anisotropic materials and magnetic materials (such as a perfectly matched layer) naturally through its algorithm. 
(5) Extension of TMM to active gain materials can be done through an iteration procedure, with the gain material expressed by an electric-field-dependent dielectric constant. Next, the concepts of spectrum interpolation (Chapter 3), higher-order incidence (Chapter 4) and the perfectly matched layer (Chapter 5) are introduced and applied to TMM, with detailed simulations for 1D, 2D, and 3D photonic crystal examples. A curvilinear coordinate transform is applied to Maxwell's equations to study waveguide bends (Chapter 6). By finding the phase difference along the propagation direction at various XY-plane locations, the behavior of electromagnetic wave propagation (such as light bending, focusing, etc.) can be studied (Chapter 7), which can be applied to diffractive optics for new device design. Numerical simulation tools for lasing devices are usually based on rate equations, which are not accurate above threshold or for small-scale lasing cavities (such as nano-scale cavities). Recently, we extended the TMM package to include the capacity for dealing with active gain materials. Both lasing (above threshold) and spontaneous emission (below threshold) can be studied in the framework of our Gain-TMM algorithm. Chapter 8 illustrates the algorithm in detail and shows simulation results for 3D photonic crystal lasing devices. Then, microwave experiments (mainly on resonant cavities embedded in layer-by-layer woodpile structures) are performed in Chapter 9 as an efficient practical way to study photonic crystal devices. The size of a photonic crystal in the microwave region is on the order of centimeters, which makes fabrication easier to realize. At the same time, due to the scaling property, the results of microwave experiments can be applied directly to optical or infrared frequency regions. Systematic TMM simulations for various resonant cavities are performed and consistent results are obtained when compared with the microwave experiments. 
Besides scaling the experimental results to much smaller wavelengths, designing potential photonic crystal devices for application at microwave frequencies is also an interesting and important topic. Finally, we describe future developments of the TMM algorithm, such as using localized functions as a basis to simulate disorder problems more efficiently (Chapter 10). Future applications of photonic crystal concepts are also discussed in Chapter 10. Along with this dissertation, the TMM Photonic Crystal Package User Manual and the Gain TMM Photonic Crystal Package User Manual, written by me, Dr. Jiangrong Cao (Canon USA) and Dr. Xinhua Hu (Ames Lab), focus more on programming details, the software user interface, troubleshooting, and step-by-step instructions. This dissertation and the two user manuals are essential documents for TMM software package beginners and advanced users. Future software developments, new version releases and FAQs can be tracked through my web page: http://www.public.iastate.edu/~mli/ In summary, this dissertation has extended the planewave based transfer (scattering) matrix method in many aspects, which makes the TMM and Gain-TMM software packages a powerful simulation tool in photonic crystal study. Comparisons of TMM and GTMM results with other published numerical results and experimental results indicate that TMM and GTMM are accurate and highly efficient in photonic crystal device simulation and design. (Abstract shortened by UMI.)
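The transfer-matrix idea at the heart of the package can be illustrated for the simplest case, a 1D quarter-wave Bragg stack at normal incidence. This is the textbook characteristic-matrix formalism, not the planewave TMM package itself; the indices and design wavelength are assumed values.

```python
import numpy as np

# Illustrative 1D characteristic-matrix sketch: transmission of a
# quarter-wave Bragg stack at normal incidence (lossless, non-magnetic).
def layer_matrix(n, d, lam):
    delta = 2 * np.pi * n * d / lam              # phase thickness of the layer
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmission(layers, lam, n_in=1.0, n_out=1.0):
    M = np.eye(2, dtype=complex)
    for n, d in layers:                          # multiply in stacking order
        M = M @ layer_matrix(n, d, lam)
    B, C = M @ np.array([1.0, n_out], dtype=complex)
    r = (n_in * B - C) / (n_in * B + C)          # amplitude reflection coefficient
    return 1.0 - abs(r) ** 2                     # lossless stack: T = 1 - R

lam0 = 1.55                                      # design wavelength (e.g. microns)
nH, nL = 2.3, 1.38                               # assumed high/low indices
stack = [(nH, lam0 / (4 * nH)), (nL, lam0 / (4 * nL))] * 8   # 8 quarter-wave pairs
T0 = transmission(stack, lam0)
print(T0)                                        # deep in the bandgap: nearly zero
```

The planewave TMM of the dissertation generalizes exactly this slice-by-slice matrix product to 2D and 3D structures, with the S-matrix formulation curing the numerical instability of long products.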
NASA Astrophysics Data System (ADS)
Canter, Anna Rudolph
2004-12-01
The Science Academy of South Texas, one of four magnet schools in the South Texas Independent School District (STISD), opened in 1989 to bring educational opportunities in mathematics and science to students in the Rio Grande Valley of South Texas. STISD serves three counties and offers enrollment to any student who applies from any of the twenty-eight feeder districts. The Science Academy is the only mathematics and science magnet school in the Rio Grande Valley. Over the years, the Science Academy has developed partnerships with major colleges and universities in Houston, Texas and the Rio Grande Valley. University partnerships have provided funding for programs at the school and have created continuing summer study programs for Science Academy students. Graduates have been accepted to and/or attended some of the most prestigious colleges and universities across the United States, despite personal challenges including low socioeconomic status, English as their second language, and being the first in their family to attend college. This historical study seeks to answer two basic questions: How has the Science Academy faced its academic, political, and social challenges over the years? What factors appear to have contributed to its establishment, survival, and success? Chapter One, "Significance of the Study and Research Methods," describes the study's significance within the scholarly literature and the research methods used for this study. Chapter Two, "The Science Academy of South Texas," presents the history of STISD and the events which precipitated the Science Academy's establishment. Chapter Three, "The Administration, Faculty and Staff of Science Academy," discusses the administration and faculty of the Science Academy. Its focus is Science Academy teachers and their educational beliefs as well as the administrators and staff and their beliefs. 
Chapter Four, "Curriculum Continuity and Change at the Science Academy," focuses on the curriculum history of the Science Academy and the changes faculty members and administrators have made over time. Chapter Five, "The Students of the Science Academy of South Texas," focuses on the students at the Science Academy, whom administrators and teachers describe as "the whole reason we are here." Chapter Six offers concluding thoughts and ideas for future research.
Mass spectrometry methods for the analysis of biodegradable hybrid materials
NASA Astrophysics Data System (ADS)
Alalwiat, Ahlam
This dissertation focuses on the characterization of hybrid materials and surfactant blends by using mass spectrometry (MS), tandem mass spectrometry (MS/MS), liquid chromatography (LC), and ion mobility (IM) spectrometry combined with measurement and simulation of molecular collision cross sections. Chapter II describes the principles and the history of mass spectrometry (MS) and liquid chromatography (LC). Chapter III introduces the materials and instrumentation used to complete this dissertation. In chapter IV, two hybrid materials containing poly(t-butyl acrylate) (PtBA) or poly(acrylic acid) (PAA) blocks attached to a hydrophobic peptide rich in valine and glycine (VG2), as well as the poly(acrylic acid) (PAA) and VG2 peptide precursor materials, are characterized by matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS), electrospray ionization mass spectrometry (ESI-MS) and ion mobility mass spectrometry (IM-MS). Collision cross-sections and molecular modeling have been used to determine the final architecture of both hybrid materials. Chapter V investigates a different hybrid material, [BMP-2(HA)2 ], comprised of a dendron with two polyethylene glycol (PEG) branches terminated by a hydroxyapatite binding peptide (HA), and a focal point substituted with a bone morphogenic protein mimicking peptide (BMP-2). MALDI-MS, ESI-MS and IM-MS have been used to characterize the HA and BMP-2 peptides. Collisionally activated dissociation (CAD) and electron transfer dissociation (ETD) have been employed in double stage (i.e. tandem) mass spectrometry (MS/MS) experiments to confirm the sequences of the two peptides HA and BMP-2. The MALDI-MS, ESI-MS and IM-MS methods were also applied to characterize the [BMP-2(HA)2] hybrid material. Collision cross-section measurements and molecular modeling indicated that [BMP-2(HA)2] can attain folded or extended conformation, depending on its degree of protonation (charge state). 
Chapter VI focuses on the analysis of alkyl polyglycoside (APG) surfactants by MALDI-MS and ESI-MS, MS/MS, and by combining MS with ion mobility (IM) and/or ultra-performance liquid chromatography (UPLC) separation in LC-IM and LC-IM-MS experiments. Chapter VII summarizes this dissertation's findings.
Pragmatic neuroethics: the social aspects of ethics in disorders of consciousness.
Racine, Eric
2013-01-01
In this chapter, evolution of ethics and bioethics is traced to show how an abstract and individualistic paradigm was at the core of mainstream ethics prior to the advent of bioethics and applied ethics. Bioethics has transformed this individualistic paradigm because of its inherent interdisciplinarity and real-world connection. This evolution has raised questions regarding how nonabstract (e.g., experiential) and nonindividualistic (e.g., social, relational) components of ethics could be married to normative theory and ethics reflection, the latter usually not amenable to empiric research. In the first part of this chapter, pragmatism is introduced as an approach offering perspectives on the integration of social, nonindividualistic aspects of ethics, supporting the use of social science methods within ethics and neuroethics. In the second part of this chapter, using the example of disorders of consciousness, a pragmatic perspective is explored to reframe questions and help foster nonreductionistic understandings of ethical questions and ethical dilemmas. This chapter aims to generate reflections on a set of specific clinical contexts that will also stimulate a discussion on the nature of ethical approaches. © 2013 Elsevier B.V. All rights reserved.
2016-04-05
About this volume: Montana StreamStats is a Web-based geographic information system (http://water.usgs.gov/osw/streamstats/) application that provides users with access to basin and streamflow characteristics for gaged and ungaged streams in Montana. Montana StreamStats was developed by the U.S. Geological Survey (USGS) in cooperation with the Montana Departments of Transportation, Environmental Quality, and Natural Resources and Conservation. The USGS Scientific Investigations Report consists of seven independent but complementary chapters dealing with various aspects of this effort. Chapter A describes the Montana StreamStats application, the basin and streamflow datasets, and provides a brief overview of the streamflow characteristics and regression equations used in the study. Chapters B through E document the datasets, methods, and results of analyses to determine streamflow characteristics, such as peak-flow frequencies, low-flow frequencies, and monthly and annual characteristics, for USGS streamflow-gaging stations in and near Montana. The StreamStats analytical toolsets that allow users to delineate drainage basins and solve regression equations to estimate streamflow characteristics at ungaged sites in Montana are described in Chapters F and G.
NASA Astrophysics Data System (ADS)
Ingwersen, Wesley W.
Life cycle assessment (LCA) is an internationally standardized framework for assessing the environmental impacts of products that is rapidly evolving to improve understanding and quantification of how complex product systems depend upon and affect the environment. This dissertation contributes to that evolution through the development of new methods for measuring impacts, estimating the uncertainty of impacts, and measuring ranges of environmental performance, with a focus on product systems in non-OECD countries that have not been well characterized. The integration of a measure of total energy use, emergy, is demonstrated in an LCA of gold from the Yanacocha mine in Peru in the second chapter. A model for estimating the accuracy of emergy results is proposed in the following chapter. The fourth chapter presents a template for LCA-based quantification of the range of environmental performance for tropical agricultural products using the example of fresh pineapple production for export in Costa Rica that can be used to create product labels with environmental information. The final chapter synthesizes how each methodological contribution will together improve the science of measuring product environmental performance.
NASA Astrophysics Data System (ADS)
Harris, Glenn A.
Molecular ionization owes much of its development to the early implementation of electron ionization (EI). Although it dramatically increased the library of compounds discovered, an inherent problem with EI was the low abundance of molecular ions detected due to high fragmentation, leading to the difficult task of correct chemical identification after mass spectrometry (MS). These problems stimulated research into new ionization methods which sought to "soften" the ionization process. In the late 1980s the advancement of ionization techniques was thought to have reached its pinnacle with both electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI). Both ionization techniques allowed for "soft" ionization of large molecular weight and/or labile compounds for intact characterization by MS. Albeit pervasive, neither ESI nor MALDI can be viewed as a "magic bullet" ionization technique. Both techniques require sample preparation, which often includes destruction of the native sample, and both operate in sealed enclosures, often under reduced pressure conditions. New open-air ionization techniques, termed "ambient MS", enable direct analysis of samples of various physical states, sizes and shapes. One particular technique, named Direct Analysis In Real Time (DART), has been steadily growing as one of the ambient tools of choice to ionize small molecular weight (< 1000 Da) molecules with a wide range of polarities. Although there is a large list of reported applications using DART as an ionization source, there have not been many studies investigating the fundamental properties of DART desorption and ionization mechanisms. The work presented in this thesis aims to provide in-depth findings on the physicochemical phenomena during open-air DART desorption and ionization MS and current application developments. 
A review of recent ambient plasma-based desorption/ionization techniques for analytical MS is presented in Chapter 1. Chapter 2 presents the first investigations into the atmospheric pressure ion transport phenomena during DART analysis. Chapter 3 provides a comparison of the internal energy deposition processes during DART and pneumatically assisted ESI. Chapter 4 investigates the complex spatially-dependent sampling sensitivity, dynamic range and ion suppression effects present in most DART experiments. New implementations and applications with DART are shown in Chapters 5 and 6. In Chapter 5, DART is coupled to multiplexed drift tube ion mobility spectrometry as a potentially fieldable platform for the detection of toxic industrial chemicals and chemical warfare agent simulants. In Chapter 6, transmission-mode DART is shown to be an effective method for reproducible sampling from materials that allow gas to flow through them. Chapter 6 also provides a description of an MS imaging platform coupling infrared laser ablation and DART-like phenomena. Finally, in Chapter 7 I provide perspective on the work completed with DART and the tasks and goals that future studies should focus on.
Liquid crystal interfaces: Experiments, simulations and biosensors
NASA Astrophysics Data System (ADS)
Popov, Piotr
Interfacial phenomena are ubiquitous and extremely important in various aspects of biological and industrial processes. For example, many liquid crystal applications start by alignment with a surface. The underlying mechanisms of the molecular organization of liquid crystals at an interface are still under intensive study and continue to be important to the display industry in order to develop better and/or new display technologies. My dissertation research has been devoted to studying how complex liquid crystals can be guided to organize at an interface, and to using my findings to develop practical applications. Specifically, I have been working on developing biosensors using liquid-crystal/surfactant/lipid/protein interactions, as well as on the alignment of low-symmetry liquid crystals for potential new display and optomechanical applications. The biotechnology industry needs better ways of sensing biomaterials and identifying various nanoscale events at biological interfaces and in aqueous solutions. Sensors in which the recognition material is a liquid crystal naturally connect the existing knowledge and experience of the display and biotechnology industries with surface and soft matter science. This dissertation thus mainly focuses on the delicate phenomena that happen at liquid interfaces. In the introduction, I start by defining the interface and discussing its structure and the relevant interfacial forces. I then introduce the general characteristics of biosensors and, in particular, describe the design of biosensors that employ liquid crystal/aqueous solution interfaces. I further describe the basic properties of liquid crystal materials that are relevant for liquid crystal-based biosensing applications. In CHAPTER 2, I describe the simulation methods and experimental techniques used in this dissertation. In CHAPTER 3 and CHAPTER 4, I present my computer simulation work. 
CHAPTER 3 presents insight into how liquid crystal molecules are aligned by hydrocarbon surfaces at the atomic level. I show that the vertical alignment of a rod-like liquid crystal molecule first requires its insertion into the alignment layer. In CHAPTER 4, I investigate the Brownian behavior of a tracer molecule at an oil/water interface and explain the experimentally-observed anomaly of its increased mobility. Following my molecular dynamics simulation studies of liquid interfaces, I continue in CHAPTER 5 with experimental research. I employ the high sensitivity of liquid crystal alignment to the presence of amphiphiles adsorbed to the liquid crystal surface from water for potential biosensor applications. I propose a more accurate method of sensing using circular polarization and spectrophotometry. In CHAPTER 6, I investigate whether cholesteric and smectic liquid crystals can potentially offer new modes of biosensing. In CHAPTER 7, I describe preliminary results toward constructing a liquid crystal biosensor platform with capabilities of specific sensitivity using proteins and antibodies. Finally, in CHAPTER 8, I summarize the findings of my studies and research and suggest possible future experiments to further advance our knowledge in interfacial science for future applications.
Computational Thermodynamics of Materials, by Zi-Kui Liu and Yi Wang
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devanathan, Ram
This authoritative volume introduces the reader to computational thermodynamics and the use of this approach to the design of material properties by tailoring the chemical composition. The text covers applications of this approach, introduces the relevant computational codes, and offers exercises at the end of each chapter. The book has nine chapters and two appendices that provide background material on computer codes. Chapter 1 covers the first and second laws of thermodynamics, introduces the spinodal as the limit of stability, and presents the Gibbs-Duhem equation. Chapter 2 focuses on the Gibbs energy function. Starting with a homogeneous system with a single phase, the authors proceed to phases with variable compositions, and polymer blends. The discussion includes the contributions of external electric and magnetic fields to the Gibbs energy. Chapter 3 deals with phase equilibria in heterogeneous systems, the Gibbs phase rule, and phase diagrams. Chapter 4 briefly covers experimental measurements of thermodynamic properties used as input for thermodynamic modeling by Calculation of Phase Diagrams (CALPHAD). Chapter 5 discusses the use of density functional theory to obtain thermochemical data and fill gaps where experimental data are missing. The reader is introduced to the Vienna Ab Initio Simulation Package (VASP) for density functional theory and the YPHON code for phonon calculations. Chapter 6 introduces the modeling of Gibbs energy of phases with the CALPHAD method. Chapter 7 deals with chemical reactions and the Ellingham diagram for metal-oxide systems and presents the calculation of the maximum reaction rate from equilibrium thermodynamics. Chapter 8 is devoted to electrochemical reactions and Pourbaix diagrams with application examples. Chapter 9 concludes this volume with the application of a model of multiple microstates to Ce and Fe3Pt. CALPHAD modeling is briefly discussed in the context of genomics of materials.
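For orientation, the Gibbs-Duhem equation that Chapter 1 presents takes its standard textbook form (stated here for the reader's convenience, not quoted from the book under review):

```latex
S\,\mathrm{d}T - V\,\mathrm{d}P + \sum_{i} n_i\,\mathrm{d}\mu_i = 0
```

At constant temperature and pressure this reduces to $\sum_i n_i\,\mathrm{d}\mu_i = 0$, the constraint among the chemical potentials of a phase that underlies the phase-equilibrium calculations of Chapter 3.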
The book introduces basic thermodynamic concepts clearly and directs readers to appropriate references for advanced concepts and details of software implementation. The list of references is quite comprehensive. The authors make liberal use of diagrams to illustrate key concepts. The two Appendices at the end discuss software requirements and the file structure, and present templates for special quasi-random structures. There is also a link to download pre-compiled binary files of the YPHON code for Linux or Microsoft Windows systems. The exercises at the end of the chapters assume that the reader has access to VASP, which is not freeware. Readers without access to this code can work on a limited number of exercises. However, results from other first principles codes can be organized in the YPHON format as explained in the Appendix. This book will serve as an excellent reference on computational thermodynamics and the exercises provided at the end of each chapter make it valuable as a graduate level textbook. Reviewer: Ram Devanathan is Acting Director of Earth Systems Science Division, Pacific Northwest National Laboratory, USA.
Globule-size distribution in injectable 20% lipid emulsions: Compliance with USP requirements.
Driscoll, David F
2007-10-01
The compliance of injectable 20% lipid emulsions with the globule-size limits in chapter 729 of the U.S. Pharmacopeia (USP) was examined. As established in chapter 729, dynamic light scattering was applied to determine mean droplet diameter (MDD), with an upper limit of 500 nm. Light obscuration was used to determine the size of fat globules found in the large-diameter tail, expressed as the volume-weighted percent fat exceeding 5 μm (PFAT5), with an upper limit of 0.05%. Compliance of seven different emulsions, six of which were stored in plastic bags, with USP limits was assessed. To avoid reaching coincidence limits during the application of method II from overly concentrated emulsion samples, a variable dilution scheme was used to optimize the globule-size measurements for each emulsion. One-way analysis of variance of globule-size distribution (GSD) data was conducted if any results of method I or II exceeded the respective upper limits. Most injectable lipid emulsions complied with the limits established by USP chapter 729, with the exception of those of one manufacturer, which failed to meet the proposed PFAT5 limit for three of the emulsions tested. In contrast, all others studied (one packaged in glass and three packaged in plastic) met both criteria. Among the seven injectable lipid emulsions tested for GSD, all met USP chapter 729 MDD requirements, and three, all from the same manufacturer and packaged in plastic, did not meet PFAT5 requirements.
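The two numerical limits described above reduce to a simple pass/fail check. A minimal sketch in Python (function and argument names are hypothetical, not from the paper):

```python
def usp729_compliant(mdd_nm: float, pfat5_pct: float) -> bool:
    """Check globule-size results against USP chapter <729> limits:
    method I  -- mean droplet diameter (MDD) must not exceed 500 nm;
    method II -- volume-weighted percent fat exceeding 5 um (PFAT5)
                 must not exceed 0.05%.
    """
    return mdd_nm <= 500.0 and pfat5_pct <= 0.05
```

An emulsion with MDD = 280 nm and PFAT5 = 0.01% passes both methods; one with PFAT5 = 0.12% fails method II regardless of its MDD, which matches the failure pattern reported for the non-compliant products.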
Setting the Record Straight: Bottom-Up Carbon Nanostructures via Solid-State Reactions
NASA Astrophysics Data System (ADS)
Jordan, Robert Stanley
Chapter 1 describes the development and spectroscopic investigation of a novel synthetic route to N = 8 armchair graphene nanoribbons from polydiacetylene polymers. Four distinct diphenyl polydiacetylene polymers are produced from the crystal-phase topochemical polymerization of their corresponding diphenyl-1,4-butadiynes. These polydiacetylene polymers are transformed into spectroscopically indistinguishable N = 8 armchair graphene nanoribbons via simple heating in the bulk, solid-state. The stepwise transformation of polydiacetylenes to graphene nanoribbons is examined in detail by the use of complementary spectroscopic methods, namely solid-state nuclear magnetic resonance, infrared, Raman and X-ray photoelectron spectroscopy. The final morphology and width of the nanoribbons is established through the use of high-resolution transmission electron microscopy. Chapter 2 chronicles the implementation of a similar approach to N = 12 armchair graphene nanoribbons from a dinaphthyl substituted polydiacetylene polymer. The mild nature of the process and pristine structure of the nanoribbons is again confirmed with the use of spectroscopic and microscopic methods. The chapter concludes with preliminary electrical measurements of the nanoribbons confirming that they are indeed conductive. Chapter 3 details the development of a synthetic route to diaryl trans-enediynes as structural models of individual reactive units within a polydiacetylene polymer. The trans-enediynes described are found to undergo three distinct annulation reactions depending on reaction conditions. Finally, the synthetic routes developed are utilized to access diethynyl [5]helicenes and phenanthrenes which fueled studies on the mechanism of the Bergman polymerization reaction.
Structure and engineering of celluloses.
Pérez, Serge; Samain, Daniel
2010-01-01
This chapter collates the developments and conclusions of many of the extensive studies that have been conducted on cellulose, with particular emphasis on the structural and morphological features while not ignoring the most recent results derived from the elucidation of unique biosynthetic pathways. The presentation of structural and morphological data gathered together in this chapter follows the historical development of our knowledge of the different structural levels of cellulose and its various organizational levels. These levels concern features such as chain conformation, chain polarity, chain association, crystal polarity, and microfibril structure and organization. This chapter provides some historical landmarks related to the evolution of concepts in the field of biopolymer science, which parallel the developments of novel methods for characterization of complex macromolecular structures. The elucidation of the different structural levels of organization opens the way to relating structure to function and properties. The chemical and biochemical methods that have been developed to dissolve and further modify cellulose chains are briefly covered. Particular emphasis is given to the facets of topochemistry and topoenzymology where the morphological features play a key role in determining unique physicochemical properties. A final chapter addresses what might be considered tomorrow's goal in amplifying the economic importance of cellulose in the context of sustainable development. Selected examples illustrate the types of result that can be obtained when cellulose fibers are no longer viewed as inert substrates, and when the polyhydroxyl nature of their surfaces, as well as their entire structural complexity, are taken into account. Copyright © 2010 Elsevier Inc. All rights reserved.
Classical Methods and Modern Analysis for Studying Fungal Diversity
J. P. Schmit; D. J. Lodge
2005-01-01
In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...
perturbation formulas of Groebner (1960) and Alexseev (1961) for the solution of ordinary differential equations. These formulas are generalized and...iteration methods are given, which include the Methods of Picard, Groebner-Knapp, Poincare, Chen, as special cases. Chapter 3 generalizes an iterated
Content-Based Medical Image Retrieval
NASA Astrophysics Data System (ADS)
Müller, Henning; Deserno, Thomas M.
This chapter details the necessity for alternative access concepts to the currently mainly text-based methods in medical information retrieval. This need is partly due to the large amount of visual data produced, the increasing variety of medical imaging data, and changing user patterns. The stored visual data contain large amounts of unused information that, if well exploited, can help diagnosis, teaching and research. The chapter briefly reviews the history of image retrieval and its general methods before focusing on technologies that have been developed in the medical domain. We also discuss the evaluation of medical content-based image retrieval (CBIR) systems and conclude by pointing out their strengths, gaps, and further developments. As examples, the MedGIFT project and the Image Retrieval in Medical Applications (IRMA) framework are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lanekoff, Ingela; Laskin, Julia
In recent years, mass spectrometry imaging (MSI) has emerged as a foundational technique in metabolomics and drug screening, providing deeper understanding of complex mechanistic pathways within biochemical systems and biological organisms. We have been invited to contribute a chapter to a new Springer series volume, entitled “Mass Spectrometry Imaging of Small Molecules”. The volume is planned for the highly successful lab protocol series Methods in Molecular Biology, published by Humana Press, USA. The volume is aimed to equip readers with step-by-step mass spectrometric imaging protocols and bring rapidly maturing methods of MS imaging to life science researchers. The chapter will provide a detailed protocol of ambient MSI by use of nanospray desorption electrospray ionization.
Introduction to the Wetland Book 1: Wetland structure and function, management, and methods
Davidson, Nick C.; Middleton, Beth A.; McInnes, Robert J.; Everard, Mark; Irvine, Kenneth; Van Dam, Anne A.; Finlayson, C. Max
2016-01-01
The Wetland Book 1 is designed as a ‘first port-of-call’ reference work for information on the structure and functions of wetlands, current approaches to wetland management, and methods for researching and understanding wetlands. Contributions by experts summarize key concepts, orient the reader to the major issues, and support further research on such issues by individuals and multidisciplinary teams. The Wetland Book 1 is organized in three parts - Wetland structure and function; Wetland management; and Wetland methods - each of which is divided into a number of thematic Sections. Each Section starts with one or more overview chapters, supported by chapters providing further information and case studies on different aspects of the theme.
NASA Technical Reports Server (NTRS)
Antle, John M.; Valdivia, Roberto O.; Boote, Kenneth J.; Janssen, Sander; Jones, James W.; Porter, Cheryl H.; Rosenzweig, Cynthia; Ruane, Alexander C.; Thorburn, Peter J.
2015-01-01
This chapter describes methods developed by the Agricultural Model Intercomparison and Improvement Project (AgMIP) to implement a transdisciplinary, systems-based approach for regional-scale (local to national) integrated assessment of agricultural systems under future climate, biophysical, and socio-economic conditions. These methods were used by the AgMIP regional research teams in Sub-Saharan Africa and South Asia to implement the analyses reported in their respective chapters of this book. Additional technical details are provided in Appendix 1. The principal goal that motivates AgMIP's regional integrated assessment (RIA) methodology is to provide scientifically rigorous information needed to support improved decision-making by various stakeholders, ranging from local to national and international non-governmental and governmental organizations.
Theoretical analysis of sheet metal formability
NASA Astrophysics Data System (ADS)
Zhu, Xinhai
Sheet metal forming processes are among the most important metal-working operations. These processes account for a sizable proportion of manufactured goods made in industrialized countries each year. Furthermore, to reduce the cost and increase the performance of manufactured products, in addition to the environmental concern, more and more light-weight and high-strength materials have been used as substitutes for conventional steel. These materials usually have limited formability; thus, a thorough understanding of the deformation processes and the factors limiting the forming of sound parts is important, not only from a scientific or engineering viewpoint, but also from an economic point of view. An extensive review of previous studies pertaining to theoretical analyses of Forming Limit Diagrams (FLDs) is contained in Chapter I. A numerical model to analyze the neck evolution process is outlined in Chapter II. With the use of strain gradient theory, the effect of the initial defect profile on the necking process is analyzed. In the third chapter, the method proposed by Storen and Rice is adopted to analyze the initiation of a localized neck and predict the corresponding FLDs. In view of the fact that the width of the localized neck is narrow, the deformation inside the neck region is constrained by the material in the neighboring homogeneous region. The relative rotation effect may then be assumed to be small and is thus neglected. In Chapter IV, Hill's 1948 yield criterion and strain gradient theory are employed to obtain FLDs for planar anisotropic sheet materials by using bifurcation analysis. The effects of the strain gradient coefficient c and the material anisotropy parameters R on the orientation of the neck and the FLDs are analyzed in a systematic manner and compared with experiments.
In Chapter V, Hill's 1979 non-quadratic yield criterion with a deformation theory of plasticity is used along with bifurcation analyses to derive a general analytical expression for calculating FLDs. In the final chapter, a method is proposed to construct forming limit diagrams for sheet metals under different deformation histories. This analysis employs Hill's 1979 anisotropic yield function and uses strain gradient theory to describe the constitutive equation for the flow stress. In order to utilize an analytical method developed earlier for proportional loading, the concept of "virtual deformation" is introduced. The actual deformation path is divided into a sequence of linear paths and an effective "virtual deformation" path is defined having a strain ratio identical to that of the linear part in the final deformation stage, and a plastic work identical to that of the prior actual deformation it is replacing. (Abstract shortened by UMI.)
A Performance Comparison of Tree and Ring Topologies in Distributed System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Min
A distributed system is a collection of computers that are connected via a communication network. Distributed systems have become commonplace due to the wide availability of low-cost, high-performance computers and network devices. However, the management infrastructure often does not scale well when distributed systems get very large. Some of the considerations in building a distributed system are the choice of the network topology and the method used to construct the distributed system so as to optimize the scalability and reliability of the system, lower the cost of linking nodes together, minimize the message delay in transmission, and simplify system resource management. We have developed a new distributed management system that is able to handle the dynamic increase of system size, detect and recover from unexpected failures of system services, and manage system resources. The topologies used in the system are the tree-structured network and the ring-structured network. This thesis presents the research background, system components, design, implementation, experiment results and the conclusions of our work. The thesis is organized as follows: the research background is presented in chapter 1. Chapter 2 describes the system components, including the different node types and different connection types used in the system. In chapter 3, we describe the message types and message formats in the system. We discuss the system design and implementation in chapter 4. In chapter 5, we present the test environment and results. Finally, we conclude with a summary and describe our future work in chapter 6.
DNA Free Energy Landscapes and RNA Nano-Self-Assembly Using Atomic Force Microscopy
NASA Astrophysics Data System (ADS)
Frey, Eric William
There is an important conceptual lesson which has long been appreciated by those who work in biophysics and related interdisciplinary fields. While the extraordinary behavior of biological matter is governed by its detailed atomic structure and random fluctuations, and is therefore difficult to predict, it can nevertheless be understood within simplified frameworks. Such frameworks model the system as consisting of only one or a few components, and model the behavior of the system as the occupation of a single state out of a small number of states available. The emerging widespread application of nanotechnology, such as atomic force microscopy (AFM), has expanded this understanding in eye-opening new levels of detail by enabling nano-scale control, measurement, and visualization of biological molecules. This thesis describes two independent projects, both of which illuminate this understanding using AFM, but which do so from very different perspectives. The organization of this thesis is as follows. Chapter 1 begins with an experimental background and introduction to AFM, and then describes our setup in both single-molecule manipulation and imaging modes. In Chapter 2, we describe the first project, the motivation for which is to extend methods for the experimental determination of the free energy landscape of a molecule. This chapter relies on the analysis of single-molecule manipulation data. Chapter 3 describes the second project, the motivation for which is to create RNA-based nano-structures suitable for future applications in living mammalian cells. This chapter relies mainly on imaging. Chapters 2 and 3 can thus be read and understood separately.
[On the biological properties of fragrance compounds and essential oils].
Buchbauer, Gerhard
2004-11-01
In the present review the physiological and/or pharmacological properties of essential oils and of single fragrance compounds are discussed. Essential oils are known and have been used since ancient times as natural medicines. As natural products essential oils are dependent on climate and their composition varies according to conditions of soil, to solar irradiation, to harvest time, to production methods, to storage conditions and similar facts which are discussed in chapter 2 of this review. The next chapters deal with the therapeutic use of essential oils in treating diseases, disorders or ailments of the nervous system, against cancer and as penetration enhancers. For space-saving reasons, however, the manifold antimicrobial and antifungal properties of these natural products have been left out. In the last chapter, the pros and cons in the use of essential oils in therapy are also discussed.
Book review: Advances in 40Ar/39Ar dating: From archaeology to planetary sciences
Cosca, Michael A.
2015-01-01
The recently published book Advances in 40Ar/39Ar Dating: From Archaeology to Planetary Sciences is a collection of 24 chapters authored by international scientists on topics ranging from decay constants to 40Ar/39Ar dating of extraterrestrial objects. As stated by the editors in their introduction, these chapters were assembled with the goal of providing technique-specific examples highlighting recent advances in the field of 40Ar/39Ar dating. As this is the first book truly dedicated to 40Ar/39Ar dating since the second edition printing of the argon geochronologist’s handbook Geochronology and Thermochronology by the 40Ar/39Ar Method (McDougall and Harrison 1999), a new collection of chapters highlighting recent advances in 40Ar/39Ar geochronology offers much to the interested reader.
Biolistic transformation of cotton zygotic embryo meristem
USDA-ARS?s Scientific Manuscript database
Biolistic transformation of cotton meristems, isolated from mature seed is detailed in this book chapter. This method is simple and avoids the necessity to use genotype-dependent regenerable cell cultures. However, identification of germ line transformation using this method is laborious and time-c...
MEASURING INVERTEBRATE GRAZING ON SEAGRASSES AND EPIPHYTES
The chapter describes methods to assess grazing rates, grazer preferences, and grazer impacts, by mobile organisms living in the canopy or in the rhizome layer in any seagrass system. One set of methods quantifies grazing activity in small to medium sized, mobile organisms livin...
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES FURNISHED BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND RESIDENTS... cost method to non-PPS participating providers in accordance with part 413 of this chapter. ...
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES FURNISHED BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND RESIDENTS... cost method to non-PPS participating providers in accordance with part 413 of this chapter. ...
BOOK REVIEW: Introduction to Computational Plasticity
NASA Astrophysics Data System (ADS)
Hartley, P.
2006-04-01
The use of computational modelling in all areas of science and engineering has in recent years escalated to the point where it underpins much of current research. However, the distinction must be made between computer systems in which no knowledge of the underlying computer technology or computational theory is required and those areas of research where the mastery of computational techniques is of great value, almost essential, for final year undergraduates or masters students planning to pursue a career in research. Such a field of research in the latter category is continuum mechanics, and in particular non-linear material behaviour, which is the core topic of this book. The focus of the book on computational plasticity embodies techniques of relevance not only to academic researchers, but also to industrialists engaged in the production of components using bulk or sheet forming processes. Of particular interest is the guidance on how to create modules for use with the commercial system Abaqus for specific types of material behaviour. The book is in two parts, the first of which contains six chapters, starting with microplasticity, but predominantly on continuum plasticity. The first chapter on microplasticity gives a brief description of the grain structure of metals and the existence of slip systems within the grains. This provides an introduction to the concept of incompressibility during plastic deformation, the nature of plastic yield and the importance of the critically resolved shear stress on the slip planes (Schmid's law). Some knowledge of the notation commonly used to describe slip systems is assumed, which will be familiar to students of metallurgy, but anyone with a more general engineering background may need to undertake additional reading to understand the various descriptions. Any lack of knowledge in this area, however, is of no disadvantage as it serves only as an introduction and the book moves on quickly to continuum plasticity.
Chapter two introduces one of several yield criteria, that normally attributed to von Mises (though historians of mechanics might argue over who was first to develop the theory of yielding associated with strain energy density), and its two or three-dimensional representation as a yield surface. The expansion of the yield surface during plastic deformation, its translation due to kinematic hardening and the Bauschinger effect in reversed loading are described with a direct link to the material stress-strain curve. The assumption that the increment of plastic strain is normal to the yield surface (the normality principle) is introduced. Uniaxial loading of an elastic-plastic material is used as an example in which to develop expressions to describe increments in stress and strain. The full presentation of numerous expressions, tensors and matrices, with a clear explanation of their development, is a recurring, and commendable, feature of the book, which provides an invaluable introduction for those new to the subject. The chapter moves on from time-independent behaviour to introduce viscoplasticity and creep. Chapter three takes the theories of deformation another stage further to consider the problems associated with large deformation, in which an important concept is the separation of the phenomenon into material stretch and rotation. The latter is crucial to allow correct measures of strain and stress to be developed in which the effects of rigid body rotation do not contribute to these variables. Hence, the introduction of 'objective' measures for stress and strain. These are described with reference to deformation gradients, which are clearly explained; however, the introduction of displacement gradients passes with little comment, although velocity gradients appear later in the chapter. The interpretation of different strain measures, e.g.
Green-Lagrange and Almansi, is covered briefly, followed by a description of the spin tensor and its use in developing the objective Jaumann rate of stress. It is tempting here to suggest that a more complete description should be given together with other measures of strain and stress, of which there are several, but there would be a danger of changing the book from an 'introduction' to a more comprehensive text, and examples of such exist already. Chapter four begins the process of developing the plasticity theories into a form suitable for inclusion in the finite-element method. The starting point is Hamilton's principle for equilibrium of a dynamic system. A very brief introduction to the finite-element method is then given, followed by the finite-element equilibrium equations and a description of how they are incorporated into Hamilton's principle. A useful clarification is provided by comparing tensor notation and the form normally used in finite-element expressions, i.e. Voigt notation. The chapter concludes with a brief overview of implicit integration methods, i.e. tangent stiffness, initial tangent stiffness and Newton-Raphson. Chapter five deals with the more specialized topic of implicit and explicit integration of von Mises plasticity. One of the techniques described is the radial-return method, which ensures that the stresses at the end of an increment of deformation always lie on the expanded yield surface. Although this method guarantees a solution, it may not always be the most accurate for large deformation; this is one area where reference to alternative methods would have been a helpful addition. Chapter six continues with further detail of how the plasticity models may be incorporated into finite-element codes, with particular reference to the Abaqus package and the use of user-defined subroutines, introduced via a 'UMAT' subroutine. This completes part I of the book.
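The radial-return idea mentioned in connection with chapter five is compact enough to sketch. The following is an illustrative sketch for von Mises plasticity with linear isotropic hardening, not code from the book; the function and parameter names, and the choice of a linear hardening law, are assumptions made for the example:

```python
import numpy as np

def radial_return(s_trial, sigma_y0, G, H, ep_bar):
    """One radial-return update for von Mises plasticity with linear
    isotropic hardening.
    s_trial  : deviatoric trial stress (3x3 array)
    sigma_y0 : initial yield stress
    G        : shear modulus
    H        : linear hardening modulus
    ep_bar   : accumulated equivalent plastic strain entering the step
    Returns the updated deviatoric stress and plastic strain."""
    # von Mises equivalent of the trial stress
    s_eq = np.sqrt(1.5 * np.tensordot(s_trial, s_trial))
    f = s_eq - (sigma_y0 + H * ep_bar)   # trial yield function
    if f <= 0.0:
        return s_trial, ep_bar           # elastic step: trial state is final
    # plastic multiplier from the consistency condition
    # (closed form because hardening is linear)
    dgamma = f / (3.0 * G + H)
    # scale the trial stress radially back onto the expanded yield surface
    return (1.0 - 3.0 * G * dgamma / s_eq) * s_trial, ep_bar + dgamma
```

By construction the returned stress satisfies the updated yield condition exactly, which is the property the review highlights: the end-of-increment stress always lies on the (expanded) yield surface.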
Part II focuses on plasticity models, each chapter dealing with a particular process or material model. For example, chapter seven deals with superplasticity, chapter eight with porous plasticity, chapter nine with creep and chapter ten with cyclic plasticity, creep and TMF. Examples of deep drawing, forming of titanium metal-matrix composites and creep damage are provided, together with further guidelines on the use of Abaqus to model these processes. Overall, the book is organised in a very logical and readable form. The use of simple one-dimensional examples, with full descriptions of tensors and vectors throughout the book, is particularly useful. It provides a good introduction to the topic, covering much of the theory and with applications to give a good grounding that can be taken further with more comprehensive advanced texts. An excellent starting point for anyone involved in research in computational plasticity.
ERIC Educational Resources Information Center
Schonfeld, Irvin Sam; Farrell, Edwin
2010-01-01
The chapter examines the ways in which qualitative and quantitative methods support each other in research on occupational stress. Qualitative methods include eliciting from workers unconstrained descriptions of work experiences, careful first-hand observations of the workplace, and participant-observers describing "from the inside" a…
Neutron-stimulated gamma ray analysis of soil
USDA-ARS?s Scientific Manuscript database
The chapter will discuss methods to use gamma rays to measure elements in soil. In regard to land management, there is a need to develop a non-destructive, non-contact, in-situ method of determining soil elements distributed in a soil volume or on soil surface. A unique method having all of above ...
John Dewey on History Education and the Historical Method
ERIC Educational Resources Information Center
Fallace, Thomas D.
2010-01-01
This essay constructs a comprehensive view of Dewey's approach to history, the historical method, and history education. Drawing on Dewey's approach to the subject at the University of Chicago Laboratory School (1896-1904), Dewey's chapter on the historical method in "Logic: A Theory of Inquiry" (1938), and a critique of Dewey's…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The final report for the project is presented in five volumes. This volume, Detailed Methodology Review, presents a discussion of the methods considered and used to estimate the impacts of Outer Continental Shelf (OCS) oil and gas development on coastal recreation in California. The purpose is to provide the Minerals Management Service with data and methods to improve their ability to analyze the socio-economic impacts of OCS development. Chapter II provides a review of previous attempts to evaluate the effects of OCS development and of oil spills on coastal recreation. The review also discusses the strengths and weaknesses of different approaches and presents the rationale for the methodology selection made. Chapter III presents a detailed discussion of the methods actually used in the study. The volume contains the bibliography for the entire study.
NASA Astrophysics Data System (ADS)
Du, Xiaofeng; Song, William; Munro, Malcolm
Web Services, as a new distributed system technology, have been widely adopted by industries in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organisation (VO). However, the lack of semantics in the current Web Service standards has been a major barrier to service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles the problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and renovated step by step into CbSSDF+-based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.
Chapter 8: Pyrolysis of Biomass for Aviation Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robichaud, David J; Jenkins, Rhodri W.; Sutton, Andrew D.
2016-07-15
Pyrolysis, the breaking down of organic material using heat and the absence of oxygen, is a method that has been widely researched for the production of liquid fuels. In this chapter, we review the feedstocks typically used for pyrolysis, the properties and the composition of the liquid fraction (termed 'bio-oil') obtained, the studies in which pyrolysis has been used in an attempt to increase the bio-oil yield, and how the bio-oil has been upgraded to fuel-like molecules. We also discuss the viability of pyrolysis to produce jet fuel hydrocarbons.
Nanomaterials for Defense Applications
NASA Astrophysics Data System (ADS)
Turaga, Uday; Singh, Vinitkumar; Lalagiri, Muralidhar; Kiekens, Paul; Ramkumar, Seshadri S.
Nanotechnology has found a number of applications in electronics and healthcare. Within the textile field, applications of nanotechnology have been limited to filters, protective liners for chemical and biological clothing and nanocoatings. This chapter presents an overview of the applications of nanomaterials such as nanofibers and nanoparticles that are of use to military and industrial sectors. An effort has been made to categorize nanofibers based on the method of production. This chapter particularly focuses on a few latest developments that have taken place with regard to the application of nanomaterials such as metal oxides in the defense arena.
Engineering mechanics: statics and dynamics. [Textbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandor, B.I.
1983-01-01
The purpose of this textbook is to provide engineering students with basic learning material about statics and dynamics which are fundamental engineering subjects. The chapters contain information on: an introduction to engineering mechanics; forces on particles, rigid bodies, and structures; kinetics of particles, particle systems, and rigid bodies in motion; kinematics; mechanical vibrations; and friction, work, moments of inertia, and potential energy. Each chapter contains introductory material, the development of the essential equations, worked-out example problems, homework problems, and, finally, summaries of the essential methods and equations, graphically illustrated where appropriate. (LCL)
General introduction and recovery factors
Verma, Mahendra K.
2017-07-17
Introduction: The U.S. Geological Survey (USGS) compared methods for estimating an incremental recovery factor (RF) for the carbon dioxide enhanced oil recovery (CO2-EOR) process involving the injection of CO2 into oil reservoirs. This chapter first provides some basic information on the RF, including its dependence on various reservoir and operational parameters, and then discusses the three development phases of oil recovery: primary, secondary, and tertiary (EOR). It ends with a brief discussion of the three approaches for estimating recovery factors, which are detailed in subsequent chapters.
George, Mark S; Aston-Jones, Gary
2010-01-01
Although the preceding chapters discuss much of the new knowledge of neurocircuitry of neuropsychiatric diseases, and an invasive approach to treatment, this chapter describes and reviews the noninvasive methods of testing circuit-based theories and treating neuropsychiatric diseases that do not involve implanting electrodes into the brain or on its surface. These techniques are transcranial magnetic stimulation, vagus nerve stimulation, and transcranial direct current stimulation. Two of these approaches have FDA approval as therapies. PMID:19693003
NASA Technical Reports Server (NTRS)
Hanner, Martha
1988-01-01
The optical properties of small grains provide the link between the infrared observations presented in Chapter 1 and the dust composition described in Chapter 3. In this session, the optical properties were discussed from the viewpoint of modeling the emission from the dust coma and the scattering in order to draw inference about the dust size distribution and composition. The optical properties are applied to the analysis of the infrared data in several ways, and these different uses should be kept in mind when judging the validity of the methods for applying optical constants to real grains.
Code of Federal Regulations, 2014 CFR
2014-10-01
... (CONTINUED) SERVICES FURNISHED BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND... reasonable cost method to non-PPS participating providers in accordance with part 413 of this chapter. ...
Code of Federal Regulations, 2012 CFR
2012-10-01
... (CONTINUED) SERVICES FURNISHED BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND... reasonable cost method to non-PPS participating providers in accordance with part 413 of this chapter. ...
Code of Federal Regulations, 2013 CFR
2013-10-01
... (CONTINUED) SERVICES FURNISHED BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND... reasonable cost method to non-PPS participating providers in accordance with part 413 of this chapter. ...
NASA Astrophysics Data System (ADS)
Chatterjee, Rohit
In this research work, we explore fundamental silicon-based active and passive photonic devices that can be integrated together to form functional photonic integrated circuits. The devices, which include power splitters, switches, and lenses, are studied from their physics through their design and fabrication techniques to an experimental standpoint. The experimental results reveal high-performance devices that are compatible with standard CMOS fabrication processes and can be easily integrated with other devices for near-infrared telecom applications. In Chapter 2, a novel method for optical switching using a nanomechanical proximity perturbation technique is described and experimentally demonstrated. The method employs relatively low power, has a small chip footprint, and is compatible with standard CMOS fabrication processes. Further, in Chapter 3, this method is applied to develop a hitless bypass switch aimed at solving an important issue in current wavelength division multiplexing systems, namely hitless switching of reconfigurable optical add-drop multiplexers. Experimental results are presented to demonstrate the application of the nanomechanical proximity perturbation technique to practical situations. In Chapter 4, a fundamental photonic component, namely the power splitter, is described. Power splitters are important components for any photonic integrated circuit because they split the power from a single light source to multiple devices on the same chip so that different operations can be performed simultaneously. The power splitters demonstrated in this chapter are based on multimode interference principles, resulting in highly compact, low-loss, and highly uniform splitting of the light from a single channel into two and four channels. These devices can further be scaled to achieve higher-order splitting such as 1x16 and 1x32 power splits.
Finally, in Chapter 5, we overcome challenges in device fabrication and measurement techniques to demonstrate for the first time a "superlens" for the technologically important near-infrared wavelength range, with the opportunity to scale down further to visible wavelengths. The observed resolution is 0.47λ, clearly smaller than the diffraction limit of 0.61λ, and is supported by detailed theoretical analyses and comprehensive numerical simulations. Importantly, we show for the first time that this subdiffraction-limit imaging is due to the resonant excitation of surface slab modes, permitting amplification of evanescent waves. The demonstrated "superlens" has the largest figure of merit reported to date, both theoretically and experimentally. The techniques and devices described in this thesis can be further applied to develop new devices with different functionalities. In Chapter 6 we describe two examples using these ideas. First, we experimentally demonstrate the use of the nanomechanical proximity perturbation technique to develop a phase retarder for on-chip all-state polarization control. Next, we use the negative-refraction photonic crystals described in Chapter 5 to achieve a special kind of bandgap, the zero-n̄ bandgap, which has unique properties.
NASA Astrophysics Data System (ADS)
Da Silva, Rafael
In nanomaterials there is a strong correlation between structure and properties. Thus, the design and synthesis of nanomaterials with well-defined structures and morphology is essential in order to produce materials with not only unique but also tailorable properties. The unique properties of nanomaterials can in turn be exploited to create materials and nanoscale devices that help address important societal issues, such as the need for renewable energy sources and for efficient therapeutic and diagnostic methods to treat a range of diseases. In this thesis, the different synthetic approaches I have developed to produce functional nanomaterials composed of earth-abundant elements (mainly carbon and silica) at low cost in a very sustainable manner are discussed. In Chapter 1, the fundamental properties of nanomaterials and their potential applications in many areas are introduced. In Chapter 2, a novel synthetic method that allows polymerization of polyaniline (PANI), a conducting polymer, inside the cylindrical channel pores of nanoporous silica (SBA-15) is discussed. In addition, the properties of the resulting conducting polymer in the confined nanochannel spaces of SBA-15 are detailed, and, more importantly, the use of the resulting hybrid material (PANI/SBA-15) as an electrocatalyst for electrooxidation reactions with low overpotential, close to zero, is experimentally demonstrated. In Chapter 3, the synthetic approach discussed in Chapter 2 is further extended to afford nitrogen- and oxygen-doped mesoporous carbons. This is possible by pyrolysis of the PANI/SBA-15 composite materials under an inert atmosphere, followed by etching away their silica framework. The high catalytic activity of the resulting carbon-based materials toward the oxygen reduction reaction, despite their lack of metal dopants, is also included.
The potential uses of nanomaterials in areas such as nanomedicine require a deep understanding of the biocompatibility/toxicity of the materials. In Chapter 4, comparative in vitro and in vivo assessments of the biological properties and murine lung toxicity (biocompatibility) of the carbon-based nanomaterials synthesized above, and of core-shell architectures containing carbon, silica, and cobalt, are presented. The results indicate that the silica shell is essential for biocompatibility. Furthermore, cobalt oxide is the preferred phase over the zero-valent Co(0) phase to impart biocompatibility to cobalt-based nanoparticles. This study is a result of collaboration between Asefa's research group at Rutgers University and Souid's research group at United Arab Emirates University. In Chapter 5, a new synthetic route to carbon nanoneedles (a new class of carbon nanomaterials with high aspect ratios) is presented. In the work, cellulose nanocrystals are prepared and used as a precursor for carbon nanostructures. Unlike other types of carbon nanomaterials, carbon nanoneedles possess a high surface area and a large proportion of edge planes, which have outstanding charge transfer and catalytic properties. The resulting metal-free carbon nanoneedles are shown to serve as effective electrocatalysts for the oxidation of hydrazine. In Chapter 6, the synthesis of amorphous carbon nanoneedles containing cobalt and their catalytic activity for the oxygen reduction reaction are discussed. Even though the activity of these materials is lower than that of the polyaniline-derived mesoporous carbons discussed in Chapter 3, the results and discussion in this chapter provide new insights on the effects and advantages of carbon nanoneedles on the electrocatalytic activity of the materials. In addition, the effects of cobalt content and nanoneedle structure on the catalytic activity of the materials are described.
In Chapter 7, the synthesis of very small Au nanoparticles within SBA-15 mesoporous silica host materials by galvanic exchange reactions is described. The resulting Au/SBA-15 materials, with different-sized Au nanoparticles, are shown to have very interesting surface plasmon resonance (SPR) activity as a result of the confinement of large numbers of Au nanoparticles side-to-side in a row within the cylindrical channel pores of SBA-15 and the many SPR hot spots they form. The surface-enhanced Raman scattering (SERS) properties of the materials in powder form, showing a reasonably high SERS enhancement factor for analytes, are discussed. Finally, in Chapter 8, conclusions and future prospects are discussed.
Pollination Research Methods with Apis mellifera
USDA-ARS?s Scientific Manuscript database
This chapter describes field and lab procedures for doing experiments on honey bee pollination. Most of the methods apply to any insect for whom pollen vectoring capacity is the question. What makes honey bee pollination distinctive is its historic emphasis on agricultural applications; hence one fi...
48 CFR 715.370-2 - Title XII selection procedure-collaborative assistance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... AGENCY FOR INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION... contracting system is appropriate. See AIDR appendix F (of this chapter)—Use of Collaborative Assistance... initiating any contract actions under the collaborative assistance method: (1) The cognizant technical office...
48 CFR 715.370-2 - Title XII selection procedure-collaborative assistance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... AGENCY FOR INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION... contracting system is appropriate. See AIDR appendix F (of this chapter)—Use of Collaborative Assistance... initiating any contract actions under the collaborative assistance method: (1) The cognizant technical office...
Analytical Methods for Biomass Characterization during Pretreatment and Bioconversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pu, Yunqiao; Meng, Xianzhi; Yoo, Chang Geun
2016-01-01
Lignocellulosic biomass has been introduced as a promising resource for alternative fuels and chemicals because of its abundance and its potential to complement petroleum resources. Biomass is a complex biopolymer, and its compositional and structural characteristics vary largely depending on its species as well as its growth environment. Because of the complexity and variety of biomass, understanding its physicochemical characteristics is key to effective biomass utilization. Characterization of biomass not only provides critical information on biomass during pretreatment and bioconversion, but also gives valuable insights on how to utilize the biomass. For a better understanding of biomass characteristics, a good grasp and proper selection of analytical methods are necessary. This chapter introduces existing analytical approaches that are widely employed for biomass characterization during biomass pretreatment and conversion processes. Diverse analytical methods using Fourier transform infrared (FTIR) spectroscopy, gel permeation chromatography (GPC), and nuclear magnetic resonance (NMR) spectroscopy for biomass characterization are reviewed. In addition, methods for assessing biomass accessibility by analyzing the surface properties of biomass are also summarized in this chapter.
Oligonucleotide-Functionalized Anisotropic Gold Nanoparticles
NASA Astrophysics Data System (ADS)
Jones, Matthew Robert
In this thesis, we describe the properties of oligonucleotide-functionalized gold colloids under the unique set of conditions where the particles are geometrically anisotropic and have nanometer-scale dimensions. While nearly two decades of previous work elucidated numerous unexpected and emergent phenomena arising from the combination of inorganic nanoparticles with surface-bound DNA strands, virtually nothing was known about how these properties are altered when the shape of the nanoparticle core is chosen to be non-spherical. In particular, we are interested in understanding, and ultimately controlling, the ways in which these DNA-conjugated anisotropic nanostructures interact when their attraction is governed by programmable DNA hybridization events. Chapter 1 introduces the field of DNA-based materials assembly by discussing how nanoscale building blocks which present rigid, directional interactions can be thought of as possessing artificial versions of the familiar chemical principles of "bonds" and "valency". In chapter 2 we explore the fundamental interparticle binding thermodynamics of DNA-functionalized spherical and anisotropic nanoparticles, which reveals enormous preferences for collective ligand interactions occurring between flat surfaces over those that occur between curved surfaces. Using these insights, chapter 3 demonstrates that when syntheses produce mixtures of different nanoparticle shapes, the tailorable nature of DNA-mediated interparticle association can be used to selectively crystallize and purify the desired anisotropic nanostructure products, leaving spherical impurity particles behind. Chapter 4 leverages the principle that the flat facets of anisotropic particles generate directional DNA-based hybridization interactions to assemble a variety of tailorable nanoparticle superlattices whose symmetry and dimensionality are a direct consequence of the shape of the nanoparticle building block used in their construction. 
Chapter 5 explores a useful application of having thermally labile DNA duplexes bound to anisotropic nanoparticles -- the selective photothermal heating and release of hybridized oligonucleotides via a plasmon excitation-based mechanism. The final chapter presents a brief summary of the seminal findings of this thesis and provides an outlook covering future directions and remaining challenges for the field. A comprehensive review covering methods to synthesize and assemble noble metal nanostructures is included in the appendix as an additional resource. All experimental chapters are organized similarly; they begin with an abstract or introduction to motivate and contextualize the work, present the main results and discussion with brief experimental details, and conclude with more detailed, supplementary information for the interested reader. As a whole, this work establishes fundamental understanding and new experimental methods for exploiting nanoscale shape anisotropy to manipulate the chemical and physical properties of matter.
ERIC Educational Resources Information Center
Thatcher, L. L.; And Others
Analytical methods for determining important components of fission and natural radioactivity found in water are reported. The discussion of each method includes conditions for application of the method, a summary of the method, interferences, required apparatus, procedures, calculations and estimation of precision. Isotopes considered are…
Solar Electric Propulsion Triple-Satellite-Aided Capture With Mars Flyby
NASA Astrophysics Data System (ADS)
Patrick, Sean
Triple-satellite-aided capture sequences use gravity assists at three of Jupiter's four massive Galilean moons to reduce the ΔV required to enter Jupiter orbit. A triple-satellite-aided capture at Callisto, Ganymede, and Io is proposed to capture a SEP spacecraft into Jupiter orbit from an interplanetary Earth-Jupiter trajectory that employs low-thrust maneuvers. The principal advantage of this method is that it combines the Isp efficiency of ion propulsion with nearly impulsive but propellant-free gravity assists. For this thesis, two main chapters are devoted to the exploration of low-thrust triple-flyby capture trajectories; specifically, the design and optimization of these trajectories are explored in depth. The first chapter explores the design of two solar electric propulsion (SEP) low-thrust trajectories developed using JPL's MALTO software. The two trajectories combined represent a full Earth-to-Jupiter capture, split into a heliocentric Earth-to-Jupiter sphere-of-influence (SOI) trajectory and a Joviocentric capture trajectory. The Joviocentric trajectory makes use of gravity-assist flybys of Callisto, Ganymede, and Io to capture into Jupiter orbit with a period of 106.3 days. Following this, in chapter two, three more SEP low-thrust trajectories are developed based upon those in chapter one. These trajectories, devised using the high-fidelity Mystic software, also developed by JPL, improve upon the original trajectories of chapter one. Here, the developed trajectories are three separate, full Earth-to-Jupiter capture orbits. As in chapter one, a Mars gravity assist is used to augment the heliocentric trajectories. Gravity-assist flybys of Callisto, Ganymede, and Io or Europa are used to capture into Jupiter orbit. With periods between 89.8 and 137.2 days, the orbits developed in chapters one and two are shorter than most Jupiter capture orbits achieved using low-thrust propulsion techniques.
Finally, chapter 3 presents an original trajectory design for a Very-Long-Baseline Interferometry (VLBI) satellite constellation. The design was created for the 8th Global Trajectory Optimization Competition (GTOC8), in which participants are tasked with creating and optimizing low-thrust trajectories to place a series of three spacecraft into formation to map given radio sources.
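The premise above, that flybys shedding hyperbolic excess speed cut the propulsive cost of capture, can be illustrated with a textbook vis-viva calculation. This is a rough sketch only: the 106.3-day period comes from the abstract, but the periapsis radius and v-infinity values are hypothetical, and the flybys are modeled purely as a reduction in the effective arrival v-infinity.

```python
import math

MU_JUPITER = 1.26687e8  # km^3/s^2, Jupiter's gravitational parameter

def capture_dv(v_inf, r_p, period_days, mu=MU_JUPITER):
    """Impulsive periapsis burn (km/s) needed to capture from a hyperbolic
    approach with excess speed v_inf (km/s) into an ellipse with the given
    orbital period, burning at periapsis radius r_p (km)."""
    # Semi-major axis of the target ellipse from Kepler's third law.
    a = (mu * (period_days * 86400.0) ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
    v_hyperbolic = math.sqrt(v_inf ** 2 + 2.0 * mu / r_p)  # vis-viva on the hyperbola
    v_elliptic = math.sqrt(2.0 * mu / r_p - mu / a)        # vis-viva on the ellipse
    return v_hyperbolic - v_elliptic

# Illustrative numbers only: periapsis near Io's orbital radius, and the
# 106.3-day capture period quoted in the abstract.
r_p = 4.3e5  # km
print(capture_dv(5.6, r_p, 106.3))  # direct capture from interplanetary arrival
print(capture_dv(2.0, r_p, 106.3))  # after flybys have shed hyperbolic energy
```

The second case needs roughly half the burn of the first, which is the whole appeal of spending flybys before the capture maneuver.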
Carbon nanotube macroelectronics
NASA Astrophysics Data System (ADS)
Zhang, Jialu
In this dissertation, I discuss the application of carbon nanotubes in macroelectronics. Due to extraordinary electrical properties such as high intrinsic carrier mobility and current-carrying capacity, single-wall carbon nanotubes are very desirable for thin-film transistor (TFT) applications such as flat-panel displays, transparent electronics, and flexible and stretchable electronics. Compared with other popular channel materials for TFTs, namely amorphous silicon, polycrystalline silicon, and organic materials, nanotube thin films have the advantages of low-temperature processing compatibility, transparency, and flexibility, as well as high device performance. In order to demonstrate scalable, practical carbon nanotube macroelectronics, I have developed a platform to fabricate high-density, uniform separated-nanotube-based thin-film transistors. In addition, many other essential analyses and technology components, such as nanotube film density control, a study of purity- and diameter-dependent semiconducting nanotube electrical performance, air-stable n-type transistor fabrication, and a CMOS integration platform, have also been demonstrated. On the basis of the above achievements, I have further demonstrated various applications including AMOLED display electronics, PMOS and CMOS logic circuits, and flexible and transparent electronics. The dissertation is structured as follows. First, chapter 1 gives a brief introduction to the electronic properties of carbon nanotubes, which serves as the background knowledge for the following chapters. In chapter 2, I present our approach to fabricating wafer-scale uniform semiconducting carbon nanotube thin-film transistors and demonstrate their application in display electronics and logic circuits. Following that, more detailed information about carbon nanotube thin-film transistor based active-matrix organic light-emitting diode (AMOLED) displays is discussed in chapter 3.
And in chapter 4, a technology to fabricate air-stable n-type semiconducting nanotube thin-film transistor is developed and complementary metal--oxide--semiconductor (CMOS) logic circuits are demonstrated. Chapter 5 discusses the application of carbon nanotubes in transparent and flexible electronics. After that, in chapter 6, a simple and low cost nanotube separation method is introduced and the electrical performance of separated nanotubes with different diameter is studied. Finally, in chapter 7 a brief summary is drawn and some future research directions are proposed with preliminary results.
Problems with the Fraser report Chapter 1: Pitfalls in BMI time trend analysis.
Lo, Ernest
2014-11-05
The first chapter of the Fraser report "Obesity in Canada: Overstated Problems, Misguided Policy Solutions" presents a flawed and misleading analysis of BMI time trends. The objective of this commentary is to provide a tutorial on BMI time trend analysis through the examination of these flaws. Three issues are discussed: 1. Spotting regions of confidence interval overlap is a statistically flawed method of assessing trend; regression methods, which measure the behaviour of the data as a whole, are preferred. 2. Temporal stability in overweight (25≤BMI<30) prevalence must be interpreted in the context of the underlying population BMI distribution. 3. BMI is considered reliable for tracking population-level weight trends due to its high correlation with body fat percentage. BMI-defined obesity prevalence represents a conservative underestimate of the population at risk. The findings of the Fraser report Chapter 1 are either refuted or substantially mitigated once the above issues are accounted for, and we do not find that the 'Canadian situation largely lacks a disconcerting or negative trend', as claimed. It is hoped that this commentary will help guide public health professionals who need to interpret, or wish to perform their own, time trend analyses of BMI.
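The commentary's first point, preferring a regression fit over eyeballing confidence-interval overlap, can be sketched as follows. The prevalence figures are hypothetical (not the report's data), and for brevity this is plain ordinary least squares on the point estimates, without the uncertainty treatment a real analysis would add.

```python
# Toy illustration (hypothetical data): estimating a prevalence time trend
# with a single regression over all years, rather than comparing per-year
# confidence intervals pairwise.

def ols_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical annual obesity-prevalence estimates (percent).
years = [2000, 2002, 2004, 2006, 2008, 2010]
prev = [14.9, 15.4, 15.2, 16.1, 16.5, 17.0]

slope, intercept = ols_line(years, prev)
print(f"trend: {slope:.2f} percentage points per year")  # trend: 0.21 percentage points per year
```

A pair of adjacent years here could easily have overlapping confidence intervals, yet the regression, which pools all six points, still recovers a clear upward trend.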
Scanning electron microscopy of bone.
Boyde, Alan
2012-01-01
This chapter describes methods for scanning electron microscopical imaging of bone and bone cells. Backscattered electron (BSE) imaging is by far the most useful in the bone field, followed by the secondary electron (SE) and energy-dispersive X-ray (EDX) analytical modes. This chapter considers preparing and imaging samples of unembedded bone having 3D detail in a 3D surface; topography-free, polished or micromilled, resin-embedded block surfaces; and resin casts of spaces in bone matrix. The chapter considers methods for fixation, drying, looking at the undersides of bone cells, and coating. Maceration with alkaline bacterial pronase, hypochlorite, hydrogen peroxide, and sodium or potassium hydroxide to remove cells and unmineralised matrix is described in detail. Attention is given especially to methods for 3D BSE SEM imaging of bone samples, and recommendations are given for the types of resin embedding of bone for BSE imaging. Correlated confocal and SEM imaging of PMMA-embedded bone requires the use of glycerol to coverslip. Cathodoluminescence (CL) mode SEM imaging is an alternative for visualising fluorescent mineralising-front labels such as calcein and the tetracyclines. Making spatial casts from PMMA- or other resin-embedded samples is an important use of this material. Correlation with other imaging means, including microradiography and microtomography, is important. Shipping wet bone samples between labs is best done in glycerol. Environmental SEM (ESEM, controlled vacuum mode) is valuable in eliminating "charging" problems, which are common with complex, cancellous bone samples.
Methods for Tumor Targeting with Salmonella typhimurium A1-R.
Hoffman, Robert M; Zhao, Ming
2016-01-01
Salmonella typhimurium A1-R (S. typhimurium A1-R) has shown great preclinical promise as a broad-based anti-cancer therapeutic (please see Chapter 1). The present chapter describes materials and methods for the preclinical study of S. typhimurium A1-R in clinically relevant mouse models. Establishment of orthotopic metastatic mouse models of the major cancer types is described, as well as other useful models, for efficacy studies of S. typhimurium A1-R or other tumor-targeting bacteria. Imaging methods are described to visualize GFP-labeled S. typhimurium A1-R, as well as the GFP- and/or RFP-labeled cancer cells it targets, in vitro and in vivo. The mouse models include metastasis to major organs that are life-threatening to cancer patients, including the liver, lung, bone, and brain, and how to target these metastases with S. typhimurium A1-R. Various routes of administration of S. typhimurium A1-R are described, with the advantages and disadvantages of each. Basic experiments to determine the toxic effects of S. typhimurium A1-R are also described, as are methodologies for combining S. typhimurium A1-R and chemotherapy. The testing of S. typhimurium A1-R on patient tumors in patient-derived orthotopic xenograft (PDOX) mouse models is also described. The major methodologies described in this chapter should be translatable to clinical studies.
ERIC Educational Resources Information Center
Osterman, Dean
This chapter explains how the Guided Design method of teaching can be used to solve problems, and how this method was used in the development of a new method of teaching. Called the Feedback Lecture, this method is illustrated through an example, and research data on its effectiveness is presented. The Guided Decision-Making Process is also…
DOT National Transportation Integrated Search
2008-02-01
The objective of the proposed research project is to compare the results of two recently introduced nondestructive load test methods to the existing 24-hour load test method described in Chapter 20 of ACI 318-05. The two new methods of nondestructive...
Geoid determination in the coastal areas of the Gulf of Mexico
NASA Astrophysics Data System (ADS)
Song, HongZhi
Coastal areas of the Gulf of Mexico are important for many reasons. This part of the United States provides vital coastal habitats for many marine species; the area has seen ever-increasing human settlement along the coast, ever-increasing infrastructure for marine transportation of the nation's imports and exports through Gulf ports, and ever-increasing recreational use of coastal resources. These important uses associated with the Gulf coast are subject to dynamic environmental and physical changes including coastal erosion (Gulf-wide rates of 25 square miles per year), tropical storm surges, coastal subsidence, and global sea level rise. Coastal land subsidence is a major component of relative sea level rise along the coast of the Gulf of Mexico. These dynamic coastal changes should be evident in changes to the geoid along the coast. The geoid is the equipotential gravity surface of the Earth that best fits global mean sea level. The geoid is not only seen as the most natural shape of the Earth, but it also serves as the reference surface for most height systems. Using satellites (the GRACE mission), scientists have been able to measure the large-scale geoid of the Earth. A small-scale geoid model is required to monitor local events such as flooding, for example, flooding created by storm surges from hurricanes such as Katrina (2005), Rita (2005), and Ike (2008). The overall purpose of this study is to evaluate the accuracy of the local coastal geoid. A more precise geoid will enable improved coastal flooding predictions, and will enable more cost-effective and accurate measurement of coastal topography using global navigation satellite systems (GNSS). The main objective of this study is to devise mathematical models and computational methods to achieve the best possible precision in evaluating the geoid in the coastal areas of the Gulf of Mexico.
More specifically, the numerical objectives of this study are 1) to obtain a continuous map of gravity anomalies and a continuous map of gravity by using spatial interpolation methods and to evaluate errors; 2) to solve the Laplace boundary value problem and evaluate errors; and 3) to evaluate the precision of the local geoid by using geospatial statistical tools and numerical techniques. This dissertation investigates modeling of the geoid, especially the gravimetric equipotential surface that approximates mean sea level, in the coastal areas of the Gulf of Mexico, as well as errors in the geoid determination. Chapter 1 introduces the study. Different models of kriging are used to determine the precision of the geoid based on the free-air gravity anomaly data supplied by the United States Naval Research Laboratory and the airborne gravity data provided by the U.S. National Geodetic Survey, as described in Chapters 2 and 3. Chapter 2 shows that more precise evaluation of errors in gravity anomalies can be achieved by using different models of kriging. Results from Chapters 2 and 3 show that ordinary kriging with the stable semivariogram model provides better predictions. Chapter 3 also provides an estimate of the maximum possible error in the calculation of the geoid undulation. The dissertation further investigates the behavior of gravity equipotential surfaces around coastlines and its impact on the geoid evaluation. Chapters 4 and 5 evaluate errors in the Dirichlet problem for the calculation of gravity potential with an uncertain boundary and uncertain boundary values, achieved by solving the Laplace equation by means of separation of variables. Chapter 4 provides a theoretical model to estimate very small changes in gravimetric potential relative to the coast. The maximum possible error in the solution of the Dirichlet problem is determined in Chapter 5.
The maximum possible error depends on the errors of the boundary values and the precision of the boundary itself. Chapter 6 describes a novel approach to sea level rise modeling. Factor analysis is used to analyze local and global sea level rise and the relationships between changing sea levels, currents, and the shape of the Earth. Results of the factor analysis in Chapter 6 show that the elevation of sea level relates to the geoid and to ocean circulation. Chapter 7 describes the relationship between the geoid and wetlands modeling. Research in Chapter 7 shows that the predicted continuous elevation map obtained through ordinary kriging with the stable semivariogram was sufficiently precise and fairly reliable. Chapter 7 is exploratory, and its ideas will guide future research.
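The ordinary kriging with a stable semivariogram credited above for the best predictions can be sketched in a few lines. This is a minimal, self-contained illustration, not the dissertation's implementation; the synthetic station coordinates, anomaly values, and variogram parameters are all invented for the example:

```python
import numpy as np

def stable_semivariogram(h, nugget=0.0, sill=1.0, rng=50.0, alpha=1.5):
    """Stable model: gamma(h) = nugget + sill * (1 - exp(-(h/range)^alpha))."""
    return nugget + sill * (1.0 - np.exp(-(h / rng) ** alpha))

def ordinary_kriging(xy, z, xy0, **vg):
    """Predict z at location xy0 from samples (xy, z); returns (prediction, kriging variance)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Ordinary kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = stable_semivariogram(d, **vg)
    A[n, n] = 0.0                      # Lagrange-multiplier entry
    b = np.ones(n + 1)
    b[:n] = stable_semivariogram(np.linalg.norm(xy - xy0, axis=1), **vg)
    w = np.linalg.solve(A, b)
    pred = w[:n] @ z                   # weights sum to 1 by construction
    var = w @ b                        # kriging variance = sum(w*gamma0) + mu
    return pred, var

rng_state = np.random.default_rng(0)
pts = rng_state.uniform(0, 100, size=(30, 2))     # synthetic gravity-anomaly stations
vals = np.sin(pts[:, 0] / 20) + 0.1 * rng_state.normal(size=30)
pred, var = ordinary_kriging(pts, vals, np.array([50.0, 50.0]))
```

With a zero nugget, the predictor interpolates the data exactly at the station locations, and the returned variance gives the pointwise error estimate the dissertation uses to map prediction uncertainty.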
NASA Technical Reports Server (NTRS)
Ippolito, Louis J.
1989-01-01
The NASA Propagation Effects Handbook for Satellite Systems Design provides a systematic compilation of the major propagation effects experienced on space-Earth paths in the 10 to 100 GHz frequency band. It provides both a detailed description of each propagation phenomenon and a summary of its impact on communications system design and performance. Chapters 2 through 5 describe the propagation effects, prediction models, and available experimental databases. In Chapter 6, design techniques and prediction methods available for evaluating propagation effects on space-Earth communication systems are presented. Chapter 7 addresses the system design process, how propagation effects on system design and performance should be considered, and how they can be mitigated. Examples of operational and planned Ku-, Ka-, and EHF-band satellite communications systems are given.
Biological Chemistry and Functionality of Protein Sulfenic Acids and Related Thiol Modifications
Devarie-Baez, Nelmi O.; Silva Lopez, Elsa I.; Furdui, Cristina M.
2016-01-01
Selective modification of proteins at cysteine residues by reactive oxygen, nitrogen, or sulfur species formed under physiological and pathological states is emerging as a critical regulator of protein activity impacting cellular function. This review focuses primarily on protein sulfenylation (-SOH), a metastable reversible modification connecting reduced cysteine thiols to many products of cysteine oxidation. An overview is first provided of the chemistry principles underlying the synthesis, stability, and reactivity of sulfenic acids in model compounds and proteins, followed by a brief description of analytical methods currently employed to characterize these oxidative species. The following chapters present a selection of redox-regulated proteins for which -SOH formation was experimentally confirmed and linked to protein function. These chapters are organized based on the participation of these proteins in the regulation of signaling, metabolism, and epigenetics. The last chapter discusses the therapeutic implications of an altered redox microenvironment and protein oxidation in disease. PMID:26340608
40 CFR 459.11 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... definitions, abbreviations and methods of analysis set forth in part 401 of this chapter shall apply to this... as paper prints, slides, negatives, enlargements, movie film and other sensitized materials. ...
23 CFR 635.104 - Method of construction.
Code of Federal Regulations, 2013 CFR
2013-04-01
... STD demonstrates to the satisfaction of the Division Administrator that some other method is more cost effective or that an emergency exists. The STD shall assure opportunity for free, open, and competitive... this chapter. Before such finding is made, the STD shall determine that the organization to undertake...
23 CFR 635.104 - Method of construction.
Code of Federal Regulations, 2011 CFR
2011-04-01
... STD demonstrates to the satisfaction of the Division Administrator that some other method is more cost effective or that an emergency exists. The STD shall assure opportunity for free, open, and competitive... this chapter. Before such finding is made, the STD shall determine that the organization to undertake...
23 CFR 635.104 - Method of construction.
Code of Federal Regulations, 2012 CFR
2012-04-01
... STD demonstrates to the satisfaction of the Division Administrator that some other method is more cost effective or that an emergency exists. The STD shall assure opportunity for free, open, and competitive... this chapter. Before such finding is made, the STD shall determine that the organization to undertake...
23 CFR 635.104 - Method of construction.
Code of Federal Regulations, 2014 CFR
2014-04-01
... STD demonstrates to the satisfaction of the Division Administrator that some other method is more cost effective or that an emergency exists. The STD shall assure opportunity for free, open, and competitive... this chapter. Before such finding is made, the STD shall determine that the organization to undertake...
Optimization of forest wildlife objectives
John Hof; Robert Haight
2007-01-01
This chapter presents an overview of methods for optimizing wildlife-related objectives. These objectives hinge on landscape pattern, so we refer to these methods as "spatial optimization." It is currently possible to directly capture deterministic characterizations of the most basic spatial relationships: proximity relationships (including those that lead to...
Chapter 3: Design of the Saber-Tooth Project.
ERIC Educational Resources Information Center
Ward, Phillip
1999-01-01
Used data from interviews, surveys, and document analysis to describe the methods and reform processes of the Saber Tooth Project, examining selection of sites; demographics (school sites, teachers, data sources, and project assumptions); and project phases (development, planning, implementation, and support). The project's method of reform was…
Code of Federal Regulations, 2010 CFR
2010-07-01
... Method employee used to purchase transportation tickets Method Indicator GTR U.S. Government Transportation Request Central Billing Account A contractor centrally billed account Government Charge Card In.../Date Fields Claimant Signature Traveler's signature, or digital representation. The signature signifies...
Nonlinear Localized Dissipative Structures for Long-Time Solution of Wave Equation
2009-07-01
are described in this chapter. These details are required to compute interference. WC can be used to generate constant arrival time (Eikonal phase...complicated using Eikonal schemes. Some recent developments in Eikonal methods [2] can treat multiple arrival times, but these methods require extra
Assessing and measuring wetland hydrology
Rosenberry, Donald O.; Hayashi, Masaki; Anderson, James T.; Davis, Craig A.
2013-01-01
Virtually all ecological processes that occur in wetlands are influenced by the water that flows to, from, and within these wetlands. This chapter provides the “how-to” information for quantifying the various source and loss terms associated with wetland hydrology. The chapter is organized from a water-budget perspective, with sections associated with each of the water-budget components that are common in most wetland settings. Methods for quantifying the water contained within the wetland are presented first, followed by discussion of each separate component. Measurement accuracy and sources of error are discussed for each of the methods presented, and a separate section discusses the cumulative error associated with determining a water budget for a wetland. Exercises and field activities will provide hands-on experience that will facilitate greater understanding of these processes.
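The water-budget bookkeeping the chapter organizes around can be written as dS = (P + SWin + GWin) - (ET + SWout + GWout); comparing the computed dS with an independently measured storage change yields the cumulative error the chapter discusses. A minimal sketch of that accounting, with invented component names and numbers purely for illustration:

```python
def water_budget_residual(P, SWin, GWin, ET, SWout, GWout, dS_measured):
    """Return (computed storage change, residual = measured - computed).
    All terms must share consistent units, e.g. mm of water over the
    wetland area per measurement interval."""
    dS_computed = (P + SWin + GWin) - (ET + SWout + GWout)
    return dS_computed, dS_measured - dS_computed

# Illustrative values (mm per interval), not from the chapter:
dS, err = water_budget_residual(P=120.0, SWin=35.0, GWin=10.0,
                                ET=95.0, SWout=40.0, GWout=12.0,
                                dS_measured=20.0)
print(dS, err)   # 18.0 2.0
```

The residual lumps together the measurement errors of every component, which is why the chapter treats cumulative error in its own section.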
CHAPTER 7: Glycoprotein Enrichment Analytical Techniques: Advantages and Disadvantages
Zhu, Rui; Zacharias, Lauren; Wooding, Kerry M.; Peng, Wenjing; Mechref, Yehia
2017-01-01
Protein glycosylation is one of the most important posttranslational modifications, and numerous biological functions are related to it. However, analytical challenges remain in glycoprotein analysis. To overcome these challenges, many analytical techniques have been developed in recent years. Enrichment methods are used to improve the sensitivity of detection, while HPLC and mass spectrometry methods have been developed to facilitate the separation of glycopeptides/proteins and enhance detection, respectively. Fragmentation techniques applied in modern mass spectrometers allow the structural interpretation of glycopeptides/proteins, while automated software tools have started replacing manual processing to improve the reliability and throughput of the analysis. In this chapter, the current methodologies of glycoprotein analysis are discussed. Multiple analytical techniques are compared, and the advantages and disadvantages of each technique are highlighted. PMID:28109440
Chemistry and catalysis at the surface of nanomaterials
NASA Astrophysics Data System (ADS)
White, Brian Edward
This thesis will delve into three main areas of nanomaterials research: (I) Designing, building, and utilizing a chemical vapor deposition (CVD) system for the growth of CNTs; (II) Aqueous suspensions of carbon nanotubes (CNT) solubilized by various surfactants, and the oxidative chemistry that can occur at CNT surfaces; (III) Catalytic CO oxidation over supported Cu2O nanoparticle systems. An introduction to nanomaterials in general, with a particular emphasis on carbon nanotubes and nanoparticles, will be given in Chapter one. Chapter two provides a summary of common techniques used to grow carbon nanotubes, and introduces a new method we have developed. This method is based on previous chemical vapor deposition techniques, but uses liquids, specifically ethanol, as the carbon source. Using ethanol has several advantages, including ease of use and safety, as well as chemical benefits. Our new process affords long, aligned, single-walled nanotubes, with a relatively narrow diameter distribution. This method can also be used to grow CNTs across slits, which can then be studied spectroscopically. In Chapter three, CNT-surfactant aqueous suspensions will be discussed in depth, including a new robust polymer surfactant. Poly(maleic acid/octyl vinyl ether) (PMAOVE) is stable over a large range of temperatures and pH values, and is well suited for the study of the oxidative chemistry that can occur on SWNT surfaces. Our aqueous suspensions were found to be quite stable by zeta potential studies, and their emissive properties exhibited a pH dependence, quenching at higher concentrations of H+. We attribute this dependence to chemisorbed oxygen and its protonation at lower pH values. By heating the suspensions of SWNTs, O2 can be driven off, thus eliminating the dependence on pH. We also reproducibly add oxygen back into the system in the form of singlet oxygen (1Δg O2), obtained from an endoperoxide.
This method allows us to calculate the number of oxygen molecules needed for fluorescence quenching and absorption bleaching. With the aid of theoretical calculations, we propose a structure for the oxygen-nanotube species, as well as its protonated form. The final two chapters describe our development of a Cu2O nanoparticle based catalyst that can efficiently oxidize CO to CO2. Chapter four discusses the characterization of our catalytic system by TEM, XRD, TGA, BET, and elemental analysis, and the theoretical calculations that were carried out to verify our experimental findings in support of the redox mechanism of the reaction. The biggest drawback of this catalyst was the short lifetime, which was approximately 12 hours. The addition of CeO2 nanoparticles was used to increase lifetime, and this methodology is demonstrated in Chapter five. Efficient catalytic oxidation of CO was observed for over 200 hours, as well as the preferential oxidation of CO in a hydrogen environment.
Lipid globule size in total nutrient admixtures prepared in three-chamber plastic bags.
Driscoll, David F; Thoma, Andrea; Franke, Rolf; Klütsch, Karsten; Nehne, Jörg; Bistrian, Bruce R
2009-04-01
The stability of injectable lipid emulsions in three-chamber plastic (3CP) bags, applying the globule-size limits established by United States Pharmacopeia (USP) chapter 729, was studied. A total of five premixed total nutrient admixture (TNA) products packaged in 3CP bags from two different lipid manufacturers containing either 20% soybean oil or a mixture of soybean oil and medium-chain-triglyceride oil as injectable lipid emulsions were tested. Two low-osmolarity 3CP bags and three high-osmolarity 3CP bags were studied. All products were tested with the addition of trace elements and multivitamins. All additive conditions (with and without electrolytes) were tested in triplicate at time 0 (immediately after mixing) and at 6, 24, 30, and 48 hours after mixing; the bags were stored at 24-26 degrees C. All additives were equally distributed in each bag for comparative testing, applying both globule-sizing methods outlined in USP chapter 729. Of the bags tested, all bags from one manufacturer were coarse emulsions, showing signs of significant growth in the large-diameter tail when mixed as a TNA formulation and failing the limits set by method II of USP chapter 729 from the outset and throughout the study, while the bags from the other manufacturer were fine emulsions and met these limits. Of the bags that failed, significant instability was noted in one series containing additional electrolytes. Injectable lipid emulsions provided in 3CP bags that did not meet the globule-size limits of USP chapter 729 produced coarser TNA formulations than emulsions that met the USP limits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Charles Ashley
In Chapter 2 several experimental and data analysis methods used in this thesis are described. In Chapter 3 steady-state fluorescence spectroscopy was used to determine the concentration of the efflux pump inhibitors (EPIs), pheophorbide a and pyropheophorbide a, in the feces of animals and it was found that their levels far exceed those reported to be inhibitory to efflux pumps. In Chapter 4 the solvation dynamics of 6-propionyl-2-(N,N-dimethylamino)naphthalene (PRODAN) was studied in reverse micelles. The two fluorescent states of PRODAN solvate on different time scales and as such care must be exercised in solvation dynamics studies involving it and its analogs. In Chapter 5 we studied the experimental and theoretical solvation dynamics of coumarin 153 (C153) in wild-type (WT) and modified myoglobins. Based on the nuclear magnetic resonance (NMR) spectroscopy and time-resolved fluorescence studies, we have concluded that it is important to thoroughly characterize the structure of a protein and probe system before comparing the theoretical and experimental results. In Chapter 6 the photophysical and spectral properties of a derivative of the medically relevant compound curcumin called cyclocurcumin was studied. Based on NMR, fluorescence, and absorption studies, the ground- and excited-states of cyclocurcumin are complicated by the existence of multiple structural isomers. In Chapter 7 the hydrolysis of cellulose by a pure form of cellulase in an ionic liquid, HEMA, and its aqueous mixtures at various temperatures were studied with the goal of increasing the cellulose to glucose conversion for biofuel production. It was found that HEMA imparts an additional stability to cellulase and can allow for faster conversion of cellulose to glucose using a pre-treatment step in comparison to only buffer.
Creepy Crawlies and the Scientific Method: Over 100 Hands-On Science Experiments for Children.
ERIC Educational Resources Information Center
Kneidel, Sally Stenhouse
This book contains 114 experiments, mostly behavioral, with animals that are commonly found in nature. Each experiment is a five-step procedure: question, hypothesis, methods, result, and conclusion. Chapter 1 is devoted entirely to explaining these five steps, which together constitute the scientific method. The experiments are in the last part…
What's the Harm? The Coverage of Ethics and Harm Avoidance in Research Methods Textbooks
ERIC Educational Resources Information Center
Dixon, Shane; Quirke, Linda
2018-01-01
Methods textbooks play a role in socializing a new generation of researchers about ethical research. How do undergraduate social research methods textbooks portray harm, its prevalence, and ways to mitigate harm to participants? We conducted a content analysis of ethics chapters in the 18 highest-selling undergraduate textbooks used in sociology…
Information Fusion - Methods and Aggregation Operators
NASA Astrophysics Data System (ADS)
Torra, Vicenç
Information fusion techniques are commonly applied in Data Mining and Knowledge Discovery. In this chapter, we will give an overview of such applications considering their three main uses. That is, we consider fusion methods for data preprocessing, model building, and information extraction. Some aggregation operators (i.e., particular fusion methods) and their properties are briefly described as well.
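As one concrete instance of the aggregation operators such a chapter surveys, the ordered weighted averaging (OWA) operator subsumes the maximum, minimum, and arithmetic mean through its weight vector alone. A minimal sketch (the score values are illustrative, not taken from the chapter):

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: weights apply to the values sorted in
    descending order, so w = [1,0,...,0] gives max, w = [0,...,0,1] gives
    min, and uniform weights give the arithmetic mean."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and np.all(w >= 0), "weights must be a convex combination"
    return float(v @ w)

scores = [0.7, 0.2, 0.9, 0.4]          # e.g. outputs of four classifiers
print(owa(scores, [1, 0, 0, 0]))       # 0.9  (max)
print(owa(scores, [0.25] * 4))         # 0.55 (mean)
```

Because the weights attach to rank positions rather than to sources, OWA models attitudes between optimistic (weight on the largest values) and pessimistic (weight on the smallest) fusion, which is one of the properties such operators are characterized by.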
Stated Preference Methods for Valuation of Forest Attributes
Thomas P. Holmes; Kevin J. Boyle
2003-01-01
The valuation methods described in this chapter are based on the idea that forest ecosystems produce a wide variety of goods and services that are valued by people. Rather than focusing attention on the holistic value of forest ecosystems as is done in contingent valuation studies, attribute-based valuation methods (ABMs) focus attention on a set of attributes that...
Photonic crystals: Theory and device applications
NASA Astrophysics Data System (ADS)
Fan, Shanhui
In this thesis, first-principles frequency-domain and time-domain methods are developed and applied to investigate various properties and device applications of photonic crystals. In Chapter 2, I discuss the two numerical methods used to investigate the properties of photonic crystals. The first solves Maxwell's equations in the frequency domain, while the second solves the equations in the time domain. The frequency-domain method yields the frequency, polarization, symmetry, and field distribution of every eigenmode of the system; the time-domain method allows one to determine the temporal behavior of the modes. In Chapter 3, a new class of three-dimensional photonic crystal structures is introduced that is amenable to fabrication at submicron length scales. The structures give rise to a 3D photonic bandgap. They consist of a layered structure in which a series of cylindrical air holes are etched at normal incidence. The calculation demonstrates the existence of a gap as large as 14% of the mid-gap frequency using Si, SiO2, and air; and 23% using Si and air. In Chapter 4, the bandstructure and transmission properties of three-dimensional metallodielectric photonic crystals are presented. The metallodielectric crystals are modeled as perfect electrical conductors embedded in dielectric media. We investigate the face-centered-cubic (fcc) lattice and the diamond lattice. Partial gaps are predicted in the fcc lattice, in excellent agreement with recent experiments. Complete gaps are found in a diamond lattice of isolated metal spheres. The gaps appear between the second and third bands, and their sizes can be larger than 60% when the radius of the spheres exceeds 21% of the cubic unit cell size. In Chapter 5, I investigate the properties of resonant modes that arise from the introduction of local defects in two-dimensional (2D) and 3D photonic crystals. The properties of these modes can be controlled by changing the nature and the size of the defects.
The symmetry associated with these modes translates into an orbital angular momentum for each photon. In Chapter 6, a new type of high-Q microcavity is introduced that consists of a channel waveguide and a one-dimensional photonic crystal. A band gap for the guided modes is opened and a sharp resonant state is created by adding a defect in the periodic system. Strong field confinement of the defect can be achieved with a modal volume less than half of a cubic wavelength. The coupling efficiency to this mode from a channel waveguide exceeds 80%. In Chapter 7, a tunable single-mode waveguide microcavity is proposed that is well suited for frequency modulations and switching. The cavity mode has a volume of less than one cubic half-wavelength, and the resonant frequency is tuned by refractive-index modulation. Picosecond on-off switching times are achievable when two of these cavities are placed in series. In Chapter 8, I show that a thin slab of two-dimensional photonic crystal can alter drastically the radiation pattern of spontaneous emission. By eliminating all guided modes at the transition frequencies, spontaneous emission can be coupled entirely to free space modes. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.) (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Xu, M. H.
2016-03-01
Since 1998 January 1, instead of the traditional stellar reference system, the International Celestial Reference System (ICRS) has been realized by an ensemble of extragalactic radio sources located hundreds of millions of light-years away (if we accept their cosmological distances), so that the reference frame they realize is assumed to be space-fixed. The acceleration of the barycenter of the solar system (SSB), which is the origin of the ICRS, gives rise to a systematic variation in the directions of the observed radio sources, a phenomenon called the secular aberration drift. As a result, the space-fixed extragalactic reference frame provides a reference standard for detecting the secular aberration drift, and the acceleration of the barycenter with respect to space can be determined from observations of extragalactic radio sources. In this thesis, we aim to determine the acceleration of the SSB from astrometric and geodetic observations obtained by Very Long Baseline Interferometry (VLBI), a technique that uses telescopes distributed across the globe to observe a radio source simultaneously, with the capacity to position compact radio sources at the 10-milliarcsecond level. A global-solution method, which allows the acceleration vector to be estimated as a global parameter in the data analysis, is developed. Through the formal error given by the solution, this method directly shows the capability of VLBI observations to constrain the acceleration of the SSB, and demonstrates the significance level of the result. In the next step, the impact of the acceleration on the ICRS is studied in order to obtain the correction of the celestial reference frame (CRF) orientation. This thesis begins with the basic background and the general frame of this work.
A brief review of the realization of the CRF based on kinematical and dynamical methods is presented in Chapter 2, along with the definition of the CRF and its relationship with the inertial reference frame. Chapter 3 is divided into two parts. The first part describes various effects that modify the geometric direction of an object, especially the parallax, the aberration, and the proper motion. The derivative model and the principle of determination of the acceleration are then introduced in the second part. The VLBI data analysis method, including VLBI data reduction (solving the ambiguity, identifying clock breaks, and determining the ionospheric effect), the theoretical delay model, parameterization, and datum definition, is discussed in detail in Chapter 4. The estimation of the acceleration from more than 30 years of VLBI observations and the results are then described in Chapter 5. The evaluation and robustness checks of our results using different solutions, and the comparison with results from another research group, are performed. The error sources for the estimation of the acceleration, such as the secular parallax caused by the velocity of the barycenter in space, are quantitatively studied by simulation and data analysis in Chapter 6. The two main impacts of the acceleration on the CRF, the apparent proper motion with a magnitude at the μas/yr level and the global rotation of the CRF due to the non-uniform distribution of radio sources on the sky, are discussed in Chapter 7. The definition and the realization of the epoch CRF are presented as well. Future work concerning the explanation of the estimated acceleration and potential research on several main problems in modern astrometry are discussed in the last chapter.
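The expected size of the secular aberration drift described above can be estimated from round textbook numbers alone. This is a hedged back-of-the-envelope sketch, not the thesis's estimate: a galactocentric circular speed of about 230 km/s and a distance of about 8 kpc are assumed, and the maximum apparent proper motion (for a source 90 degrees from the acceleration apex) is mu = a/c:

```python
import math

C = 2.998e8                              # speed of light, m/s
KPC = 3.086e19                           # metres per kiloparsec
YEAR = 3.156e7                           # seconds per year
RAD_TO_UAS = math.degrees(1) * 3600e6    # radians -> microarcseconds

V = 230e3                                # assumed circular speed of the SSB, m/s
R = 8.0 * KPC                            # assumed distance to the Galactic centre, m
a = V ** 2 / R                           # centripetal acceleration, m/s^2

# Dipole amplitude of the drift: mu_max = a / c, converted to uas/yr
mu_uas_per_yr = a / C * YEAR * RAD_TO_UAS
print(f"a ~ {a:.2e} m/s^2, drift amplitude ~ {mu_uas_per_yr:.1f} uas/yr")
```

The result is of order a few μas/yr, which is why detecting the drift requires the decades-long accumulation of VLBI observations the thesis analyzes; the full signature on the sky is the dipole field mu(n) = (a - (a·n)n)/c.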
40 CFR 436.31 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., the general definitions, abbreviations and methods of analysis set forth in part 401 of this chapter... may be obtained from the National Climatic Center of the Environmental Data Service, National Oceanic...
40 CFR 436.21 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... definitions, abbreviations and methods of analysis set forth in part 401 of this chapter shall apply to this... National Climatic Center of the Environmental Data Service, National Oceanic and Atmospheric Administration...
40 CFR 436.41 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... definitions, abbreviations, and methods of analysis set forth in part 401 of this chapter shall apply to this... National Climatic Center of the Environmental Data Service, National Oceanic and Atmospheric Administration...
METHODS OF TREATMENT OF COMPLEX SURFACES ON METAL CUTTING MACHINES (CHAPTERS 1 AND 12),
FORGING, MOLDINGS, MANDRELS, MARINE PROPELLERS, AERIAL PROPELLERS, TURBINE BLADES, ABRASIVES, IMPELLERS, AIRCRAFT PANELS, METAL PLATES, CAMS, ELECTROEROSIVE MACHINING, CHEMICAL MILLING, MAGNETOSTRICTIVE ELEMENTS, USSR.
This chapter describes the most widely used virus adsorption-elution (VIRADEL) method for recovering human enteric viruses from water matrices (Fout et al., 1996). The method takes advantage of positively charged cartridge filters to concentrate viruses from water. The major adv...
The Great Acting Teachers and Their Methods.
ERIC Educational Resources Information Center
Brestoff, Richard
This book explores the acting theories and teaching methods of great teachers of acting--among them, the Europeans Stanislavski, Meyerhold, Brecht, and Grotowski; the Japanese Suzuki (who trained in Europe); and the contemporary Americans, Stella Adler, Lee Strasberg, and Sanford Meisner. Each chapter of the book includes a sample class, which…
26 CFR 20.0-2 - General description of tax.
Code of Federal Regulations, 2010 CFR
2010-04-01
... this chapter contain rules that provide additional adjustments to mitigate double taxation in cases... transfer which causes the property to be included in the decedent's gross estate. (b) Method of determining... a general description of the method to be used in determining the Federal estate tax imposed upon...
2010-06-11
CHAPTER 3 RESEARCH METHODOLOGY: Congruence Method; Method of Research/Criteria of Analysis. ...systems of taxation have collapsed and physical infrastructure has been destroyed, trade and any form of business are cut off and a climate of
IMMUNOASSAY METHODS FOR MEASURING ATRAZINE AND 3,5,6-TRICHLORO-2-PYRIDINOL IN FOODS
This chapter describes the use of enzyme-linked immunosorbent assay (ELISA) methods for the analysis of two potential environmental contaminants in food sample media, atrazine and 3,5,6-trichloro-2-pyridinol (3,5,6-TCP). Two different immunoassay formats are employed: a magnetic...
40 CFR 63.1064 - Alternative means of emission limitation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Standards, Chapter 19, Section 3, Part A, Wind Tunnel Test Method for the Measurement of Deck-Fitting Loss... limitation. 63.1064 Section 63.1064 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... as wind, temperature, and barometric pressure. Test methods that can be used to perform the testing...
40 CFR 63.1064 - Alternative means of emission limitation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Standards, Chapter 19, Section 3, Part A, Wind Tunnel Test Method for the Measurement of Deck-Fitting Loss... limitation. 63.1064 Section 63.1064 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... as wind, temperature, and barometric pressure. Test methods that can be used to perform the testing...
40 CFR 63.1064 - Alternative means of emission limitation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Standards, Chapter 19, Section 3, Part A, Wind Tunnel Test Method for the Measurement of Deck-Fitting Loss... limitation. 63.1064 Section 63.1064 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... as wind, temperature, and barometric pressure. Test methods that can be used to perform the testing...
40 CFR 63.1064 - Alternative means of emission limitation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Standards, Chapter 19, Section 3, Part A, Wind Tunnel Test Method for the Measurement of Deck-Fitting Loss... limitation. 63.1064 Section 63.1064 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... as wind, temperature, and barometric pressure. Test methods that can be used to perform the testing...
40 CFR 63.1064 - Alternative means of emission limitation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Standards, Chapter 19, Section 3, Part A, Wind Tunnel Test Method for the Measurement of Deck-Fitting Loss... limitation. 63.1064 Section 63.1064 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... as wind, temperature, and barometric pressure. Test methods that can be used to perform the testing...