Sample records for step size limit

  1. Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples

    NASA Astrophysics Data System (ADS)

    Petit, Johan; Lallemant, Lucile

    2017-05-01

    In transparent ceramics processing, the green-body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the water concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling samples for industrial applications leads to an increasing cracking probability. Thanks to the optimization of the drying step, large-size spinel samples were obtained.

  2. A Variable Step-Size Proportionate Affine Projection Algorithm for Identification of Sparse Impulse Response

    NASA Astrophysics Data System (ADS)

    Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong

    2009-12-01

    Proportionate adaptive algorithms have been proposed recently to accelerate convergence in the identification of sparse impulse responses. When the excitation signal is colored, as speech in particular is, proportionate NLMS algorithms converge slowly. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more information from the input signals. However, its steady-state performance is limited by the constant step-size parameter. In this article we propose a variable step-size PAPA that cancels the a posteriori estimation error. It achieves high convergence speed by using a large step size when the identification error is large, and then considerably decreases the steady-state misalignment by using a small step size after the adaptive filter has converged. Simulation results show that the proposed approach can greatly improve the steady-state misalignment without sacrificing the fast convergence of PAPA.
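
To make the step-size trade-off concrete, here is a minimal sketch of an error-driven variable step-size filter. It is a simplified normalized-LMS illustration, not the authors' proportionate APA; the filter length, signals, and step-size rule are all hypothetical.

```python
import numpy as np

# Illustrative sketch only (a simplified variable step-size NLMS filter,
# not the authors' PAPA): the step size mu is driven by the instantaneous
# error magnitude, echoing the large-step/fast-convergence versus
# small-step/low-misalignment trade-off described in the abstract.
def vss_nlms(x, d, taps, mu_max=1.0, mu_min=0.01, eps=1e-8):
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]              # input regressor, newest first
        e = d[n] - w @ u                             # a priori estimation error
        mu = float(np.clip(abs(e), mu_min, mu_max))  # crude error-driven step size
        w += mu * e * u / (u @ u + eps)              # normalized LMS update
    return w

rng = np.random.default_rng(0)
h_true = np.array([0.5, 0.0, -0.3, 0.0, 0.2])  # sparse impulse response (hypothetical)
x = rng.standard_normal(4000)
d = np.convolve(x, h_true)[:len(x)]            # noise-free desired signal
w_hat = vss_nlms(x, d, taps=5)
```

With a noise-free desired signal, the large early steps give fast convergence and the shrinking step then drives the misalignment toward zero.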

  3. Analysis of the track- and dose-averaged LET and LET spectra in proton therapy using the geant4 Monte Carlo code

    PubMed Central

    Guan, Fada; Peeler, Christopher; Bronk, Lawrence; Geng, Changran; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Grosshans, David; Mohan, Radhe; Titt, Uwe

    2015-01-01

    Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the geant4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from geant4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LETt and dose-averaged LET, LETd) using geant4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LETt and LETd of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LETt but significant for LETd. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in geant4 can result in incorrect LETd calculation results in the dose plateau region for small step limits. The erroneous LETd results can be attributed to the algorithm to determine fluctuations in energy deposition along the tracking step in geant4. The incorrect LETd values lead to substantial differences in the calculated RBE. Conclusions: When the geant4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LETt in the dose plateau region and LETd around the Bragg peak. For a large step limit, i.e., 500 μm, LETd is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LETd and LETt becomes positive. PMID:26520716
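
The distinction between the two averages can be stated compactly. A minimal sketch, assuming hypothetical per-step energy deposits de over step lengths dl taken from a single simulated track:

```python
import numpy as np

# Minimal sketch of the two LET averages compared in the study, assuming
# hypothetical per-step energy deposits de (keV) over step lengths dl (um).
# The track average weights each step's LET by path length; the dose
# average weights it by the energy deposited in that step, so rare
# high-LET steps pull LETd up while barely moving LETt.
def average_lets(de, dl):
    let = de / dl                              # per-step LET, keV/um
    let_t = np.sum(let * dl) / np.sum(dl)      # track-averaged LET
    let_d = np.sum(let * de) / np.sum(de)      # dose-averaged LET
    return let_t, let_d

de = np.array([1.0, 2.0, 10.0])   # keV per step (hypothetical)
dl = np.array([1.0, 1.0, 1.0])    # um per step (hypothetical)
let_t, let_d = average_lets(de, dl)
```

By construction LETd ≥ LETt for any track, which is why the dose average is the more sensitive of the two to how energy-deposition fluctuations are sampled per step.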

  4. Framework for Creating a Smart Growth Economic Development Strategy

    EPA Pesticide Factsheets

    This step-by-step guide can help small and mid-sized cities, particularly those that have limited population growth, areas of disinvestment, and/or a struggling economy, build a place-based economic development strategy.

  5. Analysis of the track- and dose-averaged LET and LET spectra in proton therapy using the GEANT4 Monte Carlo code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Fada; Peeler, Christopher; Taleei, Reza

    Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from GEANT4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LETt and dose-averaged LET, LETd) using GEANT4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LETt and LETd of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LETt but significant for LETd. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT4 can result in incorrect LETd calculation results in the dose plateau region for small step limits. The erroneous LETd results can be attributed to the algorithm to determine fluctuations in energy deposition along the tracking step in GEANT4. The incorrect LETd values lead to substantial differences in the calculated RBE. Conclusions: When the GEANT4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LETt in the dose plateau region and LETd around the Bragg peak. For a large step limit, i.e., 500 μm, LETd is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LETd and LETt becomes positive.

  6. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    DOE PAGES

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...

    2016-08-09

    Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.
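
The CFL restriction mentioned above can be illustrated in a few lines; the wave speed, grid spacing, and the factor of 50 are purely illustrative, not values from the paper.

```python
# Sketch of the CFL restriction that an unconditionally stable solver
# removes: for an explicit update of the 1D wave equation, stability
# requires a Courant number c * dt / dx <= 1.
c = 3.0e8                 # wave speed, m/s (vacuum light speed)
dx = 1.0e-3               # spatial step, m
dt_cfl = dx / c           # largest stable explicit time step, s

# An implicit, unconditionally stable field solver may instead run with a
# step chosen for accuracy rather than stability, e.g. 50x larger
# (the factor 50 is purely illustrative).
dt_implicit = 50 * dt_cfl
```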

  7. Detection limits for nanoparticles in solution with classical turbidity spectra

    NASA Astrophysics Data System (ADS)

    Le Blevennec, G.

    2013-09-01

    Detection of nanoparticles in solution is required to manage safety and environmental problems. The spectral transmission turbidity method has been known for a long time. It is derived from Mie theory and can be applied to any number of spheres, randomly distributed and separated by large distances compared to the wavelength. Here, we describe a method for determining the size distribution and concentration of nanoparticles in solution using UV-Vis transmission measurements. The method combines Mie and Beer-Lambert computations integrated in a best-fit approximation. In a first step, the approach is validated on a silver nanoparticle solution. The results are verified with transmission electron microscopy measurements for the size distribution and inductively coupled plasma mass spectrometry for the concentration. In view of the good agreement obtained, a second step of the work focuses on how to choose the concentration that yields the most accurate size distribution; these conditions are determined by simple computation. As we are dealing with nanoparticles, one of the key points is to know what size limits are reachable with this kind of approach based on classical electromagnetism. Taking into account the accuracy limit of the transmission spectrometer, we determine, for several types of materials (metals, dielectrics, semiconductors), the particle size limit detectable by such a turbidity method. These surprising results sit at the frontier of quantum physics.
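
The Beer-Lambert step of such a fit is straightforward to sketch, assuming the wavelength-dependent Mie extinction cross sections are already available (computing them from Mie theory is omitted here); all numbers are hypothetical.

```python
import numpy as np

# Sketch of the Beer-Lambert part of the described fit: transmission
# through a path of length L in a suspension of number density n, given
# a per-particle extinction cross section sigma_ext at each wavelength.
def transmission(sigma_ext_cm2, number_density_cm3, path_cm):
    tau = sigma_ext_cm2 * number_density_cm3 * path_cm  # turbidity (optical depth)
    return np.exp(-tau)

sigma = np.array([1.0e-11, 5.0e-12])   # cm^2 at two wavelengths (hypothetical)
T = transmission(sigma, number_density_cm3=1.0e10, path_cm=1.0)
# The wavelength with the larger extinction cross section transmits less,
# which is the spectral signature the best-fit procedure exploits.
```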

  8. OGS#PETSc approach for robust and efficient simulations of strongly coupled hydrothermal processes in EGS reservoirs

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf

    2016-04-01

    A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature-dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned solution (or operator-splitting solution) may not work robustly for such strongly coupled problems, its applicability being limited to small time step sizes (e.g. 5-10 days), whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as for MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck, in northern Germany. Results show that the exact Newton-Raphson approach can also be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The use of a line search technique and modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.

  9. Surfactant-controlled polymerization of semiconductor clusters to quantum dots through competing step-growth and living chain-growth mechanisms.

    PubMed

    Evans, Christopher M; Love, Alyssa M; Weiss, Emily A

    2012-10-17

    This article reports control of the competition between step-growth and living chain-growth polymerization mechanisms in the formation of cadmium chalcogenide colloidal quantum dots (QDs) from CdSe(S) clusters by varying the concentration of anionic surfactant in the synthetic reaction mixture. The growth of the particles proceeds by step-addition from initially nucleated clusters in the absence of excess phosphinic or carboxylic acids, which adsorb as their anionic conjugate bases, and proceeds indirectly by dissolution of clusters and subsequent chain-addition of monomers to stable clusters (Ostwald ripening) in the presence of excess phosphinic or carboxylic acid. Fusion of clusters by step-growth polymerization is an explanation for the consistent observation of so-called "magic-sized" clusters in QD growth reactions. Living chain-addition (chain addition with no explicit termination step) produces QDs over a larger range of sizes with better size dispersity than step-addition. Tuning the molar ratio of surfactant to Se²⁻(S²⁻), the limiting ionic reagent, within the living chain-addition polymerization allows for stoichiometric control of QD radius without relying on reaction time.

  10. Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.

    PubMed

    Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng

    2013-01-01

    Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m, n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-size fast gradient descent (MFGD) method has been proposed for optimizing NMF; it accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory, and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. Preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence-based GNMF on two popular face image datasets, ORL and PIE, and two text corpora, Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with representative GNMF solvers.
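
The MUR baseline that the paper accelerates can be sketched for plain Euclidean NMF (the graph-regularization terms of GNMF are omitted for brevity; sizes and data are illustrative).

```python
import numpy as np

# Sketch of the multiplicative update rule (MUR) baseline for Euclidean
# NMF. Each update moves along the rescaled negative gradient with an
# implicit, non-optimal step size, which is why MUR converges slowly and
# why step-size search (MFGD / L-FGD) helps.
def nmf_mur(X, r, iters=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H, keeps nonnegativity
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W, keeps nonnegativity
    return W, H

X = np.random.default_rng(1).random((20, 15))
W, H = nmf_mur(X, r=5)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The multiplicative form guarantees the factors stay elementwise nonnegative without any projection step, at the cost of the slow, fixed effective step size discussed above.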

  11. Unstable vicinal crystal growth from cellular automata

    NASA Astrophysics Data System (ADS)

    Krasteva, A.; Popova, H.; KrzyŻewski, F.; Załuska-Kotur, M.; Tonchev, V.

    2016-03-01

    In order to study unstable step motion on vicinal crystal surfaces we devise vicinal cellular automata. Each cell in the colony has a value equal to its height in the vicinal surface; initially the steps are regularly distributed. Another array keeps the adatoms, initially distributed randomly over the surface. The growth rule defines that each adatom at a right-nearest-neighbor position to a (multi-)step attaches to it. The whole colony is updated at once, and then time increases. This execution of the growth rule is followed by compensation of the consumed particles and by diffusional update(s) of the adatom population. Two principal sources of instability are employed: biased diffusion and an infinite inverse Ehrlich-Schwoebel barrier (iiSE). Since these factors are not opposed by step-step repulsion, the formation of multi-steps is observed, but in general the step bunches preserve a finite width. We monitor the developing surface patterns and quantify the observations by scaling laws, with focus on the eventual transition from a diffusion-limited to a kinetics-limited phenomenon. The time-scaling exponent of the bunch size N is 1/2 for the case of biased diffusion and 1/3 for the case of the iiSE. Additional distinction is possible based on the time-scaling exponents of the multi-step sizes Nmulti: these are 0.36-0.4 (for biased diffusion) and 1/4 (iiSE).

  12. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet(α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  13. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.

  14. Monte-Carlo simulation of a stochastic differential equation

    NASA Astrophysics Data System (ADS)

    Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG

    2017-12-01

    For solving higher-dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered to be more effective than other algorithms, such as the finite element method or the finite difference method. The inhomogeneity of the diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, methods of higher order have been put forward to allow MC codes to use large step sizes. The main focus of this work is to look for operators that can produce converging results for large step sizes. As a first step, our comparative analysis is applied to a general stochastic problem. Subsequently, our formulation is applied to the problem of pitch-angle scattering resulting from Coulomb collisions of charged particles in toroidal devices.
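
As a point of comparison, the baseline first-order MC scheme (Euler-Maruyama) can be sketched on a scalar test problem; the Ornstein-Uhlenbeck coefficients below are illustrative, not the paper's pitch-angle operator.

```python
import numpy as np

# Sketch of the baseline first-order MC scheme (Euler-Maruyama) for an
# SDE dX = a(X) dt + b(X) dW. Higher-order operators of the kind the
# paper compares are what permit accurate results at larger step sizes.
def euler_maruyama(a, b, x0, dt, n_steps, rng):
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Wiener increment over dt
        x = x + a(x) * dt + b(x) * dw
    return x

# Ornstein-Uhlenbeck test problem: dX = -X dt + 0.5 dW, X(0) = 1.
rng = np.random.default_rng(0)
finals = np.array([
    euler_maruyama(lambda x: -x, lambda x: 0.5, 1.0, 0.01, 500, rng)
    for _ in range(2000)
])
# The sample mean decays toward exp(-5) ~ 0.007; the stationary variance
# of this process is b^2 / 2 = 0.125.
```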

  15. Numerical algorithms for scatter-to-attenuation reconstruction in PET: empirical comparison of convergence, acceleration, and the effect of subsets.

    PubMed

    Berker, Yannick; Karp, Joel S; Schulz, Volkmar

    2017-09-01

    The use of scattered coincidences for attenuation correction of positron emission tomography (PET) data has recently been proposed. For practical applications, convergence speeds require further improvement, yet there exists a trade-off between convergence speed and the risk of non-convergence. In this respect, a maximum-likelihood gradient-ascent (MLGA) algorithm and a previously proposed two-branch back-projection (2BP) were evaluated. MLGA was combined with the Armijo step-size rule and accelerated using conjugate gradients, Nesterov's momentum method, and data subsets of different sizes. In 2BP, we varied the subset size, an important determinant of convergence speed and computational burden. We used three sets of simulation data to evaluate the impact of a spatial scale factor. The Armijo step-size rule allowed 10-fold larger step sizes compared to native MLGA. Conjugate gradients and Nesterov momentum led to slightly faster, yet non-uniform, convergence; improvements were mostly confined to later iterations, possibly due to the non-linearity of the problem. MLGA with data subsets achieved faster, uniform, and predictable convergence, with a speed-up factor equivalent to the number of subsets and no increase in computational burden. By contrast, the computational burden of 2BP increased linearly with the number of subsets due to repeated evaluation of the objective function, and convergence was limited to the case of many (and therefore small) subsets, which resulted in a high computational burden. The possibilities for improving 2BP appear limited. While general-purpose acceleration methods appear insufficient for MLGA, the results suggest that data subsets are a promising way of improving MLGA performance.
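
The Armijo rule referred to above can be sketched for a generic gradient-ascent step (the toy objective below is illustrative, not the PET likelihood).

```python
import numpy as np

# Sketch of the Armijo (backtracking) step-size rule for gradient ascent:
# start from a trial step t0 and halve it until the actual objective gain
# exceeds a fraction sigma of what the linear model predicts.
def armijo_ascent_step(f, grad, x, t0=1.0, beta=0.5, sigma=1e-4):
    g = grad(x)
    t = t0
    while f(x + t * g) < f(x) + sigma * t * (g @ g):
        t *= beta          # backtrack: step too large for sufficient increase
    return x + t * g, t

f = lambda x: -(x @ x)     # concave toy objective with maximum at the origin
grad = lambda x: -2 * x
x = np.array([3.0, -4.0])
x_new, t = armijo_ascent_step(f, grad, x)
# The accepted step is guaranteed to increase f.
```

The sufficient-increase test is what buys robustness: it lets the solver probe much larger trial steps than a fixed step size would allow while still rejecting any step that fails to pay off.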

  16. Stability analysis of implicit time discretizations for the Compton-scattering Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.

    2009-09-01

    The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
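
The stability contrast underlying such an analysis can be demonstrated on the scalar decay equation y' = -λy, for which forward (explicit) Euler is stable only when λΔt ≤ 2 while backward (implicit) Euler is stable for any Δt; the coefficients below are illustrative.

```python
# Sketch of explicit vs. implicit stability on y' = -lam * y:
# forward Euler multiplies y by (1 - lam*dt) each step and blows up when
# |1 - lam*dt| > 1; backward Euler divides by (1 + lam*dt) and stays
# bounded for any positive step size.
def forward_euler(lam, dt, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y * (1.0 - lam * dt)
    return y

def backward_euler(lam, dt, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y / (1.0 + lam * dt)
    return y

lam, dt = 10.0, 0.3            # lam*dt = 3 > 2: the explicit scheme is unstable
y_exp = forward_euler(lam, dt, 20)
y_imp = backward_euler(lam, dt, 20)
```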

  17. Individual-based modelling of population growth and diffusion in discrete time.

    PubMed

    Tkachenko, Natalie; Weissmann, John D; Petersen, Wesley P; Lake, George; Zollikofer, Christoph P E; Callegari, Simone

    2017-01-01

    Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model predicts that the population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss how the agreement between model-based estimates of first-arrival dates and archaeological dates depends on the IBM parameter settings.
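
One discrete-time IBM update with binomial fluctuations can be sketched as follows; the birth/death probabilities form a logistic model, but the parameter names and values are illustrative, not the authors' exact scheme.

```python
import numpy as np

# Sketch of a single discrete-time logistic IBM update with binomial
# fluctuations: each of n individuals independently gives birth with
# probability b_dt and dies with a density-dependent probability
# b_dt * n / K, so births balance deaths near the carrying capacity K.
def logistic_ibm_step(n, b_dt, K, rng):
    births = rng.binomial(n, min(b_dt, 1.0))
    deaths = rng.binomial(n, min(b_dt * n / K, 1.0))
    return n + births - deaths

rng = np.random.default_rng(0)
n = 10
for _ in range(200):
    n = logistic_ibm_step(n, b_dt=0.1, K=500, rng=rng)
# For small b_dt the population grows toward K and then fluctuates
# around it, with binomial noise that matters most at small n.
```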

  18. Criteria for software modularization

    NASA Technical Reports Server (NTRS)

    Card, David N.; Page, Gerald T.; Mcgarry, Frank E.

    1985-01-01

    A central issue in programming practice involves determining the appropriate size and information content of a software module. This study attempted to determine the effectiveness of two widely used criteria for software modularization, strength and size, in reducing fault rate and development cost. Data from 453 FORTRAN modules developed by professional programmers were analyzed. The results indicated that module strength is a good criterion with respect to fault rate, whereas arbitrary module size limitations inhibit programmer productivity. This analysis is a first step toward defining empirically based standards for software modularization.

  19. Avoiding Stair-Step Artifacts in Image Registration for GOES-R Navigation and Registration Assessment

    NASA Technical Reports Server (NTRS)

    Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John

    2016-01-01

    In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact that limits the accuracy of subpixel image offset estimation using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects offset values at which the image pixel boundaries are nearly aligned. Because of the shape of the curve relating input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground-truth maps. To create the ground-truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite and then scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size, shift estimation is computationally efficient, but the stair-step artifact is present. If the map pixel is very small, the stair-step artifact is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps used in registering GOES-R ABI images.
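
The kind of correlation-based subpixel estimator in which such a bias can appear is sketched below in one dimension, using parabolic interpolation of the correlation peak; the signal and shift are illustrative.

```python
import numpy as np

# Sketch of 1D subpixel offset estimation by FFT cross-correlation plus
# parabolic interpolation of the peak. When both signals share one sample
# grid, estimators of this kind can be biased toward integer offsets,
# which is the stair-step behavior described in the abstract.
def subpixel_shift_1d(a, b):
    n = len(a)
    corr = np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))
    k = int(np.argmax(corr))                       # integer peak location
    c0, c1, c2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    frac = 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)    # parabola vertex offset
    shift = k + frac
    return shift if shift <= n / 2 else shift - n  # wrap to signed shift

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
a = np.sin(x) + 0.5 * np.sin(3 * x)
b = np.roll(a, -3)                 # a and b differ by a 3-sample circular shift
est = subpixel_shift_1d(a, b)
```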

  20. Size dependence of the propulsion velocity for catalytic Janus-sphere swimmers.

    PubMed

    Ebbens, Stephen; Tu, Mei-Hsien; Howse, Jonathan R; Golestanian, Ramin

    2012-02-01

    The propulsion velocity of active colloids that asymmetrically catalyze a chemical reaction is probed experimentally as a function of their sizes. It is found that over the experimentally accessible range, the velocity decays as a function of size, with a rate that is compatible with an inverse size dependence. A diffusion-reaction model for the concentrations of the fuel and waste molecules that takes into account a two-step process for the asymmetric catalytic activity on the surface of the colloid is shown to predict a similar behavior for colloids at the large size limit, with a saturation for smaller sizes. © 2012 American Physical Society

  1. On the stability of projection methods for the incompressible Navier-Stokes equations based on high-order discontinuous Galerkin discretizations

    NASA Astrophysics Data System (ADS)

    Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin

    2017-12-01

    The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.

  2. A variable-step-size robust delta modulator.

    NASA Technical Reports Server (NTRS)

    Song, C. L.; Garodnick, J.; Schilling, D. L.

    1971-01-01

    Description of an analytically obtained optimum adaptive delta modulator-demodulator configuration. The device utilizes two past samples to obtain a step size which minimizes the mean square error for a Markov-Gaussian source. The optimum system is compared, using computer simulations, with a linear delta modulator and an enhanced Abate delta modulator. In addition, the performance is compared to the rate distortion bound for a Markov source. It is shown that the optimum delta modulator is neither quantization nor slope-overload limited. The highly nonlinear equations obtained for the optimum transmitter and receiver are approximated by piecewise-linear equations in order to obtain system equations which can be transformed into hardware. The derivation of the experimental system is presented.
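
    The adaptation idea described above can be sketched in a few lines. This is a minimal variable-step-size delta modulator with an Abate-style rule, NOT the authors' analytically optimum two-past-sample configuration; the parameters `k`, `d_min`, and `d_max` are illustrative assumptions.

```python
import numpy as np

# Minimal variable-step-size delta modulator sketch: the step expands
# on consecutive equal output bits (slope overload) and contracts when
# bits alternate (granular noise). Illustrative parameters only.
def adaptive_delta_modulate(x, d_min=0.01, d_max=1.0, k=1.5):
    bits = np.zeros(len(x), dtype=int)
    recon = np.zeros(len(x))
    est, step, prev_bit = 0.0, d_min, 1
    for n, sample in enumerate(x):
        bit = 1 if sample >= est else -1           # 1-bit quantizer
        # adapt step: expand on repeated bits, contract otherwise
        step = min(step * k, d_max) if bit == prev_bit else max(step / k, d_min)
        est += bit * step                          # decoder-tracked estimate
        bits[n], recon[n], prev_bit = bit, est, bit
    return bits, recon

t = np.linspace(0.0, 1.0, 400)
x = np.sin(2 * np.pi * 3 * t)                      # slowly varying test signal
bits, recon = adaptive_delta_modulate(x)
mse = float(np.mean((x - recon) ** 2))
```

    Because the step tracks the local slope, the reconstruction error stays bounded by roughly the current step size, which is the behavior the optimum design trades off against slope overload.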

  3. Step-scan T cell-based differential Fourier transform infrared photoacoustic spectroscopy (DFTIR-PAS) for detection of ambient air contaminants

    NASA Astrophysics Data System (ADS)

    Liu, Lixian; Mandelis, Andreas; Huan, Huiting; Melnikov, Alexander

    2016-10-01

    A step-scan differential Fourier transform infrared photoacoustic spectroscopy (DFTIR-PAS) system based on a commercial FTIR spectrometer was developed theoretically and experimentally for air contaminant monitoring. The configuration comprises two identical, small-size and low-resonance-frequency T cells satisfying the conflicting requirements of low chopping frequency and limited space in the sample compartment. Carbon dioxide (CO2) IR absorption spectra were used to demonstrate the capability of the DFTIR-PAS method to detect ambient pollutants. A linear amplitude response to CO2 concentrations from 100 to 10,000 ppmv was observed, leading to a theoretical detection limit of 2 ppmv. The differential mode was able to suppress the coherent noise, giving the DFTIR-PAS method a better signal-to-noise ratio and a lower theoretical detection limit than the single mode. The results indicate that it is possible to use step-scan DFTIR-PAS with T cells as a quantitative method for high-sensitivity analysis of ambient contaminants.
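
    The reported detection limit follows from the linear calibration. The sketch below uses the standard 3-sigma criterion, LOD = 3·σ_noise/slope; the calibration slope and noise level are synthetic values chosen to land near the paper's ~2 ppmv figure, not the actual measurements.

```python
import numpy as np

# 3-sigma detection-limit estimate from a linear photoacoustic
# calibration. Data are synthetic illustrations, not the paper's.
conc = np.array([100.0, 500.0, 1000.0, 5000.0, 10000.0])  # CO2, ppmv
signal = 2e-3 * conc                # linear PA amplitude (a.u.), assumed
sigma_noise = 1.3e-3                # assumed baseline noise (a.u.)

slope = np.polyfit(conc, signal, 1)[0]   # calibration sensitivity
lod_ppmv = 3.0 * sigma_noise / slope     # approx. 2 ppmv
```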

  4. Exciton size and binding energy limitations in one-dimensional organic materials.

    PubMed

    Kraner, S; Scholz, R; Plasser, F; Koerner, C; Leo, K

    2015-12-28

    In current organic photovoltaic devices, the loss in energy caused by the charge transfer step necessary for exciton dissociation leads to a low open circuit voltage, being one of the main reasons for rather low power conversion efficiencies. A possible approach to avoid these losses is to tune the exciton binding energy to a value of the order of thermal energy, which would lead to free charges upon absorption of a photon, and therefore increase the power conversion efficiency towards the Shockley-Queisser limit. We determine the size of the excitons for different organic molecules and polymers by time dependent density functional theory calculations. For optically relevant transitions, the exciton size saturates around 0.7 nm for one-dimensional molecules with a size longer than about 4 nm. For the ladder-type polymer poly(benzimidazobenzophenanthroline), we obtain an exciton binding energy of about 0.3 eV, serving as a lower limit of the exciton binding energy for the organic materials investigated. Furthermore, we show that charge transfer transitions increase the exciton size and thus identify possible routes towards a further decrease of the exciton binding energy.

  5. Exploring the detection limits of infrared near-field microscopy regarding small buried structures and pushing them by exploiting superlens-related effects.

    PubMed

    Jung, Lena; Hauer, Benedikt; Li, Peining; Bornhöfft, Manuel; Mayer, Joachim; Taubner, Thomas

    2016-03-07

    We present a study on subsurface imaging with an infrared scattering-type scanning near-field optical microscope (s-SNOM). The depth limitation for the visibility of gold nanoparticles with a diameter of 50 nm under Si3N4 is determined to be about 50 nm. We first investigate the particle-size dependence of spot size and signal strength for a dielectric cover layer with positive permittivity. The experimental results are confirmed by model calculations and a comparison to TEM images. In the next step, we also investigate spectroscopically the regime of negative permittivity of the capping layer and its influence on lateral resolution and signal strength in experiment and simulations. The explanation of this observation combines subsurface imaging and superlensing, and reveals limitations of the latter regarding small structure sizes.

  6. Effect of immunomagnetic bead size on recovery of foodborne pathogenic bacteria

    USDA-ARS?s Scientific Manuscript database

    Long culture enrichment is currently a speed-limiting step in both traditional and rapid detection techniques for foodborne pathogens. Immunomagnetic separation (IMS) as a culture-free enrichment sample preparation technique has gained increasing popularity in the development of rapid detection met...

  7. Modelling uveal melanoma

    PubMed Central

    Foss, A.; Cree, I.; Dolin, P.; Hungerford, J.

    1999-01-01

    BACKGROUND/AIM—There has been no consistent pattern reported on how mortality for uveal melanoma varies with age. This information can be useful to model the complexity of the disease. The authors have examined ocular cancer trends, as an indirect measure for uveal melanoma mortality, to see how rates vary with age and to compare the results with their other studies on predicting metastatic disease.
METHODS—Age-specific mortality was examined for England and Wales, the USA, and Canada. A log-log model was fitted to the data. The slopes of the log-log plots were used as a measure of disease complexity and compared with the results of previous work on predicting metastatic disease.
RESULTS—The log-log model provided a good fit for the US and Canadian data, but the observed rates deviated for England and Wales among people over the age of 65 years. The log-log model for mortality data suggests that the underlying process depends upon four rate limiting steps, while a similar model for the incidence data suggests between three and four rate limiting steps. Further analysis of previous data on predicting metastatic disease on the basis of tumour size and blood vessel density would indicate a single rate limiting step between developing the primary tumour and developing metastatic disease.
CONCLUSIONS—There is significant underreporting or underdiagnosis of ocular melanoma for England and Wales in those over the age of 65 years. In those under the age of 65, a model is presented for ocular melanoma oncogenesis requiring three rate limiting steps to develop the primary tumour and a fourth rate limiting step to develop metastatic disease. The three steps in the generation of the primary tumour involve two key processes—namely, growth and angiogenesis within the primary tumour. The step from development of the primary to development of metastatic disease is likely to involve a single rate limiting process.

 PMID:10216060
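
    The slope-to-steps reasoning above can be made concrete. Under the Armitage-Doll multistage interpretation, a rate proportional to age^(k-1) for k rate-limiting steps gives a log-log line of slope k-1, so a fitted slope near 3 suggests four steps. The rates below are synthetic, not the registry data analyzed in the paper.

```python
import numpy as np

# Synthetic age-specific rates following rate = c * age**(k - 1);
# the log-log slope then recovers k - 1, i.e. the number of
# rate-limiting steps minus one. Illustrative data only.
ages = np.array([35.0, 45.0, 55.0, 65.0, 75.0])
k_true = 4
rates = 1e-9 * ages ** (k_true - 1)

slope = np.polyfit(np.log(ages), np.log(rates), 1)[0]
n_steps = int(round(slope)) + 1     # inferred rate-limiting steps
```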

  8. How many steps/day are enough? For older adults and special populations

    PubMed Central

    2011-01-01

    Older adults and special populations (living with disability and/or chronic illness that may limit mobility and/or physical endurance) can benefit from practicing a more physically active lifestyle, typically by increasing ambulatory activity. Step counting devices (accelerometers and pedometers) offer an opportunity to monitor daily ambulatory activity; however, an appropriate translation of public health guidelines in terms of steps/day is unknown. Therefore this review was conducted to translate public health recommendations in terms of steps/day. Normative data indicates that 1) healthy older adults average 2,000-9,000 steps/day, and 2) special populations average 1,200-8,800 steps/day. Pedometer-based interventions in older adults and special populations elicit a weighted increase of approximately 775 steps/day (or an effect size of 0.26) and 2,215 steps/day (or an effect size of 0.67), respectively. There is no evidence to inform a moderate intensity cadence (i.e., steps/minute) in older adults at this time. However, using the adult cadence of 100 steps/minute to demark the lower end of an absolutely-defined moderate intensity (i.e., 3 METs), and multiplying this by 30 minutes produces a reasonable heuristic (i.e., guiding) value of 3,000 steps. However, this cadence may be unattainable in some frail/diseased populations. Regardless, to truly translate public health guidelines, these steps should be taken over and above activities performed in the course of daily living, be of at least moderate intensity accumulated in minimally 10 minute bouts, and add up to at least 150 minutes over the week. Considering a daily background of 5,000 steps/day (which may actually be too high for some older adults and/or special populations), a computed translation approximates 8,000 steps on days that include a target of achieving 30 minutes of moderate-to-vigorous physical activity (MVPA), and approximately 7,100 steps/day if averaged over a week. 
Measured directly and including these background activities, the evidence suggests that 30 minutes of daily MVPA accumulated in addition to habitual daily activities in healthy older adults is equivalent to taking approximately 7,000-10,000 steps/day. Those living with disability and/or chronic illness (that limits mobility and or/physical endurance) display lower levels of background daily activity, and this will affect whole-day estimates of recommended physical activity. PMID:21798044
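
    The review's heuristic arithmetic can be reproduced step by step; the 5,000 steps/day background is the review's own assumed value.

```python
# Translating public health guidelines into steps/day, following the
# review's heuristic. The background value is the review's assumption.
cadence = 100            # steps/minute at the floor of moderate intensity
bout_minutes = 30        # daily MVPA target
background = 5000        # assumed habitual steps/day

mvpa_steps = cadence * bout_minutes        # 3,000 steps in 30 minutes
active_day = background + mvpa_steps       # about 8,000 steps on MVPA days

# 150 weekly MVPA minutes = five 30-minute days; average over all 7 days
weekly_avg = (5 * active_day + 2 * background) / 7   # about 7,100 steps/day
```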

  9. Dynamic Scaling and Island Growth Kinetics in Pulsed Laser Deposition of SrTiO3

    DOE PAGES

    Eres, Gyula; Tischler, J. Z.; Rouleau, C. M.; ...

    2016-11-11

    We use real-time diffuse surface x-ray diffraction to probe the evolution of island size distributions and its effects on surface smoothing in pulsed laser deposition (PLD) of SrTiO3. In this study, we show that the island size evolution obeys dynamic scaling and two distinct regimes of island growth kinetics. Our data show that PLD film growth can persist without roughening despite thermally driven Ostwald ripening, the main mechanism for surface smoothing, being shut down. The absence of roughening is concomitant with decreasing island density, contradicting the prevailing view that increasing island density is the key to surface smoothing in PLD. We also report a previously unobserved crossover from diffusion-limited to attachment-limited island growth that reveals the influence of nonequilibrium atomic level surface transport processes on the growth modes in PLD. We show by direct measurements that attachment-limited island growth is the dominant process in PLD that creates step flowlike behavior or quasistep flow as PLD “self-organizes” local step flow on a length scale consistent with the substrate temperature and PLD parameters.

  11. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
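
    The core IMEX idea can be shown on a toy scalar problem: y' = -λy + cos(t), with the stiff term (standing in for fast acoustic dynamics) treated implicitly and the slow forcing explicitly. This first-order IMEX Euler step is far simpler than the ARK schemes in the paper, but it shows why implicit treatment of the fast term removes the explicit step-size limit; all parameter values are illustrative.

```python
import numpy as np

# First-order IMEX Euler on y' = -lam*y + cos(t), y(0) = 1:
# the stiff linear term is solved implicitly, the slow forcing
# is evaluated explicitly. Toy analogue of the paper's ARK methods.
lam = 1.0e4
dt = 0.01                 # explicit Euler would need dt < 2/lam = 2e-4
t, y = 0.0, 1.0
for _ in range(1000):
    # explicit slow term at t_n, linear solve for the stiff term at t_{n+1}
    y = (y + dt * np.cos(t)) / (1.0 + dt * lam)
    t += dt

# once the fast transient decays, y tracks the slow solution cos(t)/lam
```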

  14. Soft Landing of Bare Nanoparticles with Controlled Size, Composition, and Morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Grant E.; Colby, Robert J.; Laskin, Julia

    2015-01-01

    A kinetically-limited physical synthesis method based on magnetron sputtering and gas aggregation has been coupled with size-selection and ion soft landing to prepare bare metal nanoparticles on surfaces with controlled coverage, size, composition, and morphology. Employing atomic force microscopy (AFM) and scanning electron microscopy (SEM), it is demonstrated that the size and coverage of bare nanoparticles soft landed onto flat glassy carbon and silicon as well as stepped graphite surfaces may be controlled through size-selection with a quadrupole mass filter and the length of deposition, respectively. The bare nanoparticles are observed with AFM to bind randomly to the flat glassy carbon surface when soft landed at relatively low coverage (10^12 ions). In contrast, on stepped graphite surfaces at intermediate coverage (10^13 ions) the soft landed nanoparticles are shown to bind preferentially along step edges forming extended linear chains of particles. At the highest coverage (5 x 10^13 ions) examined in this study the nanoparticles are demonstrated with both AFM and SEM to form a continuous film on flat glassy carbon and silicon surfaces. On a graphite surface with defects, however, it is shown with SEM that the presence of localized surface imperfections results in agglomeration of nanoparticles onto these features and the formation of neighboring depletion zones that are devoid of particles. Employing high resolution scanning transmission electron microscopy in the high angular annular dark field imaging mode (STEM-HAADF) and electron energy loss spectroscopy (EELS) it is demonstrated that the magnetron sputtering/gas aggregation synthesis technique produces single metal particles with controlled morphology as well as bimetallic alloy nanoparticles with clearly defined core-shell structure.
Therefore, this kinetically-limited physical synthesis technique, when combined with ion soft landing, is a versatile complementary method for preparing a wide range of bare supported nanoparticles with selected properties that are free of the solvent, organic capping agents, and residual reactants present with nanoparticles synthesized in solution.

  15. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

    Particle motion under thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The main difficulty in thermophoresis simulation is the computation time, which is proportional to the collision frequency: the time step interval becomes very small when the motion of a large particle is considered. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model that computes the collision between a particle and multiple molecules in a single collision event is considered. The momentum transfer to the particle is computed with a collision weight factor, defined as the number of molecules colliding with the particle in one collision event. This weight factor permits a much larger time step interval, about a million times longer than the conventional DSMC time step when the particle size is 1 μm, so the computation time is reduced by a factor of about one million. We simulate graphite particle motion under thermophoretic force with DSMC-Neutrals (Particle-PLUS neutral module), commercial software based on the DSMC method, using the collision weight factor described above. The particle is a sphere of 1 μm diameter, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit coincides with Waldmann's result.
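
    The weight-factor trick can be illustrated with a toy calculation: instead of resolving every molecule-particle collision individually, one weighted event transfers the momentum of W molecules at once, so roughly W times fewer events (and a correspondingly larger time step) are needed. All numbers below are illustrative assumptions, not values from the paper.

```python
import random

# Toy illustration of the DSMC collision weight factor: compare the
# momentum transferred by many individual collisions with that of a
# few weighted events carrying weight W each. Illustrative values only.
random.seed(7)
M_MOL = 6.6e-26       # argon-like molecular mass, kg (assumed)
W = 100_000           # collision weight factor (assumed)

def momentum_unweighted(n_collisions):
    """Sum individual molecule-to-particle momentum kicks m*v."""
    return sum(M_MOL * random.gauss(400.0, 50.0) for _ in range(n_collisions))

def momentum_weighted(n_collisions):
    """Same total transfer from n_collisions // W weighted events."""
    return sum(W * M_MOL * random.gauss(400.0, 50.0)
               for _ in range(n_collisions // W))

p_exact = momentum_unweighted(200_000)   # 200,000 individual events
p_fast = momentum_weighted(200_000)      # only 2 weighted events
```

    The weighted estimate is noisier per step but statistically consistent, which is the trade-off that buys the million-fold reduction in computation time.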

  17. Kinematic, muscular, and metabolic responses during exoskeletal-, elliptical-, or therapist-assisted stepping in people with incomplete spinal cord injury.

    PubMed

    Hornby, T George; Kinnaird, Catherine R; Holleran, Carey L; Rafferty, Miriam R; Rodriguez, Kelly S; Cain, Julie B

    2012-10-01

    Robotic-assisted locomotor training has demonstrated some efficacy in individuals with neurological injury and is slowly gaining clinical acceptance. Both exoskeletal devices, which control individual joint movements, and elliptical devices, which control endpoint trajectories, have been utilized with specific patient populations and are available commercially. No studies have directly compared training efficacy or patient performance during stepping between devices. The purpose of this study was to evaluate kinematic, electromyographic (EMG), and metabolic responses during elliptical- and exoskeletal-assisted stepping in individuals with incomplete spinal cord injury (SCI) compared with therapist-assisted stepping. A prospective, cross-sectional, repeated-measures design was used. Participants with incomplete SCI (n=11) performed 3 separate bouts of exoskeletal-, elliptical-, or therapist-assisted stepping. Unilateral hip and knee sagittal-plane kinematics, lower-limb EMG recordings, and oxygen consumption were compared across stepping conditions and with control participants (n=10) during treadmill stepping. Exoskeletal stepping kinematics closely approximated normal gait patterns, whereas significantly greater hip and knee flexion postures were observed during elliptical-assisted stepping. Measures of kinematic variability indicated consistent patterns in control participants and during exoskeletal-assisted stepping, whereas therapist- and elliptical-assisted stepping kinematics were more variable. Despite specific differences, EMG patterns generally were similar across stepping conditions in the participants with SCI. In contrast, oxygen consumption was consistently greater during therapist-assisted stepping. Limitations included a small sample size, lack of ability to evaluate kinetics during stepping, unilateral EMG recordings, and sagittal-plane kinematics.
Despite specific differences in kinematics and EMG activity, metabolic activity was similar during stepping in each robotic device. Understanding potential differences and similarities in stepping performance with robotic assistance may be important in delivery of repeated locomotor training using robotic or therapist assistance and for consumers of robotic devices.

  18. Investigation of the oxygen exchange mechanism on Pt|yttria stabilized zirconia at intermediate temperatures: Surface path versus bulk path

    PubMed Central

    Opitz, Alexander K.; Lutz, Alexander; Kubicek, Markus; Kubel, Frank; Hutter, Herbert; Fleig, Jürgen

    2011-01-01

    The oxygen exchange kinetics of platinum on yttria-stabilized zirconia (YSZ) was investigated by means of geometrically well-defined Pt microelectrodes. By variation of electrode size and temperature it was possible to separate two temperature regimes with different geometry dependencies of the polarization resistance. At higher temperatures (550–700 °C) an elementary step located close to the three phase boundary (TPB) with an activation energy of ∼1.6 eV was identified as rate limiting. At lower temperatures (300–400 °C) the rate limiting elementary step is related to the electrode area and exhibited a very low activation energy in the order of 0.2 eV. From these observations two parallel pathways for electrochemical oxygen exchange are concluded. The nature of these two elementary steps is discussed in terms of equivalent circuits. Two combinations of parallel rate limiting reaction steps are found to explain the observed geometry dependencies: (i) Diffusion through an impurity phase at the TPB in parallel to diffusion of oxygen through platinum – most likely along Pt grain boundaries – as area-related process. (ii) Co-limitation of oxygen diffusion along the Pt|YSZ interface and charge transfer at the interface with a short decay length of the corresponding transmission line (as TPB-related process) in parallel to oxygen diffusion through platinum. PMID:22210951

  19. Effect of experimental and sample factors on dehydration kinetics of mildronate dihydrate: mechanism of dehydration and determination of kinetic parameters.

    PubMed

    Bērziņš, Agris; Actiņš, Andris

    2014-06-01

    The dehydration kinetics of mildronate dihydrate [3-(1,1,1-trimethylhydrazin-1-ium-2-yl)propionate dihydrate] was analyzed in isothermal and nonisothermal modes. The particle size, sample preparation and storage, sample weight, nitrogen flow rate, relative humidity, and sample history were varied in order to evaluate the effect of these factors and to more accurately interpret the data obtained from such analysis. It was determined that comparable kinetic parameters can be obtained in both isothermal and nonisothermal modes. However, dehydration activation energy values obtained in nonisothermal mode varied with conversion degree because the rate-limiting step has a different energy at higher temperatures. Moreover, carrying out experiments in this mode required consideration of additional experimental complications. Our study of the effects of the different sample and experimental factors revealed changes in the energy of the dehydration rate-limiting step and variable contributions from different rate-limiting steps, and clarified the dehydration mechanism. Procedures for convenient and fast determination of dehydration kinetic parameters are offered. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
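
    The standard route from isothermal rate constants to an activation energy is the Arrhenius relation ln k = ln A - Ea/(R·T): the slope of ln k versus 1/T is -Ea/R. The rate constants below are synthetic illustrations, not the mildronate dihydrate measurements.

```python
import numpy as np

# Arrhenius extraction of an activation energy from isothermal rate
# constants. All kinetic values are assumed for illustration.
R = 8.314                        # gas constant, J/(mol*K)
Ea_true = 90.0e3                 # assumed activation energy, J/mol
A = 1.0e10                       # assumed pre-exponential factor, 1/s
T = np.array([330.0, 340.0, 350.0, 360.0])   # isothermal runs, K
k = A * np.exp(-Ea_true / (R * T))

slope = np.polyfit(1.0 / T, np.log(k), 1)[0]  # slope = -Ea/R
Ea_fit = -slope * R              # recovered activation energy, J/mol
```

    A rate-limiting step that changes with temperature shows up as curvature in this plot, which is exactly the conversion-degree variation the authors report for nonisothermal mode.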

  20. Diffractive optics fabricated by direct write methods with an electron beam

    NASA Technical Reports Server (NTRS)

    Kress, Bernard; Zaleta, David; Daschner, Walter; Urquhart, Kris; Stein, Robert; Lee, Sing H.

    1993-01-01

    State-of-the-art diffractive optics are fabricated using e-beam lithography and dry etching techniques to achieve multilevel phase elements with very high diffraction efficiencies. One of the major challenges encountered in fabricating diffractive optics is the small feature size (e.g. for diffractive lenses with small f-number). It is not only the e-beam system which dictates the feature size limitations, but also the alignment systems (mask aligner) and the materials (e-beam and photo resists). In order to allow diffractive optics to be used in new optoelectronic systems, it is necessary not only to fabricate elements with small feature sizes but also to do so in an economical fashion. Since price of a multilevel diffractive optical element is closely related to the e-beam writing time and the number of etching steps, we need to decrease the writing time and etching steps without affecting the quality of the element. To do this one has to utilize the full potentials of the e-beam writing system. In this paper, we will present three diffractive optics fabrication techniques which will reduce the number of process steps, the writing time, and the overall fabrication time for multilevel phase diffractive optics.

  1. Weak-guidance-theory review of dispersion and birefringence management by laser inscription

    NASA Astrophysics Data System (ADS)

    Zheltikov, A. M.; Reid, D. T.

    2008-01-01

    A brief review of laser inscription of micro- and nanophotonic structures in transparent materials is provided in terms of a compact and convenient formalism based on the theory of weak optical waveguides. We derive physically instructive approximate expressions allowing propagation constants of laser-inscribed micro- and nanowaveguides to be calculated as functions of the transverse waveguide size, refractive index step, and dielectric properties of the host material. Based on this analysis, we demonstrate that the dispersion engineering capabilities of laser micromachining techniques are limited by the smallness of the refractive index step typical of laser-inscribed structures. However, laser inscription of waveguides in pre-formed micro- and nanostructures offers a variety of interesting options for fine dispersion and birefringence tuning of small-size waveguides and photonic wires.
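
    A quick scale estimate in the weak-guidance spirit shows why the small index step matters: the normalized frequency V = (2πa/λ)·√(n_co² - n_cl²) shrinks with the index step, and with it the strength of guidance available for dispersion engineering. All values below (core radius, wavelength, index steps) are illustrative assumptions, not figures from the review.

```python
import numpy as np

# Normalized frequency (V-number) for a step-index waveguide in the
# weak-guidance regime. Illustrative parameters only.
def v_number(radius_um, wavelength_um, n_clad, delta_n):
    n_core = n_clad + delta_n
    return (2.0 * np.pi * radius_um / wavelength_um) * np.sqrt(
        n_core**2 - n_clad**2)

# ~1e-3 index step typical of laser inscription; 4 um core at 1.55 um
v_inscribed = v_number(4.0, 1.55, 1.444, 1.0e-3)   # about 0.87: weak guidance
v_fiberlike = v_number(4.0, 1.55, 1.444, 5.0e-3)   # larger step, stronger guidance
```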

  2. Performance analysis and kernel size study of the Lynx real-time operating system

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei; Gibson, James S.; Fernquist, Alan R.

    1993-01-01

    This paper analyzes the Lynx real-time operating system (LynxOS), which has been selected as the operating system for the Space Station Freedom Data Management System (DMS). The features of LynxOS are compared to those of other Unix-based operating systems (OSs). The tools for measuring the performance of LynxOS, which include a high-speed digital timer/counter board, a device driver program, and an application program, are analyzed. The timings for interrupt response, process creation and deletion, threads, semaphores, shared memory, and signals are measured. The memory size of the DMS Embedded Data Processor (EDP) is limited. Moreover, virtual memory is not suitable for real-time applications because page-swap timing may not be deterministic. Therefore, the DMS software, including LynxOS, has to fit in the main memory of an EDP. To reduce the LynxOS kernel size, the following steps are taken: analyzing the factors that influence the kernel size; identifying the modules of LynxOS that may not be needed in an EDP; adjusting the system parameters of LynxOS; reconfiguring the device drivers used in LynxOS; and analyzing the symbol table. The reductions in kernel disk size, kernel memory size, and total kernel size from each of these steps are listed and analyzed.

  3. Revision of the documentation for a model for calculating effects of liquid waste disposal in deep saline aquifers

    USGS Publications Warehouse

    INTERA Environmental Consultants, Inc.

    1979-01-01

    The major limitation of the model arises from the use of a second-order-correct (central-difference) finite-difference approximation in space. To avoid numerical oscillations in the solution, the user must restrict the grid-block and time-step sizes depending upon the magnitude of the dispersivity.
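    The restriction the abstract describes is commonly expressed through the grid Peclet and Courant numbers. A minimal sketch, assuming the widely used criteria Pe = Δx/α ≤ 2 and Cr = vΔt/Δx ≤ 1 for oscillation-free central differencing (the model's exact restriction may differ); all names and values below are illustrative:

    ```python
    def grid_peclet(dx, dispersivity):
        """Grid Peclet number Pe = dx / alpha_L for advection-dispersion
        transport (longitudinal dispersivity alpha_L has units of length)."""
        return dx / dispersivity

    def courant(v, dt, dx):
        """Courant number Cr = v * dt / dx."""
        return v * dt / dx

    def check_step_sizes(dx, dt, v, dispersivity, pe_max=2.0, cr_max=1.0):
        """Return (ok, Pe, Cr): oscillation-free central differencing is
        commonly associated with Pe <= 2 and Cr <= 1."""
        pe = grid_peclet(dx, dispersivity)
        cr = courant(v, dt, dx)
        return (pe <= pe_max and cr <= cr_max), pe, cr

    # Illustrative values: 10 m grid blocks, 1-day time steps,
    # 1 m/day velocity, 10 m longitudinal dispersivity
    ok, pe, cr = check_step_sizes(dx=10.0, dt=1.0, v=1.0, dispersivity=10.0)
    ```

    Halving the dispersivity or tripling the block size pushes Pe past 2, which is exactly the situation where the user must refine the grid or accept oscillations.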

  4. Characterizing 3D grain size distributions from 2D sections in mylonites using a modified version of the Saltykov method

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco; Llana-Fúnez, Sergio

    2016-04-01

    The understanding of creep behaviour in rocks requires knowledge of the 3D grain size distributions (GSD) that result from dynamic recrystallization processes during deformation. The methods to estimate the 3D grain size distribution directly (serial sectioning, synchrotron- or X-ray-based tomography) are expensive, time-consuming and, in most cases and at best, challenging. This means that in practice grain size distributions are mostly derived from 2D sections. Although there are a number of methods in the literature to derive the actual 3D grain size distribution from 2D sections, the most popular in highly deformed rocks is the so-called Saltykov method. It has, however, two major drawbacks: the method assumes no interaction between grains, which does not hold for recrystallized mylonites; and it uses histograms to describe distributions, which limits the quantification of the GSD. The first aim of this contribution is to test whether the interaction between grains in mylonites, i.e. random grain packing, significantly affects the GSDs estimated by the Saltykov method. We test this using the random resampling technique on a large data set (n = 12298). The full data set is built from several parallel thin sections that cut a completely dynamically recrystallized quartz aggregate in a rock sample from a Variscan shear zone in NW Spain. The results show that the Saltykov method is reliable as long as the number of grains is large (n > 1000). Assuming that a lognormal distribution is an optimal approximation for the GSD in a completely dynamically recrystallized rock, we introduce an additional step to the Saltykov method, which allows estimating a continuous probability distribution function of the 3D grain size population. The additional step takes the midpoints of the classes obtained by the Saltykov method and fits a lognormal distribution with a trust region using a non-linear least squares algorithm. The new protocol is named the two-step method. 
The conclusion of this work is that both the Saltykov and the two-step methods are accurate and simple enough to be useful in practice for rocks, alloys or ceramics with near-equant grains and expected lognormal distributions. The Saltykov method is particularly suitable for estimating the volumes of particular grain fractions, while the two-step method is better suited to quantifying the full GSD (mean and standard deviation in log grain size). The two-step method is implemented in a free, open-source and easy-to-handle script (see http://marcoalopez.github.io/GrainSizeTools/).
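The second step of the two-step method can be sketched as a trust-region non-linear least-squares fit of a lognormal density to the Saltykov class midpoints. A minimal illustration with synthetic data (the midpoints, frequencies, and parameter values are invented for the example; the authors' actual implementation lives in the GrainSizeTools script):

```python
import numpy as np
from scipy.optimize import curve_fit

def lognorm_pdf(x, mu, sigma):
    """Lognormal density in linear grain size x (mu, sigma in log units)."""
    return np.exp(-(np.log(x) - mu) ** 2 / (2.0 * sigma ** 2)) / (
        x * sigma * np.sqrt(2.0 * np.pi))

# Synthetic class midpoints/frequencies standing in for the output of the
# Saltykov unfolding step (invented values, not data from the study).
rng = np.random.default_rng(0)
midpoints = np.linspace(5.0, 60.0, 12)                  # grain size, microns
freqs = lognorm_pdf(midpoints, 3.0, 0.4)                # true mu=3.0, sigma=0.4
freqs *= 1.0 + 0.02 * rng.standard_normal(freqs.size)   # 2% noise

# Second step of the two-step method: trust-region non-linear least squares
(mu_hat, sigma_hat), _ = curve_fit(lognorm_pdf, midpoints, freqs,
                                   p0=(1.0, 1.0), method="trf")
```

The fitted (mu, sigma) then summarize the full GSD as a continuous distribution, rather than a histogram.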

  5. Methods for growth of relatively large step-free SiC crystal surfaces

    NASA Technical Reports Server (NTRS)

    Neudeck, Philip G. (Inventor); Powell, J. Anthony (Inventor)

    2002-01-01

    A method for growing arrays of large-area device-size films of step-free (i.e., atomically flat) SiC surfaces for semiconductor electronic device applications is disclosed. This method utilizes a lateral growth process that better overcomes the effect of extended defects in the seed crystal substrate that limited the obtainable step-free area achievable by prior art processes. The step-free SiC surface is particularly suited for the heteroepitaxial growth of 3C (cubic) SiC, AlN, and GaN films used for the fabrication of both surface-sensitive devices (i.e., surface channel field effect transistors such as HEMT's and MOSFET's) as well as high-electric field devices (pn diodes and other solid-state power switching devices) that are sensitive to extended crystal defects.

  6. The accuracy of matrix population model projections for coniferous trees in the Sierra Nevada, California

    USGS Publications Warehouse

    van Mantgem, P.J.; Stephenson, N.L.

    2005-01-01

    1. We assess the use of simple, size-based matrix population models for projecting population trends for six coniferous tree species in the Sierra Nevada, California. We used demographic data from 16 673 trees in 15 permanent plots to create 17 separate time-invariant, density-independent population projection models, and determined differences between trends projected from initial surveys with a 5-year interval and observed data during two subsequent 5-year time steps. 2. We detected departures from the assumptions of the matrix modelling approach in terms of strong growth autocorrelations. We also found evidence of observation errors for measurements of tree growth and, to a more limited degree, recruitment. Log-linear analysis provided evidence of significant temporal variation in demographic rates for only two of the 17 populations. 3. Total population sizes were strongly predicted by model projections, although population dynamics were dominated by carryover from the previous 5-year time step (i.e. there were few cases of recruitment or death). Fractional changes to overall population sizes were less well predicted. Compared with a null model and a simple demographic model lacking size structure, matrix model projections were better able to predict total population sizes, although the differences were not statistically significant. Matrix model projections were also able to predict short-term rates of survival, growth and recruitment. Mortality frequencies were not well predicted. 4. Our results suggest that simple size-structured models can accurately project future short-term changes for some tree populations. However, not all populations were well predicted and these simple models would probably become more inaccurate over longer projection intervals. The predictive ability of these models would also be limited by disturbance or other events that destabilize demographic rates. © 2005 British Ecological Society.
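    The projection machinery behind such models is simply repeated multiplication of a size-class vector by a time-invariant, density-independent matrix. A minimal sketch with invented rates (not the paper's Sierra Nevada parameters), projecting two 5-year time steps:

    ```python
    import numpy as np

    # Three size classes; columns give per-capita fates over one 5-year step
    # (invented rates, not the paper's fitted demographic parameters).
    A = np.array([
        [0.85, 0.00, 0.10],   # stay small + recruitment from large trees
        [0.10, 0.90, 0.00],   # grow small -> medium, stay medium
        [0.00, 0.08, 0.95],   # grow medium -> large, stay large
    ])

    n0 = np.array([500.0, 300.0, 200.0])   # initial counts per size class

    # Time-invariant, density-independent projection: n(t+1) = A @ n(t)
    n1 = A @ n0                            # first 5-year step
    n2 = A @ n1                            # second 5-year step
    total_change = n2.sum() / n0.sum()     # fractional change in total size
    ```

    The paper's observation that total sizes are well predicted while fractional changes are not corresponds to the diagonal "carryover" terms dominating this product.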

  7. Initial condition of stochastic self-assembly

    NASA Astrophysics Data System (ADS)

    Davis, Jason K.; Sindi, Suzanne S.

    2016-02-01

    The formation of a stable protein aggregate is regarded as the rate-limiting step in the establishment of prion diseases. In these systems, once aggregates reach a critical size the growth process accelerates, and thus the waiting time until the appearance of the first critically sized aggregate is a key determinant of disease onset. Beyond prion diseases, aggregation and nucleation are central steps in many physical, chemical, and biological processes. Previous studies have examined the first-arrival time at a critical nucleus size during homogeneous self-assembly under the assumption that at time t = 0 the system was in the all-monomer state. However, in order to compare with in vivo biological experiments, where protein constituents inherited by a newly born cell likely contain intermediate aggregates, other possibilities must be considered. We consider one such possibility by conditioning the unique ergodic size distribution on subcritical aggregate sizes; this least-informed distribution is then used as an initial condition. We argue that this initial condition carries fewer assumptions than an all-monomer one and verify that it can yield significantly different averaged waiting times relative to the all-monomer condition under various models of assembly.
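    The construction can be illustrated with a toy birth-death chain of aggregate sizes: compute the ergodic size distribution, condition it on subcritical sizes, and compare the mean waiting time to reach the critical size against the all-monomer start. All rates and sizes below are illustrative, not the paper's models:

    ```python
    import numpy as np

    N, K = 30, 10                 # size indices 0..N-1; critical index K
    b = np.full(N, 1.0)           # monomer attachment rate at each size
    d = np.full(N, 1.2)           # detachment rate (growth unfavourable below K)

    # Ergodic size distribution of the free-running chain via detailed balance:
    # pi[i+1] / pi[i] = b[i] / d[i+1]
    pi = np.ones(N)
    for i in range(N - 1):
        pi[i + 1] = pi[i] * b[i] / d[i + 1]
    pi /= pi.sum()

    # Least-informed initial condition: pi conditioned on subcritical sizes
    pi_sub = pi[:K] / pi[:K].sum()

    # Mean first-passage time tau[i] from size index i to the critical index K:
    # (b_i + d_i) tau_i = 1 + b_i tau_{i+1} + d_i tau_{i-1},
    # reflecting at index 0, absorbing at K (tau_K = 0)
    M = np.zeros((K, K))
    for i in range(K):
        M[i, i] = b[i] + (d[i] if i > 0 else 0.0)
        if i > 0:
            M[i, i - 1] = -d[i]
        if i + 1 < K:
            M[i, i + 1] = -b[i]
    tau = np.linalg.solve(M, np.ones(K))

    t_all_monomer = tau[0]          # classic all-monomer start
    t_conditioned = pi_sub @ tau    # averaged over conditioned distribution
    ```

    Because the conditioned distribution places some mass on intermediate aggregates, its averaged waiting time is shorter than the all-monomer one, which is the qualitative effect the paper quantifies.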

  8. Approach to characterization of the higher order structure of disulfide-containing proteins using hydrogen/deuterium exchange and top-down mass spectrometry.

    PubMed

    Wang, Guanbo; Kaltashov, Igor A

    2014-08-05

    Top-down hydrogen/deuterium exchange (HDX) with mass spectrometric (MS) detection has recently matured to become a potent biophysical tool capable of providing valuable information on higher order structure and conformational dynamics of proteins at an unprecedented level of structural detail. However, the scope of the proteins amenable to the analysis by top-down HDX MS still remains limited, with the protein size and the presence of disulfide bonds being the two most important limiting factors. While the limitations imposed by the physical size of the proteins gradually become more relaxed as the sensitivity, resolution and dynamic range of modern MS instrumentation continue to improve at an ever accelerating pace, the presence of the disulfide linkages remains a much less forgiving limitation even for the proteins of relatively modest size. To circumvent this problem, we introduce an online chemical reduction step following completion and quenching of the HDX reactions and prior to the top-down MS measurements of deuterium occupancy of individual backbone amides. Application of the new methodology to the top-down HDX MS characterization of a small (99 residue long) disulfide-containing protein β2-microglobulin allowed the backbone amide protection to be probed with nearly a single-residue resolution across the entire sequence. The high-resolution backbone protection pattern deduced from the top-down HDX MS measurements carried out under native conditions is in excellent agreement with the crystal structure of the protein and high-resolution NMR data, suggesting that introduction of the chemical reduction step to the top-down routine does not trigger hydrogen scrambling either during the electrospray ionization process or in the gas phase prior to the protein ion dissociation.

  9. Simulation of Micron-Sized Debris Populations in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Xu, Y.-L.; Hyde, J. L.; Prior, T.; Matney, Mark

    2010-01-01

    The update of ORDEM2000, the NASA Orbital Debris Engineering Model, to its new version ORDEM2010 is nearly complete. As a part of the ORDEM upgrade, this paper addresses the simulation of micro-debris (greater than 10 μm and smaller than 1 mm in size) populations in low Earth orbit. The principal data used in the modeling of the micron-sized debris populations are in-situ hypervelocity impact records, accumulated in post-flight damage surveys of the space-exposed surfaces of returned spacecraft. The development of the micro-debris model populations follows the general approach to deriving other ORDEM2010-required input populations for various components and types of debris. This paper describes the key elements and major steps in the statistical inference of the ORDEM2010 micro-debris populations. A crucial step is the construction of a degradation/ejecta source model to provide prior information on the micron-sized objects (such as orbital and object-size distributions). Another critical step is to link model populations with data, which is rather involved. It demands detailed information on area-time/directionality for all the space-exposed elements of a shuttle orbiter, and damage laws, which relate impact damage to the physical properties of a projectile and impact conditions such as impact angle and velocity. Also needed are model-predicted debris fluxes as a function of object size and impact velocity from all possible directions. In spite of the very limited quantity of the available shuttle impact data, the population-derivation process is satisfactorily stable. Final modeling results obtained from shuttle window and radiator impact data are reasonably convergent and consistent, especially for the debris populations with object-size thresholds at 10 and 100 μm.

  10. Simulation of Micron-Sized Debris Populations in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Xu, Y.-L.; Matney, M.; Liou, J.-C.; Hyde, J. L.; Prior, T. G.

    2010-01-01

    The update of ORDEM2000, the NASA Orbital Debris Engineering Model, to its new version, ORDEM2010, is nearly complete. As a part of the ORDEM upgrade, this paper addresses the simulation of micro-debris (greater than 10 μm and smaller than 1 mm in size) populations in low Earth orbit. The principal data used in the modeling of the micron-sized debris populations are in-situ hypervelocity impact records, accumulated in post-flight damage surveys of the space-exposed surfaces of returned spacecraft. The development of the micro-debris model populations follows the general approach to deriving other ORDEM2010-required input populations for various components and types of debris. This paper describes the key elements and major steps in the statistical inference of the ORDEM2010 micro-debris populations. A crucial step is the construction of a degradation/ejecta source model to provide prior information on the micron-sized objects (such as orbital and object-size distributions). Another critical step is to link model populations with data, which is rather involved. It demands detailed information on area-time/directionality for all the space-exposed elements of a shuttle orbiter, and damage laws, which relate impact damage to the physical properties of a projectile and impact conditions such as impact angle and velocity. Also needed are model-predicted debris fluxes as a function of object size and impact velocity from all possible directions. In spite of the very limited quantity of the available shuttle impact data, the population-derivation process is satisfactorily stable. Final modeling results obtained from shuttle window and radiator impact data are reasonably convergent and consistent, especially for the debris populations with object-size thresholds at 10 and 100 μm.

  11. Identification of optimal mask size parameter for noise filtering in 99mTc-methylene diphosphonate bone scintigraphy images.

    PubMed

    Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-11-01

    99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise filtering method based on local statistics of the image produces better results than a linear filter. However, the mask size has a significant effect on image quality. In this study, we have identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set comprised the input image and its corresponding three processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians. They selected one good smooth image from each set of images. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences in image quality processed with 5 and 7-pixel masks at a 5% cut-off. A statistically significant difference was found between the image quality processed with 5 and 7-pixel masks at P=0.00528. The identified optimal mask size to produce a good smooth image was found to be 7 pixels. The best mask size for the John-Sen Lee filter was found to be 7×7 pixels, which yielded 99mTc-methylene diphosphonate bone scan images with the highest acceptable smoothness.
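    A local-statistics filter of the kind studied here can be sketched as a classic additive-noise Lee filter, where the mask size sets the window over which the local mean and variance are estimated. This is a generic sketch (the paper's exact filter variant and noise model may differ):

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, mask_size, noise_var):
        """Local-statistics (Lee-type) filter: smooth where the local window
        looks flat, preserve detail where local variance exceeds noise_var."""
        img = np.asarray(img, dtype=float)
        mean = uniform_filter(img, mask_size)
        sq_mean = uniform_filter(img ** 2, mask_size)
        var = np.maximum(sq_mean - mean ** 2, 0.0)
        gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
        return mean + gain * (img - mean)

    # Illustrative count-limited image: Poisson noise around 50 counts/pixel
    rng = np.random.default_rng(2)
    img = rng.poisson(50.0, (32, 32)).astype(float)
    smoothed = lee_filter(img, mask_size=7, noise_var=img.mean())
    ```

    Sweeping mask_size over 3-15 and comparing outputs, as the study's readers did, is then a one-line loop.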

  12. Strength and fatigue properties of three-step sintered dense nanocrystal hydroxyapatite bioceramics

    NASA Astrophysics Data System (ADS)

    Guo, Wen-Guang; Qiu, Zhi-Ye; Cui, Han; Wang, Chang-Ming; Zhang, Xiao-Jun; Lee, In-Seop; Dong, Yu-Qi; Cui, Fu-Zhai

    2013-06-01

    Dense hydroxyapatite (HA) ceramic is a promising material for hard tissue repair due to its unique physical and biological properties. However, the brittleness and low compressive strength of traditional HA ceramics have limited their applications, because previous sintering methods produced HA ceramics with crystal sizes above the nanometer range. In this study, nano-sized HA powder was employed to fabricate dense nanocrystal HA ceramic by high-pressure molding followed by a three-step sintering process. The phase composition, microstructure, crystal dimensions and crystal shape of the sintered ceramic were examined by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The mechanical properties of the HA ceramic were tested, and its cytocompatibility was evaluated. The phase of the sintered ceramic was pure HA, and the crystal size was about 200 nm. The compressive strength and elastic modulus of the HA ceramic were comparable to human cortical bone; in particular, the good fatigue strength overcame the brittleness of traditional sintered HA ceramics. A cell attachment experiment also demonstrated that the ceramics had good cytocompatibility.

  13. General methods for analysis of sequential "n-step" kinetic mechanisms: application to single turnover kinetics of helicase-catalyzed DNA unwinding.

    PubMed

    Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M

    2003-10-01

    Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of base pairs unwound between two successive rate-limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f_ss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f_ss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
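    For the special case of n identical, irreversible steps with rate constant k, the analytic time course is a regularized lower incomplete gamma function, which is what allows n = L/m to float as a continuous parameter. A simplified sketch (the paper's full expressions include additional terms):

    ```python
    import numpy as np
    from scipy.special import gammainc   # regularized lower incomplete gamma

    def fraction_unwound(t, L, m, k, A=1.0):
        """Single-turnover 'all or none' time course for an n-step sequential
        mechanism with identical rate constants k: n = L / m steps to fully
        unwind an L-bp duplex with kinetic step size m. Because gammainc
        accepts non-integer n, both n and m can float as continuous
        fitting parameters."""
        n = L / m
        return A * gammainc(n, k * t)

    t = np.linspace(0.0, 50.0, 200)
    f18 = fraction_unwound(t, L=18, m=4.5, k=1.0)   # 4-step duplex
    f36 = fraction_unwound(t, L=36, m=4.5, k=1.0)   # 8 steps: longer lag phase
    ```

    The longer duplex shows the characteristic longer lag before product appears, which is the length dependence these analyses exploit.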

  14. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review

    PubMed Central

    Morris, Tom; Gray, Laura

    2017-01-01

    Objectives To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting Any, not limited to healthcare settings. Participants Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
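    The two quantities at the heart of the review, the coefficient of variation of cluster sizes and its effect on the required sample size, can be sketched as follows. The design-effect formula used is the commonly cited approximation for parallel cluster trials with unequal cluster sizes (SW-CRTs need further design-specific adjustment), so treat it as illustrative:

    ```python
    import numpy as np

    def cluster_cv(sizes):
        """Coefficient of variation (SD / mean) of cluster sizes."""
        sizes = np.asarray(sizes, dtype=float)
        return sizes.std(ddof=1) / sizes.mean()

    def design_effect(mean_size, icc, cv=0.0):
        """Design effect with unequal cluster sizes,
        DE = 1 + ((cv**2 + 1) * m - 1) * icc,
        reducing to the familiar 1 + (m - 1) * icc when cv = 0."""
        return 1.0 + ((cv ** 2 + 1.0) * mean_size - 1.0) * icc

    # With the review's median CV of 0.41, an average cluster of 20 and an
    # illustrative ICC of 0.05, the inflation over the equal-size case is ~9%
    de_equal = design_effect(20, 0.05)             # 1.95
    de_unequal = design_effect(20, 0.05, cv=0.41)  # ~2.12
    ```

    Ignoring the CV during the sample size calculation, as most of the reviewed trials did, therefore understates the required sample size whenever clusters vary.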

  15. Influence of BMI and dietary restraint on self-selected portions of prepared meals in US women.

    PubMed

    Labbe, David; Rytz, Andréas; Brunstrom, Jeffrey M; Forde, Ciarán G; Martin, Nathalie

    2017-04-01

    The rise in obesity prevalence has been attributed in part to an increase in the food and beverage portion sizes selected and consumed by overweight and obese consumers. Nevertheless, evidence from observations of adults is mixed, and contradictory findings might reflect the use of small or unrepresentative samples. The objective of this study was (i) to determine the extent to which BMI and dietary restraint predict self-selected portion sizes for a range of commercially available prepared savoury meals and (ii) to consider the importance of these variables relative to two previously established predictors of portion selection, expected satiation and expected liking. A representative sample of female consumers (N = 300, range 18-55 years) evaluated 15 frozen savoury prepared meals. For each meal, participants rated their expected satiation and expected liking, and selected their ideal portion using a previously validated computer-based task. Dietary restraint was quantified using the Dutch Eating Behaviour Questionnaire (DEBQ-R). Hierarchical multiple regression was performed on self-selected portions with age, hunger level, and meal familiarity entered as control variables in the first step of the model, expected satiation and expected liking as predictor variables in the second step, and DEBQ-R and BMI as exploratory predictor variables in the third step. The second and third steps significantly explained variance in portion size selection (18% and 4%, respectively). Larger portion selections were significantly associated with lower dietary restraint and with lower expected satiation. There was a positive relationship between BMI and portion size selection (p = 0.06) and between expected liking and portion size selection (p = 0.06). Our discussion considers future research directions, the limited variance explained by our model, and the potential for portion size underreporting by overweight participants. Copyright © 2016 Nestec S.A. Published by Elsevier Ltd. All rights reserved.
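    The three-step hierarchical regression can be sketched as nested OLS fits, reading off the increment in R² contributed by each block of predictors. The data below are synthetic and the effect sizes invented; only the structure mirrors the study:

    ```python
    import numpy as np

    def r_squared(X, y):
        """R^2 from an OLS fit with intercept."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        tot = y - y.mean()
        return 1.0 - (resid @ resid) / (tot @ tot)

    # Synthetic stand-in data with the study's three-block structure
    rng = np.random.default_rng(1)
    n = 300
    controls = rng.standard_normal((n, 3))   # age, hunger, meal familiarity
    block2 = rng.standard_normal((n, 2))     # expected satiation, liking
    block3 = rng.standard_normal((n, 2))     # DEBQ-R, BMI
    portion = 0.5 * block2[:, 0] + 0.3 * block3[:, 0] + rng.standard_normal(n)

    r2_1 = r_squared(controls, portion)
    r2_2 = r_squared(np.hstack([controls, block2]), portion)
    r2_3 = r_squared(np.hstack([controls, block2, block3]), portion)
    delta_step2 = r2_2 - r2_1   # variance added by satiation/liking
    delta_step3 = r2_3 - r2_2   # variance added by restraint/BMI
    ```

    The study's reported 18% and 4% are exactly these incremental R² values for its second and third steps.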

  16. Scanning near-field optical microscopy.

    PubMed

    Vobornik, Dusan; Vobornik, Slavenka

    2008-02-01

    An average human eye can see details down to 0.07 mm in size. The ability to see smaller details of matter is correlated with the development of science and our comprehension of nature. Today's science needs eyes for the nano-world. Examples are easily found in biology and the medical sciences. There is a great need to determine the shape, size, chemical composition, molecular structure and dynamic properties of nano-structures. To do this, microscopes with high spatial, spectral and temporal resolution are required. Scanning Near-field Optical Microscopy (SNOM) is a new step in the evolution of microscopy. Conventional, lens-based microscopes have their resolution limited by diffraction. SNOM is not subject to this limitation and can offer up to 70 times better resolution.

  17. An improved maximum power point tracking method for a photovoltaic system

    NASA Astrophysics Data System (ADS)

    Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes

    2016-06-01

    In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for photovoltaic (PV) system was proposed. To achieve simultaneously a fast dynamic response and stable steady-state power, a first improvement was made on the step-size scaling function of the duty cycle that controls the converter. An algorithm was secondly proposed to address wrong decision that may be made at an abrupt change of the irradiation. The proposed auto-scaling variable step-size approach was compared to some various other approaches from the literature such as: classical fixed step-size, variable step-size and a recent auto-scaling variable step-size maximum power point tracking approaches. The simulation results obtained by MATLAB/SIMULINK were given and discussed for validation.
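    A generic sketch of the class of methods discussed, perturb and observe with a step size that auto-scales with the local power slope |dP/dV|, is shown below. The PV curve, gains, and limits are all invented for illustration and are not the paper's model:

    ```python
    import numpy as np

    def pv_power(v):
        """Toy PV panel P-V curve (invented, not the paper's model):
        5 A photocurrent rolling off near a 40 V open-circuit voltage."""
        i = 5.0 * (1.0 - np.exp((v - 40.0) / 3.0))
        return v * max(i, 0.0)

    def mppt_track(v0=5.0, n_iter=300, scale=0.1, d_min=0.01, d_max=2.0):
        """Perturb & observe with an auto-scaling variable step: the step
        magnitude is proportional to |dP/dV|, so it is large far from the
        MPP and shrinks near it, trading tracking speed against
        steady-state oscillation."""
        v, p = v0, pv_power(v0)
        step = 1.0
        for _ in range(n_iter):
            v_new = v + step
            p_new = pv_power(v_new)
            dp = p_new - p
            size = np.clip(scale * abs(dp / step), d_min, d_max)
            direction = np.sign(step) if dp > 0 else -np.sign(step)
            step = direction * size      # keep climbing the P-V curve
            v, p = v_new, p_new
        return v, p

    v_mpp, p_mpp = mppt_track()
    ```

    For this toy curve the maximum power point sits near 32.5 V; the paper's contribution is a better scaling function plus logic to avoid wrong direction decisions under abrupt irradiation changes, which this sketch does not include.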

  18. One size fits all electronics for insole-based activity monitoring.

    PubMed

    Hegde, Nagaraj; Bries, Matthew; Melanson, Edward; Sazonov, Edward

    2017-07-01

    Footwear-based wearable sensors are becoming prominent in many areas of health and wellness monitoring, such as gait and activity monitoring. In our previous research we introduced an insole-based wearable system, SmartStep, which is completely integrated in a socially acceptable package. From a manufacturing perspective, SmartStep's electronics had to be custom made for each shoe size, greatly complicating the manufacturing process. In this work we explore the possibility of making a universal electronics platform for SmartStep, SmartStep 3.0, which can be used in the most common insole sizes without modification. A pilot human subject experiment was run to compare the accuracy of the one-size-fits-all SmartStep 3.0 against the custom-size SmartStep 2.0. A total of ~10 hours of data was collected in the pilot study, involving three participants performing different activities of daily living while wearing SmartStep 2.0 and SmartStep 3.0. Leave-one-out cross validation resulted in a 98.5% average accuracy for SmartStep 2.0 and 98.3% for SmartStep 3.0, suggesting that SmartStep 3.0 can be as accurate as SmartStep 2.0 while fitting the most common shoe sizes.

  19. Kinematic, Muscular, and Metabolic Responses During Exoskeletal-, Elliptical-, or Therapist-Assisted Stepping in People With Incomplete Spinal Cord Injury

    PubMed Central

    Kinnaird, Catherine R.; Holleran, Carey L.; Rafferty, Miriam R.; Rodriguez, Kelly S.; Cain, Julie B.

    2012-01-01

    Background Robotic-assisted locomotor training has demonstrated some efficacy in individuals with neurological injury and is slowly gaining clinical acceptance. Both exoskeletal devices, which control individual joint movements, and elliptical devices, which control endpoint trajectories, have been utilized with specific patient populations and are available commercially. No studies have directly compared training efficacy or patient performance during stepping between devices. Objective The purpose of this study was to evaluate kinematic, electromyographic (EMG), and metabolic responses during elliptical- and exoskeletal-assisted stepping in individuals with incomplete spinal cord injury (SCI) compared with therapist-assisted stepping. Design A prospective, cross-sectional, repeated-measures design was used. Methods Participants with incomplete SCI (n=11) performed 3 separate bouts of exoskeletal-, elliptical-, or therapist-assisted stepping. Unilateral hip and knee sagittal-plane kinematics, lower-limb EMG recordings, and oxygen consumption were compared across stepping conditions and with control participants (n=10) during treadmill stepping. Results Exoskeletal stepping kinematics closely approximated normal gait patterns, whereas significantly greater hip and knee flexion postures were observed during elliptical-assisted stepping. Measures of kinematic variability indicated consistent patterns in control participants and during exoskeletal-assisted stepping, whereas therapist- and elliptical-assisted stepping kinematics were more variable. Despite specific differences, EMG patterns generally were similar across stepping conditions in the participants with SCI. In contrast, oxygen consumption was consistently greater during therapist-assisted stepping. Limitations Limitations included a small sample size, lack of ability to evaluate kinetics during stepping, unilateral EMG recordings, and sagittal-plane kinematics. 
Conclusions Despite specific differences in kinematics and EMG activity, metabolic activity was similar during stepping in each robotic device. Understanding potential differences and similarities in stepping performance with robotic assistance may be important in delivery of repeated locomotor training using robotic or therapist assistance and for consumers of robotic devices. PMID:22700537

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobo, R.; Revah, S.; Viveros-Garcia, T.

    An analysis of the local processes occurring in a trickle-bed bioreactor (TBB) with a first-order bioreaction shows that the identification of the TBB operating regime requires knowledge of the substrate concentration in the liquid phase. If the substrate liquid concentration is close to 0, the rate-controlling step is mass transfer at the gas-liquid interface; when it is close to the value in equilibrium with the gas phase, the controlling step is the phenomena occurring in the biofilm. CS2 removal rate data obtained in a TBB with a Thiobacilli consortia biofilm are analyzed to obtain the mass transfer and kinetic parameters, and to show that the bioreactor operates in a regime mainly controlled by mass transfer. A TBB model with two experimentally determined parameters is developed and used to show how the bioreactor size depends on the rate-limiting step, the absorption factor, the substrate fractional conversion, and the gas and liquid contact pattern. Under certain conditions, the TBB size is independent of the flowing phases' contact pattern. The model effectively describes substrate gas and liquid concentration data for mass-transfer- and biodegradation-rate-controlled processes.
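    The regime identification described above can be sketched directly: read the controlling step off the liquid-phase substrate concentration, and view the first-order system as two resistances in series. Thresholds and names below are illustrative, not from the paper:

    ```python
    def controlling_step(c_liquid, c_equilibrium, tol=0.1):
        """Regime heuristic from the local analysis: liquid substrate
        concentration near zero -> gas-liquid mass transfer controls;
        near equilibrium with the gas -> the biofilm controls."""
        frac = c_liquid / c_equilibrium
        if frac < tol:
            return "gas-liquid mass transfer"
        if frac > 1.0 - tol:
            return "biofilm (reaction/diffusion)"
        return "mixed control"

    def overall_coefficient(kla, k_bio):
        """First-order resistances in series: 1/K = 1/kLa + 1/k_bio,
        so the smaller coefficient dominates the required reactor size."""
        return 1.0 / (1.0 / kla + 1.0 / k_bio)
    ```

    The series-resistance view also shows why, under mass-transfer control, the reactor size becomes insensitive to the biofilm kinetics (and vice versa).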

  1. Effect of Pore Clogging on Kinetics of Lead Uptake by Clinoptilolite.

    PubMed

    Inglezakis; Diamandis; Loizidou; Grigoropoulou

    1999-07-01

    The kinetics of lead-sodium ion exchange using pretreated natural clinoptilolite are investigated, more specifically the influence of agitation (0, 210, and 650 rpm) on the limiting step of the overall process, for particle sizes of 0.63-0.8 and 0.8-1 mm at ambient temperature and an initial lead concentration of 500 mg l⁻¹ without pH adjustment. The isotopic exchange model is found to fit the ion exchange process. Particle diffusion is shown to be the controlling step for both particle sizes under agitation, while in the absence of agitation film diffusion is shown to control. The effective diffusion coefficients of the ion exchange process are calculated and found to depend strongly on particle size in the case of agitation at 210 rpm and only slightly on particle size at 650 rpm. Lead uptake rates are higher for smaller particles only under vigorous agitation, while at mild agitation the results are reversed. These facts are due to partial clogging of the pores of the mineral during the grinding process. This is verified through comparison of lead uptake rates for two samples of the same particle size, one of which is vigorously washed for a certain time before being exposed to the ion exchange. Copyright 1999 Academic Press.
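
For intraparticle (particle-diffusion) control, a standard way to model uptake kinetics is the spherical-diffusion series solution for the fractional attainment of equilibrium. The sketch below uses this textbook model with a hypothetical effective diffusivity and the paper's particle sizes; it is not the isotopic exchange model actually fitted in the study:

```python
import math

def fractional_attainment(D, r, t, n_terms=200):
    """Textbook particle-diffusion model for a sphere of radius r with
    effective diffusivity D: F(t) = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 D t / r^2) / n^2."""
    s = sum(math.exp(-n ** 2 * math.pi ** 2 * D * t / r ** 2) / n ** 2
            for n in range(1, n_terms + 1))
    return 1.0 - (6.0 / math.pi ** 2) * s

# Illustrative values: under particle-diffusion control, smaller
# particles approach equilibrium faster.
D = 1e-12                                      # m^2/s, hypothetical
t = 3600.0                                     # s
small = fractional_attainment(D, 0.63e-3 / 2, t)
large = fractional_attainment(D, 1.0e-3 / 2, t)
print(small > large)  # True
```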

  2. N-terminus of Cardiac Myosin Essential Light Chain Modulates Myosin Step-Size

    PubMed Central

    Wang, Yihua; Ajtai, Katalin; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta; Burghardt, Thomas P.

    2016-01-01

    Muscle myosin cyclically hydrolyzes ATP to translate actin. Ventricular cardiac myosin (βmys) moves actin with three distinct unitary step-sizes resulting from its lever-arm rotation and with step-frequencies that are modulated in a myosin regulation mechanism. The lever-arm associated essential light chain (vELC) binds actin by its 43 residue N-terminal extension. Unitary steps were proposed to involve the vELC N-terminal extension, with the 8 nm step engaging the vELC/actin bond to facilitate an extra ~19 degrees of lever-arm rotation while the predominant 5 nm step forgoes vELC/actin binding. A minor 3 nm step is attributed to the unlikely conversion of a completed 5 nm step into the 8 nm step. This hypothesis was tested using a 17 residue N-terminal truncated vELC in porcine βmys (Δ17βmys) and a 43 residue N-terminal truncated human vELC expressed in transgenic mouse heart (Δ43αmys). Step-size and step-frequency were measured using the Qdot motility assay. Both Δ17βmys and Δ43αmys had significantly increased 5 nm step-frequency and a coincident loss in 8 nm step-frequency compared to the native proteins, suggesting that the vELC/actin interaction drives step-size preference. Step-size and step-frequency probability densities depend on the relative fraction of truncated vELC and relate linearly to pure myosin species concentrations in a mixture containing the native vELC homodimer, two truncated vELCs in the modified homodimer, and one native and one truncated vELC in the heterodimer. Step-size and step-frequency, measured for the native homodimer and at two or more known relative fractions of truncated vELC, are estimated for each pure species by using a new analytical method. PMID:26671638

  3. STEPS TOWARD COMPENSATORY EDUCATION IN THE CHICAGO PUBLIC SCHOOLS. HIGH SCHOOL DISTRICTS. REPORT OF AN EVALUATIVE STUDY.

    ERIC Educational Resources Information Center

    CITIZENS SCHOOLS COMMITTEE OF CHICAGO

    THE CITIZENS COMMITTEE OFFERS SUGGESTIONS FOR COMPENSATORY EDUCATION TO MEET THE NEEDS OF ALL CHILDREN LIVING IN AREAS OF HIGH TRANSIENCY WHO HAVE EXPERIENCED A MEAGER EDUCATIONAL BACKGROUND. THE SUGGESTIONS ARE--THAT CLASS SIZE BE LIMITED TO 25 STUDENTS, THAT THE LENGTH OF SCHOOL IN "DIFFICULT" AREAS BE LENGTHENED, AND THAT THE SALARY…

  4. Funding Survival Toolkit: 3 Fiscal Cliff Myths, Debunked

    ERIC Educational Resources Information Center

    House, Jenny

    2013-01-01

    In the face of annual budget deficits, sequestration means automatic, across-the-board spending cuts to all federal agencies. This drastic step allows Congress to limit the size of the budget and gives it the right to make mandatory cuts if the cost of running the government exceeds the cap. On March 1, we all watched as Congress was unable to…

  5. Molecular simulation of small Knudsen number flows

    NASA Astrophysics Data System (ADS)

    Fei, Fei; Fan, Jing

    2012-11-01

    The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature through sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ˜ 10-3-10-4 have been investigated. It is shown that the IP calculations are not only accurate but also efficient, because they make it possible to use a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
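
The time step and cell size limitations mentioned above come from kinetic-theory scales: conventional DSMC needs cells smaller than the local mean free path and time steps shorter than the mean collision time. A sketch using standard hard-sphere estimates (the molecular diameter and mass below are rough, air-like values, not taken from the paper):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def dsmc_limits(T, p, d, m):
    """Hard-sphere kinetic-theory estimates of the scales that bound a
    conventional DSMC discretization: cell size below the mean free
    path, time step below the mean collision time."""
    lam = KB * T / (math.sqrt(2.0) * math.pi * d ** 2 * p)  # mean free path
    v_mean = math.sqrt(8.0 * KB * T / (math.pi * m))        # mean thermal speed
    tau = lam / v_mean                                      # mean collision time
    return lam, tau

# Air-like gas at roughly standard conditions
lam, tau = dsmc_limits(T=300.0, p=101325.0, d=3.7e-10, m=4.8e-26)
print(f"mean free path ~ {lam:.2e} m, mean collision time ~ {tau:.2e} s")
```

At atmospheric pressure these scales are tens of nanometers and sub-nanosecond, which is why relaxing them, as the D-IP method does, matters so much at small Kn.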

  6. New insights into aldol reactions of methyl isocyanoacetate catalyzed by heterogenized homogeneous catalysts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Rong; Zhao, Jie; Yuan, Bing

    The Hayashi–Ito aldol reaction of methyl isocyanoacetate (MI) and benzaldehydes, a classic homogeneous Au(I)-catalyzed reaction, was studied with heterogenized homogeneous catalysts. Among dendrimer encapsulated nanoparticles (NPs) of Au, Pd, Rh, or Pt loaded in mesoporous supports and the homogeneous analogues, the Au NPs led to the highest yield and highest diastereoselectivity of products in toluene at room temperature. The Au catalyst was stable and was recycled for at least six runs without substantial deactivation. Moreover, larger pore sizes of the support and the use of a hydrophobic solvent led to a high selectivity for the trans diastereomer of the product. The activation energy is sensitive to neither the size of the Au NPs nor the support. A linear Hammett plot was obtained with a positive slope, suggesting an increased electron density on the carbonyl carbon atom in the rate-limiting step. Furthermore, IR studies revealed a strong interaction between MI and the gold catalyst, supporting the proposed mechanism, in which the rate-limiting step involves an electrophilic attack of the aldehyde on the enolate formed from the deprotonated MI.

  7. New insights into aldol reactions of methyl isocyanoacetate catalyzed by heterogenized homogeneous catalysts

    DOE PAGES

    Ye, Rong; Zhao, Jie; Yuan, Bing; ...

    2016-12-14

    The Hayashi–Ito aldol reaction of methyl isocyanoacetate (MI) and benzaldehydes, a classic homogeneous Au(I)-catalyzed reaction, was studied with heterogenized homogeneous catalysts. Among dendrimer encapsulated nanoparticles (NPs) of Au, Pd, Rh, or Pt loaded in mesoporous supports and the homogeneous analogues, the Au NPs led to the highest yield and highest diastereoselectivity of products in toluene at room temperature. The Au catalyst was stable and was recycled for at least six runs without substantial deactivation. Moreover, larger pore sizes of the support and the use of a hydrophobic solvent led to a high selectivity for the trans diastereomer of the product. The activation energy is sensitive to neither the size of the Au NPs nor the support. A linear Hammett plot was obtained with a positive slope, suggesting an increased electron density on the carbonyl carbon atom in the rate-limiting step. Furthermore, IR studies revealed a strong interaction between MI and the gold catalyst, supporting the proposed mechanism, in which the rate-limiting step involves an electrophilic attack of the aldehyde on the enolate formed from the deprotonated MI.

  8. The wiper model: avalanche dynamics in an exclusion process

    NASA Astrophysics Data System (ADS)

    Politi, Antonio; Romano, M. Carmen

    2013-10-01

    The exclusion-process model (Ciandrini et al 2010 Phys. Rev. E 81 051904) describing traffic of particles with internal stepping dynamics reveals the presence of strong correlations in realistic regimes. Here we study such a model in the limit of an infinitely fast translocation time, where the evolution can be interpreted as a ‘wiper’ that moves to dry neighbouring sites. We trace back the existence of long-range correlations to the existence of avalanches, where many sites are dried at once. At variance with self-organized criticality, in the wiper model avalanches have a typical size equal to the logarithm of the lattice size. In the thermodynamic limit, we find that the hydrodynamic behaviour is a mixture of stochastic (diffusive) fluctuations and increasingly coherent periodic oscillations that are reminiscent of a collective dynamics.

  9. SCANNING NEAR-FIELD OPTICAL MICROSCOPY

    PubMed Central

    Vobornik, Dušan; Vobornik, Slavenka

    2008-01-01

    An average human eye can see details down to 0.07 mm in size. The ability to see smaller details of matter has gone hand in hand with the development of science and our comprehension of nature. Today’s science needs eyes for the nano-world. Examples are easily found in biology and medical sciences. There is a great need to determine the shape, size, chemical composition, molecular structure and dynamic properties of nano-structures. To do this, microscopes with high spatial, spectral and temporal resolution are required. Scanning Near-field Optical Microscopy (SNOM) is a new step in the evolution of microscopy. Conventional, lens-based microscopes have their resolution limited by diffraction. SNOM is not subject to this limitation and can offer up to 70 times better resolution. PMID:18318675
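
The diffraction limit of a conventional lens-based microscope is captured by the Abbe criterion, d = λ/(2·NA). A minimal worked example (the wavelength and numerical aperture are illustrative, not from the review):

```python
def abbe_limit(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit for a conventional far-field microscope;
    SNOM sidesteps this bound by probing the optical near field."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light through a good oil-immersion objective
d = abbe_limit(550.0, 1.4)
print(f"~{d:.0f} nm")  # roughly 200 nm; SNOM can resolve well below this
```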

  10. Interaction of rate- and size-effect using a dislocation density based strain gradient viscoplasticity model

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung N.; Siegmund, Thomas; Tomar, Vikas; Kruzic, Jamie J.

    2017-12-01

    Size effects occur in non-uniformly plastically deformed metals confined to volumes on the micrometer or sub-micrometer scale. Such problems have been well studied using rate-independent strain gradient plasticity theories. Yet, plasticity theories describing the time-dependent behavior of metals in the presence of size effects are presently limited, and there is no consensus about how the size effects vary with strain rate or whether there is an interaction between them. This paper introduces a constitutive model which enables the analysis of complex load scenarios, including loading rate sensitivity, creep, relaxation and their interactions, under consideration of plastic strain gradient effects. A strain gradient viscoplasticity constitutive model based on the Kocks-Mecking theory of dislocation evolution, namely the strain gradient Kocks-Mecking (SG-KM) model, is established and allows one to capture both rate and size effects, and their interaction. A formulation of the model in the finite element analysis framework is derived. Numerical examples are presented. In a special virtual creep test in the presence of plastic strain gradients, creep rates are found to diminish with the specimen size, and are also found to depend on the loading rate in an initial ramp loading step. Stress relaxation in a solid medium containing cylindrical microvoids is predicted to increase with decreasing void radius and strain rate in a prior ramp loading step.
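
The local, gradient-free core of a Kocks-Mecking-type model is the dislocation density evolution dρ/dε = k₁√ρ − k₂ρ with a Taylor flow stress σ = αGb√ρ. A minimal sketch with illustrative parameter values; the paper's SG-KM model adds strain gradient and rate terms omitted here:

```python
import math

def kocks_mecking_flow_stress(eps_max, k1=1e8, k2=10.0,
                              alpha=0.3, G=80e9, b=2.5e-10,
                              rho0=1e10, n_steps=10000):
    """Explicit-Euler integration of the classical Kocks-Mecking law
    d(rho)/d(eps) = k1*sqrt(rho) - k2*rho, followed by the Taylor
    relation sigma = alpha*G*b*sqrt(rho). All parameter values are
    illustrative, not taken from the paper."""
    d_eps = eps_max / n_steps
    rho = rho0
    for _ in range(n_steps):
        rho += (k1 * math.sqrt(rho) - k2 * rho) * d_eps
    return alpha * G * b * math.sqrt(rho)

sigma = kocks_mecking_flow_stress(0.1)
print(f"flow stress ~ {sigma / 1e6:.0f} MPa")  # hardens with strain
```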

  11. Combining the Complete Active Space Self-Consistent Field Method and the Full Configuration Interaction Quantum Monte Carlo within a Super-CI Framework, with Application to Challenging Metal-Porphyrins.

    PubMed

    Li Manni, Giovanni; Smart, Simon D; Alavi, Ali

    2016-03-08

    A novel stochastic Complete Active Space Self-Consistent Field (CASSCF) method has been developed and implemented in the Molcas software package. A two-step procedure is used, in which the CAS configuration interaction secular equations are solved stochastically with the Full Configuration Interaction Quantum Monte Carlo (FCIQMC) approach, while orbital rotations are performed using an approximated form of the Super-CI method. This new method does not suffer from the strong combinatorial limitations of standard MCSCF implementations using direct schemes and can handle active spaces well in excess of those accessible to traditional CASSCF approaches. The density matrix formulation of the Super-CI method makes this step independent of the size of the CI expansion, depending exclusively on one- and two-body density matrices with indices restricted to the relatively small number of active orbitals. No sigma vectors need to be stored in memory for the FCIQMC eigensolver--a substantial gain in comparison to implementations using the Davidson method, which require three or more vectors of the size of the CI expansion. Further, no orbital Hessian is computed, circumventing limitations on basis set expansions. Like the parent FCIQMC method, the present technique is scalable on massively parallel architectures. We present in this report the method and its application to the free-base porphyrin, Mg(II) porphyrin, and Fe(II) porphyrin. In the present study, active spaces up to 32 electrons and 29 orbitals in orbital expansions containing up to 916 contracted functions are treated with modest computational resources. Results are quite promising even without accounting for the correlation outside the active space. The systems here presented clearly demonstrate that large CASSCF calculations are possible via FCIQMC-CASSCF without limitations on basis set size.
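
The scaling argument behind the density-matrix formulation can be made concrete by comparing the size of the CI expansion with the size of the active-space density matrices it replaces. A sketch for the quoted (32e, 29o) active space (determinant count at Ms = 0, ignoring spatial symmetry):

```python
from math import comb

def cas_dimension(n_elec, n_orb):
    """Number of Slater determinants in a CAS(n_elec, n_orb) space with
    equal numbers of alpha and beta electrons (Ms = 0); point-group
    symmetry, which would reduce this, is ignored."""
    n_alpha = n_elec // 2
    return comb(n_orb, n_alpha) ** 2

dets = cas_dimension(32, 29)   # size a CI vector would have
rdm2 = 29 ** 4                 # dense two-body density matrix elements
print(f"{dets:.2e} determinants vs {rdm2} 2-RDM elements")
```

The CI space has ~10^15 determinants, while the one- and two-body density matrices over 29 active orbitals need well under a million numbers, which is why the Super-CI orbital step is independent of the CI expansion size.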

  12. Mechanism of two-step vapour-crystal nucleation in a pore

    NASA Astrophysics Data System (ADS)

    van Meel, J. A.; Liu, Y.; Frenkel, D.

    2015-09-01

    We present a numerical study of the effect of hemispherical pores on the nucleation of Lennard-Jones crystals from the vapour phase. As predicted by Page and Sear, there is a narrow range of pore radii where vapour-liquid nucleation can become a two-step process. A similar observation was made for different pore geometries by Giacomello et al. We find that the maximum nucleation rate depends on both the size and the adsorption strength of the pore. Moreover, a pore can be more effective than a planar wall with the same strength of attraction. Pore-induced vapour-liquid nucleation turns out to be the rate-limiting step for crystal nucleation. This implies that crystal nucleation can be enhanced by a judicious choice of the wetting properties of a microporous nucleating agent.

  13. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review.

    PubMed

    Kristunas, Caroline; Morris, Tom; Gray, Laura

    2017-11-15

    To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting: any, not limited to healthcare settings. Participants: any taking part in an SW-CRT published up to March 2016. The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22-0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
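
The coefficient of variation used as the primary outcome is simply the standard deviation of the cluster sizes divided by their mean. A sketch with hypothetical cluster sizes (the review does not say here whether the sample or population standard deviation was used, so the sample version is assumed):

```python
import statistics

def coefficient_of_variation(cluster_sizes):
    """CV of cluster sizes: standard deviation over mean (sample
    standard deviation assumed)."""
    return statistics.stdev(cluster_sizes) / statistics.mean(cluster_sizes)

# Hypothetical SW-CRT with six clusters of unequal size
sizes = [20, 35, 50, 80, 120, 45]
cv = coefficient_of_variation(sizes)
print(f"CV = {cv:.2f}")
```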

  14. Branching random walk with step size coming from a power law

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ayan; Subhra Hazra, Rajat; Roy, Parthanil

    2015-09-01

    In their seminal work, Brunet and Derrida made predictions on the random point configurations associated with branching random walks. We shall discuss the limiting behavior of such point configurations when the displacement random variables come from a power law. In particular, we establish that two of their predictions remain valid in this setup, and investigate various other issues mentioned in their paper.

  15. On the development of efficient algorithms for three dimensional fluid flow

    NASA Technical Reports Server (NTRS)

    Maccormack, R. W.

    1988-01-01

    The difficulties of constructing efficient algorithms for three-dimensional flow are discussed. Reasonable candidates are analyzed and tested, and most are found to have obvious shortcomings. Yet, there is promise that an efficient class of algorithms exists between the severely time-step-size-limited explicit or approximately factored algorithms and the computationally intensive direct inversion of large sparse matrices by Gaussian elimination.

  16. Default perception of high-speed motion

    PubMed Central

    Wexler, Mark; Glennerster, Andrew; Cavanagh, Patrick; Ito, Hiroyuki; Seno, Takeharu

    2013-01-01

    When human observers are exposed to even slight motion signals followed by brief visual transients—stimuli containing no detectable coherent motion signals—they perceive large and salient illusory jumps. This visually striking effect, which we call “high phi,” challenges well-entrenched assumptions about the perception of motion, namely the minimal-motion principle and the breakdown of coherent motion perception with steps above an upper limit called dmax. Our experiments with transients, such as texture randomization or contrast reversal, show that the magnitude of the jump depends on spatial frequency and transient duration—but not on the speed of the inducing motion signals—and the direction of the jump depends on the duration of the inducer. Jump magnitude is robust across jump directions and different types of transient. In addition, when a texture is actually displaced by a large step beyond the upper step size limit of dmax, a breakdown of coherent motion perception is expected; however, in the presence of an inducer, observers again perceive coherent displacements at or just above dmax. In summary, across a large variety of stimuli, we find that when incoherent motion noise is preceded by a small bias, instead of perceiving little or no motion—as suggested by the minimal-motion principle—observers perceive jumps whose amplitude closely follows their own dmax limits. PMID:23572578

  17. Steepest descent method implementation on unconstrained optimization problem using C++ program

    NASA Astrophysics Data System (ADS)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest Descent is known as the simplest gradient method. Recently, much research has been done on choosing the step size so as to reduce the objective function value progressively. In this paper, the properties of the steepest descent method are reviewed from the literature, together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method due to its step size procedure is discussed. In order to test the performance of each step size, we run a steepest descent procedure in a C++ program. We apply it to an unconstrained optimization test problem with two variables, then compare the numerical results of each step size procedure. Based on the numerical experiments, we conclude the general computational features and weaknesses of each procedure in each problem case.
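
The comparison of step-size procedures can be sketched generically: the descent loop stays fixed and only the step-size rule is swapped. The paper's implementation is in C++; the Python sketch below, with a made-up two-variable quadratic test problem, a constant step and Armijo backtracking, only illustrates the idea:

```python
def steepest_descent(grad, x0, step_rule, n_iter=2000, tol=1e-10):
    """Steepest descent with a pluggable step-size rule: move along the
    negative gradient by whatever step the rule returns."""
    x = list(x0)
    for _ in range(n_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) < tol:  # stop when gradient is tiny
            break
        a = step_rule(x, g)
        x = [xi - a * gi for xi, gi in zip(x, g)]
    return x

# Two-variable test problem: f(x, y) = (x - 1)^2 + 4*(y + 2)^2
f = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 8.0 * (x[1] + 2.0)]

def backtracking(x, g, a0=1.0, beta=0.5, c=1e-4):
    """Armijo backtracking: shrink the trial step until f decreases enough."""
    a, fx, g2 = a0, f(x), sum(gi * gi for gi in g)
    while f([xi - a * gi for xi, gi in zip(x, g)]) > fx - c * a * g2:
        a *= beta
    return a

x_fixed = steepest_descent(grad, [0.0, 0.0], lambda x, g: 0.1)
x_armijo = steepest_descent(grad, [0.0, 0.0], backtracking)
print(x_fixed, x_armijo)  # both approach the minimizer (1, -2)
```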

  18. Simultaneous digital quantification and fluorescence-based size characterization of massively parallel sequencing libraries.

    PubMed

    Laurie, Matthew T; Bertout, Jessica A; Taylor, Sean D; Burton, Joshua N; Shendure, Jay A; Bielas, Jason H

    2013-08-01

    Due to the high cost of failed runs and suboptimal data yields, quantification and determination of fragment size range are crucial steps in the library preparation process for massively parallel sequencing (or next-generation sequencing). Current library quality control methods commonly involve quantification using real-time quantitative PCR and size determination using gel or capillary electrophoresis. These methods are laborious and subject to a number of significant limitations that can make library calibration unreliable. Herein, we propose and test an alternative method for quality control of sequencing libraries using droplet digital PCR (ddPCR). By exploiting a correlation we have discovered between droplet fluorescence and amplicon size, we achieve the joint quantification and size determination of target DNA with a single ddPCR assay. We demonstrate the accuracy and precision of applying this method to the preparation of sequencing libraries.
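
The quantification half of such an assay rests on standard ddPCR Poisson statistics: the average number of target copies per droplet is λ = −ln(fraction of negative droplets). A sketch of that textbook calculation (the droplet volume is a typical nominal value, and the paper's fluorescence-based size readout is not modeled here):

```python
import math

def ddpcr_concentration(n_positive, n_total, droplet_volume_nl=0.85):
    """Poisson correction for digital PCR: copies per droplet
    lambda = -ln(negative fraction), converted to copies per microliter
    using an assumed nominal droplet volume in nanoliters."""
    neg_fraction = (n_total - n_positive) / n_total
    lam = -math.log(neg_fraction)
    return lam / (droplet_volume_nl * 1e-3)  # copies per microliter

c = ddpcr_concentration(n_positive=4000, n_total=20000)
print(f"~{c:.0f} copies/uL")
```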

  19. Rare events in stochastic populations under bursty reproduction

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Assaf, Michael

    2016-11-01

    Recently, a first step was made by the authors towards a systematic investigation of the effect of reaction-step-size noise—uncertainty in the step size of the reaction—on the dynamics of stochastic populations. This was done by investigating the effect of bursty influx on the switching dynamics of stochastic populations. Here we extend this formalism to account for bursty reproduction processes, and improve the accuracy of the formalism to include subleading-order corrections. Bursty reproduction appears in various contexts, where notable examples include bursty viral production from infected cells, and reproduction of mammals involving varying number of offspring. The main question we quantitatively address is how bursty reproduction affects the overall fate of the population. We consider two complementary scenarios: population extinction and population survival; in the former a population gets extinct after maintaining a long-lived metastable state, whereas in the latter a population proliferates despite undergoing a deterministic drift towards extinction. In both models reproduction occurs in bursts, sampled from an arbitrary distribution. Using the WKB approach, we show in the extinction problem that bursty reproduction broadens the quasi-stationary distribution of population sizes in the metastable state, which results in a drastic reduction of the mean time to extinction compared to the non-bursty case. In the survival problem, it is shown that bursty reproduction drastically increases the survival probability of the population. Close to the bifurcation limit our analytical results simplify considerably and are shown to depend solely on the mean and variance of the burst-size distribution. Our formalism is demonstrated on several realistic distributions which all compare well with numerical Monte-Carlo simulations.

  20. In Vitro and In Vivo Single Myosin Step-Sizes in Striated Muscle

    PubMed Central

    Burghardt, Thomas P.; Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin

    2016-01-01

    Myosin in muscle transduces ATP free energy into the mechanical work of moving actin. It has a motor domain transducer containing the ATP and actin binding sites, and mechanical elements coupling the motor impulse to the myosin filament backbone, providing transduction/mechanical-coupling. The mechanical coupler is a lever-arm stabilized by the bound essential and regulatory light chains. The lever-arm rotates cyclically to impel bound filamentous actin. The linear actin displacement due to lever-arm rotation is the myosin step-size. A high-throughput quantum dot labeled actin in vitro motility assay (Qdot assay) measures motor step-size in the context of an ensemble of actomyosin interactions. The ensemble context imposes a constant velocity constraint for myosins interacting with one actin filament. In a cardiac myosin producing multiple step-sizes, a “second characterization” is step-frequency, which adjusts a longer step-size to a lower frequency, maintaining a linear actin velocity identical to that from a shorter step-size and higher frequency actomyosin cycle. The step-frequency characteristic involves and integrates myosin enzyme kinetics, mechanical strain, and other ensemble affected characteristics. The high-throughput Qdot assay suits a new paradigm calling for wide surveillance of the vast number of disease or aging relevant myosin isoforms, in contrast with the alternative model calling for exhaustive research on a tiny subset of myosin forms. The zebrafish embryo assay (Z assay) performs single myosin step-size and step-frequency assaying in vivo, combining single myosin mechanical and whole muscle physiological characterizations in one model organism. The Qdot and Z assays cover “bottom-up” and “top-down” assaying of myosin characteristics. PMID:26728749

  1. Glass frit nebulizer for atomic spectrometry

    USGS Publications Warehouse

    Layman, L.R.

    1982-01-01

    The nebulization of sample solutions is a critical step in most flame or plasma atomic spectrometric methods. A novel nebulization technique, based on a porous glass frit, has been investigated. Basic operating parameters and characteristics have been studied to determine how this new nebulizer may be applied to atomic spectrometric methods. The results of preliminary comparisons with pneumatic nebulizers indicate several notable differences. The frit nebulizer produces a smaller droplet size distribution and has a higher sample transport efficiency. The mean droplet size is approximately 0.1 µm, and up to 94% of the sample is converted to usable aerosol. The most significant limitations in the performance of the frit nebulizer are the slow sample equilibration time and the requirement for wash cycles between samples. Loss of solute by surface adsorption and contamination of samples by leaching from the glass were both found to be limitations only in unusual cases. This nebulizer shows great promise where sample volume is limited or where measurements require long nebulization times.

  2. Role of step size and max dwell time in anatomy based inverse optimization for prostate implants

    PubMed Central

    Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha

    2013-01-01

    In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
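
The inverse optimization being tuned here can be caricatured as constrained least squares: choose nonnegative, capped dwell times so that the computed dose matches the prescription at a set of points. The projected-gradient sketch below is entirely our construction, with made-up dose-kernel values; clinical planning systems optimize richer anatomy-based objectives:

```python
def optimize_dwell_times(A, d, t_max, n_iter=5000, lr=0.01):
    """Toy projected-gradient solve of min ||A t - d||^2 subject to
    0 <= t <= t_max, where A[i][j] is the (hypothetical) dose at point i
    per unit dwell time at position j, and d is the prescribed dose."""
    n = len(A[0])
    t = [0.0] * n
    for _ in range(n_iter):
        # residual r = A t - d
        r = [sum(Aij * tj for Aij, tj in zip(row, t)) - di
             for row, di in zip(A, d)]
        # gradient g = A^T r, then a projected gradient step
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        t = [min(t_max, max(0.0, tj - lr * gj)) for tj, gj in zip(t, g)]
    return t

# Two dose points, three candidate dwell positions (made-up kernel values)
A = [[1.0, 0.5, 0.2],
     [0.2, 0.5, 1.0]]
d = [1.0, 1.0]
t = optimize_dwell_times(A, d, t_max=10.0)
dose = [sum(Aij * tj for Aij, tj in zip(row, t)) for row in A]
print(dose)  # close to the prescription [1.0, 1.0]
```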

  3. Fully Flexible Docking of Medium Sized Ligand Libraries with RosettaLigand

    PubMed Central

    DeLuca, Samuel; Khar, Karen; Meiler, Jens

    2015-01-01

    RosettaLigand has been successfully used to predict binding poses in protein-small molecule complexes. However, the RosettaLigand docking protocol is comparatively slow in identifying an initial starting pose for the small molecule (ligand) making it unfeasible for use in virtual High Throughput Screening (vHTS). To overcome this limitation, we developed a new sampling approach for placing the ligand in the protein binding site during the initial ‘low-resolution’ docking step. It combines the translational and rotational adjustments to the ligand pose in a single transformation step. The new algorithm is both more accurate and more time-efficient. The docking success rate is improved by 10–15% in a benchmark set of 43 protein/ligand complexes, reducing the number of models that typically need to be generated from 1000 to 150. The average time to generate a model is reduced from 50 seconds to 10 seconds. As a result we observe an effective 30-fold speed increase, making RosettaLigand appropriate for docking medium sized ligand libraries. We demonstrate that this improved initial placement of the ligand is critical for successful prediction of an accurate binding position in the ‘high-resolution’ full atom refinement step. PMID:26207742
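
The "single transformation step" idea, a rotation and a translation applied together as one rigid-body move, can be sketched as follows. The move parameters and implementation are ours, not RosettaLigand's:

```python
import math, random

def random_rigid_transform(coords, max_trans=1.0, max_angle=math.radians(10)):
    """One combined rigid-body move: a random rotation about the ligand
    centroid (Rodrigues formula) plus a random translation. Amplitudes
    are illustrative."""
    n = len(coords)
    cx = [sum(p[i] for p in coords) / n for i in range(3)]
    ax = [random.gauss(0.0, 1.0) for _ in range(3)]        # random axis
    norm = math.sqrt(sum(a * a for a in ax))
    ax = [a / norm for a in ax]
    th = random.uniform(-max_angle, max_angle)             # random angle
    c, s = math.cos(th), math.sin(th)
    t = [random.uniform(-max_trans, max_trans) for _ in range(3)]
    out = []
    for p in coords:
        v = [p[i] - cx[i] for i in range(3)]
        dot = sum(a * vi for a, vi in zip(ax, v))
        cross = [ax[1] * v[2] - ax[2] * v[1],
                 ax[2] * v[0] - ax[0] * v[2],
                 ax[0] * v[1] - ax[1] * v[0]]
        r = [v[i] * c + cross[i] * s + ax[i] * dot * (1.0 - c)
             for i in range(3)]
        out.append([r[i] + cx[i] + t[i] for i in range(3)])
    return out

random.seed(0)
lig = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]]
moved = random_rigid_transform(lig)  # internal geometry is preserved
```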

  4. Maximizing PTH Anabolic Osteoporosis Therapy

    DTIC Science & Technology

    2015-09-01

    …normalization or endogenous controls and calculates fold changes with P values. Gene expression data were normalized to five endogenous controls (18S…); adapters were ligated and the sample was size-fractionated (200-300 bp) on an agarose gel. After a final PCR amplification step (18 cycles), the…

  5. A diffusive information preservation method for small Knudsen number flows

    NASA Astrophysics Data System (ADS)

    Fei, Fei; Fan, Jing

    2013-06-01

    The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature through sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ˜ 10-3-10-4 have been investigated. It is shown that the IP calculations are not only accurate but also efficient, because they make it possible to use a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.

  6. Ultrafast learning in a hard-limited neural network pattern recognizer

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Lun J.

    1996-03-01

    As we have reported over the last five years, supervised learning in a hard-limited perceptron system can be accomplished in a noniterative manner if the input-output mapping to be learned satisfies a certain positive-linear-independency (or PLI) condition. When this condition is satisfied (and it should be satisfied for most practical pattern recognition applications), the connection matrix required to realize this mapping can be obtained noniteratively in one step. Generally, there exist infinitely many solutions for the connection matrix when the PLI condition is satisfied. We can then select an optimum solution such that the recognition of any untrained patterns becomes optimally robust in the recognition mode. Because the learning process is noniterative and one-step, the learning speed is very fast and close to real-time. This paper reports the theoretical analysis and the design of a practical character recognition system for recognizing hand-written alphabets. The experimental result is recorded in real-time on an unedited video tape for demonstration purposes. It is seen from this real-time movie that the recognition of the untrained hand-written alphabets is invariant to size, location, orientation, and writing sequence, even though the training is done with standard size, standard orientation, central location, and standard writing sequence.

  7. Multi-off-grid methods in multi-step integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Beaudet, P. R.

    1974-01-01

    Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.
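    The step-size/error trade-off for on-grid multi-step integrators can be seen with a minimal two-step Adams-Bashforth scheme (a generic textbook sketch, not the paper's multi-off-grid method; the test problem y' = -y is assumed for illustration):

```python
import math

def adams_bashforth2(f, y0, t0, t1, h):
    # Two-step Adams-Bashforth: y_{n+1} = y_n + h*(3/2*f_n - 1/2*f_{n-1}),
    # bootstrapped with one forward-Euler step.
    t, y = t0, y0
    f_prev = f(t, y)
    y += h * f_prev
    t += h
    while t < t1 - 1e-12:
        f_curr = f(t, y)
        y += h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
        t += h
    return y

f = lambda t, y: -y            # test problem y' = -y, exact solution e^{-t}
exact = math.exp(-1.0)
err_big = abs(adams_bashforth2(f, 1.0, 0.0, 1.0, 0.1) - exact)
err_small = abs(adams_bashforth2(f, 1.0, 0.0, 1.0, 0.01) - exact)
print(err_big, err_small)      # second order: error shrinks ~100x for 10x smaller h
```

    Higher-order off-grid derivative evaluations aim to keep the error small even at the larger step sizes where such low-order on-grid schemes degrade.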

  8. Does acid-base equilibrium correlate with remnant liver volume during stepwise liver resection?

    PubMed

    Golriz, Mohammad; Abbasi, Sepehr; Fathi, Parham; Majlesara, Ali; Brenner, Thorsten; Mehrabi, Arianeb

    2017-10-01

    Small for size and flow syndrome (SFSF) is one of the most challenging complications following extended hepatectomy (EH). After EH, hepatic artery flow decreases and portal vein flow increases per 100 g of remnant liver volume (RLV). This causes hypoxia followed by metabolic acidosis. A correlation between acidosis and posthepatectomy liver failure has been postulated but not studied systematically in a large animal model or clinical setting. In our study, we performed stepwise liver resections on nine pigs to define SFSF limits as follows: step 1: segment II/III resection, step 2: segment IV resection, step 3: segment V/VIII resection (RLV: 75, 50, and 25%, respectively). Blood gas values were measured before and after each step using four catheters inserted into the carotid artery, internal jugular vein, hepatic artery, and portal vein. The pH, [Formula: see text], and base excess (BE) decreased, but [Formula: see text] values increased after 75% resection in the portal and jugular veins. EH correlated with reduced BE in the hepatic artery. PCO2 values increased after 75% resection in the jugular vein. In contrast, arterial PO2 increased after every resection, whereas the venous PO2 decreased slightly. There were differences in venous [Formula: see text], BE in the hepatic artery, and PCO2 in the jugular vein after 75% liver resection. Because 75% resection is the limit for SFSF, these noninvasive blood evaluations may be used to predict SFSF. Further studies with long-term follow-up are required to validate this correlation. NEW & NOTEWORTHY This is the first study to evaluate acid-base parameters in major central and hepatic vessels during stepwise liver resection. The pH, [Formula: see text], and base excess (BE) decreased, but [Formula: see text] values increased after 75% resection in the portal and jugular veins. Extended hepatectomy correlated with reduced BE in the hepatic artery.
Because 75% resection is the limit for small for size and flow syndrome (SFSF), postresection blood gas evaluations may be used to predict SFSF. Copyright © 2017 the American Physiological Society.

  9. USE OF THE SDO POINTING CONTROLLERS FOR INSTRUMENT CALIBRATION MANEUVERS

    NASA Technical Reports Server (NTRS)

    Vess, Melissa F.; Starin, Scott R.; Morgenstern, Wendy M.

    2005-01-01

    During the science phase of the Solar Dynamics Observatory mission, the three science instruments require periodic instrument calibration maneuvers with a frequency of up to once per month. The command sequences for these maneuvers vary in length from a handful of steps to over 200 steps, and individual steps vary in size from 5 arcsec per step to 22.5 degrees per step. Early in the calibration maneuver development, it was determined that the original attitude sensor complement could not meet the knowledge requirements for the instrument calibration maneuvers in the event of a sensor failure. Because the mission must be single fault tolerant, an attitude determination trade study was undertaken to determine the impact of adding an additional attitude sensor versus developing alternative, potentially complex, methods of performing the maneuvers in the event of a sensor failure. To limit the impact to the science data capture budget, these instrument calibration maneuvers must be performed as quickly as possible while maintaining the tight pointing and knowledge required to obtain valid data during the calibration. To this end, the decision was made to adapt a linear pointing controller by adjusting gains and adding an attitude limiter so that it would be able to slew quickly and still achieve steady pointing once on target. During the analysis of this controller, questions arose about the stability of the controller during slewing maneuvers due to the combination of the integral gain, attitude limit, and actuator saturation. Analysis was performed and a method for disabling the integral action while slewing was incorporated to ensure stability. A high fidelity simulation is used to simulate the various instrument calibration maneuvers.

  10. An Ai Chi-based aquatic group improves balance and reduces falls in community-dwelling adults: A pilot observational cohort study.

    PubMed

    Skinner, Elizabeth H; Dinh, Tammy; Hewitt, Melissa; Piper, Ross; Thwaites, Claire

    2016-11-01

    Falls are associated with morbidity, loss of independence, and mortality. While land-based group exercise and Tai Chi programs reduce the risk of falls, aquatic therapy may allow patients to complete balance exercises with less pain and fear of falling; however, limited data exist. The objective of the study was to pilot the implementation of an aquatic group based on Ai Chi principles (Aquabalance) and to evaluate the safety, intervention acceptability, and intervention effect sizes. Pilot observational cohort study. Forty-two outpatients underwent a single 45-minute weekly group aquatic Ai Chi-based session for eight weeks (Aquabalance). Safety was monitored using organizational reporting systems. Patient attendance, satisfaction, and self-reported falls were also recorded. Balance measures included the Timed Up and Go (TUG) test, the Four Square Step Test (FSST), and the unilateral Step Tests. Forty-two patients completed the program. It was feasible to deliver Aquabalance, as evidenced by the median (IQR) attendance rate of 8.0 (7.8, 8.0) out of 8. No adverse events occurred and participants reported high satisfaction levels. Improvements were noted on the TUG, 10-meter walk test, the Functional Reach Test, the FSST, and the unilateral step tests (p < 0.05). The proportion of patients defined as high falls risk reduced from 38% to 21%. The study was limited by its small sample size, single-center nature, and the absence of a control group. Aquabalance was safe, well-attended, and acceptable to participants. A randomized controlled assessor-blinded trial is required.

  11. Auxotonic to isometric contraction transitioning in a beating heart causes myosin step-size to down shift

    PubMed Central

    Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin

    2017-01-01

    Myosin motors in cardiac ventriculum convert ATP free energy to the work of moving blood volume under pressure. The actin bound motor cyclically rotates its lever-arm/light-chain complex linking motor generated torque to the myosin filament backbone and translating actin against resisting force. Previous research showed that the unloaded in vitro motor is described with high precision by single molecule mechanical characteristics including unitary step-sizes of approximately 3, 5, and 8 nm and their relative step-frequencies of approximately 13, 50, and 37%. The 3 and 8 nm unitary step-sizes are dependent on myosin essential light chain (ELC) N-terminus actin binding. Step-size and step-frequency quantitation specifies in vitro motor function including duty-ratio, power, and strain sensitivity metrics. In vivo, motors integrated into the muscle sarcomere form the more complex and hierarchically functioning muscle machine. The goal of the research reported here is to measure single myosin step-size and step-frequency in vivo to assess how tissue integration impacts motor function. A photoactivatable GFP tags the ventriculum myosin lever-arm/light-chain complex in the beating heart of a live zebrafish embryo. Detected single GFP emission reports time-resolved myosin lever-arm orientation interpreted as step-size and step-frequency providing single myosin mechanical characteristics over the active cycle. Following step-frequency of cardiac ventriculum myosin transitioning from low to high force in relaxed to auxotonic to isometric contraction phases indicates that the imposition of resisting force during contraction causes the motor to down-shift to the 3 nm step-size accounting for >80% of all the steps in the near-isometric phase. At peak force, the ATP initiated actomyosin dissociation is the predominant strain inhibited transition in the native myosin contraction cycle. 
The proposed model for motor down-shifting and strain sensing involves ELC N-terminus actin binding. Overall, the approach is a unique bottom-up single molecule mechanical characterization of a hierarchically functional native muscle myosin. PMID:28423017

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca

    Classical integration methods for molecular dynamics are inherently limited by resonance phenomena that occur at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach, based on corotation, for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and its ease of implementation in standard software without Hessians or constraint solves. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice-ice friction, the new filter is shown to speed up the computation of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.
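    The step-size instability that motivates such filters can be illustrated on the simplest possible model, a harmonic oscillator integrated with velocity Verlet (a toy sketch, not the paper's corotational filter): the scheme is stable only for omega*dt < 2, and the energy explodes beyond that threshold.

```python
def leapfrog_energy_drift(omega, dt, steps):
    # Velocity Verlet on a unit-mass harmonic oscillator x'' = -omega^2 * x.
    x, v = 1.0, 0.0
    e0 = 0.5 * v * v + 0.5 * omega**2 * x * x  # initial total energy
    for _ in range(steps):
        v += 0.5 * dt * (-omega**2 * x)   # half kick
        x += dt * v                       # drift
        v += 0.5 * dt * (-omega**2 * x)   # half kick
    e = 0.5 * v * v + 0.5 * omega**2 * x * x
    return abs(e - e0) / e0

drift_stable = leapfrog_energy_drift(1.0, 0.5, 200)    # omega*dt = 0.5: bounded
drift_unstable = leapfrog_energy_drift(1.0, 2.1, 200)  # omega*dt = 2.1 > 2: blows up
print(drift_stable, drift_unstable)
```

    In biomolecular force fields the fastest bond vibrations play the role of this stiff oscillator, which is why unfiltered integrators cannot reach 10 fs steps.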

  13. Are randomly grown graphs really random?

    PubMed

    Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H

    2001-10-01

    We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph: older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
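    The growth model is simple enough to reproduce directly. The sketch below (with t, delta, and seed values assumed for illustration) implements it with union-find and reports the largest-component fraction, which stays negligible below delta_c = 1/8 and becomes extensive well above it:

```python
import random

def largest_component_fraction(t, delta, seed=1):
    # Each time step adds a vertex; with probability delta, two vertices chosen
    # uniformly at random are joined by an edge. Union-find tracks components.
    rng = random.Random(seed)
    parent, size = list(range(t)), [1] * t

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for n in range(1, t):                  # vertices 0..n exist after step n
        if rng.random() < delta:
            a, b = find(rng.randint(0, n)), find(rng.randint(0, n))
            if a != b:
                if size[a] < size[b]:      # union by size
                    a, b = b, a
                parent[b] = a
                size[a] += size[b]
    return max(size[find(i)] for i in range(t)) / t

frac_sub = largest_component_fraction(20000, 0.05)    # below delta_c = 1/8
frac_super = largest_component_fraction(20000, 0.90)  # well above delta_c
print(frac_sub, frac_super)
```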

  14. Graphite grain-size spectrum and molecules from core-collapse supernovae

    NASA Astrophysics Data System (ADS)

    Clayton, Donald D.; Meyer, Bradley S.

    2018-01-01

    Our goal is to compute the abundances of carbon atomic complexes that emerge from the C + O cores of core-collapse supernovae. We utilize our chemical reaction network in which every atomic step of growth employs a quantum-mechanically guided reaction rate. This tool follows step-by-step the growth of linear carbon chain molecules from C atoms in the oxygen-rich C + O cores. We postulate that once linear chain molecules reach a sufficiently large size, they isomerize to ringed molecules, which serve as seeds for graphite grain growth. We demonstrate our technique for merging the molecular reaction network with a parallel program that can follow 10^17 steps of C addition onto the rare seed species. Due to radioactivity within the C + O core, abundant ambient oxygen is unable to convert C to CO, except to a limited degree that actually facilitates carbon molecular ejecta. But oxygen severely minimizes the linear-carbon-chain abundances. Despite the tiny abundances of these linear-carbon-chain molecules, they can give rise to a small abundance of ringed-carbon molecules that serve as the nucleations on which graphite grain growth builds. We expand the C + O-core gas adiabatically from 6000 K for 10^9 s, by which time reactions have essentially stopped. These adiabatic tracks emulate the actual expansions of the supernova cores. Using a standard model of 10^56 atoms of C + O core ejecta having O/C = 3, we calculate standard ejection yields of graphite grains of all sizes produced, of the CO molecular abundance, of the abundances of linear-carbon molecules, and of Buckminsterfullerene. None of these except CO was expected from the C + O cores just a few years past.

  15. Model calibration and validation for OFMSW and sewage sludge co-digestion reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esposito, G., E-mail: giovanni.esposito@unicas.it; Frunzo, L., E-mail: luigi.frunzo@unina.it; Panico, A., E-mail: anpanico@unina.it

    2011-12-15

    Highlights: > Disintegration is the limiting step of the anaerobic co-digestion process. > Disintegration kinetic constant does not depend on the waste particle size. > Disintegration kinetic constant depends only on the waste nature and composition. > The model calibration can be performed on organic waste of any particle size. - Abstract: A mathematical model has recently been proposed by the authors to simulate the biochemical processes that prevail in a co-digestion reactor fed with sewage sludge and the organic fraction of municipal solid waste. This model is based on the Anaerobic Digestion Model no. 1 of the International Water Association, which has been extended to include the co-digestion processes, using surface-based kinetics to model the organic waste disintegration and conversion to carbohydrates, proteins and lipids. When organic waste solids are present in the reactor influent, the disintegration process is the rate-limiting step of the overall co-digestion process. The main advantage of the proposed modeling approach is that the kinetic constant of such a process does not depend on the waste particle size distribution (PSD) and rather depends only on the nature and composition of the waste particles. The model calibration aimed to assess the kinetic constant of the disintegration process can therefore be conducted using organic waste samples of any PSD, and the resulting value will be suitable for all the organic wastes of the same nature as the investigated samples, independently of their PSD. This assumption was proven in this study by biomethane potential experiments that were conducted on organic waste samples with different particle sizes. The results of these experiments were used to calibrate and validate the mathematical model, resulting in a good agreement between the simulated and observed data for any investigated particle size of the solid waste. 
This study confirms the strength of the proposed model and calibration procedure, which can thus be used to assess the treatment efficiency and predict the methane production of full-scale digesters.

  16. A review of hybrid implicit explicit finite difference time domain method

    NASA Astrophysics Data System (ADS)

    Chen, Juan

    2018-06-01

    The finite-difference time-domain (FDTD) method has been used extensively to simulate a wide variety of electromagnetic interaction problems. However, because of its Courant-Friedrichs-Lewy (CFL) condition, the maximum time step size of this method is limited by the minimum cell size used in the computational domain, so the FDTD method is inefficient for simulating electromagnetic problems with very fine structures. To deal with this problem, the Hybrid Implicit Explicit (HIE)-FDTD method was developed. The HIE-FDTD method uses a hybrid implicit explicit difference in the direction with fine structures to avoid the confinement of the fine spatial mesh on the time step size. This method therefore has much higher computational efficiency than the FDTD method, and is extremely useful for problems with fine structures in one direction. In this paper, the basic formulations, time stability condition and dispersion error of the HIE-FDTD method are presented. The implementations of several boundary conditions, including the connect boundary, absorbing boundary and periodic boundary, are described; then some applications and important developments of this method are provided. The goal of this paper is to provide a historical overview and future prospects of the HIE-FDTD method.
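    The CFL restriction is easy to quantify. The sketch below (a textbook bound with mesh sizes assumed for illustration; the HIE formula is the commonly quoted weakly conditional limit, not taken from this paper) shows how a single fine mesh direction collapses the explicit FDTD time step, and how removing that direction from the constraint restores it:

```python
import math

c = 299792458.0  # speed of light in vacuum, m/s

def fdtd_cfl_dt(dx, dy, dz):
    # Explicit 3-D FDTD stability limit (CFL condition).
    return 1.0 / (c * math.sqrt(1.0/dx**2 + 1.0/dy**2 + 1.0/dz**2))

def hie_fdtd_dt(dx, dy):
    # HIE-FDTD: implicit treatment along z removes dz from the bound.
    return 1.0 / (c * math.sqrt(1.0/dx**2 + 1.0/dy**2))

dt_coarse = fdtd_cfl_dt(1e-3, 1e-3, 1e-3)  # uniform 1 mm mesh
dt_fine = fdtd_cfl_dt(1e-3, 1e-3, 1e-6)    # 1 um cells along z only
dt_hie = hie_fdtd_dt(1e-3, 1e-3)           # fine z direction handled implicitly
print(dt_coarse, dt_fine, dt_hie)
```

    With a 1 um mesh in just one direction, the explicit step shrinks by more than two orders of magnitude, while the hybrid scheme keeps a step set by the coarse directions alone.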

  17. Bio-Inspired Aggregation Control of Carbon Nanotubes for Ultra-Strong Composites

    PubMed Central

    Han, Yue; Zhang, Xiaohua; Yu, Xueping; Zhao, Jingna; Li, Shan; Liu, Feng; Gao, Peng; Zhang, Yongyi; Zhao, Tong; Li, Qingwen

    2015-01-01

    High-performance nanocomposites require good dispersion and high alignment of the nanometer-sized components, at a high mass or volume fraction as well. However, the road towards such a composite structure is severely hindered by the easy aggregation of these nanometer-sized components. Here we demonstrate a significant step towards the ideal composite structure for carbon nanotubes (CNTs), in which all the CNTs were highly packed, aligned, and unaggregated, with the impregnated polymers acting as interfacial adhesives and mortars to build up the composite structure. The strategy was based on a bio-inspired aggregation control that limits CNT aggregation to below 20–50 nm, a dimension determined by the CNT growth. After being stretched with full structural relaxation in a multi-step way, the CNT/polymer (bismaleimide) composite yielded super-high tensile strengths of 6.27–6.94 GPa, more than 100% higher than those of carbon fiber/epoxy composites, and toughnesses of 117–192 MPa. We anticipate that the present study can be generalized for developing multifunctional and smart nanocomposites in which all the surfaces of the nanometer-sized components can take part in the shear transfer of mechanical, thermal, and electrical signals. PMID:26098627

  18. Two-Step Amyloid Aggregation: Sequential Lag Phase Intermediates

    NASA Astrophysics Data System (ADS)

    Castello, Fabio; Paredes, Jose M.; Ruedas-Rama, Maria J.; Martin, Miguel; Roldan, Mar; Casares, Salvador; Orte, Angel

    2017-01-01

    The self-assembly of proteins into fibrillar structures called amyloid fibrils underlies the onset and symptoms of neurodegenerative diseases, such as Alzheimer’s and Parkinson’s. However, the molecular basis and mechanism of amyloid aggregation are not completely understood. For many amyloidogenic proteins, certain oligomeric intermediates that form in the early aggregation phase appear to be the principal cause of cellular toxicity. Recent computational studies have suggested the importance of nonspecific interactions for the initiation of the oligomerization process prior to the structural conversion steps and template seeding, particularly at low protein concentrations. Here, using advanced single-molecule fluorescence spectroscopy and imaging of a model SH3 domain, we obtained direct evidence that nonspecific aggregates are required in a two-step nucleation mechanism of amyloid aggregation. We identified three different oligomeric types according to their sizes and compactness and performed a full mechanistic study that revealed a mandatory rate-limiting conformational conversion step. We also identified the most cytotoxic species, which may be possible targets for inhibiting and preventing amyloid aggregation.

  19. Characterizing the roles of changing population size and selection on the evolution of flux control in metabolic pathways.

    PubMed

    Orlenko, Alena; Chi, Peter B; Liberles, David A

    2017-05-25

    Understanding the genotype-phenotype map is fundamental to our understanding of genomes. Genes do not function independently, but rather as part of networks or pathways. In the case of metabolic pathways, flux through the pathway is an important next layer of biological organization up from the individual gene or protein. Flux control in metabolic pathways, reflecting the importance of mutation to individual enzyme genes, may be evolutionarily variable due to the role of mutation-selection-drift balance. The evolutionary stability of rate limiting steps and the patterns of inter-molecular co-evolution were evaluated in a simulated pathway with a system out of equilibrium due to fluctuating selection, population size, or positive directional selection, to contrast with those under stabilizing selection. Depending upon the underlying population genetic regime, fluctuating population size was found to increase the evolutionary stability of rate limiting steps in some scenarios. This result was linked to patterns of local adaptation of the population. Further, during positive directional selection, as with more complex mutational scenarios, an increase in the observation of inter-molecular co-evolution was observed. Differences in patterns of evolution when systems are in and out of equilibrium, including during positive directional selection may lead to predictable differences in observed patterns for divergent evolutionary scenarios. In particular, this result might be harnessed to detect differences between compensatory processes and directional processes at the pathway level based upon evolutionary observations in individual proteins. Detecting functional shifts in pathways reflects an important milestone in predicting when changes in genotypes result in changes in phenotypes.

  20. X-ray simulations method for the large field of view

    NASA Astrophysics Data System (ADS)

    Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.

    2018-03-01

    In the standard approach, X-ray simulation is usually limited by the spatial sampling step used to calculate Fresnel-type convolution integrals. Explicitly, the sampling step is determined by the size of the last Fresnel zone in the beam aperture. In other words, the spatial sampling is determined by the precision of the convolution calculations and is not connected with the spatial resolution of the optical scheme. In the developed approach, the convolution in normal space is replaced by computation of the shear strain of the ambiguity function in phase space. The spatial sampling is then determined by the spatial resolution of the optical scheme. The sampling step can differ in various directions because of source anisotropy. The approach was used to simulate original images in X-ray Talbot interferometry and showed that the simulation can be applied to optimize postprocessing methods.
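    The claim that the sampling step is set by the last Fresnel zone can be made concrete with a back-of-envelope estimate (all parameter values below are assumed for illustration; the zone radius formula r_n = sqrt(n*lambda*z) is the standard one):

```python
import math

# Assumed illustrative parameters: 0.1 nm X-rays, 1 m propagation, 1 mm half-aperture.
wavelength, z, aperture = 1e-10, 1.0, 1e-3

n_zones = aperture**2 / (wavelength * z)          # Fresnel zones across the aperture
radius = lambda n: math.sqrt(n * wavelength * z)  # outer radius of zone n
last_zone_width = radius(n_zones) - radius(n_zones - 1)
samples_per_axis = aperture / last_zone_width     # sampling set by the last zone
print(f"{n_zones:.0f} zones, last zone ~{last_zone_width:.1e} m, "
      f"~{samples_per_axis:.0f} samples per axis")
```

    The narrowest zone, tens of nanometers across a millimeter aperture, forces tens of thousands of samples per axis in the direct-convolution approach, which is the cost the phase-space formulation avoids.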

  1. Methodological aspects of an adaptive multidirectional pattern search to optimize speech perception using three hearing-aid algorithms

    NASA Astrophysics Data System (ADS)

    Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes

    2004-12-01

    In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal-hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.
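    A minimal compass-style pattern search with an adaptive step size, in the spirit described above, can be sketched as follows (the quadratic score is a hypothetical stand-in for the paired-comparison listening-comfort judgments; all names and values are illustrative):

```python
def pattern_search(score, x0, step, min_step, max_polls=1000):
    # Multidirectional (compass) search with adaptive step size: poll +/- along
    # each coordinate; enlarge the step after an improving sweep (clearer
    # perceptual differences), shrink it after a failed one (finer accuracy).
    x, best, polls = list(x0), score(x0), 0
    while step > min_step and polls < max_polls:
        polls += 1
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                s = score(trial)
                if s < best:
                    x, best, improved = trial, s, True
        step = min(2.0 * step, 1.0) if improved else 0.5 * step
    return x, best

# Hypothetical stand-in for the listening-comfort score over three
# hearing-aid parameters, with optimum at (1, -2, 0.5).
score = lambda p: (p[0] - 1)**2 + (p[1] + 2)**2 + (p[2] - 0.5)**2
x, best = pattern_search(score, [0.0, 0.0, 0.0], step=1.0, min_step=1e-4)
print(x, best)  # converges to [1.0, -2.0, 0.5] with best = 0.0
```

    Capping the step growth and bounding the number of polls mirrors the stop criterion and step size constraints that the study identifies as reliability factors.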

  2. A Conformational Transition in the Myosin VI Converter Contributes to the Variable Step Size

    PubMed Central

    Ovchinnikov, V.; Cecchini, M.; Vanden-Eijnden, E.; Karplus, M.

    2011-01-01

    Myosin VI (MVI) is a dimeric molecular motor that translocates backwards on actin filaments with a surprisingly large and variable step size, given its short lever arm. A recent x-ray structure of MVI indicates that the large step size can be explained in part by a novel conformation of the converter subdomain in the prepowerstroke state, in which a 53-residue insert, unique to MVI, reorients the lever arm nearly parallel to the actin filament. To determine whether the existence of the novel converter conformation could contribute to the step-size variability, we used a path-based free-energy simulation tool, the string method, to show that there is a small free-energy difference between the novel converter conformation and the conventional conformation found in other myosins. This result suggests that MVI can bind to actin with the converter in either conformation. Models of MVI/MV chimeric dimers show that the variability in the tilting angle of the lever arm that results from the two converter conformations can lead to step-size variations of ∼12 nm. These variations, in combination with other proposed mechanisms, could explain the experimentally determined step-size variability of ∼25 nm for wild-type MVI. Mutations to test the findings by experiment are suggested. PMID:22098742

  3. Effects of homogenization treatment on recrystallization behavior of 7150 aluminum sheet during post-rolling annealing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Zhanying; Department of Applied Science, University of Québec at Chicoutimi, Saguenay, QC G7H 2B1; Zhao, Gang

    2016-04-15

    The effects of two homogenization treatments applied to the direct chill (DC) cast billet on the recrystallization behavior in 7150 aluminum alloy during post-rolling annealing have been investigated using the electron backscatter diffraction (EBSD) technique. Following hot and cold rolling to the sheet, measured orientation maps, the recrystallization fraction and grain size, the misorientation angle and the subgrain size were used to characterize the recovery and recrystallization processes at different annealing temperatures. The results were compared between the conventional one-step homogenization and the new two-step homogenization, with the first step being pretreated at 250 °C. Al{sub 3}Zr dispersoids with higher densities and smaller sizes were obtained after the two-step homogenization, which strongly retarded subgrain/grain boundary mobility and inhibited recrystallization. Compared with the conventional one-step homogenized samples, a significantly lower recrystallized fraction and a smaller recrystallized grain size were obtained under all annealing conditions after cold rolling in the two-step homogenized samples. - Highlights: • Effects of two homogenization treatments on recrystallization in 7150 Al sheets • Quantitative study on the recrystallization evolution during post-rolling annealing • Al{sub 3}Zr dispersoids with higher densities and smaller sizes after two-step treatment • Higher recrystallization resistance of 7150 sheets with two-step homogenization.

  4. Influence of the size reduction of organic waste on their anaerobic digestion.

    PubMed

    Palmowski, L M; Müller, J A

    2000-01-01

    The rate-limiting step in the anaerobic digestion of organic solid waste is generally their hydrolysis. A size reduction of the particles and the resulting enlargement of the available specific surface can support the biological process in two ways. Firstly, for substrates with a high content of fibres and low degradability, comminution leads to improved digester gas production. This results in a decreased amount of residues to be disposed of and an increased quantity of useful digester gas. The second effect of particle size reduction, observed with all substrates but particularly with those of low degradability, is a reduction of the technical digestion time. Furthermore, the particle size of organic waste has an influence on the dewaterability after co-digestion with sewage sludge. The presence of organic waste residues improves the dewaterability, measured as specific resistance to filtration, but this positive effect is attenuated if the particle size of the solids is reduced.

  5. WAKES: Wavelet Adaptive Kinetic Evolution Solvers

    NASA Astrophysics Data System (ADS)

    Mardirian, Marine; Afeyan, Bedros; Larson, David

    2016-10-01

    We are developing a general capability to adaptively solve phase space evolution equations, mixing particle and continuum techniques in an adaptive manner. The multi-scale approach is achieved using wavelet decompositions, which allow phase space density estimation to occur with scale-dependent increased accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step-size control in explicit vs. implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair plasma generalization of the former and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have no fluid limit nor linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a Grant from the AFOSR.
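
    The variable time stepping mentioned above is, in generic practice, driven by a local-error controller. A minimal sketch of such a controller (this is the textbook error-based step-size rule, not the WAKES scheme itself; all names and parameter values are illustrative):

```python
def adapt_step(err, dt, tol=1e-6, order=2, safety=0.9, clip=(0.2, 5.0)):
    """Classic local-error step-size controller: scale dt by
    (tol/err)^(1/(order+1)), clipped to [clip[0], clip[1]] and
    multiplied by a safety factor."""
    if err == 0:
        return dt * clip[1]
    factor = safety * (tol / err) ** (1.0 / (order + 1))
    return dt * min(max(factor, clip[0]), clip[1])
```

    When the estimated local error exceeds the tolerance, the factor drops below one and the step shrinks; when the error is far below tolerance, the step grows, but never by more than the clip bound.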

  6. An atomistic simulation scheme for modeling crystal formation from solution.

    PubMed

    Kawska, Agnieszka; Brickmann, Jürgen; Kniep, Rüdiger; Hochrein, Oliver; Zahn, Dirk

    2006-01-14

    We present an atomistic simulation scheme for investigating crystal growth from solution. Molecular-dynamics simulation studies of such processes typically suffer from considerable limitations concerning both system size and simulation times. In our method this time-length scale problem is circumvented by an iterative scheme which combines a Monte Carlo-type approach for the identification of ion adsorption sites with, after each growth step, structural optimization of the ion cluster and the solvent by means of molecular-dynamics simulation runs. An important approximation of our method is the assumption of full structural relaxation of the aggregates between each of the growth steps. This concept only holds for compounds of low solubility. To illustrate our method we studied CaF2 aggregate growth from aqueous solution, which may be taken as a prototype for compounds of very low solubility. The limitations of our simulation scheme are illustrated by the example of NaCl aggregation from aqueous solution, which corresponds to a solute/solvent combination of very high salt solubility.
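
    The iterative growth loop described above (Monte Carlo search for the lowest-energy adsorption site, followed by relaxation) can be sketched in miniature. Everything here is a toy stand-in: a 2-D Lennard-Jones pair potential replaces the real force field, and the MD relaxation step is omitted:

```python
import math
import random

def pair_energy(r, eps=1.0, sigma=1.0):
    # Toy Lennard-Jones pair potential standing in for the real force field.
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 * sr6 - sr6)

def cluster_energy(points):
    """Total pairwise energy of a 2-D cluster of point ions."""
    e = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            e += pair_energy(math.hypot(dx, dy))
    return e

def grow(n_ions, trials=200, seed=1):
    """Iteratively add ions: Monte Carlo search for the best adsorption
    site, then accept the geometry (the real scheme would relax it by MD)."""
    rng = random.Random(seed)
    cluster = [(0.0, 0.0)]
    for _ in range(n_ions):
        best, best_e = None, float("inf")
        for _ in range(trials):
            # Propose a site near a randomly chosen ion of the cluster.
            cx, cy = rng.choice(cluster)
            ang = rng.uniform(0, 2 * math.pi)
            site = (cx + 1.1 * math.cos(ang), cy + 1.1 * math.sin(ang))
            e = cluster_energy(cluster + [site])
            if e < best_e:
                best, best_e = site, e
        cluster.append(best)
    return cluster
```

    The key approximation of the paper, full relaxation between growth events, corresponds here to accepting only the energy-minimizing proposal before the next ion is added.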

  7. One-step estimation of networked population size: Respondent-driven capture-recapture with anonymity.

    PubMed

    Khan, Bilal; Lee, Hsuan-Wei; Fellows, Ian; Dombrowski, Kirk

    2018-01-01

    Size estimation is particularly important for populations whose members experience disproportionate health issues or pose elevated health risks to the ambient social structures in which they are embedded. Efforts to derive size estimates are often frustrated when the population is hidden or hard-to-reach in ways that preclude conventional survey strategies, as is the case when social stigma is associated with group membership or when group members are involved in illegal activities. This paper extends prior research on the problem of network population size estimation, building on established survey/sampling methodologies commonly used with hard-to-reach groups. Three novel one-step, network-based population size estimators are presented, for use in the context of uniform random sampling, respondent-driven sampling, and when networks exhibit significant clustering effects. We give provably sufficient conditions for the consistency of these estimators in large configuration networks. Simulation experiments across a wide range of synthetic network topologies validate the performance of the estimators, which also perform well on a real-world location-based social networking data set with significant clustering. Finally, the proposed schemes are extended to allow them to be used in settings where participant anonymity is required. Systematic experiments show favorable tradeoffs between anonymity guarantees and estimator performance. Taken together, we demonstrate that reasonable population size estimates are derived from anonymous respondent driven samples of 250-750 individuals, within ambient populations of 5,000-40,000. The method thus represents a novel and cost-effective means for health planners and those agencies concerned with health and disease surveillance to estimate the size of hidden populations. We discuss limitations and future work in the concluding section.
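
    For context, the classical two-sample capture-recapture estimator that the paper's one-step network estimators generalize can be written in a few lines (this is the standard Chapman-corrected Lincoln-Petersen formula, not one of the paper's network-based estimators; the numbers in the note below are illustrative):

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen estimator:
    N ~ (n1 + 1)(n2 + 1)/(m + 1) - 1, where n1 individuals are marked
    in the first sample, n2 are drawn in the second sample, and m of
    those are recaptures of marked individuals."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1
```

    For example, samples of n1 = 500 and n2 = 400 with m = 40 recaptures give an estimate of roughly 4,899 individuals, comparable in scale to the ambient populations simulated in the paper.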

  8. Anticipatory Postural Adjustment During Self-Initiated, Cued, and Compensatory Stepping in Healthy Older Adults and Patients With Parkinson Disease.

    PubMed

    Schlenstedt, Christian; Mancini, Martina; Horak, Fay; Peterson, Daniel

    2017-07-01

    To characterize anticipatory postural adjustments (APAs) across a variety of step initiation tasks in people with Parkinson disease (PD) and healthy subjects. Cross-sectional study. Step initiation was analyzed during self-initiated gait, perceptual cued gait, and compensatory forward stepping after platform perturbation. People with PD were assessed on and off levodopa. University research laboratory. People (N=31) with PD (n=19) and healthy age-matched subjects (n=12). Not applicable. Mediolateral (ML) size of APAs (calculated from center of pressure recordings), step kinematics, and body alignment. With respect to self-initiated gait, the ML size of APAs was significantly larger during the cued condition and significantly smaller during the compensatory condition (P<.001). Healthy subjects and patients with PD did not differ in body alignment during the stance phase prior to stepping. No significant group effect was found for ML size of APAs between healthy subjects and patients with PD. However, the reduction in APA size from cued to compensatory stepping was significantly less pronounced in PD off medication compared with healthy subjects, as indicated by a significant group by condition interaction effect (P<.01). No significant differences were found comparing patients with PD on and off medications. Specific stepping conditions had a significant effect on the preparation and execution of step initiation. Therefore, APA size should be interpreted with respect to the specific stepping condition. Across-task changes in people with PD were less pronounced compared with healthy subjects. Antiparkinsonian medication did not significantly improve step initiation in this mildly affected PD cohort. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  9. Critical Motor Number for Fractional Steps of Cytoskeletal Filaments in Gliding Assays

    PubMed Central

    Li, Xin; Lipowsky, Reinhard; Kierfeld, Jan

    2012-01-01

    In gliding assays, filaments are pulled by molecular motors that are immobilized on a solid surface. By varying the motor density on the surface, one can control the number N of motors that pull simultaneously on a single filament. Here, such gliding assays are studied theoretically using Brownian (or Langevin) dynamics simulations, taking into account the local force balance between motors and filaments as well as the force-dependent velocity of the motors. We focus on the filament stepping dynamics and investigate how single-motor properties such as stalk elasticity and step size determine the presence or absence of fractional steps of the filaments. We show that each gliding assay can be characterized by a critical motor number N_c. Because of thermal fluctuations, fractional filament steps are only detectable as long as N < N_c. The corresponding fractional filament step size is d/N, where d is the step size of a single motor. We first apply our computational approach to microtubules pulled by kinesin-1 motors. For elastic motor stalks that behave as linear springs with a zero rest length, a specific critical motor number is obtained, and the corresponding distributions of the filament step sizes are in good agreement with the available experimental data. In general, the critical motor number depends on the elastic stalk properties and is reduced for linear springs with a nonzero rest length. Furthermore, N_c is shown to depend quadratically on the motor step size d. Therefore, gliding assays consisting of actin filaments and myosin-V motors are predicted to exhibit fractional filament steps up to larger motor numbers. Finally, we show that fractional filament steps are also detectable for a fixed average motor number as determined by the surface density (or coverage) of the motors on the substrate surface. PMID:22927953
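
    The fractional-step relation is simple enough to state directly in code; d = 8 nm is the well-known kinesin-1 step size, and the helper name is ours:

```python
def fractional_step(d_motor_nm, n_motors):
    """Fractional filament step size d/N when N motors pull simultaneously."""
    return d_motor_nm / n_motors
```

    With d = 8 nm, teams of 2, 3, and 4 motors would produce filament steps of 4, about 2.7, and 2 nm, detectable only while N stays below the critical motor number.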

  10. CD-ROM preparation: An overview and guide

    NASA Technical Reports Server (NTRS)

    Daniel, Ralph E.; Jeschke, Mark W.; Schroer, James A.

    1995-01-01

    A primer on the options and procedures involved in producing CD-ROM products in a small to medium sized business operation is presented in language that persons with a minimal technical background can easily understand. The capabilities, limitations, and standards of CD-ROM technology are surveyed. Emphasis is placed on CD-ROM production, especially upon design, data conversion to an electronic medium, data file preparation, the use of vendors, and the steps for in-house production of CD-ROM products.

  11. In The Dark: Military Planning for a Catastrophic Critical Infrastructure Event

    DTIC Science & Technology

    2011-05-01

    source), and can be designed very easily. A trailer can carry a larger sized generator and multiple sites could be impacted by a coordinated attack...limited ingress and egress options. This scenario does not address EMP/EMI, but for starters, this should be enough of a challenge with all normal...election of President Obama, warning that Russia would not tolerate the Bush Administration's NATO missile shield, and that Russia would take steps to

  12. Development of 3D Oxide Fuel Mechanics Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, B. W.; Casagranda, A.; Pitts, S. A.

    This report documents recent work to improve the accuracy and robustness of the mechanical constitutive models used in the BISON fuel performance code. These developments include migration of the fuel mechanics models to be based on the MOOSE Tensor Mechanics module, improving the robustness of the smeared cracking model, implementing a capability to limit the time step size based on material model response, and improving the robustness of the return mapping iterations used in creep and plasticity models.
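
    The material-response-based time step limit mentioned above can be illustrated with a generic controller (a sketch under assumed conventions, not BISON's actual implementation; names and defaults are hypothetical):

```python
def limit_dt(dt_prev, strain_rate_max, d_strain_target=1e-4,
             growth=2.0, dt_min=1e-8, dt_max=1.0):
    """Choose the next time step so the fastest-straining material point
    accumulates at most d_strain_target of strain per step; also cap how
    fast the step is allowed to grow between steps."""
    if strain_rate_max <= 0.0:
        dt = dt_prev * growth
    else:
        dt = min(d_strain_target / strain_rate_max, dt_prev * growth)
    return max(dt_min, min(dt, dt_max))
```

    The design intent mirrors the report: the mechanics model, not only the solver, gets a vote on the step size, which keeps creep and plasticity updates within their region of accuracy.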

  13. One-Step Hydrothermal Approach to Synthesis Carbon Dots from D-Sorbitol for Detection of Iron(III) and Cell Imaging.

    PubMed

    Zhang, Junqiu; Yan, Juping; Wang, Yingte; Zhang, Yong

    2018-07-01

    A facile and economic approach to synthesis highly fluorescence carbon dots (CDs) via one-step hydrothermal treatment of D-sorbitol was presented. The as-synthesized CDs were characterized by good water solubility, well monodispersion, and excellent biocompatibility. Spherical CDs had a particle size about 5 nm and exhibited a quantum yield of 8.85% at excitation wavelength of 360 nm. In addition, the CDs can serve as fluorescent probe for sensitive and selective detection of Fe3+ ions with the detection limit of 1.16 μM. Moreover, the potential of the as-prepared carbon dots for biological application was confirmed by employing it for fluorescence imaging in MCF-7 cells.

  14. A triangular thin shell finite element: Nonlinear analysis. [structural analysis

    NASA Technical Reports Server (NTRS)

    Thomas, G. R.; Gallagher, R. H.

    1975-01-01

    Aspects of the formulation of a triangular thin shell finite element which pertain to geometrically nonlinear (small strain, finite displacement) behavior are described. The procedure for solution of the resulting nonlinear algebraic equations combines a one-step incremental (tangent stiffness) approach with one iteration in the Newton-Raphson mode. A method is presented which permits a rational estimation of step size in this procedure. Limit points are calculated by means of a superposition scheme coupled to the incremental side of the solution procedure while bifurcation points are calculated through a process of interpolation of the determinants of the tangent-stiffness matrix. Numerical results are obtained for a flat plate and two curved shell problems and are compared with alternative solutions.
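
    For a scalar model problem f(u, λ) = 0, the one-step incremental (tangent) advance with a single Newton-Raphson correction per load increment can be sketched as follows (a minimal illustration of the solution strategy only, not the shell-element formulation itself):

```python
def incremental_newton(f, df, lam_steps, u0=0.0):
    """Advance along the load path lam_steps: a tangent (incremental)
    predictor followed by one Newton-Raphson correction per increment.
    f(u, lam) is the residual, df(u, lam) its derivative in u."""
    u = u0
    for k in range(1, len(lam_steps)):
        lam, lam_prev = lam_steps[k], lam_steps[k - 1]
        # Tangent predictor: du = -(f(u, lam) - f(u, lam_prev)) / f_u
        u -= (f(u, lam) - f(u, lam_prev)) / df(u, lam)
        # Single Newton-Raphson correction at the new load level.
        u -= f(u, lam) / df(u, lam)
    return u
```

    With a hardening residual such as f(u, λ) = u + u³ - λ, five load increments to λ = 1 land within about 10⁻⁴ of the equilibrium path, illustrating why one corrector iteration per increment is often enough.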

  15. Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology.

    PubMed

    McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad

    2011-07-01

    The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
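
    The 'limits of agreement' computation itself is short: the bias is the mean of the paired differences and the 95% limits are bias ± 1.96 SD of those differences. A minimal sketch (the function name is ours):

```python
import statistics

def limits_of_agreement(a, b):
    """Bland-Altman analysis for paired measurements a and b:
    returns (lower limit, bias, upper limit), where bias is the mean
    difference and the limits are bias -/+ 1.96 sample SD."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias - 1.96 * sd, bias, bias + 1.96 * sd
```

    In an agreement study the limits are then judged against a clinically acceptable difference, which is exactly where correlation coefficients mislead: two instruments can correlate strongly yet disagree by clinically important amounts.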

  16. Niche filling slows the diversification of Himalayan songbirds.

    PubMed

    Price, Trevor D; Hooper, Daniel M; Buchanan, Caitlyn D; Johansson, Ulf S; Tietze, D Thomas; Alström, Per; Olsson, Urban; Ghosh-Harihar, Mousumi; Ishtiaq, Farah; Gupta, Sandeep K; Martens, Jochen; Harr, Bettina; Singh, Pratap; Mohan, Dhananjai

    2014-05-08

    Speciation generally involves a three-step process--range expansion, range fragmentation and the development of reproductive isolation between spatially separated populations. Speciation relies on cycling through these three steps and each may limit the rate at which new species form. We estimate phylogenetic relationships among all Himalayan songbirds to ask whether the development of reproductive isolation and ecological competition, both factors that limit range expansions, set an ultimate limit on speciation. Based on a phylogeny for all 358 species distributed along the eastern elevational gradient, here we show that body size and shape differences evolved early in the radiation, with the elevational band occupied by a species evolving later. These results are consistent with competition for niche space limiting species accumulation. Even the elevation dimension seems to be approaching ecological saturation, because the closest relatives both inside the assemblage and elsewhere in the Himalayas are on average separated by more than five million years, which is longer than it generally takes for reproductive isolation to be completed; also, elevational distributions are well explained by resource availability, notably the abundance of arthropods, and not by differences in diversification rates in different elevational zones. Our results imply that speciation rate is ultimately set by niche filling (that is, ecological competition for resources), rather than by the rate of acquisition of reproductive isolation.

  17. Modeling myosin VI stepping dynamics

    NASA Astrophysics Data System (ADS)

    Tehver, Riina

    Myosin VI is a molecular motor that transports intracellular cargo as well as acts as an anchor. The motor has been measured to have unusually large step size variation and it has been reported to make both long forward and short inchworm-like forward steps, as well as step backwards. We have been developing a model that incorporates this diverse stepping behavior in a consistent framework. Our model allows us to predict the dynamics of the motor under different conditions and investigate the evolutionary advantages of the large step size variation.

  18. Efficiencies for the statistics of size discrimination.

    PubMed

    Solomon, Joshua A; Morgan, Michael; Chubb, Charles

    2011-10-19

    Different laboratories have achieved a consensus regarding how well human observers can estimate the average orientation in a set of N objects. Such estimates are not only limited by visual noise, which perturbs the visual signal of each object's orientation; they are also inefficient: observers effectively use only √N objects in their estimates (e.g., S. C. Dakin, 2001; J. A. Solomon, 2010). More controversial is the efficiency with which observers can estimate the average size in an array of circles (e.g., D. Ariely, 2001, 2008; S. C. Chong, S. J. Joo, T.-A. Emmanouil, & A. Treisman, 2008; K. Myczek & D. J. Simons, 2008). Of course, there are some important differences between orientation and size; nonetheless, it seemed sensible to compare the two types of estimate against the same ideal observer. Indeed, quantitative evaluation of statistical efficiency requires this sort of comparison (R. A. Fisher, 1925). Our first step was to measure the noise that limits size estimates when only two circles are compared. Our results (Weber fractions between 0.07 and 0.14 were necessary for 84% correct 2AFC performance) are consistent with the visual system adding the same amount of Gaussian noise to all logarithmically transduced circle diameters. We exaggerated this visual noise by randomly varying the diameters in (uncrowded) arrays of 1, 2, 4, and 8 circles and measured its effect on discrimination between mean sizes. Efficiencies inferred from all four observers significantly exceed 25% and, in two cases, approach 100%. More consistent are our measurements of just-noticeable differences in size variance. These latter results suggest between 62 and 75% efficiency for variance discriminations. Although our observers were no more efficient comparing size variances than they were at comparing mean sizes, they were significantly more precise. In other words, our results contain evidence for a non-negligible source of late noise that limits mean discriminations but not variance discriminations.

  19. Effective size selection of MoS2 nanosheets by a novel liquid cascade centrifugation: Influences of the flakes dimensions on electrochemical and photoelectrochemical applications.

    PubMed

    Kajbafvala, Marzieh; Farbod, Mansoor

    2018-05-14

    Although liquid phase exfoliation is a powerful method to produce MoS2 nanosheets on a large scale, its effectiveness is limited by the diversity of the produced nanosheet sizes. Here a novel approach for the separation of MoS2 flakes having various lateral sizes and thicknesses, based on cascaded centrifugation, is introduced. This method involves a pre-separation step which is performed through low-speed centrifugation to avoid the deposition of large-area single and few-layers by the heavier particles. The bulk MoS2 powders were dispersed in an aqueous solution of sodium cholate (SC) and sonicated for 12 h. The main separation step was performed using different centrifugation speed intervals of 10-11, 8-10, 6-8, 4-6, 2-4 and 0.5-2 krpm, by which nanosheets containing 2, 4, 7, 8, 14, 18 and 29 layers were obtained, respectively. The samples were characterized using XRD, FESEM, AFM, TEM, DLS and also UV-vis, Raman and PL spectroscopy measurements. Dynamic light scattering (DLS) measurements confirmed the existence of a larger number of single- or few-layer MoS2 nanosheets compared to when the pre-separation step was not used. Finally, photocurrent and cyclic voltammetry measurements of the different samples showed that flakes with a bigger surface area had a larger CV loop area. Our results provide a method for the preparation of a MoS2 monolayer-enriched suspension which can be used for different applications. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. One-step microwave-assisted synthesis of water-dispersible Fe3O4 magnetic nanoclusters for hyperthermia applications

    NASA Astrophysics Data System (ADS)

    Sathya, Ayyappan; Kalyani, S.; Ranoo, Surojit; Philip, John

    2017-10-01

    To realize magnetic hyperthermia as an alternative stand-alone therapeutic procedure for cancer treatment, magnetic nanoparticles with optimal performance, within the biologically safe limits, are to be produced using simple, reproducible and scalable techniques. Herein, we present a simple, one-step approach for the synthesis of water-dispersible magnetic nanoclusters (MNCs) of superparamagnetic iron oxide by reduction of Fe2(SO4)3 in sodium acetate (alkali), poly(ethylene glycol) (capping ligand), and ethylene glycol (solvent and reductant) in a microwave reactor. The average size and saturation magnetization of the MNCs are tuned from 27 to 52 nm and 32 to 58 emu/g by increasing the reaction time from 10 to 600 s. Transmission electron microscopy images reveal that each MNC is composed of a large number of primary Fe3O4 nanoparticles. The synthesised MNCs show excellent colloidal stability in the aqueous phase due to the adsorbed PEG layer. The highest SAR value of 215 ± 10 W/gFe, observed in 52 nm MNCs at a frequency of 126 kHz and a field of 63 kA/m, suggests the potential use of these MNCs in hyperthermia applications. This study further opens up the possibility to develop metal ion-doped MNCs with tunable sizes suitable for various biomedical applications using microwave-assisted synthesis.
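
    SAR values such as the 215 W/gFe quoted above are commonly obtained calorimetrically from the initial slope of the heating curve, SAR = C·(dT/dt)/m_Fe. A sketch of that standard formula (the generic method, not necessarily the exact procedure of this paper; the example numbers in the note are illustrative):

```python
def sar_w_per_g_fe(c_vol_j_per_k_ml, dT_dt_k_per_s, fe_g_per_ml):
    """Calorimetric SAR estimate: volumetric heat capacity of the sample
    times the initial heating slope, normalized by the iron mass per ml."""
    return c_vol_j_per_k_ml * dT_dt_k_per_s / fe_g_per_ml
```

    For a dilute aqueous ferrofluid (c ≈ 4.186 J K⁻¹ ml⁻¹) containing 1 mg Fe/ml and heating at 0.05 K/s, this gives about 209 W/gFe, the same order as the value reported above.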

  1. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    PubMed

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
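
    The two ingredients the algorithms tune, the absorbing boundary layer and the propagation step, can be illustrated with a minimal 1-D angular-spectrum (Fourier BPM) step and a super-Gaussian edge mask. The mask shape and all parameters are our own illustrative choices, not those selected by the paper's algorithms:

```python
import numpy as np

def bpm_step(field, dz, dx, wavelength):
    """One free-space Fourier beam-propagation (angular spectrum) step.
    Evanescent components are simply frozen here as a simplification."""
    n = field.size
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kz = np.sqrt(np.maximum(k0 ** 2 - kx ** 2, 0.0))
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))

def absorbing_mask(n, width):
    """Super-Gaussian absorbing boundary layer, `width` samples per edge."""
    mask = np.ones(n)
    edge = np.arange(width) / width
    taper = np.exp(-(1 - edge) ** 4 * 4)  # rolls off smoothly toward the edge
    mask[:width] = taper
    mask[-width:] = taper[::-1]
    return mask
```

    Applying the mask after each `bpm_step` damps energy that reaches the grid edges; the paper's contribution is to pick the mask width from the initial beam and to shrink `dz` when the evolving beam shape would otherwise alias.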


  4. Simulation and experimental design of a new advanced variable step size Incremental Conductance MPPT algorithm for PV systems.

    PubMed

    Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir

    2016-05-01

    Improvement of the efficiency of photovoltaic systems based on new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and its easy implementation without equipment updating. Many MPPT methods with fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of different operational conditions of the conventional Incremental Conductance method and the proposed method are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a Flyback converter and a control circuit using a dsPIC30F4011. Both simulation and experimental designs are provided in several aspects. A comparative study between the proposed variable step size and fixed step size IC MPPT methods under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of speed of MPP tracking and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
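
    The core incremental-conductance test (dI/dV = -I/V at the MPP) with a step size scaled by |dP/dV| can be sketched as follows (a generic variable-step IC update under an assumed scaling law, not the authors' exact algorithm; parameter values are illustrative):

```python
def inc_cond_step(v, i, v_prev, i_prev, scale=0.05, step_max=0.5, step_min=0.01):
    """One update of a variable step-size incremental-conductance MPPT.
    Returns the voltage-reference change: the step scales with |dP/dV|,
    so it is large far from the MPP and shrinks near it."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        # Irradiance change at constant voltage: nudge by the minimum step.
        return step_min if di > 0 else (-step_min if di < 0 else 0.0)
    dp_dv = (v * i - v_prev * i_prev) / dv
    step = min(max(scale * abs(dp_dv), step_min), step_max)
    # At the MPP, dI/dV = -I/V (equivalently dP/dV = 0).
    if di / dv > -i / v:
        return step    # operating left of the MPP: raise the voltage
    if di / dv < -i / v:
        return -step   # operating right of the MPP: lower the voltage
    return 0.0
```

    This is the trade-off the abstract describes: a fixed step must choose between fast tracking and low steady-state oscillation, whereas the |dP/dV|-scaled step gets both.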

  5. Exit probability of the one-dimensional q-voter model: Analytical results and simulations for large networks

    NASA Astrophysics Data System (ADS)

    Timpanaro, André M.; Prado, Carmen P. C.

    2014-05-01

    We discuss the exit probability of the one-dimensional q-voter model and present tools to obtain estimates of this probability, both through simulations in large networks (around 10^7 sites) and analytically in the limit where the network is infinitely large. We argue that the result E(ρ) = ρ^q/[ρ^q + (1-ρ)^q], that was found in three previous works [F. Slanina, K. Sznajd-Weron, and P. Przybyła, Europhys. Lett. 82, 18006 (2008), 10.1209/0295-5075/82/18006; R. Lambiotte and S. Redner, Europhys. Lett. 82, 18007 (2008), 10.1209/0295-5075/82/18007, for the case q = 2; and P. Przybyła, K. Sznajd-Weron, and M. Tabiszewski, Phys. Rev. E 84, 031117 (2011), 10.1103/PhysRevE.84.031117, for q > 2] using small networks (around 10^3 sites), is a good approximation, but there are noticeable deviations that appear even for small systems and that do not disappear when the system size is increased (with the notable exception of the case q = 2). We also show that, under some simple and intuitive hypotheses, the exit probability must obey the inequality ρ^q/[ρ^q + (1-ρ)] ≤ E(ρ) ≤ ρ/[ρ + (1-ρ)^q] in the infinite size limit. We believe this settles in the negative the suggestion made [S. Galam and A. C. R. Martins, Europhys. Lett. 95, 48005 (2011), 10.1209/0295-5075/95/48005] that this result would be a finite-size effect, with the exit probability actually being a step function. We also show how the result that the exit probability cannot be a step function can be reconciled with the Galam unified frame, which was also a source of controversy.
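
    The approximate exit probability and the bounds quoted above are easy to evaluate numerically; the following is a direct transcription of the formulas in the abstract:

```python
def exit_probability(rho, q):
    """Approximate exit probability of the 1-D q-voter model:
    E(rho) = rho^q / (rho^q + (1 - rho)^q)."""
    num = rho ** q
    return num / (num + (1 - rho) ** q)

def exit_bounds(rho, q):
    """Bounds on the exit probability in the infinite-size limit:
    rho^q/(rho^q + (1-rho)) <= E(rho) <= rho/(rho + (1-rho)^q)."""
    lo = rho ** q / (rho ** q + (1 - rho))
    hi = rho / (rho + (1 - rho) ** q)
    return lo, hi
```

    Since the bounds are strictly between 0 and 1 for any 0 < ρ < 1, any E(ρ) obeying them cannot be a step function, which is the crux of the argument against the finite-size-effect suggestion.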

  6. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  7. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  8. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  9. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  10. Size Effect of the 2-D Bodies on the Geothermal Gradient and Q-A Plot

    NASA Astrophysics Data System (ADS)

    Thakur, M.; Blackwell, D. D.

    2009-12-01

    Using numerical models we have investigated some of the criticisms of the Q-A plot related to the effect of the size of the body on the slope and reduced heat flow. The effects of horizontal conduction depend on the relative difference in radioactivity between the body and the country rock (assuming constant thermal conductivity). Horizontal heat transfer due to different 2-D bodies was numerically studied in order to quantify the resulting temperature differences at the Moho and errors in the prediction of Qr (reduced heat flow). Using the two end-member distributions of radioactivity, the step model (thickness 10 km) and the exponential model, different 2-D models of horizontal scale (width) ranging from 10-500 km were investigated. Increasing the horizontal size of the body moves observations closer to the 1-D solution. A temperature difference of 50 °C is produced (for the step model) at the Moho between models of width 10 km versus 500 km. In other words, the 1-D solution effectively provides large-scale averaging in terms of heat flow and the temperature field in the lithosphere. For bodies ≤ 100 km wide the geotherms at shallower levels are affected, but at depth they converge and are 50 °C lower than the infinite-plate model temperature. In the case of 2-D bodies, surface heat flow is decreased due to horizontal transfer of heat, which shifts the Q-A point vertically downward on the Q-A plot. The smaller the size of the body, the larger the deviation from the 1-D solution and the larger the downward movement of the Q-A point on the Q-A plot. On the Q-A plot, a limited set of points for bodies of different sizes with different radioactivity contrasts (for the step and exponential models) exactly reproduces the reduced heat flow Qr. Thus the size of the body can affect the slope on a Q-A plot but Qr is not changed. Therefore, Qr ~ 32 mWm-2 obtained from the global terrain average Q-A plot represents the best estimate of stable continental mantle heat flow.

  11. Large-scale production of kappa-carrageenan droplets for gel-bead production: theoretical and practical limitations of size and production rate.

    PubMed

    Hunik, J H; Tramper, J

    1993-01-01

    Immobilization of biocatalysts in kappa-carrageenan gel beads is a widely used technique nowadays. Several methods are used to produce the gel beads. The gel-bead production rate is usually sufficient to make the relatively small quantities needed for bench-scale experiments. The droplet diameter can, within limits, be adjusted to the desired size, but it is difficult to predict because of the non-Newtonian fluid behavior of the kappa-carrageenan solution. Here we present the further scale-up of the extrusion technique with the theory to predict the droplet diameters for non-Newtonian fluids. The emphasis is on the droplet formation, which is the rate-limiting step in this extrusion technique. Uniform droplets were formed by breaking up a capillary jet with a sinusoidal signal of a vibration exciter. At the maximum production rate of 27.6 dm^3/h, uniform droplets with a diameter of (2.1 ± 0.12) × 10^-3 m were obtained. This maximum flow rate was limited by the power transfer of the vibration exciter to the liquid flow. It was possible to get a good prediction of the droplet diameter by estimating the local viscosity from shear-rate calculations and an experimental relation between the shear rate and viscosity. In this way the theory of Newtonian fluids could be used for the non-Newtonian kappa-carrageenan solution. The calculated optimal break-up frequencies and droplet sizes were in good agreement with those found in the experiments.
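
    The droplet diameter in sinusoidal jet break-up follows from volume conservation: each excitation period pinches off one droplet of volume Q/f, so d = (6Q/(πf))^(1/3). A short sketch (the 1.58 kHz frequency in the note below is back-calculated for illustration, not a value reported in the paper):

```python
import math

def droplet_diameter(flow_rate_m3_s, frequency_hz):
    """Droplet diameter from volume conservation in sinusoidal jet
    break-up: one droplet of volume Q/f per excitation period."""
    vol = flow_rate_m3_s / frequency_hz   # volume of a single droplet
    return (6 * vol / math.pi) ** (1 / 3)
```

    At the reported maximum flow rate of 27.6 dm³/h, an excitation near 1.58 kHz reproduces droplets of about 2.1 × 10⁻³ m, consistent with the diameter given above.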

  12. Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.

    PubMed

    Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A

    2017-01-01

    Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood to be invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses show that the proposed method performs well under a variety of data conditions, and optimal performance is observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures to implement the proposed method in applied research. The importance and influence of the choice of informative priors with zero mean and small variances are discussed. Extensions and limitations are also pointed out.

  13. Kinetic characterization of thermophilic and mesophilic anaerobic digestion for coffee grounds and waste activated sludge.

    PubMed

    Li, Qian; Qiao, Wei; Wang, Xiaochang; Takayanagi, Kazuyuki; Shofie, Mohammad; Li, Yu-You

    2015-02-01

    This study was conducted to characterize the kinetics of an anaerobic process (hydrolysis, acetogenesis, acidogenesis and methanogenesis) under thermophilic (55 °C) and mesophilic (35 °C) conditions with coffee grounds and waste activated sludge (WAS) as the substrates. Special focus was given to the kinetics of propionic acid degradation to elucidate the accumulation of VFAs. Under the thermophilic condition, the methane production rate of all substrates (WAS, ground coffee and raw coffee) was about 1.5 times higher than that under the mesophilic condition. However, the effects on methane production of each substrate under the thermophilic condition differed: WAS increased by 35.8-48.2%, raw coffee decreased by 64.5-76.3% and ground coffee decreased by 57.9-74.0%. Based on the maximum reaction rate (Rmax) of each anaerobic stage obtained from the modified Gompertz model, acetogenesis was found to be the rate-limiting step for coffee grounds and WAS. This can be explained by the kinetics of propionate degradation under the thermophilic condition, in which a long lag phase (more than 18 days) was observed, although the propionate concentration was only 500 mg/L. Under the mesophilic condition, acidogenesis and hydrolysis were found to be the rate-limiting steps for coffee grounds and WAS, respectively. Although reducing the particle size accelerated the methane production rate of coffee grounds, it did not change the rate-limiting step: acetogenesis under thermophilic and acidogenesis under mesophilic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
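For reference, the modified Gompertz form from which Rmax is extracted can be evaluated directly. The parameter values below are illustrative placeholders, not the study's fitted estimates:

```python
import numpy as np

# Modified Gompertz model for cumulative methane production; the parameter
# values are illustrative placeholders, not the study's fitted estimates.
def gompertz(t, P, Rmax, lam):
    """P: methane potential, Rmax: maximum production rate, lam: lag phase."""
    return P * np.exp(-np.exp(Rmax * np.e / P * (lam - t) + 1.0))

t = np.linspace(0.0, 60.0, 6001)                 # time, days
M = gompertz(t, P=300.0, Rmax=25.0, lam=18.0)    # e.g. a long (18 d) lag

# The model is parameterized so that the steepest slope of M(t) equals Rmax,
# which is what the rate-limiting-step comparisons above are based on.
rate = np.gradient(M, t)
print(rate.max())  # ~25, recovering Rmax
```

In practice P, Rmax and the lag lam are obtained by nonlinear least-squares fits to the measured cumulative production curves of each stage.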

  14. Effects of an aft facing step on the surface of a laminar flow glider wing

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Saiki, Neal

    1993-01-01

    A motor glider was used to perform a flight test study on the effects of aft facing steps in a laminar boundary layer. This study focuses on two dimensional aft facing steps oriented spanwise to the flow. The size and location of the aft facing steps were varied in order to determine the critical size that will force premature transition. Transition over a step was found to be primarily a function of Reynolds number based on step height. Both of the step height Reynolds numbers for premature and full transition were determined. A hot film anemometry system was used to detect transition.
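The step-height Reynolds number criterion above can be sketched in a few lines; the flight speed and the critical value used here are placeholders, not the study's measured thresholds:

```python
# Step-height Reynolds number used above to correlate transition over an
# aft-facing step: Re_h = U * h / nu. The flight speed and critical Re_h
# below are placeholders, not the measured flight-test thresholds.
def step_reynolds(velocity, step_height, kinematic_viscosity):
    return velocity * step_height / kinematic_viscosity

def critical_step_height(velocity, kinematic_viscosity, re_critical):
    """Largest step height whose Re_h stays below a given critical value."""
    return re_critical * kinematic_viscosity / velocity

U = 30.0                    # assumed glider speed, m/s
nu = 1.46e-5                # kinematic viscosity of air near sea level, m^2/s
h_crit = critical_step_height(U, nu, re_critical=1000.0)   # placeholder Re
print(h_crit * 1e3)         # critical step height in mm (~0.49 mm here)
```

Because Re_h scales linearly with speed, the allowable step height for a given surface shrinks proportionally as the flight speed increases.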

  15. Method of Lines Transpose an Implicit Vlasov Maxwell Solver for Plasmas

    DTIC Science & Technology

    2015-04-17

    boundary crossings should be rare. Numerical results for the Bennett pinch are given in Figure 9. In order to resolve large gradients near the center of the...contributing to the large error at the center of the beam due to large gradients there) and with the finite beam cut-off radius and the outflow boundary...usable time step size can be limited by the numerical accuracy of the method when there are large gradients (high-frequency content) in the solution. We

  16. Self-avoiding walks that cross a square

    NASA Astrophysics Data System (ADS)

    Burkhardt, T. W.; Guim, I.

    1991-10-01

    The authors consider self-avoiding walks that traverse an L × L square lattice. Whittington and Guttmann (1990) have proved the existence of a phase transition in the infinite-L limit at a critical value of the step fugacity. They make several finite-size scaling predictions for the critical region, using the relation between self-avoiding walks and the N-vector model of magnetism. Adsorbing as well as nonadsorbing boundaries are considered. The predictions are in good agreement with numerical data for L

  17. Kinesin Steps Do Not Alternate in Size

    PubMed Central

    Fehr, Adrian N.; Asbury, Charles L.; Block, Steven M.

    2008-01-01

    Kinesin is a two-headed motor protein that transports cargo inside cells by moving stepwise on microtubules. Its exact trajectory along the microtubule is unknown: alternative pathway models predict either uniform 8-nm steps or alternating 7- and 9-nm steps. By analyzing single-molecule stepping traces from “limping” kinesin molecules, we were able to distinguish alternate fast- and slow-phase steps and thereby to calculate the step sizes associated with the motions of each of the two heads. We also compiled step distances from nonlimping kinesin molecules and compared these distributions against models predicting uniform or alternating step sizes. In both cases, we find that kinesin takes uniform 8-nm steps, a result that strongly constrains the allowed models. PMID:18083906

  18. Growth of group II-VI semiconductor quantum dots with strong quantum confinement and low size dispersion

    NASA Astrophysics Data System (ADS)

    Pandey, Praveen K.; Sharma, Kriti; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2003-11-01

    CdTe quantum dots embedded in glass matrix are grown using the two-step annealing method. The results of the optical transmission characterization are analysed and compared with the results obtained from CdTe quantum dots grown using the conventional single-step annealing method. A theoretical model for the absorption spectra is used to quantitatively estimate the size dispersion in the two cases. In the present work, it is established that the quantum dots grown using the two-step annealing method have stronger quantum confinement, reduced size dispersion and a higher volume ratio as compared to the single-step annealed samples.

  19. Langevin dynamics in inhomogeneous media: Re-examining the Itô-Stratonovich dilemma

    NASA Astrophysics Data System (ADS)

    Farago, Oded; Grønbech-Jensen, Niels

    2014-01-01

    The diffusive dynamics of a particle in a medium with space-dependent friction coefficient is studied within the framework of the inertial Langevin equation. In this description, the ambiguous interpretation of the stochastic integral, known as the Itô-Stratonovich dilemma, is avoided since all interpretations converge to the same solution in the limit of small time steps. We use a newly developed method for Langevin simulations to measure the probability distribution of a particle diffusing in a flat potential. Our results reveal that both the Itô and Stratonovich interpretations converge very slowly to the uniform equilibrium distribution for vanishing time step sizes. Three other conventions exhibit significantly improved accuracy: (i) the "isothermal" (Hänggi) convention, (ii) the Stratonovich convention corrected by a drift term, and (iii) a newly proposed convention employing two different effective friction coefficients representing two different averages of the friction function during the time step. We argue that the most physically accurate dynamical description is provided by the third convention, in which the particle experiences a drift originating from the dissipation instead of the fluctuation term. This feature is directly related to the fact that the drift is a result of an inertial effect that cannot be well understood in the Brownian, overdamped limit of the Langevin equation.
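The dilemma can be illustrated with a minimal overdamped sketch (the paper itself works with the inertial equation, where the ambiguity disappears): in a flat potential with space-dependent diffusivity, a naive Itô update equilibrates to a non-uniform distribution, while adding the "isothermal" (Hänggi) drift term recovers the uniform one. The diffusivity profile and step counts are assumptions for illustration:

```python
import math

import numpy as np

# Overdamped sketch of the Ito-Stratonovich dilemma (assumed setup, not the
# paper's inertial scheme): flat potential, periodic unit box, and a
# space-dependent diffusivity D(x) standing in for space-dependent friction.
def D(x):
    return 1.0 + 0.8 * math.sin(2.0 * math.pi * x)

def dDdx(x):
    return 1.6 * math.pi * math.cos(2.0 * math.pi * x)

def simulate(with_drift, n_steps=400_000, dt=2e-3, seed=0):
    noise = np.random.default_rng(seed).standard_normal(n_steps)
    x, xs = 0.5, np.empty(n_steps)
    for i in range(n_steps):
        drift = dDdx(x) * dt if with_drift else 0.0  # "isothermal" correction
        x = (x + drift + math.sqrt(2.0 * D(x) * dt) * noise[i]) % 1.0
        xs[i] = x
    return xs

# Maximum deviation of the sampled density from the uniform value 1.
dev = {}
for label, flag in (("ito", False), ("isothermal", True)):
    hist, _ = np.histogram(simulate(flag), bins=10, range=(0.0, 1.0),
                           density=True)
    dev[label] = float(np.abs(hist - 1.0).max())
print(dev)  # naive Ito piles up where D is small; corrected stays ~uniform
```

The uncorrected Itô walk equilibrates to a density proportional to 1/D(x) rather than the flat Boltzmann result, which is the failure mode the different conventions are designed to fix.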

  20. Modeling ultrasound propagation through material of increasing geometrical complexity.

    PubMed

    Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen

    2018-06-01

    Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
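Wiener deconvolution of the kind referenced above can be sketched in a few lines: given a known excitation and a noisy measured response, the system's impulse response is estimated by a regularized spectral division. The signals, the two-spike impulse response, and the regularization constant are all illustrative assumptions:

```python
import numpy as np

# Sketch of Wiener deconvolution for estimating a sample's acoustic impulse
# response from a known excitation and a noisy measurement. The excitation,
# the two-spike impulse response, and the regularization are assumptions.
rng = np.random.default_rng(1)
n = 256
x = rng.standard_normal(n)                 # known excitation
h = np.zeros(n)
h[3], h[10] = 1.0, 0.5                     # hypothetical impulse response
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # circular conv.
y += 0.01 * rng.standard_normal(n)         # measurement noise

X, Y = np.fft.fft(x), np.fft.fft(y)
lam = 1e-2                                 # regularization ~ noise power
h_est = np.real(np.fft.ifft(np.conj(X) * Y / (np.abs(X) ** 2 + lam)))
print(np.argmax(h_est))                    # 3: the dominant arrival recovered
```

The regularization term lam suppresses noise amplification at frequencies where the excitation carries little energy, which is why the estimate degrades once sample features fall below the wavelength.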

  1. Subtraction of cap-trapped full-length cDNA libraries to select rare transcripts.

    PubMed

    Hirozane-Kishikawa, Tomoko; Shiraki, Toshiyuki; Waki, Kazunori; Nakamura, Mari; Arakawa, Takahiro; Kawai, Jun; Fagiolini, Michela; Hensch, Takao K; Hayashizaki, Yoshihide; Carninci, Piero

    2003-09-01

    The normalization and subtraction of highly expressed cDNAs from relatively large tissues before cloning dramatically enhanced the gene discovery by sequencing for the mouse full-length cDNA encyclopedia, but these methods have not been suitable for limited RNA materials. To normalize and subtract full-length cDNA libraries derived from limited quantities of total RNA, here we report a method to subtract plasmid libraries excised from size-unbiased amplified lambda phage cDNA libraries that avoids heavily biasing steps such as PCR and plasmid library amplification. The proportion of full-length cDNAs and the gene discovery rate are high, and library diversity can be validated by in silico randomization.

  2. Controlling dental enamel-cavity ablation depth with optimized stepping parameters along the focal plane normal using a three axis, numerically controlled picosecond laser.

    PubMed

    Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Sun, Yuchun; Wang, Yong

    2015-02-01

    The purpose of this study was to establish a depth-control method in enamel-cavity ablation by optimizing the timing of the focal-plane-normal stepping and the single-step size of a three-axis, numerically controlled picosecond laser. Although it has been proposed that picosecond lasers may be used to ablate dental hard tissue, the viability of such a depth-control method in enamel-cavity ablation remains uncertain. Forty-two enamel slices with approximately level surfaces were prepared and subjected to two-dimensional ablation by a picosecond laser. The additive-pulse layer, n, was set to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65 and 70. A three-dimensional microscope was then used to measure the ablation depth, d, to obtain a quantitative function relating n and d. Six enamel slices were then subjected to three-dimensional ablation to produce 10 cavities each, with the additive-pulse layer and single-step size set to corresponding values. The difference between the theoretical and measured values was calculated for both the cavity depth and the ablation depth of a single step. These were used to determine minimum-difference values for both the additive-pulse layer (n) and the single-step size (d). When the additive-pulse layer and the single-step size were set to 5 and 45 μm, respectively, the depth error reached a minimum of 2.25 μm, and 450 μm deep enamel cavities were produced. When performing three-dimensional ablation of enamel with a picosecond laser, adjusting the timing of the focal-plane-normal stepping and the single-step size allows the ablation-depth error to be controlled to the order of micrometers.

  3. Critical motor number for fractional steps of cytoskeletal filaments in gliding assays.

    PubMed

    Li, Xin; Lipowsky, Reinhard; Kierfeld, Jan

    2012-01-01

    In gliding assays, filaments are pulled by molecular motors that are immobilized on a solid surface. By varying the motor density on the surface, one can control the number N of motors that pull simultaneously on a single filament. Here, such gliding assays are studied theoretically using Brownian (or Langevin) dynamics simulations and taking the local force balance between motors and filaments as well as the force-dependent velocity of the motors into account. We focus on the filament stepping dynamics and investigate how single motor properties such as stalk elasticity and step size determine the presence or absence of fractional steps of the filaments. We show that each gliding assay can be characterized by a critical motor number, N(c). Because of thermal fluctuations, fractional filament steps are only detectable as long as N < N(c). The corresponding fractional filament step size is l/N where l is the step size of a single motor. We first apply our computational approach to microtubules pulled by kinesin-1 motors. For elastic motor stalks that behave as linear springs with a zero rest length, the critical motor number is found to be N(c) = 4, and the corresponding distributions of the filament step sizes are in good agreement with the available experimental data. In general, the critical motor number N(c) depends on the elastic stalk properties and is reduced to N(c) = 3 for linear springs with a nonzero rest length. Furthermore, N(c) is shown to depend quadratically on the motor step size l. Therefore, gliding assays consisting of actin filaments and myosin-V are predicted to exhibit fractional filament steps up to motor number N = 31. Finally, we show that fractional filament steps are also detectable for a fixed average motor number as determined by the surface density (or coverage) of the motors on the substrate surface.

  4. What is the alternative to the Alexander-Orbach relation?

    NASA Astrophysics Data System (ADS)

    Sokolov, Igor M.

    2016-03-01

    The Alexander-Orbach (AO) relation d_w = 2d_f/d_s, connecting the fractal dimension d_w of a random walk's (RW) trajectory, or the exponent of anomalous diffusion α = 2/d_w, on a fractal structure with the fractal and spectral dimensions of the structure itself, plays a key role in discussions of the dynamical properties of complex systems including living cells and single biomolecules. This relation however does not hold universally and breaks down for some structures like diffusion-limited aggregates and Eden trees. We show that the alternative to the AO relation is an explicit dependence of the coefficient of anomalous diffusion on the system's size, i.e. the absence of its thermodynamic limit. The prerequisite for its breakdown is the dependence of the local structure of possible steps of the RW on the system's size. The discussion is illustrated by the examples of diffusion on a Koch curve (AO-conform) and on a Cantor dust (violating the AO relation).
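As a worked check, on the Sierpinski gasket (a structure where the AO relation is known to hold) all three exponents are exact and the relation can be verified directly:

```python
import math

# Check of the AO relation d_w = 2*d_f/d_s on the Sierpinski gasket, where
# all three exponents are known exactly and the relation is known to hold.
d_f = math.log(3) / math.log(2)        # fractal (mass) dimension
d_s = 2 * math.log(3) / math.log(5)    # spectral dimension
d_w_AO = 2 * d_f / d_s                 # AO prediction for the walk dimension
d_w_exact = math.log(5) / math.log(2)  # exact walk dimension of the gasket
print(d_w_AO, d_w_exact)               # both ~2.3219
```

For structures like diffusion-limited aggregates the same check fails, which is the breakdown the abstract traces to a size-dependent diffusion coefficient.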

  5. Psychologically Informed Implementations of Sugary-Drink Portion Limits

    PubMed Central

    John, Leslie K.; Donnelly, Grant E.; Roberto, Christina A.

    2017-01-01

    In 2012, the New York City Board of Health prohibited restaurants from selling sugary drinks in containers that would hold more than 16 oz. Although a state court ruled that the Board of Health did not have the authority to implement such a policy, it remains a legally viable option for governments and a voluntary option for restaurants. However, there is very limited empirical data on how such a policy might affect the purchasing and consumption of sugary drinks. We report four well-powered, incentive-compatible experiments in which we evaluated two possible ways that restaurants might comply with such a policy: bundling (i.e., dividing the contents of oversized cups into two regulation-size cups) and providing free refills (i.e., offering a regulation-size cup with unlimited refills). Bundling caused people to buy less soda. Free refills increased consumption, especially when a waiter served the refills. This perverse effect was reduced in self-service contexts that required walking just a few steps to get a refill. PMID:28362567

  6. Microstructure of room temperature ionic liquids at stepped graphite electrodes

    DOE PAGES

    Feng, Guang; Li, Song; Zhao, Wei; ...

    2015-07-14

    Molecular dynamics simulations of room temperature ionic liquid (RTIL) [emim][TFSI] at stepped graphite electrodes were performed to investigate the influence of the thickness of the electrode surface step on the microstructure of interfacial RTILs. A strong correlation was observed between the interfacial RTIL structure and the step thickness in the electrode surface as well as the ion size. Specifically, when the step thickness is commensurate with the ion size, the interfacial layering of cation/anion is more evident; whereas the layering tends to be less defined when the step thickness is close to half of the ion size. Furthermore, the two-dimensional microstructure of ion layers exhibits different patterns and alignments of counter-ion/co-ion lattice at neutral and charged electrodes. As the cation/anion layering could impose considerable effects on ion diffusion, the detailed information of interfacial RTILs at stepped graphite presented here would help to understand the molecular mechanism of RTIL-electrode interfaces in supercapacitors.

  7. Surface-Directed Synthesis of Erbium-Doped Yttrium Oxide Nanoparticles within Organosilane Zeptoliter Containers

    PubMed Central

    2015-01-01

    We introduce an approach to synthesize rare earth oxide nanoparticles using high temperature without aggregation of the nanoparticles. The dispersity of the nanoparticles is controlled at the nanoscale by using small organosilane molds as reaction containers. Zeptoliter reaction vessels prepared from organosilane self-assembled monolayers (SAMs) were used for the surface-directed synthesis of rare earth oxide (REO) nanoparticles. Nanopores of octadecyltrichlorosilane were prepared on Si(111) using particle lithography with immersion steps. The nanopores were filled with a precursor solution of erbium and yttrium salts to confine the crystallization step to occur within individual zeptoliter-sized organosilane reaction vessels. Areas between the nanopores were separated by a matrix film of octadecyltrichlorosilane. With heating, the organosilane template was removed by calcination to generate a surface array of erbium-doped yttria nanoparticles. Nanoparticles synthesized by the surface-directed approach retain the periodic arrangement of the nanopores formed from mesoparticle masks. While bulk rare earth oxides can be readily prepared by solid state methods at high temperature (>900 °C), approaches for preparing REO nanoparticles are limited. Conventional wet chemistry methods are limited to low temperatures according to the boiling points of the solvents used for synthesis. To achieve crystallinity of REO nanoparticles requires steps for high-temperature processing of samples, which can cause self-aggregation and dispersity in sample diameters. The facile steps for particle lithography address the problems of aggregation and the requirement for high-temperature synthesis. PMID:25163977

  8. Optimal execution in high-frequency trading with Bayesian learning

    NASA Astrophysics Data System (ADS)

    Du, Bian; Zhu, Hongliang; Zhao, Jingdong

    2016-11-01

    We consider optimal trading strategies in which traders submit bid and ask quotes to maximize the expected quadratic utility of total terminal wealth in a limit order book. The trader's bid and ask quotes will be changed by the Poisson arrival of market orders. Meanwhile, the trader may update his estimate of other traders' target sizes and directions by Bayesian learning. The solution of the optimal execution problem in the limit order book is a two-step procedure. First, we model inactive trading with no limit orders in the market: the dealer simply holds dollars and shares of stock until the terminal time. Second, he calibrates his bid and ask quotes to the limit order book. The optimal solutions are given by dynamic programming and are in fact globally optimal. We also give numerical simulations of the value function and optimal quotes in the last part of the article.

  9. Parallel processing of embossing dies with ultrafast lasers

    NASA Astrophysics Data System (ADS)

    Jarczynski, Manfred; Mitra, Thomas; Brüning, Stephan; Du, Keming; Jenke, Gerald

    2018-02-01

    Functionalization of surfaces equips products and components with new features like hydrophilic behavior, adjustable gloss level, light management properties, etc. Small feature sizes demand diffraction-limited spots and adapted fluence for different materials. Through the availability of high-power, fast-repeating ultrashort pulsed lasers and efficient optical processing heads delivering a diffraction-limited small spot size of around 10 μm, it is feasible to achieve fluences higher than an adequate patterning requires. Hence, parallel processing is becoming of interest to increase throughput and allow mass production of micro-machined surfaces. The first step on the roadmap of parallel processing for cylindrical embossing dies was realized with an eight-spot processing head based on a ns fiber laser with passive optical beam splitting, individual spot switching by acousto-optic modulation, and advanced imaging. Patterning of cylindrical embossing dies shows a high efficiency of nearly 80%, with diffraction-limited and equally spaced spots with pitches down to 25 μm achieved by compression using cascaded prism arrays. Due to the nanosecond laser pulses, the ablation shows the surrounding material deposition typical of a hot process. In the next step, the processing head was adapted to a picosecond laser source, and the 500 W fiber laser was replaced by an ultrashort pulsed laser with 300 W, 12 ps pulses and a repetition frequency of up to 6 MHz. This paper presents details of the processing head design and an analysis of ablation rates and patterns on steel, copper and brass dies. Furthermore, it gives an outlook on scaling the parallel processing head from eight to 16 individually switched beamlets to increase processing throughput and optimize utilization of the available ultrashort pulsed laser energy.

  10. Coulomb fission in dielectric dication clusters: experiment and theory on steps that may underpin the electrospray mechanism.

    PubMed

    Chen, Xiaojing; Bichoutskaia, Elena; Stace, Anthony J

    2013-05-16

    A series of five molecular dication clusters, (H2O)n(2+), (NH3)n(2+), (CH3CN)n(2+), (C5H5N)n(2+), and (C6H6)n(2+), have been studied for the purpose of identifying patterns of behavior close to the Rayleigh instability limit where the clusters might be expected to exhibit Coulomb fission. Experiments show that the instability limit for each dication covers a range of sizes and that on a time scale of 10(-4) s ions close to the limit can undergo either Coulomb fission or neutral evaporation. The observed fission pathways exhibit considerable asymmetry in the sizes of the charged fragments, and are associated with kinetic (ejection) energies of ~0.9 eV. Coulomb fission has been modeled using a theory recently formulated to describe how charged particles of dielectric materials interact with one another (Bichoutskaia et al. J. Chem. Phys. 2010, 133, 024105). The calculated electrostatic interaction energy between separating fragments accounts for the observed asymmetric fragmentation and for the magnitudes of the measured ejection energies. The close match between theory and experiment suggests that a significant fraction of excess charge resides on the surfaces of the fragment ions. The experiments provided support for a fundamental step in the electrospray ionization (ESI) mechanism, namely the ejection from droplets of small solvated charge carriers. At the same time, the theory shows how water and acetonitrile may behave slightly differently as ESI solvents. However, the theory also reveals deficiencies in the point-charge image-charge model that has previously been used to quantify Coulomb fission in the electrospray process.

  11. Efficiency and flexibility using implicit methods within atmosphere dycores

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.

    2016-12-01

    A suite of explicit and implicit methods are evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of using explicit versus implicit methods, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about as efficient as an explicit Runge-Kutta method, without including a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; there is a speed-up for the residual relative to a CPU, but overwhelming transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional, based on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability.
We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion about the benefits of a time stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.
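The stability argument for implicit time stepping can be illustrated on a scalar stiff model problem (an assumption for illustration, not the CAM-SE equations): BDF2 stays accurate at a step size far above the explicit stability limit. Because the model problem is linear, the implicit equation is solved in closed form here rather than by the Newton iterations and preconditioned linear solves a dycore would use:

```python
import numpy as np

# Stability sketch on a scalar stiff model problem (not the CAM-SE system):
# y' = -k*(y - cos(t)) with k = 1000 has a fast relaxation scale 1/k but a
# slow solution y ~ cos(t). Explicit Euler is stable only for h < 2/k.
k, h, T = 1000.0, 0.05, 5.0

def bdf2(h, T):
    n = int(T / h)
    ys = [1.0]
    # bootstrap with one backward-Euler step: y1 = y0 + h*f(t1, y1)
    ys.append((ys[0] + h * k * np.cos(h)) / (1.0 + h * k))
    for i in range(2, n + 1):
        # BDF2: y_i - (4/3)y_{i-1} + (1/3)y_{i-2} = (2/3)*h*f(t_i, y_i).
        rhs = (4.0 * ys[-1] - ys[-2]) / 3.0 + (2.0 / 3.0) * h * k * np.cos(i * h)
        ys.append(rhs / (1.0 + (2.0 / 3.0) * h * k))
    return np.array(ys)

def explicit_euler(h, T):
    y = 1.0
    for i in range(int(T / h)):
        y = y + h * (-k * (y - np.cos(i * h)))
    return y

y_bdf2_final = bdf2(h, T)[-1]
y_euler_final = explicit_euler(h, T)
print(abs(y_bdf2_final - np.cos(T)))   # small: BDF2 tracks the slow solution
print(abs(y_euler_final))              # enormous: explicit method blows up
```

Here h is 25 times the explicit stability limit 2/k, which is the flexibility in time step size the paragraph above refers to; in a real dycore the cost of each implicit step is dominated by the linear solves that the preconditioners target.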

  12. The ELPA library: scalable parallel eigenvalue solutions for electronic structure theory and computational science.

    PubMed

    Marek, A; Blum, V; Johanni, R; Havu, V; Lang, B; Auckenthaler, T; Heinecke, A; Bungartz, H-J; Lederer, H

    2014-05-28

    Obtaining the eigenvalues and eigenvectors of large matrices is a key problem in electronic structure theory and many other areas of computational science. The computational effort formally scales as O(N(3)) with the size of the investigated problem, N (e.g. the electron count in electronic structure theory), and thus often defines the system size limit that practical calculations cannot overcome. In many cases, more than just a small fraction of the possible eigenvalue/eigenvector pairs is needed, so that iterative solution strategies that focus only on a few eigenvalues become ineffective. Likewise, it is not always desirable or practical to circumvent the eigenvalue solution entirely. We here review some current developments regarding dense eigenvalue solvers and then focus on the Eigenvalue soLvers for Petascale Applications (ELPA) library, which facilitates the efficient algebraic solution of symmetric and Hermitian eigenvalue problems for dense matrices that have real-valued and complex-valued matrix entries, respectively, on parallel computer platforms. ELPA addresses standard as well as generalized eigenvalue problems, relying on the well-documented matrix layout of the Scalable Linear Algebra PACKage (ScaLAPACK) library but replacing all actual parallel solution steps with subroutines of its own. For these steps, ELPA significantly outperforms the corresponding ScaLAPACK routines and proprietary libraries that implement the ScaLAPACK interface (e.g. Intel's MKL). The most time-critical step is the reduction of the matrix to tridiagonal form and the corresponding backtransformation of the eigenvectors. ELPA offers both a one-step tridiagonalization (successive Householder transformations) and a two-step transformation that is more efficient especially towards larger matrices and larger numbers of CPU cores. ELPA is based on the MPI standard, with an early hybrid MPI-OpenMP implementation available as well.
Scalability beyond 10,000 CPU cores for problem sizes arising in the field of electronic structure theory is demonstrated for current high-performance computer architectures such as Cray or Intel/Infiniband. For a matrix of dimension 260,000, scalability up to 295,000 CPU cores has been shown on BlueGene/P.

  13. Step Permeability on the Pt(111) Surface

    NASA Astrophysics Data System (ADS)

    Altman, Michael

    2005-03-01

    Surface morphology will be affected, or even dictated, by kinetic limitations that may be present during growth. Asymmetric step attachment is recognized to be an important and possibly common cause of morphological growth instabilities. However, the impact of this kinetic limitation on growth morphology may be hindered by other factors such as the rate limiting step and step permeability. This strongly motivates experimental measurements of these quantities in real systems. Using low energy electron microscopy, we have measured step flow velocities in growth on the Pt(111) surface. The dependence of step velocity upon adjacent terrace width clearly shows evidence of asymmetric step attachment and step permeability. Step velocity is modeled by solving the diffusion equation simultaneously on several adjacent terraces subject to boundary conditions at intervening steps that include asymmetric step attachment and step permeability. This analysis allows a quantitative evaluation of step permeability and the kinetic length, which characterizes the rate limiting step continuously between diffusion and attachment-detachment limited regimes. This work provides information that is greatly needed to set physical bounds on the parameters that are used in theoretical treatments of growth. The observation that steps are permeable even on a simple metal surface should also stimulate more experimental measurements and theoretical treatments of this effect.

  14. Combinative Particle Size Reduction Technologies for the Production of Drug Nanocrystals

    PubMed Central

    Salazar, Jaime; Müller, Rainer H.; Möschwitzer, Jan P.

    2014-01-01

    Nanosizing is a suitable method to enhance the dissolution rate and therefore the bioavailability of poorly soluble drugs. The success of the particle size reduction processes depends on critical factors such as the employed technology, equipment, and drug physicochemical properties. High pressure homogenization and wet bead milling are standard comminution techniques that have been already employed to successfully formulate poorly soluble drugs and bring them to market. However, these techniques have limitations in their particle size reduction performance, such as long production times and the necessity of employing a micronized drug as the starting material. This review article discusses the development of combinative methods, such as the NANOEDGE, H 96, H 69, H 42, and CT technologies. These processes were developed to improve the particle size reduction effectiveness of the standard techniques. These novel technologies can combine bottom-up and/or top-down techniques in a two-step process. The combinative processes lead in general to improved particle size reduction effectiveness. Faster production of drug nanocrystals and smaller final mean particle sizes are among the main advantages. The combinative particle size reduction technologies are very useful formulation tools, and they will continue acquiring importance for the production of drug nanocrystals. PMID:26556191

  15. Decorrelation correction for nanoparticle tracking analysis of dilute polydisperse suspensions in bulk flow

    NASA Astrophysics Data System (ADS)

    Hartman, John; Kirby, Brian

    2017-03-01

    Nanoparticle tracking analysis, a multiprobe single-particle tracking technique, is a widely used method to quickly determine the concentration and size distribution of colloidal particle suspensions. Many popular tools remove non-Brownian components of particle motion by subtracting the ensemble-average displacement at each time step, a procedure termed dedrifting. Though critical for accurate size measurements, dedrifting is shown here to introduce a significant biasing error that can fundamentally limit the dynamic range of particle sizes that can be measured for dilute heterogeneous suspensions such as biological extracellular vesicles. We report a more accurate estimate of the particle mean-square displacement, which we call decorrelation analysis, that accounts for the correlations between individual and ensemble particle motion spuriously introduced by dedrifting. Particle tracking simulations and experimental results show that this approach determines particle diameters more accurately for low-concentration polydisperse suspensions when compared with standard dedrifting techniques.
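The dedrifting bias described above can be sketched numerically: subtracting the ensemble-mean displacement of N independent Brownian particles shrinks the apparent mean-square displacement by a factor of (1 - 1/N), which is largest precisely for the dilute (small-N) suspensions of interest (a simplified illustration with invented parameters, not the authors' decorrelation estimator):

```python
# Bias of naive dedrifting: for N independent Brownian particles, subtracting
# the ensemble-mean displacement scales the apparent MSD by (1 - 1/N).
import numpy as np

def msd_lag1(steps):
    """Mean-square displacement at lag 1 from per-frame displacements."""
    return float(np.mean(steps ** 2))

rng = np.random.default_rng(1)
n_particles, n_frames, sigma = 5, 200_000, 1.0
steps = rng.normal(0.0, sigma, size=(n_frames, n_particles))  # pure Brownian steps

# "Dedrifting": subtract the ensemble-average displacement at each time step.
dedrifted = steps - steps.mean(axis=1, keepdims=True)

ratio = msd_lag1(dedrifted) / msd_lag1(steps)
print(abs(ratio - (1.0 - 1.0 / n_particles)) < 0.02)   # bias factor 1 - 1/N
```

Since the measured diffusion coefficient (and hence the inferred hydrodynamic diameter) comes directly from the MSD, an uncorrected 1 - 1/N factor systematically oversizes particles when few are in view.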

  16. A study of scandia and rhenium doped tungsten matrix dispenser cathode

    NASA Astrophysics Data System (ADS)

    Wang, Jinshu; Li, Lili; Liu, Wei; Wang, Yanchun; Zhao, Lei; Zhou, Meiling

    2007-10-01

    Scandia and rhenium doped tungsten powders were prepared by solid-liquid doping combined with a two-step reduction method. The experimental results show that scandia was distributed evenly on the surface of the tungsten particles. The addition of scandia and rhenium decreased the particle size of the doped tungsten; for example, the tungsten powders doped with Sc₂O₃ and Re had an average size of about 50 nm in diameter. By using this kind of powder, a scandia and rhenium doped tungsten matrix with sub-micrometer sized tungsten grains was obtained. This kind of matrix exhibited good resistance to ion bombardment at high temperature. The emission property results showed that high space-charge-limited current densities of more than 60 A/cm² at 900 °C could be obtained for this cathode. A Ba-Sc-O multilayer about 100 nm in thickness formed at the surface of the cathode after activation led to the high emission property.

  17. Time-asymptotic solutions of the Navier-Stokes equation for free shear flows using an alternating-direction implicit method

    NASA Technical Reports Server (NTRS)

    Rudy, D. H.; Morris, D. J.

    1976-01-01

    An uncoupled time-asymptotic alternating-direction implicit method for solving the Navier-Stokes equations was tested on two laminar parallel mixing flows. A constant total temperature was assumed in order to eliminate the need to solve the full energy equation; consequently, static temperature was evaluated by using an algebraic relationship. For the mixing of two supersonic streams at a Reynolds number of 1,000, convergent solutions were obtained for a time step 5 times the maximum allowable size for an explicit method. The solution diverged for a time step 10 times the explicit limit. Improved convergence was obtained when upwind differencing was used for the convective terms. Larger time steps were not possible with either upwind differencing or the diagonally dominant scheme. Artificial viscosity was added to the continuity equation in order to eliminate divergence for the mixing of a subsonic stream with a supersonic stream at a Reynolds number of 1,000.
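The explicit step-size ceiling referred to above (steps 5 times the explicit limit converging, 10 times diverging) has a textbook analogue in the 1-D diffusion equation, where the explicit forward-time centered-space (FTCS) scheme is stable only for dt ≤ dx²/(2ν); a sketch with illustrative parameters, not the paper's mixing-flow solver:

```python
# Explicit stability limit for u_t = nu * u_xx: FTCS is stable only when
# dt <= dx**2 / (2 * nu); an implicit scheme has no such ceiling.
import numpy as np

def ftcs_diffusion(dt, nu=1.0, dx=0.1, nx=50, nt=100):
    """Advance u_t = nu * u_xx with forward-time centered-space; return max |u|."""
    u = np.sin(np.linspace(0.0, np.pi, nx))   # smooth initial profile, u = 0 at ends
    r = nu * dt / dx**2
    for _ in range(nt):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return float(np.max(np.abs(u)))

dt_limit = 0.1**2 / (2.0 * 1.0)     # dx^2 / (2 nu) = 0.005
print(ftcs_diffusion(0.9 * dt_limit) < 1.0)    # stable: the profile decays
print(ftcs_diffusion(1.5 * dt_limit) > 10.0)   # unstable: round-off noise explodes
```

The unstable run blows up because the shortest-wavelength mode is amplified by |1 - 4r| > 1 each step; implicit (ADI-type) schemes damp that mode for any dt, which is why the paper's method tolerates steps far beyond the explicit limit.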

  18. Step-Climbing Power Wheelchairs: A Literature Review

    PubMed Central

    Sundaram, S. Andrea; Wang, Hongwu; Ding, Dan

    2017-01-01

    Background: Power wheelchairs capable of overcoming environmental barriers, such as uneven terrain, curbs, or stairs, have been under development for more than a decade. Method: We conducted a systematic review of the scientific and engineering literature to identify these devices, and we provide brief descriptions of the mechanism and method of operation for each. We also present data comparing their capabilities in terms of step climbing and standard wheelchair functions. Results: We found that all the devices presented allow for traversal of obstacles that cannot be accomplished with traditional power wheelchairs. However, the slow speeds and small wheel diameters of some designs make them only moderately effective at the basic task of efficient transport over level ground, and the size and configuration of others limit maneuverability in tight spaces. Conclusion: We propose that safety and performance test methods more comprehensive than the International Organization for Standardization (ISO) testing protocols be developed for measuring the capabilities of advanced wheelchairs with step-climbing and other environment-negotiating features, to allow comparison of their clinical effectiveness. PMID:29339886

  19. Simultaneous Detection of Four Foodborne Viruses in Food Samples Using a One-Step Multiplex Reverse Transcription PCR.

    PubMed

    Lee, Shin-Young; Kim, Mi-Ju; Kim, Hyun-Joong; Jeong, KwangCheol Casey; Kim, Hae-Yeong

    2018-02-28

    A one-step multiplex reverse transcription PCR (RT-PCR) method comprising six primer sets (for the detection of norovirus GI and GII, hepatitis A virus, rotavirus, and astrovirus) was developed to simultaneously detect four kinds of pathogenic viruses. The sizes of the PCR products for norovirus GI and GII, hepatitis A virus (VP3/VP1 and P2A regions), rotavirus, and astrovirus were 330, 164, 244, 198, 629, and 449 bp, respectively. The RT-PCR with the six primer sets showed specificity for the pathogenic viruses. The detection limit of the developed multiplex RT-PCR, as evaluated using serially diluted viral RNAs, was comparable to that of one-step single RT-PCR. Moreover, this multiplex RT-PCR was evaluated using food samples such as water, oysters, lettuce, and vegetable products. These food samples were artificially spiked with the four kinds of viruses in diverse combinations, and the spiked viruses in all food samples were detected successfully.

  20. Relative dosimetrical verification in high dose rate brachytherapy using two-dimensional detector array IMatriXX

    PubMed Central

    Manikandan, A.; Biplab, Sarkar; David, Perianayagam A.; Holla, R.; Vivek, T. R.; Sujatha, N.

    2011-01-01

    For high dose rate (HDR) brachytherapy, independent treatment verification is needed to ensure that the treatment is performed as per prescription. This study demonstrates dosimetric quality assurance of HDR brachytherapy using a commercially available two-dimensional ion chamber array called IMatriXX, which has a detector separation of 0.7619 cm. The reference isodose length, step size, and source dwell positional accuracy were verified. A total of 24 dwell positions were verified for positional accuracy, giving a total error (systematic and random) of –0.45 mm, with a standard deviation of 1.01 mm and a maximum error of 1.8 mm. Using a step size of 5 mm, the reference isodose length (the length of the 100% isodose line) was verified for single and multiple catheters with the same and different source loadings. An error ≤1 mm was measured in 57% of the tests analyzed. Step size verification for 2, 3, 4, and 5 cm was performed, and 70% of the step size errors were below 1 mm, with a maximum of 1.2 mm. Step sizes ≤1 cm could not be verified by the IMatriXX, as it could not resolve the peaks in the dose profile. PMID:21897562

  1. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. 
A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
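For orientation, the design-effect comparisons above can be grounded in the classical design effect for a parallel cluster-randomised trial, DE = 1 + (m - 1)·ICC (Donner & Klar); the stepped-wedge expression derived in the paper is more involved and is not reproduced here:

```python
# Classical parallel-CRT design effect (Donner & Klar): the factor by which
# clustering inflates the required sample size relative to individual
# randomisation. This is NOT the stepped-wedge design effect from the paper.
def crt_design_effect(m, icc):
    """Sample-size inflation for cluster size m and intracluster correlation icc."""
    return 1.0 + (m - 1) * icc

# Small cluster size and small ICC keep the penalty modest...
print(round(crt_design_effect(m=20, icc=0.01), 2))   # 1.19
# ...while large clusters or strong clustering inflate it sharply.
print(round(crt_design_effect(m=500, icc=0.05), 2))  # 25.95
```

This is the regime dependence echoed in the abstract's conclusion: when m and the ICC are small, the parallel cluster-randomised trial is hard to beat, whereas large m or large ICC favours designs that exploit within-cluster comparisons.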

  2. Limits in point to point resolution of MOS based pixels detector arrays

    NASA Astrophysics Data System (ADS)

    Fourches, N.; Desforge, D.; Kebbiri, M.; Kumar, V.; Serruys, Y.; Gutierrez, G.; Leprêtre, F.; Jomard, F.

    2018-01-01

    In high energy physics, point-to-point resolution is a key prerequisite for particle detector pixel arrays. Current and future experiments require the development of inner detectors able to resolve the tracks of particles down to the micron range. Present-day technologies, although not fully implemented in actual detectors, can reach a 5-μm limit, this limit being based on statistical measurements, with a pixel pitch in the 10 μm range. This paper is devoted to the evaluation of the building blocks for use in pixel arrays enabling accurate tracking of charged particles. Based on simulations, we make a quantitative evaluation of the physical and technological limits on pixel size. Attempts to design small pixels based on SOI technology are briefly recalled. A design based on CMOS-compatible technologies that allows a reduction of the pixel size below one micrometer is then introduced. Its physical principle relies on a buried carrier-localizing collecting gate. The fabrication process needed by this pixel design can be based on existing process steps used in silicon microelectronics. The pixel characteristics are discussed, as well as the design of pixel arrays. The existing bottlenecks, and how to overcome them, are discussed in the light of recent ion implantation and material characterization experiments.

  3. Linear micromechanical stepping drive for pinhole array positioning

    NASA Astrophysics Data System (ADS)

    Endrödy, Csaba; Mehner, Hannes; Grewe, Adrian; Hoffmann, Martin

    2015-05-01

    A compact linear micromechanical stepping drive for positioning a 7 × 5.5 mm² optical pinhole array is presented. The system features a step size of 13.2 µm and a full displacement range of 200 µm. The electrostatic inch-worm stepping mechanism shows a compact design capable of positioning a payload of 50% of its own weight. The stepping drive movement, step sizes, and position accuracy are characterized. The actuated pinhole array is integrated in a confocal chromatic hyperspectral imaging system, where coverage of the object plane, and therefore the useful picture data, can be multiplied by 14 in contrast to a non-actuated array.

  4. COMPARISON OF IMPLICIT SCHEMES TO SOLVE EQUATIONS OF RADIATION HYDRODYNAMICS WITH A FLUX-LIMITED DIFFUSION APPROXIMATION: NEWTON–RAPHSON, OPERATOR SPLITTING, AND LINEARIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tetsu, Hiroyuki; Nakamoto, Taishi, E-mail: h.tetsu@geo.titech.ac.jp

    Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although the progress in computational technology is remarkable, their high numerical cost is still a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation: the Newton–Raphson (NR) method, operator splitting, and linearization (LIN), from the perspective of the computational cost involved. For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas and Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and the propagation of linear waves, and radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions of the time step size necessary for adopting each scheme. The LIN scheme is superior to other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, especially in a problem that involves optically thin regions, the convergence tends to be worse. In all cases, SOS is not practical.
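The Newton–Raphson branch of the comparison can be sketched on a scalar stiff model: backward-Euler time stepping with a Newton iteration solving each implicit update (an illustrative toy equation, y' = -y⁴, standing in loosely for radiative cooling; not the paper's RHD system):

```python
# Backward Euler with a Newton-Raphson solve per step for y' = -y**4.
# Implicit update: y_new - y_old + dt * y_new**4 = 0, solved by Newton.
def backward_euler_nr(y0, dt, nsteps, tol=1e-12):
    y = y0
    for _ in range(nsteps):
        yn, z = y, y                      # z is the Newton iterate for y_{n+1}
        for _ in range(50):
            f = z - yn + dt * z**4        # residual of the implicit update
            fp = 1.0 + 4.0 * dt * z**3    # Jacobian (a scalar here)
            step = f / fp
            z -= step
            if abs(step) < tol:
                break
        y = z
    return y

# Implicit stepping stays stable and positive even for time steps far beyond
# the explicit stability limit of this stiff problem.
y = backward_euler_nr(y0=10.0, dt=1.0, nsteps=20)
print(0.0 < y < 10.0)
```

The per-step Newton solve is where the NR scheme's cost lives; the operator-splitting and linearization schemes compared in the abstract avoid it at the price of step-size restrictions that depend on the regime.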

  5. Supercritical Fluid Technologies to Fabricate Proliposomes.

    PubMed

    Falconer, James R; Svirskis, Darren; Adil, Ali A; Wu, Zimei

    2015-01-01

    Proliposomes are stable drug carrier systems designed to form liposomes upon addition of an aqueous phase. In this review, current trends in the use of supercritical fluid (SCF) technologies to prepare proliposomes are discussed. SCF methods are used in pharmaceutical research and industry to address limitations associated with conventional methods of pro/liposome fabrication. The SCF solvent methods of proliposome preparation are eco-friendly (known as green technology) and, along with the SCF anti-solvent methods, could be advantageous over conventional methods; enabling better design of particle morphology (size and shape). The major hurdles of SCF methods include poor scalability to industrial manufacturing which may result in variable particle characteristics. In the case of SCF anti-solvent methods, another hurdle is the reliance on organic solvents. However, the amount of solvent required is typically less than that used by the conventional methods. Another hurdle is that most of the SCF methods used have complicated manufacturing processes, although once the setup has been completed, SCF technologies offer a single-step process in the preparation of proliposomes compared to the multiple steps required by many other methods. Furthermore, there is limited research into how proliposomes will be converted into liposomes for the end-user, and how such a product can be prepared reproducibly in terms of vesicle size and drug loading. These hurdles must be overcome and with more research, SCF methods, especially where the SCF acts as a solvent, have the potential to offer a strong alternative to the conventional methods to prepare proliposomes.

  6. Real-time inverse planning for Gamma Knife radiosurgery.

    PubMed

    Wu, Q Jackie; Chankong, Vira; Jitprapaikulsarn, Suradet; Wessels, Barry W; Einstein, Douglas B; Mathayomchan, Boonyanit; Kinsella, Timothy J

    2003-11-01

    The challenges of real-time Gamma Knife inverse planning are the large number of variables involved and the a priori unknown search space. With limited collimator sizes, shots have to be heavily overlapped to form a smooth prescription isodose line that conforms to the irregular target shape. Such overlaps greatly influence the total number of shots per plan, making pre-determination of the total number of shots impractical. However, this total number of shots usually defines the search space, a prerequisite for most optimization methods. Since each shot covers only part of the target, a collection of shots at different locations and with various collimator sizes makes up the global dose distribution that conforms to the target. Hence, planning or placing these shots is a combinatorial optimization process that is computationally expensive by nature. We have previously developed a theory of shot placement and optimization based on skeletonization. The real-time inverse planning process, reported in this paper, is an expansion and the clinical implementation of this theory. The complete planning process consists of two steps. The first step is to determine an optimal number of shots, including locations and sizes, and to assign an initial collimator size to each of the shots. The second step is to fine-tune the weights using a linear-programming technique. The objective function is to minimize the total dose to the target boundary (i.e., maximize the dose conformity). Results for an ellipsoid test target and ten clinical cases are presented. The clinical cases are also compared with the physician's manual plans. The target coverage is more than 99% for the manual plans and 97% for all the inverse plans. The RTOG PITV conformity indices for the manual plans are between 1.16 and 3.46, compared to 1.36 to 2.4 for the inverse plans. All the inverse plans are generated in less than 2 min, making real-time inverse planning a reality.
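As a sketch of the second planning step (weight fine-tuning by linear programming), the following toy problem minimizes total boundary dose subject to a prescription constraint at target points; the dose matrices, point counts, and kernels are invented for illustration, not Gamma Knife physics:

```python
# Toy LP analogue of the weight fine-tuning step: fix shot positions, then
# choose nonnegative weights minimising boundary dose while every target
# point receives at least the prescription dose.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n_shots, n_target, n_boundary = 4, 12, 8

# Invented dose-deposition matrices: dose[i] = sum_j A[i, j] * weight[j].
A_target = 0.5 + rng.random((n_target, n_shots))      # shots cover the target well
A_boundary = 0.1 * rng.random((n_boundary, n_shots))  # spillover onto the boundary

# Objective: minimise total boundary dose (i.e., maximise conformity).
c = A_boundary.sum(axis=0)
# Constraint A_target @ w >= 1 expressed in linprog's A_ub @ w <= b_ub form.
res = linprog(c, A_ub=-A_target, b_ub=-np.ones(n_target),
              bounds=[(0, None)] * n_shots)

weights = res.x
print(res.success)
print(bool(np.all(A_target @ weights >= 1.0 - 1e-6)))
```

The LP is convex and tiny relative to the combinatorial shot-placement step, which is consistent with the abstract's point that skeletonization handles placement while linear programming handles weights quickly.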

  7. Comparison between Phase-Shift Full-Bridge Converters with Noncoupled and Coupled Current-Doubler Rectifier

    PubMed Central

    Tsai, Cheng-Tao; Tseng, Sheng-Yu

    2013-01-01

    This paper presents a comparison between phase-shift full-bridge converters with noncoupled and coupled current-doubler rectifiers. In high-current, high step-down voltage conversion applications, a phase-shift full-bridge converter with a conventional current-doubler rectifier has the common limitations of an extremely low duty ratio and high component stresses. To overcome these limitations, phase-shift full-bridge converters with a noncoupled current-doubler rectifier (NCDR) and a coupled current-doubler rectifier (CCDR) are proposed and implemented. In this study, performance analysis and efficiency results obtained from a 500 W phase-shift full-bridge converter with the two improved current-doubler rectifiers are presented and compared. Experimental results from their prototypes have verified that the phase-shift full-bridge converter with the NCDR has an optimal duty ratio, lower component stresses, and lower output current ripple. In the component count and efficiency comparison, the CCDR has fewer components and higher efficiency at full load. For small size and high efficiency requirements, the CCDR is relatively suitable for high step-down voltage and high efficiency applications. PMID:24381521

  8. The Markov process admits a consistent steady-state thermodynamic formalism

    NASA Astrophysics Data System (ADS)

    Peng, Liangrong; Zhu, Yi; Hong, Liu

    2018-01-01

    The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism is established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, corresponding formalisms of steady-state thermodynamics for the master equation and the Fokker-Planck equation can be rigorously derived mathematically. To be concrete, we prove that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence can be established rigorously between the master equation and the Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restricted to one-step jumps, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also goes over to that for master equations as the discretization step becomes smaller. Our analysis indicates that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of the underlying detailed models.
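The one-step-jump (birth-death) case discussed above has an easily computed steady state; a minimal sketch with an invented 3-state generator, finding the stationary distribution as the null vector of the transition-rate matrix:

```python
# Stationary distribution of a small one-step (birth-death) master equation:
# pi satisfies pi @ Q = 0 with pi summing to 1.
import numpy as np

# Generator Q for a 3-state birth-death chain: Q[i, j] is the rate i -> j
# for i != j, and rows sum to zero.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])

# Solve the transposed balance equations with the normalisation appended.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(bool(np.allclose(pi @ Q, 0.0, atol=1e-10)))
print(round(float(pi.sum()), 6))
```

For this birth-death chain detailed balance gives pi proportional to (1, 2, 4), the kind of explicit steady state on which the discrete-to-continuous correspondences in the abstract are built.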

  9. Comparison between phase-shift full-bridge converters with noncoupled and coupled current-doubler rectifier.

    PubMed

    Tsai, Cheng-Tao; Su, Jye-Chau; Tseng, Sheng-Yu

    2013-01-01

    This paper presents a comparison between phase-shift full-bridge converters with noncoupled and coupled current-doubler rectifiers. In high-current, high step-down voltage conversion applications, a phase-shift full-bridge converter with a conventional current-doubler rectifier has the common limitations of an extremely low duty ratio and high component stresses. To overcome these limitations, phase-shift full-bridge converters with a noncoupled current-doubler rectifier (NCDR) and a coupled current-doubler rectifier (CCDR) are proposed and implemented. In this study, performance analysis and efficiency results obtained from a 500 W phase-shift full-bridge converter with the two improved current-doubler rectifiers are presented and compared. Experimental results from their prototypes have verified that the phase-shift full-bridge converter with the NCDR has an optimal duty ratio, lower component stresses, and lower output current ripple. In the component count and efficiency comparison, the CCDR has fewer components and higher efficiency at full load. For small size and high efficiency requirements, the CCDR is relatively suitable for high step-down voltage and high efficiency applications.

  10. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
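The RW and CRW models described above can be sketched directly: the step size is uniform, and the spread of the turning angle is what distinguishes the two (parameters invented for illustration, not the robot experiments):

```python
# 2-D random walk (RW) vs correlated random walk (CRW) with uniform step size.
# A turning-angle spread of pi gives an uncorrelated RW; a narrow spread gives
# a CRW whose successive headings are correlated.
import numpy as np

def walk(n_steps, step=1.0, turn_sigma=np.pi, seed=0):
    """Return positions of a walk whose turning angles are U(-turn_sigma, turn_sigma)."""
    rng = np.random.default_rng(seed)
    heading = 0.0
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        heading += rng.uniform(-turn_sigma, turn_sigma)
        pos[i + 1] = pos[i] + step * np.array([np.cos(heading), np.sin(heading)])
    return pos

def net_squared_displacement(pos):
    return float(np.sum((pos[-1] - pos[0]) ** 2))

# Persistence makes the CRW cover far more ground than the RW for the same
# number of equal-size steps (averaged over many walkers).
rw = np.mean([net_squared_displacement(walk(100, turn_sigma=np.pi, seed=s))
              for s in range(200)])
crw = np.mean([net_squared_displacement(walk(100, turn_sigma=0.3, seed=s))
               for s in range(200)])
print(crw > rw)
```

For the uncorrelated walk the mean net squared displacement is n·step², the level-ground baseline against which terrain-induced changes in the step size distribution would show up.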

  11. Finite element model updating using the shadow hybrid Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the Posterior Distribution Function, which may not be available in analytical form. This is the case in FEM updating. In such cases, sampling methods can provide good approximations of the Posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) algorithm offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the Posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation, we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps, which it achieves by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
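A minimal HMC sampler for a 1-D standard normal target illustrates the time-step sensitivity that motivates SHMC; the shadow-Hamiltonian modification itself is not implemented here, and all parameters are illustrative:

```python
# Minimal Hybrid Monte Carlo for a 1-D standard normal target, showing how
# the MD (leapfrog) time step controls the Metropolis acceptance rate.
import numpy as np

def leapfrog(q, p, eps, n_leap):
    """Leapfrog trajectory for H = q**2/2 + p**2/2, so grad U(q) = q."""
    p -= 0.5 * eps * q                 # initial half step in momentum
    for _ in range(n_leap - 1):
        q += eps * p
        p -= eps * q
    q += eps * p
    p -= 0.5 * eps * q                 # final half step in momentum
    return q, p

def hmc(n_samples, eps, n_leap=20, seed=0):
    """Return samples of q and the acceptance rate."""
    rng = np.random.default_rng(seed)
    q, samples, accepted = 0.0, [], 0
    for _ in range(n_samples):
        p = rng.standard_normal()      # fresh momentum for each trajectory
        q_new, p_new = leapfrog(q, p, eps, n_leap)
        dh = 0.5 * (q_new**2 + p_new**2) - 0.5 * (q**2 + p**2)
        if rng.random() < np.exp(min(0.0, -dh)):   # Metropolis accept/reject
            q, accepted = q_new, accepted + 1
        samples.append(q)
    return np.array(samples), accepted / n_samples

samples, acc_small = hmc(4000, eps=0.1)
_, acc_large = hmc(4000, eps=2.1)      # beyond the harmonic stability limit eps = 2
print(acc_small > 0.9 and acc_large < acc_small)
print(abs(samples.mean()) < 0.1 and abs(samples.var() - 1.0) < 0.15)
```

SHMC's remedy is to accept against a shadow Hamiltonian that the leapfrog integrator conserves far better than H, so acceptance degrades much more slowly as the step size and system size grow.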

  12. Modeling and measurement of vesicle pools at the cone ribbon synapse: changes in release probability are solely responsible for voltage-dependent changes in release

    PubMed Central

    Thoreson, Wallace B.; Van Hook, Matthew J.; Parmelee, Caitlyn; Curto, Carina

    2015-01-01

    Post-synaptic responses are a product of quantal amplitude (Q), size of the releasable vesicle pool (N), and release probability (P). Voltage-dependent changes in presynaptic Ca2+ entry alter post-synaptic responses primarily by changing P but have also been shown to influence N. With simultaneous whole cell recordings from cone photoreceptors and horizontal cells in tiger salamander retinal slices, we measured N and P at cone ribbon synapses by using a train of depolarizing pulses to stimulate release and deplete the pool. We developed an analytical model that calculates the total pool size contributing to release under different stimulus conditions by taking into account the prior history of release and empirically-determined properties of replenishment. The model provided a formula that calculates vesicle pool size from measurements of the initial post-synaptic response and limiting rate of release evoked by a train of pulses, the fraction of release sites available for replenishment, and the time constant for replenishment. Results of the model showed that weak and strong depolarizing stimuli evoked release with differing probabilities but the same size vesicle pool. Enhancing intraterminal Ca2+ spread by lowering Ca2+ buffering or applying BayK8644 did not increase PSCs evoked with strong test steps showing there is a fixed upper limit to pool size. Together, these results suggest that light-evoked changes in cone membrane potential alter synaptic release solely by changing release probability. PMID:26541100

  13. Coupling a Reactive Transport Code with a Global Land Surface Model for Mechanistic Biogeochemistry Representation: 1. Addressing the Challenge of Nonnegativity

    DOE PAGES

    Tang, Guoping; Yuan, Fengming; Bisht, Gautam; ...

    2016-01-01

    Reactive transport codes (e.g., PFLOTRAN) are increasingly used to improve the representation of biogeochemical processes in terrestrial ecosystem models (e.g., the Community Land Model, CLM). As CLM and PFLOTRAN use explicit and implicit time stepping, respectively, implementation of CLM biogeochemical reactions in PFLOTRAN can result in negative concentrations, which are not physical and can cause numerical instability and errors. The objective of this work is to address the nonnegativity challenge to obtain accurate, efficient, and robust solutions. We illustrate the implementation of a reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant nitrogen uptake reactions and test the implementation at arctic, temperate, and tropical sites. We examine the use of scaling back the update during each iteration (SU), log transformation (LT), and downregulation of the reaction rate to account for reactant availability limitation, to enforce nonnegativity. Both SU and LT guarantee nonnegativity, but with implications. When a very small scaling factor occurs due to either consumption or numerical overshoot, and the iterations are deemed converged because of too small an update, SU can introduce excessive numerical error. LT involves multiplication of the Jacobian matrix by the concentration vector, which increases the condition number, decreases the time step size, and increases the computational cost. Neither SU nor LT prevents zero concentration. When the concentration is close to machine precision or 0, a small positive update stops all reactions for SU, and LT can fail due to a singular Jacobian matrix. The consumption rate has to be downregulated such that the solution to the mathematical representation is positive. A first-order rate downregulates consumption and is nonnegative, and adding a residual concentration makes it positive. 
For a zero-order rate, or when the reaction rate is not a function of a reactant, representing the availability limitation of each reactant with a Monod substrate-limiting function provides a smooth transition between a zero-order rate when the reactant is abundant and a first-order rate when the reactant becomes limiting. When the half saturation is small, marching through the transition may require small time step sizes to resolve the sharp change within a small range of concentration values. Our results from simple tests and CLM-PFLOTRAN simulations caution against the use of SU and indicate that accurate, stable, and relatively efficient solutions can be achieved with LT and with downregulation using a Monod substrate-limiting function and a residual concentration.
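The Monod substrate-limiting function described above is a one-line downregulation; a sketch with invented rate constants showing the smooth switch between the zero-order and first-order regimes:

```python
# Monod substrate limitation: rate = k_max * c / (c + k_half).
# Abundant reactant (c >> k_half): rate ~ k_max (zero order).
# Scarce reactant (c << k_half): rate ~ (k_max / k_half) * c (first order),
# and the rate vanishes exactly as c -> 0, which enforces nonnegativity.
def monod_rate(c, k_max, k_half):
    """Downregulated consumption rate for reactant concentration c."""
    return k_max * c / (c + k_half)

k_max, k_half = 1.0, 1e-3
print(round(monod_rate(1.0, k_max, k_half), 3))       # abundant: close to k_max
print(monod_rate(0.0, k_max, k_half))                 # exhausted: exactly 0
print(round(monod_rate(1e-6, k_max, k_half) / 1e-6, 1))  # scarce: slope near k_max/k_half
```

The small `k_half` chosen here also shows the abstract's caveat: the crossover happens over a concentration range of order k_half, so a time integrator must take small steps to resolve it.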

  14. Dependence of Hurricane intensity and structures on vertical resolution and time-step size

    NASA Astrophysics Data System (ADS)

    Zhang, Da-Lin; Wang, Xiaoxue

    2003-09-01

    In view of the growing interest in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step sizes on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with a finest grid size of 6 km. It is shown that changing vertical resolution and time-step size has significant effects on hurricane intensity and inner-core cloud/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure, stronger three-dimensional winds, and more precipitation. Similar effects, but to a lesser extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on the hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable to model more realistically the intensity, inner-core structures, and evolution of tropical storms as well as other convectively driven weather systems.

  15. Serial transverse enteroplasty to facilitate enteral autonomy in selected children with short bowel syndrome.

    PubMed

    Wester, T; Borg, H; Naji, H; Stenström, P; Westbacke, G; Lilja, H E

    2014-09-01

    Serial transverse enteroplasty (STEP) was first described in 2003 as a method for lengthening and tapering of the bowel in short bowel syndrome. The aim of this multicentre study was to review the outcome of a Swedish cohort of children who underwent STEP. All children who had a STEP procedure at one of the four centres of paediatric surgery in Sweden between September 2005 and January 2013 were included in this observational cohort study. Demographic details, and data from the time of STEP and at follow-up were collected from the case records and analysed. Twelve patients had a total of 16 STEP procedures; four children underwent a second STEP. The first STEP was performed at a median age of 5·8 (range 0·9-19·0) months. There was no death at a median follow-up of 37·2 (range 3·0-87·5) months and no child had small bowel transplantation. Seven of the 12 children were weaned from parenteral nutrition at a median of 19·5 (range 2·3-42·9) months after STEP. STEP is a useful procedure for selected patients with short bowel syndrome and seems to facilitate weaning from parenteral nutrition. At mid-term follow-up a majority of the children had achieved enteral autonomy. The study is limited by the small sample size and lack of a control group. © 2014 The Authors. BJS published by John Wiley & Sons Ltd on behalf of BJS Society Ltd.

  16. Lab-on-a-disc agglutination assay for protein detection by optomagnetic readout and optical imaging using nano- and micro-sized magnetic beads.

    PubMed

    Uddin, Rokon; Burger, Robert; Donolato, Marco; Fock, Jeppe; Creagh, Michael; Hansen, Mikkel Fougt; Boisen, Anja

    2016-11-15

    We present a biosensing platform for the detection of proteins based on agglutination of aptamer coated magnetic nano- or microbeads. The assay, from sample to answer, is integrated on an automated, low-cost microfluidic disc platform. This ensures fast and reliable results due to a minimum of manual steps involved. The detection of the target protein was achieved in two ways: (1) optomagnetic readout using magnetic nanobeads (MNBs); (2) optical imaging using magnetic microbeads (MMBs). The optomagnetic readout of agglutination is based on optical measurement of the dynamics of MNB aggregates whereas the imaging method is based on direct visualization and quantification of the average size of MMB aggregates. By enhancing magnetic particle agglutination via application of strong magnetic field pulses, we obtained identical limits of detection of 25 pM with the same sample-to-answer time (15 min 30 s) using the two differently sized beads for the two detection methods. In both cases a sample volume of only 10 µl is required. The demonstrated automation, low sample-to-answer time and portability of both detection instruments as well as integration of the assay on a low-cost disc are important steps for the implementation of these as portable tools in an out-of-lab setting. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Compressed ECG biometric: a fast, secured and efficient method for identification of CVD patient.

    PubMed

    Sufi, Fahim; Khalil, Ibrahim; Mahmood, Abdun

    2011-12-01

    Adoption of compression technology is often required for wireless cardiovascular monitoring, due to the enormous size of electrocardiography (ECG) signals and the limited bandwidth of the Internet. However, with existing ECG-based biometric techniques, compressed ECG must be decompressed before performing human identification. This additional decompression step creates a significant processing delay for the identification task, which becomes an obvious burden on a system if it must be repeated for trillions of compressed ECGs per hour in a hospital. Even though a hospital might be able to afford an expensive infrastructure to handle this processing load, for small intermediate nodes in a multihop network, identification preceded by decompression is prohibitive. In this paper, we report a technique by which a person can be identified directly from his or her compressed ECG. This technique completely obviates the decompression step and therefore makes biometric identification less demanding for the smaller nodes in a multihop network. The biometric template created by this new technique is smaller than existing ECG-based biometric templates as well as other forms of biometrics such as face, fingerprint, and retina (up to 8302 times smaller than a face template and 9 times smaller than an existing ECG-based biometric template). The smaller template substantially reduces the one-to-many matching time for biometric recognition, resulting in a faster biometric authentication mechanism.

  18. Finite Size Corrections to the Parisi Overlap Function in the GREM

    NASA Astrophysics Data System (ADS)

    Derrida, Bernard; Mottishaw, Peter

    2018-01-01

    We investigate the effects of finite size corrections on the overlap probabilities in the Generalized Random Energy Model in two situations where replica symmetry is broken in the thermodynamic limit. Our calculations do not use replicas, but shed some light on what the replica method should give for finite size corrections. In the gradual freezing situation, which is known to exhibit full replica symmetry breaking, we show that the finite size corrections lead to a modification of the simple relations between the sample averages of the overlaps Y_k between k configurations predicted by replica theory. This can be interpreted as fluctuations in the replica block size with a negative variance. The mechanism is similar to the one we found recently in the random energy model in Derrida and Mottishaw (J Stat Mech 2015(1): P01021, 2015). We also consider a simultaneous freezing situation, which is known to exhibit one step replica symmetry breaking. We show that finite size corrections lead to full replica symmetry breaking and give a more complete derivation of the results presented in Derrida and Mottishaw (Europhys Lett 115(4): 40005, 2016) for the directed polymer on a tree.

  19. A flow-free droplet-based device for high throughput polymorphic crystallization.

    PubMed

    Yang, Shih-Mo; Zhang, Dapeng; Chen, Wang; Chen, Shih-Chi

    2015-06-21

    Crystallization is one of the most crucial steps in the process of pharmaceutical formulation. In recent years, emulsion-based platforms have been developed and broadly adopted to generate high quality products. However, these conventional approaches such as stirring are still limited in several aspects, e.g., unstable crystallization conditions and broad size distribution; besides, only simple crystal forms can be produced. In this paper, we present a new flow-free droplet-based formation process for producing highly controlled crystallization with two examples: (1) NaCl crystallization reveals the ability to package saturated solution into nanoliter droplets, and (2) glycine crystallization demonstrates the ability to produce polymorphic crystallization forms by controlling the droplet size and temperature. In our process, the saturated solution automatically fills the microwell array powered by degassed bulk PDMS. A critical oil covering step is then introduced to isolate the saturated solution and control the water dissolution rate. Utilizing surface tension, the solution is uniformly packaged in the form of thousands of isolating droplets at the bottom of each microwell of 50-300 μm diameter. After water dissolution, individual crystal structures are automatically formed inside the microwell array. This approach facilitates the study of different glycine growth processes: α-form generated inside the droplets and γ-form generated at the edge of the droplets. With precise temperature control over nanoliter-sized droplets, the growth of ellipsoidal crystalline agglomerates of glycine was achieved for the first time. Optical and SEM images illustrate that the ellipsoidal agglomerates consist of 2-5 μm glycine clusters with inner spiral structures of ~35 μm screw pitch. 
Lastly, the size distribution of spherical crystalline agglomerates (SAs) produced from microwells of different sizes was measured to have a coefficient of variation (CV) of less than 5%, showing that crystal sizes can be precisely controlled by microwell size with high uniformity. This new method can be used to reliably fabricate monodisperse crystals for pharmaceutical applications.

  20. A new theory for multistep discretizations of stiff ordinary differential equations: Stability with large step sizes

    NASA Technical Reports Server (NTRS)

    Majda, G.

    1985-01-01

    A large set of variable coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale, but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
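
    The structural difference between a linear multistep method and its one-leg twin can be illustrated with the trapezoidal rule on the scalar variable-coefficient test problem y' = lam(t)*y. This is a hedged sketch, not the paper's analysis: the coefficient function, step size, and step count are illustrative choices, and the scalar implicit solves are done in closed form.

```python
import math

# Trapezoidal rule (linear multistep form) vs. its one-leg twin on
# y' = lam(t) * y, where lam(t) is fast and always negative. The step
# size h resolves the slow O(1) scale but not the fast 1/|lam| scale.

def lam(t):
    return -1e4 * (1.0 + 0.9 * math.sin(10.0 * t))  # stiff, Re(lam) < 0

h = 0.1  # large step: h * |lam| is on the order of 1000

def trapezoidal_step(t, y):
    # Linear multistep form: y1 = y + (h/2) * (lam(t)*y + lam(t+h)*y1),
    # which samples the coefficient at the two grid points separately.
    return y * (1 + h * lam(t) / 2) / (1 - h * lam(t + h) / 2)

def one_leg_step(t, y):
    # One-leg twin: y1 = y + h * lam(t + h/2) * (y + y1) / 2,
    # which evaluates the right-hand side once, at averaged arguments.
    z = h * lam(t + h / 2) / 2
    return y * (1 + z) / (1 - z)

# The one-leg amplification factor (1+z)/(1-z) has magnitude < 1
# whenever Re(lam) < 0, for any step size h.
y = 1.0
for n in range(200):
    y = one_leg_step(n * h, y)
assert abs(y) <= 1.0  # bounded despite the step ignoring the fast scale
```

    For constant coefficients the two twins coincide; the distinction studied in the paper only appears for variable-coefficient stiff systems.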

  1. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of calculation cells should be small and the calculation should be transient with a small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that simulation is difficult in practice. This study investigates the cell size and time step size required for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.

  2. Blind beam-hardening correction from Poisson measurements

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2016-02-01

    We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
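
    The step-size selection idea, adapting the proximal-gradient step to the local Lipschitz constant by backtracking, can be sketched on a toy problem. This is a hypothetical stand-in, not the authors' algorithm: the cost is a small quadratic rather than the Poisson NLL, the prox is projection onto the nonnegative orthant rather than the TV-penalized prox, and all names and constants are illustrative.

```python
import numpy as np

# Toy proximal-gradient iteration with backtracking step-size selection.
# The backtracking test accepts a step only when the local quadratic
# majorization holds, which is how a varying local Lipschitz constant
# is accounted for without knowing it in advance.

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, -2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x    # smooth surrogate cost
grad = lambda x: A @ x - b
prox = lambda x: np.maximum(x, 0.0)      # enforce nonnegativity

def npg_step(x, step, shrink=0.5):
    """One projected-gradient step; shrink the step until the quadratic
    upper bound f(x) + g.d + |d|^2/(2*step) holds at the candidate."""
    while True:
        x_new = prox(x - step * grad(x))
        d = x_new - x
        if f(x_new) <= f(x) + grad(x) @ d + (d @ d) / (2 * step):
            return x_new, step
        step *= shrink

x, step = np.ones(2), 10.0               # deliberately oversized initial step
for _ in range(100):
    x, step = npg_step(x, step)
assert np.all(x >= 0.0)                  # nonnegativity holds at every iterate
```

    For this quadratic the constrained minimizer is (1/3, 0), and the iteration converges to it; in the paper's setting the same backtracking logic is applied to the Poisson NLL, whose curvature varies over the domain.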

  3. Reduction of the discretization stencil of direct forcing immersed boundary methods on rectangular cells: The ghost node shifting method

    NASA Astrophysics Data System (ADS)

    Picot, Joris; Glockner, Stéphane

    2018-07-01

    We present an analytical study of discretization stencils for the Poisson problem and the incompressible Navier-Stokes problem when used with some direct forcing immersed boundary methods. This study uses, but is not limited to, second-order discretization and Ghost-Cell Finite-Difference methods. We show that the stencil size increases with the aspect ratio of rectangular cells, which is undesirable as it breaks assumptions of some linear system solvers. To circumvent this drawback, a modification of the Ghost-Cell Finite-Difference methods is proposed to reduce the size of the discretization stencil to the one observed for square cells, i.e. with an aspect ratio equal to one. Numerical results validate this proposed method in terms of accuracy and convergence, for the Poisson problem and both Dirichlet and Neumann boundary conditions. An improvement on error levels is also observed. In addition, we show that the application of the chosen Ghost-Cell Finite-Difference methods to the Navier-Stokes problem, discretized by a pressure-correction method, requires an additional interpolation step. This extra step is implemented and validated through well known test cases of the Navier-Stokes equations.

  4. Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.

    2016-12-01

    The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187

  5. Analysis of stability for stochastic delay integro-differential equations.

    PubMed

    Zhang, Yu; Li, Longsuo

    2018-01-01

    In this paper, we are concerned with the stability of numerical methods applied to stochastic delay integro-differential equations. For linear stochastic delay integro-differential equations, it is shown that mean-square stability is obtained by the split-step backward Euler method without any restriction on the step size, while the Euler-Maruyama method reproduces mean-square stability only under a step-size constraint. We also confirm the mean-square stability of the split-step backward Euler method for nonlinear stochastic delay integro-differential equations. Numerical experiments further verify the theoretical results.
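
    The split-step backward Euler idea, an implicit drift sub-step followed by an explicit stochastic sub-step, can be sketched on a scalar linear stochastic delay equation. This is an illustrative sketch only: the equation dy = (a*y(t) + b*y(t - tau)) dt + c*y(t) dW is a delay (not integro-differential) test problem, and the parameters are hypothetical values chosen in the mean-square stable regime a + |b| + c**2/2 < 0.

```python
import numpy as np

# Split-step backward Euler (SSBE) for dy = (a*y + b*y_delayed) dt + c*y dW,
# using a deliberately large step size h. The drift sub-step is implicit
# and, being linear and scalar here, is solved in closed form.

rng = np.random.default_rng(0)
a, b, c, tau = -4.0, 1.0, 0.5, 1.0
h, T, paths = 0.5, 10.0, 2000
m = int(round(tau / h))                # delay measured in steps
steps = int(round(T / h))

y = np.ones((paths, steps + m + 1))    # constant initial history = 1
for n in range(m, m + steps):
    # Implicit (backward Euler) drift sub-step:
    y_star = (y[:, n] + h * b * y[:, n - m]) / (1.0 - h * a)
    # Explicit stochastic sub-step:
    dW = rng.normal(0.0, np.sqrt(h), paths)
    y[:, n + 1] = y_star + c * y_star * dW

ms = np.mean(y[:, -1] ** 2)            # sample mean-square at t = T
assert ms < 0.1                        # decays despite the large step size
```

    Replacing the implicit sub-step with an explicit Euler-Maruyama drift term would require a step-size restriction to keep the mean-square bounded, which is the contrast the abstract describes.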

  6. Synchrotron Infrared Confocal Microspectroscopical Detection of Heterogeneity Within Chemically Modified Single Starch Granules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetzel, D.; Shi, Y; Reffner, J

    This reports the first detection of chemical heterogeneity in octenyl succinic anhydride modified single starch granules using a Fourier transform infrared (FT-IR) microspectroscopical technique that combines diffraction-limited infrared microspectroscopy with a step size that is less than the mask-projected spot size focused on the plane of the sample. The high spatial resolution was achieved by combining a synchrotron infrared source with the confocal image plane masking system of the double-pass single-mask Continuum® infrared microscope. Starch from grains such as corn and wheat exists in granules. The size of the granules depends on the plant producing the starch. Granules used in this study typically had a median size of 15 µm. In the production of modified starch, an acid anhydride typically is reacted with OH groups of the starch polymer. The resulting esterification adds the ester carbonyl (1723 cm⁻¹) organic functional group to the polymer, and the hydrocarbon chain of the ester contributes to the CH₂ stretching vibration to enhance the intensity of the 2927 cm⁻¹ band. Detection of the relative modifying population on a single granule was accomplished by ratioing the baseline-adjusted peak area of the carbonyl functional group to that of a carbohydrate band. By stepping a confocally defined infrared beam as small as 5 µm × 5 µm across a starch granule 1 µm at a time in both the x and y directions, the heterogeneity is detected with the highest possible spatial resolution.

  7. Assembly, growth, and catalytic activity of gold nanoparticles in hollow carbon nanofibers.

    PubMed

    La Torre, Alessandro; Giménez-López, Maria del Carmen; Fay, Michael W; Rance, Graham A; Solomonsz, William A; Chamberlain, Thomas W; Brown, Paul D; Khlobystov, Andrei N

    2012-03-27

    Graphitized carbon nanofibers (GNFs) act as efficient templates for the growth of gold nanoparticles (AuNPs) adsorbed on the interior (and exterior) of the tubular nanostructures. Encapsulated AuNPs are stabilized by interactions with the step-edges of the individual graphitic nanocones, of which GNFs are composed, and their size is limited to approximately 6 nm, while AuNPs adsorbed on the atomically flat graphitic surfaces of the GNF exterior continue their growth to 13 nm and beyond under the same heat treatment conditions. The corrugated structure of the GNF interior imposes a significant barrier for the migration of AuNPs, so that their growth mechanism is restricted to Ostwald ripening. Conversely, nanoparticles adsorbed on smooth GNF exterior surfaces are more likely to migrate and coalesce into larger nanoparticles, as revealed by in situ transmission electron microscopy imaging. The presence of alkyl thiol surfactant within the GNF channels changes the dynamics of the AuNP transformations, as surfactant molecules adsorbed on the surface of the AuNPs diminished the stabilization effect of the step-edges, thus allowing nanoparticles to grow until their diameters reach the internal diameter of the host nanofiber. Nanoparticles thermally evolved within the GNF channel exhibit alignment, perpendicular to the GNF axis due to interactions with the step-edges and parallel to the axis because of graphitic facets of the nanocones. Despite their small size, AuNPs in GNF possess high stability and remain unchanged at temperatures up to 300 °C in ambient atmosphere. Nanoparticles immobilized at the step-edges within GNF are shown to act as effective catalysts promoting the transformation of dimethylphenylsilane to bis(dimethylphenyl)disiloxane with a greater than 10-fold enhancement of selectivity as compared to free-standing or surface-adsorbed nanoparticles. © 2012 American Chemical Society

  8. Semi-automated hydrophobic interaction chromatography column scouting used in the two-step purification of recombinant green fluorescent protein.

    PubMed

    Stone, Orrin J; Biette, Kelly M; Murphy, Patrick J M

    2014-01-01

    Hydrophobic interaction chromatography (HIC) most commonly requires experimental determination (i.e., scouting) in order to select an optimal chromatographic medium for purifying a given target protein. Neither a two-step purification of untagged green fluorescent protein (GFP) from crude bacterial lysate using sequential HIC and size exclusion chromatography (SEC), nor HIC column scouting elution profiles of GFP, have been previously reported. Bacterial lysate expressing recombinant GFP was sequentially adsorbed to commercially available HIC columns containing butyl, octyl, and phenyl-based HIC ligands coupled to matrices of varying bead size. The lysate was fractionated using a linear ammonium phosphate salt gradient at constant pH. Collected HIC eluate fractions containing retained GFP were then pooled and further purified using high-resolution preparative SEC. Significant differences in presumptive GFP elution profiles were observed using in-line absorption spectrophotometry (A395) and post-run fluorimetry. SDS-PAGE and western blot demonstrated that fluorometric detection was the more accurate indicator of GFP elution in both HIC and SEC purification steps. Comparison of composite HIC column scouting data indicated that a phenyl ligand coupled to a 34 µm matrix produced the highest degree of target protein capture and separation. Conducting two-step protein purification using the preferred HIC medium followed by SEC resulted in a final, concentrated product with >98% protein purity. In-line absorbance spectrophotometry was not as precise of an indicator of GFP elution as post-run fluorimetry. These findings demonstrate the importance of utilizing a combination of detection methods when evaluating purification strategies. 
GFP is a well-characterized model protein, used heavily in educational settings and by researchers with limited protein purification experience, and the data and strategies presented here may aid in the development of other HIC-compatible protein purification schemes.

  9. Designing a diverse high-quality library for crystallography-based FBDD screening.

    PubMed

    Tounge, Brett A; Parker, Michael H

    2011-01-01

    A well-chosen set of fragments is able to cover a large chemical space using a small number of compounds. The actual size and makeup of the fragment set is dependent on the screening method, since each technique has its own practical limits in terms of the number of compounds that can be screened and requirements for compound solubility. In this chapter, an overview of the general requirements for a fragment library is presented for different screening platforms. In the case of the FBDD work at Johnson & Johnson Pharmaceutical Research and Development, L.L.C., our main screening technology is X-ray crystallography. Since every soaked protein crystal needs to be diffracted and a protein structure determined to establish whether a fragment binds, the size of our initial screening library cannot be a rate-limiting factor. For this reason, we have chosen 900 as the appropriate primary fragment library size. To choose the best set, we have developed our own mix of simple property ("Rule of 3") and "bad" substructure filtering. While this goes a long way toward limiting the fragment pool, there are still tens of thousands of compounds to choose from after this initial step. Many of the choices left at this stage are not drug-like, so we have developed an FBDD Score to help select a 900-compound set. The details of this score and the filtering are presented. Copyright © 2011 Elsevier Inc. All rights reserved.
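
    A property pre-filter of the kind mentioned above can be sketched as follows. This is a hypothetical illustration, not the chapter's actual pipeline: the thresholds follow the commonly cited "Rule of 3" (molecular weight ≤ 300, hydrogen-bond donors ≤ 3, acceptors ≤ 3, clogP ≤ 3), and the record fields and example values are invented.

```python
# Hypothetical "Rule of 3" property filter over simple fragment records.
# Field names (mw, hbd, hba, clogp) and the sample fragments are
# illustrative; substructure filtering and the FBDD Score would follow
# as separate stages.

def passes_rule_of_three(frag):
    return (frag["mw"] <= 300        # molecular weight
            and frag["hbd"] <= 3     # hydrogen-bond donors
            and frag["hba"] <= 3     # hydrogen-bond acceptors
            and frag["clogp"] <= 3.0)

fragments = [
    {"name": "frag_a", "mw": 152.1, "hbd": 1, "hba": 2, "clogp": 1.3},
    {"name": "frag_b", "mw": 420.5, "hbd": 2, "hba": 5, "clogp": 4.1},
]
survivors = [f["name"] for f in fragments if passes_rule_of_three(f)]
assert survivors == ["frag_a"]  # frag_b fails on weight, acceptors, and clogp
```
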

  10. Process for preparation of large-particle-size monodisperse latexes

    NASA Technical Reports Server (NTRS)

    Vanderhoff, J. W.; Micale, F. J.; El-Aasser, M. S.; Kornfeld, D. M. (Inventor)

    1981-01-01

    Monodisperse latexes having a particle size in the range of 2 to 40 microns are prepared by seeded emulsion polymerization in microgravity. A reaction mixture containing smaller monodisperse latex seed particles, predetermined amounts of monomer, emulsifier, initiator, inhibitor and water is placed in a microgravity environment, and polymerization is initiated by heating. The reaction is allowed to continue until the seed particles grow to a predetermined size, and the resulting enlarged particles are then recovered. A plurality of particle-growing steps can be used to reach larger sizes within the stated range, with enlarged particles from the previous steps being used as seed particles for the succeeding steps. Microgravity enables preparation of particles in the stated size range by avoiding the gravity-related problems of creaming and settling, and of flocculation induced by mechanical shear, that have precluded their preparation in a normal gravity environment.

  11. Energy of Supported Metal Catalysts: From Single Atoms to Large Metal Nanoparticles

    DOE PAGES

    James, Trevor E.; Hemmingson, Stephanie L.; Campbell, Charles T.

    2015-08-14

    It is known that many catalysts consist of late transition metal nanoparticles dispersed across oxide supports. The chemical potential of the metal atoms in these particles correlates with their catalytic activity and long-term thermal stability. This chemical potential versus particle size, across the full size range between the single isolated atom and bulklike limits, is reported here for the first time for any metal on any oxide. The chemical potential of Cu atoms on CeO 2(111) surfaces, determined by single crystal adsorption calorimetry of gaseous Cu atoms onto slightly reduced CeO 2(111) at 100 and 300 K, is shown to decrease dramatically with increasing Cu cluster size. The Cu chemical potential is ~110 kJ/mol higher for isolated Cu adatoms on stoichiometric terrace sites than for Cu in nanoparticles exceeding 2.5 nm diameter, where it reaches the bulk Cu(solid) limit. In Cu dimers, Cu's chemical potential is ~57 kJ/mol lower at step edges than on stoichiometric terrace sites. Since Cu avoids oxygen vacancies, these monomer and dimer results are not strongly influenced by the 2.5% oxygen vacancies present on this CeO 2 surface and are thus considered representative of stoichiometric CeO 2(111) surfaces.

  12. Combining gas-phase electrophoretic mobility molecular analysis (GEMMA), light scattering, field flow fractionation and cryo electron microscopy in a multidimensional approach to characterize liposomal carrier vesicles

    PubMed Central

    Gondikas, Andreas; von der Kammer, Frank; Hofmann, Thilo; Marchetti-Deschmann, Martina; Allmaier, Günter; Marko-Varga, György; Andersson, Roland

    2017-01-01

    For drug delivery, characterization of liposomes regarding size, particle number concentrations, occurrence of low-sized liposome artefacts and drug encapsulation are of importance to understand their pharmacodynamic properties. In our study, we aimed to demonstrate the applicability of nano Electrospray Gas-Phase Electrophoretic Mobility Molecular Analyser (nES GEMMA) as a suitable technique for analyzing these parameters. We measured number-based particle concentrations, identified differences in size between nominally identical liposomal samples, and detected the presence of low-diameter material which yielded bimodal particle size distributions. Subsequently, we compared these findings to dynamic light scattering (DLS) data and results from light scattering experiments coupled to Asymmetric Flow-Field Flow Fractionation (AF4), the latter improving the detectability of smaller particles in polydisperse samples due to a size separation step prior to detection. However, the bimodal size distribution could not be detected due to method-inherent limitations. In contrast, cryo transmission electron microscopy corroborated nES GEMMA results. Hence, gas-phase electrophoresis proved to be a versatile tool for liposome characterization as it could analyze both vesicle size and size distribution. Finally, a correlation of nES GEMMA results with cell viability experiments was carried out to demonstrate the importance of liposome batch-to-batch control as low-sized sample components possibly impact cell viability. PMID:27639623

  13. High efficient perovskite solar cell material CH3NH3PbI3: Synthesis of films and their characterization

    NASA Astrophysics Data System (ADS)

    Bera, Amrita Mandal; Wargulski, Dan Ralf; Unold, Thomas

    2018-04-01

    Hybrid organometal perovskites have emerged as promising solar cell materials and have exhibited solar cell efficiencies of more than 20%. Thin films of the methylammonium lead iodide (CH3NH3PbI3) perovskite were synthesized by two different methods (one-step and two-step), and their morphological properties were studied by scanning electron microscopy and optical microscope imaging. The morphology of the perovskite layer is one of the most important parameters affecting solar cell efficiency. The film morphology revealed that the two-step method provides better surface coverage than the one-step method; however, the grain sizes were smaller for the two-step method. Films prepared by the two-step method on different substrates revealed that the grain size also depends on the substrate: an increase of the grain size was found from glass substrate to FTO with a TiO2 blocking layer to FTO, without any change in the surface coverage area. The present study reveals that an improved film quality can be obtained with the two-step method through optimization of the synthesis processes.

  14. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    NASA Astrophysics Data System (ADS)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration by selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors stay under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Bulirsch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point tn a new step size hn+1 = h(tn; δ) so that h(t; δ) is a continuous function of t. In this paper we study the tolerance proportionality property under a discontinuous step-size policy that does not allow the step size to change when the ratio between two consecutive step sizes is close to unity. This theory is applied to obtain global error estimates for a few problems that were solved with the code Gauss2 [S. González-Pinto, R. Rojas-Bello, Gauss2, a Fortran 90 code for second order initial value problems], based on an adaptive two-stage Runge-Kutta-Gauss method with this discontinuous step-size policy.
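    The local-error step-size control described above can be sketched with a minimal embedded pair (Euler/Heun, orders 1/2); the controller h ← 0.9·h·(δ/err)^(1/(p+1)) and its growth/shrink bounds are standard textbook choices, not taken from the Gauss2 code:

```python
import math

def integrate_adaptive(f, t0, y0, t_end, tol):
    """Advance an IVP y' = f(t, y), choosing each step size so that the
    local error estimate stays near the user tolerance `tol`."""
    t, y = t0, y0
    h = (t_end - t0) / 100.0                 # initial step-size guess
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_euler = y + h * k1                 # order-1 solution
        y_heun = y + 0.5 * h * (k1 + k2)     # order-2 solution
        err = abs(y_heun - y_euler)          # local error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, y_heun             # accept (local extrapolation)
        # standard controller: h_new = h * (tol/err)^(1/(p+1)) with p = 1,
        # a 0.9 safety factor, and bounded growth/shrink ratios
        h *= min(5.0, max(0.1, 0.9 * math.sqrt(tol / max(err, 1e-16))))
    return y

# y' = y, y(0) = 1, so y(1) = e
approx = integrate_adaptive(lambda t, y: y, 0.0, 1.0, 1.0, 1e-6)
```

Note that the accepted solution is the higher-order one, so the controlled quantity (the inter-order difference) overestimates the error actually committed per step.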

  15. Prospective relationships between body weight and physical activity: an observational analysis from the NAVIGATOR study.

    PubMed

    Preiss, David; Thomas, Laine E; Wojdyla, Daniel M; Haffner, Steven M; Gill, Jason M R; Yates, Thomas; Davies, Melanie J; Holman, Rury R; McMurray, John J; Califf, Robert M; Kraus, William E

    2015-08-14

    While bidirectional relationships exist between body weight and physical activity, the direction of causality remains uncertain, and previous studies have been limited by self-reported activity or weight and small sample sizes. We investigated the prospective relationships between weight and physical activity. Observational analysis of data from the Nateglinide And Valsartan in Impaired Glucose Tolerance Outcomes Research (NAVIGATOR) study, a double-blinded randomised clinical trial of nateglinide and valsartan. Multinational study of 9306 participants. Participants with biochemically confirmed impaired glucose tolerance had annual measurements of both weight and step count using research-grade pedometers, worn for 7 consecutive days. Along with randomisation to valsartan or placebo plus nateglinide or placebo, participants took part in a lifestyle modification programme. Longitudinal regression using weight as response value and physical activity as predictor value was conducted, adjusted for baseline covariates. The analysis was then repeated with physical activity as response value and weight as predictor value. Only participants with a response value preceded by at least three annual response values were included. Adequate data were available for 2811 (30%) of NAVIGATOR participants. Previous weight (χ²=16.8; p<0.0001), but not change in weight (χ²=0.1; p=0.71), was inversely associated with subsequent step count, indicating lower subsequent levels of physical activity in heavier individuals. Change in step count (χ²=5.9; p=0.02), but not previous step count (χ²=0.9; p=0.34), was inversely associated with subsequent weight. However, in the context of trajectories already established for weight (χ² for previous weight measurements 747.3; p<0.0001) and physical activity (χ² for previous step count 432.6; p<0.0001), these effects were of limited clinical importance.
While a prospective bidirectional relationship was observed between weight and physical activity, the magnitude of any effect was very small in the context of natural trajectories already established for these variables. NCT00097786. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  16. Standing wave design and optimization of a simulated moving bed chromatography for separation of xylobiose and xylose under the constraints on product concentration and pressure drop.

    PubMed

    Lee, Chung-Gi; Choi, Jae-Hwan; Park, Chanhun; Wang, Nien-Hwa Linda; Mun, Sungyong

    2017-12-08

    The feasibility of a simulated moving bed (SMB) technology for the continuous separation of high-purity xylobiose (X2) from the output of a β-xylosidase X1→X2 reaction has recently been confirmed. To ensure high economic efficiency of the X2 production method based on the use of xylose (X1) as a starting material, it is essential to carry out a comprehensive optimization of the X2-separation SMB process such that its X2 productivity is maximized while keeping the X2 product concentration from the SMB as high as possible in consideration of a subsequent lyophilization step. To address this issue, a suitable SMB optimization tool for this task was prepared based on standing wave design theory. The tool was then used to optimize the SMB operation parameters, column configuration, total column number, adsorbent particle size, and X2 yield while meeting the constraints on X2 purity, X2 product concentration, and pressure drop. The results showed that the use of a larger particle size caused the productivity to be limited by the constraint on X2 product concentration, and a maximum productivity was attained by choosing the particle size such that the effect of the X2-concentration limiting factor was balanced with that of the pressure-drop limiting factor. If the target level of X2 product concentration was elevated, higher productivity could be achieved by decreasing the particle size, raising the level of X2 yield, and increasing the number of columns in the zones containing the front and rear of the X2 solute band. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. [Influence on microstructure of dental zirconia ceramics prepared by two-step sintering].

    PubMed

    Jian, Chao; Li, Ning; Wu, Zhikai; Teng, Jing; Yan, Jiazhen

    2013-10-01

    To investigate the microstructure of dental zirconia ceramics prepared by two-step sintering. Nanostructured zirconia powder was dry compacted, cold isostatically pressed, and pre-sintered. The pre-sintered discs were cut and processed into samples. Conventional sintering, single-step sintering, and two-step sintering were carried out, and the density and grain size of the samples were measured. The ranges of T1 and T2 for two-step sintering were then determined. The effects on microstructure of the different routes (two-step versus conventional sintering) were discussed, and the influence of T1 and T2 on density and grain size was analyzed. The range of T1 was between 1450 degrees C and 1550 degrees C, and the range of T2 was between 1250 degrees C and 1350 degrees C. Compared with conventional sintering, a finer microstructure with higher density and smaller grain size could be obtained by two-step sintering. Grain growth was dependent on T1, whereas density was not strongly related to T1. Conversely, density was dependent on T2, while grain size was minimally influenced by it. Two-step sintering can thus yield a sintered body with high density and small grain size, which is good for optimizing the microstructure of dental zirconia ceramics.

  18. Optimal setups for forced-choice staircases with fixed step sizes.

    PubMed

    García-Pérez, M A

    2000-01-01

    Forced-choice staircases with fixed step sizes are used in a variety of formats whose relative merits have never been studied. This paper presents a comparative study aimed at determining their optimal format. Factors included in the study were the up/down rule, the length (number of reversals), and the size of the steps. The study also addressed the issue of whether a protocol involving three staircases running for N reversals each (with a subsequent average of the estimates provided by each individual staircase) has better statistical properties than an alternative protocol involving a single staircase running for 3N reversals. In all cases the size of a step up was different from that of a step down, in the appropriate ratio determined by García-Pérez (Vision Research, 1998, 38, 1861-1881). The results of a simulation study indicate that a) there are no conditions in which the 1-down/1-up rule is advisable; b) different combinations of up/down rule and number of reversals appear equivalent in terms of precision and cost; c) using a single long staircase with 3N reversals is more efficient than running three staircases with N reversals each; d) to avoid bias and attain sufficient accuracy, threshold estimates should be based on at least 30 reversals; and e) to avoid excessive cost and imprecision, the size of the step up should be between 2/3 and 3/3 of the (known or presumed) spread of the psychometric function. An empirical study with human subjects confirmed the major characteristics revealed by the simulations.
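    A minimal simulation of one such staircase (2-down/1-up, fixed asymmetric steps, threshold estimated from reversal levels). The simulated 2AFC observer, its logistic psychometric function, and the step-down/step-up ratio of 0.74 are illustrative assumptions in the spirit of the recommended ratios, not values taken from the paper:

```python
import math
import random

def simulate_staircase(true_thresh=0.0, spread=1.0, n_reversals=30, seed=42):
    """2-down/1-up forced-choice staircase with fixed, asymmetric steps.
    Step-up size is 2/3 of the psychometric spread; the step-down size is
    0.74 * step_up (illustrative ratio for the 2-down/1-up rule)."""
    rng = random.Random(seed)
    step_up = (2.0 / 3.0) * spread
    step_down = 0.74 * step_up
    x = true_thresh + 2.0 * spread         # start well above threshold
    correct_run, last_dir = 0, 0
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        # 2AFC observer: 0.5 guessing rate, logistic psychometric function
        p = 0.5 + 0.5 / (1.0 + math.exp(-(x - true_thresh) / (spread / 4)))
        if rng.random() < p:               # correct response
            correct_run += 1
            if correct_run == 2:           # 2-down: step down after 2 correct
                correct_run = 0
                if last_dir == +1:         # direction change -> reversal
                    reversal_levels.append(x)
                x, last_dir = x - step_down, -1
        else:                              # 1-up: step up after any error
            correct_run = 0
            if last_dir == -1:
                reversal_levels.append(x)
            x, last_dir = x + step_up, +1
    return sum(reversal_levels) / len(reversal_levels)

est = simulate_staircase()                 # should land near the threshold
```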

  19. Ab initio calculations of optical properties of silver clusters: cross-over from molecular to nanoscale behavior

    NASA Astrophysics Data System (ADS)

    Titantah, John T.; Karttunen, Mikko

    2016-05-01

    Electronic and optical properties of silver clusters were calculated using two different ab initio approaches: (1) based on all-electron full-potential linearized-augmented plane-wave method and (2) local basis function pseudopotential approach. Agreement is found between the two methods for small and intermediate sized clusters for which the former method is limited due to its all-electron formulation. The latter, due to non-periodic boundary conditions, is the more natural approach to simulate small clusters. The effect of cluster size is then explored using the local basis function approach. We find that as the cluster size increases, the electronic structure undergoes a transition from molecular behavior to nanoparticle behavior at a cluster size of 140 atoms (diameter ~1.7 nm). Above this cluster size the step-like electronic structure, evident as several features in the imaginary part of the polarizability of all clusters smaller than Ag147, gives way to a dominant plasmon peak localized at wavelengths 350 nm ≤ λ ≤ 600 nm. It is, thus, at this length-scale that the conduction electrons' collective oscillations that are responsible for plasmonic resonances begin to dominate the opto-electronic properties of silver nanoclusters.

  20. Double emulsion formation through hierarchical flow-focusing microchannel

    NASA Astrophysics Data System (ADS)

    Azarmanesh, Milad; Farhadi, Mousa; Azizian, Pooya

    2016-03-01

    A microfluidic device is presented for creating double emulsions, controlling their sizes, and manipulating the encapsulation process. Double emulsions are produced through the interaction of three immiscible liquids undergoing dripping instability. The effects of three dimensionless numbers are investigated: the Weber number of the inner phase (Wein), the Capillary number of the inner droplet (Cain), and the Capillary number of the outer droplet (Caout). They affect the formation process, the inner and outer droplet sizes, and the separation frequency. Direct numerical simulation of the governing equations was performed using the volume-of-fluid method and an adaptive mesh refinement technique. Two kinds of double emulsion formation, two-step and one-step, were simulated, in which the thickness of the sheath of the double emulsions can be adjusted. Altering each dimensionless number changes the detachment location, the outer droplet size, and the droplet formation period. Moreover, a decussate double-emulsion/empty-droplet regime is observed at low Wein; this regime can be obtained by adjusting Wein, at which the maximum sheath size is found. The results also show that Cain has a significant influence on the outer droplet size in the two-step process, while Caout considerably affects the sheath in the one-step formation.
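    The dimensionless groups that control the regime are straightforward to evaluate; the fluid property values below are arbitrary illustrations, not those of the simulated liquids:

```python
def weber(rho, u, length, sigma):
    """Weber number We = rho * u^2 * L / sigma (inertia vs. interfacial tension)."""
    return rho * u ** 2 * length / sigma

def capillary(mu, u, sigma):
    """Capillary number Ca = mu * u / sigma (viscous stress vs. interfacial tension)."""
    return mu * u / sigma

# illustrative values: a water-like phase in a 100-micron channel
we_in = weber(rho=1000.0, u=0.05, length=1e-4, sigma=0.03)   # ~0.0083
ca_out = capillary(mu=1e-3, u=0.05, sigma=0.03)              # ~0.0017
```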

  1. Cellular packing, mechanical stress and the evolution of multicellularity

    NASA Astrophysics Data System (ADS)

    Jacobeen, Shane; Pentz, Jennifer T.; Graba, Elyes C.; Brandys, Colin G.; Ratcliff, William C.; Yunker, Peter J.

    2018-03-01

    The evolution of multicellularity set the stage for sustained increases in organismal complexity1-5. However, a fundamental aspect of this transition remains largely unknown: how do simple clusters of cells evolve increased size when confronted by forces capable of breaking intercellular bonds? Here we show that multicellular snowflake yeast clusters6-8 fracture due to crowding-induced mechanical stress. Over seven weeks (about 291 generations) of daily selection for large size, snowflake clusters evolve to increase their radius 1.7-fold by reducing the accumulation of internal stress. During this period, cells within the clusters evolve to be more elongated, concomitant with a decrease in the cellular volume fraction of the clusters. The associated increase in free space reduces the internal stress caused by cellular growth, thus delaying fracture and increasing cluster size. This work demonstrates how readily natural selection finds simple, physical solutions to spatial constraints that limit the evolution of group size, a fundamental step in the evolution of multicellularity.

  2. Valid approximation of spatially distributed grain size distributions - A priori information encoded to a feedforward network

    NASA Astrophysics Data System (ADS)

    Berthold, T.; Milbradt, P.; Berkhahn, V.

    2018-04-01

    This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network does not guarantee to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited in its application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples, which were obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
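    The role of the weight constraints can be illustrated with a minimal sketch: non-negative input-to-hidden weights and normalized non-negative mixture weights force the network output to be a valid CDF (monotone non-decreasing, bounded in (0, 1)). All parameter values below are invented for illustration, not fitted to any sediment data:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def monotone_cdf(x, weights, slopes, biases):
    """Single-input feedforward evaluation under CDF constraints:
    mixture weights are non-negative and normalized to sum to 1, and the
    input-to-hidden slopes are non-negative, so the output F(x) is
    non-decreasing in x with range (0, 1)."""
    total = sum(weights)
    return sum((w / total) * sigmoid(a * x + b)
               for w, a, b in zip(weights, slopes, biases))

# a bimodal grain-size CDF sketch built from two sigmoidal "modes"
W, A, B = [0.4, 0.6], [3.0, 5.0], [2.0, -6.0]
samples = [monotone_cdf(x, W, A, B) for x in (-5, -1, 0, 1, 2, 5)]
```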

  3. Prospective Optimization with Limited Resources

    PubMed Central

    Snider, Joseph; Lee, Dongpyo; Poizner, Howard; Gepshtein, Sergei

    2015-01-01

    The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. Here we studied how humans select actions under such extrinsic and intrinsic uncertainty, in view of an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touchscreen at a variable speed. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching one disk at a time in a rapid sequence, forming an upward path across the grid, while every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. By comparing human behavior with behavior of ideal actors, we identified the strategies used by humans in terms of how far into the future they looked (their “depth of computation”) and how often they attempted to incorporate new information about the future rewards (their “recalculation period”). We found that, for a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and that they abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase the recalculation period rather than sacrifice the precision of computation. PMID:26367309

  4. Prospective Optimization with Limited Resources.

    PubMed

    Snider, Joseph; Lee, Dongpyo; Poizner, Howard; Gepshtein, Sergei

    2015-09-01

    The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. Here we studied how humans select actions under such extrinsic and intrinsic uncertainty, in view of an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touchscreen at a variable speed. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching one disk at a time in a rapid sequence, forming an upward path across the grid, while every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. By comparing human behavior with behavior of ideal actors, we identified the strategies used by humans in terms of how far into the future they looked (their "depth of computation") and how often they attempted to incorporate new information about the future rewards (their "recalculation period"). We found that, for a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and that they abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase the recalculation period rather than sacrifice the precision of computation.
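    The resource-limited ideal actor described above can be sketched as a brute-force, depth-limited search over a triangular grid of rewards, re-run at every step; the grid values and depth below are invented for illustration:

```python
def best_first_step(grid, row, col, depth):
    """Brute-force evaluation of every path up to a finite depth: from
    disk (row, col) the next step goes to (row+1, col) or (row+1, col+1)
    on a triangular grid of rewards (a simplified sketch of the task)."""
    def value(r, c, d):
        # depth exhausted or bottom row reached: no further reward
        if d == 0 or r == len(grid) - 1:
            return 0.0
        return max(grid[r + 1][c + k] + value(r + 1, c + k, d - 1)
                   for k in (0, 1))
    # choose the move whose immediate reward plus lookahead value is larger
    return max((0, 1), key=lambda k: grid[row + 1][col + k]
               + value(row + 1, col + k, depth - 1))

# rows grow downward; row r holds r+1 disks (disk value = reward)
grid = [[1],
        [1, 2],
        [5, 1, 1],
        [1, 1, 9, 1]]
step = best_first_step(grid, 0, 0, depth=3)   # 0 = left, 1 = right
```

Re-invoking this at every step with a fixed depth and a fixed recalculation period corresponds to one point in the depth/period trade-off analysed in the paper.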

  5. Large-Scale High-Resolution Cylinder Wake Measurements in a Wind Tunnel using Tomographic PIV with sCMOS Cameras

    NASA Astrophysics Data System (ADS)

    Michaelis, Dirk; Schroeder, Andreas

    2012-11-01

    Tomographic PIV has triggered vivid activity, reflected in a large number of publications covering both development of the technique and a wide range of fluid-dynamics experiments. As tomographic PIV matures, it can be applied in medium- to large-scale wind tunnels. The limiting factor for wind tunnel applications is the small size of the measurement volume, typically about 50 × 50 × 15 mm3. The aim of this study is optimization towards large measurement volumes and high spatial resolution, performing cylinder wake measurements in a 1-meter wind tunnel. The main limiting factors for the volume size are the laser power and the camera sensitivity, so a high-power laser with 800 mJ per pulse is used together with low-noise sCMOS cameras, mounted in the forward-scattering direction to gain intensity from the Mie scattering characteristics. A mirror is used to bounce the light back, so that all cameras operate in forward scattering. The achievable particle density grows with the number of cameras, so eight cameras are used for high spatial resolution. These optimizations lead to a volume size of 230 × 200 × 52 mm3 = 2392 cm3, more than 60 times larger than previously achieved. 281 × 323 × 68 vectors are calculated with a spacing of 0.76 mm. The achieved measurement volume size and spatial resolution are regarded as a major step forward in the application of tomographic PIV in wind tunnels. Supported by EU-project no. 265695.

  6. Toward Exploring the Structure of Monolayer to Few-layer TaS2 by Efficient Ultrasound-free Exfoliation

    NASA Astrophysics Data System (ADS)

    Hu, Yiwei; Hao, Qiaoyan; Zhu, Baichuan; Li, Biao; Gao, Zhan; Wang, Yan; Tang, Kaibin

    2018-01-01

    Tantalum disulfide nanosheets have attracted great interest due to their electronic properties and device applications. Traditional solution-based ultrasonic processes are limited by the ultrasound itself, which may cause disintegration into submicron-sized flakes. Here, an efficient multi-step intercalation and ultrasound-free process has been successfully used to exfoliate 1T-TaS2. The obtained TaS2 nanosheets reveal an average thickness of 3 nm and lateral sizes of several micrometers. The formation of few-layer TaS2 nanosheets as well as monolayer TaS2 sheets is further confirmed by atomic force microscopy images. The few-layer TaS2 nanosheets retain the 1T structure, whereas monolayer TaS2 sheets show lattice distortion and may adopt the 1H-like structure with trigonal prism coordination.

  7. Precision heat forming of tetrafluoroethylene tubing

    NASA Technical Reports Server (NTRS)

    Ruiz, W. V.; Thatcher, C. S. (Inventor)

    1981-01-01

    An invention is discussed that provides a method of altering the size of tetrafluoroethylene tubing, which is available only in limited combinations of wall thickness and diameter. The method includes the steps of sliding the tetrafluoroethylene tubing onto an aluminum mandrel and clamping the ends of the tubing to the mandrel by means of clamps. The tubing and mandrel are then placed in a supporting coil, which, together with the mandrel and tubing, is positioned in an insulated steel pipe normally covered with a fiberglass insulator to smooth out the temperature distribution therein. The entire assembly is then placed in an oven, which heats the tetrafluoroethylene tubing so that it shrinks onto the outer dimension of the aluminum mandrel. After cooling, the aluminum mandrel is removed from the newly sized tubing by a conventional chemical milling process.

  8. Detection thresholds for small haptic effects

    NASA Astrophysics Data System (ADS)

    Dosher, Jesse A.; Hannaford, Blake

    2002-02-01

    We are interested in finding out whether or not haptic interfaces will be useful in portable and handheld devices. Such systems will have severe constraints on force output. Our first step is to investigate the lower limits at which haptic effects can be perceived. In this paper we report on experiments studying the effects of varying the amplitude, size, shape, and pulse duration of a haptic feature. Using a specific haptic device we measure the smallest detectable haptic effects, with active exploration of saw-tooth-shaped icons sized 3, 4, and 5 mm, a sine-shaped icon 5 mm wide, and static pulses 50, 100, and 150 ms in width. Smooth-shaped icons resulted in a detection threshold of approximately 55 mN, almost twice that of saw-tooth-shaped icons, which had a threshold of 31 mN.

  9. An improved VSS NLMS algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan

    2017-08-01

    In this paper, an improved variable step-size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared with other traditional adaptive filtering algorithms, but there is a trade-off between convergence speed and steady-state error that affects the performance of the NLMS algorithm. We propose a new variable step-size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily set parameters, and effectively resolves the trade-off in NLMS. The simulation results show that the proposed algorithm achieves good tracking ability, a fast convergence rate, and low steady-state error simultaneously.
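    A sketch of the idea behind variable step-size NLMS follows; the error-driven step-size schedule below is a generic placeholder, not the exact update rule proposed in the paper:

```python
import random

def vss_nlms(x, d, n_taps=4, mu_max=1.0, eps=1e-8):
    """NLMS with an error-driven variable step size: mu shrinks as a
    smoothed error-power estimate decays, giving fast initial convergence
    and a low steady-state error (a generic sketch of the idea)."""
    w = [0.0] * n_taps
    err_pow = 1.0                                  # smoothed error power
    errors = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]          # tap-input vector
        y = sum(wi * ui for wi, ui in zip(w, u))   # filter output
        e = d[n] - y                               # a priori error
        err_pow = 0.95 * err_pow + 0.05 * e * e
        mu = mu_max * err_pow / (err_pow + 0.01)   # large error -> large step
        norm = sum(ui * ui for ui in u) + eps
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
        errors.append(e)
    return w, errors

# identify a known sparse impulse response from noise-free data
rng = random.Random(0)
h = [0.0, 1.0, 0.0, -0.5]                          # true system
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w, errors = vss_nlms(x, d)                         # w should approach h
```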

  10. Creep of quartz by dislocation and grain boundary processes

    NASA Astrophysics Data System (ADS)

    Fukuda, J. I.; Holyoke, C. W., III; Kronenberg, A. K.

    2015-12-01

    Wet polycrystalline quartz aggregates deformed at temperatures T of 600-900°C and strain rates of 10⁻⁴-10⁻⁶ s⁻¹ at a confining pressure Pc of 1.5 GPa exhibit plasticity at low T, governed by dislocation glide and limited recovery, and grain size-sensitive creep at high T, governed by diffusion and sliding at grain boundaries. Quartz aggregates were HIP-synthesized by subjecting natural milky quartz powder to T=900°C and Pc=1.5 GPa, and grain sizes (2 to 25 μm) were varied by annealing at these conditions for up to 10 days. Infrared absorption spectra exhibit a broad OH band at 3400 cm⁻¹ due to molecular water inclusions, with a calculated OH content (~4000 ppm, H/10⁶ Si) that is unchanged by deformation. Rate-stepping experiments reveal different stress-strain rate functions at different temperatures and grain sizes, which correspond to differing stress-temperature sensitivities. At 600-700°C and grain sizes of 5-10 μm, flow law parameters compare favorably with those for basal plasticity and dislocation creep of wet quartzites (effective stress exponents n of 3 to 6 and activation enthalpy H* ~150 kJ/mol). Deformed samples show undulatory extinction, limited recrystallization, and c-axis maxima parallel to the shortening direction. Similarly fine-grained samples deformed at 800-900°C exhibit flow parameters n=1.3-2.0 and H*=135-200 kJ/mol, corresponding to grain size-sensitive Newtonian creep. Deformed samples show some undulatory extinction and grain sizes change by recrystallization; however, grain boundary deformation processes are indicated by the low value of n. Our experimental results for grain size-sensitive creep can be compared with models of grain boundary diffusion and grain boundary sliding using measured rates of silicon grain boundary diffusion. While many quartz mylonites show microstructural and textural evidence for dislocation creep, the results for grain size-sensitive creep may apply to very fine-grained (<10 μm) quartz mylonites.

  11. Analysis Techniques for Microwave Dosimetric Data.

    DTIC Science & Technology

    1985-10-01

    [Fragment of a Fortran listing; the recoverable portion is a pair of comment lines describing "the starting frequency, the step size, and the number of steps in the frequency list", followed by a call to the subroutine FILE2().]

  12. Testing electroexplosive devices by programmed pulsing techniques

    NASA Technical Reports Server (NTRS)

    Rosenthal, L. A.; Menichelli, V. J.

    1976-01-01

    A novel method for testing electroexplosive devices is proposed wherein capacitor discharge pulses, with increasing energy in a step-wise fashion, are delivered to the device under test. The size of the energy increment can be programmed so that firing takes place after many, or after only a few, steps. The testing cycle is automatically terminated upon firing. An energy-firing contour relating the energy required to the programmed step size describes the single-pulse firing energy and the possible sensitization or desensitization of the explosive device.
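    The testing cycle reduces to a simple loop; the firing-energy model below is hypothetical and serves only to show how the programmed increment trades the number of pulses against the overshoot at firing:

```python
def programmed_pulse_test(fire_energy, increment, start=0.0):
    """Step-wise capacitor-discharge test: each pulse delivers `increment`
    more energy than the last; the cycle terminates when the pulse energy
    reaches the (hypothetical) single-pulse firing energy of the device.
    Returns the energy of the firing pulse and the number of pulses."""
    energy, pulses = start, 0
    while True:
        energy += increment          # program the next, larger pulse
        pulses += 1
        if energy >= fire_energy:    # device fires; terminate the cycle
            return energy, pulses

# a fine increment fires after many steps, a coarse one after only a few,
# at the cost of overshooting the true firing energy
e_fine, n_fine = programmed_pulse_test(10.0, 0.5)
e_coarse, n_coarse = programmed_pulse_test(10.0, 4.0)
```

Plotting the firing energy against the programmed increment traces out an energy-firing contour of the kind the abstract describes.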

  13. Influence of Age, Maturity, and Body Size on the Spatiotemporal Determinants of Maximal Sprint Speed in Boys.

    PubMed

    Meyers, Robert W; Oliver, Jon L; Hughes, Michael G; Lloyd, Rhodri S; Cronin, John B

    2017-04-01

    Meyers, RW, Oliver, JL, Hughes, MG, Lloyd, RS, and Cronin, JB. Influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. J Strength Cond Res 31(4): 1009-1016, 2017-The aim of this study was to investigate the influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. Three hundred and seventy-five boys (age: 13.0 ± 1.3 years) completed a 30-m sprint test, during which maximal speed, step length, step frequency, contact time, and flight time were recorded using an optical measurement system. Body mass, height, leg length, and a maturity offset represented the somatic variables. Step frequency accounted for the highest proportion of variance in speed (∼58%) in the pre-peak height velocity (pre-PHV) group, whereas step length explained the majority of the variance in speed (∼54%) in the post-PHV group. In the pre-PHV group, mass was negatively related to speed, step length, step frequency, and contact time; however, measures of stature had a positive influence on speed and step length yet a negative influence on step frequency. Speed and step length were also negatively influenced by mass in the post-PHV group, whereas leg length continued to positively influence step length. The results highlighted that pre-PHV boys may be deemed step frequency reliant, whereas post-PHV boys may be marginally step length reliant. Furthermore, the negative influence of body mass, both pre-PHV and post-PHV, suggests that training to optimize sprint performance in youth should include methods such as plyometric and strength training, where a high neuromuscular focus and the development of force production relative to body weight are key foci.

  14. Scanning tunneling microscope with a rotary piezoelectric stepping motor

    NASA Astrophysics Data System (ADS)

    Yakimov, V. N.

    1996-02-01

    A compact scanning tunneling microscope (STM) with a novel rotary piezoelectric stepping motor for coarse positioning has been developed. The piezomotor uses an inertial method of rotating the rotor with a pair of piezoplates. The minimum angular step size was a few arcseconds, with a spindle working torque of up to 1 N·cm. The STM design was noticeably simplified by using a piezomotor with such a small step size. A shaft eccentrically attached to the piezomotor spindle made it possible to push and pull back the cylindrical bush carrying the tubular piezoscanner. The linear coarse-positioning step was about 50 nm. The vertical resolution of the STM was better than 0.1 nm without external vibration isolation.

  15. Neuronal differentiation of human mesenchymal stem cells in response to the domain size of graphene substrates.

    PubMed

    Lee, Yoo-Jung; Seo, Tae Hoon; Lee, Seula; Jang, Wonhee; Kim, Myung Jong; Sung, Jung-Suk

    2018-01-01

    Graphene is a noncytotoxic monolayer platform with unique physical, chemical, and biological properties. It has been demonstrated that a graphene substrate may provide a promising biocompatible scaffold for stem cell therapy. Because chemical-vapor-deposited graphene has a two-dimensional polycrystalline structure, it is important to control the individual domain size to obtain desirable properties for the nanomaterial. However, the biological effects mediated by differences in the domain size of graphene have not yet been reported. On the basis of the control of graphene domain size achieved by one-step growth (1step-G, small domain) and two-step growth (2step-G, large domain) processes, we found that the neuronal differentiation of bone marrow-derived human mesenchymal stem cells (hMSCs) depended strongly on the graphene domain size. The density of defects at the domain boundaries in 1step-G graphene was higher (×8.5), and its water droplet contact angle relatively low (13% lower), compared with 2step-G graphene, leading to enhanced cell-substrate adhesion and upregulated neuronal differentiation of hMSCs. We confirmed that the strong interactions between cells and the defects at domain boundaries in 1step-G graphene arise from their relatively high surface energy, and are stronger than the interactions between cells and pristine graphene surfaces. Our results may provide valuable information for the development of graphene-based scaffolds by clarifying which properties of graphene domains influence cell adhesion efficacy and stem cell differentiation. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 106A: 43-51, 2018.

  16. Role of Edges in Complex Network Epidemiology

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Jiang, Zhi-Hong; Wang, Hui; Xie, Fei; Chen, Chao

    2012-09-01

    In complex network epidemiology, diseases spread along the contact edges between individuals, and different edges may play different roles in epidemic outbreaks. Quantifying the efficiency of edges is an important step towards arresting epidemics. In this paper, we study the efficiency of edges in general susceptible-infected-recovered (SIR) models and introduce the transmission capability as a measure of edge efficiency. Results show that deleting the edges with the highest transmission capability greatly suppresses epidemics on scale-free networks. Based on the message-passing approach, we obtain an exact mathematical solution for configuration-model networks with edge deletion in the large-size limit.
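    The effect of deleting high-efficiency edges can be illustrated with a toy percolation-style SIR experiment. The paper's transmission-capability measure is not reproduced here; as a hypothetical stand-in, edges are ranked by the degree product of their endpoints (a crude proxy for transmission importance) on a preferential-attachment network, and the outbreak is the bond-percolation cluster of a random seed.

```python
import random
from collections import deque

def preferential_attachment(n, m, rng):
    """Scale-free-ish graph: each new node links to m nodes chosen with
    probability proportional to degree."""
    edges, weighted = [], []
    targets = set(range(m))
    for v in range(m, n):
        for t in targets:
            edges.append((v, t))
        weighted.extend(targets)
        weighted.extend([v] * len(targets))
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(weighted))
    return edges

def mean_outbreak(n, edges, T, trials, rng):
    """Bond-percolation SIR: each edge transmits with probability T;
    the outbreak is the cluster reachable from a random seed node."""
    total = 0
    for _ in range(trials):
        adj = [[] for _ in range(n)]
        for u, v in edges:
            if rng.random() < T:
                adj[u].append(v)
                adj[v].append(u)
        seed = rng.randrange(n)
        seen, q = {seed}, deque([seed])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        total += len(seen)
    return total / trials

rng = random.Random(42)
edges = preferential_attachment(300, 2, rng)
deg = [0] * 300
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
# delete the 10% of edges with the highest degree-product score
ranked = sorted(edges, key=lambda e: deg[e[0]] * deg[e[1]])
pruned = ranked[: int(0.9 * len(ranked))]
mean_full = mean_outbreak(300, edges, 0.4, 200, rng)
mean_pruned = mean_outbreak(300, pruned, 0.4, 200, rng)
```

    Removing the hub-adjacent edges shrinks the mean outbreak substantially more than removing the same number of random edges would, mirroring the abstract's finding for scale-free networks.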

  17. Evaluation of lung and chest wall mechanics during anaesthesia using the PEEP-step method.

    PubMed

    Persson, P; Stenqvist, O; Lundin, S

    2018-04-01

    Postoperative pulmonary complications are common, and lung and chest wall mechanics differ between patients. Individualised mechanical ventilation based on measurement of transpulmonary pressures would be a step forward. A previously described method evaluates lung and chest wall mechanics from a change of PEEP (ΔPEEP) and the calculated change in end-expiratory lung volume (ΔEELV). The aim of the present study was to validate this PEEP-step method (PSM) during general anaesthesia by comparing it with the conventional method using oesophageal pressure (PES) measurements. In 24 lung-healthy subjects (BMI 18.5-32), three different sizes of PEEP step were performed during general anaesthesia and ΔEELVs were calculated. Transpulmonary driving pressure (ΔPL) for a tidal volume equal to each ΔEELV was measured using PES measurements and compared with ΔPEEP using limits of agreement and intraclass correlation coefficients (ICC); ΔPL calculated with both methods was compared with a Bland-Altman plot. Mean differences between ΔPEEP and ΔPL were <0.15 cmH2O, with 95% limits of agreement of -2.1 to 2.0 cmH2O and ICC of 0.6-0.83. Mean differences between ΔPL calculated by the two methods were <0.2 cmH2O. The ratio of lung elastance to respiratory system elastance was 0.5-0.95. The large variation in mechanical properties among these lung-healthy patients stresses the need for individualised ventilator settings based on measurements of lung and chest wall mechanics. The agreement between ΔPLs measured by the two methods during general anaesthesia supports the use of the non-invasive PSM in this patient population. NCT 02830516. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
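    The agreement analysis used above (bias and 95% limits of agreement, as in a Bland-Altman plot) can be sketched as follows. The paired values are made-up illustrative numbers, not data from the study.

```python
import math

# Hypothetical paired readings of transpulmonary driving pressure (cmH2O):
# one value per subject from the PEEP-step method and from the
# oesophageal-pressure method. Illustrative numbers only.
psm = [5.1, 6.0, 4.8, 7.2, 5.5, 6.3]
pes = [5.0, 6.4, 4.5, 7.0, 5.9, 6.1]

diffs = [a - b for a, b in zip(psm, pes)]
bias = sum(diffs) / len(diffs)                       # mean difference
sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
loa_lower = bias - 1.96 * sd                         # 95% limits of agreement
loa_upper = bias + 1.96 * sd
```

    A mean difference near zero with narrow limits of agreement is what supports using one method in place of the other.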

  18. Effects of Turbulence Model and Numerical Time Steps on Von Karman Flow Behavior and Drag Accuracy of Circular Cylinder

    NASA Astrophysics Data System (ADS)

    Amalia, E.; Moelyadi, M. A.; Ihsan, M.

    2018-04-01

    The flow of air around a circular cylinder at a Reynolds number of 250,000 exhibits the von Karman vortex street phenomenon, which can only be captured well with a suitable turbulence model. In this study, several turbulence models available in ANSYS Fluent 16.0 were tested for simulating the von Karman vortex street, namely k-epsilon, SST k-omega, Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, the effect of time step size on the accuracy of the CFD simulation was examined. The simulations were carried out using two-dimensional and three-dimensional models and then compared with experimental data. For the two-dimensional model, the von Karman vortex street was captured successfully with the SST k-omega turbulence model; for the three-dimensional model, it was captured with the Reynolds Stress model. The time step size affects the smoothness of the drag-coefficient curves over time as well as the running time of the simulation: the smaller the time step size, the smoother the resulting drag-coefficient curves, at the cost of a longer computation time.

  19. The application of STEP-technology® for particle and protein dispersion detection studies in biopharmaceutical research.

    PubMed

    Gross-Rother, J; Herrmann, N; Blech, M; Pinnapireddy, S R; Garidel, P; Bakowsky, U

    2018-05-30

    Particle detection and analysis techniques are essential in the biopharmaceutical industry for evaluating the quality of parenteral formulations with regard to product safety and product quality, and for meeting the regulations set by the authority agencies. Several particle analysis systems are available on the market, but identifying the method best suited to a given sample is challenging for the operator. At the same time, these techniques are the basis for a better understanding of biophysical processes, e.g. protein interaction and aggregation. STEP-Technology® (Space and Time resolved Extinction Profiles), as used in the analytical photocentrifuge LUMiSizer®, has been shown to be an effective and promising technique for investigating particle suspensions and emulsions in various fields. In this study, we evaluated the potential and limitations of this technique for biopharmaceutical model samples. As a first experimental approach, we measured silica and polystyrene (PS) particle standard suspensions with known particle density and refractive index (RI). The subsequent evaluation used a variety of relevant data sets to demonstrate the significant influence of the particle density on the final particle size distribution (PSD). Turbidity was identified as the most challenging property for successful detection, and limits were set based on the absorbance at 320 nm (A320). Furthermore, we produced chemically cross-linked protein particle suspensions to model physically "stable" protein aggregates. The LUMiSizer® results were compared with the orthogonal methods of nanoparticle tracking analysis (NTA), dynamic light scattering (DLS), and micro-flow imaging (MFI). Sedimentation velocity distributions showed similar tendencies, but PSDs and absolute size values could not be obtained. In conclusion, we demonstrated several applications as well as limitations of this technique for biopharmaceutical samples. Compared with orthogonal methods, it is a valuable complementary approach when particle data such as density or refractive index can be determined. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures of merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP is an analytically calculated numerical index based on projection sampling; RMS resolution was derived from reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and its consequent effects. The RMS resolution results for the nine helical scan schemes show that optimal resolution is achieved when the axial step size is about half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with the RMS resolution results but are less sensitive to the effects of the helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
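    The reported rule of thumb (axial step about half, angular step about twice the Nyquist-derived reference) can be sketched as a small calculation. How the reference values follow from the resolution is an assumption here: the axial reference is taken as the Nyquist interval (half the resolution), and the angular reference as the angle subtending that interval at the transverse FOV radius; this is a plausible reading, not the authors' exact formula.

```python
import math

def helical_steps(resolution_mm, fov_radius_mm):
    """Reference step sizes from the Nyquist criterion, scaled by the
    empirically optimal factors reported in the abstract
    (axial: x0.5, angular: x2)."""
    nyquist = resolution_mm / 2.0            # Nyquist sampling interval
    axial_ref_mm = nyquist
    angular_ref_deg = math.degrees(nyquist / fov_radius_mm)
    return {"axial_mm": 0.5 * axial_ref_mm,
            "angular_deg": 2.0 * angular_ref_deg}

# e.g. 1.5 mm system resolution, 15 mm transverse FOV radius (assumed values)
params = helical_steps(resolution_mm=1.5, fov_radius_mm=15.0)
```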

  1. Simulation methods with extended stability for stiff biochemical kinetics.

    PubMed

    Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin

    2010-08-11

    With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
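    The basic Poisson tau-leap step described above (all channels fire a Poisson number of times per leap) can be sketched for a simple birth-death process; this is the plain tau-leap, not the paper's Runge-Kutta extension, and the rate constants are arbitrary.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam = propensity * tau)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def tau_leap_birth_death(x0, k_birth, k_decay, tau, n_steps, rng):
    """Poisson tau-leap for  0 -> X (rate k_birth),  X -> 0 (rate k_decay*x)."""
    x, traj = x0, []
    for _ in range(n_steps):
        births = poisson(k_birth * tau, rng)
        deaths = poisson(k_decay * x * tau, rng)
        x = max(x + births - deaths, 0)   # clamp: naive leaps can overshoot
        traj.append(x)
    return traj

rng = random.Random(7)
traj = tau_leap_birth_death(x0=0, k_birth=50.0, k_decay=1.0,
                            tau=0.1, n_steps=2000, rng=rng)
steady = traj[500:]                       # discard burn-in
mean_x = sum(steady) / len(steady)        # analytic steady-state mean: 50
```

    As the abstract notes, the mean is preserved for moderate tau, but the steady-state variance degrades as tau grows, which is what the extended RK tau-leap methods are designed to control.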

  2. Organization of research team for nano-associated safety assessment in effort to study nanotoxicology of zinc oxide and silica nanoparticles

    PubMed Central

    Kim, Yu-Ri; Park, Sung Ha; Lee, Jong-Kwon; Jeong, Jayoung; Kim, Ja Hei; Meang, Eun-Ho; Yoon, Tae Hyun; Lim, Seok Tae; Oh, Jae-Min; An, Seong Soo A; Kim, Meyoung-Kon

    2014-01-01

    Currently, products made with nanomaterials are widely used, especially in biology, biotechnology, and medical areas. However, investigations of the potential toxicities of nanomaterials remain limited; hence, diverse and systematic toxicological data, along with new methods for nanomaterials, are needed. In order to investigate the nanotoxicology of nanoparticles (NPs), the Research Team for Nano-Associated Safety Assessment (RT-NASA) was organized in three parts and launched. Each part focused on a different research direction: investigators in part I were responsible for the efficient management and international cooperation of nano-safety studies; investigators in part II performed toxicity evaluations on target organs, such as assessment of genotoxicity, immunotoxicity, or skin penetration; and investigators in part III evaluated the toxicokinetics of NPs with newly developed techniques for toxicokinetic analyses and methods for estimating nanotoxicity. The RT-NASA study was carried out in six steps: need assessment, physicochemical characterization, toxicity evaluation, toxicokinetics, peer review, and risk communication. During the need assessment step, consumer responses were analyzed based on sex, age, education level, and household income. Zinc oxide and silica NPs of different sizes were purchased and coated with citrate, L-serine, and L-arginine in order to modify their surface charges (eight different NPs), and each of the NPs was characterized by various techniques, for example, zeta potential, scanning electron microscopy, and transmission electron microscopy. Evaluation of the “no observed adverse effect level” and of systemic toxicities of all NPs was performed through thorough evaluation steps and the toxicokinetics step, which included in vivo studies with zinc oxide and silica NPs. 
A peer review committee was organized to evaluate and verify the reliability of the toxicity tests, and the risk communication step was needed to convey the current findings to academia, industry, and consumers. Several limitations were encountered in the RT-NASA project, and they are discussed for consideration in future studies. PMID:25565821

  4. Determination of the structures of small gold clusters on stepped magnesia by density functional calculations.

    PubMed

    Damianos, Konstantina; Ferrando, Riccardo

    2012-02-21

    The structural modifications of small supported gold clusters caused by realistic surface defects (steps) in the MgO(001) support are investigated by computational methods. The most stable gold cluster structures on a stepped MgO(001) surface are searched for in the size range up to 24 Au atoms and locally optimized by density-functional calculations. Several structural motifs are found within energy differences of 1 eV: inclined leaflets, arched leaflets, pyramidal hollow cages and compact structures. We show that the interaction with the step clearly modifies the structures with respect to adsorption on the flat defect-free surface. We find that leaflet structures clearly dominate at smaller sizes. These leaflets are either inclined and quasi-horizontal, or arched, at variance with the case of the flat surface, on which vertical leaflets prevail. With increasing cluster size, pyramidal hollow cages begin to compete with leaflet structures, and cage structures become more and more favourable as size increases. The only exception is size 20, at which the tetrahedron is found to be the most stable isomer; this tetrahedron is, however, quite distorted. The comparison of two different exchange-correlation functionals (Perdew-Burke-Ernzerhof and local density approximation) shows the same qualitative trends. This journal is © The Royal Society of Chemistry 2012

  5. The effect of external forces on discrete motion within holographic optical tweezers.

    PubMed

    Eriksson, E; Keen, S; Leach, J; Goksör, M; Padgett, M J

    2007-12-24

    Holographic optical tweezers are a widely used technique for manipulating the individual positions of optically trapped micron-sized particles in a sample. The trap positions are changed by updating the holographic image displayed on a spatial light modulator. The updating process takes a finite time, resulting in a temporary decrease of the intensity, and thus the stiffness, of the optical trap. We investigated this change in trap stiffness during the updating process by studying the motion of an optically trapped particle in a fluid flow. We found a highly nonlinear dependence of the change in trap stiffness on step size: for step sizes up to approximately 300 nm the trap stiffness decreases with step size, while above 300 nm the change in trap stiffness remains constant for all step sizes up to one particle radius. This information is crucial for optical force measurements using holographic optical tweezers.

  6. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influence of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid size and time step, which incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist in the subsurface. To overcome this limitation, this paper develops a staggered-grid finite-difference scheme for elastic wave modeling in porous media that combines variable grid size with an adaptive time step. Variable finite-difference coefficients and wavefield interpolation are used to realize the transition of wave propagation between regions of different grid size. The accuracy and efficiency of the algorithm are demonstrated by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
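    The base ingredient of such schemes, a staggered-grid velocity-stress update, can be sketched in 1D for the acoustic case; the variable-grid transition zones and the poroelastic terms of the paper are omitted, and all parameter values are illustrative.

```python
import math

# 1D acoustic staggered grid:  dv/dt = (1/rho) ds/dx,  ds/dt = rho*c^2 dv/dx
nx, dx = 400, 5.0             # grid points, spacing (m)
c, rho = 2000.0, 2500.0       # wave speed (m/s), density (kg/m^3)
dt = 0.8 * dx / c             # time step respecting the CFL limit
v = [0.0] * nx                # particle velocity, staggered at i+1/2
s = [math.exp(-((i - 200) / 5.0) ** 2) for i in range(nx)]  # stress pulse

for _ in range(100):
    for i in range(nx - 1):   # velocity update from stress gradient
        v[i] += dt / rho * (s[i + 1] - s[i]) / dx
    for i in range(1, nx):    # stress update from velocity gradient
        s[i] += dt * rho * c * c * (v[i] - v[i - 1]) / dx

# the initial pulse splits; the right-going half has travelled c*t = 80 cells
peak_right = max(range(200, nx), key=lambda i: abs(s[i]))
```

    In a variable-grid scheme, the stencil coefficients and the interpolation at region boundaries replace the uniform `dx`; the CFL condition then has to be satisfied per region, which is why the time step is adapted along with the grid.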

  7. Combining gas-phase electrophoretic mobility molecular analysis (GEMMA), light scattering, field flow fractionation and cryo electron microscopy in a multidimensional approach to characterize liposomal carrier vesicles.

    PubMed

    Urey, Carlos; Weiss, Victor U; Gondikas, Andreas; von der Kammer, Frank; Hofmann, Thilo; Marchetti-Deschmann, Martina; Allmaier, Günter; Marko-Varga, György; Andersson, Roland

    2016-11-20

    For drug delivery, the characterization of liposomes regarding size, number-based particle concentration, the occurrence of small liposome artefacts, and drug encapsulation is important for understanding their pharmacodynamic properties. In this study, we aimed to demonstrate the suitability of the nano Electrospray Gas-Phase Electrophoretic Mobility Molecular Analyser (nES GEMMA) for measuring these parameters. We measured number-based particle concentrations, identified differences in size between nominally identical liposomal samples, and detected the presence of low-diameter material that yielded bimodal particle size distributions. Subsequently, we compared these findings with dynamic light scattering (DLS) data and with results from light-scattering experiments coupled to asymmetric flow field-flow fractionation (AF4), the latter improving the detectability of smaller particles in polydisperse samples thanks to a size-separation step prior to detection. However, the bimodal size distribution could not be detected with these methods owing to method-inherent limitations. In contrast, cryo transmission electron microscopy corroborated the nES GEMMA results. Hence, gas-phase electrophoresis proved to be a versatile tool for liposome characterization, as it can analyze both vesicle size and size distribution. Finally, nES GEMMA results were correlated with cell viability experiments to demonstrate the importance of batch-to-batch control of liposomes, since small-sized sample components can impact cell viability. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Development of Portable Aerosol Mobility Spectrometer for Personal and Mobile Aerosol Measurement

    PubMed Central

    Kulkarni, Pramod; Qi, Chaolong; Fukushima, Nobuhiko

    2017-01-01

    We describe the development of a Portable Aerosol Mobility Spectrometer (PAMS) for size-distribution measurement of submicrometer aerosol. The spectrometer is designed for use in personal or mobile aerosol characterization studies; it measures approximately 22.5 × 22.5 × 15 cm and weighs about 4.5 kg including the battery. PAMS uses the electrical mobility technique to measure the number-weighted particle size distribution of aerosol in the 10–855 nm range. Aerosol particles are electrically charged using a dual-corona bipolar charger and then classified in a cylindrical miniature differential mobility analyzer; a condensation particle counter detects and counts the particles. The mobility classifier was operated at an aerosol flow rate of 0.05 L/min and at two user-selectable sheath flows: 0.2 L/min (for the wider size range of 15–855 nm) and 0.4 L/min (for higher size resolution over 10.6–436 nm). The instrument was operated in voltage-stepping mode to retrieve the size distribution, which took approximately 1–2 minutes depending on the configuration. Sizing accuracy and resolution were probed and found to be within the 25% limit of the NIOSH criterion for direct-reading instruments (NIOSH 2012). Comparison of size-distribution measurements from PAMS and other commercial mobility spectrometers showed good agreement. The instrument offers a unique capability for on-person or mobile size-distribution measurements of ultrafine and nanoparticle aerosols. PMID:28413241
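    The electrical mobility that such a classifier selects for can be sketched with the standard Stokes-Millikan expression. The slip-correction coefficients and gas properties below are common literature values, assumed here rather than taken from this paper.

```python
import math

E = 1.602176634e-19   # elementary charge (C)
MU = 1.81e-5          # dynamic viscosity of air near 20 C (Pa*s)
MFP = 67.3e-9         # mean free path of air at ambient conditions (m)

def cunningham(d):
    """Cunningham slip correction, with a commonly used coefficient set."""
    kn = 2.0 * MFP / d                    # Knudsen number
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def electrical_mobility(d, n_charges=1):
    """Stokes-Millikan mobility  Z = n*e*Cc(d) / (3*pi*mu*d)."""
    return n_charges * E * cunningham(d) / (3.0 * math.pi * MU * d)

z_100nm = electrical_mobility(100e-9)     # about 2.75e-8 m^2/(V*s)
```

    Smaller particles have higher mobility, which is why stepping the classifier voltage from low to high sweeps the selected size from small to large.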

  9. Application of lateral photovoltage towards contactless light beam induced current measurements and its dependence on the finite beam size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhale, Atul Prakash; Rao, K. S. R. Koteswara, E-mail: ksrkrao@physics.iisc.ernet.in

    2014-07-15

    The nature of the signal due to light beam induced current (LBIC) at remote contacts is verified as a lateral photovoltage for non-uniformly illuminated planar p-n junction devices; simulation and experimental results are presented. The limitations imposed by ohmic contacts are successfully overcome by the introduction of capacitively coupled remote contacts, which yield similar results without any significant loss in the estimated material and device parameters. It is observed that LBIC measurements introduce artefacts such as a shift in peak position with increasing laser power. Simulation of the LBIC signal as a function of the characteristic length L_c of the photo-generated carriers and for different beam diameters reproduced the observed peak shifts, which are thus attributed to the finite size of the beam. Further, the idea of capacitively coupled contacts has been extended to contactless measurements using pressure contacts with oxidized aluminium electrodes. This technique avoids contamination-prone sample processing steps, which may introduce unintentional defects and contaminants into the material and devices under observation. We thus present remote-contact LBIC as a practically non-destructive tool for the evaluation of device parameters and welcome its use during fabrication steps.

  10. Semiautomatic Segmentation of Glioma on Mobile Devices.

    PubMed

    Wu, Ya-Ping; Lin, Yu-Song; Wu, Wei-Guo; Yang, Cong; Gu, Jian-Qin; Bai, Yan; Wang, Mei-Yun

    2017-01-01

    Brain tumor segmentation is the first and most critical step in clinical applications of radiomics. However, manual segmentation of brain images by radiologists is labor-intensive and prone to inter- and intraobserver variability. Stable and reproducible brain image segmentation algorithms are thus important for successful tumor detection in radiomics. In this paper, we propose a supervised brain image segmentation method, especially for magnetic resonance (MR) brain images with glioma. The method uses hard-edge multiplicative intrinsic component optimization to preprocess the glioma image on the server side; the doctors can then supervise the segmentation process on mobile devices at their convenience. Since the preprocessed images have the same brightness for voxels of the same tissue, they have a small data size (typically 1/10 of the original image size) and a simple structure of four intensity values. This allows the follow-up steps to be processed on mobile devices with low bandwidth and limited computing performance. Experiments conducted on 1935 brain slices from 129 patients show that more than 30% of the samples reach 90% similarity, over 60% reach 85% similarity, and more than 80% reach 75% similarity. Comparisons with other segmentation methods also demonstrate both the efficiency and the stability of the proposed approach.
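    The similarity percentages quoted above suggest an overlap metric; a common choice for comparing segmentation masks is the Dice coefficient, sketched here on flattened binary masks (the abstract does not specify the exact similarity measure used, so this is illustrative).

```python
def dice(mask_a, mask_b):
    """Dice similarity of two flattened binary masks: 2|A∩B| / (|A|+|B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

auto   = [1, 1, 1, 0, 0, 1]   # hypothetical algorithm output
manual = [1, 1, 0, 0, 0, 1]   # hypothetical radiologist reference
score = dice(auto, manual)    # 2*3 / (4+3) = 6/7
```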

  11. Rock sampling. [method for controlling particle size distribution

    NASA Technical Reports Server (NTRS)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  12. Measurement Invariance Conventions and Reporting: The State of the Art and Future Directions for Psychological Research

    PubMed Central

    Putnick, Diane L.; Bornstein, Marc H.

    2016-01-01

    Measurement invariance assesses the psychometric equivalence of a construct across groups or across time. Measurement noninvariance suggests that a construct has a different structure or meaning to different groups or on different measurement occasions in the same group, and so the construct cannot be meaningfully tested or construed across groups or across time. Hence, prior to testing mean differences across groups or measurement occasions (e.g., boys and girls, pretest and posttest), or differential relations of the construct across groups, it is essential to assess the invariance of the construct. Conventions and reporting on measurement invariance are still in flux, and researchers are often left with limited understanding and inconsistent advice. Measurement invariance is tested and established in different steps. This report surveys the state of measurement invariance testing and reporting, and details the results of a literature review of studies that tested invariance. Most tests of measurement invariance include configural, metric, and scalar steps; a residual invariance step is reported for fewer tests. Alternative fit indices (AFIs) are reported as model fit criteria for the vast majority of tests; χ2 is reported as the single index in a minority of invariance tests. Reporting AFIs is associated with higher levels of achieved invariance. Partial invariance is reported for about one-third of tests. In general, sample size, number of groups compared, and model size are unrelated to the level of invariance achieved. Implications for the future of measurement invariance testing, reporting, and best practices are discussed. PMID:27942093

  13. Surface treated carbon catalysts produced from waste tires for fatty acids to biofuel conversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hood, Zachary D.; Adhikari, Shiba P.; Wright, Marcus W.

    A method of making solid acid catalysts includes the step of sulfonating waste tire pieces in a first sulfonation step. The sulfonated waste tire pieces are pyrolyzed to produce carbon composite pieces having a pore size of less than 10 nm. The carbon composite pieces are then ground to produce carbon composite powders with a size of less than 50 µm. The carbon composite particles are sulfonated in a second sulfonation step to produce sulfonated solid acid catalysts. A method of making biofuels using the solid acid catalysts is also disclosed.

  14. Transient shear viscosity of weakly aggregating polystyrene latex dispersions

    NASA Astrophysics Data System (ADS)

    de Rooij, R.; Potanin, A. A.; van den Ende, D.; Mellema, J.

    1994-04-01

    The transient behavior of the viscosity (stress growth) of a weakly aggregating polystyrene latex dispersion after a step from a high shear rate to a lower shear rate has been measured and modeled. Single particles cluster together into spherical fractal aggregates. The steady state size of these aggregates is determined by the shear stresses exerted on the latter by the flow field. The restructuring process taking place when going from a starting situation with monodisperse spherical aggregates to larger monodisperse spherical aggregates is described by the capture of primary fractal aggregates by growing aggregates until a new steady state is reached. It is assumed that the aggregation mechanism is diffusion limited. The model is valid if the radii of primary aggregates Rprim are much smaller than the radii of the growing aggregates. Fitting the model to experimental data at two volume fractions and a number of step sizes in shear rate yielded physically reasonable values of Rprim at fractal dimensions 2.1≤df≤2.2. The latter range is in good agreement with the range 2.0≤df≤2.3 obtained from steady shear results. The experimental data have also been fitted to a numerical solution of the diffusion equation for primary aggregates for a cell model with moving boundary, also yielding 2.1≤df≤2.2. The range for df found from both approaches agrees well with the range df≊2.1-2.2 determined from computer simulations on diffusion-limited aggregation including restructuring or thermal breakup after formation of bonds. Thus a simple model has been put forward which may capture the basic features of the aggregating model dispersion on a microstructural level and leads to physically acceptable parameter values.

  15. Improving image quality in laboratory x-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.

    2017-03-01

    Grating-based X-ray phase-contrast (gbPC) is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈ 5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data-processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes to the projections. Additionally, the focal spot size of the X-ray source limits the resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution and improve the visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction: Firstly, a correction algorithm exploiting correlations between the artifacts and differential-phase data was developed and tested. Artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved the image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize the blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to an improved detection of very small sample features, thereby maximizing the setup's utility.
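The Richardson-Lucy algorithm applied above is compact enough to sketch. Below is a minimal 1-D illustration of the iteration (the actual gbPC-CT projections are 2-D and the setup's focal-spot PSF is not given here; the two-peak signal and Gaussian PSF are made-up stand-ins):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Iterative Richardson-Lucy deconvolution for a 1-D signal.
    Each step multiplies the estimate by the back-projected ratio of the
    data to the re-blurred estimate, preserving non-negativity."""
    estimate = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Demo: blur two sharp peaks with a Gaussian PSF, then deconvolve.
x = np.zeros(64)
x[20], x[40] = 1.0, 0.7
t = np.arange(-6, 7)
psf = np.exp(-t**2 / 4.0)
psf /= psf.sum()
blurred = np.convolve(x, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=100)
```

After enough iterations the restored peaks are substantially sharper than the blurred input; in practice the iteration count trades resolution against noise amplification.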

  16. Method and apparatus for sizing and separating warp yarns using acoustical energy

    DOEpatents

    Sheen, Shuh-Haw; Chien, Hual-Te; Raptis, Apostolos C.; Kupperman, David S.

    1998-01-01

    A slashing process for preparing warp yarns for weaving operations including the steps of sizing and/or desizing the yarns in an acoustic resonance box and separating the yarns with a leasing apparatus comprised of a set of acoustically agitated lease rods. The sizing step includes immersing the yarns in a size solution contained in an acoustic resonance box. Acoustic transducers are positioned against the exterior of the box for generating an acoustic pressure field within the size solution. Ultrasonic waves that result from the acoustic pressure field continuously agitate the size solution to effect greater mixing and more uniform application and penetration of the size onto the yarns. The sized yarns are then separated by passing the warp yarns over and under lease rods. Electroacoustic transducers generate acoustic waves along the longitudinal axis of the lease rods, creating a shearing motion on the surface of the rods for splitting the yarns.

  17. STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies.

    PubMed

    Hepburn, Iain; Chen, Weiliang; Wils, Stefan; De Schutter, Erik

    2012-05-10

    Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. We describe STEPS, a stochastic reaction-diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction-diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. 
STEPS simulates models of cellular reaction-diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/
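The composition-and-rejection method used by STEPS is an optimized variant of the Gillespie SSA. For orientation, the plain direct-method SSA (not the engine STEPS actually implements) fits in a few lines; the reversible reaction and rate constants below are arbitrary illustrations:

```python
import random

def gillespie(rates, stoich, state, t_end, rng=random.Random(1)):
    """Direct-method Gillespie SSA.
    rates[i](state) -> propensity of reaction i; stoich[i] -> state change."""
    t, traj = 0.0, [(0.0, dict(state))]
    while True:
        props = [r(state) for r in rates]
        a0 = sum(props)
        if a0 == 0.0:
            break                      # no reaction can fire
        t += rng.expovariate(a0)       # exponential waiting time
        if t > t_end:
            break
        # choose a reaction with probability proportional to its propensity
        pick, acc = rng.uniform(0.0, a0), 0.0
        for p, change in zip(props, stoich):
            acc += p
            if pick <= acc:
                for species, delta in change.items():
                    state[species] += delta
                break
        traj.append((t, dict(state)))
    return traj

# Illustrative system: A + B -> C (k1 = 0.01), C -> A + B (k2 = 0.1)
state = {"A": 100, "B": 100, "C": 0}
rates = [lambda s: 0.01 * s["A"] * s["B"], lambda s: 0.1 * s["C"]]
stoich = [{"A": -1, "B": -1, "C": +1}, {"A": +1, "B": +1, "C": -1}]
traj = gillespie(rates, stoich, state, t_end=5.0)
```

Composition-rejection improves on this linear scan by grouping propensities so reaction selection stays fast as the number of reaction channels (here, tetrahedral voxels) grows.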

  18. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies

    PubMed Central

    2012-01-01

    Background Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. 
Conclusion STEPS simulates models of cellular reaction–diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/ PMID:22574658

  19. Integrable microwave filter based on a photonic crystal delay line.

    PubMed

    Sancho, Juan; Bourderionnet, Jerome; Lloret, Juan; Combrié, Sylvain; Gasulla, Ivana; Xavier, Stephane; Sales, Salvador; Colman, Pierre; Lehoucq, Gaelle; Dolfi, Daniel; Capmany, José; De Rossi, Alfredo

    2012-01-01

    The availability of a tunable delay line with a chip-size footprint is a crucial step towards the full implementation of integrated microwave photonic signal processors. Achieving a large and tunable group delay on a millimetre-sized chip is not trivial. Slow light concepts are an appropriate solution, if propagation losses are kept acceptable. Here we use a low-loss 1.5 mm-long photonic crystal waveguide to demonstrate both notch and band-pass microwave filters that can be tuned over the 0-50-GHz spectral band. The waveguide is capable of generating a controllable delay with limited signal attenuation (total insertion loss below 10 dB when the delay is below 70 ps) and degradation. Owing to the very small footprint of the delay line, a fully integrated device is feasible, also featuring more complex and elaborate filter functions.

  20. Finite-density effects in the Fredrickson-Andersen and Kob-Andersen kinetically-constrained models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teomy, Eial, E-mail: eialteom@post.tau.ac.il; Shokef, Yair, E-mail: shokef@tau.ac.il

    2014-08-14

    We calculate the corrections to the thermodynamic limit of the critical density for jamming in the Kob-Andersen and Fredrickson-Andersen kinetically-constrained models, and find them to be finite-density corrections, and not finite-size corrections. We do this by introducing a new numerical algorithm, which requires negligible computer memory since, contrary to alternative approaches, it generates at each point only the necessary data. The algorithm starts from a single unfrozen site and at each step randomly generates the neighbors of the unfrozen region and checks whether they are frozen or not. Our results correspond to systems of size greater than 10^7 × 10^7, much larger than any simulated before, and are consistent with the rigorous bounds on the asymptotic corrections. We also find that the average number of sites that seed a critical droplet is greater than 1.
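The lazy site-generation idea described above — drawing the state of each lattice site only when the growing unfrozen region first touches it — can be sketched as follows. This is a simplified illustration on a square lattice with independently frozen sites, not the authors' kinetic-constraint algorithm; `p_unfrozen` and `max_sites` are arbitrary:

```python
import random

def grow_cluster(p_unfrozen, max_sites, seed=0):
    """Grow the connected cluster of 'unfrozen' sites containing the origin.
    Site states are drawn lazily on first visit and cached, so memory
    scales with the visited region, not with the lattice size."""
    rng = random.Random(seed)
    state = {(0, 0): True}            # origin is unfrozen by construction
    cluster, frontier = set(), [(0, 0)]
    while frontier and len(cluster) < max_sites:
        x, y = frontier.pop()
        if (x, y) in cluster:
            continue
        cluster.add((x, y))
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) not in state:
                state[(nx, ny)] = rng.random() < p_unfrozen  # lazy draw
            if state[(nx, ny)]:
                frontier.append((nx, ny))
    return cluster

cluster = grow_cluster(p_unfrozen=0.4, max_sites=10_000)
```

Because only the cluster and its boundary are ever stored, this style of on-demand generation is what makes lattices far larger than physical memory reachable.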

  1. Effect of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths.

    PubMed

    Payne, Velma L; Medvedeva, Olga; Legowski, Elizabeth; Castine, Melissa; Tseytlin, Eugene; Jukic, Drazen; Crowley, Rebecca S

    2009-11-01

    Determine effects of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths. Determine if limited enforcement in a medical tutoring system inhibits students from learning the optimal and most efficient solution path. Describe the type of deviations from the optimal solution path that occur during tutoring, and how these deviations change over time. Determine if the size of the problem-space (domain scope) has an effect on learning gains when using a tutor with limited enforcement. Analyzed data mined from 44 pathology residents using SlideTutor, a Medical Intelligent Tutoring System in Dermatopathology that teaches histopathologic diagnosis and reporting skills based on commonly used diagnostic algorithms. Two subdomains were included in the study, representing sub-algorithms of different sizes and complexities. Effects of the tutoring system on student errors, goal states and solution paths were determined. Students gradually increase the frequency of steps that match the tutoring system's expectation of expert performance. The frequency of errors gradually declines in all categories of error significance. Student performance frequently differs from the tutor-defined optimal path. However, as students continue to be tutored, they approach the optimal solution path. Performance in both subdomains was similar for both errors and goal differences. However, the rate at which students progress toward the optimal solution path differs between the two domains. Tutoring in superficial perivascular dermatitis, the larger and more complex domain, was associated with a slower rate of approximation towards the optimal solution path. Students benefit from a limited-enforcement tutoring system that leverages diagnostic algorithms but does not prevent alternative strategies. Even with limited enforcement, students converge toward the optimal solution path.

  2. Distribution of joint local and total size and of extension for avalanches in the Brownian force model

    NASA Astrophysics Data System (ADS)

    Delorme, Mathieu; Le Doussal, Pierre; Wiese, Kay Jörg

    2016-05-01

    The Brownian force model is a mean-field model for local velocities during avalanches in elastic interfaces of internal space dimension d, driven in a random medium. It is exactly solvable via a nonlinear differential equation. We study avalanches following a kick, i.e., a step in the driving force. We first recall the calculation of the distributions of the global size (total swept area) and of the local jump size for an arbitrary kick amplitude. We extend this calculation to the joint density of local and global sizes within a single avalanche in the limit of an infinitesimal kick. When the interface is driven by a single point, we find new exponents τ₀ = 5/3 and τ = 7/4, depending on whether the force or the displacement is imposed. We show that the extension of a "single avalanche" along one internal direction (i.e., the total length in d = 1) is finite, and we calculate its distribution following either a local or a global kick. In all cases, it exhibits a divergence P(ℓ) ~ ℓ^-3 at small ℓ. Most of our results are tested in a numerical simulation in dimension d = 1.

  3. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Title 40, Protection of Environment, 2014-07-01: Applicability of corrosion control treatment steps to small, medium-size and large water systems. Section 141.81, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), WATER PROGRAMS (CONTINUED), NATIONAL PRIMARY DRINKING WATER REGULATIONS, Control of Lead and Copper...

  4. Effect of time step size and turbulence model on the open water hydrodynamic performance prediction of contra-rotating propellers

    NASA Astrophysics Data System (ADS)

    Wang, Zhan-zhi; Xiong, Ying

    2013-04-01

    A growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise performance and low hull vibration. Compared with the single-screw system, open water performance is more difficult to predict because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS and the sliding mesh method, considering the effect of computational time step size and turbulence model. The validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R & D Center. Compared with the experimental data, it shows that RANS with the sliding mesh method and the SST k-ω turbulence model has good precision in the open water performance prediction of contra-rotating propellers, and a small time step size can improve the level of accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.

  5. The Associations Between Clerkship Objective Structured Clinical Examination (OSCE) Grades and Subsequent Performance.

    PubMed

    Dong, Ting; Zahn, Christopher; Saguil, Aaron; Swygert, Kimberly A; Yoon, Michelle; Servey, Jessica; Durning, Steven

    2017-01-01

    Construct: We investigated the extent of the associations between medical students' clinical competency measured by performance in Objective Structured Clinical Examinations (OSCE) during Obstetrics/Gynecology and Family Medicine clerkships and later performance in both undergraduate and graduate medical education. There is a relative dearth of studies on the correlations between undergraduate OSCE scores and future exam performance within either undergraduate or graduate medical education and almost none on linking these simulated encounters to eventual patient care. Of the research studies that do correlate clerkship OSCE scores with future performance, these often have a small sample size and/or include only 1 clerkship. Students in USU graduating classes of 2007 through 2011 participated in the study. We investigated correlations between clerkship OSCE grades with United States Medical Licensing Examination Step 2 Clinical Knowledge, Clinical Skills, and Step 3 Exams scores as well as Postgraduate Year 1 program director's evaluation scores on Medical Expertise and Professionalism. We also conducted contingency table analysis to examine the associations between poor performance on clerkship OSCEs with failing Step 3 and receiving poor program director ratings. The correlation coefficients were weak between the clerkship OSCE grades and the outcomes. The strongest correlations existed between the clerkship OSCE grades and the Step 2 CS Integrated Clinical Encounter component score, Step 2 Clinical Skills, and Step 3 scores. Contingency table associations between poor performances on both clerkships OSCEs and poor Postgraduate Year 1 Program Director ratings were significant. The results of this study provide additional but limited validity evidence for the use of OSCEs during clinical clerkships given their associations with subsequent performance measures.

  6. Bivariate mass-size relation as a function of morphology as determined by Galaxy Zoo 2 crowdsourced visual classifications

    NASA Astrophysics Data System (ADS)

    Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie

    2016-01-01

    It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details and the precise evolution is still a matter of debate, since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming, in addition to properly selecting representative morphological samples. Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 × 10^5) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications, and we split the sample not only into disk- and bulge-dominated galaxies but also into finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric Maximum Likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.

  7. The Costs of Carnivory

    PubMed Central

    Carbone, Chris; Teacher, Amber; Rowcliffe, J. Marcus

    2007-01-01

    Mammalian carnivores fall into two broad dietary groups: smaller carnivores (<20 kg) that feed on very small prey (invertebrates and small vertebrates) and larger carnivores (>20 kg) that specialize in feeding on large vertebrates. We develop a model that predicts the mass-related energy budgets and limits of carnivore size within these groups. We show that the transition from small to large prey can be predicted by the maximization of net energy gain; larger carnivores achieve a higher net gain rate by concentrating on large prey. However, because it requires more energy to pursue and subdue large prey, this leads to a 2-fold step increase in energy expenditure, as well as increased intake. Across all species, energy expenditure and intake both follow a three-fourths scaling with body mass. However, when each dietary group is considered individually they both display a shallower scaling. This suggests that carnivores at the upper limits of each group are constrained by intake and adopt energy conserving strategies to counter this. Given predictions of expenditure and estimates of intake, we predict a maximum carnivore mass of approximately a ton, consistent with the largest extinct species. Our approach provides a framework for understanding carnivore energetics, size, and extinction dynamics. PMID:17227145

  8. Evolution of Particle Size Distributions in Fragmentation Over Time

    NASA Astrophysics Data System (ADS)

    Charalambous, C. A.; Pike, W. T.

    2013-12-01

    We present a new model of fragmentation based on a probabilistic calculation of the repeated fracture of a particle population. The resulting continuous solution, which is in closed form, gives the evolution of fragmentation products from an initial block, through a scale-invariant power-law relationship to a final comminuted powder. Models for the fragmentation of particles have been developed separately in mainly two different disciplines: the continuous integro-differential equations of batch mineral grinding (Reid, 1965) and the fractal analysis of geophysics (Turcotte, 1986) based on a discrete model with a single probability of fracture. The first gives a time-dependent development of the particle-size distribution, but has resisted a closed-form solution, while the latter leads to the scale-invariant power laws, but with no time dependence. Bird (2009) recently introduced a bridge between these two approaches with a step-wise iterative calculation of the fragmentation products. The development of the particle-size distribution occurs with discrete steps: during each fragmentation event, the particles will repeatedly fracture probabilistically, cascading down the length scales to a final size distribution reached after all particles have failed to further fragment. We have identified this process as the equivalent to a sequence of trials for each particle with a fixed probability of fragmentation. Although the resulting distribution is discrete, it can be reformulated as a continuous distribution in maturity over time and particle size. In our model, Turcotte's power-law distribution emerges at a unique maturation index that defines a regime boundary. Up to this index, the fragmentation is in an erosional regime with the initial particle size setting the scaling. Fragmentation beyond this index is in a regime of comminution with rebreakage of the particles down to the size limit of fracture. 
The maturation index can increment continuously, for example under grinding conditions, or in discrete steps, such as with impact events. In both cases our model gives the energy associated with the fragmentation in terms of the developing surface area of the population. We show the agreement of our model with the evolution of particle size distributions associated with episodic and continuous fragmentation, and how the evolution of some popular fractals may be represented using this approach. C. A. Charalambous and W. T. Pike (2013). Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model. Abstract submitted to the AGU 46th Fall Meeting. Bird, N. R. A., Watts, C. W., Tarquis, A. M., & Whitmore, A. P. (2009). Modeling dynamic fragmentation of soil. Vadose Zone Journal, 8(1), 197-201. Reid, K. J. (1965). A solution to the batch grinding equation. Chemical Engineering Science, 20(11), 953-963. Turcotte, D. L. (1986). Fractals and fragmentation. Journal of Geophysical Research: Solid Earth, 91(B2), 1921-1926.
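The step-wise probabilistic fracture at the heart of this family of models (each particle fractures with a fixed probability at each fragmentation event, as in Bird's iterative calculation) can be mimicked with a toy Monte Carlo cascade. The binary halving rule, fracture probability and event count below are illustrative assumptions, not parameters from the paper:

```python
import random

def fragment(sizes, p_fracture, n_events, rng=random.Random(42)):
    """Apply n_events fragmentation events to a particle population.
    At each event, every particle fractures with probability p_fracture
    into two equal halves; otherwise it survives unchanged."""
    for _ in range(n_events):
        next_gen = []
        for s in sizes:
            if rng.random() < p_fracture:
                next_gen.extend([s / 2.0, s / 2.0])  # binary fracture
            else:
                next_gen.append(s)                   # survives this event
        sizes = next_gen
    return sizes

# Start from a single unit block and mature the population over 8 events.
sizes = fragment([1.0], p_fracture=0.7, n_events=8)
```

Mass is conserved at every event, and repeated events drive the population toward finer sizes, mirroring the progression from an erosional regime toward comminution.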

  9. DNA bipedal motor walking dynamics: an experimental and theoretical study of the dependency on step size

    PubMed Central

    Khara, Dinesh C; Berger, Yaron; Ouldridge, Thomas E

    2018-01-01

    Abstract We present a detailed coarse-grained computer simulation and single molecule fluorescence study of the walking dynamics and mechanism of a DNA bipedal motor striding on a DNA origami. In particular, we study the dependency of the walking efficiency and stepping kinetics on step size. The simulations accurately capture and explain three different experimental observations. These include a description of the maximum possible step size, a decrease in the walking efficiency over short distances and a dependency of the efficiency on the walking direction with respect to the origami track. The former two observations were not expected and are non-trivial. Based on this study, we suggest three design modifications to improve future DNA walkers. Our study demonstrates the ability of the oxDNA model to resolve the dynamics of complex DNA machines, and its usefulness as an engineering tool for the design of DNA machines that operate in the three spatial dimensions. PMID:29294083

  10. Two-step size reduction and post-washing of steam exploded corn stover improving simultaneous saccharification and fermentation for ethanol production.

    PubMed

    Liu, Zhi-Hua; Chen, Hong-Zhang

    2017-01-01

    The simultaneous saccharification and fermentation (SSF) of corn stover biomass for ethanol production was performed by integrating steam explosion (SE) pretreatment, hydrolysis and fermentation. Higher SE pretreatment severity and two-step size reduction increased the specific surface area, swollen volume and water holding capacity of steam exploded corn stover (SECS) and hence facilitated the efficiency of hydrolysis and fermentation. The ethanol production and yield in SSF increased with the decrease of particle size and post-washing of SECS prior to fermentation to remove the inhibitors. Under the SE conditions of 1.5 MPa and 9 min using 2.0 cm particle size, glucan recovery and conversion to glucose by enzymes were 86.2% and 87.2%, respectively. The ethanol concentration and yield were 45.0 g/L and 85.6%, respectively. With this two-step size reduction and post-washing strategy, the water utilization efficiency, sugar recovery and conversion, and ethanol concentration and yield by the SSF process were improved. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Computational Analysis of Enhanced Magnetic Bioseparation in Microfluidic Systems with Flow-Invasive Magnetic Elements

    PubMed Central

    Khashan, S. A.; Alazzam, A.; Furlani, E. P.

    2014-01-01

    A microfluidic design is proposed for realizing greatly enhanced separation of magnetically-labeled bioparticles using integrated soft-magnetic elements. The elements are fixed and intersect the carrier fluid (flow-invasive) with their length transverse to the flow. They are magnetized using a bias field to produce a particle capture force. Multiple stair-step elements are used to provide efficient capture throughout the entire flow channel. This is in contrast to conventional systems wherein the elements are integrated into the walls of the channel, which restricts efficient capture to limited regions of the channel due to the short range nature of the magnetic force. This severely limits the channel size and hence throughput. Flow-invasive elements overcome this limitation and enable microfluidic bioseparation systems with superior scalability. This enhanced functionality is quantified for the first time using a computational model that accounts for the dominant mechanisms of particle transport including fully-coupled particle-fluid momentum transfer. PMID:24931437

  12. Location detection and tracking of moving targets by a 2D IR-UWB radar system.

    PubMed

    Nguyen, Van-Han; Pyun, Jae-Young

    2015-03-19

    In indoor environments, the Global Positioning System (GPS) and long-range tracking radar systems are not optimal, because of signal propagation limitations in the indoor environment. In recent years, the use of ultra-wide band (UWB) technology has become a possible solution for object detection, localization and tracking in indoor environments, because of its high range resolution, compact size and low cost. This paper presents improved target detection and tracking techniques for moving objects with impulse-radio UWB (IR-UWB) radar in a short-range indoor area. This is achieved through signal-processing steps such as clutter reduction, target detection, target localization and tracking. In this paper, we introduce a new combination consisting of our proposed signal-processing procedures. In the clutter-reduction step, a filtering method that uses a Kalman filter (KF) is proposed. Then, in the target detection step, a modification of the conventional CLEAN algorithm, which is used to estimate the impulse response from the observation region, is applied for the advanced elimination of false alarms. The output is then fed into the target localization and tracking step, in which the target location and trajectory are determined and tracked by using an unscented KF in two-dimensional coordinates. In each step, the proposed methods are compared to conventional methods to demonstrate the differences in performance. The experiments are carried out using actual IR-UWB radar under different scenarios. The results verify that the proposed methods can improve the probability and efficiency of target detection and tracking.
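The paper's clutter-reduction step is Kalman-filter based. A common simpler baseline for IR-UWB scans, sketched here with made-up signal parameters, is exponential-average background subtraction (a stand-in for illustration, not the authors' KF method):

```python
import numpy as np

def remove_clutter(frames, alpha=0.05):
    """Exponential-average background subtraction for a sequence of radar
    scans (rows = slow time, cols = fast-time range bins). Static clutter
    is absorbed into the background estimate; moving-target echoes remain."""
    background = np.zeros(frames.shape[1])
    cleaned = np.empty_like(frames)
    for i, frame in enumerate(frames):
        cleaned[i] = frame - background
        background = (1 - alpha) * background + alpha * frame
    return cleaned

# Synthetic data: strong static clutter plus a weak echo moving one bin/scan.
n_scans, n_bins = 200, 64
frames = np.tile(np.sin(np.arange(n_bins)), (n_scans, 1))  # static clutter
for i in range(n_scans):
    frames[i, (10 + i) % n_bins] += 0.5                    # moving target
cleaned = remove_clutter(frames, alpha=0.1)
```

The forgetting factor `alpha` trades how quickly static clutter is absorbed into the background against how much of a slowly moving target leaks into it.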

  13. Stepping Stones Triple P-Positive Parenting Program for children with disability: a systematic review and meta-analysis.

    PubMed

    Tellegen, Cassandra L; Sanders, Matthew R

    2013-05-01

    This systematic review and meta-analysis evaluated the treatment effects of a behavioral family intervention, Stepping Stones Triple P (SSTP) for parents of children with disabilities. SSTP is a system of five intervention levels of increasing intensity and narrowing population reach. Twelve studies, including a total of 659 families, met eligibility criteria. Studies needed to have evaluated SSTP, be written in English or German, contribute original data, and have sufficient data for analyses. No restrictions were placed on study design. A series of meta-analyses were performed for seven different outcome categories. Analyses were conducted on the combination of all four levels of SSTP for which evidence exists (Levels 2-5), and were also conducted separately for each level of SSTP. Significant moderate effect sizes were found for all levels of SSTP for reducing child problems, the primary outcome of interest. On secondary outcomes, significant overall effect sizes were found for parenting styles, parenting satisfaction and efficacy, parental adjustment, parental relationship, and observed child behaviors. No significant treatment effects were found for observed parenting behaviors. Moderator analyses showed no significant differences in effect sizes across the levels of SSTP intervention, with the exception of child observations. Risk of bias within and across studies was assessed. Analyses suggested that publication bias and selective reporting bias were not likely to have heavily influenced the findings. The overall evidence base supported the effectiveness of SSTP as an intervention for improving child and parent outcomes in families of children with disabilities. Limitations and future research directions are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Optimized spray drying process for preparation of one-step calcium-alginate gel microspheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popeski-Dimovski, Riste

    Calcium-alginate microparticles have been used extensively in drug delivery systems. Here we establish a one-step method for the preparation of internally gelated microparticles with spherical shape and narrow size distribution. We used four types of alginate with different G/M ratios and molar weights. The particle size was measured using light diffraction and scanning electron microscopy. The measurements showed that microparticles with a size distribution around 4 micrometers can be prepared with this method, and SEM imaging showed that the particles are spherical in shape.

  15. Shear Melting of a Colloidal Glass

    NASA Astrophysics Data System (ADS)

    Eisenmann, Christoph; Kim, Chanjoong; Mattsson, Johan; Weitz, David A.

    2010-01-01

    We use confocal microscopy to explore shear melting of colloidal glasses, which occurs at strains of ˜0.08, coinciding with a strongly non-Gaussian step size distribution. For larger strains, the particle mean square displacement increases linearly with strain and the step size distribution becomes Gaussian. The effective diffusion coefficient varies approximately linearly with shear rate, consistent with a modified Stokes-Einstein relationship in which thermal energy is replaced by shear energy and the length scale is set by the size of cooperatively moving regions consisting of ˜3 particles.

  16. Atomic layer-by-layer thermoelectric conversion in topological insulator bismuth/antimony tellurides.

    PubMed

    Sung, Ji Ho; Heo, Hoseok; Hwang, Inchan; Lim, Myungsoo; Lee, Donghun; Kang, Kibum; Choi, Hee Cheul; Park, Jae-Hoon; Jhi, Seung-Hoon; Jo, Moon-Ho

    2014-07-09

    Material design for direct heat-to-electricity conversion with substantial efficiency essentially requires cooperative control of electrical and thermal transport. Bismuth telluride (Bi2Te3) and antimony telluride (Sb2Te3), which display the highest thermoelectric power at room temperature, are also known as topological insulators (TIs), whose electronic structures are modified by electronic confinement and strong spin-orbit interaction in the few-monolayer thickness regime, thus possibly providing another degree of freedom for electron and phonon transport at surfaces. Here, we explore novel thermoelectric conversion at the atomic monolayer steps of few-layer topological insulating Bi2Te3 (n-type) and Sb2Te3 (p-type). Specifically, by scanning photoinduced thermoelectric current imaging at the monolayer steps, we show that efficient thermoelectric conversion is accomplished by the optothermal motion of hot electrons (Bi2Te3) and holes (Sb2Te3) through 2D subbands and topologically protected surface states in a geometrically deterministic manner. Our discovery suggests that thermoelectric conversion can be achieved interiorly at the atomic steps of a homogeneous medium by directly exploiting the quantum nature of TIs, thus providing a new design rule for compact thermoelectric circuitry at the ultimate size limit.

  17. Real-time traffic sign detection and recognition

    NASA Astrophysics Data System (ADS)

    Herbschleb, Ernst; de With, Peter H. N.

    2009-01-01

    The continuous growth of imaging databases increasingly requires analysis tools for the extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images at resolutions up to 4,800×2,400 pixels. Because of the size of the database, both high reliability and high throughput are required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining both color and specific spatial information. The first stage contains an area-limitation step which is performance-critical for both the detection rate and the overall processing time. The second stage locates candidate traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm; during this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput is 35 Hz for line-of-sight images of 800×600 pixels and 4 Hz for panorama images. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.

  18. A parallelization method for time periodic steady state in simulation of radio frequency sheath dynamics

    NASA Astrophysics Data System (ADS)

    Kwon, Deuk-Chul; Shin, Sung-Sik; Yu, Dong-Hun

    2017-10-01

    In order to reduce the computing time in simulations of radio frequency (rf) plasma sources, various numerical schemes have been developed. It is well known that the upwind, exponential, and power-law schemes can efficiently overcome the limitation on the grid size in fluid transport simulations of high density plasma discharges. The semi-implicit method is likewise a well-known numerical scheme for overcoming the limitation on the simulation time step. However, despite remarkable advances in numerical techniques and computing power over the last few decades, efficient multi-dimensional modeling of low temperature plasma discharges has remained a considerable challenge. In particular, time periodic steady state problems such as capacitively coupled plasma discharges and rf sheath dynamics have been difficult to parallelize in time, because values of the plasma parameters from the previous time step are needed to calculate new values at each time step. We therefore present a parallelization method for time periodic steady state problems based on period-slices. To evaluate the efficiency of the developed method, one-dimensional fluid simulations are conducted to describe rf sheath dynamics. The results show that speedup can be achieved by using a multithreading method.

  19. Proposed variations of the stepped-wedge design can be used to accommodate multiple interventions.

    PubMed

    Lyons, Vivian H; Li, Lingyu; Hughes, James P; Rowhani-Rahbar, Ali

    2017-06-01

    Stepped-wedge design (SWD) cluster-randomized trials have traditionally been used for evaluating a single intervention. We aimed to explore design variants suitable for evaluating multiple interventions in an SWD trial. We identified four specific variants of the traditional SWD that would allow two interventions to be conducted within a single cluster-randomized trial: concurrent, replacement, supplementation, and factorial SWDs. These variants were chosen to flexibly accommodate study characteristics that limit a one-size-fits-all approach for multiple interventions. In the concurrent SWD, each cluster receives only one intervention, unlike the other variants. The replacement SWD supports two interventions that will not or cannot be used at the same time. The supplementation SWD is appropriate when the second intervention requires the presence of the first intervention, and the factorial SWD supports the evaluation of intervention interactions. The precision for estimating intervention effects varies across the four variants. Selection of the appropriate design variant should be driven by the research question while considering the trade-off between the number of steps, number of clusters, restrictions for concurrent implementation of the interventions, lingering effects of each intervention, and precision of the intervention effect estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
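
    The layout of a traditional SWD, from which the four variants depart, can be sketched as a cluster-by-period treatment matrix. The following minimal example (sizes and the helper name are illustrative, not from the study) builds one in which clusters cross over to the intervention in evenly sized groups:

```python
import numpy as np

def stepped_wedge(n_clusters, n_steps):
    """Traditional stepped-wedge design matrix: rows are clusters,
    columns are time periods (one baseline period plus one period per step).
    Entry 1 means the cluster has crossed over to the intervention.
    Assumes n_clusters is divisible by n_steps."""
    n_periods = n_steps + 1
    D = np.zeros((n_clusters, n_periods), dtype=int)
    # Assign clusters evenly to crossover steps 1..n_steps
    step_of = np.repeat(np.arange(1, n_steps + 1), n_clusters // n_steps)
    for i, s in enumerate(step_of):
        D[i, s:] = 1   # once crossed over, the cluster stays on intervention
    return D

D = stepped_wedge(6, 3)
# Every cluster starts in control (all-zero first column) ...
assert D[:, 0].sum() == 0
# ... and ends in the intervention condition.
assert D[:, -1].sum() == 6
```

    The concurrent variant described above would assign each cluster to one of two such matrices (one per intervention), while the factorial variant overlays two of them on the same clusters.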

  20. Impurity effects in crystal growth from solutions: Steady states, transients and step bunch motion

    NASA Astrophysics Data System (ADS)

    Ranganathan, Madhav; Weeks, John D.

    2014-05-01

    We analyze a recently formulated model in which adsorbed impurities impede the motion of steps in crystals grown from solutions, while moving steps can remove or deactivate adjacent impurities. In this model, the chemical potential change of an atom on incorporation/desorption to/from a step is calculated for different step configurations and used in the dynamical simulation of step motion. The crucial difference between solution growth and vapor growth is related to the dependence of the driving force for growth of the main component on the size of the terrace in front of the step. This model has features resembling experiments in solution growth, which yields a dead zone with essentially no growth at low supersaturation and the motion of large coherent step bunches at larger supersaturation. The transient behavior shows a regime wherein steps bunch together and move coherently as the bunch size increases. The behavior at large line tension is reminiscent of the kink-poisoning mechanism of impurities observed in calcite growth. Our model unifies different impurity models and gives a picture of nonequilibrium dynamics that includes both steady states and time dependent behavior and shows similarities with models of disordered systems and the pinning/depinning transition.

  1. Feasibility study of negative lift circumferential type seal for helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Goldring, E. N.

    1977-01-01

    A new seal concept, the negative lift circumferential type seal, was evaluated under simulated helicopter transmission conditions. The bore of the circumferential seal contains step type geometry which produces a negative lift that urges the sealing segments towards the shaft surface. The seal size was a 2.5 inch bore and the test speeds were 7000 and 14,250 rpm. During the 300 hour test at typical transmission seal pressure (to 2 psig) the leakage was within acceptable limits and generally less than 0.1 cc/hour during the last 150 hours of testing. The wear to the carbon segments during the 300 hours was negligible.

  2. Finite-element approach to Brownian dynamics of polymers.

    PubMed

    Cyron, Christian J; Wall, Wolfgang A

    2009-12-01

    In recent decades, simulation tools for Brownian dynamics of polymers have attracted more and more interest. Such simulation tools have been applied to a large variety of problems and have accelerated scientific progress significantly. However, the currently most frequently used explicit bead models exhibit severe limitations, especially with respect to time step size, the necessity of artificial constraints and the lack of a sound mathematical foundation. Here we present a framework for simulations of Brownian polymer dynamics based on the finite-element method. This approach allows simulating a wide range of physical phenomena at a highly attractive computational cost, on the basis of a well-developed mathematical background.

  3. Computationally efficient method for Fourier transform of highly chirped pulses for laser and parametric amplifier modeling.

    PubMed

    Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail

    2016-11-14

    We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase which can be efficiently implemented in numerical simulations utilizing the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by a factor equal to the pulse stretching ratio. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
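
    The underlying idea can be illustrated with a toy example (not the authors' implementation; the grid size, pulse duration and chirp rate are arbitrary choices): once the known quadratic phase is multiplied out, the remaining pulse is almost transform-limited, so its spectrum occupies a far smaller portion of the frequency grid.

```python
import numpy as np

# Time grid and a strongly linearly chirped Gaussian pulse
N = 4096
t = np.linspace(-200.0, 200.0, N)
a = 0.2                                    # chirp rate (illustrative)
pulse = np.exp(-t**2 / (2 * 10.0**2)) * np.exp(1j * a * t**2)

def rms_width(x, field):
    """RMS width of |field|^2 along the axis x."""
    p = np.abs(field) ** 2
    p = p / p.sum()
    mean = (x * p).sum()
    return np.sqrt(((x - mean) ** 2 * p).sum())

f = np.fft.fftshift(np.fft.fftfreq(N, d=t[1] - t[0]))

# The spectrum of the chirped pulse is broad ...
spec_chirped = np.fft.fftshift(np.fft.fft(pulse))
# ... but after multiplying out the known quadratic phase, the residual
# pulse is nearly transform-limited and its spectrum is far narrower,
# so it would fit on a much smaller frequency grid.
dechirped = pulse * np.exp(-1j * a * t**2)
spec_flat = np.fft.fftshift(np.fft.fft(dechirped))

assert rms_width(f, spec_flat) < rms_width(f, spec_chirped) / 10
```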

  4. Plant genotyping using fluorescently tagged inter-simple sequence repeats (ISSRs): basic principles and methodology.

    PubMed

    Prince, Linda M

    2015-01-01

    Inter-simple sequence repeat PCR (ISSR-PCR) is a fast, inexpensive genotyping technique based on length variation in the regions between microsatellites. The method requires no species-specific prior knowledge of microsatellite location or composition. Very small amounts of DNA are required, making this method ideal for organisms of conservation concern, or where the quantity of DNA is extremely limited due to organism size. ISSR-PCR can be highly reproducible but requires careful attention to detail. Optimization of DNA extraction, fragment amplification, and normalization of fragment peak heights during fluorescent detection are critical steps to minimizing the downstream time spent verifying and scoring the data.

  5. Semi-Automated Hydrophobic Interaction Chromatography Column Scouting Used in the Two-Step Purification of Recombinant Green Fluorescent Protein

    PubMed Central

    Murphy, Patrick J. M.

    2014-01-01

    Background Hydrophobic interaction chromatography (HIC) most commonly requires experimental determination (i.e., scouting) in order to select an optimal chromatographic medium for purifying a given target protein. Neither a two-step purification of untagged green fluorescent protein (GFP) from crude bacterial lysate using sequential HIC and size exclusion chromatography (SEC), nor HIC column scouting elution profiles of GFP, have been previously reported. Methods and Results Bacterial lysate expressing recombinant GFP was sequentially adsorbed to commercially available HIC columns containing butyl, octyl, and phenyl-based HIC ligands coupled to matrices of varying bead size. The lysate was fractionated using a linear ammonium phosphate salt gradient at constant pH. Collected HIC eluate fractions containing retained GFP were then pooled and further purified using high-resolution preparative SEC. Significant differences in presumptive GFP elution profiles were observed using in-line absorption spectrophotometry (A395) and post-run fluorimetry. SDS-PAGE and western blot demonstrated that fluorometric detection was the more accurate indicator of GFP elution in both HIC and SEC purification steps. Comparison of composite HIC column scouting data indicated that a phenyl ligand coupled to a 34 µm matrix produced the highest degree of target protein capture and separation. Conclusions Conducting two-step protein purification using the preferred HIC medium followed by SEC resulted in a final, concentrated product with >98% protein purity. In-line absorbance spectrophotometry was not as precise of an indicator of GFP elution as post-run fluorimetry. These findings demonstrate the importance of utilizing a combination of detection methods when evaluating purification strategies. 
GFP is a well-characterized model protein, used heavily in educational settings and by researchers with limited protein purification experience, and the data and strategies presented here may aid in the development of other HIC-compatible protein purification schemes. PMID:25254496

  6. Recent progress on RE2O3-Mo/W emission materials.

    PubMed

    Wang, Jinshu; Zhang, Xizhu; Liu, Wei; Cui, Yuntao; Wang, Yiman; Zhou, Meiling

    2012-08-01

    RE2O3-Mo/W cathodes were prepared by the powder metallurgy method. La2O3-Y2O3-Mo cermet cathodes prepared by the traditional sintering method and by spark plasma sintering (SPS) exhibit different secondary emission properties. The La2O3-Y2O3-Mo cermet cathode prepared by the SPS method has a smaller grain size and exhibits better secondary emission performance. Monte Carlo calculation results indicate that the secondary electron emission pathway of the cathode correlates with the grain size. Decreasing the grain size decreases the positive charging effect of RE2O3 and thus favors the escape of secondary electrons to vacuum. The scandia-doped tungsten matrix dispenser cathode, with a sub-micrometer matrix microstructure containing uniformly distributed nanometer-sized scandia particles, has good thermionic emission properties. Over 100 A/cm2 full space-charge-limited current density can be obtained at 950 °Cb. The cathode surface is covered by a Ba-Sc-O active surface layer with nano-particles distributed mainly on the growth steps of W grains, which leads to the conspicuous emission properties of the cathode.

  7. A practical Bayesian stepped wedge design for community-based cluster-randomized clinical trials: The British Columbia Telehealth Trial.

    PubMed

    Cunanan, Kristen M; Carlin, Bradley P; Peterson, Kevin A

    2016-12-01

    Many clinical trial designs are impractical for community-based clinical intervention trials. Stepped wedge trial designs provide practical advantages, but few descriptions exist of their clinical implementation features, statistical design efficiencies, and limitations. We aimed to enhance the efficiency of stepped wedge trial designs by evaluating the impact of design characteristics on statistical power for the British Columbia Telehealth Trial. The British Columbia Telehealth Trial is a community-based, cluster-randomized, controlled clinical trial in rural and urban British Columbia. To determine the effect of an Internet-based telehealth intervention on healthcare utilization, 1000 subjects with an existing diagnosis of congestive heart failure or type 2 diabetes will be enrolled from 50 clinical practices. Hospital utilization is measured using a composite of disease-specific hospital admissions and emergency visits. The intervention comprises online telehealth data collection and counseling provided to support a disease-specific action plan developed by the primary care provider. The planned intervention is sequentially introduced across all participating practices. We adopt a fully Bayesian, Markov chain Monte Carlo-driven statistical approach, wherein we use simulation to determine the effect of cluster size, sample size, and crossover interval choice on type I error and power to evaluate differences in hospital utilization. For our Bayesian stepped wedge trial design, simulations suggest moderate decreases in power when crossover intervals from control to intervention are reduced from every 3 weeks to every 2 weeks, and dramatic decreases in power as the number of clusters decreases. Power and type I error performance were not notably affected by the addition of nonzero cluster effects or a temporal trend in hospitalization intensity. 
Stepped wedge trial designs that intervene in small clusters across longer periods can provide enhanced power to evaluate comparative effectiveness, while offering practical implementation advantages in geographic stratification, temporal change, use of existing data, and resource distribution. Current population estimates were used; however, models may not reflect actual event rates during the trial. In addition, temporal or spatial heterogeneity can bias treatment effect estimates. © The Author(s) 2016.

  8. Study of CdTe quantum dots grown using a two-step annealing method

    NASA Astrophysics Data System (ADS)

    Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2006-02-01

    High size dispersion, a large average quantum dot radius and a low volume ratio have been major hurdles in the development of quantum dot based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and a theoretical model of the absorption spectra have shown that quantum dots grown using two-step annealing have a lower average radius, lower size dispersion, higher volume ratio and a larger decrease in bulk free energy compared with quantum dots grown conventionally.

  9. High Yield Chemical Vapor Deposition Growth of High Quality Large-Area AB Stacked Bilayer Graphene

    PubMed Central

    Liu, Lixin; Zhou, Hailong; Cheng, Rui; Yu, Woo Jong; Liu, Yuan; Chen, Yu; Shaw, Jonathan; Zhong, Xing; Huang, Yu; Duan, Xiangfeng

    2012-01-01

    Bernal stacked (AB stacked) bilayer graphene is of significant interest for functional electronic and photonic devices due to the feasibility to continuously tune its band gap with a vertical electrical field. Mechanical exfoliation can be used to produce AB stacked bilayer graphene flakes but typically with the sizes limited to a few micrometers. Chemical vapor deposition (CVD) has been recently explored for the synthesis of bilayer graphene but usually with limited coverage and a mixture of AB and randomly stacked structures. Herein we report a rational approach to produce large-area high quality AB stacked bilayer graphene. We show that the self-limiting effect of graphene growth on Cu foil can be broken by using a high H2/CH4 ratio in a low pressure CVD process to enable the continued growth of bilayer graphene. A high temperature and low pressure nucleation step is found to be critical for the formation of bilayer graphene nuclei with high AB stacking ratio. A rational design of a two-step CVD process is developed for the growth of bilayer graphene with high AB stacking ratio (up to 90 %) and high coverage (up to 99 %). The electrical transport studies demonstrated that devices made of the as-grown bilayer graphene exhibit typical characteristics of AB stacked bilayer graphene with the highest carrier mobility exceeding 4,000 cm2/V·s at room temperature, comparable to that of the exfoliated bilayer graphene. PMID:22906199

  10. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
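
    The procedure with step-size 1 is the familiar successive-approximations (EM-type) update; a step size between 0 and 2 simply scales the increment. A minimal sketch of this deflected iteration (assuming, purely for illustration, known equal mixing weights and unit variances, so only the component means are iterated):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic sample from a two-component normal mixture
# (equal weights, unit variances, true means -2 and +3)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

def em_step(mu, x):
    """One standard successive-approximations (EM) update of the component
    means, with weights and variances held fixed at their true values."""
    resp = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2)  # unnormalized
    resp /= resp.sum(axis=1, keepdims=True)                # responsibilities
    return (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)

def fit(step_size, iters=200):
    mu = np.array([-1.0, 1.0])          # crude initial guess
    for _ in range(iters):
        # Deflected-gradient form: step_size = 1 recovers the plain update
        mu = mu + step_size * (em_step(mu, x) - mu)
    return np.sort(mu)

for s in (0.5, 1.0, 1.5):               # step sizes in the convergent range (0, 2)
    mu = fit(s)
    assert abs(mu[0] - (-2)) < 0.3 and abs(mu[1] - 3) < 0.3
```

    With well-separated components, as here, all three step sizes converge to essentially the same estimates; the paper's result is that the optimal step size for local convergence is determined by this separation.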

  11. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  12. Visually Lossless JPEG 2000 for Remote Image Browsing

    PubMed Central

    Oh, Han; Bilgin, Ali; Marcellin, Michael

    2017-01-01

    Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
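
    The deadzone uniform scalar quantization that JPEG2000 Part 1 applies to wavelet coefficients can be sketched as follows (a simplified illustration on synthetic Laplacian "coefficients"; the step sizes are arbitrary, not the visually lossless thresholds measured in the paper):

```python
import numpy as np

def quantize(coeffs, delta):
    """JPEG2000-style deadzone uniform scalar quantizer:
    index = sign(c) * floor(|c| / delta)."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / delta)

def dequantize(idx, delta, r=0.5):
    """Reconstruction at a fraction r into the quantization bin
    (r = 0.5 gives midpoint reconstruction)."""
    return np.sign(idx) * (np.abs(idx) + r) * delta

rng = np.random.default_rng(1)
c = rng.laplace(scale=4.0, size=10_000)  # wavelet coefficients are roughly Laplacian

# A coarser step (as tolerated at reduced display resolution) quantizes
# more coefficients to zero, shrinking the codestream fraction needed.
fine, coarse = quantize(c, 0.5), quantize(c, 2.0)
assert (coarse == 0).mean() > (fine == 0).mean()
# The reconstruction error is bounded by the step size
assert np.max(np.abs(dequantize(fine, 0.5) - c)) <= 0.5
```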

  13. Multi-passes warm rolling of AZ31 magnesium alloy, effect on evaluation of texture, microstructure, grain size and hardness

    NASA Astrophysics Data System (ADS)

    Kamran, J.; Hasan, B. A.; Tariq, N. H.; Izhar, S.; Sarwar, M.

    2014-06-01

    In this study, the effect of multi-pass warm rolling of AZ31 magnesium alloy on the texture, microstructure, grain size variation and hardness of an as-cast sample (A) and two rolled samples (B and C) taken from different locations of the as-cast ingot was investigated. The purpose was to enhance the formability of the AZ31 alloy in order to aid manufacturability. Multi-pass warm rolling (250°C to 350°C) of samples B and C, with initial thicknesses of 7.76 mm and 7.73 mm, was successfully achieved up to 85% reduction without any edge or surface cracks over 26 passes. The rolling schedule comprised ten steps: steps 1 to 4 consisted of 5, 2, 11 and 3 passes respectively, and steps 5 to 10 were single-pass rolls. In each discrete step a fixed roll gap was used, such that the cumulative true strain increased very slowly from 0.0067 after the first pass to 0.7118 after the 26th pass. Both samples B and C showed very similar behavior through the 26th pass and were successfully rolled up to 85% thickness reduction. However, during the 10th step (27th pass), at a true strain of 0.772, sample B experienced very severe surface as well as edge cracks. Sample C was therefore not rolled for the 10th step and was retained after 26 passes. Both samples were studied in terms of their basal texture, microstructure, grain size and hardness. Sample C showed an equiaxed grain structure after 85% total reduction. The equiaxed grain structure of sample C may be due to the effective involvement of dynamic recrystallization (DRX), which led to the formation of grains with relatively low misorientations with respect to the parent as-cast grains. Sample B, on the other hand, showed a microstructure in which all the grains were elongated along the rolling direction (RD) after 90% total reduction, and DRX could not effectively play its role due to heavy strain and a lack of plastic deformation systems. 
The as-cast sample showed a near-random texture (mrd 4.3), with an average grain size of 44 μm and a micro-hardness of 52 Hv. The grain sizes of samples B and C were 14 μm and 27 μm respectively, and the mrd intensities of the basal texture were 5.34 and 5.46 respectively. The hardness of samples B and C was 91 and 66 Hv respectively; the increase with decreasing grain size follows the well-known Hall–Petch relationship.

  14. One-step multiplex PCR method for the determination of pecan and Brazil nut allergens in food products.

    PubMed

    Hubalkova, Zora; Rencova, Eva

    2011-10-01

    A one-step polymerase chain reaction (PCR) method for the simultaneous detection of the major allergens of pecan and Brazil nuts was developed. Primer pairs for the amplification of partial sequences of the genes encoding the allergens were designed and tested for specificity on a range of food components. The targeted amplicon sizes were 173 bp of the Ber e 1 gene of Brazil nuts and 72 bp of the vicilin-like seed storage protein gene of pecan nuts. A primer pair detecting the noncoding region of the chloroplast DNA was used as the internal amplification control. The intrinsic detection limit of the PCR method was 100 pg mL(-1) of pecan or Brazil nut DNA. The practical detection limit was 0.1% w/w (1 g kg(-1)). The method was applied to the investigation of 63 samples declared to contain pecans, Brazil nuts, other nut species or nuts in general. In 15 food samples, pecan and Brazil nut allergens were identified in conformity with the food declaration. The presented multiplex PCR method is sufficiently specific and can be used as a fast approach for the detection of the major allergens of pecan or Brazil nuts in food. Copyright © 2011 Society of Chemical Industry.

  15. Multiple stage miniature stepping motor

    DOEpatents

    Niven, William A.; Shikany, S. David; Shira, Michael L.

    1981-01-01

    A stepping motor comprising a plurality of stages which may be selectively activated to effect stepping movement of the motor, and which are mounted along a common rotor shaft to achieve considerable reduction in motor size and minimum diameter, whereby sequential activation of the stages results in successive rotor steps with direction being determined by the particular activating sequence followed.

  16. Kinematic and behavioral analyses of protective stepping strategies and risk for falls among community living older adults.

    PubMed

    Bair, Woei-Nan; Prettyman, Michelle G; Beamer, Brock A; Rogers, Mark W

    2016-07-01

    Protective stepping evoked by externally applied lateral perturbations reveals balance deficits underlying falls. However, a lack of comprehensive information about the control of different stepping strategies in relation to the magnitude of perturbation limits understanding of balance control in relation to age and fall status. The aim of this study was to investigate different protective stepping strategies and their kinematic and behavioral control characteristics in response to different magnitudes of lateral waist-pulls between older fallers and non-fallers. Fifty-two community-dwelling older adults (16 fallers) reacted naturally to maintain balance in response to five magnitudes of lateral waist-pulls. The balance tolerance limit (BTL, waist-pull magnitude where protective steps transitioned from single to multiple steps), first step control characteristics (stepping frequency and counts, spatial-temporal kinematic, and trunk position at landing) of four naturally selected protective step types were compared between fallers and non-fallers at- and above-BTL. Fallers took medial-steps most frequently while non-fallers most often took crossover-back-steps. Only non-fallers varied their step count and first step control parameters by step type at the instants of step initiation (onset time) and termination (trunk position), while both groups modulated step execution parameters (single stance duration and step length) by step type. Group differences were generally better demonstrated above-BTL. Fallers primarily used a biomechanically less effective medial-stepping strategy that may be partially explained by reduced somato-sensation. Fallers did not modulate their step parameters by step type at first step initiation and termination, instances particularly vulnerable to instability, reflecting their limitations in balance control during protective stepping. Copyright © 2016. Published by Elsevier Ltd.

  17. Modelling Limit Order Execution Times from Market Data

    NASA Astrophysics Data System (ADS)

    Kim, Adlar; Farmer, Doyne; Lo, Andrew

    2007-03-01

    Although the term ``liquidity'' is widely used in the finance literature, its meaning is very loosely defined and there is no quantitative measure for it. Generally, ``liquidity'' means an ability to quickly trade stocks without causing a significant impact on the stock price. From this definition, we identified two facets of liquidity -- (1) the execution time of limit orders, and (2) the price impact of market orders. A limit order is an order to transact a prespecified number of shares at a prespecified price, which will not cause an immediate execution. On the other hand, a market order is an order to transact a prespecified number of shares at the market price, which will cause an immediate execution but is subject to price impact. Therefore, when a stock is liquid, market participants will experience quick limit order executions and small market order impacts. As a first step toward understanding market liquidity, we studied the facet of liquidity related to limit order executions -- execution times. In this talk, we propose a novel approach to modeling limit order execution times and show how they are affected by the size and price of orders. We used the q-Weibull distribution, a generalized form of the Weibull distribution whose tail fatness can be controlled, to model limit order execution times.
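
    The q-Weibull density the authors rely on can be written down and checked numerically. Below is a minimal sketch in one common parameterization (shape k, scale lam, entropic index q; q -> 1 recovers the ordinary Weibull, while q > 1 fattens the tail into a power law). The parameter values are illustrative assumptions, not fitted values from the talk.

```python
import numpy as np

def q_exp(x, q):
    """Generalized q-exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * np.asarray(x, dtype=float)
    return np.where(base > 0.0, np.abs(base) ** (1.0 / (1.0 - q)), 0.0)

def q_weibull_pdf(t, q, k, lam):
    """q-Weibull density for t >= 0 (one common parameterization, 1 <= q < 2)."""
    t = np.asarray(t, dtype=float)
    return (2.0 - q) * (k / lam) * (t / lam) ** (k - 1.0) * q_exp(-(t / lam) ** k, q)

# the density is exactly normalized for q < 2; verify with a log-spaced trapezoid rule
ts = np.logspace(-6, 5, 20000)
vals = q_weibull_pdf(ts, q=1.3, k=0.9, lam=1.0)
total = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts)))
print(round(total, 3))

# the q > 1 tail dominates the ordinary Weibull tail (fat-tailed execution times)
print(float(q_weibull_pdf(10.0, 1.3, 0.9, 1.0)) > float(q_weibull_pdf(10.0, 1.0, 0.9, 1.0)))
```

    A fit to real execution-time data would typically maximize the log-likelihood built from this density rather than eyeball the tail.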

  18. Vitrification of zona-free rabbit expanded or hatching blastocysts: a possible model for human blastocysts.

    PubMed

    Cervera, R P; Garcia-Ximénez, F

    2003-10-01

    The purpose of this study was to test the effectiveness of one two-step (A) and two one-step (B1 and B2) vitrification procedures on denuded expanded or hatching rabbit blastocysts held in standard sealed plastic straws as a possible model for human blastocysts. The effect of blastocyst size was also studied on the basis of three size categories (I: diameter <200 µm; II: diameter 200-299 µm; III: diameter ≥300 µm). Rabbit expanded or hatching blastocysts were vitrified at day 4 or 5. Before vitrification, the zona pellucida was removed using acidic phosphate buffered saline. For the two-step procedure, prior to vitrification, blastocysts were pre-equilibrated in a solution containing 10% dimethyl sulphoxide (DMSO) and 10% ethylene glycol (EG) for 1 min. Different final vitrification solutions were compared: 20% DMSO and 20% EG with (A and B1) or without (B2) 0.5 mol/l sucrose. Of 198 vitrified blastocysts, 181 (91%) survived, regardless of the vitrification procedure applied. Vitrification procedure A showed significantly higher re-expansion (88%), attachment (86%) and trophectoderm outgrowth (80%) rates than the two one-step vitrification procedures, B1 and B2 (46 and 21%, 20 and 33%, and 18 and 23%, respectively). After warming, blastocysts of greater size (II and III) showed significantly higher attachment (54 and 64%) and trophectoderm outgrowth (44 and 58%) rates than smaller blastocysts (I, attachment: 29%; trophectoderm outgrowth: 25%). These results demonstrate that denuded expanded or hatching rabbit blastocysts of greater size can be satisfactorily vitrified by use of a two-step procedure. The similarity of vitrification solutions used in humans could make it feasible to test such a procedure on human denuded blastocysts of different sizes.

  19. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
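
    The core of any such scheme is the element-wise CFL bound and the grouping of elements into power-of-two refinement levels. A minimal sketch of that bookkeeping (generic; the function names and the factor-of-two level rule are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def stable_dt(h, c, cfl=0.5):
    """Largest stable explicit time step per element from the CFL condition:
    dt <= cfl * h / c  (h: element size, c: local wave speed)."""
    return cfl * np.asarray(h, dtype=float) / np.asarray(c, dtype=float)

def assign_lts_levels(h, c, cfl=0.5):
    """Group elements into power-of-two local time-stepping levels.

    Level p uses the substep dt_global / 2**p, so each element steps near
    its own stability limit instead of the global minimum."""
    dt = stable_dt(h, c, cfl)
    dt_global = dt.max()                               # coarsest admissible step
    levels = np.ceil(np.log2(dt_global / dt)).astype(int)
    return levels, dt_global

# a toy mesh with a 100x element-size contrast (cf. the paper's strong-contrast tests)
h = np.array([1.0, 1.0, 0.5, 0.25, 0.01])
levels, dt0 = assign_lts_levels(h, c=1.0)
print(levels)   # the smallest element lands on a much finer level
```

    Roughly speaking, elements at level p are then advanced 2**p times per global step, with interface corrections tying neighboring levels together.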

  20. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

    A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of different coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero-bit-planes that can be extracted from the JPEG 2000 codestream by only parsing the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.

  1. Step-by-step guideline for disease-specific costing studies in low- and middle-income countries: a mixed methodology

    PubMed Central

    Hendriks, Marleen E.; Kundu, Piyali; Boers, Alexander C.; Bolarinwa, Oladimeji A.; te Pas, Mark J.; Akande, Tanimola M.; Agbede, Kayode; Gomez, Gabriella B.; Redekop, William K.; Schultsz, Constance; Tan, Siok Swan

    2014-01-01

    Background: Disease-specific costing studies can be used as input into cost-effectiveness analyses and provide important information for efficient resource allocation. However, limited data availability and limited expertise constrain such studies in low- and middle-income countries (LMICs). Objective: To describe a step-by-step guideline for conducting disease-specific costing studies in LMICs where data availability is limited and to illustrate how the guideline was applied in a costing study of cardiovascular disease prevention care in rural Nigeria. Design: The step-by-step guideline provides practical recommendations on methods and data requirements for six sequential steps: 1) definition of the study perspective, 2) characterization of the unit of analysis, 3) identification of cost items, 4) measurement of cost items, 5) valuation of cost items, and 6) uncertainty analyses. Results: We discuss the necessary tradeoffs between the accuracy of estimates and data availability constraints at each step and illustrate how a mixed methodology of accurate bottom-up micro-costing and more feasible approaches can be used to make optimal use of all available data. An illustrative example from Nigeria is provided. Conclusions: An innovative, user-friendly guideline for disease-specific costing in LMICs is presented, using a mixed methodology to account for limited data availability. The illustrative example showed that the step-by-step guideline can be used by healthcare professionals in LMICs to conduct feasible and accurate disease-specific cost analyses. PMID:24685170

  2. Instrumentation for studying binder burnout in an immobilized plutonium ceramic wasteform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, M; Pugh, D; Herman, C

    The Plutonium Immobilization Program produces a ceramic wasteform that utilizes organic binders. Several techniques and instruments were developed to study binder burnout on full size ceramic samples in a production environment. This approach provides a method for developing process parameters on production scale to optimize throughput, product quality, offgas behavior, and plant emissions. These instruments allow for offgas analysis, large-scale TGA, product quality observation, and thermal modeling. Using these tools, results from lab-scale techniques such as laser dilatometry studies and traditional TGA/DTA analysis can be integrated. Often, the sintering step of a ceramification process is the limiting process step that controls the production throughput. Therefore, optimization of sintering behavior is important for overall process success. Furthermore, the capabilities of this instrumentation allow better understanding of plant emissions of key gases: volatile organic compounds (VOCs), volatile inorganics including some halide compounds, NO{sub x}, SO{sub x}, carbon dioxide, and carbon monoxide.

  3. Responsive Urban Models by Processing Sets of Heterogeneous Data

    NASA Astrophysics Data System (ADS)

    Calvano, M.; Casale, A.; Ippoliti, E.; Guadagnoli, F.

    2018-05-01

    This paper presents some steps in experimentation aimed at describing urban spaces, carried out following the series of earthquakes that affected a vast area of central Italy beginning on 24 August 2016. More specifically, these spaces pertain to historical centres of limited size and case studies that can be called "problematic" (due to complex morphological and settlement conditions, because they are difficult to access, or because they have been affected by calamitous events, etc.). The main objectives were to verify the use of sets of heterogeneous data that are already largely available, to define a workflow, and to develop procedures that would allow some of the steps to be automated as much as possible. The most general goal was to use the experimentation to define a methodology for approaching the problem, aimed at developing responsive descriptive models of the urban space, that is, morphological and computer-based models capable of being modified in relation to the constantly updated flow of input data.

  4. One-step synthesis of zero-dimensional hollow nanoporous gold nanoparticles with enhanced methanol electrooxidation performance.

    PubMed

    Pedireddy, Srikanth; Lee, Hiang Kwee; Tjiu, Weng Weei; Phang, In Yee; Tan, Hui Ru; Chua, Shu Quan; Troadec, Cedric; Ling, Xing Yi

    2014-09-17

    Nanoporous gold with networks of interconnected ligaments and highly porous structure holds stimulating technological implications in fuel cell catalysis. Current syntheses of nanoporous gold mainly revolve around de-alloying approaches that are generally limited by stringent and harsh multistep protocols. Here we develop a one-step solution phase synthesis of zero-dimensional hollow nanoporous gold nanoparticles with tunable particle size (150-1,000 nm) and ligament thickness (21-54 nm). With faster mass diffusivity, excellent specific electroactive surface area and large density of highly active surface sites, our zero-dimensional nanoporous gold nanoparticles exhibit ~1.4 times enhanced catalytic activity and improved tolerance towards carbonaceous species, demonstrating their superiority over conventional nanoporous gold sheets. Detailed mechanistic study also reveals the crucial heteroepitaxial growth of gold on the surface of silver chloride templates, implying that our synthetic protocol is generic and may be extended to the synthesis of other nanoporous metals via different templates.

  5. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    NASA Astrophysics Data System (ADS)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount where uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of at least two fluorescent microspheres embedded in the sample and measures the distances between them. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
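
    The essence of bead-based registration is that matched microsphere centroids in two overlapping tiles determine the unknown stage offset. A minimal sketch for the pure-translation case (synthetic data; the names, bead counts, and noise levels are illustrative assumptions):

```python
import numpy as np

def estimate_translation(beads_a, beads_b):
    """Least-squares translation mapping bead centroids in tile A onto their
    matches in tile B: simply the mean pairwise displacement."""
    beads_a = np.asarray(beads_a, dtype=float)
    beads_b = np.asarray(beads_b, dtype=float)
    return (beads_b - beads_a).mean(axis=0)

# two overlapping tiles see the same microspheres, offset by the (unknown)
# stage translation plus localization noise
rng = np.random.default_rng(0)
true_shift = np.array([102.3, -4.7])
beads_tile1 = rng.uniform(0, 50, size=(6, 2))
beads_tile2 = beads_tile1 + true_shift + rng.normal(0, 0.1, size=(6, 2))

shift = estimate_translation(beads_tile1, beads_tile2)
print(np.round(shift, 1))
```

    The rotation the paper also compensates would be handled by a Procrustes/Kabsch-style fit on the same centroid pairs rather than a plain mean displacement.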

  6. Small RNA Library Preparation Method for Next-Generation Sequencing Using Chemical Modifications to Prevent Adapter Dimer Formation.

    PubMed

    Shore, Sabrina; Henderson, Jordana M; Lebedev, Alexandre; Salcedo, Michelle P; Zon, Gerald; McCaffrey, Anton P; Paul, Natasha; Hogrefe, Richard I

    2016-01-01

    For most sample types, the automation of RNA and DNA sample preparation workflows enables high throughput next-generation sequencing (NGS) library preparation. Greater adoption of small RNA (sRNA) sequencing has been hindered by high sample input requirements and inherent ligation side products formed during library preparation. These side products, known as adapter dimer, are very similar in size to the tagged library. Most sRNA library preparation strategies thus employ a gel purification step to isolate tagged library from adapter dimer contaminants. At very low sample inputs, adapter dimer side products dominate the reaction and limit the sensitivity of this technique. Here we address the need for improved specificity of sRNA library preparation workflows with a novel library preparation approach that uses modified adapters to suppress adapter dimer formation. This workflow allows for lower sample inputs and elimination of the gel purification step, which in turn allows for an automatable sRNA library preparation protocol.

  7. One-step synthesis of zero-dimensional hollow nanoporous gold nanoparticles with enhanced methanol electrooxidation performance

    NASA Astrophysics Data System (ADS)

    Pedireddy, Srikanth; Lee, Hiang Kwee; Tjiu, Weng Weei; Phang, In Yee; Tan, Hui Ru; Chua, Shu Quan; Troadec, Cedric; Ling, Xing Yi

    2014-09-01

    Nanoporous gold with networks of interconnected ligaments and highly porous structure holds stimulating technological implications in fuel cell catalysis. Current syntheses of nanoporous gold mainly revolve around de-alloying approaches that are generally limited by stringent and harsh multistep protocols. Here we develop a one-step solution phase synthesis of zero-dimensional hollow nanoporous gold nanoparticles with tunable particle size (150-1,000 nm) and ligament thickness (21-54 nm). With faster mass diffusivity, excellent specific electroactive surface area and large density of highly active surface sites, our zero-dimensional nanoporous gold nanoparticles exhibit ~1.4 times enhanced catalytic activity and improved tolerance towards carbonaceous species, demonstrating their superiority over conventional nanoporous gold sheets. Detailed mechanistic study also reveals the crucial heteroepitaxial growth of gold on the surface of silver chloride templates, implying that our synthetic protocol is generic and may be extended to the synthesis of other nanoporous metals via different templates.

  8. A modular platform for one-step assembly of multi-component membrane systems by fusion of charged proteoliposomes

    NASA Astrophysics Data System (ADS)

    Ishmukhametov, Robert R.; Russell, Aidan N.; Berry, Richard M.

    2016-10-01

    An important goal in synthetic biology is the assembly of biomimetic cell-like structures, which combine multiple biological components in synthetic lipid vesicles. A key limiting assembly step is the incorporation of membrane proteins into the lipid bilayer of the vesicles. Here we present a simple method for delivery of membrane proteins into a lipid bilayer within 5 min. Fusogenic proteoliposomes, containing charged lipids and membrane proteins, fuse with oppositely charged bilayers, with no requirement for detergent or fusion-promoting proteins, and deliver large, fragile membrane protein complexes into the target bilayers. We demonstrate the feasibility of our method by assembling a minimal electron transport chain capable of adenosine triphosphate (ATP) synthesis, combining Escherichia coli F1Fo ATP-synthase and the primary proton pump bo3-oxidase, into synthetic lipid vesicles with sizes ranging from 100 nm to ~10 μm. This provides a platform for the combination of multiple sets of membrane protein complexes into cell-like artificial structures.

  9. Active control of impulsive noise with symmetric α-stable distribution based on an improved step-size normalized adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yali; Zhang, Qizhi; Yin, Yixin

    2015-05-01

    In this paper, active control of impulsive noise with symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on the analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm requires neither parameter selection and threshold estimation nor cost-function selection and complex gradient computation. Computer simulations suggest that the proposed algorithm is effective in attenuating SαS impulsive noise, and the proposed algorithm has also been implemented in an experimental ANC system. Experimental results show that the proposed scheme has good performance for SαS impulsive noise attenuation.
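
    The paper's exact update is not reproduced here, but the gating idea — scaling the step size by a Gaussian function of the error so that α-stable outliers barely perturb the weights — can be sketched on a plain LMS system-identification loop (no secondary path is modeled; all names and parameter values are assumptions):

```python
import numpy as np

def gaussian_weighted_lms(x, d, n_taps, mu=0.02, sigma=1.0):
    """LMS identification with a Gaussian step-size weight.

    The raw step mu is scaled by exp(-e^2 / (2*sigma^2)), so an impulsive
    error (|e| >> sigma) contributes almost nothing to the weight update.
    This is a simplified stand-in for the paper's normalized FxLMS."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # tap-delay input vector
        e = d[n] - w @ u
        w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * u
    return w

rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(20000)
d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
impulses = rng.random(len(x)) < 0.01           # sparse, large spikes
d[impulses] += 50 * rng.standard_normal(impulses.sum())

w_hat = gaussian_weighted_lms(x, d, n_taps=4)
print(np.round(w_hat, 2))
```

    With an unweighted LMS update, the amplitude-50 spikes would repeatedly knock the weights away from the solution; the Gaussian factor drives the effective step toward zero on exactly those samples.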

  10. Method and apparatus for sizing and separating warp yarns using acoustical energy

    DOEpatents

    Sheen, S.H.; Chien, H.T.; Raptis, A.C.; Kupperman, D.S.

    1998-05-19

    A slashing process is disclosed for preparing warp yarns for weaving operations including the steps of sizing and/or desizing the yarns in an acoustic resonance box and separating the yarns with a leasing apparatus comprised of a set of acoustically agitated lease rods. The sizing step includes immersing the yarns in a size solution contained in an acoustic resonance box. Acoustic transducers are positioned against the exterior of the box for generating an acoustic pressure field within the size solution. Ultrasonic waves that result from the acoustic pressure field continuously agitate the size solution to effect greater mixing and more uniform application and penetration of the size onto the yarns. The sized yarns are then separated by passing the warp yarns over and under lease rods. Electroacoustic transducers generate acoustic waves along the longitudinal axis of the lease rods, creating a shearing motion on the surface of the rods for splitting the yarns. 2 figs.

  11. Optimization of Surface Roughness and Wall Thickness in Dieless Incremental Forming Of Aluminum Sheet Using Taguchi

    NASA Astrophysics Data System (ADS)

    Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir

    2018-03-01

    Incremental sheet forming is a versatile sheet metal forming process where a sheet metal is formed into its final shape by a series of localized deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability, forming speed, and so on. This project focuses on minimising the surface roughness of the aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate, and step size. In addition, the effects of wall angle, feed rate, and step size on the surface roughness and thickness uniformity of the aluminium sheet were investigated. From the results, it was observed that surface roughness and thickness uniformity varied inversely due to the formation of surface waviness. Increasing the feed rate and decreasing the step size produced a lower surface roughness, while uniform thickness reduction was obtained by reducing the wall angle and step size. By using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of the aluminium sheet were determined. The findings of this project help to reduce the time needed to optimise the surface roughness and thickness uniformity in incremental sheet forming.
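
    Taguchi analysis ranks parameter settings by a signal-to-noise ratio; for surface roughness the smaller-is-better form applies. A short sketch with hypothetical roughness replicates (the values are invented for illustration, not the project's measurements):

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi signal-to-noise ratio for a smaller-is-better response
    (e.g. surface roughness Ra): SN = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# hypothetical Ra replicates (um) for two step-size settings
ra_small_step = [0.8, 0.9, 0.85]
ra_large_step = [1.6, 1.8, 1.7]

sn_small = sn_smaller_is_better(ra_small_step)
sn_large = sn_smaller_is_better(ra_large_step)
print(round(sn_small, 2), round(sn_large, 2))
```

    A higher S/N ratio marks the better setting, so the optimum level of each factor is the one maximizing the mean S/N across the orthogonal-array runs.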

  12. Single cardiac ventricular myosins are autonomous motors

    PubMed Central

    Wang, Yihua; Yuan, Chen-Ching; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta

    2018-01-01

    Myosin transduces ATP free energy into mechanical work in muscle. Cardiac muscle has dynamically wide-ranging power demands on the motor as the muscle changes modes in a heartbeat from relaxation, via auxotonic shortening, to isometric contraction. The cardiac power output modulation mechanism is explored in vitro by assessing single cardiac myosin step-size selection versus load. Transgenic mice express human ventricular essential light chain (ELC) in wild-type (WT), or hypertrophic cardiomyopathy-linked mutant forms, A57G or E143K, in a background of mouse α-cardiac myosin heavy chain. Ensemble motility and single myosin mechanical characteristics are consistent with an A57G that impairs ELC N-terminus actin binding and an E143K that impairs lever-arm stability, while both species down-shift average step-size with increasing load. Cardiac myosin in vivo down-shifts velocity/force ratio with increasing load by changed unitary step-size selections. Here, the loaded in vitro single myosin assay indicates quantitative complementarity with the in vivo mechanism. Both have two embedded regulatory transitions, one inhibiting ADP release and a second novel mechanism inhibiting actin detachment via strain on the actin-bound ELC N-terminus. Competing regulators filter unitary step-size selection to control force-velocity modulation without myosin integration into muscle. Cardiac myosin is muscle in a molecule. PMID:29669825

  13. Effect of reaction-step-size noise on the switching dynamics of stochastic populations

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Heller-Algazi, Metar; Assaf, Michael

    2016-05-01

    In genetic circuits, when the messenger RNA lifetime is short compared to the cell cycle, proteins are produced in geometrically distributed bursts, which greatly affects the cellular switching dynamics between different metastable phenotypic states. Motivated by this scenario, we study a general problem of switching or escape in stochastic populations, where influx of particles occurs in groups or bursts, sampled from an arbitrary distribution. The fact that the step size of the influx reaction is a priori unknown and, in general, may fluctuate in time with a given correlation time and statistics, introduces an additional nondemographic reaction-step-size noise into the system. Employing the probability-generating function technique in conjunction with Hamiltonian formulation, we are able to map the problem in the leading order onto solving a stationary Hamilton-Jacobi equation. We show that compared to the "usual case" of single-step influx, bursty influx exponentially decreases the population's mean escape time from its long-lived metastable state. In particular, close to bifurcation we find a simple analytical expression for the mean escape time which solely depends on the mean and variance of the burst-size distribution. Our results are demonstrated on several realistic distributions and compare well with numerical Monte Carlo simulations.

  14. TRUMP: Transient & Steady-State Temperature Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.C.; Turner, W.D.

    1992-03-03

    TRUMP solves a general nonlinear parabolic partial differential equation describing flow in various kinds of potential fields, such as fields of temperature, pressure, or electricity and magnetism; simultaneously, it will solve two additional equations representing, in thermal problems, heat production by decomposition of two reactants having rate constants with a general Arrhenius temperature dependence. Steady-state and transient flow in one, two, or three dimensions are considered in geometrical configurations having simple or complex shapes and structures. Problem parameters may vary with spatial position, time, or primary dependent variables, temperature, pressure, or field strength. Initial conditions may vary with spatial position, and among the criteria that may be specified for ending a problem are upper and lower limits on the size of the primary dependent variable, upper limits on the problem time or on the number of time-steps or on the computer time, and attainment of steady state.
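
    The termination criteria listed in the abstract amount to a handful of checks in the time-stepping driver. A generic sketch of such a loop (a hypothetical interface, not TRUMP's actual input conventions):

```python
import time

def run_transient(step, state, t_end=None, max_steps=None, max_cpu=None,
                  var_limits=(None, None), steady_tol=None, dt=1e-3):
    """Generic time-stepping driver with TRUMP-style stopping criteria:
    limits on the dependent variable, problem time, step count, CPU time,
    and attainment of steady state."""
    lo, hi = var_limits
    t, n, start = 0.0, 0, time.process_time()
    while True:
        new_state = step(state, dt)
        t, n = t + dt, n + 1
        change = abs(new_state - state)
        state = new_state
        if lo is not None and state < lo:
            return state, "lower variable limit"
        if hi is not None and state > hi:
            return state, "upper variable limit"
        if t_end is not None and t >= t_end:
            return state, "time limit"
        if max_steps is not None and n >= max_steps:
            return state, "step limit"
        if max_cpu is not None and time.process_time() - start > max_cpu:
            return state, "cpu limit"
        if steady_tol is not None and change < steady_tol:
            return state, "steady state"

# toy "thermal" problem: exponential relaxation toward an ambient 300 K
cool = lambda T, dt: T + dt * (300.0 - T)
T_final, reason = run_transient(cool, state=400.0, steady_tol=1e-9)
print(round(T_final, 1), reason)
```

    Here the toy problem relaxes toward its steady state, so the run ends on the steady-state criterion; tightening max_steps or the variable limits would trip the other exits first.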

  16. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can have the luxury of taking a large temporal integration step at the expense of a higher memory requirement and a larger operation count per step. In contrast, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time-marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them for the computation of a two-dimensional driven cavity flow at Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change in the dependent variables between two consecutive time steps had fallen below 10^-5.
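
    The step-size tradeoff this comparison hinges on can be illustrated on the stiff scalar model problem y' = λy: an explicit update is stability-limited to dt ≤ 2/|λ|, while an implicit update is not. This is a generic implicit-vs-explicit illustration, not either paper method:

```python
def explicit_euler(lmbda, dt, n):
    """y' = lmbda*y, forward Euler: stable only when |1 + lmbda*dt| <= 1."""
    y = 1.0
    for _ in range(n):
        y = (1.0 + lmbda * dt) * y
    return y

def implicit_euler(lmbda, dt, n):
    """Backward Euler: unconditionally stable for decaying modes (lmbda < 0)."""
    y = 1.0
    for _ in range(n):
        y = y / (1.0 - lmbda * dt)
    return y

lam = -1000.0    # a stiff decay mode
big_dt = 0.01    # 5x beyond the explicit stability limit of 2/|lam| = 0.002
print(abs(explicit_euler(lam, big_dt, 100)))   # blows up
print(abs(implicit_euler(lam, big_dt, 100)))   # decays toward 0
```

    The same logic drives the cost comparison above: the implicit artificial compressibility system buys large steps at a higher per-step price, while the cheaper fractional-step scheme must respect a tighter step-size bound.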

  17. Intermediate surface structure between step bunching and step flow in SrRuO3 thin film growth

    NASA Astrophysics Data System (ADS)

    Bertino, Giulia; Gura, Anna; Dawber, Matthew

    We performed a systematic study of SrRuO3 (SRO) thin films grown on TiO2 terminated SrTiO3 substrates using off-axis magnetron sputtering. We investigated the step bunching formation and the evolution of the SRO film morphology by varying the step size of the substrate, the growth temperature and the film thickness. The thin films were characterized using Atomic Force Microscopy and X-Ray Diffraction. We identified single and multiple step bunching and step flow growth regimes as a function of the growth parameters. Also, we clearly observe a stronger influence of the step size of the substrate on the evolution of the SRO film surface with respect to the other growth parameters. Remarkably, we observe the formation of a smooth, regular and uniform ``fish skin'' structure at the transition between one regime and another. We believe that the fish skin structure results from the merging of 2D flat islands predicted by previous models. The direct observation of this transition structure allows us to better understand how and when step bunching develops in the growth of SrRuO3 thin films.

  18. Optimal Padding for the Two-Dimensional Fast Fourier Transform

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.

    2011-01-01

    One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is divisible by a power of two. Because of this, padding grids (that are not already sized to a power of two) so that their size is the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids. For a two-dimensional grid, there are certain pad sizes that work better than others. Therefore, the need exists to generalize a strategy for determining optimal pad sizes. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second step is to transpose the resulting matrix. The third step is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that struck a balance between optimizing the grid pad size with prime factors that are small (which are optimal for one-dimensional operations), and with prime factors that are large (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times, and is not fine-tuned for any specific application. It increases the number of times that processor-requested data is found in the set-associative processor cache. Cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying sized grids. This is because various computer architectures process commands differently. The test grid was 512 × 512. Using a 540 × 540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256 × 256 grid worked best. A Core2Duo computer preferred either a 1040 × 1040 (15 percent faster) or a 1008 × 1008 (30 percent faster) grid.
There are many industries that can benefit from this algorithm, including optics, image-processing, signal-processing, and engineering applications.
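
    The small-prime-factor part of the padding strategy is easy to sketch: search upward from the original size for the next length whose factors are all small (5-smooth), rather than jumping to the next power of two. The function name is illustrative, and the cache-associativity effects the algorithm also weighs are not modeled:

```python
def next_fast_len(n, primes=(2, 3, 5)):
    """Smallest integer >= n whose prime factors are all in `primes`.

    FFT libraries decompose a length-m transform into its prime factors,
    so padding to a nearby 5-smooth length is usually much cheaper than
    padding all the way to the next power of two."""
    m = n
    while True:
        k = m
        for p in primes:
            while k % p == 0:
                k //= p
        if k == 1:          # fully factored into small primes
            return m
        m += 1

# the article's example: a 512-point row padded to 540 = 2^2 * 3^3 * 5
print(next_fast_len(513))   # 540
print(next_fast_len(512))   # 512
```

    SciPy provides a similar helper, scipy.fft.next_fast_len, for its own FFT backend.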

  19. Steps Toward an EOS-Era Aerosol Type Climatology

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph A.

    2012-01-01

    We still have a way to go to develop a global climatology of aerosol type from the EOS-era satellite data record that currently spans more than 12 years of observations. We have demonstrated the ability to retrieve aerosol type regionally, providing a classification based on the combined constraints on particle size, shape, and single-scattering albedo (SSA) from the MISR instrument. Under good but not necessarily ideal conditions, the MISR data can distinguish three-to-five size bins, two-to-four bins in SSA, and spherical vs. non-spherical particles. However, retrieval sensitivity varies enormously with scene conditions. So, for example, there is less information about aerosol type when the mid-visible aerosol optical depth (AOD) is less than about 0.15 or 0.2, or when the range of scattering angles observed is reduced by solar geometry, even though the quality of the AOD retrieval itself is much less sensitive to these factors. This presentation will review a series of studies aimed at assessing the capabilities, as well as the limitations, of MISR aerosol type retrievals involving wildfire smoke, desert dust, volcanic ash, and urban pollution, in specific cases where suborbital validation data are available. A synthesis of results, planned upgrades to the MISR Standard aerosol algorithm to improve aerosol type retrievals, and steps toward the development of an aerosol type quality flag for the Standard product, will also be covered.

  20. Bonding of TRIP-Steel/Al2O3-(3Y)-TZP Composites and (3Y)-TZP Ceramic by a Spark Plasma Sintering (SPS) Apparatus

    PubMed Central

    Miriyev, Aslan; Grützner, Steffen; Krüger, Lutz; Kalabukhov, Sergey; Frage, Nachum

    2016-01-01

    A combination of the high damage tolerance of TRIP-steel and the extremely low thermal conductivity of partially stabilized zirconia (PSZ) can provide controlled thermal-mechanical properties to sandwich-shaped composite specimens comprising these materials. Sintering the (TRIP-steel-PSZ)/PSZ sandwich in a single step is very difficult due to differences in the sintering temperature and densification kinetics of the composite and the ceramic powders. In the present study, we successfully applied a two-step approach involving separate SPS consolidation of pure (3Y)-TZP and composites containing 20 vol % TRIP-steel, 40 vol % Al2O3 and 40 vol % (3Y)-TZP ceramic phase, and subsequent diffusion joining of both sintered components in an SPS apparatus. The microstructure and properties of the sintered and bonded specimens were characterized. No defects at the interface between the TZP and the composite after joining in the 1050–1150 °C temperature range were observed. Only limited grain growth occurred during joining, while crystallite size, hardness, shear strength and the fraction of the monoclinic phase in the TZP ceramic virtually did not change. The slight increase of the TZP layer’s fracture toughness with the joining temperature was attributed to the effect of grain size on transformation toughening. PMID:28773680

  1. Evaluation of a Fluorochlorozirconate Glass-Ceramic Storage Phosphor Plate for Gamma-Ray Computed Radiography

    DOE PAGES

    Leonard, Russell L.; Gray, Sharon K.; Alvarez, Carlos J.; ...

    2015-05-21

    In this paper, a fluorochlorozirconate (FCZ) glass-ceramic containing orthorhombic barium chloride crystals doped with divalent europium was evaluated for use as a storage phosphor in gamma-ray imaging. X-ray diffraction and phosphorimetry of the glass-ceramic sample showed the presence of a significant amount of orthorhombic barium chloride crystals in the glass matrix. Transmission electron microscopy and scanning electron microscopy were used to identify crystal size, structure, and morphology. The size of the orthorhombic barium chloride crystals in the FCZ glass matrix was very large, ~0.5–0.7 μm, which can limit image resolution. The FCZ glass-ceramic sample was exposed to 1 MeV gamma rays to determine its photostimulated emission characteristics at high energies, which were found to be suitable for imaging applications. Test images were made at 2 MeV energies using gap and step wedge phantoms. Gaps as small as 101.6 μm in a 440 stainless steel phantom were imaged using the sample imaging plate. Analysis of an image created using a depleted uranium step wedge phantom showed that emission is proportional to incident energy at the sample and the estimated absorbed dose. Finally, the results showed that the sample imaging plate has potential for gamma-ray-computed radiography and dosimetry applications.

  2. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.

    PubMed

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which limits both the network size and the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size while keeping execution speed close to real time with high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and reconfigurability, our approach makes the FPGA-based system a proper candidate for studies on neural control of cognitive robots and systems as well.
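
    The step-by-step integration mentioned above can be illustrated in software with a plain forward-Euler update of the standard Hodgkin-Huxley equations. This is only a sketch of the numerical scheme, not the FPGA design: the parameter values are the classic squid-axon constants, and the fixed 0.01 ms step and 10 µA/cm² injection are illustrative assumptions.

    ```python
    import math

    def hh_simulate(i_inj=10.0, dt=0.01, t_max=50.0):
        """Forward-Euler integration of a Hodgkin-Huxley point neuron.
        Voltages in mV, time in ms, currents in uA/cm^2."""
        c_m = 1.0                                   # membrane capacitance
        g_na, g_k, g_l = 120.0, 36.0, 0.3           # max conductances
        e_na, e_k, e_l = 50.0, -77.0, -54.387       # reversal potentials

        # classic rate functions for the gating variables m, h, n
        def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
        def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
        def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

        v, m, h, n = -65.0, 0.053, 0.596, 0.318     # resting-state values
        trace = []
        for _ in range(int(t_max / dt)):
            i_na = g_na * m**3 * h * (v - e_na)
            i_k = g_k * n**4 * (v - e_k)
            i_l = g_l * (v - e_l)
            # step-by-step (Euler) updates of the membrane potential and gates
            v += dt * (i_inj - i_na - i_k - i_l) / c_m
            m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
            h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
            n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
            trace.append(v)
        return trace
    ```

    With sustained current injection the simulated membrane potential spikes repetitively; on hardware, each of these per-step arithmetic updates is what the CORDIC-based circuits would evaluate.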

  3. Micro-computed tomography characterization of tissue engineering scaffolds: effects of pixel size and rotation step.

    PubMed

    Cengiz, Ibrahim Fatih; Oliveira, Joaquim Miguel; Reis, Rui L

    2017-08-01

    Quantitative assessment of the micro-structure of materials is of key importance in many fields, including tissue engineering, biology, and dentistry. Micro-computed tomography (µ-CT) is an intensively used non-destructive technique. However, acquisition parameters such as the pixel size and rotation step may have significant effects on the obtained results. In this study, a set of tissue engineering scaffolds, including examples of natural and synthetic polymers and ceramics, was analyzed. We comprehensively compared the quantitative results of µ-CT characterization using 15 acquisition scenarios that differ in the combination of pixel size and rotation step. The results showed that the acquisition parameters can have statistically significant effects on the quantified mean porosity, mean pore size, and mean wall thickness of the scaffolds. The effects are also practically important, since the differences can be as high as 24% in mean porosity on average, and as much as 19.5 h of characterization time and 166 GB of data storage for a sample of relatively small volume. This study showed in a quantitative manner the effects of this wide range of acquisition scenarios on the final data, as well as on the characterization time and data storage per sample. A clear picture of the effects of pixel size and rotation step on the results is provided, which can be notably useful for refining the practice of µ-CT characterization of scaffolds and economizing the related resources.

  4. Homoepitaxial and Heteroepitaxial Growth on Step-Free SiC Mesas

    NASA Technical Reports Server (NTRS)

    Neudeck, Philip G.; Powell, J. Anthony

    2004-01-01

    This article describes the initial discovery and development of new approaches to SiC homoepitaxial and heteroepitaxial growth. These approaches are based upon the previously unanticipated ability to effectively suppress two-dimensional nucleation of 3C-SiC on the large basal plane terraces that form between growth steps when epitaxy is carried out on 4H- and 6H-SiC nearly on-axis substrates. After subdividing the growth surface into mesa regions, pure step-flow homoepitaxy with no terrace nucleation was then used to grow all existing surface steps off the edges of screw-dislocation-free mesas, leaving behind perfectly on-axis (0001) basal plane mesa surfaces completely free of atomic-scale steps. Step-free mesa surfaces as large as 0.4 mm × 0.4 mm were experimentally realized, with the yield and size of step-free mesas being initially limited by substrate screw dislocations. Continued epitaxial growth following step-free surface formation leads to the formation of thin lateral cantilevers that extend the step-free surface area from the top edge of the mesa sidewalls. By selecting a proper pre-growth mesa shape and crystallographic orientation, the rate of cantilever growth can be greatly enhanced in a web growth process that has been used to (1) enlarge step-free surface areas and (2) overgrow and laterally relocate micropipes and screw dislocations. A new growth process, named step-free surface heteroepitaxy, has been developed to achieve 3C-SiC films on 4H- and 6H-SiC substrate mesas completely free of double-positioning-boundary and stacking fault defects. The process is based upon the controlled terrace nucleation and lateral expansion of a single island of 3C-SiC across a step-free mesa surface. Experimental results indicate that substrate-epilayer lattice mismatch is at least partially relieved parallel to the interface without dislocations that undesirably thread through the thickness of the epilayer.
These results should enable realization of improved SiC homojunction and heterojunction devices. In addition, these experiments offer important insights into the nature of polytypism during SiC crystal growth.

  5. In situ formation deposited ZnO nanoparticles on silk fabrics under ultrasound irradiation.

    PubMed

    Khanjani, Somayeh; Morsali, Ali; Joo, Sang W

    2013-03-01

    Deposition of zinc(II) oxide (ZnO) nanoparticles on the surface of silk fabrics was achieved by sequential dipping steps in alternating baths of potassium hydroxide and zinc nitrate under ultrasound irradiation. This coating involves in situ generation and deposition of ZnO in one step. The effects of ultrasound irradiation, concentration, and the number of sequential dipping steps on the growth of the ZnO nanoparticles have been studied. The results show a decrease in particle size with increasing ultrasound power, while increasing the concentration and the number of sequential dipping steps increases the particle size. The physicochemical properties of the nanoparticles were determined by powder X-ray diffraction (XRD), scanning electron microscopy (SEM), and wavelength-dispersive X-ray spectroscopy (WDX). Copyright © 2012 Elsevier B.V. All rights reserved.

  6. A semi-flexible model prediction for the polymerization force exerted by a living F-actin filament on a fixed wall

    NASA Astrophysics Data System (ADS)

    Pierleoni, Carlo; Ciccotti, Giovanni; Ryckaert, Jean-Paul

    2015-10-01

    We consider a single living semi-flexible filament with persistence length ℓp in chemical equilibrium with a solution of free monomers at fixed monomer chemical potential μ1 and fixed temperature T. While one end of the filament is chemically active with single-monomer (de)polymerization steps, the other end is grafted normally to a rigid wall to mimic a rigid network from which the filament under consideration emerges. A second rigid wall, parallel to the grafting wall, is fixed at distance L << ℓp from the filament seed. In supercritical conditions, where the monomer density ρ1 is higher than the critical density ρ1c, the filament tends to polymerize and impinges onto the second surface which, in suitable conditions (the non-escaping filament regime), stops the filament growth. We first establish the grand potential Ω(μ1, T, L) of this system treated as an ideal reactive mixture, and derive some general properties, in particular the filament size distribution and the force exerted by the living filament on the obstacle wall. We apply this formalism to the semi-flexible, living, discrete wormlike-chain model with step size d and persistence length ℓp, hitting a hard wall. Explicit properties require the computation of the mean force f̄ᵢ(L) exerted by the wall at L, with associated potential f̄ᵢ(L) = -dWᵢ(L)/dL, on a filament of fixed size i. By original Monte Carlo calculations for a few filament lengths over a wide range of compression, we justify the use of the weak-bending universal expressions of Gholami et al. [Phys. Rev. E 74, 041803 (2006)] over the whole non-escaping filament regime. For a filament of size i with contour length Lc = (i - 1)d, this universal form rises rapidly from zero (the uncompressed state) to the buckling value f_b(Lc, ℓp) = π²·kB·T·ℓp/(4·Lc²) over a compression range much narrower than the size d of a monomer.
Employing this universal form for living filaments, we find that the average force exerted by a living filament on a wall at distance L is in practice L-independent and very close to the stalling force F_s^H = (kB·T/d)·ln(ρ̂1) predicted by Hill, an expression strictly valid in the rigid-filament limit. The average filament force results from the product of the cumulative size fraction x = x(L, ℓp, ρ̂1), for which the filament is in contact with the wall, times the buckling force on a filament of size Lc ≈ L, namely F_s^H = x·f_b(L; ℓp). The observed L-independence of F_s^H implies that x ∝ L⁻² for given (ℓp, ρ̂1) and x ∝ ln ρ̂1 for given (ℓp, L). At fixed (L, ρ̂1), one also has x ∝ ℓp⁻¹, which indicates that the rigid-filament limit ℓp → ∞ is a singular limit in which an infinite force has zero weight. Finally, we derive the physically relevant threshold for filament escape in the case of actin filaments.

  7. Kinetics of dissolution of UO2 in nitric acid solutions: A multiparametric study of the non-catalysed reaction

    NASA Astrophysics Data System (ADS)

    Cordara, T.; Szenknect, S.; Claparede, L.; Podor, R.; Mesbah, A.; Lavalette, C.; Dacheux, N.

    2017-12-01

    UO2 pellets were prepared by densification of oxides obtained from the conversion of an oxalate precursor, then characterized in order to perform a multiparametric study of their dissolution in nitric acid media. In this framework, for each sample, the densification rate, the grain size, and the specific surface area of the prepared pellets were determined prior to the final dissolution experiments. By varying the concentration of the nitric acid solution and the temperature, three different and successive steps were identified during dissolution. Under the least aggressive conditions considered, a first transient step, corresponding to the dissolution of the most reactive phases, was observed at the solid/solution interface. Then, for all the tested conditions, a steady-state step was established during which the normalised dissolution rate was found to be constant. It was followed by a third step characterized by a strong and continuous increase of the normalised dissolution rate. The duration of the steady state, also called the "induction period", was found to vary drastically as a function of the HNO3 concentration and temperature. However, independently of the conditions, this steady-state step stopped at almost the same dissolved material weight loss and dissolved uranium concentration. During the induction period, no important evolution of the topology of the solid/liquid interface was evidenced, justifying the use of the starting reactive specific surface area to evaluate the normalised dissolution rates and thus the chemical durability of the sintered pellets. From the multiparametric study of UO2 dissolution, the oxidation of U(IV) to U(VI) by nitrate ions at the solid/liquid interface constitutes the limiting step in the overall dissolution mechanism associated with this induction period.

  8. Steps in the open space planning process

    Treesearch

    Stephanie B. Kelly; Melissa M. Ryan

    1995-01-01

    This paper presents the steps involved in developing an open space plan. The steps are generic in that the methods may be applied to communities of various sizes. The intent is to provide a framework to develop an open space plan that meets Massachusetts requirements for funding of open space acquisition.

  9. Variable-mesh method of solving differential equations

    NASA Technical Reports Server (NTRS)

    Van Wyk, R.

    1969-01-01

    Multistep predictor-corrector method for numerical solution of ordinary differential equations retains high local accuracy and convergence properties. In addition, the method was developed in a form conducive to the generation of effective criteria for the selection of subsequent step sizes in step-by-step solution of differential equations.
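
    A common form of such step-size selection criteria compares two estimates of the local error and scales the next step accordingly. The sketch below uses step doubling with the explicit Euler method; the safety factor 0.9 and the error model are generic illustrative choices, not the variable-mesh method of the report:

    ```python
    import math

    def adaptive_euler(f, y0, t0, t1, tol=1e-6, h0=0.1):
        """Integrate y' = f(t, y) with explicit Euler, choosing each step
        size from a local error estimate (one full step vs. two half steps)."""
        t, y, h = t0, y0, h0
        while t < t1:
            h = min(h, t1 - t)                      # do not step past t1
            y_full = y + h * f(t, y)                # one step of size h
            y_half = y + 0.5 * h * f(t, y)          # two steps of size h/2
            y_half = y_half + 0.5 * h * f(t + 0.5 * h, y_half)
            err = abs(y_full - y_half)
            if err <= tol:                          # accept the step
                t += h
                y = 2.0 * y_half - y_full           # Richardson extrapolation
            # rescale h from the error estimate; rejected steps are retried
            h *= 0.9 * math.sqrt(tol / max(err, 1e-16))
        return y
    ```

    For y' = -y with y(0) = 1, the routine reproduces e^(-1) at t = 1 while automatically shrinking the step where the error estimate demands it.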

  10. A simple, compact, and rigid piezoelectric step motor with large step size.

    PubMed

    Wang, Qi; Lu, Qingyou

    2009-08-01

    We present a novel piezoelectric stepper motor featuring high compactness, rigidity, simplicity, and operability in any direction. Although tested at room temperature, it is believed to work at low temperatures, owing to its loose operation conditions and large step size. The motor is implemented with a piezoelectric scanner tube that is axially cut almost into two halves and clamps a hollow shaft inside at both ends via the spring parts of the shaft. Two driving voltages that deform the two halves of the piezotube one at a time in one direction, and recover simultaneously, will move the shaft in the opposite direction, and vice versa.

  12. An improved affine projection algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm (APA) is a signal-reuse algorithm with a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect the performance of the algorithm: the step-size factor and the projection length. In this paper, we propose a new variable step-size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it achieves a smaller steady-state error and faster convergence. Simulation results show that its performance is superior to that of the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm achieves very good results.
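
    As an illustration of this algorithm family (not the specific VSS rule of this paper), a basic affine projection update with an error-driven variable step size can be sketched as follows. The step-size rule mu = mu_max·||e||²/(||e||² + c) and all constants are illustrative assumptions:

    ```python
    import numpy as np

    def vss_apa_identify(x, d, taps=8, order=4, mu_max=0.5, delta=1e-3, c=1e-2):
        """Identify an FIR system from input x and desired output d using an
        affine projection algorithm whose step size shrinks as the error
        energy falls (a simple, illustrative variable step-size rule)."""
        w = np.zeros(taps)
        for k in range(taps + order - 1, len(x)):
            # data matrix of the last `order` input regressors (taps x order)
            X = np.column_stack([x[k - j - taps + 1:k - j + 1][::-1]
                                 for j in range(order)])
            dk = d[k - order + 1:k + 1][::-1]
            e = dk - X.T @ w
            mu = mu_max * (e @ e) / (e @ e + c)   # large error -> large step
            # regularized affine projection update
            w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(order), e)
        return w
    ```

    In a noiseless system-identification run the filter converges toward the true impulse response quickly while the error is large, and the step size then falls off as the misalignment shrinks.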

  13. Quadratic adaptive algorithm for solving cardiac action potential models.

    PubMed

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

    An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, a time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In a test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set greater than 0.1 ms. In contrast, our method adjusts the time step size automatically and remains stable. Overall, the proposed method is more accurate than, and as efficient as, the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
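
    The idea of choosing a time step from the first and second derivatives of the membrane potential can be sketched as follows: require the predicted change |V'·dt + ½·V''·dt²| to stay below a bound ΔV_max and solve the resulting quadratic for dt. The bound, the fallback constants, and the clamping limits below are illustrative assumptions, not the paper's coefficients:

    ```python
    import math

    def quadratic_step(dv, d2v, dv_max=0.1, dt_min=1e-4, dt_max=1.0):
        """Largest dt with |dv|*dt + 0.5*|d2v|*dt^2 <= dv_max, using
        worst-case derivative magnitudes, clamped to [dt_min, dt_max]."""
        a, b = 0.5 * abs(d2v), abs(dv)
        if a < 1e-12:                     # nearly linear: b * dt = dv_max
            dt = dv_max / b if b > 1e-12 else dt_max
        else:                             # solve a*dt^2 + b*dt - dv_max = 0
            dt = (-b + math.sqrt(b * b + 4.0 * a * dv_max)) / (2.0 * a)
        return min(max(dt, dt_min), dt_max)
    ```

    A rapidly changing potential (large |V'| or |V''|, as in the upstroke or peak region) yields a small dt, while a flat diastolic phase yields the maximum allowed step, mirroring the fine-near-peak, coarse-in-smooth-region behavior described above.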

  14. Seed mediated synthesis of highly mono-dispersed gold nanoparticles in the presence of hydroquinone

    NASA Astrophysics Data System (ADS)

    Kumar, Dhiraj; Mutreja, Isha; Sykes, Peter

    2016-09-01

    Gold nanoparticles (AuNPs) are being studied for several biomedical applications, including drug delivery, biomedical imaging, contrast agents and tumor targeting. The synthesis of nanoparticles with a narrow size distribution is critical for these applications. We report the synthesis of highly mono-dispersed AuNPs by a seed mediated approach, in the presence of tri-sodium citrate and hydroquinone (HQ). AuNPs with an average size of 18 nm were used for the synthesis of highly mono-dispersed nanocrystals with average sizes of 40 nm, 60 nm, 80 nm and ˜100 nm; but the protocol is not limited to these sizes. The colloidal gold was subjected to UV-vis absorbance spectroscopy, showing a red shift in the lambda max wavelength, with peaks at 518.47 nm, 526.37 nm, 535.73 nm, 546.03 nm and 556.50 nm for the AuNP seeds (18 nm), 40 nm, 60 nm, 80 nm and ˜100 nm particles respectively. The analysis was consistent with dynamic light scattering and electron microscopy. Hydrodynamic diameters measured were 17.6 nm, 40.8 nm, 59.8 nm, 74.1 nm, and 91.4 nm (size by dynamic light scattering—volume %), with an average polydispersity index value of 0.088, suggesting mono-dispersity in the size distribution, which was also confirmed by transmission electron microscopy analysis. The advantage of a seed mediated approach is a multi-step growth of nanoparticle size that enables us to control the number of nanoparticles in the suspension, for sizes ranging from 24.5 nm to 95.8 nm. In addition, the HQ-based synthesis of colloidal nanocrystals allowed control of the particle size and size distribution by tailoring either the number of seeds, the amount of gold precursor or the reducing agent (HQ) in the final reaction mixture.

  15. Composite grain size sensitive and grain size insensitive creep of bischofite, carnallite and mixed bischofite-carnallite-halite salt rock

    NASA Astrophysics Data System (ADS)

    Muhammad, Nawaz; de Bresser, Hans; Peach, Colin; Spiers, Chris

    2016-04-01

    Deformation experiments have been conducted on rock samples of the valuable magnesium and potassium salts bischofite and carnallite, and on mixed bischofite-carnallite-halite rocks. The samples were machined from a natural core from the northern part of the Netherlands. The main aim was to produce constitutive flow laws that can be applied at the in situ conditions that hold in the undissolved wall rock of caverns resulting from solution mining. The experiments were triaxial compression tests carried out at true in situ conditions of 70 °C temperature and 40 MPa confining pressure. A typical experiment consisted of a few steps at constant strain rate, in the range 10⁻⁵ to 10⁻⁸ s⁻¹, interrupted by periods of stress relaxation. During the constant strain rate part of the test, the sample was deformed until a steady (or near-steady) state of stress was reached. This usually required about 2-4% of shortening. Then the piston was arrested and the stress on the sample was allowed to relax until the diminishing force on the sample reached the limits of the load cell resolution, usually at a strain rate on the order of 10⁻⁹ s⁻¹. The duration of each relaxation step was a few days. Carnallite was found to be 4-5 times stronger than bischofite. The bischofite-carnallite-halite mixtures, in turn, were stronger than carnallite, and hence substantially stronger than pure bischofite. For bischofite as well as carnallite, we observed that during stress relaxation the stress exponent n of a conventional power law changed from ~5 at a strain rate of 10⁻⁵ s⁻¹ to ~1 at 10⁻⁹ s⁻¹. The absolute strength of both materials remained higher if relaxation started at a higher stress, i.e., at a faster strain rate. We interpret this as indicating a difference in microstructure at the initiation of the relaxation, notably a smaller grain size related to dynamic recrystallization during the constant strain rate step.
The data thus suggest that there is a gradual change in deformation mechanism with decreasing strain rate for both bischofite and carnallite, from grain size insensitive (GSI) dislocation creep at the higher strain rates to grain size sensitive (GSS, i.e., pressure solution) creep at slow strain rates. This supports a composite GSI-GSS form for the constitutive laws describing the creep of these salt materials.

  16. Stability with large step sizes for multistep discretizations of stiff ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Majda, George

    1986-01-01

    One-leg and multistep discretizations of variable-coefficient linear systems of ODEs having both slow and fast time scales are investigated analytically. The stability properties of these discretizations are obtained independent of ODE stiffness and compared. The results of numerical computations are presented in tables, and it is shown that for large step sizes the stability of one-leg methods is better than that of the corresponding linear multistep methods.
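
    The stiff-stability issue underlying this comparison can be seen in a minimal experiment: on the stiff test equation y' = -λy, an explicit discretization explodes for step sizes far outside its stability region, while an A-stable implicit discretization decays like the true solution. The one-step methods and constants below are generic illustrations, not the one-leg/multistep pairs analyzed in the report:

    ```python
    def stiff_demo(lam=1000.0, h=0.01, steps=20):
        """Integrate y' = -lam*y, y(0) = 1 with step size h using explicit
        Euler and backward Euler; returns the two final values."""
        y_exp, y_imp = 1.0, 1.0
        for _ in range(steps):
            y_exp = y_exp * (1.0 - h * lam)   # explicit: amplification -9
            y_imp = y_imp / (1.0 + h * lam)   # implicit: amplification 1/11
        return y_exp, y_imp
    ```

    With h·λ = 10, the explicit amplification factor has magnitude 9 per step, so the explicit solution blows up, whereas the implicit factor 1/11 keeps the solution decaying at the same large step size.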

  17. AN ATTEMPT TO FIND AN A PRIORI MEASURE OF STEP SIZE. COMPARATIVE STUDIES OF PRINCIPLES FOR PROGRAMMING MATHEMATICS IN AUTOMATED INSTRUCTION, TECHNICAL REPORT NO. 13.

    ERIC Educational Resources Information Center

    ROSEN, ELLEN F.; STOLUROW, LAWRENCE M.

    In order to find a good predictor of empirical difficulty, an operational definition of step size, ten programmer-judges rated change in complexity in two versions of a mathematics program, and these ratings were then compared with measures of empirical difficulty obtained from student response data. The two versions, a 54-frame booklet and a 35…

  18. Predict the fatigue life of crack based on extended finite element method and SVR

    NASA Astrophysics Data System (ADS)

    Song, Weizhen; Jiang, Zhansi; Jiang, Hui

    2018-05-01

    The extended finite element method (XFEM) and support vector regression (SVR) are used to predict the fatigue life of a plate crack. First, the XFEM is employed to calculate the stress intensity factors (SIFs) for given crack sizes. Then a prediction model can be built based on the functional relationship of the SIFs with the fatigue life or crack length. Finally, the prediction model is used to predict the SIFs at different crack sizes or different cycle counts. Because the accuracy of the forward Euler method is ensured only by a small step size, a new prediction method is presented to resolve this issue. Numerical examples demonstrate that the proposed method allows a larger step size while maintaining high accuracy.
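
    The role of the Euler step size in fatigue-life prediction can be illustrated with a plain Paris-law crack-growth integration. This is a generic textbook sketch, not the paper's XFEM/SVR method; the material constants, geometry factor, and crack sizes are made-up illustrative values:

    ```python
    import math

    def paris_life_euler(a0=1e-3, af=1e-2, c=1e-12, m=3.0,
                         dsigma=100.0, y=1.0, dn=500.0):
        """Cycles to grow a crack from a0 to af by forward-Euler integration
        of the Paris law da/dN = C * (dK)^m, with dK = Y * dsigma * sqrt(pi*a)
        (units: m, MPa, MPa*sqrt(m))."""
        a, n = a0, 0.0
        while a < af:
            dk = y * dsigma * math.sqrt(math.pi * a)
            a += dn * c * dk ** m          # Euler step of dn cycles
            n += dn
        return n

    def paris_life_exact(a0=1e-3, af=1e-2, c=1e-12, m=3.0,
                         dsigma=100.0, y=1.0):
        """Closed-form cycle count for m != 2, for comparison."""
        k = c * (y * dsigma * math.sqrt(math.pi)) ** m
        p = 1.0 - m / 2.0
        return (af ** p - a0 ** p) / (k * p)
    ```

    Because the crack-growth rate varies slowly per cycle, even fairly coarse Euler steps stay close to the closed-form life here; the accuracy-versus-step-size trade-off is exactly what a surrogate model such as SVR is meant to relax.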

  19. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to a family of adaptive filter algorithms. The method uses prior knowledge of the statistics of the channel impulse response. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In the VSS-SR-APA, an optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.

  20. Finite Memory Walk and Its Application to Small-World Network

    NASA Astrophysics Data System (ADS)

    Oshima, Hiraku; Odagaki, Takashi

    2012-07-01

    In order to investigate the effects of cycles on dynamical processes on both regular lattices and complex networks, we introduce a finite memory walk (FMW) as an extension of the simple random walk (SRW), in which a walker is prohibited from moving to sites visited during the m steps just before the current position. This walk interpolates between the simple random walk (SRW), which has no memory (m = 0), and the self-avoiding walk (SAW), which has an infinite memory (m = ∞). We investigate the FMW on regular lattices and clarify the fundamental characteristics of the walk. We find that (1) the mean-square displacement (MSD) of the FMW shows a crossover from the SAW at short time steps to the SRW at long time steps, the crossover time is approximately equivalent to the number of steps remembered, and the MSD can be rescaled in terms of the time step and the size of the memory; (2) the mean first-return time (MFRT) of the FMW changes significantly at the number of remembered steps that corresponds to the size of the smallest cycle in the regular lattice, where "smallest" indicates that the size of the cycle is the smallest in the network; (3) the relaxation time of the first-return time distribution (FRTD) decreases as the number of cycles increases. We also investigate the FMW on Watts-Strogatz networks, which can generate small-world networks, and show that the clustering coefficient of the Watts-Strogatz network is strongly related to the MFRT of the FMW that can remember two steps.
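
    The walk defined above is straightforward to simulate. The sketch below implements an FMW on the two-dimensional square lattice (the lattice choice and the trapped-walker handling are illustrative assumptions; for m ≤ 3 on this lattice a walker can never be fully blocked):

    ```python
    import random

    def finite_memory_walk(m, steps, seed=1):
        """2-D finite memory walk: the walker never moves onto any of the
        sites visited during the last m steps. m = 0 reduces to the SRW."""
        rng = random.Random(seed)
        pos = (0, 0)
        path = [pos]
        for _ in range(steps):
            x, y = pos
            # sites visited during the m steps before the current position
            recent = set(path[-(m + 1):-1]) if m > 0 else set()
            moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            allowed = [p for p in moves if p not in recent]
            if not allowed:            # trapped (possible only for larger m)
                break
            pos = rng.choice(allowed)
            path.append(pos)
        return path
    ```

    From such trajectories one can estimate the mean-square displacement and first-return times as functions of m; with m = 2 the walk already avoids the smallest backtracking loops of the square lattice.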

  1. Porous polystyrene beads as carriers for self-emulsifying system containing loratadine.

    PubMed

    Patil, Pradeep; Paradkar, Anant

    2006-03-01

    The aim of this study was to formulate a self-emulsifying system (SES) containing a lipophilic drug, loratadine, and to explore the potential of preformed porous polystyrene beads (PPB) to act as carriers for such an SES. An isotropic SES was formulated, which comprised Captex 200 (63% wt/wt), Cremophore EL (16% wt/wt), Capmul MCM (16% wt/wt), and loratadine (5% wt/wt). The SES was evaluated for droplet size, drug content, and in vitro drug release. The SES was loaded into preformed and characterized PPB using a solvent evaporation method. SES-loaded PPB were evaluated using scanning electron microscopy (SEM) and for density, specific surface area (S(BET)), loading efficiency, drug content, and in vitro drug release. After SES loading, the specific surface area was reduced drastically, indicating filling of the PPB micropores with SES. Loading efficiency was lowest for the small size (SS) fraction and comparable for the medium size (MS) and large size (LS) PPB fractions. In vitro drug release was rapid in the case of SS beads due to the presence of SES near the surface. The LS fraction showed inadequate drug release owing to the presence of deeper micropores that resisted outward diffusion of entrapped SES. Leaching of SES from the micropores was the rate-limiting step for drug release. Geometrical features such as bead size and pore architecture of the PPB were found to govern the loading efficiency and in vitro drug release from SES-loaded PPB.

  3. Selective nickel-catalyzed conversion of model and lignin-derived phenolic compounds to cyclohexanone-based polymer building blocks.

    PubMed

    Schutyser, Wouter; Van den Bosch, Sander; Dijkmans, Jan; Turner, Stuart; Meledina, Maria; Van Tendeloo, Gustaaf; Debecker, Damien P; Sels, Bert F

    2015-05-22

    Valorization of lignin is essential for the economics of future lignocellulosic biorefineries. Lignin is converted into novel polymer building blocks through four steps: catalytic hydroprocessing of softwood to form 4-alkylguaiacols, their conversion into 4-alkylcyclohexanols, followed by dehydrogenation to form cyclohexanones, and Baeyer-Villiger oxidation to give caprolactones. The formation of alkylated cyclohexanols is one of the most difficult steps in the series. A liquid-phase process in the presence of nickel on CeO2 or ZrO2 catalysts is demonstrated herein to give the highest cyclohexanol yields. The catalytic reaction with 4-alkylguaiacols follows two parallel pathways with comparable rates: 1) ring hydrogenation with the formation of the corresponding alkylated 2-methoxycyclohexanol, and 2) demethoxylation to form 4-alkylphenol. Although the subsequent phenol-to-cyclohexanol conversion is fast, removal of the methoxy group from 2-methoxycyclohexanol is slow. Overall, this last reaction is the rate-limiting step and requires a sufficiently high temperature (>250 °C) to overcome the energy barrier. The effects of substrate reactivity (with respect to the type of alkyl chain) and of catalyst properties (nickel loading and nickel particle size) on the reaction rates are reported in detail for the Ni/CeO2 catalyst. The best Ni/CeO2 catalyst reaches 4-alkylcyclohexanol yields of over 80%, is able to convert real softwood-derived guaiacol mixtures, and can be reused in subsequent experiments. A proof of principle of the projected cascade conversion of lignocellulose feedstock entirely into caprolactone is demonstrated by using Cu/ZrO2 for the dehydrogenation step to produce the corresponding cyclohexanones (≈80%) and tin-containing beta zeolite to form 4-alkyl-ε-caprolactones in high yields via a Baeyer-Villiger-type oxidation with H2O2. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Impact of voxel size variation on CBCT-based diagnostic outcome in dentistry: a systematic review.

    PubMed

    Spin-Neto, Rubens; Gotfredsen, Erik; Wenzel, Ann

    2013-08-01

    The objective of this study was to conduct a systematic review of the impact of voxel size in cone beam computed tomography (CBCT)-based image acquisition, retrieving evidence regarding the diagnostic outcome of those images. The MEDLINE bibliographic database was searched from 1950 to June 2012 for reports comparing diverse CBCT voxel sizes. The search was limited to English-language publications using the following combined terms in the search strategy: (voxel or FOV or field of view or resolution) and (CBCT or cone beam CT). The review identified 20 publications that qualitatively or quantitatively assessed the influence of voxel size on CBCT-based diagnostic outcome and in which the methodology/results comprised at least one of the expected parameters (image acquisition, reconstruction protocols, type of diagnostic task, and presence of a gold standard). The diagnostic tasks assessed in the studies were diverse, including the detection of root fractures, the detection of caries lesions, and the accuracy of 3D surface reconstruction and of bony measurements, among others. From the studies assessed, it is clear that no general protocol can yet be defined for CBCT examination of specific diagnostic tasks in dentistry. Developing such a rationale is an important step toward defining the utility of CBCT imaging.

  5. Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.

    PubMed

    Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué

    2018-02-15

    We present a novel method for characterizing in near real-time the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Owing to the simplicity of its principle of operation, this method has the potential to circumvent biases of current real-time particle analyzers (e.g., time-of-flight analysis) while offering a cost-effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we conducted benchmark experiments in which aerodynamic particle size distributions were obtained from several commercially available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e., MMAD and GSD) that are closely in line with those obtained from time-of-flight analysis and cascade impactors, suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Improved scaling of temperature-accelerated dynamics using localization

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Amar, Jacques G.

    2016-07-01

    While temperature-accelerated dynamics (TAD) is a powerful method for carrying out non-equilibrium simulations of systems over extended time scales, the computational cost of serial TAD increases approximately as N3 where N is the number of atoms. In addition, although a parallel TAD method based on domain decomposition [Y. Shim et al., Phys. Rev. B 76, 205439 (2007)] has been shown to provide significantly improved scaling, the dynamics in such an approach is only approximate while the size of activated events is limited by the spatial decomposition size. Accordingly, it is of interest to develop methods to improve the scaling of serial TAD. As a first step in understanding the factors which determine the scaling behavior, we first present results for the overall scaling of serial TAD and its components, which were obtained from simulations of Ag/Ag(100) growth and Ag/Ag(100) annealing, and compare with theoretical predictions. We then discuss two methods based on localization which may be used to address two of the primary "bottlenecks" to the scaling of serial TAD with system size. By implementing both of these methods, we find that for intermediate system-sizes, the scaling is improved by almost a factor of N1/2. Some additional possible methods to improve the scaling of TAD are also discussed.

  8. A Versatile Methodology Using Sol-Gel, Supercritical Extraction, and Etching to Fabricate a Nitramine Explosive: Nanometer HNIW

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Song, Xiaolan; Song, Dan; Jiang, Wei; Liu, Hongying; Li, Fengsheng

    2013-01-01

    A combinative method with three steps was developed to fabricate HNIW (2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane) nanoexplosives with the gas anti-solvent (GAS) method, improved by introducing a gel frame to limit the overgrowth of recrystallized particles and an acid assistant to remove the used frame. Forming the mixed gel, by locking the explosive solution into a wet gel whose volume was divided by the networks, was the key to the fabrication. As demonstrated by scanning electron microscopy (SEM) analysis, a log-normal size distribution of nano-HNIW indicated that about 74.4% of the particles had sizes <120 nm and the maximum particle size was ∼300 nm. Energy-dispersive X-ray spectroscopy (EDS) and infrared (IR) characterizations showed that the aerogel embedded with nanoexplosive particles was dissolved in hydrochloric acid solution, and the raw ɛ-HNIW was mostly transformed into the α phase (nano-HNIW) during recrystallization. Nano-HNIW exhibited impact and friction sensitivity almost equal to those of raw HNIW, within experimental error. Thermal analysis showed that the decomposition peak temperature decreased by more than 10°C and that the heat release increased by 42.5% when the particle size of HNIW was at the nanometer scale.

  9. A sandpile model of grain blocking and consequences for sediment dynamics in step-pool streams

    NASA Astrophysics Data System (ADS)

    Molnar, P.

    2012-04-01

    Coarse grains (cobbles to boulders) are set in motion in steep mountain streams by floods with sufficient energy to erode the particles locally and transport them downstream. During transport, grains are often blocked and form width-spanning structures called steps, separated by pools. The step-pool system is a transient, self-organizing and self-sustaining structure. The temporary storage of sediment in steps and the release of that sediment in avalanche-like pulses when steps collapse lead to a complex nonlinear threshold-driven dynamics in sediment transport, which has been observed in laboratory experiments (e.g., Zimmermann et al., 2010) and in the field (e.g., Turowski et al., 2011). The basic question in this paper is whether the emergent statistical properties of sediment transport in step-pool systems may be linked to the transient state of the bed, i.e. sediment storage and morphology, and to the dynamics of sediment input. The hypothesis is that this state, in which sediment-transporting events due to the collapse and rebuilding of steps of all sizes occur, is analogous to a critical state in self-organized open dissipative dynamical systems (Bak et al., 1988). To explore the process of self-organization, a cellular automaton sandpile model is used to simulate the processes of grain blocking and hydraulically driven step collapse in a 1-d channel. Particles are injected at the top of the channel and are allowed to travel downstream based on various local threshold rules, with the travel distance drawn from a chosen probability distribution. In sandpile modelling this is a simple 1-d limited non-local model; however, it has been shown to have nontrivial dynamical behaviour (Kadanoff et al., 1989), and it captures the essence of stochastic sediment transport in step-pool systems. 
The numerical simulations are used to illustrate the differences between input and output sediment transport rates, mainly focusing on the magnification of intermittency and variability in the system response by the processes of grain blocking and step collapse. The temporal correlation in input and output rates and the number of grains stored in the system at any given time are quantified by spectral analysis and statistics of long-range dependence. Although the model is only conceptually conceived to represent the real processes of step formation and collapse, connections will be made between the modelling results and some field and laboratory data on step-pool systems. The main focus of the discussion will be to demonstrate how, even in such a simple model, the processes of grain blocking and step collapse may impact the sediment transport rates to the point that certain changes in input are no longer visible, along the lines of "shredding the signals" proposed by Jerolmack and Paola (2010). The consequences are that the notions of stability and equilibrium, the attribution of cause and effect, and the timescales of process and form in step-pool systems, and perhaps in many other fluvial systems, may have very limited applicability.
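The grain-blocking sandpile described above can be caricatured in a few lines. This is a hedged toy sketch, not the author's model: the geometric travel distance, the threshold h_crit, and the collapse rule are assumptions chosen only to illustrate threshold-driven storage and pulsed release:

```python
import random

def sandpile_steppool(n_cells=50, n_grains=2000, h_crit=3, p_move=0.5, seed=1):
    """Toy 1-d limited non-local sandpile: each injected grain travels a
    geometric random distance downstream and sticks (grain blocking).
    When a cell's height exceeds h_crit, the 'step' collapses and the
    excess is passed downstream (avalanche).  Returns the per-grain
    export series at the downstream end of the channel."""
    rng = random.Random(seed)
    h = [0] * n_cells
    out = []
    for _ in range(n_grains):
        # injection + transport: travel distance ~ Geometric(p_move)
        x = 0
        while rng.random() < p_move and x < n_cells - 1:
            x += 1
        h[x] += 1
        exported = 0
        # avalanche sweep: collapse every over-threshold cell, pushing
        # its excess one cell downstream; pushes chain left to right
        for i in range(n_cells):
            if h[i] > h_crit:
                excess = h[i] - h_crit
                h[i] = h_crit
                if i + 1 < n_cells:
                    h[i + 1] += excess
                else:
                    exported += excess   # grains leave the channel
        out.append(exported)
    return out
```

Even though grains are injected one at a time, the export series is intermittent: long quiet stretches punctuated by multi-grain pulses when steps collapse, which is the qualitative behavior the abstract discusses.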

  10. 36 CFR § 1004.11 - Load, weight and size limits.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... designate more restrictive limits when appropriate for traffic safety or protection of the road surface. The... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Load, weight and size limits... TRAFFIC SAFETY § 1004.11 Load, weight and size limits. (a) Vehicle load, weight and size limits...

  11. Spectroscopic classification of icy satellites of Saturn II: Identification of terrain units on Rhea

    NASA Astrophysics Data System (ADS)

    Scipioni, F.; Tosi, F.; Stephan, K.; Filacchione, G.; Ciarniello, M.; Capaccioni, F.; Cerroni, P.

    2014-05-01

    Rhea is the second largest icy satellite of Saturn and is mainly composed of water ice. Its surface is characterized by a leading hemisphere slightly brighter than the trailing side. The main goal of this work is to identify homogeneous compositional units on Rhea by applying the Spectral Angle Mapper (SAM) classification technique to Rhea's hyperspectral images acquired by the Visual and Infrared Mapping Spectrometer (VIMS) onboard the Cassini Orbiter in the infrared range (0.88-5.12 μm). The first step of the classification is dedicated to the identification of Rhea's spectral endmembers by applying the k-means unsupervised clustering technique to four hyperspectral images representative of a limited portion of the surface, imaged at relatively high spatial resolution. We thereby identified eight spectral endmembers, corresponding to eight terrain units, which are distinguished mostly by water ice abundance and ice grain size. In the second step, the endmembers are used as reference spectra in the SAM classification method to achieve a comprehensive classification of the entire surface. From our analysis of the infrared spectra returned by VIMS, it clearly emerges that Rhea's surface units show differences in water ice band depths, average ice grain size, and concentration of contaminants, particularly CO2 and hydrocarbons. The spectral units that classify optically dark terrains are those showing suppressed water ice bands, a finer ice grain size, and a higher concentration of carbon dioxide. Conversely, spectral units labeling brighter regions have deeper water ice absorption bands, higher albedo, and a smaller concentration of contaminants. All these variations reflect the surface's morphological and geological structures. Finally, we performed a comparison between Rhea and Dione to highlight the different magnitudes of space weathering effects in the icy satellites as a function of the distance from Saturn.
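The SAM step above reduces to computing the angle between each pixel spectrum and each endmember spectrum and assigning the pixel to the closest endmember. A minimal sketch (function names and endmember labels are illustrative, not from the paper):

```python
import math

def spectral_angle(s, r):
    """Spectral Angle Mapper metric: angle (radians) between a pixel
    spectrum s and a reference endmember spectrum r."""
    dot = sum(a * b for a, b in zip(s, r))
    ns = math.sqrt(sum(a * a for a in s))
    nr = math.sqrt(sum(b * b for b in r))
    # clamp against floating-point overshoot before acos
    return math.acos(max(-1.0, min(1.0, dot / (ns * nr))))

def classify(pixel, endmembers):
    """Assign the pixel to the endmember with the smallest spectral angle."""
    return min(endmembers, key=lambda name: spectral_angle(pixel, endmembers[name]))
```

Because the angle ignores overall vector length, SAM is insensitive to illumination-driven brightness differences, which is why it suits mapping albedo-varied terrains.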

  12. Evaluation of flaws in carbon steel piping. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahoor, A.; Gamble, R.M.; Mehta, H.S.

    1986-10-01

    The objective of this program was to develop flaw evaluation procedures and allowable flaw sizes for ferritic piping used in light water reactor (LWR) power generation facilities. The program results provide relevant ASME Code groups with the information necessary to define flaw evaluation procedures, allowable flaw sizes, and their associated bases for Section XI of the code. Because there are several possible flaw-related failure modes for ferritic piping over the LWR operating temperature range, three analysis methods were employed to develop the evaluation procedures. These include limit load analysis for plastic collapse, elastic plastic fracture mechanics (EPFM) analysis for ductile tearing, and linear elastic fracture mechanics (LEFM) analysis for non ductile crack extension. To ensure the appropriate analysis method is used in an evaluation, a step by step procedure also is provided to identify the relevant acceptance standard or procedure on a case by case basis. The tensile strength and toughness properties required to complete the flaw evaluation for any of the three analysis methods are included in the evaluation procedure. The flaw evaluation standards are provided in tabular form for the plastic collapse and ductile tearing modes, where the allowable part through flaw depth is defined as a function of load and flaw length. For non ductile crack extension, linear elastic fracture mechanics analysis methods, similar to those in Appendix A of Section XI, are defined. Evaluation flaw sizes and procedures are developed for both longitudinal and circumferential flaw orientations and normal/upset and emergency/faulted operating conditions. The tables are based on margins on load of 2.77 and 1.39 for circumferential flaws and 3.0 and 1.5 for longitudinal flaws for normal/upset and emergency/faulted conditions, respectively.

  13. Ocean regional circulation model sensitivity to the resolution of the lateral boundary conditions

    NASA Astrophysics Data System (ADS)

    Pham, Van Sy; Hwang, Jin Hwan

    2017-04-01

    Dynamical downscaling with nested regional oceanographic models is an effective approach for operational coastal forecasting and for long-term climate projection on the ocean. Nesting procedures introduce unwanted errors into dynamical downscaling owing to differences in numerical grid size and updating step. Such unavoidable errors restrict the application of Ocean Regional Circulation Models (ORCMs) in both short-term forecasts and long-term projections. The current work identifies the effects of errors induced by computational limitations during nesting procedures on the downscaled results of the ORCMs. The errors are quantitatively evaluated, source by source, using the Big-Brother Experiment (BBE). The BBE separates the identified errors from each other and quantitatively assesses the uncertainties, employing the same model for both the nesting and the nested simulation. Here, we focus on errors resulting from the two main aspects of nesting procedures: the difference in spatial grids and the temporal updating step. After running the diverse cases of the BBE separately, a Taylor diagram was adopted to analyze the results and to suggest an optimum in terms of grid size, updating period, and domain size. Key words: lateral boundary condition, error, ocean regional circulation model, Big-Brother Experiment. Acknowledgement: This research was supported by grants from the Korean Ministry of Oceans and Fisheries entitled "Development of integrated estuarine management system" and a National Research Foundation of Korea (NRF) Grant (No. 2015R1A5A 7037372) funded by MSIP of Korea. The authors thank the Integrated Research Institute of Construction and Environmental Engineering of Seoul National University for administrative support.

  14. Making High-Pass Filters For Submillimeter Waves

    NASA Technical Reports Server (NTRS)

    Siegel, Peter H.; Lichtenberger, John A.

    1991-01-01

    Micromachining-and-electroforming process makes rigid metal meshes with cells ranging in size from 0.002 in. to 0.05 in. square. Series of steps involving cutting, grinding, vapor deposition, and electroforming creates self-supporting, electrically thick mesh. Width of holes typically 1.2 times cutoff wavelength of dominant waveguide mode in hole. To obtain sharp frequency-cutoff characteristic, thickness of mesh made greater than one-half of guide wavelength of mode in hole. Meshes used as high-pass filters (dichroic plates) for submillimeter electromagnetic waves. Process not limited to square silicon wafers. Round wafers also used, with slightly more complication in grinding periphery. Grid in any pattern produced in electroforming mandrel. Any platable metal or alloy used for mesh.

  15. Tip-enhanced Raman scattering of DNA aptamers for Listeria monocytogenes.

    PubMed

    He, Siyu; Li, Hongyuan; Gomes, Carmen L; Voronine, Dmitri V

    2018-05-03

    Optical detection and conformational mapping of aptamers are important for improving medical and biosensing technologies and for better understanding of biological processes at the molecular level. The authors investigate the vibrational signals of deoxyribonucleic acid aptamers specific to Listeria monocytogenes immobilized on gold substrates using tip-enhanced Raman scattering (TERS) spectroscopy and nanoscale imaging. The authors compare topographic and nano-optical signals and investigate the fluctuations of the position-dependent TERS spectra. They perform spatial TERS mapping with a 3 nm step size and discuss the limitations of the resulting spatial resolution under ambient conditions. TERS mapping provides information about the chemical composition and conformation of aptamers and paves the way to future label-free biosensing.

  16. Formal Solutions for Polarized Radiative Transfer. III. Stiffness and Instability

    NASA Astrophysics Data System (ADS)

    Janett, Gioele; Paganini, Alberto

    2018-04-01

    Efficient numerical approximation of the polarized radiative transfer equation is challenging because this system of ordinary differential equations exhibits stiff behavior, which potentially results in numerical instability. This negatively impacts the accuracy of formal solvers, and small step-sizes are often necessary to retrieve physical solutions. This work presents stability analyses of formal solvers for the radiative transfer equation of polarized light, identifies instability issues, and suggests practical remedies. In particular, the assumptions and the limitations of the stability analysis of Runge–Kutta methods play a crucial role. On this basis, a suitable and pragmatic formal solver is outlined and tested. An insightful comparison to the scalar radiative transfer equation is also presented.
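The step-size limit that stiffness imposes on explicit formal solvers can be seen on the scalar test equation y' = -λy (a standard illustration, not the polarized transfer equation itself): explicit Euler is stable only when h·λ < 2, while backward Euler is A-stable and has no such limit.

```python
def explicit_euler(lam, h, n):
    """Explicit Euler for y' = -lam*y, y(0) = 1.
    Update: y_{k+1} = (1 - h*lam) * y_k, stable only if h*lam < 2."""
    y = 1.0
    for _ in range(n):
        y += h * (-lam * y)
    return y

def implicit_euler(lam, h, n):
    """Implicit (backward) Euler: y_{k+1} = y_k / (1 + h*lam).
    A-stable, so stiffness imposes no step-size limit."""
    y = 1.0
    for _ in range(n):
        y /= 1.0 + h * lam
    return y
```

With λ = 1000 and h = 0.01 (so h·λ = 10), the explicit iterate grows by a factor |1 - 10| = 9 per step and diverges, while the implicit iterate decays monotonically toward the true solution; this is the instability the abstract's stability analysis is designed to diagnose.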

  17. Cytoskeletal motor-driven active self-assembly in in vitro systems

    DOE PAGES

    Lam, A. T.; VanDelinder, V.; Kabir, A. M. R.; ...

    2015-11-11

    Molecular motor-driven self-assembly has been an active area of soft matter research for the past decade. Because molecular motors transform chemical energy into mechanical work, systems which employ molecular motors to drive self-assembly processes are able to overcome kinetic and thermodynamic limits on assembly time, size, complexity, and structure. Here, we review the progress in elucidating and demonstrating the rules and capabilities of motor-driven active self-assembly. Lastly, we focus on the types of structures created and the degree of control realized over these structures, and discuss the next steps necessary to achieve the full potential of this assembly mode which complements robotic manipulation and passive self-assembly.

  18. Extended asymmetric-cut multilayer X-ray gratings.

    PubMed

    Prasciolu, Mauro; Haase, Anton; Scholze, Frank; Chapman, Henry N; Bajt, Saša

    2015-06-15

    The fabrication and characterization of a large-area high-dispersion blazed grating for soft X-rays based on an asymmetric-cut multilayer structure is reported. An asymmetric-cut multilayer structure acts as a perfect blazed grating of high efficiency that exhibits a single diffracted order, as described by dynamical diffraction throughout the depth of the layered structure. The maximum number of grating periods created by cutting a multilayer deposited on a flat substrate is equal to the number of layers deposited, which limits the size of the grating. The size limitation was overcome by depositing the multilayer onto a substrate which itself is a coarse blazed grating and then polishing it flat to reveal the uniformly spaced layers of the multilayer. The number of deposited layers required is such that the multilayer thickness exceeds the step height of the substrate structure. The method is demonstrated by fabricating a 27,060 line pairs per mm blazed grating (36.95 nm period) that is repeated every 3,200 periods by the 120-μm period substrate structure. This preparation technique also relaxes the requirements on stress control and interface roughness of the multilayer film. The dispersion and efficiency of the grating are demonstrated for soft X-rays of 13.2 nm wavelength.

  19. Optimizing Air Transportation Service to Metroplex Airports. Part 2: Analysis Using the Airline Schedule Optimization Model (ASOM)

    NASA Technical Reports Server (NTRS)

    Donohue, George; Hoffman, Karla; Sherry, Lance; Ferguson, John; Kara, Abdul Qadar

    2010-01-01

    The air transportation system is a significant driver of the U.S. economy, providing safe, affordable, and rapid transportation. During the past three decades airspace and airport capacity has not grown in step with demand for air transportation; the failure to increase capacity at the same rate as the growth in demand results in unreliable service and systemic delay. This report describes the results of an analysis of airline strategic decision-making that affects geographic access, economic access, and airline finances, extending the analysis of these factors using historic data (from Part 1 of the report). The Airline Schedule Optimization Model (ASOM) was used to evaluate how exogenous factors (passenger demand, airline operating costs, and airport capacity limits) affect geographic access (markets-served, scheduled flights, aircraft size), economic access (airfares), airline finances (profit), and air transportation efficiency (aircraft size). This analysis captures the impact of the implementation of airport capacity limits, as well as the effect of increased hedged fuel prices, which serve as a proxy for increased costs per flight that might occur if auctions or congestion pricing are imposed; also incorporated are demand elasticity curves based on historical data that provide information about how passenger demand is affected by airfare changes.

  20. Physical pretreatment – woody biomass size reduction – for forest biorefinery

    Treesearch

    J.Y. Zhu

    2011-01-01

    Physical pretreatment of woody biomass or wood size reduction is a prerequisite step for further chemical or biochemical processing in forest biorefinery. However, wood size reduction is very energy intensive which differentiates woody biomass from herbaceous biomass for biorefinery. This chapter discusses several critical issues related to wood size reduction: (1)...

  1. Planning energy-efficient bipedal locomotion on patterned terrain

    NASA Astrophysics Data System (ADS)

    Zamani, Ali; Bhounsule, Pranav A.; Taha, Ahmad

    2016-05-01

    Energy-efficient bipedal walking is essential in realizing practical bipedal systems. However, current energy-efficient bipedal robots (e.g., passive-dynamics-inspired robots) are limited to walking at a single speed and step length. The objective of this work is to address this gap by developing a method of synthesizing energy-efficient bipedal locomotion on patterned terrain consisting of stepping stones using energy-efficient primitives. A model of Cornell Ranger (a passive-dynamics-inspired robot) is utilized to illustrate our technique. First, an energy-optimal trajectory control problem for a single step is formulated and solved. The solution minimizes the Total Cost Of Transport (TCOT, defined as the energy used per unit weight per unit distance travelled) subject to various constraints such as actuator limits, foot scuffing, joint kinematic limits, and ground reaction forces. The outcome of the optimization scheme is a table of TCOT values as a function of step length and step velocity. Next, we parameterize the terrain to identify the location of the stepping stones. Finally, the TCOT table is used in conjunction with the parameterized terrain to plan an energy-efficient stepping strategy.
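The last step, combining a TCOT table with parameterized stone locations, is in essence a shortest-path search. A hedged dynamic-programming sketch (not the paper's solver; the 0.8 m maximum step length and the U-shaped tcot curve in the test are illustrative assumptions):

```python
def plan_steps(stones, tcot):
    """Choose a sequence of stepping stones minimizing total energy per
    unit weight, sum of tcot(d) * d over step lengths d, from the first
    stone to the last.  `stones` are 1-d positions (meters); `tcot` is a
    stand-in for the precomputed TCOT lookup table."""
    n = len(stones)
    INF = float("inf")
    cost = [INF] * n          # best cost to reach each stone
    prev = [-1] * n           # back-pointers for path reconstruction
    cost[0] = 0.0
    for j in range(1, n):
        for i in range(j):
            d = stones[j] - stones[i]
            if 0 < d <= 0.8:              # assumed max feasible step length
                c = cost[i] + tcot(d) * d
                if c < cost[j]:
                    cost[j], prev[j] = c, i
    # reconstruct the stone sequence
    path, j = [], n - 1
    while j != -1:
        path.append(j)
        j = prev[j]
    return path[::-1], cost[-1]
```

With a U-shaped tcot curve the planner naturally skips stones that would force very short, inefficient steps, which mirrors the paper's idea of selecting among energy-efficient primitives.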

  2. Study on characteristics of printed circuit board liberation and its crushed products.

    PubMed

    Quan, Cui; Li, Aimin; Gao, Ningbo

    2012-11-01

    Recycling printed circuit board waste (PCBW) is a pressing issue of environmental protection and resource recycling. Mechanical and thermo-chemical methods are two traditional recycling processes for PCBW. In the present research, a two-step crushing process combining a coarse-crushing step and a fine-pulverizing step was adopted, and the crushed products were then classified into seven size fractions with a standard sieve. The liberation situation and particle shape in the different size fractions were observed. Properties of the different size fractions, such as heating value and thermogravimetric, proximate, ultimate, and chemical analyses, were determined. The Rosin-Rammler model was applied to analyze the particle size distribution of the crushed material. The results indicated that complete liberation of metals from the PCBW was achieved at sizes below 0.59 mm, but nonmetal particles in the smaller-than-0.15 mm fraction were liable to aggregate. Copper was the most prominent metal in PCBW and was mainly enriched in the 0.42-0.25 mm particle size. The Rosin-Rammler equation adequately fit the particle size distribution data of crushed PCBW, with a correlation coefficient of 0.9810. The results of the heating value and proximate analyses revealed that the PCBW had a low heating value and high ash content. The combustion and pyrolysis processes of PCBW differed, and there was an obvious oxidation peak of Cu in the combustion runs.
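The Rosin-Rammler fit mentioned above is commonly performed by linearizing the cumulative passing P(d) = 1 - exp(-(d/d0)^n), since ln(-ln(1 - P)) = n·ln(d) - n·ln(d0) is linear in ln(d). A minimal sketch (synthetic data; the paper's sieve fractions are not reproduced here):

```python
import math

def fit_rosin_rammler(sizes, passing):
    """Fit P(d) = 1 - exp(-(d/d0)**n) by linearization followed by
    ordinary least squares; returns the exponent n and size parameter d0."""
    xs = [math.log(d) for d in sizes]
    ys = [math.log(-math.log(1.0 - p)) for p in passing]
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    # slope of the linearized relation is the Rosin-Rammler exponent n
    n = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    # intercept is -n*ln(d0), so d0 = exp(xbar - ybar/n)
    d0 = math.exp(xbar - ybar / n)
    return n, d0
```

The correlation coefficient the abstract reports (0.9810) would be computed on the same linearized coordinates.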

  3. Workshop II On Unsteady Separated Flow Proceedings

    DTIC Science & Technology

    1988-07-28

    [Abstract garbled in source; recoverable fragments:] The static stall angle was 12°; flow visualization was achieved by injecting diluted food coloring at the apex through a 1.5 mm diameter tube. Finite differences with uniform step size, including three-point differences, were used. The nonlinearity of the flow properties for slender 3D wings and the "Kutta condition" are addressed; the paper emphasizes recent progress in the study of unsteady separated flow.

  4. The GRAM-3 model

    NASA Technical Reports Server (NTRS)

    Justus, C. G.

    1987-01-01

    The Global Reference Atmosphere Model (GRAM) is under continuous development and improvement. GRAM data were compared with Middle Atmosphere Program (MAP) predictions and with shuttle data. An important note: users should employ only step sizes in altitude that give vertical density gradients consistent with shuttle-derived density data. Using too small a vertical step size (finer than 1 km) will produce what appear to be unreasonably high density shears but are in reality noise in the model.
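
    The trade-off behind this note can be sketched numerically: when differencing a measured density profile, truncation error shrinks with the step h while noise error grows as 1/h. A minimal illustration in Python, where the exponential profile, scale height, and noise level are invented rather than GRAM values:

    ```python
    import math

    # Illustrative sketch (not GRAM code): a one-sided finite difference
    # of a density profile trades truncation error (shrinks with h)
    # against measurement-noise error (grows as 1/h), so too fine a
    # vertical step reports spurious "shears" that are really noise.

    H = 7.0e3      # density scale height (m), illustrative
    EPS = 1.0e-4   # relative measurement noise on density

    def rho(z):
        """Smooth exponential density profile (noise-free)."""
        return math.exp(-z / H)

    def truncation_error(z, h):
        """|one-sided FD gradient - true gradient| for the smooth profile."""
        fd = (rho(z + h) - rho(z)) / h
        return abs(fd - (-rho(z) / H))

    def worst_noise_error(z, h):
        """Gradient error when opposite-sign noise hits the two samples."""
        return 2.0 * EPS * rho(z) / h

    z0 = 10.0e3
    for h in (2000.0, 1000.0, 100.0, 10.0):
        print(h, truncation_error(z0, h), worst_noise_error(z0, h))
    ```

    Halving h halves the truncation error but doubles the worst-case noise contribution, matching the warning that steps much finer than 1 km expose model noise rather than real density shears.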

  5. 36 CFR 1004.11 - Load, weight and size limits.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... limits when appropriate for traffic safety or protection of the road surface. The Board may require a... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Load, weight and size limits... § 1004.11 Load, weight and size limits. (a) Vehicle load, weight and size limits established by State law...

  6. 36 CFR 1004.11 - Load, weight and size limits.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... limits when appropriate for traffic safety or protection of the road surface. The Board may require a... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Load, weight and size limits... § 1004.11 Load, weight and size limits. (a) Vehicle load, weight and size limits established by State law...

  7. 36 CFR 1004.11 - Load, weight and size limits.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... limits when appropriate for traffic safety or protection of the road surface. The Board may require a... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Load, weight and size limits... § 1004.11 Load, weight and size limits. (a) Vehicle load, weight and size limits established by State law...

  8. Between-monitor differences in step counts are related to body size: implications for objective physical activity measurement.

    PubMed

    Pomeroy, Jeremy; Brage, Søren; Curtis, Jeffrey M; Swan, Pamela D; Knowler, William C; Franks, Paul W

    2011-04-27

    The quantification of the relationships between walking and health requires that walking is measured accurately. We correlated different measures of step accumulation to body size, overall physical activity level, and glucose regulation. Participants were 25 men and 25 women, American Indians without diabetes (age 20-34 years), in Phoenix, Arizona, USA. We assessed steps/day during 7 days of free living, simultaneously with three different monitors (Accusplit-AX120, MTI-ActiGraph, and Dynastream-AMP). We assessed total physical activity during free living with doubly labeled water combined with resting metabolic rate measured by expired-gas indirect calorimetry. Glucose tolerance was determined during an oral glucose tolerance test. Based on observed counts in the laboratory, the AMP was the most accurate device, followed by the MTI and the AX120, respectively. The estimated energy cost of 1000 steps per day was lower for the AX120 than for the MTI or AMP. The correlation between AX120-assessed steps/day and waist circumference was significantly higher than the correlation between AMP steps and waist circumference. The differences in steps per day between the AX120 and both the AMP and the MTI were significantly related to waist circumference. Between-monitor differences in step counts influence the observed relationship between walking and obesity-related traits.

  9. Short-term Time Step Convergence in a Climate Model

    DOE PAGES

    Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...

    2015-02-11

    A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4, in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
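
    The self-convergence arithmetic behind such a test is compact: if the RMS error behaves as e(dt) ~ C·dt^p, two runs at different step sizes give the observed order p = log(e1/e2) / log(dt1/dt2). A sketch with invented error values (not the paper's data):

    ```python
    import math

    # Sketch: estimating the observed order of convergence p from RMS
    # differences e(dt) at two time step sizes, assuming e(dt) ~ C*dt**p.
    # The error values below are invented for illustration.

    def observed_order(e1, dt1, e2, dt2):
        return math.log(e1 / e2) / math.log(dt1 / dt2)

    # A first-order scheme halves its error when the step is halved:
    print(observed_order(0.08, 1800.0, 0.04, 900.0))            # 1.0
    # Order-0.4 convergence: halving dt cuts the error by only ~24%:
    print(observed_order(0.08, 1800.0, 0.08 * 0.5 ** 0.4, 900.0))
    ```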

  10. Building an open-source robotic stereotaxic instrument.

    PubMed

    Coffey, Kevin R; Barker, David J; Ma, Sisi; West, Mark O

    2013-10-29

    This protocol includes the designs and software necessary to upgrade an existing stereotaxic instrument to a robotic (CNC) stereotaxic instrument for around $1,000 (excluding a drill), using industry-standard stepper motors and CNC controlling software. Each axis has variable speed control and may be operated simultaneously or independently. The robot's flexibility and open coding system (g-code) make it capable of performing custom tasks that are not supported by commercial systems. Its applications include, but are not limited to, drilling holes, sharp-edge craniotomies, skull thinning, and lowering electrodes or cannulae. In order to expedite the writing of g-code for simple surgeries, we have developed custom scripts that allow individuals to design a surgery with no knowledge of programming. However, for users to get the most out of the motorized stereotax, it would be beneficial to be knowledgeable in mathematical programming and g-code (simple programming for CNC machining). The recommended drill speed is greater than 40,000 rpm. The stepper motor resolution is 1.8°/step, geared down to 0.346°/step, which yields a resolution of 2.88 μm/step on a standard stereotax. The maximum recommended cutting speed is 500 μm/sec. The maximum recommended jogging speed is 3,500 μm/sec. The maximum recommended drill bit size is HP 2.
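
    The quoted resolutions and speed limits translate into motion commands straightforwardly. A hypothetical helper (not part of the published protocol) that rounds a travel to whole motor steps at 2.88 μm/step and emits a standard g-code `G1` move, assuming the controller takes millimeters and a feed rate in mm/min:

    ```python
    # Hypothetical helper, not from the published protocol: convert a
    # travel in micrometers to whole motor steps at the stated
    # 2.88 um/step resolution, and emit a G-code move capped at the
    # recommended 500 um/s cutting speed.

    STEP_UM = 2.88            # stereotax resolution (um/step), from the text
    MAX_CUT_UM_S = 500.0      # max recommended cutting speed (um/s)

    def to_steps(travel_um):
        """Round a travel distance to the nearest whole step."""
        return round(travel_um / STEP_UM)

    def gcode_move(axis, travel_um, speed_um_s=MAX_CUT_UM_S):
        speed = min(speed_um_s, MAX_CUT_UM_S)   # never exceed the cap
        mm = travel_um / 1000.0                 # assume controller uses mm
        feed_mm_min = speed * 60.0 / 1000.0     # um/s -> mm/min
        return f"G1 {axis}{mm:.4f} F{feed_mm_min:.1f}"

    print(to_steps(2880.0))          # 1000 steps for 2.88 mm of travel
    print(gcode_move("Z", -2000.0))  # lower 2 mm at <= 500 um/s
    ```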

  11. Molecular dynamics with rigid bodies: Alternative formulation and assessment of its limitations when employed to simulate liquid water

    NASA Astrophysics Data System (ADS)

    Silveira, Ana J.; Abreu, Charlles R. A.

    2017-09-01

    Sets of atoms collectively behaving as rigid bodies are often used in molecular dynamics to model entire molecules or parts thereof. This is a coarse-graining strategy that eliminates degrees of freedom and supposedly admits larger time steps without abandoning the atomistic character of a model. In this paper, we rely on a particular factorization of the rotation matrix to simplify the mechanical formulation of systems containing rigid bodies. We then propose a new derivation for the exact solution of torque-free rotations, which are employed as part of a symplectic numerical integration scheme for rigid-body dynamics. We also review methods for calculating pressure in systems of rigid bodies with pairwise-additive potentials and periodic boundary conditions. Finally, simulations of liquid phases, with special focus on water, are employed to analyze the numerical aspects of the proposed methodology. Our results show that energy drift is avoided for time step sizes up to 5 fs, but only if a proper smoothing is applied to the interatomic potentials. Despite this, the effects of discretization errors are relevant, even for smaller time steps. These errors induce, for instance, a systematic failure of the expected equipartition of kinetic energy between translational and rotational degrees of freedom.

  12. Proposed variations of the stepped-wedge design can be used to accommodate multiple interventions

    PubMed Central

    Lyons, Vivian H; Li, Lingyu; Hughes, James P; Rowhani-Rahbar, Ali

    2018-01-01

    Objective Stepped wedge design (SWD) cluster randomized trials have traditionally been used for evaluating a single intervention. We aimed to explore design variants suitable for evaluating multiple interventions in a SWD trial. Study Design and Setting We identified four specific variants of the traditional SWD that would allow two interventions to be conducted within a single cluster randomized trial: Concurrent, Replacement, Supplementation and Factorial SWDs. These variants were chosen to flexibly accommodate study characteristics that limit a one-size-fits-all approach for multiple interventions. Results In the Concurrent SWD, each cluster receives only one intervention, unlike the other variants. The Replacement SWD supports two interventions that will not or cannot be employed at the same time. The Supplementation SWD is appropriate when the second intervention requires the presence of the first intervention, and the Factorial SWD supports the evaluation of intervention interactions. The precision for estimating intervention effects varies across the four variants. Conclusion Selection of the appropriate design variant should be driven by the research question while considering the trade-off between the number of steps, number of clusters, restrictions for concurrent implementation of the interventions, lingering effects of each intervention, and precision of the intervention effect estimates. PMID:28412466

  13. Single-Molecule Optical Spectroscopy and Imaging: From Early Steps to Recent Advances

    NASA Astrophysics Data System (ADS)

    Moerner, William E.

    The initial steps toward optical detection and spectroscopy of single molecules arose out of the study of spectral hole-burning in inhomogeneously broadened optical absorption profiles of molecular impurities in solids at low temperatures. Spectral signatures relating to the fluctuations of the number of molecules in resonance led to the attainment of the single-molecule limit in 1989. In the early 1990s, many fascinating physical effects were observed for individual molecules, such as spectral diffusion, optical switching, vibrational spectra, and magnetic resonance of a single molecular spin. Since the mid-1990s, when experiments moved to room temperature, a wide variety of biophysical effects have been explored, and a number of physical phenomena from the low-temperature studies have analogs at high temperature. Recent advances worldwide cover a huge range, from in vitro studies of enzymes, proteins, and oligonucleotides, to observations in real time of a single protein performing a specific function inside a living cell. Because each single fluorophore acts as a light source roughly 1 nm in size, microscopic observation of individual fluorophores leads naturally to localization beyond the optical diffraction limit. Combining this with active optical control of the number of emitting molecules leads to superresolution imaging, a new frontier for optical microscopy beyond the optical diffraction limit and for chemical design of photoswitchable fluorescent labels. Finally, to study one molecule in aqueous solution without surface perturbations, a new electrokinetic trap (the ABEL trap) is described, which can trap single small biomolecules without the need for large dielectric beads.

  14. Synthetic scaffolds with full pore interconnectivity for bone regeneration prepared by supercritical foaming using advanced biofunctional plasticizers.

    PubMed

    Salerno, Aurelio; Diéguez, Sara; Diaz-Gomez, Luis; Gómez-Amoza, José L; Magariños, Beatriz; Concheiro, Angel; Domingo, Concepción; Alvarez-Lorenzo, Carmen; García-González, Carlos A

    2017-06-30

    Supercritical foaming allows for the solvent-free processing of synthetic scaffolds for bone regeneration. However, the control of pore interconnectivity and throat pore size with this technique still needs to be improved. The use of plasticizers may help overcome these limitations. Eugenol, a GRAS natural compound extracted from plants, is proposed in this work as an advanced plasticizer with bioactive properties. Eugenol-containing poly(ε-caprolactone) (PCL) scaffolds were obtained by supercritical foaming (20.0 MPa, 45 °C, 17 h) followed by a one- or a two-step depressurization profile. The effects of the eugenol content and the depressurization profile on the porous structure of the material and the physicochemical properties of the scaffold were evaluated. The combination of both processing parameters was successful in simultaneously tuning the pore interconnectivity and throat sizes to allow mesenchymal stem cell infiltration. Scaffolds with eugenol were cytocompatible, presented antimicrobial activity preventing the attachment of Gram-positive bacteria (S. aureus, S. epidermidis), and showed good tissue integration.

  15. Inkjet formation of unilamellar lipid vesicles for cell-like encapsulation†

    PubMed Central

    Stachowiak, Jeanne C.; Richmond, David L.; Li, Thomas H.; Brochard-Wyart, Françoise

    2010-01-01

    Encapsulation of macromolecules within lipid vesicles has the potential to drive biological discovery and enable development of novel, cell-like therapeutics and sensors. However, rapid and reliable production of large numbers of unilamellar vesicles loaded with unrestricted and precisely-controlled contents requires new technologies that overcome size, uniformity, and throughput limitations of existing approaches. Here we present a high-throughput microfluidic method for vesicle formation and encapsulation using an inkjet printer at rates up to 200 Hz. We show how multiple high-frequency pulses of the inkjet’s piezoelectric actuator create a microfluidic jet that deforms a bilayer lipid membrane, controlling formation of individual vesicles. Variations in pulse number, pulse voltage, and solution viscosity are used to control the vesicle size. As a first step toward cell-like reconstitution using this method, we encapsulate the cytoskeletal protein actin and use co-encapsulated microspheres to track its polymerization into a densely entangled cytoskeletal network upon vesicle formation. PMID:19568667

  16. The density matrix renormalization group algorithm on kilo-processor architectures: Implementation and trade-offs

    NASA Astrophysics Data System (ADS)

    Nemes, Csaba; Barcza, Gergely; Nagy, Zoltán; Legeza, Örs; Szolgay, Péter

    2014-06-01

    In the numerical analysis of strongly correlated quantum lattice models, one of the leading algorithms developed to balance the size of the effective Hilbert space and the accuracy of the simulation is the density matrix renormalization group (DMRG) algorithm, in which the run-time is dominated by the iterative diagonalization of the Hamilton operator. As the most time-consuming step of the diagonalization can be expressed as a list of dense matrix operations, the DMRG is an appealing candidate to fully utilize the computing power residing in novel kilo-processor architectures. In the paper a smart hybrid CPU-GPU implementation is presented, which exploits the power of both the CPU and the GPU and tolerates problems exceeding the GPU memory size. Furthermore, a new CUDA kernel has been designed for asymmetric matrix-vector multiplication to accelerate the rest of the diagonalization. Besides the evaluation of the GPU implementation, the practical limits of an FPGA implementation are also discussed.

  17. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, Bronson; Sewell, Christopher; Heitmann, Katrin

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  18. Simulation study and guidelines to generate Laser-induced Surface Acoustic Waves for human skin feature detection

    NASA Astrophysics Data System (ADS)

    Li, Tingting; Fu, Xing; Chen, Kun; Dorantes-Gonzalez, Dante J.; Li, Yanning; Wu, Sen; Hu, Xiaotang

    2015-12-01

    Despite the seriously increasing number of people contracting skin cancer every year, limited attention has been given to the investigation of human skin tissues. In this regard, Laser-induced Surface Acoustic Wave (LSAW) technology, with its accurate, non-invasive and rapid testing characteristics, has recently shown promising results in biological and biomedical tissues. In order to improve the measurement accuracy and efficiency of detecting important features in highly opaque and soft surfaces such as human skin, this paper identifies the most important parameters of a pulse laser source and provides practical guidelines on the proper ranges for generating Surface Acoustic Waves (SAWs) for characterization purposes. Considering that melanoma is a serious type of skin cancer, we conducted finite element simulation-based research on the generation and propagation of surface waves in human skin containing a melanoma-like feature, and determined the best ranges of variation for the pulse laser parameters, the simulation mesh size and time step, the working bandwidth, and the minimal size of detectable melanoma.

  19. Crystal growth kinetics of triblock Janus colloids

    NASA Astrophysics Data System (ADS)

    Reinhart, Wesley F.; Panagiotopoulos, Athanassios Z.

    2018-03-01

    We measure the kinetics of crystal growth from a melt of triblock Janus colloids using non-equilibrium molecular dynamics simulations. We assess the impact of interaction anisotropy by systematically varying the size of the attractive patches from 40% to 100% coverage, finding substantially different growth behaviors in the two limits. With isotropic particles, the interface velocity is directly proportional to the subcooling, in agreement with previous studies. With highly anisotropic particles, the growth curves are well approximated by using a power law with exponent and prefactor that depend strongly on the particular surface geometry and patch fraction. This nonlinear growth appears correlated to the roughness of the solid-liquid interface, with the strongest growth inhibition occurring for the smoothest crystal faces. We conclude that crystal growth for patchy particles does not conform to the typical collision-limited mechanism, but is instead an activated process in which the rate-limiting step is the collective rotation of particles into the proper orientation. Finally, we show how differences in the growth kinetics could be leveraged to achieve kinetic control over polymorph growth, either enhancing or suppressing metastable phases near solid-solid coexistence lines.

  20. Toward practical 3D radiography of pipeline girth welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wassink, Casper, E-mail: casper.wassink@applusrtd.com; Hol, Martijn, E-mail: martijn.hol@applusrtd.com; Flikweert, Arjan, E-mail: martijn.hol@applusrtd.com

    2015-03-31

    Digital radiography has made its way into in-the-field girth weld testing. With recent generations of detectors and x-ray tubes it is possible to reach the image quality desired in standards as well as the speed of inspection desired to be competitive with film radiography and automated ultrasonic testing. This paper will show the application of these technologies in the RTD Rayscan system. The method for achieving an image quality that complies with or even exceeds prevailing industrial standards will be presented, as well as the application to pipeline girth welds with CRA layers. A next step in development will be to also achieve a measurement of weld flaw height to allow for performing an Engineering Critical Assessment on the weld. This will allow for similar acceptance limits as currently used with Automated Ultrasonic Testing of pipeline girth welds. Although a sufficient sizing accuracy was already demonstrated and qualified in the TomoCAR system, testing in some applications is restricted by time limits. The paper will present some experiments that were performed to achieve flaw height approximation within these time limits.

  1. 50 CFR 622.275 - Size limits.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Dolphin and Wahoo Fishery Off the Atlantic States § 622.275 Size limits. All size limits in this section are minimum size...

  2. 50 CFR 622.275 - Size limits.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Dolphin and Wahoo Fishery Off the Atlantic States § 622.275 Size limits. All size limits in this section are minimum size...

  3. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
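
    The comparisons above rest on design-effect arithmetic. As a minimal sketch of the standard parallel-CRT case only (the SW-CRT design effect derived in the paper is more involved), using the textbook formula DE = 1 + (m - 1)·ICC for cluster size m:

    ```python
    import math

    # Sketch of standard cluster-randomization arithmetic (parallel CRT
    # only; not the paper's SW-CRT formulas). DE = 1 + (m - 1) * ICC.

    def design_effect_crt(m, icc):
        """Variance inflation for a parallel CRT with cluster size m."""
        return 1.0 + (m - 1.0) * icc

    def clusters_needed(n_individual, m, icc):
        """Clusters per arm to match an individually randomized sample size."""
        return math.ceil(n_individual * design_effect_crt(m, icc) / m)

    # Example: 300 participants per arm under individual randomization,
    # clusters of m = 20, ICC = 0.05.
    print(round(design_effect_crt(20, 0.05), 4))   # design effect 1.95
    print(clusters_needed(300, 20, 0.05))          # 30 clusters per arm
    ```

    The paper's unified framework extends this idea, expressing the SW-CRT and CRT-BA efficiencies as design effects relative to individual randomization so the three designs can be compared directly.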

  4. Effects of two-step homogenization on precipitation behavior of Al{sub 3}Zr dispersoids and recrystallization resistance in 7150 aluminum alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Zhanying; Key Laboratory for Anisotropy and Texture of Materials, Northeastern University, Shenyang 110819, China,; Zhao, Gang

    2015-04-15

    The effect of two-step homogenization treatments on the precipitation behavior of Al{sub 3}Zr dispersoids was investigated by transmission electron microscopy (TEM) in 7150 alloys. Two-step treatments with the first step in the temperature range of 300–400 °C followed by the second step at 470 °C were applied during homogenization. Compared with the conventional one-step homogenization, both a finer particle size and a higher number density of Al{sub 3}Zr dispersoids were obtained with two-step homogenization treatments. The most effective dispersoid distribution was attained using the first step held at 300 °C. In addition, the two-step homogenization minimized the precipitate-free zones and greatly increased the number density of dispersoids near dendrite grain boundaries. The effect of two-step homogenization on recrystallization resistance of 7150 alloys with different Zr contents was quantitatively analyzed using the electron backscattered diffraction (EBSD) technique. It was found that the improved dispersoid distribution through the two-step treatment can effectively inhibit the recrystallization process during the post-deformation annealing for 7150 alloys containing 0.04–0.09 wt.% Zr, resulting in a remarkable reduction of the volume fraction and grain size of recrystallized grains. - Highlights: • Effect of two-step homogenization on Al{sub 3}Zr dispersoids was investigated by TEM. • Finer and higher number of dispersoids obtained with two-step homogenization • Minimized the precipitate free zones and improved the dispersoid distribution • Recrystallization resistance with varying Zr content was quantified by EBSD. • Effectively inhibit the recrystallization through two-step treatments in 7150 alloy.

  5. 50 CFR 622.56 - Size limits.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Shrimp Fishery of the Gulf of Mexico § 622.56 Size limits. Shrimp not in compliance with the applicable size limit as... shrimp harvested in the Gulf EEZ are subject to the minimum-size landing and possession limits of...

  6. 50 CFR 622.56 - Size limits.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Shrimp Fishery of the Gulf of Mexico § 622.56 Size limits. Shrimp not in compliance with the applicable size limit as... shrimp harvested in the Gulf EEZ are subject to the minimum-size landing and possession limits of...

  7. Effect of sulfur source on photocatalytic degradation performance of CdS/MoS2 prepared with one-step hydrothermal synthesis.

    PubMed

    Wang, Yanfeng; Chen, Wei; Chen, Xiao; Feng, Huajun; Shen, Dongsheng; Huang, Bin; Jia, Yufeng; Zhou, Yuyang; Liang, Yuxiang

    2018-03-01

    CdS/MoS2, an extremely efficient photocatalyst, has been extensively used in hydrogen photoproduction and pollutant degradation. CdS/MoS2 can be synthesized by a facile one-step hydrothermal process. However, the effect of the sulfur source on the synthesis of CdS/MoS2 via one-step hydrothermal methods has seldom been investigated. We report herein a series of one-step hydrothermal preparations of CdS/MoS2 using three different sulfur sources: thioacetamide, l-cysteine, and thiourea. The results revealed that the sulfur source strongly affected the crystallization, morphology, elemental composition and ultraviolet (UV)-visible-light-absorption ability of the CdS/MoS2. Among the investigated sulfur sources, thioacetamide provided the highest visible-light absorption ability for CdS/MoS2, with the smallest average particle size and largest surface area, resulting in the highest efficiency in Methylene Blue (MB) degradation. The photocatalytic activity of CdS/MoS2 synthesized from the three sulfur sources can be arranged in the following order: thioacetamide > l-cysteine > thiourea. The reaction rate constants (k) for thioacetamide, l-cysteine, and thiourea were estimated to be 0.0197, 0.0140, and 0.0084 min⁻¹, respectively. However, thioacetamide may be limited in practical application in terms of its price and toxicity, while l-cysteine is relatively economical, less toxic and exhibited good photocatalytic degradation performance toward MB. Copyright © 2017. Published by Elsevier B.V.
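
    The reported rate constants imply first-order kinetics, C(t) = C0·exp(-k·t), so the time to a given removal fraction follows directly; for 90% removal, t90 = ln(10)/k. A short sketch using the rate constants quoted above (the t90 comparison itself is our arithmetic, not a result stated in the record):

    ```python
    import math

    # Sketch: first-order photocatalytic kinetics C(t) = C0 * exp(-k*t),
    # using the reported rate constants (min^-1) to compare times to 90%
    # Methylene Blue removal, t90 = ln(10) / k.

    ks = {"thioacetamide": 0.0197, "l-cysteine": 0.0140, "thiourea": 0.0084}

    def t90_minutes(k):
        return math.log(10.0) / k

    for src, k in ks.items():
        print(f"{src}: t90 = {t90_minutes(k):.0f} min")
    ```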

  8. A proposed approach to monitor private-sector policies and practices related to food environments, obesity and non-communicable disease prevention.

    PubMed

    Sacks, G; Swinburn, B; Kraak, V; Downs, S; Walker, C; Barquera, S; Friel, S; Hawkes, C; Kelly, B; Kumanyika, S; L'Abbé, M; Lee, A; Lobstein, T; Ma, J; Macmullan, J; Mohan, S; Monteiro, C; Neal, B; Rayner, M; Sanders, D; Snowdon, W; Vandevijvere, S

    2013-10-01

    Private-sector organizations play a critical role in shaping the food environments of individuals and populations. However, there is currently very limited independent monitoring of private-sector actions related to food environments. This paper reviews previous efforts to monitor the private sector in this area, and outlines a proposed approach to monitor private-sector policies and practices related to food environments, and their influence on obesity and non-communicable disease (NCD) prevention. A step-wise approach to data collection is recommended, in which the first ('minimal') step is the collation of publicly available food and nutrition-related policies of selected private-sector organizations. The second ('expanded') step assesses the nutritional composition of each organization's products, their promotions to children, their labelling practices, and the accessibility, availability and affordability of their products. The third ('optimal') step includes data on other commercial activities that may influence food environments, such as political lobbying and corporate philanthropy. The proposed approach will be further developed and piloted in countries of varying size and income levels. There is potential for this approach to enable national and international benchmarking of private-sector policies and practices, and to inform efforts to hold the private sector to account for their role in obesity and NCD prevention. © 2013 The Authors. Obesity Reviews published by John Wiley & Sons Ltd on behalf of the International Association for the Study of Obesity.

  9. Inferring Regulatory Networks by Combining Perturbation Screens and Steady State Gene Expression Profiles

    PubMed Central

    Michailidis, George

    2014-01-01

    Reconstructing transcriptional regulatory networks is an important task in functional genomics. Data obtained from experiments that perturb genes by knockouts or RNA interference contain useful information for addressing this reconstruction problem. However, such data can be limited in size and/or are expensive to acquire. On the other hand, observational data of the organism in steady state (e.g., wild-type) are more readily available, but their informational content is inadequate for the task at hand. We develop a computational approach to appropriately utilize both data sources for estimating a regulatory network. The proposed approach is based on a three-step algorithm to estimate the underlying directed but cyclic network, that uses as input both perturbation screens and steady state gene expression data. In the first step, the algorithm determines causal orderings of the genes that are consistent with the perturbation data, by combining an exhaustive search method with a fast heuristic that in turn couples a Monte Carlo technique with a fast search algorithm. In the second step, for each obtained causal ordering, a regulatory network is estimated using a penalized likelihood based method, while in the third step a consensus network is constructed from the highest scored ones. Extensive computational experiments show that the algorithm performs well in reconstructing the underlying network and clearly outperforms competing approaches that rely only on a single data source. Further, it is established that the algorithm produces a consistent estimate of the regulatory network. PMID:24586224

  10. Hollow Microtube Resonators via Silicon Self-Assembly toward Subattogram Mass Sensing Applications.

    PubMed

    Kim, Joohyun; Song, Jungki; Kim, Kwangseok; Kim, Seokbeom; Song, Jihwan; Kim, Namsu; Khan, M Faheem; Zhang, Linan; Sader, John E; Park, Keunhan; Kim, Dongchoul; Thundat, Thomas; Lee, Jungchul

    2016-03-09

    Fluidic resonators with integrated microchannels (hollow resonators) are attractive for mass, density, and volume measurements of single micro/nanoparticles and cells, yet their widespread use is limited by the complexity of their fabrication. Here we report a simple and cost-effective approach for fabricating hollow microtube resonators. A prestructured silicon wafer is annealed at high temperature under a controlled atmosphere to form self-assembled buried cavities. The interiors of these cavities are oxidized to produce thin oxide tubes, following which the surrounding silicon material is selectively etched away to suspend the oxide tubes. This simple three-step process easily produces hollow microtube resonators. We report another innovation in the capping glass wafer, where we integrate fluidic access channels and getter materials along with residual gas suction channels. Taken together, only five photolithographic steps and one bonding step are required to fabricate vacuum-packaged hollow microtube resonators that exhibit quality factors as high as ∼13,000. We go a step further and explore additional attractive features, including the ability to tune the device responsivity, change the resonator material, and scale down the resonator size. A resonator wall thickness of ∼120 nm and a channel hydraulic diameter of ∼60 nm are demonstrated solely by conventional microfabrication approaches. The unique characteristics of this new fabrication process facilitate the widespread use of hollow microtube resonators, their translation between diverse research fields, and the production of commercially viable devices.

  11. Over-current carrying characteristics of rectangular-shaped YBCO thin films prepared by MOD method

    NASA Astrophysics Data System (ADS)

    Hotta, N.; Yokomizu, Y.; Iioka, D.; Matsumura, T.; Kumagai, T.; Yamasaki, H.; Shibuya, M.; Nitta, T.

    2008-02-01

    A fault current limiter (FCL) may be manufactured at competitive quality and price by using rectangular-shaped YBCO films prepared by the metal-organic deposition (MOD) method, because the MOD method can produce large-size elements with a low-cost, non-vacuum technique. Prior to constructing a superconducting FCL (SFCL), AC over-current carrying experiments were conducted on 120 mm long elements in which a YBCO thin film about 200 nm thick was coated on a sapphire substrate with a cerium oxide (CeO2) interlayer. In the experiments, only a single cycle of a 50 Hz AC damping current was applied to the pure YBCO element, without a protective metal coating or parallel resistor, and the magnitude of the current was increased step by step until breakdown occurred in the element. In each experiment, the current waveform through the YBCO element and the voltage waveform across it were measured to obtain the voltage-current characteristics. The allowable over-current and generated voltage were successfully estimated for the pure YBCO films. A lower n-value tends to bring about a higher allowable over-current and a withstand voltage of more than tens of volts, whereas YBCO films with a higher n-value are sensitive to over-current. Thus, protective measures such as a metal coating should be employed when applying the films to a fault current limiter.

  12. Reynolds number scaling to predict droplet size distribution in dispersed and undispersed subsurface oil releases.

    PubMed

    Li, Pu; Weng, Linlu; Niu, Haibo; Robinson, Brian; King, Thomas; Conmy, Robyn; Lee, Kenneth; Liu, Lei

    2016-12-15

    This study was aimed at testing the applicability of modified Weber number scaling with Alaska North Slope (ANS) crude oil, and developing a Reynolds number scaling approach for oil droplet size prediction for high viscosity oils. Dispersant to oil ratio and empirical coefficients were also quantified. Finally, a two-step Rosin-Rammler scheme was introduced for the determination of droplet size distribution. This new approach appeared more advantageous in avoiding the inconsistency in interfacial tension measurements, and consequently delivered concise droplet size prediction. Calculated and observed data correlated well based on Reynolds number scaling. The relation indicated that chemical dispersant played an important role in reducing the droplet size of ANS under different seasonal conditions. The proposed Reynolds number scaling and two-step Rosin-Rammler approaches provide a concise, reliable way to predict droplet size distribution, supporting decision making in chemical dispersant application during an offshore oil spill. Copyright © 2016 Elsevier Ltd. All rights reserved.
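
    The Rosin-Rammler form referred to above is a standard two-parameter description of droplet (or particle) size distributions; a minimal sketch is given below. The parameter values are invented for illustration and are not the study's fitted coefficients, and the two-step fitting scheme itself is not reproduced here.

```python
import math

def rosin_rammler_cdf(d, d63, k):
    """Cumulative volume fraction of droplets with diameter below d.
    d63: characteristic diameter (63.2% of the volume lies below it);
    k: spread exponent (larger k -> narrower distribution)."""
    return 1.0 - math.exp(-((d / d63) ** k))

# Hypothetical parameters for illustration only (not values from the study):
d63, k = 200.0, 1.8           # microns, dimensionless
f_100 = rosin_rammler_cdf(100.0, d63, k)   # volume fraction below 100 um
f_300 = rosin_rammler_cdf(300.0, d63, k)   # volume fraction below 300 um
```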

  13. Control of Alginate Core Size in Alginate-Poly (Lactic-Co-Glycolic) Acid Microparticles

    NASA Astrophysics Data System (ADS)

    Lio, Daniel; Yeo, David; Xu, Chenjie

    2016-01-01

    Core-shell alginate-poly (lactic-co-glycolic) acid (PLGA) microparticles are potential candidates to improve hydrophilic drug loading while facilitating controlled release. This report studies the influence of the alginate core size on the drug release profile and overall size of alginate-PLGA microparticles. Microparticles are synthesized through double-emulsion fabrication via concurrent ionotropic gelation and solvent extraction. The alginate core size is approximately 10, 50, or 100 μm when the first-step emulsification method is homogenization, vortexing, or magnetic stirring, respectively. The second-step emulsification for all three conditions is performed with magnetic stirring. Interestingly, although the alginate cores have different sizes, the alginate-PLGA microparticle diameter does not change. However, drug release profiles are dramatically different for microparticles comprising different-sized alginate cores. Specifically, taking calcein as a model drug, microparticles containing the smallest alginate core (10 μm) show the slowest release over a period of 26 days, with a burst release of less than 1%.

  14. Facile synthesis of concentrated gold nanoparticles with low size-distribution in water: temperature and pH controls

    NASA Astrophysics Data System (ADS)

    Li, Chunfang; Li, Dongxiang; Wan, Gangqiang; Xu, Jie; Hou, Wanguo

    2011-07-01

    The citrate reduction method for the synthesis of gold nanoparticles (GNPs) has known advantages but usually yields products with a low nanoparticle concentration, which limits its application. Herein, we report a facile method to synthesize GNPs from concentrated chloroauric acid (2.5 mM) by adding sodium hydroxide and controlling the temperature. It was found that adding a proper amount of sodium hydroxide produces uniform, concentrated GNPs with a low size distribution; otherwise, broadly distributed nanoparticles or unstable colloids were obtained. A low reaction temperature helps control the nanoparticle formation rate, and uniform GNPs can be obtained in the presence of optimized NaOH concentrations. The pH values of the obtained uniform GNPs were found to be very near neutral, and the influence of pH on the particle size distribution may reveal different formation mechanisms of GNPs at high and low pH conditions. Moreover, this modified synthesis method can save more than 90% of the energy in the heating step. Such an environmentally friendly synthesis method for gold nanoparticles may have great potential in large-scale manufacturing for commercial and industrial demand.

  15. Ligament Mediated Fragmentation of Viscoelastic Liquids

    NASA Astrophysics Data System (ADS)

    Keshavarz, Bavand; Houze, Eric C.; Moore, John R.; Koerner, Michael R.; McKinley, Gareth H.

    2016-10-01

    The breakup and atomization of complex fluids can be markedly different from the analogous processes in a simple Newtonian fluid. Atomization of paint, combustion of fuels containing antimisting agents, as well as physiological processes such as sneezing are common examples in which the atomized liquid contains synthetic or biological macromolecules that result in viscoelastic fluid characteristics. Here, we investigate the ligament-mediated fragmentation dynamics of viscoelastic fluids in three different canonical flows. The size distributions measured in each viscoelastic fragmentation process show a systematic broadening from the Newtonian solvent. In each case, the droplet sizes are well described by Gamma distributions which correspond to a fragmentation-coalescence scenario. We use a prototypical axial step strain experiment together with high-speed video imaging to show that this broadening results from the pronounced change in the corrugated shape of viscoelastic ligaments as they separate from the liquid core. These corrugations saturate in amplitude and the measured distributions for viscoelastic liquids in each process are given by a universal probability density function, corresponding to a Gamma distribution with n_min = 4. The breadth of this size distribution for viscoelastic filaments is shown to be constrained by a geometrical limit which cannot be exceeded in ligament-mediated fragmentation phenomena.

  16. Biology Inspired Approach for Communal Behavior in Sensor Networks

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng

    2006-01-01

    Research in wireless sensor network technology has exploded in the last decade. Promises of complex and ubiquitous control of the physical environment by these networks open avenues for new kinds of science and business. Due to the small size and low cost of sensor devices, visionaries promise systems enabled by deployment of massive numbers of sensors working in concert. Although the reduction in size has been phenomenal, it results in severe limitations on the computing, communicating, and power capabilities of these devices. Under these constraints, research efforts have concentrated on developing techniques for performing relatively simple tasks with minimal energy expense, assuming some form of centralized control. Unfortunately, centralized control does not scale to massive networks, and execution of simple tasks in sparsely populated networks will not lead to the sophisticated applications predicted. These must be enabled by new techniques that depend on local and autonomous cooperation between sensors to effect global functions. As a step in that direction, in this work we detail a technique whereby a large population of sensors can attain a global goal using only local information and by making only local decisions, without any form of centralized control.

  17. Optimization and application of octadecyl-modified monolithic silica for solid-phase extraction of drugs in whole blood samples.

    PubMed

    Namera, Akira; Saito, Takeshi; Ota, Shigenori; Miyazaki, Shota; Oikawa, Hiroshi; Murata, Kazuhiro; Nagao, Masataka

    2017-09-29

    Monolithic silica in MonoSpin for solid-phase extraction of drugs from whole blood samples was developed to facilitate high-throughput analysis. Monolithic silicas with various pore sizes and octadecyl contents were synthesized, and their effects on recovery rates were evaluated. The silica monolith M18-200 (20 μm through-pore size, 10.4 nm mesopore size, and 17.3% carbon content) achieved the best recovery of the target analytes in whole blood samples. The extraction proceeded with centrifugal force at 1000 rpm for 2 min, and the eluate was directly injected into the liquid chromatography-mass spectrometry system without any tedious steps such as evaporation of extraction solvents. Under the optimized conditions, low detection limits of 0.5-2.0 ng mL⁻¹ and calibration ranges up to 1000 ng mL⁻¹ were obtained. The recoveries of the target drugs in whole blood were 76-108%, with relative standard deviations of less than 14.3%. These results indicate that the developed method based on monolithic silica is convenient, highly efficient, and applicable for detecting drugs in whole blood samples. Copyright © 2017 Elsevier B.V. All rights reserved.
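
    Detection limits like those reported above are commonly derived from a linear calibration curve, e.g. with the widely used ICH convention LOD = 3.3·σ/S (σ: standard deviation of the calibration residuals, S: slope). The sketch below uses invented calibration data, not the study's measurements, purely to show the arithmetic.

```python
# Minimal detection-limit estimate from a linear calibration curve, using the
# common ICH convention LOD = 3.3 * (residual sd) / slope. The calibration
# points below are hypothetical and for illustration only.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def detection_limit(xs, ys):
    slope, intercept = linear_fit(xs, ys)
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    sd = (sum(r * r for r in residuals) / (len(xs) - 2)) ** 0.5
    return 3.3 * sd / slope

conc = [1, 5, 10, 50, 100, 500, 1000]          # ng/mL, hypothetical standards
resp = [12, 51, 103, 498, 1010, 4985, 10020]   # detector response, hypothetical
lod = detection_limit(conc, resp)              # estimated LOD in ng/mL
```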

  18. Ligament Mediated Fragmentation of Viscoelastic Liquids.

    PubMed

    Keshavarz, Bavand; Houze, Eric C; Moore, John R; Koerner, Michael R; McKinley, Gareth H

    2016-10-07

    The breakup and atomization of complex fluids can be markedly different from the analogous processes in a simple Newtonian fluid. Atomization of paint, combustion of fuels containing antimisting agents, as well as physiological processes such as sneezing are common examples in which the atomized liquid contains synthetic or biological macromolecules that result in viscoelastic fluid characteristics. Here, we investigate the ligament-mediated fragmentation dynamics of viscoelastic fluids in three different canonical flows. The size distributions measured in each viscoelastic fragmentation process show a systematic broadening from the Newtonian solvent. In each case, the droplet sizes are well described by Gamma distributions which correspond to a fragmentation-coalescence scenario. We use a prototypical axial step strain experiment together with high-speed video imaging to show that this broadening results from the pronounced change in the corrugated shape of viscoelastic ligaments as they separate from the liquid core. These corrugations saturate in amplitude and the measured distributions for viscoelastic liquids in each process are given by a universal probability density function, corresponding to a Gamma distribution with n_{min}=4. The breadth of this size distribution for viscoelastic filaments is shown to be constrained by a geometrical limit which cannot be exceeded in ligament-mediated fragmentation phenomena.
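
    The Gamma distribution referred to above has a standard form in the ligament fragmentation literature: for droplet sizes rescaled by the mean, x = d/⟨d⟩, the order-n density is p(x) = nⁿ xⁿ⁻¹ e^(−nx)/Γ(n), with unit mean by construction. The sketch below evaluates it for the n = 4 case mentioned in the abstract; the numeric check is illustrative only.

```python
import math

def gamma_size_pdf(x, n):
    """Gamma distribution of order n for droplet sizes rescaled by the mean:
    p(x) = n^n x^(n-1) exp(-n x) / Gamma(n). Its mean is 1 by construction."""
    return (n ** n) * (x ** (n - 1)) * math.exp(-n * x) / math.gamma(n)

n = 4  # the broad viscoelastic case discussed above

# Crude left-rectangle check that the density integrates to ~1 with mean ~1.
dx = 0.001
xs = [i * dx for i in range(1, 20000)]          # 0 < x < 20 (tail negligible)
total = sum(gamma_size_pdf(x, n) * dx for x in xs)
mean = sum(x * gamma_size_pdf(x, n) * dx for x in xs)
```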

  19. Controlling CH3NH3PbI(3-x)Cl(x) Film Morphology with Two-Step Annealing Method for Efficient Hybrid Perovskite Solar Cells.

    PubMed

    Liu, Dong; Wu, Lili; Li, Chunxiu; Ren, Shengqiang; Zhang, Jingquan; Li, Wei; Feng, Lianghuan

    2015-08-05

    The methylammonium lead halide perovskite solar cells have become very attractive because they can be prepared with low-cost, solution-processable technology, and their power conversion efficiency has increased from 3.9% to 20% in recent years. However, the high performance of perovskite photovoltaic devices depends on a complicated process to prepare compact perovskite films with large grain size. Herein, a new method is developed to achieve excellent CH3NH3PbI3-xClx films with fine morphology and crystallization, based on one-step deposition and a two-step annealing process. This method includes spin-coating deposition of the perovskite films from a precursor solution of PbI2, PbCl2, and CH3NH3I at a molar ratio of 1:1:4 in dimethylformamide (DMF), followed by two-step annealing (TSA). The first annealing is a solvent-induced process in DMF that promotes migration and interdiffusion of the solvent-assisted precursor ions and molecules and realizes large grain growth. The second annealing is a thermal-induced process that further improves the morphology and crystallization of the films. Compact perovskite films are successfully prepared with grain sizes up to 1.1 μm according to SEM observation. The PL decay lifetime and the optical energy gap for the film with two-step annealing are 460 ns and 1.575 eV, respectively, while they are 307 and 327 ns and 1.577 and 1.582 eV for the films annealed by one-step thermal and one-step solvent processes. On the basis of the TSA process, the photovoltaic devices exhibit a best efficiency of 14% under AM 1.5G irradiation (100 mW·cm(-2)).

  20. Monte Carlo modeling of single-molecule cytoplasmic dynein.

    PubMed

    Singh, Manoranjan P; Mallik, Roop; Gross, Steven P; Yu, Clare C

    2005-08-23

    Molecular motors are responsible for active transport and organization in the cell, underlying an enormous number of crucial biological processes. Dynein is more complicated in its structure and function than other motors. Recent experiments have found that, unlike other motors, dynein can take different size steps along microtubules depending on load and ATP concentration. We use Monte Carlo simulations to model the molecular motor function of cytoplasmic dynein at the single-molecule level. The theory relates dynein's enzymatic properties to its mechanical force production. Our simulations reproduce the main features of recent single-molecule experiments that found a discrete distribution of dynein step sizes, depending on load and ATP concentration. The model reproduces the large steps found experimentally under high ATP and no load by assuming that the ATP binding affinities at the secondary sites decrease as the number of ATP bound to these sites increases. Additionally, to capture the essential features of the step-size distribution at very low ATP concentration and no load, the ATP hydrolysis of the primary site must be dramatically reduced when none of the secondary sites have ATP bound to them. We make testable predictions that should guide future experiments related to dynein function.
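
    The load- and ATP-dependent step-size distribution described above can be caricatured with a toy Monte Carlo sampler. The step sizes below reflect the commonly reported discrete dynein steps (multiples of ~8 nm), but the weighting function is invented for this sketch and is not the paper's fitted kinetic model; it merely shows how biasing the weights toward larger steps at high ATP and low load shifts the sampled distribution.

```python
import random

# Discrete candidate step sizes in nm (multiples of the 8 nm tubulin repeat).
STEP_SIZES = [8, 16, 24, 32]

def step_weights(atp_saturation, load_fraction):
    """Illustrative weights only: larger steps become likelier as ATP
    saturation (0..1) rises and relative load (0..1) falls."""
    bias = atp_saturation * (1.0 - load_fraction)
    return [1.0 + bias * (s / STEP_SIZES[0] - 1.0) for s in STEP_SIZES]

def sample_steps(n, atp_saturation, load_fraction, seed=0):
    rng = random.Random(seed)
    w = step_weights(atp_saturation, load_fraction)
    return rng.choices(STEP_SIZES, weights=w, k=n)

high_atp = sample_steps(10000, atp_saturation=1.0, load_fraction=0.0)
low_atp = sample_steps(10000, atp_saturation=0.1, load_fraction=0.0)
```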

  1. Control Software for Piezo Stepping Actuators

    NASA Technical Reports Server (NTRS)

    Shields, Joel F.

    2013-01-01

    A control system has been developed for the Space Interferometer Mission (SIM) piezo stepping actuator. Piezo stepping actuators are novel because they offer extreme dynamic range (centimeter stroke with nanometer resolution) with power, thermal, mass, and volume advantages over existing motorized actuation technology. These advantages come with the added benefit of greatly reduced complexity in the support electronics. The piezo stepping actuator consists of three fully redundant sets of piezoelectric transducers (PZTs), two sets of brake PZTs, and one set of extension PZTs. These PZTs are used to grasp and move a runner attached to the optic to be moved. By proper cycling of the two brake and extension PZTs, both forward and backward moves of the runner can be achieved. Each brake can be configured for either a power-on or power-off state. For SIM, the brakes and gate of the mechanism are configured in such a manner that, at the end of the step, the actuator is in a parked or power-off state. The control software uses asynchronous sampling of an optical encoder to monitor the position of the runner. These samples are timed to coincide with the end of the previous move, which may consist of a variable number of steps. This sampling technique linearizes the device by avoiding input saturation of the actuator and makes latencies of the plant vanish. The software also estimates, in real time, the scale factor of the device and a disturbance caused by cycling of the brakes. These estimates are used to actively cancel the brake disturbance. The control system also includes feedback and feedforward elements that regulate the position of the runner to a given reference position. Convergence time for small- and medium-sized reference positions (less than 200 microns) to within 10 nanometers can be achieved in under 10 seconds. Convergence times for large moves (greater than 1 millimeter) are limited by the step rate.

  2. Outward Bound to the Galaxies--One Step at a Time

    ERIC Educational Resources Information Center

    Ward, R. Bruce; Miller-Friedmann, Jaimie; Sienkiewicz, Frank; Antonucci, Paul

    2012-01-01

    Less than a century ago, astronomers began to unlock the cosmic distances within and beyond the Milky Way. Understanding the size and scale of the universe is a continuing, step-by-step process that began with the remarkably accurate measurement of the distance to the Moon made by early Greeks. In part, the authors have ITEAMS (Innovative…

  3. Smart Hydrogel Particles: Biomarker Harvesting: One-step affinity purification, size exclusion, and protection against degradation

    PubMed Central

    Luchini, Alessandra; Geho, David H.; Bishop, Barney; Tran, Duy; Xia, Cassandra; Dufour, Robert; Jones, Clint; Espina, Virginia; Patanarut, Alexis; Zhu, Weidong; Ross, Mark; Tessitore, Alessandra; Petricoin, Emanuel; Liotta, Lance A.

    2010-01-01

    Disease-associated blood biomarkers exist in exceedingly low concentrations within complex mixtures of high-abundance proteins such as albumin. We have introduced an affinity bait molecule into N-isopropylacrylamide to produce a particle that will perform three independent functions within minutes, in one step, in solution: (a) molecular size sieving, (b) affinity capture of all solution-phase target molecules, and (c) complete protection of harvested proteins from enzymatic degradation. The captured analytes can be readily electroeluted for analysis. PMID:18076201

  4. Establishing intensively cultured hybrid poplar plantations for fuel and fiber.

    Treesearch

    Edward Hansen; Lincoln Moore; Daniel Netzer; Michael Ostry; Howard Phipps; Jaroslav Zavitkovski

    1983-01-01

    This paper describes a step-by-step procedure for establishing commercial size intensively cultured plantations of hybrid poplar and summarizes the state-of-knowledge as developed during 10 years of field research at Rhinelander, Wisconsin.

  5. Study on experimental characterization of carbon fiber reinforced polymer panel using digital image correlation: A sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Kashfuddoja, Mohammad; Prasath, R. G. R.; Ramji, M.

    2014-11-01

    In this work, the experimental characterization of a polymer matrix and a polymer-based carbon-fiber-reinforced composite laminate using the whole-field, non-contact digital image correlation (DIC) technique is presented. The properties are evaluated from full-field DIC measurements obtained in a series of tests per ASTM standards. The evaluated properties are compared with results from conventional testing and analytical models, and they are found to match closely. Further, the sensitivity of the material properties to the DIC parameters is investigated and their optimum values are identified. The subset size is found to have more influence on the material properties than the step size, and the predicted optimum values for both the matrix and the composite material are consistent with each other. The aspect ratio of the region of interest (ROI) chosen for correlation should match the aspect ratio of the camera resolution for better correlation. An open-cutout panel made of the same composite laminate is also considered to demonstrate the sensitivity of the DIC parameters in predicting the complex strain field surrounding the hole. The strain field surrounding the hole is observed to be much more sensitive to step size than to subset size. A lower step size produces a highly pixelated strain field, capturing local strain sensitivity at the expense of computational time and introducing a randomly scattered noisy pattern, whereas a higher step size mitigates the noisy pattern at the expense of losing details present in the data, and can even alter the natural trend of the strain field, leading to erroneous maximum-strain locations. Varying the subset size mainly produces a smoothing effect, eliminating noise from the strain field while maintaining the details in the data without altering their natural trend. However, increasing the subset size significantly reduces the strain data at the hole edge due to discontinuity in the correlation. The DIC results are also compared with FEA predictions to ascertain suitable values of the DIC parameters for better accuracy.

  6. Expression Levels of LCORL Are Associated with Body Size in Horses

    PubMed Central

    Metzger, Julia; Schrimpf, Rahel; Philipp, Ute; Distl, Ottmar

    2013-01-01

    Body size is an important characteristic for horses of various breeds and essential for the classification of ponies concerning the limit value of 148 cm (58.27 inches) height at the withers. Genome-wide association analyses revealed the highest associated quantitative trait locus for height at the withers on horse chromosome (ECA) 3 upstream of the candidate gene LCORL. Using 214 Hanoverian horses genotyped on the Illumina equine SNP50 BeadChip and 42 different horse breeds across all size ranges, we confirmed the highly associated single nucleotide polymorphism BIEC2-808543 (−log10P = 8.3) and the adjacent gene LCORL as the most promising candidate for body size. We investigated the relative expression levels of LCORL and its two neighbouring genes NCAPG and DCAF16 using quantitative real-time PCR (RT-qPCR). We could demonstrate a significant association of the relative LCORL expression levels with the size of the horses and the BIEC2-808543 genotypes within and across horse breeds. In heterozygous C/T-horses, expression levels of LCORL were significantly decreased by 40%, and in homozygous C/C-horses by 56%, relative to the smaller T/T-horses. Bioinformatic analyses indicated that this SNP T>C mutation disrupts a putative binding site of the transcription factor TFIID, which is important for the transcription of genes involved in skeletal bone development. Thus, our findings suggest that expression levels of LCORL play a key role for body size within and across horse breeds, and that regulation of LCORL expression is associated with genetic variants of BIEC2-808543. This is the first functional study of a body-size-regulating polymorphism in horses and a further step toward unravelling the genetic regulation of body size in horses. PMID:23418579

  7. Differential Effects of Monovalent Cations and Anions on Key Nanoparticle Attributes

    EPA Science Inventory

    Understanding the key particle attributes such as particle size, size distribution and surface charge of both the nano- and micron-sized particles is the first step in drug formulation as such attributes are known to directly influence several characteristics of drugs including d...

  8. Low-frequency radio constraints on the synchrotron cosmic web

    NASA Astrophysics Data System (ADS)

    Vernstrom, T.; Gaensler, B. M.; Brown, S.; Lenc, E.; Norris, R. P.

    2017-06-01

    We present a search for synchrotron emission from the cosmic web by cross-correlating 180-MHz radio images from the Murchison Widefield Array with tracers of large-scale structure (LSS). We use two versions of the radio image covering 21.76° × 21.76° with point sources brighter than 0.05 Jy subtracted, with and without filtering of Galactic emission. As tracers of the LSS, we use the Two Micron All-Sky Survey and the Wide-field InfraRed Explorer redshift catalogues to produce galaxy number density maps. The cross-correlation functions all show peak amplitudes at 0°, decreasing with varying slopes towards zero correlation over a range of 1°. The cross-correlation signals include components from point source, Galactic, and extragalactic diffuse emission. We use models of the diffuse emission from smoothing the density maps with Gaussians of sizes 1-4 Mpc to find limits on the cosmic web components. From these models, we find surface brightness 99.7 per cent upper limits in the range of 0.09-2.20 mJy beam⁻¹ (average beam size of 2.6 arcmin), corresponding to 0.01-0.30 mJy arcmin⁻². Assuming equipartition between the energy densities of cosmic rays and the magnetic field, the flux density limits translate to magnetic field strength limits of 0.03-1.98 μG, depending heavily on the spectral index. We conclude that for a 3σ detection of 0.1 μG magnetic field strengths via cross-correlations, image depths of sub-mJy to sub-μJy are necessary. We include discussion on the treatment and effect of extragalactic point sources and Galactic emission, and next steps for building on this work.

  9. Predictive evaluation of size restrictions as management strategies for tennessee reservoir crappie fisheries

    USGS Publications Warehouse

    Isermann, D.A.; Sammons, S.M.; Bettoli, P.W.; Churchill, T.N.

    2002-01-01

    We evaluated the potential effect of minimum size restrictions on crappies Pomoxis spp. in 12 large Tennessee reservoirs. A Beverton-Holt equilibrium yield model was used to predict and compare the response of these fisheries to three minimum size restrictions: 178 mm (i.e., pragmatically, no size limit), 229 mm, and the current statewide limit of 254 mm. The responses of crappie fisheries to size limits differed among reservoirs and varied with rates of conditional natural mortality (CM). Based on model results, crappie fisheries fell into one of three response categories: (1) In some reservoirs (N = 5), 254-mm and 229-mm limits would benefit the fishery in terms of yield if CM were low (30%); the associated declines in the number of crappies harvested would be significant but modest when compared with those in other reservoirs. (2) In other reservoirs (N = 6), little difference in yield existed among size restrictions at low to intermediate rates of CM (30-40%). In these reservoirs, a 229-mm limit was predicted to be a more beneficial regulation than the current 254-mm limit. (3) In the remaining reservoir, Tellico, size limits negatively affected all three harvest statistics. Generally, yield was negatively affected by size limits in all populations at a CM of 50%. The number of crappies reaching 300 mm was increased by size limits in most model scenarios: however, associated declines in the total number of crappies harvested often outweighed the benefits to size structure when CM was 40% or higher. When crappie growth was fast (reaching 254 mm in less than 3 years) and CM was low (30%), size limits were most effective in balancing increases in yield and size structure against declines in the total number of crappies harvested. 
The variability in predicted size-limit responses observed among Tennessee reservoirs suggests that using a categorical approach to applying size limits to crappie fisheries within a state or region would likely be a more effective management strategy than implementing a single, areawide regulation.
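
    The Beverton-Holt-style yield-per-recruit logic behind these comparisons can be sketched numerically: follow a cohort through time under natural mortality, let fishing mortality act only once fish pass the age corresponding to the minimum size, and accumulate the harvested weight. All parameter values below are hypothetical, and the sketch uses instantaneous rates (F, M) rather than the conditional mortality rates (CM) reported in the study.

```python
import math

# Sketch of a Beverton-Holt-style yield-per-recruit calculation with
# von Bertalanffy growth and allometric weight. Parameters are hypothetical
# illustrations, not the study's fitted values for Tennessee crappie stocks.

def yield_per_recruit(F, M, t_entry, t_max=10.0, Linf=380.0, k=0.35, t0=-0.5,
                      a_w=1e-5, b_w=3.0, dt=0.05):
    """Yield (weight units per recruit) when fish enter the fishery at age
    t_entry (the age at which they reach the minimum legal size)."""
    n, y, t = 1.0, 0.0, 0.0     # survivors per recruit, cumulative yield, age
    while t < t_max:
        length = Linf * (1.0 - math.exp(-k * (t - t0)))   # length, mm
        weight = a_w * length ** b_w                      # allometric weight
        if t >= t_entry:
            y += F * n * weight * dt        # catch taken in this interval
            n *= math.exp(-(F + M) * dt)    # fishing + natural mortality
        else:
            n *= math.exp(-M * dt)          # natural mortality only
        t += dt
    return y

# Comparing a low age-at-entry ("no size limit") with a delayed entry
# (a higher minimum size limit) under the same mortality rates:
ypr_early_entry = yield_per_recruit(F=0.5, M=0.3, t_entry=1.0)
ypr_late_entry = yield_per_recruit(F=0.5, M=0.3, t_entry=3.0)
```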

  10. A chaos wolf optimization algorithm with self-adaptive variable step-size

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To address the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with a self-adaptive variable step size was proposed. The algorithm is based on the swarm intelligence of a wolf pack, fully simulating the predation behavior and prey-distribution strategy of wolves. It comprises three intelligent behaviors: migration, summoning, and besieging. The "winner-take-all" competition rule and the "survival of the fittest" update mechanism further characterize the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical, complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm, and the leader wolf pack search algorithm. The results indicate that the CWOA possesses superior optimization ability, with advantages in optimization accuracy and convergence rate. Furthermore, it demonstrates high robustness and global searching ability.
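
    Two of the ingredients named above — chaotic candidate generation and a self-adaptive step size — can be illustrated in a greatly simplified form. The sketch below is an ordinary improvement-only search driven by a logistic chaos map, with a step size that expands after successes and contracts after failures; it is not the full wolf-pack algorithm (no migration, summoning, or besieging behaviors), and all constants are illustrative.

```python
import random

def sphere(x):
    """Simple benchmark objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def chaotic_adaptive_search(f, dim=5, iters=2000, seed=1):
    rng = random.Random(seed)
    c = 0.7                                     # logistic-map state in (0, 1)
    best = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    best_val = f(best)
    step = 1.0
    for _ in range(iters):
        cand = []
        for v in best:
            c = 4.0 * c * (1.0 - c)             # chaotic logistic map (r = 4)
            cand.append(v + step * (2.0 * c - 1.0))
        val = f(cand)
        if val < best_val:                      # "survival of the fittest"
            best, best_val = cand, val
            step *= 1.1                         # expand step after success
        else:
            step *= 0.95                        # contract step after failure
        step = min(max(step, 1e-9), 5.0)        # keep step in a sane range
    return best, best_val

best, best_val = chaotic_adaptive_search(sphere)
```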

  11. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model

    PubMed Central

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for efficient implementation of a Hodgkin-Huxley-based (H-H) neural network model on an FPGA (Field-Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which limits both network size and execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve this problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and increase the network size while keeping the network execution speed close to real time at high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models for investigating the effect of different neurophysiological mechanisms, such as voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. Together with inherent properties of FPGAs, such as parallelism and reconfigurability, our approach makes the FPGA-based system a proper candidate for studies of neural control of cognitive robots and systems as well. PMID:25484854
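
    The step-by-step integration mentioned above can be illustrated with a plain forward-Euler step of the classic Hodgkin-Huxley equations. This is a software sketch with textbook squid-axon parameters; the CORDIC-based hardware arithmetic is not reproduced here.

```python
import math

def hh_step(v, m, h, n, i_ext, dt=0.01):
    """One forward-Euler step of the Hodgkin-Huxley equations
    (v in mV, dt in ms; textbook squid-axon parameters)."""
    dv40, dv55 = v + 40.0, v + 55.0
    # alpha_m and alpha_n have removable singularities; use their limits.
    alpha_m = 1.0 if dv40 == 0.0 else 0.1 * dv40 / (1.0 - math.exp(-dv40 / 10.0))
    beta_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    alpha_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    beta_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    alpha_n = 0.1 if dv55 == 0.0 else 0.01 * dv55 / (1.0 - math.exp(-dv55 / 10.0))
    beta_n = 0.125 * math.exp(-(v + 65.0) / 80.0)

    g_na, g_k, g_l = 120.0, 36.0, 0.3      # conductances, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.4    # reversal potentials, mV

    i_na = g_na * m ** 3 * h * (v - e_na)
    i_k = g_k * n ** 4 * (v - e_k)
    i_l = g_l * (v - e_l)

    v_new = v + dt * (i_ext - i_na - i_k - i_l)   # C_m = 1 uF/cm^2
    m += dt * (alpha_m * (1.0 - m) - beta_m * m)
    h += dt * (alpha_h * (1.0 - h) - beta_h * h)
    n += dt * (alpha_n * (1.0 - n) - beta_n * n)
    return v_new, m, h, n

# Drive the neuron with a constant 10 uA/cm^2 current and look for a spike.
v, m, h, n = -65.0, 0.05, 0.6, 0.32
peak = v
for _ in range(5000):                              # 50 ms at dt = 0.01 ms
    v, m, h, n = hh_step(v, m, h, n, i_ext=10.0)
    peak = max(peak, v)
```

On hardware, each of these arithmetic operations becomes a fixed-point circuit, which is where CORDIC and resource sharing come in.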

  12. Efficient transformation of an auditory population code in a small sensory system.

    PubMed

    Clemens, Jan; Kutzki, Olaf; Ronacher, Bernhard; Schreiber, Susanne; Wohlgemuth, Sandra

    2011-08-16

    Optimal coding principles are implemented in many large sensory systems. They include the systematic transformation of external stimuli into a sparse and decorrelated neuronal representation, enabling a flexible readout of stimulus properties. Are these principles also applicable to size-constrained systems, which have to rely on a limited number of neurons and may only have to fulfill specific and restricted tasks? We studied this question in an insect system--the early auditory pathway of grasshoppers. Grasshoppers use genetically fixed songs to recognize mates. The first steps of neural processing of songs take place in a small three-layer feed-forward network comprising only a few dozen neurons. We analyzed the transformation of the neural code within this network. Indeed, grasshoppers create a decorrelated and sparse representation, in accordance with optimal coding theory. Whereas the neuronal input layer is best read out as a summed population, a labeled-line population code for temporal features of the song is established after only two processing steps. At this stage, information about song identity is maximal for a population decoder that preserves neuronal identity. We conclude that optimal coding principles do apply to the early auditory system of the grasshopper, despite its size constraints. The inputs, however, are not encoded in a systematic, map-like fashion as in many larger sensory systems. Already at its periphery, part of the grasshopper auditory system seems to focus on behaviorally relevant features and, in this respect, is more reminiscent of higher sensory areas in vertebrates.

  13. Elemental analysis of printed circuit boards considering the ROHS regulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wienold, Julia, E-mail: julia.wienold@bam.de; Recknagel, Sebastian, E-mail: sebastian.recknagel@bam.de; Scharf, Holger, E-mail: holger.scharf@bam.de

    2011-03-15

    The EU RoHS Directive (2002/95/EC of the European Parliament and of the Council) bans the placing of new electrical and electronic equipment containing more than agreed levels of lead, cadmium, mercury, hexavalent chromium, polybrominated biphenyl (PBB) and polybrominated diphenyl ether (PBDE) flame retardants on the EU market. It necessitates methods for the evaluation of RoHS compliance of assembled electronic equipment. In this study mounted printed circuit boards from personal computers were analyzed for their content of the three elements Cd, Pb and Hg, which are limited by the EU RoHS directive. The main focus of the investigations was the influence of sample pre-treatment on the precision and reproducibility of the results. The sample preparation steps used were based on the guidelines given in EN 62321. Five different types of dissolution procedures were tested on different subsequent steps of sample treatment like cutting and milling. Elemental analysis was carried out using ICP-OES, XRF and CV-AFS (Hg). The results obtained showed that for decision-making with respect to RoHS compliance a size reduction of the material to be analyzed to particles ≤1.5 mm can already be sufficient. However, to ensure analytical results with relative standard deviations of less than 20%, as recommended by the EN 62321, a much larger effort for sample processing towards smaller particle sizes might be required, which strongly depends on the mass fraction of the element under investigation.

  14. Temperature controlled formation of lead/acid batteries

    NASA Astrophysics Data System (ADS)

    Bungardt, M.

    At present, standard formation programs have to accommodate the worst case. This is important, especially in respect of variations in climatic conditions. The standard must be set so that during the hottest weather periods the maximum electrolyte temperature is not exceeded. As this value is defined not only by the desired properties and the recipe of the active mass, but also by the type and size of the separators and by the dimensions of the plates, general rules cannot be formulated. It is considered advantageous to introduce limiting data for the maximum temperature into a general formation program. The latter is defined so that under normal to good ambient conditions the shortest formation time is achieved. If required, the temperature control reduces the currents employed in the different steps and extends the formation time accordingly. With computer-controlled formation, these parameters can be readily adjusted to suit each type of battery and can also be reset according to modifications in the preceding processing steps. Such a procedure ensures that: (i) the formation time is minimum under the given ambient conditions; (ii) in the event of malpractice (e.g., the actual program not fitting the battery size) the batteries will not be destroyed; (iii) the energy consumption is minimized (note, high electrolyte temperature leads to excess gassing). These features are incorporated in the BA/FOS-500 battery formation system developed by Digatron. The operational characteristics of this system are listed in Table 1.
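
    The current-reduction logic described above can be sketched as a simple proportional taper above a temperature limit. The limit and gain values here are illustrative assumptions, not Digatron settings.

```python
def formation_current(temp_c, base_current_a, temp_limit_c=55.0, gain=0.05):
    """Run the programmed formation current while the electrolyte stays
    below the temperature limit, and taper it proportionally above it.
    temp_limit_c and gain are illustrative, not actual BA/FOS-500 values."""
    if temp_c <= temp_limit_c:
        return base_current_a
    # Reduce the current in proportion to the temperature excess,
    # which in turn extends the formation time for that step.
    reduction = min(1.0, gain * (temp_c - temp_limit_c))
    return base_current_a * (1.0 - reduction)
```

A formation controller would evaluate this at each step, trading current (and thus time) against the electrolyte temperature ceiling.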

  15. A multilayer concentric filter device to diminish clogging for separation of particles and microalgae based on size.

    PubMed

    Chen, Chih-Chung; Chen, Yu-An; Liu, Yi-Ju; Yao, Da-Jeng

    2014-04-21

    Microalgae species have great economic importance; they are a source of medicines, health foods, animal feeds, industrial pigments, cosmetic additives and biodiesel. Specific microalgae species collected from the environment must be isolated for examination and further application, but their varied size and culture conditions make their isolation using conventional methods, such as filtration, streaking plate and flow cytometric sorting, labour-intensive and costly. A separation device based on size is one of the most rapid, simple and inexpensive methods to separate microalgae, but this approach encounters major disadvantages of clogging and multiple filtration steps when the size of microalgae varies over a wide range. In this work, we propose a multilayer concentric filter device with varied pore sizes that is driven by centrifugal force. The device, which includes multiple filter layers, was employed to separate a heterogeneous population of microparticles into several subpopulations by filtration in one step. A cross-flow to attenuate prospective clogging was generated by instantly altering the rate of rotation through the relative motion between the fluid and the filter according to the structural design of the device. Mixed microparticles of varied size were tested to demonstrate that clogging was significantly suppressed, enabling highly efficient separation. Microalgae in a heterogeneous population collected from environmental soil were separated and enriched into four subpopulations according to size in a one-step filtration process. A microalgae sample contaminated with bacteria and insect eggs was also tested to prove the decontamination capability of the device.

  16. Study of mesoporous CdS-quantum-dot-sensitized TiO2 films by using X-ray photoelectron spectroscopy and AFM

    PubMed Central

    Wojcieszak, Robert; Raj, Gijo

    2014-01-01

    CdS quantum dots were grown on mesoporous TiO2 films by successive ionic layer adsorption and reaction processes in order to obtain CdS particles of various sizes. AFM analysis shows that the growth of the CdS particles is a two-step process: new crystallites form at each deposition cycle, and the pre-deposited crystallites then grow to form larger aggregates. Special attention is paid to the estimation of the CdS particle size by X-ray photoelectron spectroscopy (XPS), and the XPS model is described in detail alongside the classical characterization methods. To validate the XPS model, the results are compared with those obtained from AFM analysis and with the evolution of the band gap energy of the CdS nanoparticles as obtained by UV-vis spectroscopy. The results show that XPS is a powerful tool for estimating the CdS particle size. Moreover, a very good correlation was found between the number of deposition cycles and the particle size. PMID:24605274

  17. Modeling solute clustering in the diffusion layer around a growing crystal.

    PubMed

    Shiau, Lie-Ding; Lu, Yung-Fang

    2009-03-07

    The mechanism of crystal growth from solution is often thought to consist of a mass transfer diffusion step followed by a surface reaction step. Solute molecules might form clusters in the diffusion step before incorporating into the crystal lattice. A model is proposed in this work to simulate the evolution of the cluster size distribution due to the simultaneous aggregation and breakage of solute molecules in the diffusion layer around a growing crystal in the stirred solution. The crystallization of KAl(SO4)2·12H2O from aqueous solution is studied to illustrate the effect of supersaturation and diffusion layer thickness on the number-average degree of clustering and the size distribution of solute clusters in the diffusion layer.
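
    The aggregation side of such a model can be illustrated with one explicit-Euler step of a discrete Smoluchowski population balance with a constant kernel. This is a sketch only; the paper's model also includes breakage and the diffusion-layer geometry, which are omitted here.

```python
def smoluchowski_step(n, k_agg=1.0, dt=0.001):
    """One explicit-Euler step of the discrete Smoluchowski aggregation
    equation with a constant kernel; n[i] is the number density of
    clusters containing i + 1 solute molecules."""
    size = len(n)
    dn = [0.0] * size
    for i in range(size):
        for j in range(size):
            rate = k_agg * n[i] * n[j] * dt   # ordered pair (i, j)
            dn[i] -= rate                     # the i-mer is consumed
            if i + j + 1 < size:
                dn[i + j + 1] += 0.5 * rate   # the merged cluster appears
    return [ni + di for ni, di in zip(n, dn)]

# Start from monomers only and let clusters build up over time.
n = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
for _ in range(100):
    n = smoluchowski_step(n)
```

The 0.5 factor avoids double-counting each unordered collision pair, so total mass is conserved until clusters outgrow the truncated size range.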

  18. 11S Storage globulin from pumpkin seeds: regularities of proteolysis by papain.

    PubMed

    Rudakova, A S; Rudakov, S V; Kakhovskaya, I A; Shutov, A D

    2014-08-01

    Limited proteolysis of the α- and β-chains and deep cleavage of the αβ-subunits by the cooperative (one-by-one) mechanism were observed in the course of papain hydrolysis of cucurbitin, an 11S storage globulin from seeds of the pumpkin Cucurbita maxima. An independent analysis of the kinetics of the limited and cooperative proteolyses revealed that the reaction occurs in two successive steps. In the first step, limited proteolysis consisting of detachments of short terminal peptides from the α- and β-chains was observed. The cooperative proteolysis, which occurs as a pseudo-first order reaction, started at the second step. Therefore, the limited proteolysis at the first step plays a regulatory role, impacting the rate of deep degradation of cucurbitin molecules by the cooperative mechanism. Structural alterations of cucurbitin induced by limited proteolysis are suggested to generate its susceptibility to cooperative proteolysis. These alterations are tentatively discussed on the basis of the tertiary structure of the cucurbitin subunit pdb|2EVX in comparison with previously obtained data on features of degradation of soybean 11S globulin hydrolyzed by papain.

  19. The magical number 4 in short-term memory: a reconsideration of mental storage capacity.

    PubMed

    Cowan, N

    2001-02-01

    Miller (1956) summarized evidence that people can remember about seven chunks in short-term memory (STM) tasks. However, that number was meant more as a rough estimate and a rhetorical device than as a real capacity limit. Others have since suggested that there is a more precise capacity limit, but that it is only three to five chunks. The present target article brings together a wide variety of data on capacity limits suggesting that the smaller capacity limit is real. Capacity limits will be useful in analyses of information processing only if the boundary conditions for observing them can be carefully described. Four basic conditions in which chunks can be identified and capacity limits can accordingly be observed are: (1) when information overload limits chunks to individual stimulus items, (2) when other steps are taken specifically to block the recoding of stimulus items into larger chunks, (3) in performance discontinuities caused by the capacity limit, and (4) in various indirect effects of the capacity limit. Under these conditions, rehearsal and long-term memory cannot be used to combine stimulus items into chunks of an unknown size; nor can storage mechanisms that are not capacity-limited, such as sensory memory, allow the capacity-limited storage mechanism to be refilled during recall. A single, central capacity limit averaging about four chunks is implicated along with other, noncapacity-limited sources. The pure STM capacity limit expressed in chunks is distinguished from compound STM limits obtained when the number of separately held chunks is unclear. Reasons why pure capacity estimates fall within a narrow range are discussed and a capacity limit for the focus of attention is proposed.

  20. One-Step Synthesis of Water-Soluble MoS2 Quantum Dots via a Hydrothermal Method as a Fluorescent Probe for Hyaluronidase Detection.

    PubMed

    Gu, Wei; Yan, Yinghan; Zhang, Cuiling; Ding, Caiping; Xian, Yuezhong

    2016-05-11

    In this work, a bottom-up strategy is developed to synthesize water-soluble molybdenum disulfide quantum dots (MoS2 QDs) through a simple, one-step hydrothermal method using ammonium tetrathiomolybdate [(NH4)2MoS4] as the precursor and hydrazine hydrate as the reducing agent. The as-synthesized MoS2 QDs are few-layered with a narrow size distribution, and the average diameter is about 2.8 nm. The resultant QDs show excitation-dependent blue fluorescence due to the polydispersity of the QDs. Moreover, the fluorescence can be quenched by hyaluronic acid (HA)-functionalized gold nanoparticles through a photoinduced electron-transfer mechanism. Hyaluronidase (HAase), an endoglucosidase, can cleave HA into proangiogenic fragments and lead to the aggregation of gold nanoparticles. As a result, the electron transfer is blocked and fluorescence is recovered. On the basis of this principle, a novel fluorescence sensor for HAase is developed with a linear range from 1 to 50 U/mL and a detection limit of 0.7 U/mL.
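
    The reported detection limit rests on a linear fluorescence calibration; the abstract does not give the exact procedure, so the generic k-sigma (usually 3-sigma) formula is sketched below with hypothetical inputs.

```python
def detection_limit(slope, blank_sd, k=3.0):
    """Generic k-sigma limit of detection for a linear calibration:
    LOD = k * (standard deviation of the blank signal) / slope.
    This is the common convention, not necessarily the paper's method."""
    return k * blank_sd / slope
```

For example, a hypothetical calibration slope of 3.0 fluorescence units per U/mL with a blank noise of 0.7 units would give a 3-sigma LOD of 0.7 U/mL.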

  1. Thermal energy management process experiment

    NASA Technical Reports Server (NTRS)

    Ollendorf, S.

    1984-01-01

    The thermal energy management processes experiment (TEMP) will demonstrate that through the use of two-phase flow technology, thermal systems can be significantly enhanced by increasing heat transport capabilities at reduced power consumption while operating within narrow temperature limits. It has been noted that such phenomena as excess fluid puddling, priming, stratification, and surface tension effects all tend to mask the performance of two-phase flow systems in a 1-g field. The flight experiment approach would be to attach the experiment to an appropriate mounting surface with a 15 to 20 meter effective length and provide a heat input and output station in the form of heaters and a radiator. Using environmental data, the size, location, and orientation of the experiment can be optimized. The approach would be to provide a self-contained panel and mount it to the STEP through a frame. A small electronics package would be developed to interface with the STEP avionics for command and data handling. During the flight, heaters on the evaporator will be exercised to determine performance. Flight data will be evaluated against the ground tests to determine any anomalous behavior.

  2. Drawing causal inferences using propensity scores: a practical guide for community psychologists.

    PubMed

    Lanza, Stephanie T; Moore, Julia E; Butera, Nicole M

    2013-12-01

    Confounding present in observational data impedes community psychologists' ability to draw causal inferences. This paper describes propensity score methods as a conceptually straightforward approach to drawing causal inferences from observational data. A step-by-step demonstration of three propensity score methods (weighting, matching, and subclassification) is presented in the context of an empirical examination of the causal effect of preschool experiences (Head Start vs. parental care) on reading development in kindergarten. Although the unadjusted population estimate indicated that children with parental care had substantially higher reading scores than children who attended Head Start, all propensity score adjustments reduce the size of this overall causal effect by more than half. The causal effect was also defined and estimated among children who attended Head Start. Results provide no evidence for improved reading if those children had instead received parental care. We carefully define different causal effects and discuss their respective policy implications, summarize advantages and limitations of each propensity score method, and provide SAS and R syntax so that community psychologists may conduct causal inference in their own research.
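
    The paper provides SAS and R syntax; as a hedged Python analogue, the weighting method can be sketched as inverse-probability-of-treatment weights computed from already-estimated propensity scores (the score model itself, e.g. a logistic regression on the confounders, is assumed to have been fit beforehand).

```python
def iptw_weights(treated, propensity, estimand="ATE"):
    """Inverse-probability-of-treatment weights from estimated
    propensity scores; a sketch of the weighting method only."""
    weights = []
    for t, p in zip(treated, propensity):
        if estimand == "ATE":
            # Weight every unit to represent the whole population.
            weights.append(1.0 / p if t else 1.0 / (1.0 - p))
        else:
            # "ATT": treated units keep weight 1; controls are
            # reweighted to resemble the treated group.
            weights.append(1.0 if t else p / (1.0 - p))
    return weights
```

A weighted mean outcome difference between groups under these weights then estimates the corresponding causal effect, assuming no unmeasured confounding.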

  4. Enhanced production of lovastatin by Omphalotus olearius (DC.) Singer in solid state fermentation.

    PubMed

    Atlı, Burcu; Yamaç, Mustafa; Yıldız, Zeki; Isikhuemnen, Omoanghe S

    2015-01-01

    Although lovastatin production has been reported for different microorganism species, there is limited information about lovastatin production by basidiomycetes. The optimization of culture parameters that enhance lovastatin production by Omphalotus olearius OBCC 2002 was investigated under solid state fermentation, using statistically based experimental designs. The Plackett-Burman design was used in the first step to test the relative importance of the variables affecting lovastatin production. The amount and particle size of barley were identified as the influential variables. In the second step, the interactive effects of these variables were studied with a full factorial design. A maximum lovastatin yield of 139.47 mg/g substrate was achieved by the fermentation of 5 g of barley, 1-2 mm particle diameter, at 28°C. This study showed that O. olearius OBCC 2002 has a high capacity for lovastatin production, which could be enhanced by using solid state fermentation with novel and cost-effective substrates, such as barley. Copyright © 2013 Revista Iberoamericana de Micología. Published by Elsevier Espana. All rights reserved.

  5. A two-step framework for reconstructing remotely sensed land surface temperatures contaminated by cloud

    NASA Astrophysics Data System (ADS)

    Zeng, Chao; Long, Di; Shen, Huanfeng; Wu, Penghai; Cui, Yaokui; Hong, Yang

    2018-07-01

    Land surface temperature (LST) is one of the most important parameters in land surface processes. Although satellite-derived LST can provide valuable information, its value is often limited by cloud contamination. In this paper, a two-step satellite-derived LST reconstruction framework is proposed. First, a multi-temporal reconstruction algorithm is introduced to recover invalid LST values using multiple LST images with reference to the corresponding remotely sensed vegetation index; all cloud-contaminated areas are thereby temporally filled with hypothetical clear-sky LST values. Second, a procedure based on the surface energy balance equation is used to correct the filled values: with shortwave irradiation data, the clear-sky LST is corrected to the real LST under cloudy conditions. A series of experiments was performed to demonstrate the effectiveness of the developed approach. Quantitative evaluation indicates that the proposed method can recover LST over different surface types with mean errors of 3-6 K. The experiments also indicate that the time interval between the multi-temporal LST images has a greater impact on the results than the size of the contaminated area.
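
    A toy sketch of the two-step idea, assuming a clear reference date and a shortwave-irradiation ratio are available for the cloudy pixel. The vegetation-index slope and the correction coefficient are invented for illustration and are not the paper's regression coefficients.

```python
def reconstruct_lst(ref_lst_k, ref_ndvi, target_ndvi, sw_ratio,
                    ndvi_slope=-5.0, gamma=10.0):
    """Step 1: estimate a hypothetical clear-sky LST for the cloudy pixel
    from a clear reference date, shifted by the vegetation-index change.
    Step 2: correct it toward cloudy-sky conditions using the shortwave
    irradiation deficit (sw_ratio = actual / clear-sky shortwave).
    ndvi_slope (K per NDVI unit) and gamma (K) are illustrative only."""
    clear_sky = ref_lst_k + ndvi_slope * (target_ndvi - ref_ndvi)  # step 1
    return clear_sky - gamma * (1.0 - sw_ratio)                    # step 2
```

With full irradiation (`sw_ratio = 1`) the estimate stays at the clear-sky value; heavier cloud cover pulls the reconstructed LST down.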

  6. A hierarchically honeycomb-like carbon via one-step surface and pore adjustment with superior capacity for lithium-oxygen batteries

    NASA Astrophysics Data System (ADS)

    Li, Jing; Zhang, Yining; Zhou, Wei; Nie, Hongjiao; Zhang, Huamin

    2014-09-01

    Li-O2 batteries have attracted considerable attention due to their high energy density. The critical challenges that limit practical applications include effective utilization of electrode space for solid-product deposition and acceptable cycling performance. In the present work, a nitrogen-doped micron-sized honeycomb-like carbon is developed for use as a cathode material for Li-O2 batteries. This novel material is obtained by using nano-CaCO3 particles as a hard template and sucrose as the carbon source, followed by thermal annealing at 800 °C in ammonia. With one-step ammonia activation, surface nitrogenation and pore structure optimization are realized simultaneously. The material exhibits enhanced activity for the oxygen reduction reaction and enhanced oxygen transfer ability. Surprisingly, an improved cycling stability is also obtained. As a result, a superior discharge capacity up to 12,600 mAh g-1 is achieved, about 4 times that of commercial Ketjenblack carbon. The results provide a novel route to construct effective non-metal carbon-based cathodes for high-performance Li-O2 batteries.

  7. Is the size of the useful field of view affected by postural demands associated with standing and stepping?

    PubMed

    Reed-Jones, James G; Reed-Jones, Rebecca J; Hollands, Mark A

    2014-04-30

    The useful field of view (UFOV) is the visual area from which information is obtained at a brief glance. While studies have examined the effects of increased cognitive load on the visual field, no study has specifically examined the effects of postural control or locomotor activity on the UFOV. The current study aimed to examine the effects of postural demand and locomotor activity on UFOV performance in healthy young adults. Eleven participants were tested on three modified UFOV tasks (central processing, peripheral processing, and divided-attention) while seated, standing, and stepping in place. Across all postural conditions, participants showed no difference in their central or peripheral processing. However, in the divided-attention task (reporting the letter in central vision and the target location in peripheral vision amongst distracter items) a main effect of posture condition on peripheral target accuracy was found for targets at 57° of eccentricity (p=.037). Mean accuracy decreased from 80.5% (standing) to 74% (seated) to 56.3% (stepping). These findings show that postural demands do affect UFOV divided-attention performance. In particular, the size of the useful field of view significantly decreases when stepping. This finding has important implications for how the results of a UFOV test are used to evaluate the general size of the UFOV during varying activities, as the traditional seated test procedure may overestimate the size of the UFOV during locomotor activities. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. The occupational exposure limit for fluid aerosol generated in metalworking operations: limitations and recommendations.

    PubMed

    Park, Donguk

    2012-03-01

    The aim of this review was to assess current knowledge related to the occupational exposure limit (OEL) for fluid aerosols, including either mineral or chemical oil, that are generated in metalworking operations, and to discuss whether their OEL can be appropriately used to prevent several health risks that may vary among metalworking fluid (MWF) types. The OEL (time-weighted average: 5 mg/m3; short-term exposure limit: 15 mg/m3) has been applied to MWF aerosols without consideration of different fluid aerosol-size fractions. The OEL is also based on the assumption that there are no significant differences in risk among fluid types, which may be contentious. In particular, the health risks from exposure to water-soluble fluids may not have been sufficiently considered. Although adoption of the National Institute for Occupational Safety and Health's recommended exposure limit for MWF aerosol (0.5 mg/m3) would be an effective step towards minimizing and evaluating the upper respiratory irritation that may be caused by neat or diluted MWF, this would fail to address the hazards (e.g., asthma and hypersensitivity pneumonitis) caused by microbial contaminants generated only by the use of water-soluble fluids. The absence of an OEL for the water-soluble fluids used in approximately 80-90% of all applications may result in limitations of the protection from health risks caused by exposure to those fluids.
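
    The time-weighted-average limits quoted above rest on standard TWA arithmetic, sketched below for an 8-hour shift:

```python
def eight_hour_twa(samples):
    """8-h time-weighted average from (concentration in mg/m3,
    duration in hours) pairs, compared against a TWA OEL such as
    the 5 mg/m3 limit discussed above."""
    return sum(c * t for c, t in samples) / 8.0
```

For instance, 4 h at 6 mg/m3 followed by 4 h at 2 mg/m3 yields a TWA of 4 mg/m3, which would comply with a 5 mg/m3 limit despite the first half-shift exceeding it.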

  9. 16 CFR 642.3 - Prescreen opt-out notice.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...

  10. 16 CFR 642.3 - Prescreen opt-out notice.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...

  11. 16 CFR 642.3 - Prescreen opt-out notice.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...

  12. Influence of sequence and size of DNA on packaging efficiency of parvovirus MVM-based vectors.

    PubMed

    Brandenburger, A; Coessens, E; El Bakkouri, K; Velu, T

    1999-05-01

    We have derived a vector from the autonomous parvovirus MVM(p), which expresses human IL-2 specifically in transformed cells (Russell et al., J. Virol. 1992;66:2821-2828). Testing the therapeutic potential of these vectors in vivo requires high-titer stocks. Stocks with a titer of 10^9 can be obtained after concentration and purification (Avalosse et al., J. Virol. Methods 1996;62:179-183), but this method requires large culture volumes and cannot easily be scaled up. We wanted to increase the production of recombinant virus at the initial transfection step. Poor vector titers could be due to inadequate genome amplification or to inefficient packaging. Here we show that intracellular amplification of MVM vector genomes is not the limiting factor for vector production. Several vector genomes of different size and/or structure were amplified to an equal extent. Their amplification was also equivalent to that of a cotransfected wild-type genome. We did not observe any interference between vector and wild-type genomes at the level of DNA amplification. Despite equivalent genome amplification, vector titers varied greatly between the different genomes, presumably owing to differences in packaging efficiency. Genomes with a size close to 100% that of wild type were packaged most efficiently, with loss of efficiency at lower and higher sizes. However, certain genomes of identical size showed different packaging efficiencies, illustrating the importance of the DNA sequence, and probably its structure.

  13. Comparing two books and establishing probably efficacious treatment for low sexual desire.

    PubMed

    Balzer, Alexandra M; Mintz, Laurie B

    2015-04-01

    Using a sample of 45 women, this study compared the effectiveness of a previously studied (Mintz, Balzer, Zhao, & Bush, 2012) bibliotherapy intervention (Mintz, 2009), a similar self-help book (Hall, 2004), and a wait-list control (WLC) group. To examine intervention effectiveness, between- and within-group standardized effect sizes (interpreted with Cohen's (1988) benchmarks: .20 = small, .50 = medium, .80+ = large) and their confidence limits are used. In comparison to the WLC group, both interventions yielded large between-group posttest effect sizes on a measure of sexual desire. Additionally, large between-group posttest effect sizes were found for sexual satisfaction and lubrication among those reading the Mintz book. When examining within-group pretest to posttest effect sizes, medium to large effects were found for desire, lubrication, and orgasm for both books and for satisfaction and arousal for those reading the Mintz book. When directly comparing the books, all between-group posttest effect sizes were likely obtained by chance. It is concluded that both books are equally effective in terms of the outcome of desire, but whether or not there is differential efficacy in terms of other domains of sexual functioning is equivocal. Tentative evidence is provided for the longer term effectiveness of both books in enhancing desire. Arguing for applying criteria for empirically supported treatments to self-help, results are purported to establish the Mintz book as probably efficacious and to comprise a first step in this designation for the Hall book. (c) 2015 APA, all rights reserved.
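
    The between-group standardized effect sizes interpreted with Cohen's benchmarks can be computed as a pooled-SD Cohen's d; the numbers below are hypothetical, not the study's data.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Between-group standardized effect size: Cohen's d with a pooled
    standard deviation, read against the .20/.50/.80 benchmarks."""
    pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                       / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

# Hypothetical posttest desire scores: intervention group vs WLC group.
d = cohens_d(5.0, 4.0, 1.0, 1.0, 15, 15)
```

A d of 1.0 here would count as a large effect under Cohen's benchmarks.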

  14. Dislocation-induced Charges in Quantum Dots: Step Alignment and Radiative Emission

    NASA Technical Reports Server (NTRS)

    Leon, R.; Okuno, J.; Lawton, R.; Stevens-Kalceff, M.; Phillips, M.; Zou, J.; Cockayne, D.; Lobo, C.

    1999-01-01

    A transition between two types of step alignment was observed in a multilayered InGaAs/GaAs quantum-dot (QD) structure. A change to larger QD sizes in smaller concentrations occurred after formation of a dislocation array.

  15. Solar kerosene from H2O and CO2

    NASA Astrophysics Data System (ADS)

    Furler, P.; Marxer, D.; Scheffe, J.; Reinalda, D.; Geerlings, H.; Falter, C.; Batteiger, V.; Sizmann, A.; Steinfeld, A.

    2017-06-01

    The entire production chain for renewable kerosene obtained directly from sunlight, H2O, and CO2 is experimentally demonstrated. The key component of the production process is a high-temperature solar reactor containing a reticulated porous ceramic (RPC) structure made of ceria, which enables the splitting of H2O and CO2 via a 2-step thermochemical redox cycle. In the 1st reduction step, ceria is endothermally reduced using concentrated solar radiation as the energy source of process heat. In the 2nd oxidation step, nonstoichiometric ceria reacts with H2O and CO2 to form H2 and CO - syngas - which is finally converted into kerosene by the Fischer-Tropsch process. The RPC featured dual-scale porosity for enhanced heat and mass transfer: mm-size pores for volumetric radiation absorption during the reduction step and μm-size pores within its struts for fast kinetics during the oxidation step. We report on the engineering design of the solar reactor and the experimental demonstration of over 290 consecutive redox cycles for producing high-quality syngas suitable for the processing of liquid hydrocarbon fuels.
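
    The syngas output per cycle follows from the redox stoichiometry: reduction creates delta oxygen vacancies per mole of ceria, and reoxidation by H2O/CO2 refills them, yielding one mole of H2 or CO per vacancy. A minimal mass-balance sketch under that stoichiometry (the delta and feed-split values are hypothetical, not from the reactor runs):

```python
def syngas_yield_mol(n_ceria_mol, delta, co2_fraction):
    """Moles of (H2, CO) per cycle, from:
    reduction:  CeO2 -> CeO2-delta + (delta/2) O2
    oxidation:  CeO2-delta + delta (H2O/CO2) -> CeO2 + delta (H2/CO)."""
    total = n_ceria_mol * delta    # oxygen vacancies to refill
    co = total * co2_fraction      # share refilled by CO2, yielding CO
    h2 = total - co                # remainder refilled by H2O, yielding H2
    return h2, co

# Hypothetical: 10 mol ceria, delta = 0.05, equal H2O/CO2 oxidant split.
h2, co = syngas_yield_mol(10.0, 0.05, 0.5)
```

    Adjusting the H2O/CO2 split tunes the H2:CO ratio of the syngas, which matters for the downstream Fischer-Tropsch step.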

  16. Squamate hatchling size and the evolutionary causes of negative offspring size allometry.

    PubMed

    Meiri, S; Feldman, A; Kratochvíl, L

    2015-02-01

    Although fecundity selection is ubiquitous, in an overwhelming majority of animal lineages, small species produce smaller numbers of offspring per clutch. In this context, egg, hatchling and neonate sizes are absolutely larger, but smaller relative to adult body size, in larger species. The evolutionary causes of this widespread phenomenon are not fully explored. The negative offspring size allometry can result from processes limiting maximal egg/offspring size, forcing larger species to produce relatively smaller offspring ('upper limit'), or from a limit on minimal egg/offspring size, forcing smaller species to produce relatively larger offspring ('lower limit'). Several reptile lineages have invariant clutch sizes, where females always lay either one or two eggs per clutch. These lineages offer an interesting perspective on the general evolutionary forces driving negative offspring size allometry, because an important selective factor, fecundity selection in a single clutch, is eliminated here. Under the upper limit hypotheses, large offspring should be selected against in lineages with invariant clutch sizes as well, and these lineages should therefore exhibit the same, or shallower, offspring size allometry as lineages with variable clutch size. On the other hand, the lower limit hypotheses would allow lineages with invariant clutch sizes to have steeper offspring size allometries. Using an extensive data set on the hatchling and female sizes of > 1800 species of squamates, we document that negative offspring size allometry is widespread in lizards and snakes with variable clutch sizes and that some lineages with invariant clutch sizes have unusually steep offspring size allometries. These findings suggest that the negative offspring size allometry is driven by a constraint on minimal offspring size, which scales with a negative allometry. © 2014 European Society For Evolutionary Biology.
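
    The allometry itself is the slope b of a log-log regression of hatchling size on female size (hatchling = a · female^b); b < 1 is negative allometry. A sketch of that fit with hypothetical snout-vent lengths, not the study's data set:

```python
import numpy as np

# Hypothetical female and hatchling snout-vent lengths (mm), illustration only.
female_svl = np.array([60.0, 90.0, 140.0, 220.0, 350.0, 560.0])
hatchling_svl = np.array([28.0, 36.0, 47.0, 61.0, 80.0, 104.0])

# hatchling = a * female**b  =>  log(hatchling) = log(a) + b * log(female)
b, log_a = np.polyfit(np.log(female_svl), np.log(hatchling_svl), 1)
negative_allometry = b < 1.0  # offspring relatively smaller in larger species
```

    For these illustrative numbers b ≈ 0.6, i.e., hatchlings of larger species are absolutely larger but relatively smaller, matching the pattern described above.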

  17. The unique authority of state and local health departments to address obesity.

    PubMed

    Pomeranz, Jennifer L

    2011-07-01

    The United States has 51 state health departments and thousands of local health agencies. Their size, structure, and authority differ, but they all possess unique abilities to address obesity. Because they are responsible for public health, they can take various steps themselves and can coordinate efforts with other agencies to further health in all policy domains. I describe the value of health agencies' rule-making authority and clarify this process through 2 case studies involving menu-labeling regulations. I detail rule-making procedures and examine the legal and practical limitations on agency activity. Health departments have many options to effect change in the incidence of obesity but need the support of other government entities and officials.

  18. Identification of small ORFs in vertebrates using ribosome footprinting and evolutionary conservation

    PubMed Central

    Bazzini, Ariel A; Johnstone, Timothy G; Christiano, Romain; Mackowiak, Sebastian D; Obermayer, Benedikt; Fleming, Elizabeth S; Vejnar, Charles E; Lee, Miler T; Rajewsky, Nikolaus; Walther, Tobias C; Giraldez, Antonio J

    2014-01-01

    Identification of the coding elements in the genome is a fundamental step to understanding the building blocks of living systems. Short peptides (< 100 aa) have emerged as important regulators of development and physiology, but their identification has been limited by their size. We have leveraged the periodicity of ribosome movement on the mRNA to define actively translated ORFs by ribosome footprinting. This approach identifies several hundred translated small ORFs in zebrafish and human. Computational prediction of small ORFs from codon conservation patterns corroborates and extends these findings and identifies conserved sequences in zebrafish and human, suggesting functional peptide products (micropeptides). These results identify micropeptide-encoding genes in vertebrates, providing an entry point to define their function in vivo. PMID:24705786
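
    The periodicity signal behind this approach is that footprints from a translated ORF accumulate in one of the three reading frames. A toy sketch of that frame-bias calculation (the positions are invented, and a real pipeline would first apply P-site offsets to the reads):

```python
from collections import Counter

def frame_fractions(read_5p_ends, orf_start):
    """Fraction of footprint 5' ends in each reading frame relative to orf_start."""
    counts = Counter((pos - orf_start) % 3 for pos in read_5p_ends)
    total = sum(counts.values())
    return {frame: counts.get(frame, 0) / total for frame in (0, 1, 2)}

# Invented footprint 5' ends; a translated ORF piles up in frame 0.
bias = frame_fractions([100, 103, 106, 109, 112, 101, 115, 118], orf_start=100)
```

    A strong skew toward one frame (here 7 of 8 reads in frame 0) is the kind of evidence used to call an ORF actively translated.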

  19. International Launch Vehicle Selection for Interplanetary Travel

    NASA Technical Reports Server (NTRS)

    Ferrone, Kristine; Nguyen, Lori T.

    2010-01-01

    In developing a mission strategy for interplanetary travel, the first step is to consider launch capabilities, which provide the basis for fundamental parameters of the mission. This investigation focuses on the numerous launch vehicles available and in development internationally, characterized with respect to upmass, launch site, payload shroud size, fuel type, cost, and launch frequency. This presentation will describe launch vehicles available and in development worldwide, then detail a selection process for choosing appropriate vehicles for interplanetary missions focusing on international collaboration, risk management, and minimization of cost. The vehicles that fit the established criteria will be discussed in detail with emphasis on the specifications and limitations related to interplanetary travel. The final menu of options will include recommendations for overall mission design and strategy.

  20. The Unique Authority of State and Local Health Departments to Address Obesity

    PubMed Central

    2011-01-01

    The United States has 51 state health departments and thousands of local health agencies. Their size, structure, and authority differ, but they all possess unique abilities to address obesity. Because they are responsible for public health, they can take various steps themselves and can coordinate efforts with other agencies to further health in all policy domains. I describe the value of health agencies' rule-making authority and clarify this process through 2 case studies involving menu-labeling regulations. I detail rule-making procedures and examine the legal and practical limitations on agency activity. Health departments have many options to effect change in the incidence of obesity but need the support of other government entities and officials. PMID:21566027

  1. Weight-watching at the university: the consequences of growth.

    PubMed

    Gallant, J A; Prothero, J W

    1972-01-28

    We began by pointing out that tools (for example) have size optima that are dictated by function. If we assume that the university has a function, it would seem reasonable to think about the size which will serve that function best. The principle of size optimization is fundamental, but its application to the university at once encounters a difficulty: What is the function of a university? It might take forever to secure general agreement on the answer to this question. The problem is that universities have a number of different functions, to which different individuals will attach different weights, and each function may well have a unique size optimum. Just as it is, in general, mathematically impossible to maximize simultaneously for two different functions of the same variable (29), so it is unsound to conceive of a single optimum for the multiversity. Nonetheless, a range of workable sizes may be defined by analyzing the effect of variation in size on all essential functions. The examples from biological systems illustrate this approach. Cells exist in a variety of sizes, each size presumably representing an optimization to one or another set of constraints, yet there are upper bounds. There are no cells the size of basketballs because essential metabolic functions are limited by the surface-to-volume ratio. We must emphasize that one does not need a grand theory of life in order to identify this limiting condition. If cells could talk, they would no doubt differ on the general philosophy of being a cell, yet all conceptions would be subject to certain physically inevitable limitations on size. In the case of the university, no grand theory of education is needed in order to identify dysfunctions of growth that affect essential activities (for example, the diffusion of individuals through, in, and out of the university) or that affect all activities (for example, overall morale). 
Balanced against these dysfunctions are such advantages of growth as economy, the achievement of a critical mass, and flexibility in staffing. Our analysis of data from the California system indicates that unit costs of education decline very little above a size of 10,000 or 15,000 students. Moreover, the critical mass for departmental excellence, at least in terms of the ACE ratings of graduate departments, is achieved by a university of about this size. Growth beyond this size range continues to provide flexibility in staffing and spares administrators the trouble of having to make difficult decisions. At the same time, the dysfunctions attendant on growth become steadily more severe. Our impression is that the dysfunctions have not been seriously considered, while the advantages have been greatly oversold. The idea of dysfunctional growth, although fundamental in biology, contradicts one of America's most cherished illusions. Particular dysfunctions of growth are rarely formulated, set down, and explicitly weighed against the potential advantages. Rather, the American prejudice has been to assume that growth is always good, or at least inevitable, and to treat the dysfunctions (which are inevitable) as managerial problems to be ironed out later or glossed over. There has also been a remarkable failure to think in terms of optima and to distinguish in this way between what we have termed functional and dysfunctional growth. Rather, the tendency has been to extrapolate functional growth into the dysfunctional range: If a university population of 10,000 confers certain advantages as compared with a population of 1,000, then it is assumed that a population of 100,000 must confer even more advantages. We suggest that it is time, in fact past time, to subject university growth to a more searching scrutiny. Functional and dysfunctional consequences need to be spelled out. Scale effects ought to be considered in connection with every plan for expansion. 
Ideally, one might expect a farsighted and tough-minded administration to carry out this function. This has rarely been the case. Too often administrators regard their function as simply that of broker among competing expansionist tendencies. Such a conception replaces philosophy by politics and often encourages mindless growth. Perhaps it is time for faculties to involve themselves in long-range planning and to pay the price of a more satisfactory environment by giving up some individual dreams of empire. The first step for every large university ought to be a careful analysis of scale effects (30). If analysis indicates that continued growth of a university will be, on balance, dysfunctional, we suggest that plans be formulated to establish an absolute limit on further enrollment increase, and an absolute limit on further building expansion. If further analysis indicates that a university is already well into the dysfunctional size range, then the obvious solution is to cut back. If this turns out to be the case, then we suggest that a program for the gradual reduction of the campus population be undertaken. There are two distinct ways to accomplish this: (i) the establishment of a new university and (ii) the decentralization of the existing university into two or more campuses. Decentralization strikes us as an attractive idea, worthy of careful study. One of the recommendations of the Scranton commission was, "Large universities should take steps to decentralize or reorganize to make possible a more human scale" (18, p. 14). Returning to the natural world, we note again that cells do not grow indefinitely. Instead, they divide.

  2. Feeding habits and trophic level of the Panama grunt Pomadasys panamensis, an important bycatch species from the shrimp trawl fishery in the Gulf of California.

    PubMed

    Rodríguez-Preciado, José A; Amezcua, Felipe; Bellgraph, Brian; Madrid-Vera, Juan

    2014-01-01

    The Panama grunt is an abundant and commercially important species in the southeastern Gulf of California, but the research undertaken on this species is scarce despite its ecological and economic importance. We studied the feeding habits of Panama grunt through stomach content analyses as a first step towards understanding the biology of this species in the study area. Our results indicate that the Panama grunt is a benthic predator throughout its life cycle and feeds mainly on infaunal crustaceans. Diet differences among grunt were not found according to size, diet, or season. Shannon diversity index results indicate that the Panama grunt has a limited trophic niche breadth, with a diet dominated by a limited number of taxa, chiefly crustaceans. The estimated trophic level of this species is 3.59. Overall, the Panama grunt is a carnivorous fish occupying the intermediate levels of the trophic pyramid.
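
    The niche-breadth conclusion rests on the Shannon diversity index, H' = −Σ p_i ln p_i, computed over prey-taxon proportions: a diet dominated by a few taxa gives a low H'. A minimal sketch with hypothetical stomach-content counts:

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over prey-taxon counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical prey-item counts per taxon: dominated vs. perfectly even diets.
dominated = shannon_index([90, 5, 3, 2])   # a few taxa dominate -> low H'
even = shannon_index([25, 25, 25, 25])     # maximal H' = ln(4) for 4 taxa
```

    For a fixed number of taxa, H' is maximal (ln of the taxon count) when prey are evenly represented, so a low observed H' signals the limited trophic niche breadth reported above.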

  3. Direct Hydrogel Encapsulation of Pluripotent Stem Cells Enables Ontomimetic Differentiation and Growth of Engineered Human Heart Tissues

    PubMed Central

    Kerscher, Petra; Turnbull, Irene C; Hodge, Alexander J; Kim, Joonyul; Seliktar, Dror; Easley, Christopher J; Costa, Kevin D; Lipke, Elizabeth A

    2016-01-01

    Human engineered heart tissues have potential to revolutionize cardiac development research, drug-testing, and treatment of heart disease; however, implementation is limited by the need to use pre-differentiated cardiomyocytes (CMs). Here we show that by providing a 3D poly(ethylene glycol)-fibrinogen hydrogel microenvironment, we can directly differentiate human pluripotent stem cells (hPSCs) into contracting heart tissues. Our straightforward, ontomimetic approach, imitating the process of development, requires only a single cell-handling step, provides reproducible results for a range of tested geometries and size scales, and overcomes inherent limitations in cell maintenance and maturation, while achieving high yields of CMs with developmentally appropriate temporal changes in gene expression. Here we demonstrate that hPSCs encapsulated within this biomimetic 3D hydrogel microenvironment develop into functional cardiac tissues composed of self-aligned CMs with evidence of ultrastructural maturation, mimicking heart development, and enabling investigation of disease mechanisms and screening of compounds on developing human heart tissue. PMID:26826618

  4. GaAs MOEMS Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SPAHN, OLGA B.; GROSSETETE, GRANT D.; CICH, MICHAEL J.

    2003-03-01

    Many MEMS-based components require optical monitoring techniques using optoelectronic devices for converting mechanical position information into useful electronic signals. While the constituent piece-parts of such hybrid opto-MEMS components can be separately optimized, the resulting component performance, size, ruggedness and cost are substantially compromised due to assembly and packaging limitations. GaAs MOEMS offers the possibility of monolithically integrating high-performance optoelectronics with simple mechanical structures built in very low-stress epitaxial layers with a resulting component performance determined only by GaAs microfabrication technology limitations. GaAs MOEMS implicitly integrates the capability for radiation-hardened optical communications into the MEMS sensor or actuator component, a vital step towards rugged integrated autonomous microsystems that sense, act, and communicate. This project establishes a new foundational technology that monolithically combines GaAs optoelectronics with simple mechanics. Critical process issues addressed include selectivity, electrochemical characteristics, and anisotropy of the release chemistry, and post-release drying and coating processes. Several types of devices incorporating this novel technology are demonstrated.

  5. Feeding habits and trophic level of the Panama grunt Pomadasys panamensis, an important bycatch species from the shrimp trawl fishery in the Gulf of California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez-Preciado, Jose A.; Amezcua-Martinez, Felipe; Bellgraph, Brian J.

    The Panama grunt is an abundant and commercially important species in the SE Gulf of California, but the research undertaken on this species is scarce despite its ecological and economic importance. We studied the feeding habits of Panama grunt through stomach content analyses as a first step towards understanding the biology of this species in the study area. Our results show that the Panama grunt is a benthic predator throughout its life cycle and feeds mainly on infaunal crustaceans. Diet differences were not found according to size, diet or season. Shannon diversity index results indicate that Panama grunt have a limited trophic niche breadth with a diet dominated by a limited number of taxa. The estimated trophic level of this species is 3.59. Overall, the Panama grunt is a carnivorous fish occupying the intermediate levels of the trophic pyramid.

  6. Sexual variation in assimilation efficiency: its link to phenotype and potential role in sexual dimorphism.

    PubMed

    Stahlschmidt, Zachary R; Davis, Jon R; Denardo, Dale F

    2011-04-01

    Sex-specific variation in morphology (sexual dimorphism) is a prevalent phenomenon among animals, and both dietary intake and resource allocation strategies influence sexually dimorphic traits (e.g., body size or composition). However, we investigated whether assimilation efficiency (AE), an intermediate step between dietary intake and allocation, can also vary between the sexes. Specifically, we tested whether sex-based differences in AE can explain variation in phenotypic traits. We measured morphometric characteristics (i.e., body length, mass, condition, and musculature) and AE of total energy, crude protein, and crude fat in post-reproductive adult Children's pythons (which exhibit a limited female-biased sexual size dimorphism) fed both low and high dietary intakes. Meal size was negatively related to AE of energy. Notably, male snakes absorbed crude protein more efficiently and increased epaxial (dorsal) musculature faster than females, which demonstrates a link between AE and phenotype. However, females grew in body length faster but did not absorb any nutrient more efficiently than males. Although our results do not provide a direct link between AE and sexual size dimorphism, they demonstrate that sexual variation in nutrient absorption exists and can contribute to other types of sex-based differences in phenotype (i.e., sexual dimorphism in growth of musculature). Hence, testing the broader applicability of AE's role in sexually dimorphic traits among other species is warranted.
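
    Assimilation efficiency is conventionally the assimilated fraction of intake, AE = (ingested − egested) / ingested × 100. The abstract does not spell out its exact formula, so this sketch assumes that standard definition, with invented calorimetry values:

```python
def assimilation_efficiency(ingested, egested):
    """AE (%) = (ingested - egested) / ingested * 100, for energy or a nutrient."""
    return (ingested - egested) / ingested * 100.0

# Hypothetical bomb-calorimetry values (kJ): meal energy in, fecal energy out.
ae = assimilation_efficiency(ingested=100.0, egested=11.0)
```

    The same ratio applies per nutrient (crude protein, crude fat) by substituting the ingested and egested masses of that nutrient, which is how sex differences in protein AE like those reported above would be quantified.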

  7. Analytical approaches for the characterization and quantification of nanoparticles in food and beverages.

    PubMed

    Mattarozzi, Monica; Suman, Michele; Cascio, Claudia; Calestani, Davide; Weigel, Stefan; Undas, Anna; Peters, Ruud

    2017-01-01

    Estimating consumer exposure to nanomaterials (NMs) in food products and predicting their toxicological properties are necessary steps in the assessment of the risks of this technology. To this end, analytical methods have to be available to detect, characterize and quantify NMs in food and materials related to food, e.g. food packaging and biological samples following metabolization of food. The challenge for the analytical sciences is that the characterization of NMs requires chemical as well as physical information. This article offers a comprehensive analysis of methods available for the detection and characterization of NMs in food and related products. Special attention was paid to the crucial role of sample preparation methods since these have been partially neglected in the scientific literature so far. The currently available instrumental methods are grouped as fractionation, counting and ensemble methods, and their advantages and limitations are discussed. We conclude that much progress has been made over the last 5 years but that many challenges still exist. Future perspectives and priority research needs are pointed out. Graphical abstract: two possible analytical strategies for the sizing and quantification of nanoparticles are asymmetric flow field-flow fractionation with multiple detectors (determination of true size and a mass-based particle size distribution) and single-particle inductively coupled plasma mass spectrometry (determination of a spherical equivalent diameter and a number-based particle size distribution).
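
    The spherical equivalent diameter reported by single-particle ICP-MS comes from inverting the sphere mass-volume relation, d = (6m / πρ)^(1/3), applied to each measured particle mass. A sketch of that conversion (the gold example values are illustrative):

```python
import math

def spherical_equivalent_diameter_nm(mass_fg, density_g_cm3):
    """Diameter (nm) of a sphere with the measured particle mass (fg)."""
    volume_cm3 = (mass_fg * 1e-15) / density_g_cm3   # fg -> g, then g -> cm^3
    d_cm = (6.0 * volume_cm3 / math.pi) ** (1.0 / 3.0)
    return d_cm * 1e7  # cm -> nm

# Sanity check: ~2.18 fg of gold (19.3 g/cm^3) corresponds to a ~60 nm sphere.
d_nm = spherical_equivalent_diameter_nm(2.183, 19.3)
```

    Because the diameter is inferred through a density assumption and a sphericity assumption, it is an "equivalent" diameter, which is why the fractionation route is needed when true size and shape matter.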

  8. μ-Rainbow: CdSe Nanocrystal Photoluminescence Gradients via Laser Spike Annealing for Kinetic Investigations and Tunable Device Design.

    PubMed

    Treml, Benjamin E; Jacobs, Alan G; Bell, Robert T; Thompson, Michael O; Hanrath, Tobias

    2016-02-10

    Much of the promise of nanomaterials derives from their size-dependent, and hence tunable, properties. Impressive advances have been made in the synthesis of nanoscale building blocks with precisely tailored size, shape and composition. Significant attention is now turning toward creating thin film structures in which size-dependent properties can be spatially programmed with high fidelity. Nonequilibrium processing techniques present exciting opportunities to create nanostructured thin films with unprecedented spatial control over their optical and electronic properties. Here, we demonstrate single scan laser spike annealing (ssLSA) on CdSe nanocrystal (NC) thin films as an experimental test bed to illustrate how the size-dependent photoluminescence (PL) emission can be tuned throughout the visible range and in spatially defined profiles during a single annealing step. Through control of the annealing temperature and time, we discovered that NC fusion is a kinetically limited process with a constant activation energy over more than 2 orders of magnitude of NC growth rate. To underscore the broader technological implications of this work, we demonstrate the scalability of LSA to process large area NC films with periodically modulated PL emission, resulting in tunable emission properties of a large area film. New insights into the processing-structure-property relationships presented here offer significant advances in our fundamental understanding of kinetics of nanomaterials as well as technological implications for the production of nanomaterial films.
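
    A kinetically limited process with a constant activation energy means ln(rate) is linear in 1/T (Arrhenius behavior), so Ea falls out of the slope of that line. A sketch recovering Ea from rate-temperature data (the rates here are synthetic, generated with an invented Ea of 1.2 eV, not the paper's measurement):

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def activation_energy_ev(temps_k, rates):
    """Arrhenius fit: least-squares slope of ln(rate) vs 1/T equals -Ea/kB."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * K_B_EV

# Synthetic rates from rate = exp(-Ea / kB T) with Ea = 1.2 eV; fit recovers it.
temps = [900.0, 1000.0, 1100.0, 1200.0]
rates = [math.exp(-1.2 / (K_B_EV * t)) for t in temps]
ea = activation_energy_ev(temps, rates)
```

    A constant Ea across a wide range of growth rates shows up as a single straight line on this Arrhenius plot, which is the signature reported above.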

  9. Twisting and subunit rotation in single FOF1-ATP synthase

    PubMed Central

    Sielaff, Hendrik; Börsch, Michael

    2013-01-01

    FOF1-ATP synthases are ubiquitous proton- or ion-powered membrane enzymes providing ATP for all kinds of cellular processes. The mechanochemistry of catalysis is driven by two rotary nanomotors coupled within the enzyme. Their different step sizes have been observed by single-molecule microscopy including videomicroscopy of fluctuating nanobeads attached to single enzymes and single-molecule Förster resonance energy transfer. Here we review recent developments of approaches to monitor the step size of subunit rotation and the transient elastic energy storage mechanism in single FOF1-ATP synthases. PMID:23267178

  10. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    DTIC Science & Technology

    2013-11-01

    Consistent with experiments described in reference ARL-TR-2219 (2000), the tile gap is found to increase the DoP (depth of penetration) compared to a one-tile target. The next step will be to run simulations on narrower and wider gap sizes. Smoothed-particle hydrodynamics (SPH) was used for all parts, with SPH size = 0.40 mm, totaling 278k particles.

  11. Optimization of ecosystem model parameters with different temporal variabilities using tower flux data and an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    He, L.; Chen, J. M.; Liu, J.; Mo, G.; Zhen, T.; Chen, B.; Wang, R.; Arain, M.

    2013-12-01

    Terrestrial ecosystem models have been widely used to simulate carbon, water and energy fluxes and climate-ecosystem interactions. In these models, some vegetation and soil parameters are determined based on limited studies from the literature without consideration of their seasonal variations. Data assimilation (DA) provides an effective way to optimize these parameters at different time scales. In this study, an ensemble Kalman filter (EnKF) is developed and applied to optimize two key parameters of an ecosystem model, namely the Boreal Ecosystem Productivity Simulator (BEPS): (1) the maximum photosynthetic carboxylation rate (Vcmax) at 25 °C, and (2) the soil water stress factor (fw) for stomatal conductance formulation. These parameters are optimized through assimilating observations of gross primary productivity (GPP) and latent heat (LE) fluxes measured in a 74-year-old pine forest, which is part of the Turkey Point Flux Station's age-sequence sites. Vcmax is related to leaf nitrogen concentration and varies slowly over the season and from year to year. In contrast, fw varies rapidly in response to soil moisture dynamics in the root zone. Earlier studies suggested that DA of vegetation parameters at daily time steps leads to Vcmax values that are unrealistic. To overcome this problem, we developed a three-step scheme to optimize Vcmax and fw. First, the EnKF is applied daily to obtain precursor estimates of Vcmax and fw. Then Vcmax is optimized at different time scales, assuming fw is unchanged from the first step. The best temporal period or window size is then determined by analyzing the magnitude of the minimized cost function, and the coefficient of determination (R2) and root-mean-square error (RMSE) of GPP and LE between simulation and observation. Finally, the daily fw value is optimized for rain-free days using the Vcmax curve from the best window size. The optimized fw is then used to model its relationship with soil moisture. 
We found that the optimized fw is best correlated linearly to soil water content at 5 to 10 cm depth. We also found that both the temporal scale or window size and the a priori uncertainty of Vcmax (given as its standard deviation) are important in determining the seasonal trajectory of Vcmax. During the leaf expansion stage, an appropriate window size leads to a reasonable estimate of Vcmax. In the summer, the fluctuation of optimized Vcmax is mainly caused by the uncertainties in Vcmax, not the window size. Our study suggests that a smooth Vcmax curve optimized from an optimal time window size is close to reality, even though the RMSE of GPP at this window is not the minimum. It also suggests that for accurate optimization of Vcmax, it is necessary to set appropriate levels of uncertainty of Vcmax in the spring and summer, because the rate of leaf nitrogen concentration change differs over the season. Parameter optimizations for more sites and multiple years are in progress.
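
    The daily analysis step of the scheme above can be sketched for a single parameter and a single observation: the gain is built from ensemble covariances, and each member is nudged toward a perturbed observation. This is a generic stochastic-EnKF sketch, not the BEPS implementation, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(params, predictions, obs, obs_err):
    """One stochastic EnKF analysis step for a scalar parameter and observation."""
    cov_py = np.cov(params, predictions)[0, 1]               # param-prediction covariance
    gain = cov_py / (np.var(predictions, ddof=1) + obs_err ** 2)  # Kalman gain
    perturbed_obs = obs + rng.normal(0.0, obs_err, size=len(params))
    return params + gain * (perturbed_obs - predictions)

# Toy forward model: prediction = 2 * parameter (stand-in for BEPS mapping
# Vcmax to GPP); the true parameter 30 maps to the observation 60.
ensemble = rng.normal(20.0, 5.0, 200)   # prior ensemble, deliberately biased low
updated = enkf_update(ensemble, 2.0 * ensemble, obs=60.0, obs_err=1.0)
```

    After one update the ensemble mean moves most of the way from the biased prior (~20) toward the truth (30); repeating this daily, then smoothing over a chosen window, is the spirit of the three-step scheme.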

  12. Allocation of limited reserves to a clutch: A model explaining the lack of a relationship between clutch size and egg size

    USGS Publications Warehouse

    Flint, Paul L.; Grand, James B.; Sedinger, James S.

    1996-01-01

    Lack (1967, 1968) proposed that clutch size in waterfowl is limited by the nutrients available to females when producing eggs. He suggested that if nutrients available for clutch formation are limited, then species producing small eggs would, on average, lay more eggs than species with large eggs. Rohwer (1988) argues that this model should also apply within species. Thus, the nutrition-limitation hypothesis predicts a tradeoff among females between clutch size and egg size (Rohwer 1988). Field studies of single species consistently have failed to detect a negative relationship between clutch size and egg size (Rohwer 1988, Lessells et al. 1992, Rohwer and Eisenhauer 1989, Flint and Sedinger 1992, Flint and Grand 1996). The absence of such a relationship within species has been regarded as evidence against the hypothesis that nutrient availability limits clutch size (Rohwer 1988, 1991, 1992; Rohwer and Eisenhauer 1989).

  13. Statistical analyses support power law distributions found in neuronal avalanches.

    PubMed

    Klaus, Andreas; Yu, Shan; Plenz, Dietmar

    2011-01-01

    The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
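
    For a continuous power law the maximum likelihood step has a closed form, α̂ = 1 + n / Σ ln(x_i / x_min). A sketch on synthetic avalanche sizes drawn from p(s) ∝ s^−1.5 by inverse-transform sampling (this is the standard continuous-MLE form; the paper's exact fitting pipeline, including discrete corrections and cut-offs, may differ):

```python
import math
import random

def powerlaw_mle(samples, xmin):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x / xmin))."""
    tail = [x for x in samples if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic avalanche sizes from p(s) ~ s^-1.5, s >= 1, via inverse transform:
# F(s) = 1 - s^-(alpha-1)  =>  s = u^(-1/(alpha-1)) for u uniform on (0, 1].
random.seed(1)
alpha_true = 1.5
sizes = [(1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
         for _ in range(20000)]
alpha_hat = powerlaw_mle(sizes, xmin=1.0)
```

    With 20,000 samples the estimator lands very close to the generating exponent of 1.5, the same value reported for neuronal avalanches above; model comparison would then proceed via the Kolmogorov-Smirnov distance and likelihood ratios.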

  14. Reference Value Advisor: a new freeware set of macroinstructions to calculate reference intervals with Microsoft Excel.

    PubMed

    Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine

    2011-03-01

    International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
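
    The nonparametric method recommended above for n ≥ 40 amounts to taking the central 95% of the ranked reference values. A generic percentile-based sketch (not the Reference Value Advisor macro code; the sample data are hypothetical, and the percentile definition is one of several in common use):

```python
def nonparametric_reference_interval(values, low=2.5, high=97.5):
    """Central 95% reference interval from rank-based percentiles
    (the nonparametric approach recommended when n >= 40)."""
    s = sorted(values)
    n = len(s)
    def pct(p):
        # linear-interpolation percentile (one of several common definitions)
        k = (n - 1) * p / 100.0
        f = int(k)
        c = min(f + 1, n - 1)
        return s[f] + (s[c] - s[f]) * (k - f)
    return pct(low), pct(high)

values = list(range(1, 101))  # a hypothetical reference sample, n = 100
lo, hi = nonparametric_reference_interval(values)
print(lo, hi)  # ~3.475, ~97.525
```

    The 90% confidence intervals on each limit, which the program also reports, would typically be obtained by bootstrapping the same calculation.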

  15. Contrast, size, and orientation-invariant target detection in infrared imagery

    NASA Astrophysics Data System (ADS)

    Zhou, Yi-Tong; Crawshaw, Richard D.

    1991-08-01

    Automatic target detection in IR imagery is a very difficult task due to variations in target brightness, shape, size, and orientation. In this paper, the authors present a contrast, size, and orientation-invariant algorithm based on Gabor functions for detecting targets from a single IR image frame. The algorithm consists of three steps. First, it locates potential targets by using low-resolution Gabor functions, which resist noise and background clutter effects; then it removes false targets and eliminates redundant target points based on a similarity measure. These two steps mimic human vision processing but are different from Zeevi's Foveating Vision System. Finally, it uses both low- and high-resolution Gabor functions to verify target existence. This algorithm has been successfully tested on several IR images that contain multiple examples of military vehicles of different sizes and brightness in various background scenes and orientations.
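
    A Gabor function of the kind used in the first and third steps can be sketched as follows (a generic kernel construction; all parameter values are assumptions, since the abstract does not specify them, and this is not the authors' implementation):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, gamma=1.0, psi=0.0):
    """2D Gabor kernel: a sinusoidal carrier under a Gaussian envelope.

    size  : kernel width/height (odd)   sigma : envelope scale
    theta : orientation (radians)       lam   : carrier wavelength
    gamma : spatial aspect ratio        psi   : carrier phase
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / lam + psi)

# Low-resolution (large sigma/lambda) kernels suppress noise and clutter in the
# detection step; high-resolution kernels verify candidate targets.
coarse = gabor_kernel(31, sigma=6.0, theta=0.0, lam=12.0)
fine = gabor_kernel(31, sigma=2.0, theta=0.0, lam=4.0)
print(coarse.shape, float(coarse[15, 15]))  # (31, 31) 1.0
```

    Convolving the image with a small bank of such kernels at several orientations gives the orientation-invariant response used to flag candidate targets.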

  16. Persistence of the gapless spin liquid in the breathing kagome Heisenberg antiferromagnet

    NASA Astrophysics Data System (ADS)

    Iqbal, Yasir; Poilblanc, Didier; Thomale, Ronny; Becca, Federico

    2018-03-01

    The nature of the ground state of the spin S =1 /2 Heisenberg antiferromagnet on the kagome lattice with breathing anisotropy (i.e., with different superexchange couplings J▵ and J▿ within elementary up- and down-pointing triangles) is investigated within the framework of Gutzwiller projected fermionic wave functions and Monte Carlo methods. We analyze the stability of the U(1 ) Dirac spin liquid with respect to the presence of fermionic pairing that leads to a gapped Z2 spin liquid. For several values of the ratio J▿/J▵ , the size scaling of the energy gain due to the pairing fields and the variational parameters are reported. Our results show that the energy gain of the gapped spin liquid with respect to the gapless state either vanishes for large enough system size or scales to zero in the thermodynamic limit. Similarly, the optimized pairing amplitudes (responsible for opening the spin gap) are shown to vanish in the thermodynamic limit. Our outcome is corroborated by the application of one and two Lanczos steps to the gapless and gapped wave functions, for which no energy gain of the gapped state is detected when improving the quality of the variational states. Finally, we discuss the competition with the "simplex" Z2 resonating-valence-bond spin liquid, valence-bond crystal, and nematic states in the strongly anisotropic regime, i.e., J▿≪J▵ .

  17. Effect of initial shock wave voltage on shock wave lithotripsy-induced lesion size during step-wise voltage ramping.

    PubMed

    Connors, Bret A; Evan, Andrew P; Blomgren, Philip M; Handa, Rajash K; Willis, Lynn R; Gao, Sujuan

    2009-01-01

    To determine if the starting voltage in a step-wise ramping protocol for extracorporeal shock wave lithotripsy (SWL) alters the size of the renal lesion caused by the SWs. To address this question, one kidney from 19 juvenile pigs (aged 7-8 weeks) was treated in an unmodified Dornier HM-3 lithotripter (Dornier Medical Systems, Kennesaw, GA, USA) with either 2000 SWs at 24 kV (standard clinical treatment, 120 SWs/min), 100 SWs at 18 kV followed by 2000 SWs at 24 kV or 100 SWs at 24 kV followed by 2000 SWs at 24 kV. The latter protocols included a 3-4 min interval, between the 100 SWs and the 2000 SWs, used to check the targeting of the focal zone. The kidneys were removed at the end of the experiment so that lesion size could be determined by sectioning the entire kidney and quantifying the amount of haemorrhage in each slice. The average parenchymal lesion for each pig was then determined and a group mean was calculated. Kidneys that received the standard clinical treatment had a mean (sem) lesion size of 3.93 (1.29)% functional renal volume (FRV). The mean lesion size for the 18 kV ramping group was 0.09 (0.01)% FRV, while lesion size for the 24 kV ramping group was 0.51 (0.14)% FRV. The lesion size for both of these groups was significantly smaller than the lesion size in the standard clinical treatment group. The data suggest that initial voltage in a voltage-ramping protocol does not correlate with renal damage. While voltage ramping does reduce injury when compared with SWL with no voltage ramping, starting at low or high voltage produces lesions of the same approximate size. Our findings also suggest that the interval between the initial shocks and the clinical dose of SWs, in our one-step ramping protocol, is important for protecting the kidney against injury.

  18. Effects of natural organic matter on PCB-activated carbon sorption kinetics: implications for sediment capping applications.

    PubMed

    Fairey, Julian L; Wahman, David G; Lowry, Gregory V

    2010-01-01

    In situ capping of polychlorinated biphenyl (PCB)-contaminated sediments with a layer of activated carbon has been proposed, but several questions remain regarding the long-term effectiveness of this remediation strategy. Here, we assess the degree to which kinetic limitations, size exclusion effects, and electrostatic repulsions impaired PCB sorption to activated carbon. Sorption of 11 PCB congeners with activated carbon was studied in fixed bed reactors with organic-free water (OFW) and Suwannee River natural organic matter (SR-NOM), made by reconstituting freeze-dried SR-NOM at a concentration of 10 mg L(-1) as carbon. In the OFW test, no PCBs were detected in the column effluent over the 390-d study, indicating that PCB-activated carbon equilibrium sorption capacities may be achieved before breakthrough even at the relatively high hydraulic loading rate (HLR) of 3.1 m h(-1). However, in the SR-NOM fixed-bed test, partial PCB breakthrough occurred over the entire 320-d test (HLRs of 3.1-, 1.5-, and 0.8 m h(-1)). Simulations from a modified pore and surface diffusion model indicated that external (film diffusion) mass transfer was the dominant rate-limiting step but that internal (pore diffusion) mass transfer limitations were also present. The external mass transfer limitation was likely caused by formation of PCB-NOM complexes that reduced PCB sorption through a combination of (i) increased film diffusion resistance; (ii) size exclusion effects; and (iii) electrostatic repulsive forces between the PCBs and the NOM-coated activated carbon. However, the seepage velocities in the SR-NOM fixed bed test were about 1000 times higher than would be expected in a sediment cap. Therefore, additional studies are needed to assess whether the mass transfer limitations described here would be likely to manifest themselves at the lower seepage velocities observed in practice.

  19. Stent-protected carotid angioplasty using a membrane stent: a comparative cadaver study.

    PubMed

    Müller-Hülsbeck, Stefan; Gühne, Albrecht; Tsokos, Michael; Hüsler, Erhard J; Schaffner, Silvio R; Paulsen, Friedrich; Hedderich, Jürgen; Heller, Martin; Jahnke, Thomas

    2006-01-01

    To evaluate the performance of a prototype membrane stent, MembraX, in the prevention of acute and late embolization and to quantify particle embolization during carotid stent placement in human carotid explants in a proof of concept study. Thirty human carotid cadaveric explants (mild stenoses 0-29%, n = 23; moderate stenoses 30-69%, n = 3; severe stenoses 70-99%, n = 2) that included the common, internal and external carotid arteries were integrated into a pulsatile-flow model. Three groups were formed according to the age of the donors (mean 58.8 years; sample SD 15.99 years) and randomized to three test groups: (I) MembraX, n = 9; (II) Xpert bare stent, n = 10; (III) Xpert bare stent with Emboshield protection device, n = 9. Emboli liberated during stent deployment (step A), post-dilatation (step B), and late embolization (step C) were measured in 100 microm effluent filters. When the Emboshield was used, embolus penetration was measured during placement (step D) and retrieval (step E). Late embolization was simulated by compressing the area of the stented vessel five times. Absolute numbers of particles (median; >100 microm) caught in the effluent filter were: (I) MembraX: A = 7, B = 9, C = 3; (II) bare stent: A = 6.5, B = 6, C = 4.5; (III) bare stent and Emboshield: A = 7, B = 7, C = 5, D = 8, E = 10. The data showed no statistical differences according to whether embolic load was analyzed by weight or mean particle size. When summing all procedural steps, the Emboshield caused the greatest load by weight (p = 0.011) and the largest number (p = 0.054) of particles. On the basis of these limited data neither a membrane stent nor a protection device showed significant advantages during ex vivo carotid angioplasty. However, the membrane stent seems to have the potential for reducing the emboli responsible for supposed late embolization, whereas more emboli were observed when using a protection device. Further studies are necessary and warranted.

  20. 29 CFR 1926.1053 - Ladders.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Stairways and Ladders § 1926.1053 Ladders. Link to an... structural defects, such as, but not limited to, broken or missing rungs, cleats, or steps, broken or split..., such as, but not limited to, broken or missing rungs, cleats, or steps, broken or split rails, or...

  1. 16 CFR 701.3 - Written warranty terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... in compliance with part 703 of this subchapter; (7) Any limitations on the duration of implied... warranty duration; (5) A step-by-step explanation of the procedure which the consumer should follow in... following statement: Some States do not allow limitations on how long an implied warranty lasts, so the...

  2. Trimming Line Design using New Development Method and One Step FEM

    NASA Astrophysics Data System (ADS)

    Chung, Wan-Jin; Park, Choon-Dal; Yang, Dong-yol

    2005-08-01

    In most automobile panel manufacturing, trimming is generally performed prior to flanging. Finding a feasible trimming line is crucial for obtaining an accurate edge profile after flanging. The section-based method develops the blank along section planes and finds the trimming line by generating a loop of end points. This method gives inaccurate results for regions with out-of-section motion. On the other hand, the simulation-based method can produce a more accurate trimming line by an iterative strategy. However, due to its time requirements and the lack of information in the initial die design, it is still not widely accepted in the industry. In this study, a new fast method to find a feasible trimming line is proposed. One step FEM is used to analyze the flanging process because the desired final shape after flanging can be defined and most strain paths in flanging are simple. When using one step FEM, the main obstacle is the generation of the initial guess. A robust initial-guess generation method is developed to handle badly shaped meshes, very different mesh sizes, and undercut parts. The new method develops the 3D triangular mesh in a propagational way from the final mesh onto the drawing tool surface. In order to remedy mesh distortion during development, an energy minimization technique is utilized. The trimming line is extracted from the outer boundary after the one step FEM simulation. This method shows many benefits since the trimming line can be obtained in the early design stage. The developed method is successfully applied to complex industrial applications such as the flanging of fender and door outer panels.

  3. A structural and kinetic study on myofibrils prevented from shortening by chemical cross-linking.

    PubMed

    Herrmann, C; Sleep, J; Chaussepied, P; Travers, F; Barman, T

    1993-07-20

    In previous work, we studied the early steps of the Mg(2+)-ATPase activity of Ca(2+)-activated myofibrils [Houadjeto, M., Travers, F., & Barman, T. (1992) Biochemistry 31, 1564-1569]. The myofibrils were free to contract, and the results obtained refer to the ATPase cycle of myofibrils contracting with no external load. Here we studied the ATPase of myofibrils contracting isometrically. To prevent shortening, we cross-linked them with 1-ethyl-3-[3-(dimethylamino)propyl]carbodiimide (EDC). SDS-PAGE and Western blot analyses showed that the myosin rods were extensively cross-linked and that 8% of the myosin heads were cross-linked to the thin filament. The transient kinetics of the cross-linked myofibrils were studied in 0.1 M potassium acetate, pH 7.4 and 4 degrees C, by the rapid-flow quench method. The ATP binding steps were studied by the cold ATP chase and the cleavage and release of products steps by the Pi burst method. In Pi burst experiments, the sizes of the bursts were equal within experimental error to the ATPase site concentrations (as determined by the cold ATP chase methods) for both cross-linked (isometric) and un-cross-linked (isotonic) myofibrils. This shows that in both cases the rate-limiting step is after the cleavage of ATP. When cross-linked, the kcat of Ca(2+)-activated myofibrils was reduced from 1.7 to 0.8 s-1. This is consistent with the observation that fibers shortening at moderate velocity have a higher ATPase activity than isometric fibers.(ABSTRACT TRUNCATED AT 250 WORDS)

  4. One-step synthesis of gene carrier via gamma irradiation and its application in tumor gene therapy

    PubMed Central

    Kim, Eun-Ji; Heo, Hun; Park, Jong-Seok; Gwon, Hui-Jeong; Lim, Youn-Mook; Jang, Mi-Kyeong

    2018-01-01

    Introduction Although numerous studies have been conducted with the aim of developing drug-delivery systems, chemically synthesized gene carriers have shown limited applications in the biomedical fields due to several problems, such as low-grafting yields, undesirable reactions, difficulties in controlling the reactions, and high-cost production owing to multi-step manufacturing processes. Materials and methods We developed a 1-step synthesis process to produce 2-aminoethyl methacrylate-grafted water-soluble chitosan (AEMA-g-WSC) as a gene carrier, using gamma irradiation for simultaneous synthesis and sterilization, but no catalysts or photoinitiators. We analyzed the AEMA graft site on WSC using 2-dimensional nuclear magnetic resonance spectroscopy (2D NMR; 1H and 13C NMR), and assayed gene transfection effects in vitro and in vivo. Results We revealed selective grafting of AEMA onto C6-OH groups of WSC. AEMA-g-WSC effectively condensed plasmid DNA to form polyplexes in the size range of 170 to 282 nm. AEMA-g-WSC polyplexes in combination with psi-hBCL2 (a vector expressing short hairpin RNA against BCL2 mRNA) inhibited tumor cell proliferation and tumor growth in vitro and in vivo, respectively, by inducing apoptosis. Conclusion The simple grafting process mediated via gamma irradiation is a promising method for synthesizing gene carriers. PMID:29416333

  5. From h to p efficiently: optimal implementation strategies for explicit time-dependent problems using the spectral/hp element method

    PubMed Central

    Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J

    2014-01-01

    We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step that can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has been typically considered when studying this type of problem. © 2014. The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840
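
    The CFL restriction described above can be illustrated with a back-of-the-envelope estimate (a rule-of-thumb sketch using the common h/p² effective-spacing scaling for spectral/hp discretisations, not the paper's exact stability bound; all numbers are hypothetical):

```python
def max_stable_dt(h, p, speed, cfl):
    """Largest stable time step for linear advection at wave speed `speed`
    on elements of size h with polynomial order p, using the common h/p^2
    estimate for the effective grid spacing (a rule of thumb, not a proof)."""
    return cfl * (h / p**2) / speed

# Doubling the polynomial order tightens the step limit by roughly 4x,
# which is why higher p trades smaller dt for better accuracy per step.
dt_p2 = max_stable_dt(h=0.1, p=2, speed=1.0, cfl=0.5)
dt_p4 = max_stable_dt(h=0.1, p=4, speed=1.0, cfl=0.5)
print(dt_p2 / dt_p4)  # 4.0
```

    The study's runtime-versus-accuracy guidelines then amount to choosing h, p, and the time integrator jointly under this constraint.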

  6. Preparation of epoxy-based macroporous monolithic columns for the fast and efficient immunofiltration of Staphylococcus aureus.

    PubMed

    Ott, Sonja; Niessner, Reinhard; Seidel, Michael

    2011-08-01

    Macroporous epoxy-based monolithic columns were used for immunofiltration of bacteria. The prepared monolithic polymer support is hydrophilic and has large pore sizes of 21 μm without mesopores. A surface chemistry usually applied for immobilization of antibodies on glass slides is successfully transferred to monolithic columns. Step-by-step, the surface of the epoxy-based monolith is hydrolyzed, silanized, coated with poly(ethylene glycol diamine) and activated with the homobifunctional crosslinker di(N-succinimidyl)carbonate for immobilization of antibodies on the monolithic columns. The functionalization steps are characterized to ensure the coating of each monolayer. The prepared antibody-immobilized monolithic column is optimized for immunofiltration to enrich Staphylococcus aureus as an important food contaminant. Different column geometries, flow rates, and elution buffers are tested with the goal of achieving high recoveries in the shortest possible enrichment time. An effective capture of S. aureus was achieved at a flow rate of 7.0 mL/min with low backpressures of 20.1±5.4 mbar, enabling a volumetric enrichment of 1000 within 145 min. The bacteria were quantified by flow cytometry using a double-labeling approach. After immunofiltration the sensitivity was significantly increased and a detection limit of the total system of 42 S. aureus/mL was reached. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Resource Allocation and Seed Size Selection in Perennial Plants under Pollen Limitation.

    PubMed

    Huang, Qiaoqiao; Burd, Martin; Fan, Zhiwei

    2017-09-01

    Pollen limitation may affect resource allocation patterns in plants, but its role in the selection of seed size is not known. Using an evolutionarily stable strategy model of resource allocation in perennial iteroparous plants, we show that under density-independent population growth, pollen limitation (i.e., a reduction in ovule fertilization rate) should increase the optimal seed size. At any level of pollen limitation (including none), the optimal seed size maximizes the ratio of juvenile survival rate to the resource investment needed to produce one seed (including both ovule production and seed provisioning); that is, the optimum maximizes the fitness effect per unit cost. Seed investment may affect allocation to postbreeding adult survival. In our model, pollen limitation increases individual seed size but decreases overall reproductive allocation, so that pollen limitation should also increase the optimal allocation to postbreeding adult survival. Under density-dependent population growth, the optimal seed size is inversely proportional to ovule fertilization rate. However, pollen limitation does not affect the optimal allocation to postbreeding adult survival and ovule production. These results highlight the importance of allocation trade-offs in the effect pollen limitation has on the ecology and evolution of seed size and postbreeding adult survival in perennial plants.

  8. CR-39 track etching and blow-up method

    DOEpatents

    Hankins, Dale E.

    1987-01-01

    This invention is a method of etching tracks in CR-39 foil to obtain uniformly sized tracks. The invention comprises a step of electrochemically etching the foil at a low frequency and a "blow-up" step of electrochemically etching the foil at a high frequency.

  9. Simplified 4-Step Transportation Planning Process For Any Sized Area

    DOT National Transportation Integrated Search

    1999-01-01

    This paper presents a streamlined version of the Washington, D.C. region's : 4-step travel demand forecasting model. The purpose for streamlining the : model was to have a model that could: replicate the regional model, and be run : in a new s...

  10. Saving Lives.

    ERIC Educational Resources Information Center

    Moon, Daniel

    2002-01-01

    Advises schools on how to establish an automated external defibrillator (AED) program. These laptop-size devices can save victims of sudden cardiac arrest by delivering an electrical shock to return the heartbeat to normal. Discusses establishing standards, developing a strategy, step-by-step advice towards establishing an AED program, and school…

  11. Research on optimal DEM cell size for 3D visualization of loess terraces

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei

    2009-10-01

    In order to represent complex artificial terrains like the loess terraces of Shanxi Province in northwest China, a new 3D visual method, namely the Terraces Elevation Incremental Visual Method (TEIVM), is put forth by the authors. 406 elevation points and 14 enclosed constrained lines are sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. In order to visualize the loess terraces well with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs are converted to a Grid-based DEM (G-DEM) using different combinations of cell size and EIV with a linear interpolation method, the Bilinear Interpolation Method (BIM). Our case study shows that the new visual method can visualize the loess terrace steps very well when the combination of cell size and EIV is reasonable. The optimal combination is a cell size of 1 m and an EIV of 6 m. Results of the case study also show that the cell size should be smaller than half of both the terraces' average width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the terraces' average height. The TEIVM and the results above are of great significance to the highly refined visualization of artificial terrains like loess terraces.
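
    The Bilinear Interpolation Method (BIM) used in the TIN-to-grid conversion can be sketched as follows (a minimal single-point illustration; the 6 m step in the sample data echoes the optimal EIV from the study but the grid values themselves are hypothetical):

```python
def bilinear(dem, x, y):
    """Bilinear interpolation of a grid DEM at fractional cell coordinates (x, y).
    Assumes 0 <= x < ncols-1 and 0 <= y < nrows-1 (no edge handling in this sketch)."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    top = dem[y0][x0] * (1 - dx) + dem[y0][x0 + 1] * dx       # interpolate along x, upper row
    bot = dem[y0 + 1][x0] * (1 - dx) + dem[y0 + 1][x0 + 1] * dx  # interpolate along x, lower row
    return top * (1 - dy) + bot * dy                           # then along y

# A 2x2 patch with a 6 m terrace step between its west and east columns:
dem = [[100.0, 106.0],
       [100.0, 106.0]]
print(bilinear(dem, 0.5, 0.5))  # 103.0
```

    Because bilinear interpolation smooths across such steps, the cell size must stay well below the step spacing, which is consistent with the half-width rule reported above.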

  12. Absolute phase estimation: adaptive local denoising and global unwrapping.

    PubMed

    Bioucas-Dias, Jose; Katkovnik, Vladimir; Astola, Jaakko; Egiazarian, Karen

    2008-10-10

    The paper attacks absolute phase estimation with a two-step approach: the first step applies an adaptive local denoising scheme to the modulo-2 pi noisy phase; the second step applies a robust phase unwrapping algorithm to the denoised modulo-2 pi phase obtained in the first step. The adaptive local modulo-2 pi phase denoising is a new algorithm based on local polynomial approximations. The zero-order and the first-order approximations of the phase are calculated in sliding windows of varying size. The zero-order approximation is used for pointwise adaptive window size selection, whereas the first-order approximation is used to filter the phase in the obtained windows. For phase unwrapping, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [IEEE Trans. Image Process.16, 698 (2007)] to the denoised wrapped phase. Simulations give evidence that the proposed algorithm yields state-of-the-art performance, enabling strong noise attenuation while preserving image details. (c) 2008 Optical Society of America
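
    The unwrapping stage can be illustrated in one dimension (a minimal Itoh-style sketch for a noise-free signal; PUMA itself is a 2D discontinuity-preserving energy-minimization algorithm and is not reproduced here):

```python
import math

def unwrap_1d(phases):
    """Itoh's 1D phase unwrapping: re-wrap each neighbour difference into
    (-pi, pi] and accumulate (works when the true phase changes by < pi
    per sample; the denoising step of the two-step approach is omitted)."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2.0 * math.pi * round(d / (2.0 * math.pi))  # wrap difference to (-pi, pi]
        out.append(out[-1] + d)
    return out

# A linear phase ramp wrapped into (-pi, pi] unwraps back to the ramp:
true = [0.5 * i for i in range(20)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true]
print(round(unwrap_1d(wrapped)[-1], 6))  # 9.5
```

    Denoising the modulo-2π phase first, as the paper's adaptive local step does, matters precisely because noise spikes larger than π break this neighbour-difference assumption.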

  13. Two step continuous method to synthesize colloidal spheroid gold nanorods.

    PubMed

    Chandra, S; Doran, J; McCormack, S J

    2015-12-01

    This research investigated a two-step continuous process to synthesize a colloidal suspension of spheroid gold nanorods. In the first step, the gold precursor was reduced to seed-like particles in the presence of polyvinylpyrrolidone and ascorbic acid. In the continuous second step, silver nitrate and alkaline sodium hydroxide produced Au nanoparticles of various shapes and sizes. The shape was manipulated through the weight ratio of ascorbic acid to silver nitrate by varying the silver nitrate concentration. A specific weight ratio of 1.35-1.75 grew spheroid gold nanorods of aspect ratio ∼1.85 to ∼2.2. A lower weight ratio of 0.5-1.1 formed spherical nanoparticles. The alkaline medium increased the yield of gold nanorods and reduced reaction time at room temperature. The synthesized gold nanorods retained their shape and size in ethanol. The surface plasmon resonance was red shifted by ∼5 nm due to the higher refractive index of ethanol compared with water. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Research on the effect of coverage rate on the surface quality in laser direct writing process

    NASA Astrophysics Data System (ADS)

    Pan, Xuetao; Tu, Dawei

    2017-07-01

    The direct writing technique is usually used in femtosecond laser two-photon micromachining. The size of the scanning step is an important factor affecting the surface quality and machining efficiency of micro devices. According to the mechanism of two-photon polymerization, and combining the distribution function of light intensity with free-radical concentration theory, we establish a mathematical model of the coverage of the solidification unit, then analyze the effect of coverage on machining quality and efficiency. Using the principle of exposure equivalence, we also obtain analytic expressions relating the surface quality parameters of microdevices to the scanning step, and carry out numerical simulations and experiments. The results show that the scanning step has little influence on the surface quality of the line when it is much smaller than the size of the solidification unit. However, with increasing scanning step, the smoothness of the line surface decreases rapidly and the surface quality becomes much worse.

  15. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting.

    PubMed

    Rashed-Ul Islam, S M; Jahan, Munira; Tabassum, Shahina

    2015-01-01

    Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected from 81% samples by one-step PCR method with median HBV DNA viral load (VL) of 7.50 × 10^3 IU/ml. In contrast, 72% samples were detected by the two-step PCR system with median HBV DNA of 3.71 × 10^3 IU/ml. The one-step method showed strong linear correlation with two-step PCR method (r = 0.89; p < 0.0001). Both methods showed good agreement at Bland-Altman plot, with a mean difference of 0.61 log10 IU/ml and limits of agreement of -1.82 to 3.03 log10 IU/ml. The intra-assay and interassay coefficients of variation (CV%) of plasma samples (4-7 log10 IU/ml) for the one-step PCR method ranged between 0.33 to 0.59 and 0.28 to 0.48 respectively, thus demonstrating a high level of concordance between the two methods. Moreover, elimination of the DNA extraction step in the one-step PCR kit allowed time-efficient and significant labor and cost savings for the quantification of HBV DNA in a resource limited setting. Rashed-Ul Islam SM, Jahan M, Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15.
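
    The intra-/interassay coefficient of variation reported above is the sample SD expressed as a percentage of the mean across replicate measurements. A generic sketch (the replicate readings below are hypothetical, not data from the study):

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation (%): sample standard deviation as a
    percentage of the mean of replicate measurements."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical replicate log10 IU/ml readings of one plasma sample:
replicates = [5.02, 4.98, 5.01, 4.99]
print(round(cv_percent(replicates), 2))  # ~0.37, within the 0.28-0.59 range reported
```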

  16. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting

    PubMed Central

    Jahan, Munira; Tabassum, Shahina

    2015-01-01

    Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected from 81% samples by one-step PCR method with median HBV DNA viral load (VL) of 7.50 × 10^3 IU/ml. In contrast, 72% samples were detected by the two-step PCR system with median HBV DNA of 3.71 × 10^3 IU/ml. The one-step method showed strong linear correlation with two-step PCR method (r = 0.89; p < 0.0001). Both methods showed good agreement at Bland-Altman plot, with a mean difference of 0.61 log10 IU/ml and limits of agreement of -1.82 to 3.03 log10 IU/ml. The intra-assay and interassay coefficients of variation (CV%) of plasma samples (4-7 log10 IU/ml) for the one-step PCR method ranged between 0.33 to 0.59 and 0.28 to 0.48 respectively, thus demonstrating a high level of concordance between the two methods. Moreover, elimination of the DNA extraction step in the one-step PCR kit allowed time-efficient and significant labor and cost savings for the quantification of HBV DNA in a resource limited setting. How to cite this article Rashed-Ul Islam SM, Jahan M, Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15. PMID:29201678

  17. 40 CFR 35.909 - Step 2+3 grants.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ASSISTANCE Grants for Construction of Treatment Works-Clean Water Act § 35.909 Step 2+3 grants. (a) Authority... design (step 2) and construction (step 3) of a waste water treatment works. (b) Limitations. The Regional... Water and Waste Management finds to have unusually high costs of construction, the Regional...

  18. 40 CFR 35.909 - Step 2+3 grants.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ASSISTANCE Grants for Construction of Treatment Works-Clean Water Act § 35.909 Step 2+3 grants. (a) Authority... design (step 2) and construction (step 3) of a waste water treatment works. (b) Limitations. The Regional... Water and Waste Management finds to have unusually high costs of construction, the Regional...

  19. 40 CFR 35.909 - Step 2+3 grants.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ASSISTANCE Grants for Construction of Treatment Works-Clean Water Act § 35.909 Step 2+3 grants. (a) Authority... design (step 2) and construction (step 3) of a waste water treatment works. (b) Limitations. The Regional... Water and Waste Management finds to have unusually high costs of construction, the Regional...

  20. 40 CFR 35.909 - Step 2+3 grants.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... design (step 2) and construction (step 3) of a waste water treatment works. (b) Limitations. The Regional... ASSISTANCE Grants for Construction of Treatment Works-Clean Water Act § 35.909 Step 2+3 grants. (a) Authority... Water and Waste Management finds to have unusually high costs of construction, the Regional...

  1. Influence of fragment size and postoperative joint congruency on long-term outcome of posterior malleolar fractures.

    PubMed

    Drijfhout van Hooff, Cornelis Christiaan; Verhage, Samuel Marinus; Hoogendoorn, Jochem Maarten

    2015-06-01

    One of the factors contributing to long-term outcome of posterior malleolar fractures is the development of osteoarthritis. Based on biomechanical, cadaveric, and small population studies, fixation of posterior malleolar fracture fragments (PMFFs) is usually performed when fragment size exceeds 25-33%. However, the influence of fragment size on long-term clinical and radiological outcome remains unclear. A retrospective cohort study of 131 patients treated for an isolated ankle fracture with involvement of the posterior malleolus was performed. Mean follow-up was 6.9 (range, 2.5-15.9) years. Patients were divided into groups depending on size of the fragment, small (<5%, n = 20), medium (5-25%, n = 86), or large (>25%, n = 25), and presence of step-off after operative treatment. We compared functional outcome measures (AOFAS, AAOS), pain (VAS), and dorsiflexion restriction relative to the contralateral ankle, as well as the incidence of osteoarthritis on X-ray. There were no nonunions, 56% of patients had no radiographic osteoarthritis, VAS was 10 of 100, and median clinical score was 90 of 100. More osteoarthritis occurred in ankle fractures with medium and large PMFFs compared to small fragments (small 16%, medium 48%, large 54%; P = .006), and also when comparing small with medium-sized fragments (P = .02). Larger fragment size did not lead to a significantly decreased function (median AOFAS 95 vs 88, P = .16). If the PMFF size was >5%, osteoarthritis occurred more frequently when there was a postoperative step-off ≥1 mm in the tibiotalar joint surface (41% vs 61%, P = .02), whether the posterior fragment had been fixed or not. In this group, fixing the PMFF did not influence development of osteoarthritis. However, in 42% of the cases with fixation of the fragment a postoperative step-off remained (vs 45% in the group without fixation). 
Osteoarthritis is 1 component of long-term outcome of malleolar fractures, and the results of this study demonstrate that there was more radiographic osteoarthritis in patients with medium and large posterior fragments than in those with small fragments. Radiographic osteoarthritis also occurred more frequently when postoperative step-off was 1 mm or more, whether the posterior fragment was fixed or not. However, clinical scores were not different for these groups. Level IV, retrospective case series. © The Author(s) 2015.

  2. Morphing Aircraft Structures: Research in AFRL/RB

    DTIC Science & Technology

    2008-09-01

    various iterative steps in the process, etc. The solver also internally controls the step size for integration, as this is independent of the step...Coupling of Substructures for Dynamic Analyses,” AIAA Journal , Vol. 6, No. 7, 1968, pp. 1313-1319. 2“Using the State-Dependent Modal Force (MFORCE),” AFL...an actuation system consisting of multiple internal actuators, centrally computer controlled to implement any commanded morphing configuration; and

  3. Analysis of operator splitting errors for near-limit flame simulations

    NASA Astrophysics Data System (ADS)

    Lu, Zhen; Zhou, Hua; Li, Shan; Ren, Zhuyin; Lu, Tianfeng; Law, Chung K.

    2017-04-01

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction-diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. 
For sustained and decaying oscillatory combustion of cool flames, both the Strang splitting and the midpoint method can successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.
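
    The splitting strategy analyzed above can be illustrated on a toy scalar reaction-mixing problem. The sketch below is a minimal Strang splitting (half step of mixing, full step of reaction, half step of mixing) with illustrative rates; it is not the paper's PSR chemistry or mechanisms:

```python
import math

# Toy split ODE: dy/dt = f_react(y) + f_mix(y), integrated with Strang
# splitting. The rates below are illustrative placeholders.

def mix_exact(y, y_in, tau, dt):
    # Linear mixing dy/dt = (y_in - y)/tau has an exact exponential solution.
    return y_in + (y - y_in) * math.exp(-dt / tau)

def react_rk4(y, rate, dt):
    # One classical RK4 step for the reaction substep dy/dt = rate(y).
    k1 = rate(y)
    k2 = rate(y + 0.5 * dt * k1)
    k3 = rate(y + 0.5 * dt * k2)
    k4 = rate(y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def strang_step(y, rate, y_in, tau, dt):
    y = mix_exact(y, y_in, tau, 0.5 * dt)   # mixing, dt/2
    y = react_rk4(y, rate, dt)              # reaction, dt
    y = mix_exact(y, y_in, tau, 0.5 * dt)   # mixing, dt/2
    return y

# Example: mild first-order decay "reaction" with inflow value 1.0.
rate = lambda y: -2.0 * y
y, dt = 1.0, 0.01
for _ in range(100):                        # integrate to t = 1
    y = strang_step(y, rate, y_in=1.0, tau=0.5, dt=dt)
```

    For this benign problem the second-order splitting error is negligible; the failure modes discussed in the abstract arise when the reaction term is stiff and the solution sits near an ignition/extinction turning point.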

  4. Analysis of operator splitting errors for near-limit flame simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhen; Zhou, Hua; Li, Shan

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction–diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. 
For sustained and decaying oscillatory combustion of cool flames, both the Strang splitting and the midpoint method can successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.

  5. Structural design and fabrication techniques of composite unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Hunt, Daniel Stephen

    Popularity of unmanned aerial vehicles has grown substantially in recent years, both in the private sector and for government functions. This growth can be attributed largely to the increased performance of the technology that controls these vehicles, as well as the decreasing cost and size of this technology. What is sometimes forgotten, though, is that research and advancement of the airframes themselves are equally as important as what is done with them. With current computer-aided design programs, the limits of design optimization can be pushed further than ever before, resulting in lighter and faster airframes that can achieve longer endurances, higher altitudes, and more complex missions. However, realization of a paper design is still limited by the physical restrictions of the real world and the structural constraints associated with it. The purpose of this paper is not only to step through current design and manufacturing processes of composite UAVs at Oklahoma State University, but also to focus on composite spars, utilizing and relating both calculated and empirical data. Most of the experience gained for this thesis was from the Cessna Longitude project. The Longitude is a 1/8-scale flying demonstrator that Oklahoma State University constructed for Cessna. For the project, Cessna required dynamic flight data for their design process in order to make their 2017 release date. Oklahoma State University was privileged to assist Cessna in supporting the validation of the design of their largest business jet to date. This paper will detail the steps of the fabrication process used in construction of the Longitude, as well as several other projects, beginning with structural design, machining, molding, and skin layup, and ending with final assembly. Attention will also be paid specifically to spar design and testing in an effort to ease the design phase.
    This document is intended to act not only as a further development of current practices, but also as a step-by-step manual for those who aspire to make composite airframes, predominantly the Oklahoma State University MAE students who are, or will be, using these techniques on a daily basis.

  6. One-step preparation of antimicrobial silver nanoparticles in polymer matrix

    NASA Astrophysics Data System (ADS)

    Lyutakov, O.; Kalachyova, Y.; Solovyev, A.; Vytykacova, S.; Svanda, J.; Siegel, J.; Ulbrich, P.; Svorcik, V.

    2015-03-01

    A simple one-step procedure for in situ preparation of silver nanoparticles (AgNPs) in polymer thin films is described. Nanoparticles (NPs) were prepared by reaction of N-methyl pyrrolidone with a silver salt in a semi-dry polymer film and characterized by transmission electron microscopy, XPS, and UV-Vis spectroscopy. Direct synthesis of NPs in the polymer has several advantages: it avoids time-consuming mixing of NPs with the polymer matrix, and uniform silver distribution in polymethylmethacrylate (PMMA) films is achieved without the need for additional stabilization. The influence of silver concentration, reaction temperature, and time on the reaction conversion rate and on the size and size distribution of the AgNPs was investigated. Polymer films doped with AgNPs were tested for their antibacterial activity against Gram-negative bacteria. The antimicrobial properties of AgNPs/PMMA films were found to depend on NP concentration, size, and distribution. The proposed one-step synthesis of functional polymer containing AgNPs is environmentally friendly, experimentally simple, and extremely quick. It opens up new possibilities in the development of antimicrobial coatings for medical and sanitation applications.

  7. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the accuracy of TOF determination. An improved variable step size adaptive algorithm with a comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. Simulation results demonstrated the performance advantage of the proposed TOF determination method over existing methods. Compared with the conventional fixed step size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates of various thicknesses. They indicated that the proposed TOF determination method is robust even under low SNR conditions, and that ultrasonic thickness measurement accuracy can be significantly improved.
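
    A minimal sketch of a variable step-size adaptive filter in the spirit of the algorithms compared above. This is a generic Kwong-style step-size update applied to identifying a known FIR system; the paper's comprehensive step-size control function is not reproduced, and all parameter values are illustrative:

```python
import numpy as np

def vss_lms(x, d, taps=8, mu0=0.02, alpha=0.97, gamma=0.01,
            mu_min=1e-4, mu_max=0.05):
    """Variable step-size LMS: the step size grows while the error is
    large and shrinks as the filter converges (illustrative only)."""
    w = np.zeros(taps)
    mu = mu0
    errs = []
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - w @ u                  # a priori estimation error
        mu = float(np.clip(alpha * mu + gamma * e * e, mu_min, mu_max))
        w = w + mu * e * u                # LMS weight update
        errs.append(e * e)
    return w, np.array(errs)

# Identify a known 3-tap FIR system from white-noise input (noise-free).
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, h)[:len(x)]
w, errs = vss_lms(x, d)
```

    The identified taps converge to h while the squared error decays, illustrating the large-step/fast-convergence, small-step/low-misadjustment trade-off the abstract describes.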

  8. Mitigating Handoff Call Dropping in Wireless Cellular Networks: A Call Admission Control Technique

    NASA Astrophysics Data System (ADS)

    Ekpenyong, Moses Effiong; Udoh, Victoria Idia; Bassey, Udoma James

    2016-06-01

    Handoff management has been an important but challenging issue in the field of wireless communication. It seeks to maintain seamless connectivity of mobile users changing their points of attachment from one base station to another. This paper derives a call admission control model and establishes an optimal step-size coefficient (k) that regulates the admission probability of handoff calls. An operational CDMA network carrier was investigated through the analysis of empirical data collected over a period of 1 month, to verify the performance of the network. Our findings revealed that approximately 23 % of calls in the existing system were lost, while 40 % of the calls (on average) were successfully admitted. A simulation of the proposed model was then carried out under ideal network conditions to study the relationship between the various network parameters and validate our claim. Simulation results showed that increasing the step-size coefficient degrades the network performance. Even at the optimum step-size coefficient (k), the network could still be compromised in the presence of severe network crises, but our model was able to recover from these problems and continue to function normally.

  9. Facile fabrication of a silicon nanowire sensor by two size reduction steps for detection of alpha-fetoprotein biomarker of liver cancer

    NASA Astrophysics Data System (ADS)

    Binh Pham, Van; ThanhTung Pham, Xuan; Nhat Khoa Phan, Thanh; Thanh Tuyen Le, Thi; Chien Dang, Mau

    2015-12-01

    We present a facile technique that uses only conventional micro-techniques and two size-reduction steps to fabricate wafer-scale silicon nanowires (SiNWs) with widths of 200 nm. Initially, conventional lithography was used to pattern SiNWs with 2 μm width. The nanowire width was then decreased to 200 nm by two size-reduction steps with isotropic wet etching. The fabricated SiNWs were further investigated in nanowire field-effect sensors. The electrical characteristics of the fabricated SiNW devices were characterized and pH sensitivity was investigated. A simple and effective surface modification process was then carried out to modify the SiNWs for subsequent binding of a desired receptor. The complete SiNW-based biosensor was then used to detect alpha-fetoprotein (AFP), one of the medically approved biomarkers for liver cancer diagnosis. Electrical measurements showed that the developed SiNW biosensor could detect AFP at concentrations of about 100 ng mL⁻¹. This concentration is lower than the AFP concentration necessary for liver cancer diagnosis.

  10. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    NASA Astrophysics Data System (ADS)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As a main part of this study, the Haralick texture descriptor has been proposed with different spatial window sizes in the RGB and La*b* color spaces, so that spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared across various sample sizes by Support Vector Machines using the k-fold cross-validation method. The results show that separation accuracy on mitotic and non-mitotic cellular pixels improves with increasing size of the spatial window.
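
    The co-occurrence statistics behind the Haralick descriptor can be sketched in a few lines. The example below builds a horizontal-offset gray-level co-occurrence matrix (GLCM) over a grayscale window and computes one Haralick property (contrast); the window sizes, offsets, and color spaces used in the study are not reproduced:

```python
import numpy as np

def glcm(win, levels=8):
    """Gray-level co-occurrence matrix for the horizontal (dx=1) offset,
    on an 8-bit image window quantized to `levels` gray levels."""
    q = np.clip((win.astype(float) / 256.0 * levels).astype(int),
                0, levels - 1)
    m = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[i, j] += 1.0
    return m / m.sum()          # normalize counts to joint probabilities

def contrast(m):
    """Haralick contrast: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

flat = np.full((8, 8), 100, dtype=np.uint8)        # uniform texture
checker = (np.indices((8, 8)).sum(0) % 2) * 255    # alternating pixels
c_flat = contrast(glcm(flat))
c_checker = contrast(glcm(checker))
```

    A uniform window yields zero contrast, while a rapidly alternating texture yields a high value, which is why such statistics separate the coarse texture of normal nuclei from the grainier mitotic figures.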

  11. Advantages offered by high average power picosecond lasers

    NASA Astrophysics Data System (ADS)

    Moorhouse, C.

    2011-03-01

    As electronic devices shrink in size to reduce material costs, device size, and weight, thinner materials are also utilized. Feature sizes are also decreasing, which is pushing manufacturers toward single-step laser direct-write processes as an attractive alternative to conventional, multiple-step photolithography processes, eliminating process steps and the cost of chemicals. The fragile nature of these thin materials makes them difficult to machine either mechanically or with conventional nanosecond-pulsewidth, Diode-Pumped Solid-State (DPSS) lasers. Picosecond laser pulses can cut materials with reduced damage regions and selectively remove thin films due to the reduced thermal effects of the shorter pulsewidth. Also, the high repetition rate allows high-speed processing for industrial applications. Selective removal of thin films for OLED patterning, silicon solar cells, and flat panel displays is discussed, as well as laser cutting of transparent materials with low melting point such as Polyethylene Terephthalate (PET). For many of these thin-film applications, where low pulse energy and high repetition rate are required, throughput can be increased by a novel technique that uses multiple beams from a single laser source.

  12. 36 CFR 4.11 - Load, weight and size limits.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... INTERIOR VEHICLES AND TRAFFIC SAFETY § 4.11 Load, weight and size limits. (a) Vehicle load, weight and size limits established by State law apply to a vehicle operated on a park road. However, the superintendent may designate more restrictive limits when appropriate for traffic safety or protection of the road...

  13. Subject-Adaptive Real-Time Sleep Stage Classification Based on Conditional Random Field

    PubMed Central

    Luo, Gang; Min, Wanli

    2007-01-01

    Sleep staging is the pattern recognition task of classifying sleep recordings into sleep stages. This task is one of the most important steps in sleep analysis. It is crucial for the diagnosis and treatment of various sleep disorders, and also relates closely to brain-machine interfaces. We report an automatic, online sleep stager using electroencephalogram (EEG) signal based on a recently-developed statistical pattern recognition method, conditional random field, and novel potential functions that have explicit physical meanings. Using sleep recordings from human subjects, we show that the average classification accuracy of our sleep stager almost approaches the theoretical limit and is about 8% higher than that of existing systems. Moreover, for a new subject snew with limited training data Dnew, we perform subject adaptation to improve classification accuracy. Our idea is to use the knowledge learned from old subjects to obtain from Dnew a regulated estimate of CRF’s parameters. Using sleep recordings from human subjects, we show that even without any Dnew, our sleep stager can achieve an average classification accuracy of 70% on snew. This accuracy increases with the size of Dnew and eventually becomes close to the theoretical limit. PMID:18693884

  14. Simple scaling laws for the evaporation of droplets pinned on pillars: Transfer-rate- and diffusion-limited regimes.

    PubMed

    Hernandez-Perez, Ruth; García-Cordero, José L; Escobar, Juan V

    2017-12-01

    The evaporation of droplets can give rise to a wide range of interesting phenomena in which the dynamics of the evaporation are crucial. In this work, we find simple scaling laws for the evaporation dynamics of axisymmetric droplets pinned on millimeter-sized pillars. Different laws are found depending on whether evaporation is limited by the diffusion of vapor molecules or by the transfer rate across the liquid-vapor interface. For the diffusion-limited regime, we find that a mass-loss rate equal to 3/7 of that of a free-standing evaporating droplet brings a good balance between simplicity and physical correctness. We also find a scaling law for the evaporation of multicomponent solutions. The scaling laws found are validated against experiments of the evaporation of droplets of (1) water, (2) blood plasma, and (3) a mixture of water and polyethylene glycol, pinned on acrylic pillars of different diameters. These results shed light on the macroscopic dynamics of evaporation on pillars as a first step towards the understanding of other complex phenomena that may be taking place during the evaporation process, such as particle transport and chemical reactions.
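
    As an illustration of the diffusion-limited regime discussed above, the sketch below applies the classical Maxwell evaporation rate for a free-standing droplet and the paper's 3/7 prefactor for a pillar-pinned droplet; the physical constants are illustrative round numbers, not values from the study:

```python
import math

def mass_loss_rate(R, D, c_s, c_inf, pinned_on_pillar=True):
    """Diffusion-limited evaporation rate.

    Free droplet (Maxwell): dm/dt = -4*pi*R*D*(c_s - c_inf).
    For a droplet pinned on a pillar, the paper's scaling takes 3/7 of
    the free-standing rate.
    """
    rate = -4.0 * math.pi * R * D * (c_s - c_inf)
    return (3.0 / 7.0) * rate if pinned_on_pillar else rate

# Illustrative numbers: ~1 mm water droplet in dry air near room temperature.
R = 1e-3       # droplet radius, m
D = 2.5e-5     # diffusivity of water vapor in air, m^2/s
c_s = 2.3e-2   # saturation vapor concentration, kg/m^3
c_inf = 0.0    # dry far field
free = mass_loss_rate(R, D, c_s, c_inf, pinned_on_pillar=False)
pinned = mass_loss_rate(R, D, c_s, c_inf)
```

    Both rates are negative (mass loss), and the pinned rate is exactly 3/7 of the free rate by construction.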

  15. Simple scaling laws for the evaporation of droplets pinned on pillars: Transfer-rate- and diffusion-limited regimes

    NASA Astrophysics Data System (ADS)

    Hernandez-Perez, Ruth; García-Cordero, José L.; Escobar, Juan V.

    2017-12-01

    The evaporation of droplets can give rise to a wide range of interesting phenomena in which the dynamics of the evaporation are crucial. In this work, we find simple scaling laws for the evaporation dynamics of axisymmetric droplets pinned on millimeter-sized pillars. Different laws are found depending on whether evaporation is limited by the diffusion of vapor molecules or by the transfer rate across the liquid-vapor interface. For the diffusion-limited regime, we find that a mass-loss rate equal to 3/7 of that of a free-standing evaporating droplet brings a good balance between simplicity and physical correctness. We also find a scaling law for the evaporation of multicomponent solutions. The scaling laws found are validated against experiments of the evaporation of droplets of (1) water, (2) blood plasma, and (3) a mixture of water and polyethylene glycol, pinned on acrylic pillars of different diameters. These results shed light on the macroscopic dynamics of evaporation on pillars as a first step towards the understanding of other complex phenomena that may be taking place during the evaporation process, such as particle transport and chemical reactions.

  16. Effect of Inclusion Size and Distribution on the Corrosion Behavior of Medical-Device Grade Nitinol Tubing

    NASA Astrophysics Data System (ADS)

    Wohlschlögel, Markus; Steegmüller, Rainer; Schüßler, Andreas

    2014-07-01

    Nonmetallic inclusions in Nitinol, such as carbides (TiC) and intermetallic oxides (Ti4Ni2Ox), are known to be triggers for fatigue failure of Nitinol medical devices. These mechanically brittle inclusions are introduced during the melting process. As a result of hot and cold working in the production of Nitinol tubing, inclusions are fractionalized due to the mechanical deformation imposed. While the role of inclusions in Nitinol fatigue performance has been studied extensively in the past, their effect on Nitinol corrosion behavior has been investigated in only a limited number of studies. The focus of the present work was to understand the effect of inclusion size and distribution on the corrosion behavior of medical-device grade Nitinol tubing made from three different ingot sources at different manufacturing stages: (i) the initial stage (hollow: round bar with centric hole), (ii) after hot drawing, and (iii) after the final drawing step (final tubing dimensions: outer diameter 0.3 mm, wall thickness 0.1 mm). For one ingot source, two different material qualities were investigated. Potentiodynamic polarization tests were performed on electropolished samples of the above-mentioned stages. Results indicate that inclusion size, rather than inclusion quantity, affects the susceptibility of electropolished Nitinol to pitting corrosion.

  17. Balance exercise for persons with multiple sclerosis using Wii games: a randomised, controlled multi-centre study.

    PubMed

    Nilsagård, Ylva E; Forsberg, Anette S; von Koch, Lena

    2013-02-01

    The use of interactive video games is expanding within rehabilitation. The evidence base is, however, limited. Our aim was to evaluate the effects of a Nintendo Wii Fit® balance exercise programme on balance function and walking ability in people with multiple sclerosis (MS). A multi-centre, randomised, controlled, single-blinded trial with random allocation to exercise or no exercise. The exercise group participated in a programme of 12 supervised 30-min sessions of balance exercises using Wii games, twice a week for 6-7 weeks. Primary outcome was the Timed Up and Go test (TUG). In total, 84 participants were enrolled; four were lost to follow-up. After the intervention, there were no statistically significant differences between groups, but effect sizes for the TUG, TUGcognitive, and the Dynamic Gait Index (DGI) were moderate, and small for all other measures. Statistically significant improvements within the exercise group were present for all measures (large to moderate effect sizes) except walking speed and balance confidence. The non-exercise group showed statistically significant improvements for the Four Square Step Test and the DGI. In comparison with no intervention, a programme of supervised balance exercise using Nintendo Wii Fit® did not render statistically significant differences, but presented moderate effect sizes for several measures of balance performance.

  18. Uniform discotic wax particles via electrospray emulsification.

    PubMed

    Mejia, Andres F; He, Peng; Luo, Dawei; Marquez, Manuel; Cheng, Zhengdong

    2009-06-01

    We present a novel colloidal discotic system: the formation and self-assembly of wax microdisks with a narrow size distribution. Uniform wax emulsions are first fabricated by electrospraying molten alpha-eicosene. The size of the emulsions can be flexibly tailored by varying the flow rate of the discontinuous phase, its electric conductivity, and the applied voltage. The process of entrainment of wax droplets, vital for obtaining uniform emulsions, is facilitated by the reduction of the air-water surface tension and the density of the continuous phase. Uniform wax discotic particles are then produced via a phase transition, during which the formation of a layered structure of the rotator phase of wax converts the droplets, one by one, into oblate particles. The time span for the conversion from spherical emulsions to disk particles is linearly dependent on the size of droplets in the emulsion, indicating that the growth of a rotator phase from the surface to the center is the limiting step in the shape transition. Using polarized light microscopy, the self-assembly of wax disks is observed by increasing disk concentration and inducing depletion attraction among disks; several phases, such as isotropic, condensed, columnar stacking, and self-assembled columnar rods, appear sequentially during solvent evaporation of a suspension drop.

  19. Companions in Color: High-Resolution Imaging of Kepler’s Sub-Neptune Host Stars

    NASA Astrophysics Data System (ADS)

    Ware, Austin; Wolfgang, Angie; Kannan, Deepti

    2018-01-01

    A current problem in astronomy is determining how sub-Neptune-sized exoplanets form in planetary systems. These kinds of planets, which fall between 1 and 4 times the size of Earth, were discovered in abundance by the Kepler Mission and were typically found with relatively short orbital periods. The combination of their size and orbital period make them unusual in relation to the Solar System, leading to the question of how these exoplanets form and evolve. One possibility is that they have been influenced by distant stellar companions. To help assess the influence of these objects on the present-day, observed properties of exoplanets, we conduct a NIR search for visual stellar companions to the stars around which the Kepler Mission discovered planets. We use high-resolution images obtained with the adaptive optics systems at the Lick Observatory Shane-3m telescope to find these companion stars. Importantly, we also determine the effective brightness and distance from the planet-hosting star at which it is possible to detect these companions. Out of the 200 KOIs in our sample, 42 KOIs (21%) have visual companions within 3”, and 90 (46%) have them within 6”. These findings are consistent with recent high-resolution imaging from Furlan et al. 2017 that found at least one visual companion within 4” for 31% of sampled KOIs (37% within 4” for our sample). Our results are also complementary to Furlan et al. 2017, with only 17 visual companions commonly detected in the same filter. As for detection limits, our preliminary results indicate that we can detect companion stars < 3-5 magnitudes fainter than the planet-hosting star at a separation of ~ 1”. These detection limits will enable us to determine the probability that possible companion stars could be hidden within the noise around the planet-hosting star, an important step in determining the frequency with which these short-period, sub-Neptune-sized planets occur within binary star systems.

  20. Mechanism of Nitrogenase H2 Formation by Metal-Hydride Protonation Probed by Mediated Electrocatalysis and H/D Isotope Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khadka, Nimesh; Milton, Ross D.; Shaw, Sudipta

    Nitrogenase catalyzes the reduction of dinitrogen (N2) to ammonia (NH3) with obligatory reduction of protons (H+) to dihydrogen (H2) through a mechanism involving reductive elimination of two [Fe-H-Fe] bridging hydrides at its active site FeMo-cofactor. The overall rate-limiting step is associated with ATP-driven electron delivery from Fe protein, precluding isotope effect measurements on substrate reduction steps. Here, we use mediated bioelectrocatalysis to drive electron delivery to MoFe protein without Fe protein and ATP hydrolysis, thereby eliminating the normal rate-limiting step. The ratio of catalytic current in mixtures of H2O and D2O, the proton inventory, changes linearly with the D2O/H2O ratio, revealing that a single H/D is involved in the rate limiting step. Kinetic models, along with measurements that vary the electron/proton delivery rate and use different substrates, reveal that the rate-limiting step under these conditions is the H2 formation reaction. Altering the chemical environment around the active site FeMo-cofactor in the MoFe protein either by substituting nearby amino acids or transferring the isolated FeMo-cofactor into a different peptide matrix, changes the net isotope effect, but the proton inventory plot remains linear, consistent with an unchanging rate-limiting step. Density functional theory predicts a transition state for H2 formation where the proton from S-H+ moves to the hydride in Fe-H-, predicting the number and magnitude of the observed H/D isotope effect. This study not only reveals the mechanism of H2 formation, but also illustrates a strategy for mechanistic study that can be applied to other enzymes and to biomimetic complexes.
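
    The proton-inventory analysis described above can be sketched with the standard Gross-Butler relation (a textbook model, not the authors' code; the rate constant and fractionation factor below are hypothetical): for m protons "in flight" in the transition state, the rate at D2O atom fraction n is k_n = k0 * (1 - n + n*phi)^m, so a plot of k_n against n is linear exactly when a single proton is involved (m = 1).

```python
# Gross-Butler proton-inventory model (illustrative parameters only).
# m = 1 gives a straight line in n; m >= 2 gives a curved ("bowed") inventory.

def gross_butler(n, k0, phi, m):
    """Rate constant at D2O atom fraction n for m protons with fractionation factor phi."""
    return k0 * (1.0 - n + n * phi) ** m

fractions = [i / 10 for i in range(11)]

# Single-proton inventory: exactly linear in n.
k_single = [gross_butler(n, k0=1.0, phi=0.4, m=1) for n in fractions]
# Two-proton inventory: curved plot, distinguishable from the linear case.
k_double = [gross_butler(n, k0=1.0, phi=0.4, m=2) for n in fractions]

# Second differences vanish for a linear data set.
second_diff = [k_single[i + 1] - 2 * k_single[i] + k_single[i - 1] for i in range(1, 10)]
print(max(abs(d) for d in second_diff))  # ~0 for m = 1
```

    Checking the second differences of measured k_n against zero is one simple numerical way to test the linearity that the abstract reports.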

  1. A WENO-Limited, ADER-DT, Finite-Volume Scheme for Efficient, Robust, and Communication-Avoiding Multi-Dimensional Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norman, Matthew R

    2014-01-01

    The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.

  2. Anomalous metastability in a temperature-driven transition

    NASA Astrophysics Data System (ADS)

    Ibáñez Berganza, M.; Coletti, P.; Petri, A.

    2014-06-01

    The Langer theory of metastability provides a description of the lifetime and properties of the metastable phase of the Ising model field-driven transition, describing the magnetic-field-driven transition in ferromagnets and the chemical-potential-driven transition of fluids. A natural further step is to apply it to a transition driven by temperature, such as the one exhibited by the two-dimensional Potts model. For this model, a study based on the analytical continuation of the free energy (Meunier J. L. and Morel A., Eur. Phys. J. B, 13 (2000) 341) predicts the anomalous vanishing of the metastable temperature range in the large-system-size limit, an issue that has been controversial since the eighties. Using a GPU algorithm, we compare the Monte Carlo dynamics with the theory. For temperatures close to the transition we obtain agreement and characterize the dependence on the system size, which is essentially different from the Ising case. For smaller temperatures, we observe the onset of stationary states with non-Boltzmann statistics, not predicted by the theory.
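
    A minimal Metropolis sketch of the 2D q-state Potts model discussed above (illustrative only; the cited study used a GPU algorithm on far larger lattices, and the lattice size, q, and inverse temperature below are made-up values):

```python
import math
import random

# 2D q-state Potts model with energy E = -J * sum_<ij> delta(s_i, s_j), J = 1.
# A quench to beta > beta_c = ln(1 + sqrt(q)) lets domains order and coarsen.

def metropolis_sweep(spins, L, q, beta, rng):
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        old, new = spins[i][j], rng.randrange(q)
        nbrs = [spins[(i + 1) % L][j], spins[(i - 1) % L][j],
                spins[i][(j + 1) % L], spins[i][(j - 1) % L]]
        # dE = (satisfied bonds lost) - (satisfied bonds gained), with J = 1.
        dE = sum(n == old for n in nbrs) - sum(n == new for n in nbrs)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = new

rng = random.Random(0)
L, q = 16, 8
spins = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]
for _ in range(50):
    metropolis_sweep(spins, L, q, beta=2.0, rng=rng)  # beta > beta_c: ordered phase

counts = [sum(row.count(s) for row in spins) for s in range(q)]
print(max(counts) / (L * L))  # majority-state fraction grows as domains order
```

    Tracking such a quench over time (and system size) is the basic ingredient behind the metastability measurements the abstract describes.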

  3. A MEMS Micro-Translation Stage with Long Linear Translation

    NASA Technical Reports Server (NTRS)

    Ferguson, Cynthia K.; English, J. M.; Nordin, G. P.; Ashley, P. R.; Abushagur, M. A. G.

    2004-01-01

    A MEMS Micro-Translation Stage (MTS) actuator concept has been developed that is capable of traveling long distances while maintaining the low power, low voltage, and accuracy required by many applications, including optical coupling. The Micro-Translation Stage (MTS) uses capacitive electrostatic forces in a linear motor application, with stationary stators arranged linearly on both sides of a channel and matching rotors on a moveable shuttle. This creates a force that allows the shuttle to be pulled along the channel. It is designed to carry 100 micron-sized elements on the top surface, and can travel back and forth in the channel, either in a stepping fashion allowing many interim stops, or at constant adjustable speeds for a controlled scanning motion. The MTS travel range is limited only by the size of the fabrication wafer. Analytical modeling and simulations were performed based on the fabrication process to assure that the stresses, friction and electrostatic forces were acceptable for successful operation of this device. The translation forces were analyzed to be near 0.5 μN, with a 300 μm stop-to-stop time of 11.8 ms.

  4. Digital simulation of scalar optical diffraction: revisiting chirp function sampling criteria and consequences.

    PubMed

    Voelz, David G; Roggemann, Michael C

    2009-11-10

    Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
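
    The transfer-function (angular spectrum) method and its chirp-sampling criterion can be sketched as follows (an illustrative implementation with made-up grid parameters, not the authors' code). For the frequency-domain chirp exp(-iπλz f²), the common ideal-sampling condition is z ≤ N·dx²/λ, so z below that bound puts the transfer-function method in its well-sampled regime:

```python
import numpy as np

def fresnel_tf(u_in, dx, lam, z):
    """Propagate field u_in (N x N, sample spacing dx) a distance z
    via the Fresnel transfer-function (angular spectrum) method."""
    N = u_in.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * lam * z * (FX**2 + FY**2))  # Fresnel transfer function
    return np.fft.ifft2(np.fft.fft2(u_in) * H)

N, dx, lam = 256, 10e-6, 0.5e-6
z_crit = N * dx**2 / lam   # critical distance for ideal chirp sampling
z = 0.5 * z_crit           # below z_crit: well-sampled for the TF method

# Square aperture illuminated by a unit-amplitude plane wave.
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
u0 = ((np.abs(X) < 0.4e-3) & (np.abs(Y) < 0.4e-3)).astype(complex)
u1 = fresnel_tf(u0, dx, lam, z)

# H has unit modulus, so propagation conserves energy.
ratio = np.sum(np.abs(u1)**2) / np.sum(np.abs(u0)**2)
print(z_crit, ratio)
```

    Checking energy conservation and the z ≤ N·dx²/λ bound before each propagation step is a cheap guard against the under/oversampling artifacts the abstract discusses.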

  5. Printability of calcium phosphate powders for three-dimensional printing of tissue engineering scaffolds.

    PubMed

    Butscher, Andre; Bohner, Marc; Roth, Christian; Ernstberger, Annika; Heuberger, Roman; Doebelin, Nicola; von Rohr, Philipp Rudolf; Müller, Ralph

    2012-01-01

    Three-dimensional printing (3DP) is a versatile method to produce scaffolds for tissue engineering. In 3DP the solid is created by the reaction of a liquid selectively sprayed onto a powder bed. Despite the importance of the powder properties, there has to date been a relatively poor understanding of the relation between the powder properties and the printing outcome. This article aims at improving this understanding by looking at the link between key powder parameters (particle size, flowability, roughness, wettability) and printing accuracy. These powder parameters are determined as key factors with a predictive value for the final 3DP outcome. Promising results can be expected for mean particle size in the range of 20-35 μm, compaction rate in the range of 1.3-1.4, flowability in the range of 5-7 and powder bed surface roughness of 10-25 μm. Finally, possible steps and strategies in pushing the physical limits concerning improved quality in 3DP are addressed and discussed. Copyright © 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  6. Molecular dynamics modeling and simulation of void growth in two dimensions

    NASA Astrophysics Data System (ADS)

    Chang, H.-J.; Segurado, J.; Rodríguez de la Fuente, O.; Pabón, B. M.; LLorca, J.

    2013-10-01

    The mechanisms of growth of a circular void by plastic deformation were studied by means of molecular dynamics in two dimensions (2D). While previous molecular dynamics (MD) simulations in three dimensions (3D) have been limited to small voids (up to ≈10 nm in radius), this strategy allows us to study the behavior of voids of up to 100 nm in radius. MD simulations showed that plastic deformation was triggered by the nucleation of dislocations at the atomic steps of the void surface in the whole range of void sizes studied. The yield stress, defined as stress necessary to nucleate stable dislocations, decreased with temperature, but the void growth rate was not very sensitive to this parameter. Simulations under uniaxial tension, uniaxial deformation and biaxial deformation showed that the void growth rate increased very rapidly with multiaxiality but it did not depend on the initial void radius. These results were compared with previous 3D MD and 2D dislocation dynamics simulations to establish a map of mechanisms and size effects for plastic void growth in crystalline solids.

  7. A first generation dynamic ingress, redistribution and transport model of soil track-in: DIRT.

    PubMed

    Johnson, D L

    2008-12-01

    This work introduces a spatially resolved quantitative model, based on conservation of mass and first order transfer kinetics, for following the transport and redistribution of outdoor soil to, and within, the indoor environment by track-in on footwear. Implementations of the DIRT model examined the influence of room size, rug area and location, shoe size, and mass transfer coefficients for smooth and carpeted floor surfaces using the ratio of mass loading on carpeted to smooth floor surfaces as a performance metric. Results showed that in the limit for large numbers of random steps the dual aspects of deposition to and track-off from the carpets govern this ratio. Using recently obtained experimental measurements, historic transport and distribution parameters, cleaning efficiencies for the different floor surfaces, and indoor dust deposition rates to provide model boundary conditions, DIRT predicts realistic floor surface loadings. The spatio-temporal variability in model predictions agrees with field observations and suggests that floor surface dust loadings are constantly in flux; steady state distributions are hardly, if ever, achieved.
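
    The first-order transfer kinetics underlying such a track-in model can be sketched as a toy simulation (not the published DIRT code; all transfer coefficients and the soil pickup rate below are hypothetical). Each random step deposits a fraction of the shoe load to the local floor surface and re-entrains a fraction of that surface's load, with carpets depositing more and releasing less:

```python
import random

def simulate(steps, f_dep_carpet=0.5, f_dep_smooth=0.2,
             f_off_carpet=0.02, f_off_smooth=0.10, p_carpet=0.5, seed=1):
    """Random-step track-in with first-order deposition/track-off transfer."""
    rng = random.Random(seed)
    shoe, carpet, smooth = 1.0, 0.0, 0.0
    for _ in range(steps):
        shoe += 0.01  # fresh outdoor soil picked up between steps
        if rng.random() < p_carpet:   # step lands on carpet
            dep, off = f_dep_carpet * shoe, f_off_carpet * carpet
            carpet += dep - off
        else:                         # step lands on smooth floor
            dep, off = f_dep_smooth * shoe, f_off_smooth * smooth
            smooth += dep - off
        shoe += off - dep             # mass-conserving shoe update
    return carpet, smooth

carpet, smooth = simulate(10_000)
print(carpet / smooth)  # carpet-to-smooth loading ratio settles well above 1
```

    In the long-step limit, the loading ratio is governed by the deposition and track-off coefficients alone, mirroring the abstract's observation that deposition to and track-off from carpets govern the carpeted-to-smooth mass loading ratio.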

  8. Theoretical analysis of Lumry-Eyring models in differential scanning calorimetry

    PubMed Central

    Sanchez-Ruiz, Jose M.

    1992-01-01

    A theoretical analysis of several protein denaturation models (Lumry-Eyring models) that include a rate-limited step leading to an irreversibly denatured state of the protein (the final state) has been carried out. The differential scanning calorimetry transitions predicted for these models can be broadly classified into four groups: situations A, B, C, and C′. (A) The transition is calorimetrically irreversible but the rate-limited, irreversible step takes place with significant rate only at temperatures slightly above those corresponding to the transition. Equilibrium thermodynamics analysis is permissible. (B) The transition is distorted by the occurrence of the rate-limited step; nevertheless, it contains thermodynamic information about the reversible unfolding of the protein, which could be obtained upon the appropriate data treatment. (C) The heat absorption is entirely determined by the kinetics of formation of the final state and no thermodynamic information can be extracted from the calorimetric transition; the rate-determining step is the irreversible process itself. (C′) same as C, but, in this case, the rate-determining step is a previous step in the unfolding pathway. It is shown that ligand and protein concentration effects on transitions corresponding to situation C (strongly rate-limited transitions) are similar to those predicted by equilibrium thermodynamics for simple reversible unfolding models. It has been widely held in recent literature that experimentally observed ligand and protein concentration effects support the applicability of equilibrium thermodynamics to irreversible protein denaturation. The theoretical analysis reported here disfavors this claim. PMID:19431826
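
    The strongly rate-limited case (situation C) can be sketched with a one-step irreversible model N → F under a first-order Arrhenius rate (an illustrative kinetic calculation, not from the paper; the activation energy, pre-exponential factor, and scan rates below are hypothetical). During a scan at rate v, dx/dT = k(T)(1 − x)/v, and the apparent excess heat capacity is proportional to dx/dT, so the transition maximum shifts upward with faster scanning, unlike an equilibrium two-state transition:

```python
import math

def scan_t_max(v, Ea=300e3, lnA=100.0, T0=300.0, T1=400.0, dT=0.001):
    """Temperature (K) of the apparent heat-capacity maximum for scan rate v (K/s)."""
    R = 8.314
    x, best_T, best_rate = 0.0, T0, 0.0
    T = T0
    while T < T1 and x < 0.999999:
        k = math.exp(lnA - Ea / (R * T))   # first-order Arrhenius rate constant
        rate = k * (1.0 - x) / v           # dx/dT, proportional to Cp_exc
        if rate > best_rate:
            best_rate, best_T = rate, T
        x = min(x + rate * dT, 1.0)        # forward-Euler integration of x(T)
        T += dT
    return best_T

t_slow = scan_t_max(0.5 / 60)   # 0.5 K/min
t_fast = scan_t_max(2.0 / 60)   # 2.0 K/min
print(t_slow, t_fast)           # maximum shifts up at the faster scan rate
```

    This scan-rate dependence of the apparent transition temperature is the standard diagnostic for kinetic (rather than equilibrium) control of a DSC transition.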

  9. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    The likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i=1,...,m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
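
    The step-size condition quoted above can be illustrated on the simplest possible case (a one-dimensional toy, not the paper's mixture setting): for a fixed-point iteration x_{k+1} = x_k + eps·(m − x_k), a steepest-ascent step on a quadratic log-likelihood with unit curvature, the error contracts by |1 − eps| per step, so the iteration converges exactly when 0 < eps < 2, with eps = 1 giving one-step convergence:

```python
def iterate(x0, m, eps, steps):
    """Run x <- x + eps * (m - x) for a fixed number of steps."""
    x = x0
    for _ in range(steps):
        x = x + eps * (m - x)   # error is multiplied by (1 - eps) each step
    return x

m = 5.0
for eps in (0.5, 1.0, 1.5):
    print(eps, abs(iterate(0.0, m, eps, 60) - m))   # error -> 0 for 0 < eps < 2
print(2.5, abs(iterate(0.0, m, 2.5, 10) - m))       # error grows for eps > 2
```

    The same contraction argument, applied to the Hessian of the log-likelihood, is what bounds the admissible step size in the multidimensional case.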

  10. Controlling Surface Chemistry to Deconvolute Corrosion Benefits Derived from SMAT Processing

    NASA Astrophysics Data System (ADS)

    Murdoch, Heather A.; Labukas, Joseph P.; Roberts, Anthony J.; Darling, Kristopher A.

    2017-07-01

    Grain refinement through surface plastic deformation processes such as surface mechanical attrition treatment has shown measurable benefits for mechanical properties, but the impact on corrosion behavior has been inconsistent. Many factors obfuscate the particular corrosion mechanisms at work, including grain size, but also texture, processing contamination, and surface roughness. Many studies attempting to link corrosion and grain size have not been able to decouple these effects. Here we introduce a preprocessing step to mitigate the surface contamination effects that have been a concern in previous corrosion studies on plastically deformed surfaces; this allows comparison of corrosion behavior across grain sizes while controlling for texture and surface roughness. Potentiodynamic polarization in aqueous NaCl solution suggests that different corrosion mechanisms are responsible for samples prepared with the preprocessing step.

  11. Measuring the costs and benefits of conservation programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Einhorn, M.A.

    1985-07-25

    A step-by-step analysis of the effects of utility-sponsored conservation-promoting programs begins by identifying several factors which will reduce a program's effectiveness. The framework for measuring cost savings and designing a conservation program needs to consider the size of appliance subsidies, what form incentives should take, and how customer behavior will change as a result of incentives. Continual reevaluation is necessary to determine whether to change the size of rebates or whether to continue the program. Analytical tools for making these determinations are improving as conceptual breakthroughs in econometrics permit more rigorous analysis. 5 figures.

  12. A State Event Detection Algorithm for Numerically Simulating Hybrid Systems with Model Singularities

    DTIC Science & Technology

    2007-01-01

    the case of non-constant step sizes. Therefore the event dynamics after the predictor and corrector phases are, respectively, g^p_{k+1} = g(x_k + h_{k+1} … the Extrapolation Polynomial. Using a Taylor series expansion of the predicted event function eq. (6), g^p_{k+1} = g_k + h_{k+1} (dg^p/dt)|_{(x,t)=(x_k,t_k)} + (h_{k+1}^2/2!) (d^2g^p/dt^2)|_{(x,t)=(x_k,t_k)} + … , (8) we can determine the value of g^p_{k+1} as a function of the, yet undetermined, step size h_{k+1}. Recalling
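
    The event-localization idea in this record can be sketched as follows (a generic implementation under my own notation, not the report's code): given the event function value g_k and its first two time derivatives at t_k, the truncated Taylor expansion g^p(h) = g_k + h·g' + (h²/2!)·g'' is a quadratic in the candidate step size h, and the predicted event time corresponds to its smallest positive root:

```python
import math

def event_step(gk, dg, d2g):
    """Smallest positive root h of gk + h*dg + 0.5*h*h*d2g = 0, or None if no event."""
    if abs(d2g) < 1e-15:          # expansion is effectively linear in h
        if abs(dg) < 1e-15:
            return None
        h = -gk / dg
        return h if h > 0 else None
    disc = dg * dg - 2.0 * gk * d2g   # discriminant of 0.5*d2g*h^2 + dg*h + gk
    if disc < 0:
        return None                   # predicted trajectory never crosses zero
    roots = [(-dg - math.sqrt(disc)) / d2g, (-dg + math.sqrt(disc)) / d2g]
    pos = [h for h in roots if h > 0]
    return min(pos) if pos else None

# Example: g(t) = 1 - t^2 crosses zero at t = 1; expanding at t_k = 0.5
# gives g_k = 0.75, g' = -1, g'' = -2, so the event lies h = 0.5 ahead.
h = event_step(gk=0.75, dg=-1.0, d2g=-2.0)
print(h)  # 0.5, i.e. the event at t = 1.0
```

    Solving this quadratic (rather than bisecting) is what lets a hybrid-system integrator shrink the step size h_{k+1} directly to the predicted event.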

  13. A 10-step safety management framework for construction small and medium-sized enterprises.

    PubMed

    Gunduz, Murat; Laitinen, Heikki

    2017-09-01

    It is of great importance to develop an occupational health and safety management system (OHS MS) to form a systemized approach to improve health and safety. It is a known fact that thousands of accidents and injuries occur in the construction industry. Most of these accidents occur in small and medium-sized enterprises (SMEs). This article provides a 10-step user-friendly OHS MS for the construction industry. A quantitative OHS MS indexing method is also introduced in the article. The practical application of the system to real SMEs and its promising results are also presented.

  14. A Modeling Framework for Predicting the Size of Sediments Produced on Hillslopes and Supplied to Channels

    NASA Astrophysics Data System (ADS)

    Sklar, L. S.; Mahmoudi, M.

    2016-12-01

    Landscape evolution models rarely represent sediment size explicitly, despite the importance of sediment size in regulating rates of bedload sediment transport, river incision into bedrock, and many other processes in channels and on hillslopes. A key limitation has been the lack of a general model for predicting the size of sediments produced on hillslopes and supplied to channels. Here we present a framework for such a model, as a first step toward building a 'geomorphic transport law' that balances mechanistic realism with computational simplicity and is widely applicable across diverse landscapes. The goal is to take as inputs landscape-scale boundary conditions such as lithology, climate and tectonics, and predict the spatial variation in the size distribution of sediments supplied to channels across catchments. The model framework has two components. The first predicts the initial size distribution of particles produced by erosion of bedrock underlying hillslopes, while the second accounts for the effects of physical and chemical weathering during transport down slopes and delivery to channels. The initial size distribution can be related to the spacing and orientation of fractures within bedrock, which depend on the stresses and deformation experienced during exhumation and on rock resistance to fracture propagation. Other controls on initial size include the sizes of mineral grains in crystalline rocks, the sizes of cemented particles in clastic sedimentary rocks, and the potential for characteristic size distributions produced by tree throw, frost cracking, and other erosional processes. To model how weathering processes transform the initial size distribution we consider the effects of erosion rate and the thickness of soil and weathered bedrock on hillslope residence time. Residence time determines the extent of size reduction, for given values of model terms that represent the potential for chemical and physical weathering. Chemical weathering potential is parameterized in terms of mean annual precipitation and temperature, and the fraction of soluble minerals. Physical weathering potential can be parameterized in terms of topographic attributes, including slope, curvature and aspect. Finally, we compare model predictions with field data from Inyo Creek in the Sierra Nevada Mtns, USA.

  15. Adaptive time stepping for fluid-structure interaction solvers

    DOE PAGES

    Mayr, M.; Wall, W. A.; Gee, M. W.

    2017-12-22

    In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost makes this algorithm very appealing in all kinds of FSI applications.
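
    The step-size selection idea described above can be sketched with the standard elementary controller (a generic textbook rule, not the paper's FSI-specific estimator; the safety factor and clamping limits below are conventional but arbitrary choices): given an a posteriori error estimate err for the last step and a method of order p, the next step size follows h_new = h · safety · (tol/err)^(1/(p+1)), clamped so the step size never changes too abruptly:

```python
def next_step_size(h, err, tol, order, safety=0.9, fac_min=0.3, fac_max=2.0):
    """Elementary error-based step-size controller for a method of the given order."""
    if err <= 0.0:
        return h * fac_max                              # error negligible: grow the step
    factor = safety * (tol / err) ** (1.0 / (order + 1))
    return h * min(fac_max, max(fac_min, factor))       # clamp the change per step

# Error twice the tolerance -> shrink; error far below tolerance -> grow (clamped).
print(next_step_size(0.01, err=2e-4, tol=1e-4, order=2))
print(next_step_size(0.01, err=1e-8, tol=1e-4, order=2))
```

    The clamping is what provides the robustness the abstract emphasizes: a single bad error estimate cannot collapse or explode the step size.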

  17. Anisotropy in Ostwald ripening and step-terraced surface formation on GaAs(0 0 1): Experiment and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Kazantsev, D. M.; Akhundov, I. O.; Shwartz, N. L.; Alperovich, V. L.; Latyshev, A. V.

    2015-12-01

    Ostwald ripening and step-terraced morphology formation on the GaAs(0 0 1) surface during annealing in equilibrium conditions are investigated experimentally and by Monte Carlo simulation. Fourier and autocorrelation analyses are used to reveal surface relief anisotropy and provide information about the shape and size distribution of islands and pits. Two origins of surface anisotropy are revealed. At the initial stage of surface smoothing, crystallographic anisotropy is observed, which is caused presumably by the anisotropy of surface diffusion at GaAs(0 0 1). A difference of diffusion activation energies along the [1 1 0] and [1 1̄ 0] axes of the (0 0 1) face is estimated as ΔEd ≈ 0.1 eV from the comparison of experimental results and simulation. At later stages of surface smoothing the anisotropy of the surface relief is determined by the vicinal step direction. At the initial stage of step-terraced morphology formation the kinetics of monatomic island and pit growth agrees with the Ostwald ripening theory. At the final stage the size of islands and pits decreases due to their incorporation into the forming vicinal steps.

  18. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
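
    Step one of the approach above, finding the shortest-distance tree, can be sketched with a standard heap-based Dijkstra returning the tree as a parent map (a generic implementation, not the authors' code; the small looped network and pipe lengths below are made-up, and the paper's multisource extension is obtained simply by seeding the heap with every source at distance zero):

```python
import heapq

def shortest_distance_tree(adj, sources):
    """Dijkstra from one or more sources; returns (distances, parent map)."""
    dist = {s: 0.0 for s in sources}
    parent = {s: None for s in sources}
    heap = [(0.0, s) for s in sources]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry, skip
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, parent

# Small looped network: pipes as undirected edges with lengths.
edges = [("R", "A", 100), ("R", "B", 120), ("A", "B", 50), ("A", "C", 80), ("B", "C", 60)]
adj = {}
for u, v, w in edges:
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))

dist, parent = shortest_distance_tree(adj, sources=["R"])
print(dist["C"], parent["C"])
```

    The edges of the parent map form the tree handed to the NLP solver in step two, while the remaining (chord) pipes are fixed at the minimum allowable diameter.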

  19. Step size of the rotary proton motor in single FoF1-ATP synthase from a thermoalkaliphilic bacterium by DCO-ALEX FRET

    NASA Astrophysics Data System (ADS)

    Hammann, Eva; Zappe, Andrea; Keis, Stefanie; Ernst, Stefan; Matthies, Doreen; Meier, Thomas; Cook, Gregory M.; Börsch, Michael

    2012-02-01

    Thermophilic enzymes operate at high temperatures but show reduced activities at room temperature. They are in general more stable during preparation and, accordingly, are considered to be more rigid in structure. Crystallization is often easier compared to proteins from bacteria growing at ambient temperatures, especially for membrane proteins. The ATP-producing enzyme FoF1-ATP synthase from thermoalkaliphilic Caldalkalibacillus thermarum strain TA2.A1 is driven by a Fo motor consisting of a ring of 13 c-subunits. We applied a single-molecule Förster resonance energy transfer (FRET) approach using duty cycle-optimized alternating laser excitation (DCO-ALEX) to monitor the expected 13-stepped rotary Fo motor at work. New FRET transition histograms were developed to identify the smaller step sizes compared to the 10-stepped Fo motor of the Escherichia coli enzyme. Dwell time analysis revealed the temperature and the LDAO dependence of the Fo motor activity on the single molecule level. Back-and-forth stepping of the Fo motor occurs fast indicating a high flexibility in the membrane part of this thermophilic enzyme.

  20. 2D Si island nucleation on the Si(111) surface at initial and late growth stages: On the role of step permeability in pyramidlike growth

    NASA Astrophysics Data System (ADS)

    Rogilo, D. I.; Fedina, L. I.; Kosolobov, S. S.; Ranguelov, B. S.; Latyshev, A. V.

    2017-01-01

    Initial and late stages of 2D Si island nucleation and growth (2DNG) on extra-large (~100 μm) and medium-size (1-10 μm) atomically flat Si(111)-(7×7) terraces bordered by step bunches have been studied by in situ REM at T = 600-750 °C. At first, layer-by-layer 2DNG takes place on whole terraces, and the dependence of 2D island concentration on deposition rate R corresponds to a critical nucleus size i = 1. Continuous 2DNG triggers morphological instabilities: elongated pyramidlike waves and separate pyramids emerge on all terraces at T ≤ 720 °C and T = 750 °C, respectively. Both instabilities arise due to the imbalance of uphill/downhill adatom currents related to large Ehrlich-Schwöbel (ES) barriers and the permeability of straight [1 1 2̄]-type step edges. However, the first one is initiated by a dominant downhill adatom current to distant sinks: bunches, the waves' step edges, and "vacancy" islands emerging on terraces due to 2D island coalescence. As a result, the top layer size decreases to the critical terrace width λ where 2DNG takes place. From the analysis of the λ ∝ R^(−χ/2) scaling at T = 650 °C, we have found that i increases from i = 2 on a three-layer wave to i = 6-8 on a six-layer wave. This confirms the significance of the downhill adatom sink to distant steps related to the step permeability. The second instability type at T > 720 °C is related to the rise of the uphill adatom current due to a slightly larger ES barrier for step-up attachment compared to the step-down one (E_ES ≈ 0.9 eV [Phys. Rev. Lett. 111 (2013) 036105]). This leads to "second layer" 2D nucleation on top layers, which triggers the growth of separate pyramids. Because of the small difference between ES barriers, net uphill/downhill adatom currents are nearly equivalent, and therefore the layer coverage distributions of both instabilities display similar linear slopes.
