Sample records for step size based

  1. One size fits all electronics for insole-based activity monitoring.

    PubMed

    Hegde, Nagaraj; Bries, Matthew; Melanson, Edward; Sazonov, Edward

    2017-07-01

    Footwear based wearable sensors are becoming prominent in many areas of monitoring health and wellness, such as gait and activity monitoring. In our previous research we introduced an insole based wearable system, SmartStep, which is completely integrated in a socially acceptable package. From a manufacturing perspective, SmartStep's electronics had to be custom made for each shoe size, greatly complicating the manufacturing process. In this work we explore the possibility of making a universal electronics platform for SmartStep - SmartStep 3.0 - which can be used in the most common insole sizes without modifications. A pilot human subject experiment was run to compare the accuracy between the one-size-fits-all SmartStep 3.0 and the custom-sized SmartStep 2.0. A total of ~10 hours of data was collected in the pilot study involving three participants performing different activities of daily living while wearing SmartStep 2.0 and SmartStep 3.0. Leave-one-out cross validation resulted in a 98.5% average accuracy for SmartStep 2.0, while SmartStep 3.0 resulted in 98.3% accuracy, suggesting that SmartStep 3.0 can be as accurate as SmartStep 2.0 while fitting the most common shoe sizes.

  2. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    PubMed

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
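
    To illustrate the general idea of the second algorithm, the sketch below performs one angular-spectrum (Fourier BPM) propagation step whose step size shrinks when significant intensity approaches the window edge, where the absorbing boundary would otherwise alias energy back. The criterion, constants, and function names are illustrative assumptions, not the authors' published algorithms.

    ```python
    import numpy as np

    def bpm_step(field, dx, wavelength, dz_max, edge_frac=0.1, thresh=1e-3):
        # one angular-spectrum free-space propagation step over an adaptive dz
        n = field.size
        k = 2.0*np.pi/wavelength
        kx = 2.0*np.pi*np.fft.fftfreq(n, d=dx)
        kz = np.sqrt(np.maximum(k**2 - kx**2, 0.0))
        # shrink the step while significant intensity sits near the window edge
        edge = max(1, int(edge_frac*n))
        inten = np.abs(field)**2
        near_edge = max(inten[:edge].max(), inten[-edge:].max())
        dz = dz_max if near_edge < thresh*inten.max() else 0.25*dz_max
        return np.fft.ifft(np.fft.fft(field)*np.exp(1j*kz*dz)), dz
    ```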

  3. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Learn, R.; Feigenbaum, E.

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  4. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE PAGES

    Learn, R.; Feigenbaum, E.

    2016-05-27

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  5. Role of step size and max dwell time in anatomy based inverse optimization for prostate implants

    PubMed Central

    Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha

    2013-01-01

    In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size was 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323

  6. Steepest descent method implementation on unconstrained optimization problem using C++ program

    NASA Astrophysics Data System (ADS)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest Descent is known as the simplest gradient method. Recently, much research has been done on choosing an appropriate step size so as to reduce the objective function value progressively. In this paper, the properties of the steepest descent method from the literature are reviewed together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method due to its step size procedure is discussed. In order to test the performance of each step size, we run a steepest descent procedure in a C++ program. We implement it on an unconstrained optimization test problem with two variables, then compare the numerical results of each step size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure in each case of the problem.
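
    For context, here is a minimal sketch (in Python rather than the authors' C++ program) of steepest descent on a two-variable test problem, comparing an Armijo backtracking step with a fixed step size; the test function and constants are illustrative assumptions.

    ```python
    import numpy as np

    def f(x):   # Rosenbrock test function (an illustrative two-variable problem)
        return (1.0 - x[0])**2 + 100.0*(x[1] - x[0]**2)**2

    def grad(x):
        return np.array([-2.0*(1.0 - x[0]) - 400.0*x[0]*(x[1] - x[0]**2),
                         200.0*(x[1] - x[0]**2)])

    def steepest_descent(x0, step="armijo", fixed=1e-3, c=1e-4, tol=1e-6, itmax=20000):
        x = np.asarray(x0, float)
        for k in range(itmax):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            alpha = 1.0
            if step == "armijo":
                # backtrack until the sufficient-decrease condition holds
                while f(x - alpha*g) > f(x) - c*alpha*g.dot(g) and alpha > 1e-12:
                    alpha *= 0.5
            else:
                alpha = fixed      # fixed step size for comparison
            x = x - alpha*g
        return x, k

    print(steepest_descent([-1.2, 1.0]))
    ```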

  7. Simulation and experimental design of a new advanced variable step size Incremental Conductance MPPT algorithm for PV systems.

    PubMed

    Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir

    2016-05-01

    Improvement of the efficiency of photovoltaic systems based on new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and easy implementation without equipment updating. Many MPPT methods with fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional IC and proposed methods under different operational conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a flyback converter and a control circuit using a dsPIC30F4011. Both simulation and experimental designs are presented in several aspects. A comparative study between the proposed variable step size and fixed step size IC MPPT methods under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of speed of MPP tracking and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
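
    For illustration, a minimal sketch of a variable step-size incremental-conductance update follows; the scaling constant, limits, and sign convention are assumptions, not the authors' dsPIC implementation.

    ```python
    def ic_mppt_step(v, i, v_prev, i_prev, duty, n_scale=0.05,
                     d_min=0.05, d_max=0.95, step_max=0.02):
        dv, di = v - v_prev, i - i_prev
        if v <= 0:
            return duty                       # no valid operating point yet
        # err is zero at the MPP, since dP/dV = I + V*dI/dV
        err = di if dv == 0 else di/dv + i/v
        # variable step: shrinks automatically as the operating point nears the MPP
        step = min(step_max, n_scale*abs(err))
        # sign convention assumes increasing duty raises the PV operating voltage;
        # it flips for converter topologies where the opposite holds
        duty += step if err > 0 else -step
        return min(max(duty, d_min), d_max)
    ```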

  8. Effects of an aft facing step on the surface of a laminar flow glider wing

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Saiki, Neal

    1993-01-01

    A motor glider was used to perform a flight test study on the effects of aft facing steps in a laminar boundary layer. This study focuses on two dimensional aft facing steps oriented spanwise to the flow. The size and location of the aft facing steps were varied in order to determine the critical size that will force premature transition. Transition over a step was found to be primarily a function of Reynolds number based on step height. Both of the step height Reynolds numbers for premature and full transition were determined. A hot film anemometry system was used to detect transition.

  9. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    DOE PAGES

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...

    2016-08-09

    Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to the CFL restriction on the ratio of the time step size to the spatial step size that is typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.

  10. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    NASA Astrophysics Data System (ADS)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≃ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point t_n a new step size h_{n+1} = h(t_n; δ) such that h(t; δ) is a continuous function of t. In this paper we study the tolerance proportionality property under a discontinuous step-size policy that does not allow the step size to change when the ratio between two consecutive steps is close to unity. This theory is applied to obtain global error estimates for a few problems solved with the code Gauss2 [S. Gonzalez-Pinto, R. Rojas-Bello, Gauss2, a Fortran 90 code for second order initial value problems], based on an adaptive two-stage Runge-Kutta-Gauss method with this discontinuous step-size policy.
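
    For reference, the continuous step-size policy described above is commonly implemented with the standard controller sketched below, and the discontinuous variant that freezes the step when the ratio is near unity is sketched alongside. Constants are typical textbook values, not those of Gauss2.

    ```python
    def next_step(h, err, tol, order, safety=0.9, fac_min=0.2, fac_max=5.0):
        # classical controller: local error ~ C*h**(order+1), so scale h by
        # (tol/err)**(1/(order+1)), limited by safety and growth factors
        if err == 0.0:
            return h*fac_max
        fac = safety*(tol/err)**(1.0/(order + 1))
        return h*min(fac_max, max(fac_min, fac))

    def next_step_discontinuous(h_old, h_proposed, band=0.1):
        # the discontinuous policy studied above: keep the old step whenever
        # the proposed step ratio is close to unity
        return h_old if abs(h_proposed/h_old - 1.0) < band else h_proposed
    ```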

  11. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

    A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of different coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero-bit-planes, which can be extracted from the JPEG 2000 codestream by only parsing the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.

  12. Methodological aspects of an adaptive multidirectional pattern search to optimize speech perception using three hearing-aid algorithms

    NASA Astrophysics Data System (ADS)

    Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes

    2004-12-01

    In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies, comprising noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal-hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial for creating clear perceptual differences in the comparisons. The reliability also depends on the starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.

  13. A Conformational Transition in the Myosin VI Converter Contributes to the Variable Step Size

    PubMed Central

    Ovchinnikov, V.; Cecchini, M.; Vanden-Eijnden, E.; Karplus, M.

    2011-01-01

    Myosin VI (MVI) is a dimeric molecular motor that translocates backwards on actin filaments with a surprisingly large and variable step size, given its short lever arm. A recent x-ray structure of MVI indicates that the large step size can be explained in part by a novel conformation of the converter subdomain in the prepowerstroke state, in which a 53-residue insert, unique to MVI, reorients the lever arm nearly parallel to the actin filament. To determine whether the existence of the novel converter conformation could contribute to the step-size variability, we used a path-based free-energy simulation tool, the string method, to show that there is a small free-energy difference between the novel converter conformation and the conventional conformation found in other myosins. This result suggests that MVI can bind to actin with the converter in either conformation. Models of MVI/MV chimeric dimers show that the variability in the tilting angle of the lever arm that results from the two converter conformations can lead to step-size variations of ∼12 nm. These variations, in combination with other proposed mechanisms, could explain the experimentally determined step-size variability of ∼25 nm for wild-type MVI. Mutations to test the findings by experiment are suggested. PMID:22098742

  14. Active control of impulsive noise with symmetric α-stable distribution based on an improved step-size normalized adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yali; Zhang, Qizhi; Yin, Yixin

    2015-05-01

    In this paper, active control of impulsive noise with symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on an analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm needs neither parameter selection and threshold estimation nor cost function selection and complex gradient computation. Computer simulations suggest that the proposed algorithm is effective in attenuating SαS impulsive noise, and the algorithm has been implemented in an experimental ANC system. Experimental results show that the proposed scheme performs well for SαS impulsive noise attenuation.
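
    As a heavily hedged sketch of the general idea only (not the authors' FxLMS algorithm, which additionally involves a secondary-path filter), the snippet below applies a Gaussian weight to the step size of a normalized LMS update so that impulsive error samples produce almost no adaptation. All names and constants are illustrative assumptions.

    ```python
    import numpy as np

    def gaussian_normalized_lms(x, d, taps=32, mu0=0.01, sigma=1.0):
        # identify d from reference x with an adaptive FIR filter; the Gaussian
        # weight exp(-e^2/(2*sigma^2)) suppresses updates driven by impulsive
        # (alpha-stable) outliers in the error signal
        w = np.zeros(taps)
        y = np.zeros(len(d))
        for n in range(taps, len(x)):
            xn = x[n-taps:n][::-1]                  # most recent sample first
            y[n] = w @ xn
            e = d[n] - y[n]
            mu = mu0*np.exp(-e*e/(2.0*sigma**2))    # Gaussian step-size weighting
            w += mu*e*xn/(xn @ xn + 1e-8)           # normalized LMS update
        return w, y
    ```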

  15. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.

  16. Predict the fatigue life of crack based on extended finite element method and SVR

    NASA Astrophysics Data System (ADS)

    Song, Weizhen; Jiang, Zhansi; Jiang, Hui

    2018-05-01

    The extended finite element method (XFEM) and support vector regression (SVR) are used to predict the fatigue life of plate cracks. First, the XFEM is employed to calculate the stress intensity factors (SIFs) for given crack sizes. Then a prediction model can be built based on the functional relationship of the SIFs with the fatigue life or crack length. Finally, the prediction model is used to predict the SIFs at different crack sizes or different numbers of cycles. Because the accuracy of the forward Euler method is only ensured by a small step size, a new prediction method is presented to resolve the issue. Numerical examples were studied to demonstrate that the proposed method allows a larger step size and has high accuracy.

  17. Framework for Creating a Smart Growth Economic Development Strategy

    EPA Pesticide Factsheets

    This step-by-step guide can help small and mid-sized cities, particularly those that have limited population growth, areas of disinvestment, and/or a struggling economy, build a place-based economic development strategy.

  18. Surfactant-controlled polymerization of semiconductor clusters to quantum dots through competing step-growth and living chain-growth mechanisms.

    PubMed

    Evans, Christopher M; Love, Alyssa M; Weiss, Emily A

    2012-10-17

    This article reports control of the competition between step-growth and living chain-growth polymerization mechanisms in the formation of cadmium chalcogenide colloidal quantum dots (QDs) from CdSe(S) clusters by varying the concentration of anionic surfactant in the synthetic reaction mixture. The growth of the particles proceeds by step-addition from initially nucleated clusters in the absence of excess phosphinic or carboxylic acids, which adsorb as their anionic conjugate bases, and proceeds indirectly by dissolution of clusters, and subsequent chain-addition of monomers to stable clusters (Ostwald ripening), in the presence of excess phosphinic or carboxylic acid. Fusion of clusters by step-growth polymerization is an explanation for the consistent observation of so-called "magic-sized" clusters in QD growth reactions. Living chain-addition (chain addition with no explicit termination step) produces QDs over a larger range of sizes with better size dispersity than step-addition. Tuning the molar ratio of surfactant to Se²⁻ (S²⁻), the limiting ionic reagent, within the living chain-addition polymerization allows for stoichiometric control of QD radius without relying on reaction time.

  19. Avoiding Stair-Step Artifacts in Image Registration for GOES-R Navigation and Registration Assessment

    NASA Technical Reports Server (NTRS)

    Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John

    2016-01-01

    In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of a curve plotting input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.

  20. Study of CdTe quantum dots grown using a two-step annealing method

    NASA Astrophysics Data System (ADS)

    Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2006-02-01

    High size dispersion, large average quantum dot radius, and low volume ratio have been major hurdles in the development of quantum dot based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and a theoretical model of the absorption spectra have shown that quantum dots grown using two-step annealing have a lower average radius, lower size dispersion, higher volume ratio and a higher decrease in bulk free energy as compared to quantum dots grown conventionally.

  1. Contrast, size, and orientation-invariant target detection in infrared imagery

    NASA Astrophysics Data System (ADS)

    Zhou, Yi-Tong; Crawshaw, Richard D.

    1991-08-01

    Automatic target detection in IR imagery is a very difficult task due to variations in target brightness, shape, size, and orientation. In this paper, the authors present a contrast, size, and orientation invariant algorithm based on Gabor functions for detecting targets from a single IR image frame. The algorithm consists of three steps. First, it locates potential targets by using low-resolution Gabor functions, which resist noise and background clutter effects; then it removes false targets and eliminates redundant target points based on a similarity measure. These two steps mimic human vision processing but are different from Zeevi's Foveating Vision System. Finally, it uses both low- and high-resolution Gabor functions to verify target existence. This algorithm has been successfully tested on several IR images that contain multiple examples of military vehicles of different sizes and brightness in various background scenes and orientations.

  2. DNA bipedal motor walking dynamics: an experimental and theoretical study of the dependency on step size

    PubMed Central

    Khara, Dinesh C; Berger, Yaron; Ouldridge, Thomas E

    2018-01-01

    Abstract We present a detailed coarse-grained computer simulation and single molecule fluorescence study of the walking dynamics and mechanism of a DNA bipedal motor striding on a DNA origami. In particular, we study the dependency of the walking efficiency and stepping kinetics on step size. The simulations accurately capture and explain three different experimental observations. These include a description of the maximum possible step size, a decrease in the walking efficiency over short distances and a dependency of the efficiency on the walking direction with respect to the origami track. The former two observations were not expected and are non-trivial. Based on this study, we suggest three design modifications to improve future DNA walkers. Our study demonstrates the ability of the oxDNA model to resolve the dynamics of complex DNA machines, and its usefulness as an engineering tool for the design of DNA machines that operate in the three spatial dimensions. PMID:29294083

  3. Optimal setups for forced-choice staircases with fixed step sizes.

    PubMed

    García-Pérez, M A

    2000-01-01

    Forced-choice staircases with fixed step sizes are used in a variety of formats whose relative merits have never been studied. This paper presents a comparative study aimed at determining their optimal format. Factors included in the study were the up/down rule, the length (number of reversals), and the size of the steps. The study also addressed the issue of whether a protocol involving three staircases running for N reversals each (with a subsequent average of the estimates provided by each individual staircase) has better statistical properties than an alternative protocol involving a single staircase running for 3N reversals. In all cases the size of a step up was different from that of a step down, in the appropriate ratio determined by García-Pérez (Vision Research, 1998, 38, 1861-1881). The results of a simulation study indicate that a) there are no conditions in which the 1-down/1-up rule is advisable; b) different combinations of up/down rule and number of reversals appear equivalent in terms of precision and cost; c) using a single long staircase with 3N reversals is more efficient than running three staircases with N reversals each; d) to avoid bias and attain sufficient accuracy, threshold estimates should be based on at least 30 reversals; and e) to avoid excessive cost and imprecision, the size of the step up should be between 2/3 and 3/3 the (known or presumed) spread of the psychometric function. An empirical study with human subjects confirmed the major characteristics revealed by the simulations.
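
    To make the setup concrete, here is a minimal simulation sketch of one fixed-step-size, 2-down/1-up forced-choice staircase run to a fixed number of reversals; the logistic psychometric function, step sizes, and threshold estimator are illustrative assumptions.

    ```python
    import math, random

    def p_correct(level, threshold=0.0, spread=1.0, guess=0.5):
        # logistic psychometric function for a 2AFC task (guess rate 0.5)
        p = 1.0/(1.0 + math.exp(-4.0*(level - threshold)/spread))
        return guess + (1.0 - guess)*p

    def staircase_2down_1up(step_up=0.45, step_down=0.3, start=2.0, n_reversals=30):
        level, run, last_dir, reversals = start, 0, 0, []
        while len(reversals) < n_reversals:
            if random.random() < p_correct(level):
                run += 1
                if run < 2:
                    continue              # 2-down rule: need two correct in a row
                run, direction = 0, -1
                level -= step_down
            else:
                run, direction = 0, +1    # 1-up rule: any error raises the level
                level += step_up
            if last_dir and direction != last_dir:
                reversals.append(level)   # record level at each direction reversal
            last_dir = direction
        return sum(reversals)/len(reversals)   # threshold estimate from reversals
    ```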

  4. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
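
    A minimal sketch of the iteration follows, for a two-component univariate normal mixture: em_map is the usual successive-approximations (EM) map of the likelihood equations, and the generalized procedure moves a step of size w along the increment, with w = 1 recovering the basic procedure and the paper proving local convergence for 0 < w < 2. Data and starting values are up to the caller; all names here are illustrative.

    ```python
    import numpy as np

    def npdf(x, mu, s):
        return np.exp(-0.5*((x - mu)/s)**2)/(s*np.sqrt(2.0*np.pi))

    def em_map(theta, x):
        # one successive-approximations pass for a 2-component normal mixture
        p, mu1, mu2, s1, s2 = theta
        r = p*npdf(x, mu1, s1)
        r = r/(r + (1.0 - p)*npdf(x, mu2, s2))       # E-step responsibilities
        p_n = r.mean()                                # M-step updates
        mu1_n = (r*x).sum()/r.sum()
        mu2_n = ((1.0 - r)*x).sum()/(1.0 - r).sum()
        s1_n = np.sqrt((r*(x - mu1_n)**2).sum()/r.sum())
        s2_n = np.sqrt(((1.0 - r)*(x - mu2_n)**2).sum()/(1.0 - r).sum())
        return np.array([p_n, mu1_n, mu2_n, s1_n, s2_n])

    def iterate(x, theta0, w=1.0, iters=500):
        # step size w in (0, 2); w = 1 is the plain EM-type procedure
        theta = np.asarray(theta0, float)
        for _ in range(iters):
            theta = theta + w*(em_map(theta, x) - theta)
        return theta
    ```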

  5. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  6. Making It Count: Improving Estimates of the Size of Transgender and Gender Nonconforming Populations.

    PubMed

    Deutsch, Madeline B

    2016-06-01

    An accurate estimate of the number of transgender and gender nonconforming people is essential to inform policy and funding priorities and decisions. Historical reports of population sizes of 1 in 4000 to 1 in 50,000 have been based on clinical populations and likely underestimate the size of the transgender population. More recent population-based studies have found a 10- to 100-fold increase in population size. Studies that estimate population size should be population based, employ the two-step method to allow for collection of both gender identity and sex assigned at birth, and include measures to capture the range of transgender people with nonbinary gender identities.

  7. A multilayer concentric filter device to diminish clogging for separation of particles and microalgae based on size.

    PubMed

    Chen, Chih-Chung; Chen, Yu-An; Liu, Yi-Ju; Yao, Da-Jeng

    2014-04-21

    Microalgae species have great economic importance; they are a source of medicines, health foods, animal feeds, industrial pigments, cosmetic additives and biodiesel. Specific microalgae species collected from the environment must be isolated for examination and further application, but their varied size and culture conditions make their isolation using conventional methods, such as filtration, streaking plate and flow cytometric sorting, labour-intensive and costly. A separation device based on size is one of the most rapid, simple and inexpensive methods to separate microalgae, but this approach encounters major disadvantages of clogging and multiple filtration steps when the size of microalgae varies over a wide range. In this work, we propose a multilayer concentric filter device with varied pore size that is driven by a centrifugation force. The device, which includes multiple filter layers, was employed to separate a heterogeneous population of microparticles into several subpopulations by filtration in one step. A cross-flow to attenuate prospective clogging was generated by altering the rate of rotation instantly through the relative motion between the fluid and the filter according to the structural design of the device. Mixed microparticles of varied size were tested to demonstrate that clogging was significantly suppressed due to a highly efficient separation. Microalgae in a heterogeneous population collected from an environmental soil sample were separated and enriched into four subpopulations according to size in a one-step filtration process. A microalgae sample contaminated with bacteria and insect eggs was also tested to prove the decontamination capability of the device.

  8. Automatic stage identification of Drosophila egg chamber based on DAPI images

    PubMed Central

    Jia, Dongyu; Xu, Qiuping; Xie, Qian; Mio, Washington; Deng, Wu-Min

    2016-01-01

    The Drosophila egg chamber, whose development is divided into 14 stages, is a well-established model for developmental biology. However, visual stage determination can be a tedious, subjective and time-consuming task prone to errors. Our study presents an objective, reliable and repeatable automated method for quantifying cell features and classifying egg chamber stages based on DAPI images. The proposed approach is composed of two steps: 1) a feature extraction step and 2) a statistical modeling step. The egg chamber features used are egg chamber size, oocyte size, egg chamber ratio and distribution of follicle cells. Methods for determining the onset of the polytene stage and centripetal migration are also discussed. The statistical model uses linear and ordinal regression to explore the stage-feature relationships and classify egg chamber stages. Combined with machine learning, our method has great potential to enable discovery of hidden developmental mechanisms. PMID:26732176

  9. Least-squares finite element methods for compressible Euler equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Carey, G. F.

    1990-01-01

    A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the least-squares method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.

  10. Neuronal differentiation of human mesenchymal stem cells in response to the domain size of graphene substrates.

    PubMed

    Lee, Yoo-Jung; Seo, Tae Hoon; Lee, Seula; Jang, Wonhee; Kim, Myung Jong; Sung, Jung-Suk

    2018-01-01

    Graphene is a noncytotoxic monolayer platform with unique physical, chemical, and biological properties. It has been demonstrated that a graphene substrate may provide a promising biocompatible scaffold for stem cell therapy. Because chemical vapor deposited graphene has a two-dimensional polycrystalline structure, it is important to control the individual domain size to obtain desirable properties for the nano-material. However, the biological effects mediated by differences in the domain size of graphene have not yet been reported. On the basis of the control of graphene domains achieved by one-step growth (1step-G, small domain) and two-step growth (2step-G, large domain) processes, we found that the neuronal differentiation of bone marrow-derived human mesenchymal stem cells (hMSCs) depended highly on the graphene domain size. The density of defects at the domain boundaries in 1step-G graphene was higher (×8.5), and 1step-G had a relatively low (13% lower) water droplet contact angle compared with 2step-G graphene, leading to enhanced cell-substrate adhesion and upregulated neuronal differentiation of hMSCs. We confirmed that the strong interactions between cells and the defects at the domain boundaries in 1step-G graphene arise from their relatively high surface energy, and these interactions are stronger than those between cells and graphene surfaces. Our results may provide valuable information on the development of graphene-based scaffolds by clarifying which properties of the graphene domain influence cell adhesion efficacy and stem cell differentiation. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 106A: 43-51, 2018. © 2017 Wiley Periodicals, Inc.

  11. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show optimal resolution is achieved when the axial step size is half, and the angular step size is about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.

  12. An improved maximum power point tracking method for a photovoltaic system

    NASA Astrophysics Data System (ADS)

    Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes

    2016-06-01

    In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for photovoltaic (PV) systems is proposed. To achieve simultaneously a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. A second algorithm was proposed to address the wrong decisions that may be made at an abrupt change of the irradiation. The proposed auto-scaling variable step-size approach was compared with various other approaches from the literature, such as the classical fixed step-size, variable step-size, and a recent auto-scaling variable step-size maximum power point tracking approach. The simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.

  13. 2D stepping drive for hyperspectral systems

    NASA Astrophysics Data System (ADS)

    Endrödy, Csaba; Mehner, Hannes; Grewe, Adrian; Sinzinger, Stefan; Hoffmann, Martin

    2015-07-01

    We present the design, fabrication and characterization of a compact 2D stepping microdrive for pinhole array positioning. The miniaturized solution enables a highly integrated compact hyperspectral imaging system. Based on the geometry of the pinhole array, an inch-worm drive with electrostatic actuators was designed, resulting in a compact (1 cm²) positioning system featuring a step size of about 15 µm in a 170 µm displacement range. The high payload (20 mg) as required for the pinhole array and the compact system design exceed the known electrostatic inch-worm-based microdrives.

  14. Research on optimal DEM cell size for 3D visualization of loess terraces

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei

    2009-10-01

    In order to represent complex artificial terrains like the loess terraces in Shanxi Province in northwest China, a new 3D visual method, namely the Terraces Elevation Incremental Visual Method (TEIVM), is put forth by the authors. 406 elevation points and 14 enclosed constrained lines are sampled according to the TIN-based Sampling Method (TSM) and DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. In order to visualize the loess terraces well by an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs are converted to Grid-based DEMs (G-DEMs) using different combinations of cell size and EIV with a linear interpolating method called the Bilinear Interpolation Method (BIM). Our case study shows that the new visual method can visualize the loess terrace steps very well when the combination of cell size and EIV is reasonable. The optimal combination is a cell size of 1 m and an EIV of 6 m. Results of the case study also show that the cell size should be smaller than half of both the terraces' average width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the terraces' average height. The TEIVM and the results above are of great significance to the highly refined visualization of artificial terrains like loess terraces.

  15. Study on experimental characterization of carbon fiber reinforced polymer panel using digital image correlation: A sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Kashfuddoja, Mohammad; Prasath, R. G. R.; Ramji, M.

    2014-11-01

    In this work, the experimental characterization of a polymer matrix and a polymer-based carbon fiber reinforced composite laminate by a whole-field, non-contact digital image correlation (DIC) technique is presented. The properties are evaluated from full-field data obtained by DIC measurements in a series of tests per ASTM standards. The evaluated properties are compared with the results obtained from conventional testing and analytical models, and they are found to match closely. Further, the sensitivity of DIC parameters on material properties is investigated and their optimum values are identified. It is found that the subset size has more influence on material properties than the step size, and the predicted optimum values for both the matrix and composite material are consistent with each other. The aspect ratio of the region of interest (ROI) chosen for correlation should match the aspect ratio of the camera resolution for better correlation. Also, an open cutout panel made of the same composite laminate is considered to demonstrate the sensitivity of DIC parameters in predicting the complex strain field surrounding the hole. It is observed that the strain field surrounding the hole is much more sensitive to step size than to subset size. A lower step size produced a highly pixelated strain field, capturing local strain sensitivity at the expense of computational time along with a randomly scattered noisy pattern, whereas a higher step size mitigates the noisy pattern at the expense of losing details present in the data and can even alter the natural trend of the strain field, leading to erroneous maximum strain locations. Varying the subset size mainly presents a smoothing effect, eliminating noise from the strain field while maintaining the details in the data without altering their natural trend. However, an increase in subset size significantly reduces the strain data at the hole edge due to discontinuity in correlation. Also, the DIC results are compared with FEA predictions to ascertain suitable values of the DIC parameters for better accuracy.

  16. Short-term Time Step Convergence in a Climate Model

    DOE PAGES

    Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...

    2015-02-11

    A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4 in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides a clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
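
    The convergence assessment reduces to estimating a slope in log-log space. A minimal sketch, assuming the RMS temperature differences against a reference run have already been computed:

    ```python
    import numpy as np

    def convergence_order(dt_list, rms_list):
        # slope of log(RMS temperature difference) vs log(time step) gives the
        # observed order: ~1.0 for first order, ~0.4 as reported above
        slope, _ = np.polyfit(np.log(dt_list), np.log(rms_list), 1)
        return slope

    # e.g. convergence_order([2, 4, 8, 16], rms_values) with hypothetical rms_values
    ```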

  17. Compact and broadband antenna based on a step-shaped metasurface.

    PubMed

    Li, Ximing; Yang, Jingjing; Feng, Yun; Yang, Meixia; Huang, Ming

    2017-08-07

    A metasurface (MS) is highly useful for improving the performance of patch antennae and reducing their size due to their inherent and unique electromagnetic properties. In this paper, a compact and broadband antenna based on a step-shaped metasurface (SMS) at an operating frequency of 4.3 GHz is presented, which is fed by a planar monopole and enabled by selecting an SMS with high selectivity. The SMS consists of an array of metallic step-shaped unit cells underneath the monopole, which provide footprint miniaturization and bandwidth expansion. Numerical results show that the SMS-based antenna with a maximum size of 0.42λ0² (where λ0 is the operating wavelength in free space) exhibits a 22.3% impedance bandwidth (S11 < -10 dB) and a high gain of more than 7.15 dBi within the passband. Experimental results at microwave frequencies verify the performance of the proposed antenna, demonstrating substantial consistency with the simulation results. The compact and broadband antenna therefore predicts numerous potential applications within modern wireless communication systems.

  18. Reynolds number scaling to predict droplet size distribution in dispersed and undispersed subsurface oil releases.

    PubMed

    Li, Pu; Weng, Linlu; Niu, Haibo; Robinson, Brian; King, Thomas; Conmy, Robyn; Lee, Kenneth; Liu, Lei

    2016-12-15

    This study was aimed at testing the applicability of modified Weber number scaling with Alaska North Slope (ANS) crude oil, and developing a Reynolds number scaling approach for oil droplet size prediction for high viscosity oils. Dispersant to oil ratio and empirical coefficients were also quantified. Finally, a two-step Rosin-Rammler scheme was introduced for the determination of droplet size distribution. This new approach appeared more advantageous in avoiding the inconsistency in interfacial tension measurements, and consequently delivered concise droplet size prediction. Calculated and observed data correlated well based on Reynolds number scaling. The relation indicated that chemical dispersant played an important role in reducing the droplet size of ANS under different seasonal conditions. The proposed Reynolds number scaling and two-step Rosin-Rammler approaches provide a concise, reliable way to predict droplet size distribution, supporting decision making in chemical dispersant application during an offshore oil spill. Copyright © 2016 Elsevier Ltd. All rights reserved.
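
    As a hedged illustration of the distribution step only (the paper's actual two-step scheme and its Reynolds-number correlation coefficients are not reproduced here), a Rosin-Rammler cumulative volume fraction parameterized by a characteristic diameter d50 and a spread exponent might look like this:

    ```python
    import numpy as np

    def rosin_rammler_cdf(d, d50, spread=1.8):
        # cumulative volume fraction of droplets with diameter below d;
        # the ln(2) factor makes F(d50) = 0.5, and the spread exponent sets
        # the width of the distribution (values here are illustrative)
        return 1.0 - np.exp(-np.log(2.0)*(d/d50)**spread)
    ```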

  19. Targeting high value metals in lithium-ion battery recycling via shredding and size-based separation.

    PubMed

    Wang, Xue; Gaustad, Gabrielle; Babbitt, Callie W

    2016-05-01

    Development of lithium-ion battery recycling systems is a current focus of much research; however, significant research remains to optimize the process. One key area not studied is the utilization of mechanical pre-recycling steps to improve overall yield. This work proposes a pre-recycling process, including mechanical shredding and size-based sorting steps, with the goal of potential future scale-up to the industrial level. This pre-recycling process aims to achieve material segregation with a focus on the metallic portion and provide clear targets for subsequent recycling processes. The results show that contained metallic materials can be segregated into different size fractions at different levels. For example, for lithium cobalt oxide batteries, cobalt content has been improved from 35% by weight in the metallic portion before this pre-recycling process to 82% in the ultrafine (<0.5mm) fraction and to 68% in the fine (0.5-1mm) fraction, and been excluded in the larger pieces (>6mm). However, size fractions across multiple battery chemistries showed significant variability in material concentration. This finding indicates that sorting by cathode before pre-treatment could reduce the uncertainty of input materials and therefore improve the purity of output streams. Thus, battery labeling systems may be an important step towards implementation of any pre-recycling process. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. N-terminus of Cardiac Myosin Essential Light Chain Modulates Myosin Step-Size

    PubMed Central

    Wang, Yihua; Ajtai, Katalin; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta; Burghardt, Thomas P.

    2016-01-01

    Muscle myosin cyclically hydrolyzes ATP to translate actin. Ventricular cardiac myosin (βmys) moves actin with three distinct unitary step-sizes resulting from its lever-arm rotation and with step-frequencies that are modulated in a myosin regulation mechanism. The lever-arm associated essential light chain (vELC) binds actin by its 43 residue N-terminal extension. Unitary steps were proposed to involve the vELC N-terminal extension with the 8 nm step engaging the vELC/actin bond facilitating an extra ~19 degrees of lever-arm rotation while the predominant 5 nm step forgoes vELC/actin binding. A minor 3 nm step is the unlikely conversion of the completed 5 to the 8 nm step. This hypothesis was tested using a 17 residue N-terminal truncated vELC in porcine βmys (Δ17βmys) and a 43 residue N-terminal truncated human vELC expressed in transgenic mouse heart (Δ43αmys). Step-size and step-frequency were measured using the Qdot motility assay. Both Δ17βmys and Δ43αmys had significantly increased 5 nm step-frequency and coincident loss in the 8 nm step-frequency compared to native proteins suggesting the vELC/actin interaction drives step-size preference. Step-size and step-frequency probability densities depend on the relative fraction of truncated vELC and relate linearly to pure myosin species concentrations in a mixture containing native vELC homodimer, two truncated vELCs in the modified homodimer, and one native and one truncated vELC in the heterodimer. Step-size and step-frequency, measured for native homodimer and at two or more known relative fractions of truncated vELC, are surmised for each pure species by using a new analytical method. PMID:26671638

  1. 12 CFR 1022.54 - Duties of users making written firm offers of credit or insurance based on information contained...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... than the type size of the principal text on the same page, but in no event smaller than 12 point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (B) On the front side of the...

  2. 12 CFR 1022.54 - Duties of users making written firm offers of credit or insurance based on information contained...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... than the type size of the principal text on the same page, but in no event smaller than 12 point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (B) On the front side of the...

  3. 12 CFR 1022.54 - Duties of users making written firm offers of credit or insurance based on information contained...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... than the type size of the principal text on the same page, but in no event smaller than 12 point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (B) On the front side of the...

  4. A chaos wolf optimization algorithm with self-adaptive variable step-size

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of a wolf pack, fully simulating the predation behavior and prey distribution of wolves. It possesses three intelligent behaviors: migration, summons, and siege. The competition rule of "winner-take-all" and the update mechanism of "survival of the fittest" are also characteristics of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The results indicate that the CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate. Furthermore, it demonstrates high robustness and global searching ability.

  5. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    NASA Astrophysics Data System (ADS)

    Huang, Yu

    Solar energy has become one of the major alternative renewable energy options due to its huge abundance and accessibility. Due to the intermittent nature of sunlight, there is high demand for Maximum Power Point Tracking (MPPT) techniques when a Photovoltaic (PV) system is used to extract energy from the sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at relatively practical circumstances. First, a practical PV system model is studied, determining the series and shunt resistances that are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the object of the perturbation, deploying input impedance conversion to adjust the working voltage. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy, and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low insolation conditions, and continuous insolation variation.
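
    A minimal sketch of an adaptive-step P&O duty-ratio update of the kind described, where the perturbation size scales with the normalized power change (large far from the MPP, small near it); the helper name and constants are illustrative assumptions, not the thesis implementation:

    ```python
    def po_mppt_step(p, p_prev, duty, direction, m=0.05,
                     step_min=1e-4, step_max=0.02, d_min=0.05, d_max=0.95):
        dp = p - p_prev
        if dp < 0:
            direction = -direction        # power fell: reverse the perturbation
        # adaptive step: proportional to the normalized power change, clamped
        step = min(step_max, max(step_min, m*abs(dp)/max(p, 1e-9)))
        duty = min(max(duty + direction*step, d_min), d_max)
        return duty, direction
    ```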

  6. OGS#PETSc approach for robust and efficient simulations of strongly coupled hydrothermal processes in EGS reservoirs

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf

    2016-04-01

    A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature-dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned solution (or operator-splitting solution) may not work robustly for such strongly coupled problems, its applicability being limited to small time step sizes (e.g. 5-10 days), whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as for MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck, in northern Germany. Results show that the exact Newton-Raphson approach can also be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The use of a line search technique and modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.
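
    The solver internals live in PETSc's SNES, so the following is only a generic sketch of the globalization idea the abstract credits with robust convergence: a Newton-Raphson iteration whose step length is backtracked until the residual norm decreases. The toy two-equation system stands in for the coupled flow/heat residual and is purely illustrative.

```python
import numpy as np

def newton_linesearch(F, J, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson with a backtracking line search on ||F||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x
        dx = np.linalg.solve(J(x), -r)          # Newton direction
        alpha, r0 = 1.0, np.linalg.norm(r)
        # Backtrack until the residual norm actually decreases (Armijo-like).
        while np.linalg.norm(F(x + alpha * dx)) > (1.0 - 1e-4 * alpha) * r0:
            alpha *= 0.5
            if alpha < 1e-8:
                break
        x = x + alpha * dx
    return x

# Toy coupled system standing in for the flow/heat residual (illustrative only).
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] - x[1] ** 3 + 1.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, -3 * x[1] ** 2]])
print(newton_linesearch(F, J, [2.0, 2.0]))
```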

  7. Weak-guidance-theory review of dispersion and birefringence management by laser inscription

    NASA Astrophysics Data System (ADS)

    Zheltikov, A. M.; Reid, D. T.

    2008-01-01

    A brief review of laser inscription of micro- and nanophotonic structures in transparent materials is provided in terms of a compact and convenient formalism based on the theory of weak optical waveguides. We derive physically instructive approximate expressions allowing propagation constants of laser-inscribed micro- and nanowaveguides to be calculated as functions of the transverse waveguide size, refractive index step, and dielectric properties of the host material. Based on this analysis, we demonstrate that the dispersion-engineering capabilities of laser micromachining techniques are limited by the smallness of the refractive index step typical of laser-inscribed structures. However, laser inscription of waveguides in pre-formed micro- and nanostructures offers a variety of interesting options for fine dispersion and birefringence tuning of small-size waveguides and photonic wires.

  8. Between-monitor differences in step counts are related to body size: implications for objective physical activity measurement.

    PubMed

    Pomeroy, Jeremy; Brage, Søren; Curtis, Jeffrey M; Swan, Pamela D; Knowler, William C; Franks, Paul W

    2011-04-27

    The quantification of the relationships between walking and health requires that walking is measured accurately. We correlated different measures of step accumulation with body size, overall physical activity level, and glucose regulation. Participants were 25 men and 25 women, American Indians without diabetes (age 20-34 years), in Phoenix, Arizona, USA. We assessed steps/day during 7 days of free living, simultaneously with three different monitors (Accusplit-AX120, MTI-ActiGraph, and Dynastream-AMP). We assessed total physical activity during free living with doubly labeled water combined with resting metabolic rate measured by expired-gas indirect calorimetry. Glucose tolerance was determined during an oral glucose tolerance test. Based on observed counts in the laboratory, the AMP was the most accurate device, followed by the MTI and the AX120, respectively. The estimated energy cost of 1000 steps per day was lower for the AX120 than for the MTI or AMP. The correlation between AX120-assessed steps/day and waist circumference was significantly higher than the correlation between AMP steps and waist circumference. The differences in steps per day between the AX120 and both the AMP and the MTI were significantly related to waist circumference. Between-monitor differences in step counts influence the observed relationship between walking and obesity-related traits.

  9. Interventions to increase physical activity in middle-age women at the workplace: a randomized controlled trial.

    PubMed

    Ribeiro, Marcos Ausenka; Martins, Milton Arruda; Carvalho, Celso R F

    2014-01-01

    A four-group randomized controlled trial evaluated the impact of distinct workplace interventions on increasing physical activity (PA) and reducing anthropometric parameters in middle-age women. One hundred and ninety-five women age 40-50 yr who were employees of a university hospital and physically inactive in their leisure time were randomly assigned to one of four groups: minimal treatment comparator (MTC; n = 47), pedometer-based individual counseling (PedIC; n = 53), pedometer-based group counseling (PedGC; n = 48), and aerobic training (AT; n = 47). The outcomes were the total number of steps (primary outcome), those performed at moderate intensity (≥ 110 steps per minute), and weight and waist circumference (secondary outcomes). Evaluations were performed at baseline, at the end of a 3-month intervention, and 3 months after that. Data were presented as delta [(after 3 months-baseline) or (after 6 months-baseline)] and 95% confidence interval. To detect differences among the groups, a one-way ANOVA and a Holm-Sidak post hoc test were used (P < 0.05). The Cohen effect size was calculated, and an intention-to-treat approach was performed. Only the groups using pedometers (PedIC and PedGC) increased the total number of steps after 3 months (P < 0.05); the increase observed in the PedGC group (1475 steps per day) was higher than that in PedIC (512 steps per day, P < 0.05), with a larger effect size (1.4). The number of steps performed at moderate intensity also increased only in the PedGC group (845 steps per day, P < 0.05). No PA benefit was observed at 6 months. Women in the AT group did not modify their daily life PA but reduced anthropometric parameters after 3 and 6 months (P < 0.05). Our results show that in the workplace setting, a pedometer-based PA intervention with counseling is effective at increasing the daily number of steps, whereas AT is effective for weight loss.

  10. Absolute phase estimation: adaptive local denoising and global unwrapping.

    PubMed

    Bioucas-Dias, Jose; Katkovnik, Vladimir; Astola, Jaakko; Egiazarian, Karen

    2008-10-10

    The paper attacks absolute phase estimation with a two-step approach: the first step applies an adaptive local denoising scheme to the modulo-2π noisy phase; the second step applies a robust phase unwrapping algorithm to the denoised modulo-2π phase obtained in the first step. The adaptive local modulo-2π phase denoising is a new algorithm based on local polynomial approximations. The zero-order and the first-order approximations of the phase are calculated in sliding windows of varying size. The zero-order approximation is used for pointwise adaptive window size selection, whereas the first-order approximation is used to filter the phase in the obtained windows. For phase unwrapping, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [IEEE Trans. Image Process. 16, 698 (2007)] to the denoised wrapped phase. Simulations give evidence that the proposed algorithm yields state-of-the-art performance, enabling strong noise attenuation while preserving image details. (c) 2008 Optical Society of America
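
    For readers unfamiliar with wrapped phase, a minimal sketch of the modulo-2π arithmetic follows, together with naive row-then-column unwrapping (Itoh's method). This is not the paper's algorithm; the cited PUMA unwrapping is a far more robust, discontinuity-preserving energy minimization, and the local polynomial denoiser is likewise not reproduced here.

```python
import numpy as np

def wrap(phi):
    """Wrap absolute phase into the principal interval (-pi, pi]."""
    return np.angle(np.exp(1j * phi))

def unwrap2d_naive(psi):
    """Row-then-column unwrapping (Itoh's method); it fails at noisy
    discontinuities, which is exactly what PUMA is designed to survive."""
    return np.unwrap(np.unwrap(psi, axis=0), axis=1)

y, x = np.mgrid[0:64, 0:64]
true_phase = 0.15 * x + 0.08 * y                  # smooth absolute phase ramp
rng = np.random.default_rng(1)
noisy_wrapped = wrap(true_phase + 0.05 * rng.standard_normal((64, 64)))
recovered = unwrap2d_naive(noisy_wrapped)
offset = recovered - true_phase                   # defined up to 2*pi*k
print(float(np.std(offset - offset.mean())))      # small residual at low noise
```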

  11. Sol-gel preparation of hydrophobic silica antireflective coatings with low refractive index by base/acid two-step catalysis.

    PubMed

    Cai, Shuang; Zhang, Yulu; Zhang, Hongli; Yan, Hongwei; Lv, Haibing; Jiang, Bo

    2014-07-23

    Hydrophobic antireflective coatings with a low refractive index were prepared via a base/acid-catalyzed two-step sol-gel process using tetraethylorthosilicate (TEOS) and methyltriethoxysilane (MTES) as precursors, respectively. The base-catalyzed hydrolysis of TEOS leads to the formation of a sol with spherical silica particles in the first step. In the second step, the acid-catalyzed MTES hydrolysis and condensation occur at the surface of the initial base-catalyzed spherical silica particles, which enlarges the silica particle size from 12.9 to 35.0 nm. By a dip-coating process, this hybrid sol gives an antireflective coating with a refractive index of about 1.15. Moreover, the water contact angle of the resulting coatings increases from 22.4° to 108.7° with increasing MTES content, affording the coatings excellent hydrophobicity. A "core-shell" particle growth mechanism of the hybrid sol was proposed, and the relationship between the microstructure of the silica sols and the properties of the AR coatings was investigated.

  12. Green synthesis of colloid silver nanoparticles and resulting biodegradable starch/silver nanocomposites.

    PubMed

    Cheviron, Perrine; Gouanvé, Fabrice; Espuche, Eliane

    2014-08-08

    Environmentally friendly silver nanocomposite films were prepared by an ex situ method consisting first of the preparation of colloidal silver dispersions and second of the dispersion of the as-prepared nanoparticles in a potato starch/glycerol matrix, keeping a green-chemistry process throughout the synthesis steps. In the first step, concerned with the preparation of the colloidal silver dispersions, water, glucose, and soluble starch were used as solvent, reducing agent, and stabilizing agent, respectively. The influence of the glucose amount and reaction time on the size and size distribution of the silver nanoparticles was investigated. Two silver nanoparticle populations, distinct in size (diameters around 5 nm for the first and from 20 to 50 nm for the second), were identified and remained distinguishable in the potato starch/glycerol-based nanocomposite films. Remarkably, lower mean nanoparticle sizes were evidenced by both TEM and UV-vis analyses in the nanocomposites in comparison with the respective colloidal silver dispersions. A dispersion mechanism based on the potential interactions developed between the nanoparticles and the polymer matrix and on the polymer chain lengths was proposed to explain this morphology. These nanocomposite films can be viewed as promising candidates for many applications in antimicrobial packaging, biomedicine, and sensors. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. In Vitro and In Vivo Single Myosin Step-Sizes in Striated Muscle

    PubMed Central

    Burghardt, Thomas P.; Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin

    2016-01-01

    Myosin in muscle transduces ATP free energy into the mechanical work of moving actin. It has a motor domain transducer containing ATP and actin binding sites, and mechanical elements coupling the motor impulse to the myosin filament backbone, providing transduction/mechanical coupling. The mechanical coupler is a lever-arm stabilized by bound essential and regulatory light chains. The lever-arm rotates cyclically to impel bound filamentous actin. The linear actin displacement due to lever-arm rotation is the myosin step-size. A high-throughput quantum-dot-labeled actin in vitro motility assay (Qdot assay) measures motor step-size in the context of an ensemble of actomyosin interactions. The ensemble context imposes a constant-velocity constraint on myosins interacting with one actin filament. In a cardiac myosin producing multiple step-sizes, a "second characterization" is step-frequency, which adjusts a longer step-size to a lower frequency, maintaining a linear actin velocity identical to that from a shorter step-size, higher-frequency actomyosin cycle. The step-frequency characteristic involves and integrates myosin enzyme kinetics, mechanical strain, and other ensemble-affected characteristics. The high-throughput Qdot assay suits a new paradigm calling for wide surveillance of the vast number of disease- or aging-relevant myosin isoforms, in contrast with the alternative model calling for exhaustive research on a tiny subset of myosin forms. The zebrafish embryo assay (Z assay) performs single-myosin step-size and step-frequency assaying in vivo, combining single-myosin mechanical and whole-muscle physiological characterizations in one model organism. The Qdot and Z assays cover "bottom-up" and "top-down" assaying of myosin characteristics. PMID:26728749

  14. Variable aperture-based ptychographical iterative engine method

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various scientific research settings.

  15. Interaction of rate- and size-effect using a dislocation density based strain gradient viscoplasticity model

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung N.; Siegmund, Thomas; Tomar, Vikas; Kruzic, Jamie J.

    2017-12-01

    Size effects occur in non-uniformly plastically deformed metals confined to volumes on the micrometer or sub-micrometer scale. Such problems have been well studied using strain gradient rate-independent plasticity theories. Yet plasticity theories describing the time-dependent behavior of metals in the presence of size effects are presently limited, and there is no consensus about how size effects vary with strain rate or whether there is an interaction between them. This paper introduces a constitutive model which enables the analysis of complex load scenarios, including loading-rate sensitivity, creep, relaxation, and interactions thereof, under the consideration of plastic strain gradient effects. A strain gradient viscoplasticity constitutive model based on the Kocks-Mecking theory of dislocation evolution, namely the strain gradient Kocks-Mecking (SG-KM) model, is established and allows one to capture both rate and size effects, and their interaction. A formulation of the model in the finite element analysis framework is derived. Numerical examples are presented. In a special virtual creep test with the presence of plastic strain gradients, creep rates are found to diminish with the specimen size, and are also found to depend on the loading rate in an initial ramp-loading step. Stress relaxation in a solid medium containing cylindrical microvoids is predicted to increase with decreasing void radius and strain rate in a prior ramp-loading step.

  16. Optimal Padding for the Two-Dimensional Fast Fourier Transform

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.

    2011-01-01

    One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is divisible by a power of two. Because of this, padding grids (that are not already sized to a power of two) so that their size is the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids. For a two-dimensional grid, there are certain pad sizes that work better than others. Therefore, the need exists to generalize a strategy for determining optimal pad sizes. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second step is to transpose the resulting matrix. The third step is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that strikes a balance between optimizing the grid pad size with prime factors that are small (which are optimal for one-dimensional operations) and with prime factors that are large (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times and is not fine-tuned for any specific application. It increases the rate at which processor-requested data is found in the set-associative processor cache. Cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying grid sizes, because various computer architectures process commands differently. The test grid was 512×512. Using a 540×540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256×256 grid worked best. A Core2Duo computer preferred either a 1040×1040 (15 percent faster) or a 1008×1008 (30 percent faster) grid. There are many industries that can benefit from this algorithm, including optics, image-processing, signal-processing, and engineering applications.
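
    The article's balancing algorithm is not spelled out in this summary, but most padding strategies start from a search for the next "smooth" size, one whose prime factors are all small. A minimal sketch of that search, with the prime set chosen here as an assumption:

```python
def is_smooth(n, primes=(2, 3, 5, 7)):
    """True if n has no prime factor larger than those in the given set."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

def next_smooth(n):
    """Smallest size >= n whose prime factors are all small; a common
    proxy for 'FFT-friendly' grid sizes."""
    while not is_smooth(n):
        n += 1
    return n

# The article's algorithm additionally balances 1-D- and 2-D-optimal factors
# and tunes for cache behavior; this sketch only shows the smooth-size search
# such strategies start from.
for n in (512, 513, 1000):
    print(n, "->", next_smooth(n))   # 512 -> 512, 513 -> 525, 1000 -> 1000
```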

  17. Design of a single-step immunoassay principle based on the combination of an enzyme-labeled antibody release coating and a hydrogel copolymerized with a fluorescent enzyme substrate in a microfluidic capillary device.

    PubMed

    Wakayama, Hideki; Henares, Terence G; Jigawa, Kaede; Funano, Shun-ichi; Sueyoshi, Kenji; Endo, Tatsuro; Hisamoto, Hideaki

    2013-11-21

    A combination of an enzyme-labeled antibody release coating and a novel fluorescent enzyme substrate-copolymerized hydrogel in a microchannel for a single-step, no-wash microfluidic immunoassay is demonstrated. The hydrogel discriminates the free enzyme-conjugated antibody from the antigen-enzyme-conjugated-antibody immunocomplex based on the difference in molecular size. A selective and sensitive immunoassay, with a 10-1000 ng mL(-1) linear range, is reported.

  18. Recursive Directional Ligation Approach for Cloning Recombinant Spider Silks.

    PubMed

    Dinjaski, Nina; Huang, Wenwen; Kaplan, David L

    2018-01-01

    Recent advances in genetic engineering have provided a route to produce various types of recombinant spider silks. Different cloning strategies have been applied to achieve this goal (e.g., concatemerization, step-by-step ligation, recursive directional ligation). Here we describe recursive directional ligation as an approach that allows for facile modularity and control over the size of the genetic cassettes. This approach is based on sequential ligation of genetic cassettes (monomers) where the junctions between them are formed without interrupting key gene sequences with additional base pairs.

  19. Unstable vicinal crystal growth from cellular automata

    NASA Astrophysics Data System (ADS)

    Krasteva, A.; Popova, H.; KrzyŻewski, F.; Załuska-Kotur, M.; Tonchev, V.

    2016-03-01

    In order to study unstable step motion on vicinal crystal surfaces we devise vicinal Cellular Automata. Each cell in the colony has a value equal to its height in the vicinal surface; initially the steps are regularly distributed. Another array keeps the adatoms, initially distributed randomly over the surface. The growth rule specifies that each adatom at the right nearest-neighbor position of a (multi-)step attaches to it. The whole colony is updated at once, and then time is incremented. This execution of the growth rule is followed by compensation of the consumed particles and by diffusional update(s) of the adatom population. Two principal sources of instability are employed: biased diffusion and an infinite inverse Ehrlich-Schwoebel barrier (iiSE). Since these factors are not opposed by step-step repulsion, the formation of multi-steps is observed, but in general the step bunches preserve a finite width. We monitor the developing surface patterns and quantify the observations by scaling laws, with a focus on the eventual transition from a diffusion-limited to a kinetics-limited phenomenon. The time-scaling exponent of the bunch size N is 1/2 for the case of biased diffusion and 1/3 for the case of iiSE. Additional distinction is possible based on the time-scaling exponents of the multi-step size Nmulti: these are 0.36-0.4 (biased diffusion) and 1/4 (iiSE).
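
    A 1-D caricature of the iiSE instability can be sketched in a few lines: if each step is fed only from the terrace behind it, steps trailing wide terraces catch up with their forward neighbors and bunch. The paper's automata are 2-D and track explicit adatoms, so the rates and the no-crossing cap below are illustrative assumptions only.

```python
import numpy as np

# Minimal 1-D toy of unstable step flow: each step advances at a rate set
# only by the terrace behind it (an infinite inverse Ehrlich-Schwoebel
# barrier feeds a step exclusively from its upper terrace), which
# destabilizes the regular step train into bunches.
rng = np.random.default_rng(2)
L, n_steps, T, c = 400.0, 40, 5000, 0.01
x = np.sort(rng.uniform(0, L, n_steps))            # step positions on a ring

for _ in range(T):
    behind = (x - np.roll(x, 1)) % L               # upper-terrace width
    front = (np.roll(x, -1) - x) % L               # lower-terrace width
    # Advance proportionally to the supply; cap at half the gap ahead so
    # steps never cross in a synchronous update.
    x = (x + np.minimum(c * behind, 0.5 * front)) % L

gaps = np.sort((np.roll(x, -1) - x) % L)
print("narrowest terrace:", gaps[0], "widest terrace:", gaps[-1])
```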

  20. Mitigating Handoff Call Dropping in Wireless Cellular Networks: A Call Admission Control Technique

    NASA Astrophysics Data System (ADS)

    Ekpenyong, Moses Effiong; Udoh, Victoria Idia; Bassey, Udoma James

    2016-06-01

    Handoff management has been an important but challenging issue in the field of wireless communication. It seeks to maintain seamless connectivity of mobile users changing their points of attachment from one base station to another. This paper derives a call admission control model and establishes an optimal step-size coefficient (k) that regulates the admission probability of handoff calls. An operational CDMA network carrier was investigated through the analysis of empirical data collected over a period of 1 month, to verify the performance of the network. Our findings revealed that approximately 23 % of calls in the existing system were lost, while 40 % of the calls (on the average) were successfully admitted. A simulation of the proposed model was then carried out under ideal network conditions to study the relationship between the various network parameters and validate our claim. Simulation results showed that increasing the step-size coefficient degrades the network performance. Even at optimum step-size (k), the network could still be compromised in the presence of severe network crises, but our model was able to recover from these problems and still functions normally.

  1. Facile fabrication of a silicon nanowire sensor by two size reduction steps for detection of alpha-fetoprotein biomarker of liver cancer

    NASA Astrophysics Data System (ADS)

    Binh Pham, Van; ThanhTung Pham, Xuan; Nhat Khoa Phan, Thanh; Thanh Tuyen Le, Thi; Chien Dang, Mau

    2015-12-01

    We present a facile technique that uses only conventional micro-techniques and two size-reduction steps to fabricate wafer-scale silicon nanowires (SiNWs) with widths of 200 nm. Initially, conventional lithography was used to pattern SiNWs with a 2 μm width. The nanowire width was then decreased to 200 nm by two size-reduction steps with isotropic wet etching. The fabricated SiNWs were further investigated for use in nanowire field-effect sensors. The electrical characteristics of the fabricated SiNW devices were characterized and the pH sensitivity was investigated. A simple and effective surface modification process was then carried out to modify the SiNWs for subsequent binding of a desired receptor. The complete SiNW-based biosensor was then used to detect alpha-fetoprotein (AFP), one of the medically approved biomarkers for liver cancer diagnosis. Electrical measurements showed that the developed SiNW biosensor could detect AFP at concentrations of about 100 ng mL-1. This concentration is lower than the AFP concentration necessary for liver cancer diagnosis.

  2. Orbit and uncertainty propagation: a comparison of Gauss-Legendre-, Dormand-Prince-, and Chebyshev-Picard-based approaches

    NASA Astrophysics Data System (ADS)

    Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.

    2014-01-01

    We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.
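
    The embedded error control that such variable-step propagators rely on can be sketched generically; the following uses a low-order Heun/Euler pair purely for brevity, whereas VGL-IRK and DP8 use much higher-order formulas. The step-update exponent and safety factors are conventional choices, not those of the paper.

```python
import numpy as np

def integrate_adaptive(f, t0, y0, t_end, tol=1e-8):
    """Adaptive step-size integration with an embedded Heun/Euler pair."""
    t, y, h = t0, np.asarray(y0, dtype=float), (t_end - t0) / 100.0
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                    # Euler (order 1)
        y_high = y + 0.5 * h * (k1 + k2)      # Heun (order 2)
        err = np.linalg.norm(y_high - y_low)  # local error estimate
        if err <= tol:                        # accept the step
            t, y = t + h, y_high
        # Standard controller: grow or shrink h from the error estimate.
        h *= min(4.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

# Two-body test: circular orbit of unit radius and unit angular rate.
f = lambda t, y: np.array([y[2], y[3],
                           -y[0] / (y[0] ** 2 + y[1] ** 2) ** 1.5,
                           -y[1] / (y[0] ** 2 + y[1] ** 2) ** 1.5])
y_end = integrate_adaptive(f, 0.0, [1.0, 0.0, 0.0, 1.0], 2 * np.pi)
print(y_end)   # after one period the state returns near [1, 0, 0, 1]
```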

  3. Controlling CH3NH3PbI(3-x)Cl(x) Film Morphology with Two-Step Annealing Method for Efficient Hybrid Perovskite Solar Cells.

    PubMed

    Liu, Dong; Wu, Lili; Li, Chunxiu; Ren, Shengqiang; Zhang, Jingquan; Li, Wei; Feng, Lianghuan

    2015-08-05

    Methylammonium lead halide perovskite solar cells have become very attractive because they can be prepared with low-cost solution-processable technology, and their power conversion efficiency has increased from 3.9% to 20% in recent years. However, the high performance of perovskite photovoltaic devices depends on a complicated process to prepare compact perovskite films with large grain size. Herein, a new method is developed to achieve excellent CH3NH3PbI3-xClx films with fine morphology and crystallization, based on one-step deposition and a two-step annealing process. This method includes spin-coating deposition of the perovskite films from a precursor solution of PbI2, PbCl2, and CH3NH3I at a 1:1:4 molar ratio in dimethylformamide (DMF), followed by two-step annealing (TSA). The first annealing is achieved by a solvent-induced process in DMF to promote migration and interdiffusion of the solvent-assisted precursor ions and molecules and to realize large grain growth. The second annealing is conducted by a thermal-induced process to further improve the morphology and crystallization of the films. Compact perovskite films were successfully prepared with grain sizes up to 1.1 μm according to SEM observation. The PL decay lifetime and the optical energy gap for the film with two-step annealing are 460 ns and 1.575 eV, respectively, whereas they are 307 and 327 ns and 1.577 and 1.582 eV for the films annealed by the one-step thermal and one-step solvent processes. Based on the TSA process, the photovoltaic devices exhibit a best efficiency of 14% under AM 1.5G irradiation (100 mW·cm(-2)).

  4. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome.

    PubMed

    Kristunas, Caroline A; Smith, Karen L; Gray, Laura J

    2017-03-07

    The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other CRT designs leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor has an appropriate adjustment to the sample size calculation been established to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and to recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would have on the power of the trial and whether any inflation of the sample size would be required.
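
    The paper's two SW-CRT-specific adjusted design effects are not reproduced in this abstract. For orientation, a sketch of the kind of unequal-cluster-size adjustment proposed for parallel CRTs (a design effect inflated by the coefficient of variation of cluster sizes) is shown below; treat the formula as background, not as the authors' proposal.

```python
import math

def design_effect_unequal(m_mean, cv, icc):
    """Design effect for a parallel CRT with unequal cluster sizes.

    One widely used adjustment (not the SW-CRT-specific formulas studied in
    the paper): DE = 1 + ((cv^2 + 1) * m - 1) * ICC, where m is the mean
    cluster size and cv the coefficient of variation of cluster sizes.
    """
    return 1.0 + ((cv ** 2 + 1.0) * m_mean - 1.0) * icc

n_individual = 128   # illustrative n required under individual randomisation
for cv in (0.0, 0.4, 0.8):
    de = design_effect_unequal(m_mean=20, cv=cv, icc=0.05)
    print(f"cv={cv:.1f}  DE={de:.2f}  inflated n={math.ceil(n_individual * de)}")
```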

  5. Auxotonic to isometric contraction transitioning in a beating heart causes myosin step-size to down shift

    PubMed Central

    Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin

    2017-01-01

    Myosin motors in the cardiac ventriculum convert ATP free energy into the work of moving blood volume under pressure. The actin-bound motor cyclically rotates its lever-arm/light-chain complex, linking motor-generated torque to the myosin filament backbone and translating actin against resisting force. Previous research showed that the unloaded in vitro motor is described with high precision by single-molecule mechanical characteristics, including unitary step-sizes of approximately 3, 5, and 8 nm and their relative step-frequencies of approximately 13, 50, and 37%. The 3 and 8 nm unitary step-sizes are dependent on myosin essential light chain (ELC) N-terminus actin binding. Step-size and step-frequency quantitation specifies in vitro motor function, including duty-ratio, power, and strain-sensitivity metrics. In vivo, motors integrated into the muscle sarcomere form the more complex and hierarchically functioning muscle machine. The goal of the research reported here is to measure single-myosin step-size and step-frequency in vivo to assess how tissue integration impacts motor function. A photoactivatable GFP tags the ventriculum myosin lever-arm/light-chain complex in the beating heart of a live zebrafish embryo. Detected single-GFP emission reports time-resolved myosin lever-arm orientation, interpreted as step-size and step-frequency, providing single-myosin mechanical characteristics over the active cycle. Following the step-frequency of cardiac ventriculum myosin as it transitions from low to high force through relaxed, auxotonic, and isometric contraction phases indicates that the imposition of resisting force during contraction causes the motor to down-shift to the 3 nm step-size, accounting for >80% of all steps in the near-isometric phase. At peak force, ATP-initiated actomyosin dissociation is the predominant strain-inhibited transition in the native myosin contraction cycle. The proposed model for motor down-shifting and strain sensing involves ELC N-terminus actin binding. Overall, the approach is a unique bottom-up single-molecule mechanical characterization of a hierarchically functional native muscle myosin. PMID:28423017

  6. Characterizing 3D grain size distributions from 2D sections in mylonites using a modified version of the Saltykov method

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco; Llana-Fúnez, Sergio

    2016-04-01

    The understanding of creep behaviour in rocks requires knowledge of the 3D grain size distributions (GSD) that result from dynamic recrystallization processes during deformation. The methods to estimate the 3D grain size distribution directly (serial sectioning, synchrotron or X-ray-based tomography) are expensive, time-consuming and, in most cases and at best, challenging. This means that in practice grain size distributions are mostly derived from 2D sections. Although there are a number of methods in the literature to derive the actual 3D grain size distributions from 2D sections, the most popular in highly deformed rocks is the so-called Saltykov method. It has, though, two major drawbacks: the method assumes no interaction between grains, which is not true in the case of recrystallised mylonites; and it uses histograms to describe distributions, which limits the quantification of the GSD. The first aim of this contribution is to test whether the interaction between grains in mylonites, i.e. random grain packing, significantly affects the GSDs estimated by the Saltykov method. We test this using the random resampling technique in a large data set (n = 12298). The full data set is built from several parallel thin sections that cut a completely dynamically recrystallized quartz aggregate in a rock sample from a Variscan shear zone in NW Spain. The results prove that the Saltykov method is reliable as long as the number of grains is large (n > 1000). Assuming that a lognormal distribution is an optimal approximation for the GSD in a completely dynamically recrystallized rock, we introduce an additional step to the Saltykov method, which allows estimating a continuous probability distribution function of the 3D grain size population. The additional step takes the midpoints of the classes obtained by the Saltykov method and fits a lognormal distribution with a trust region using a non-linear least squares algorithm. The new protocol is named the two-step method. The conclusion of this work is that both the Saltykov and the two-step methods are accurate and simple enough to be useful in practice in rocks, alloys or ceramics with near-equant grains and expected lognormal distributions. The Saltykov method is particularly suitable for estimating the volumes of particular grain fractions, while the two-step method quantifies the full GSD (mean and standard deviation in log grain size). The two-step method is implemented in a free, open-source and easy-to-handle script (see http://marcoalopez.github.io/GrainSizeTools/).
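
    The second step of the two-step method is a non-linear least-squares fit of a lognormal to the Saltykov class midpoints; the authors provide their own script (GrainSizeTools, linked above). A hedged sketch of that fitting step with synthetic stand-in data:

```python
import numpy as np
from scipy.optimize import curve_fit

def lognorm_pdf(x, mu, sigma):
    """Lognormal probability density in terms of the log-space mean/std."""
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) \
        / (x * sigma * np.sqrt(2 * np.pi))

# Stand-ins for the Saltykov output: class midpoints and unfolded 3-D
# frequency densities (synthetic here; the real values come from the
# unfolding of 2-D section data).
rng = np.random.default_rng(0)
midpoints = np.linspace(5, 120, 15)
true_mu, true_sigma = 3.5, 0.4
freq = lognorm_pdf(midpoints, true_mu, true_sigma) \
       * (1 + 0.05 * rng.standard_normal(15))

# Second step: non-linear least-squares fit of a continuous lognormal.
(mu_hat, sigma_hat), _ = curve_fit(lognorm_pdf, midpoints, freq, p0=(3.0, 0.5))
print(round(mu_hat, 2), round(sigma_hat, 2))   # recovers ~3.5 and ~0.4
```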

  7. A Variable Step-Size Proportionate Affine Projection Algorithm for Identification of Sparse Impulse Response

    NASA Astrophysics Data System (ADS)

    Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong

    2009-12-01

    Proportionate adaptive algorithms have been proposed recently to accelerate convergence in the identification of sparse impulse responses. When the excitation signal is colored, and especially for speech, proportionate NLMS algorithms converge slowly. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more information from the input signals. However, its steady-state performance is limited by its constant step-size parameter. In this article we propose a variable step-size PAPA that cancels the a posteriori estimation error. This yields high convergence speed, using a large step size when the identification error is large, and then considerably decreases the steady-state misalignment, using a small step size after the adaptive filter has converged. Simulation results show that the proposed approach can greatly improve the steady-state misalignment without sacrificing the fast convergence of PAPA.
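
    PAPA's proportionate weighting is beyond an abstract-level sketch, but the variable step-size idea itself (large steps while misaligned, small steps at steady state) can be illustrated with a simpler NLMS cousin. Everything below, including the error-power-driven step rule, is an illustrative assumption rather than the authors' algorithm.

```python
import numpy as np

def vss_nlms(x, d, order=16, mu_max=1.0, mu_min=0.01, eps=1e-6):
    """Variable step-size NLMS: the step follows a smoothed error power, so
    the filter adapts fast while misaligned and settles with low
    misadjustment once converged."""
    w = np.zeros(order)
    mu = mu_max
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]        # most recent inputs first
        e = d[n] - w @ u
        mu = 0.95 * mu + 0.05 * min(mu_max, max(mu_min, e * e))
        w += mu * e * u / (u @ u + eps)         # normalized update
    return w

# Identify a sparse "echo path" from colored (AR(1)) excitation.
rng = np.random.default_rng(3)
h = np.zeros(16)
h[[2, 7]] = [1.0, -0.5]
x = np.zeros(4000)
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
print(np.round(vss_nlms(x, d), 2))              # close to h in this toy case
```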

  8. Criteria for software modularization

    NASA Technical Reports Server (NTRS)

    Card, David N.; Page, Gerald T.; Mcgarry, Frank E.

    1985-01-01

    A central issue in programming practice involves determining the appropriate size and information content of a software module. This study attempted to determine the effectiveness of two widely used criteria for software modularization, strength and size, in reducing fault rate and development cost. Data from 453 FORTRAN modules developed by professional programmers were analyzed. The results indicated that module strength is a good criterion with respect to fault rate, whereas arbitrary module size limitations inhibit programmer productivity. This analysis is a first step toward defining empirically based standards for software modularization.

  9. Model based Inverse Methods for Sizing Cracks of Varying Shape and Location in Bolt hole Eddy Current (BHEC) Inspections (Postprint)

    DTIC Science & Technology

    2016-02-10

    Model-based inverse methods were developed for sizing cracks of varying shape and location using bolt hole eddy current (BHEC) techniques. Data were acquired for a wide range of crack sizes and shapes, including mid-bore, corner, and through-thickness cracks, to select the most appropriate VIC-3D surrogate model for the subsequent crack-sizing inversion step. Inversion results for select mid-bore, through-thickness, and corner flaws are reported. SUBJECT TERMS: bolt hole eddy current (BHEC); mid-bore, corner, and through-thickness crack types; VIC-3D generated surrogate models.

  10. Enhancement of cell growth on honeycomb-structured polylactide surface using atmospheric-pressure plasma jet modification

    NASA Astrophysics Data System (ADS)

    Cheng, Kuang-Yao; Chang, Chia-Hsing; Yang, Yi-Wei; Liao, Guo-Chun; Liu, Chih-Tung; Wu, Jong-Shinn

    2017-02-01

    In this paper, we compare the growth of NIH-3T3 and Neuro-2A cells over 72 h on flat and honeycomb-structured PLA films, without and with a two-step atmospheric-pressure nitrogen-based plasma jet treatment. We developed a fabrication system for forming a uniform honeycomb structure on the PLA surface, which can produce honeycomb patterns with two different pore sizes, 3-4 μm and 7-8 μm. We applied a previously developed nitrogen-based atmospheric-pressure dielectric barrier discharge (DBD) jet system to treat the PLA films without and with the honeycomb structure. NIH-3T3 and the much smaller Neuro-2A cells were cultivated on the films under various surface conditions. The results show that the two-step plasma treatment in combination with a honeycomb structure can enhance cell growth on PLA film, provided the cell size is not too much smaller than the pore size of the honeycomb structure (e.g., NIH-3T3). Otherwise, cell growth is better on flat PLA film (e.g., Neuro-2A).

  11. Variable aperture-based ptychographical iterative engine method.

    PubMed

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various scientific research settings. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  12. VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal

    NASA Astrophysics Data System (ADS)

    Satheeskumaran, S.; Sabrigiriraj, M.

    2016-06-01

    Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) because they require few computations. However, they exhibit a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing but are not flexible. With field programmable gate arrays, pipelined architectures can be used to enhance system performance; pipelining improves the operating efficiency of the adaptive filter and saves power. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.

  13. Understanding and controlling the step bunching instability in aqueous silicon etching

    NASA Astrophysics Data System (ADS)

    Bao, Hailing

    Chemical etching of silicon has been widely used for more than half a century in the semiconductor industry. It not only forms the basis of current wafer-cleaning processes, it also serves as a powerful tool for creating a variety of surface morphologies for different applications. Its potential for controlling surface morphology at the atomic scale over micron-size regions is especially appealing. In spite of its wide usage, the chemistry of silicon etching is poorly understood, and many seemingly simple but fundamental questions have not been answered. As a result, the development of new etchants and new etching protocols is based on expensive and tedious trial-and-error experiments. A better understanding of the etching mechanism would direct the rational formulation of new etchants that produce controlled etch morphologies. In particular, micron-scale step bunches spontaneously develop on the vicinal Si(111) surface etched in KOH or other anisotropic aqueous etchants. The ability to control the size, orientation, density and regularity of these surface features would greatly improve the performance of microelectromechanical devices. This study is directed towards understanding the chemistry and step bunching instability in aqueous anisotropic etching of silicon through a combination of experimental techniques and theoretical simulations. To reveal the cause of the step bunching instability, kinetic Monte Carlo simulations were constructed based on an atomistic model of the silicon lattice and a modified kinematic wave theory. The simulations showed that inhomogeneity was the origin of step bunching, which was confirmed through STM studies of etch morphologies created under controlled flow conditions. To quantify the size of the inhomogeneities in different etchants and to clarify their effects, a five-parallel-trench pattern was fabricated. This pattern used a nitride mask to protect most regions of the wafer; five evenly spaced etch windows were opened to the Si(110) substrate. Combining data from these etched patterns and surface IR spectra, a modified mechanism was proposed which explained most experimental observations. Control of the step bunching instability was accomplished with a second micromachined etch-barrier pattern which consisted of a circular array of seventy-two long, narrow trenches in an etch mask. Using this pattern, well aligned, regularly shaped, evenly distributed, near-atomically flat micron-size terraces were produced controllably.

  14. Protein crystal growth in low gravity

    NASA Technical Reports Server (NTRS)

    Feigelson, Robert S.

    1994-01-01

    This research involved (1) using the Atomic Force Microscope (AFM) in a study of the growth of lysozyme crystals and (2) refinement of the design of the Thermonucleator, which controls separately the supersaturation required for the nucleation and growth of protein crystals. AFM studies of the (110) tetragonal face confirmed that lysozyme crystals grow by step propagation. There appears to be very little step pile-up in the growth regimes which we studied. The step height was measured at approximately 54 Å, which was equal to the (110) interplanar spacing. The AFM images showed areas of step retardation and the formation of pits. These defects ranged in size from 0.1 to 0.4 μm. The source of these defects was not determined. The redesign of the Thermonucleator produced an instrument based on thermoelectric technology which is both easier to use and more amenable to use in a microgravity environment. The use of thermoelectric technology resulted in a considerable size reduction, which will allow for the design of a multi-unit growth apparatus. The performance of the new apparatus was demonstrated to be the same as that of the original design.

  15. Blueprint for Acquisition Reform, Version 3.0

    DTIC Science & Technology

    2008-07-01

    The blueprint represents a substantial and immediate step forward in establishing the Coast Guard as a model mid-sized federal agency for acquisition processes: "The Coast Guard must become the model for mid-sized Federal agency acquisition." It references the DoD 5000 acquisition model (CG Major Systems Acquisition Manual) and a Deepwater Program Executive Officer (PEO) for System of Systems performance-based acquisition.

  16. Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method

    NASA Astrophysics Data System (ADS)

    Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.

    2017-10-01

    The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, a novel particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
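
    The level-by-level bookkeeping described above can be caricatured in a few lines; the movable-cellular-automaton mechanics themselves are far beyond this sketch, and the porosity-to-modulus law and scatter used below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def level_samples(E_matrix, porosity, n_samples=200):
    """Effective moduli of representative samples with random pore packing.
    The (1 - phi)^2 law and the 5% scatter are illustrative assumptions,
    standing in for the explicit automaton simulations at each scale."""
    scatter = 1.0 + 0.05 * rng.standard_normal(n_samples)
    return E_matrix * (1.0 - porosity) ** 2 * scatter

E = 210.0   # dense-matrix modulus in GPa (assumed value)
# Bimodal pore size distribution (N = 2 maxima) -> one level per pore family.
for level, phi in enumerate([0.10, 0.15], start=1):
    samples = level_samples(E, phi)
    E = samples.mean()   # effective input property for the next, coarser level
    print(f"level {level}: E_eff = {E:.1f} GPa, std = {samples.std():.1f}")
```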

  17. Effects of homogenization treatment on recrystallization behavior of 7150 aluminum sheet during post-rolling annealing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Zhanying; Zhao, Gang

    2016-04-15

    The effects of two homogenization treatments applied to the direct chill (DC) cast billet on the recrystallization behavior of 7150 aluminum alloy during post-rolling annealing have been investigated using the electron backscatter diffraction (EBSD) technique. Following hot and cold rolling to sheet, measured orientation maps, the recrystallization fraction and grain size, the misorientation angle and the subgrain size were used to characterize the recovery and recrystallization processes at different annealing temperatures. The results were compared between the conventional one-step homogenization and the new two-step homogenization, with the first step being a pretreatment at 250 °C. Al3Zr dispersoids with higher densities and smaller sizes were obtained after the two-step homogenization, which strongly retarded subgrain/grain boundary mobility and inhibited recrystallization. Compared with the conventional one-step homogenized samples, a significantly lower recrystallized fraction and a smaller recrystallized grain size were obtained under all annealing conditions after cold rolling in the two-step homogenized samples. - Highlights: • Effects of two homogenization treatments on recrystallization in 7150 Al sheets • Quantitative study of the recrystallization evolution during post-rolling annealing • Al3Zr dispersoids with higher densities and smaller sizes after two-step treatment • Higher recrystallization resistance of 7150 sheets with two-step homogenization.

  18. A Novel Method for Block Size Forensics Based on Morphological Operations

    NASA Astrophysics Data System (ADS)

    Luo, Weiqi; Huang, Jiwu; Qiu, Guoping

    Passive forensics analysis aims to find out how multimedia data were acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size of a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. The experimental results, evaluated on over 1300 natural images, show the effectiveness of the proposed method. Compared with an existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
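
    A simplified stand-in for this pipeline can be sketched by scoring candidate periods on the column sums of a horizontal difference filter; the morphological cleanup and the MLE stage of the actual method are not reproduced here.

```python
import numpy as np

def estimate_block_size(img, max_b=32):
    """Score candidate block sizes by how much boundary energy sits on a
    periodic grid of column positions (simplified; no morphology, no MLE)."""
    d = np.abs(img[:, 2:] + img[:, :-2] - 2.0 * img[:, 1:-1])  # 2nd differences
    profile = d.sum(axis=0)                    # boundary energy per column
    best_b, best_score = None, -np.inf
    for b in range(2, max_b + 1):
        mask = (np.arange(len(profile)) + 2) % b == 0
        score = profile[mask].mean() - profile[~mask].mean()
        if score > best_score:
            best_b, best_score = b, score
    return best_b

# Synthetic test: a smooth image made piecewise constant in 8-pixel blocks.
rng = np.random.default_rng(5)
img = np.cumsum(rng.standard_normal((256, 256)), axis=1)
for j in range(0, 256, 8):
    img[:, j:j + 8] = img[:, j:j + 8].mean(axis=1, keepdims=True)
print(estimate_block_size(img))                # expected: 8
```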

  19. Improvements of the particle-in-cell code EUTERPE for petascaling machines

    NASA Astrophysics Data System (ADS)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Kleiber, Ralf; Castejón, Francisco; Cela, José M.

    2011-09-01

    In the present work we report some performance measures and computational improvements recently carried out using the gyrokinetic code EUTERPE (Jost, 2000 [1] and Jost et al., 1999 [2]), which is based on the general particle-in-cell (PIC) method. The scalability of the code has been studied for up to sixty thousand processing elements and some steps towards a complete hybridization of the code were made. As a numerical example, non-linear simulations of Ion Temperature Gradient (ITG) instabilities have been carried out in screw-pinch geometry and the results are compared with earlier works. A parametric study of the influence of variables (step size of the time integrator, number of markers, grid size) on the quality of the simulation is presented.

  20. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    PubMed

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.

  1. Autonomous reinforcement learning with experience replay.

    PubMed

    Wawrzyński, Paweł; Tanwani, Ajay Kumar

    2013-05-01

    This paper considers the issues of efficiency and autonomy that are required to make reinforcement learning suitable for real-life control tasks. A real-time reinforcement learning algorithm is presented that repeatedly adjusts the control policy with the use of previously collected samples, and autonomously estimates the appropriate step-sizes for the learning updates. The algorithm is based on the actor-critic with experience replay whose step-sizes are determined on-line by an enhanced fixed point algorithm for on-line neural network training. An experimental study with simulated octopus arm and half-cheetah demonstrates the feasibility of the proposed algorithm to solve difficult learning control problems in an autonomous way within reasonably short time. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Performance analysis and kernel size study of the Lynx real-time operating system

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei; Gibson, James S.; Fernquist, Alan R.

    1993-01-01

    This paper analyzes the Lynx real-time operating system (LynxOS), which has been selected as the operating system for the Space Station Freedom Data Management System (DMS). The features of LynxOS are compared to those of other Unix-based operating systems (OS). The tools for measuring the performance of LynxOS, which include a high-speed digital timer/counter board, a device driver program, and an application program, are analyzed. The timings for interrupt response, process creation and deletion, threads, semaphores, shared memory, and signals are measured. The memory size of the DMS Embedded Data Processor (EDP) is limited. Moreover, virtual memory is not suitable for real-time applications because page-swap timing may not be deterministic. Therefore, the DMS software, including LynxOS, has to fit in the main memory of an EDP. To reduce the LynxOS kernel size, the following steps are taken: analyzing the factors that influence the kernel size; identifying the modules of LynxOS that may not be needed in an EDP; adjusting the system parameters of LynxOS; reconfiguring the device drivers used in LynxOS; and analyzing the symbol table. The reductions in kernel disk size, kernel memory size, and total kernel size from each step mentioned above are listed and analyzed.

  3. Influence of BMI and dietary restraint on self-selected portions of prepared meals in US women.

    PubMed

    Labbe, David; Rytz, Andréas; Brunstrom, Jeffrey M; Forde, Ciarán G; Martin, Nathalie

    2017-04-01

    The rise in obesity prevalence has been attributed in part to an increase in the food and beverage portion sizes selected and consumed by overweight and obese consumers. Nevertheless, evidence from observations of adults is mixed, and contradictory findings might reflect the use of small or unrepresentative samples. The objective of this study was i) to determine the extent to which BMI and dietary restraint predict self-selected portion sizes for a range of commercially available prepared savoury meals, and ii) to consider the importance of these variables relative to two previously established predictors of portion selection, expected satiation and expected liking. A representative sample of female consumers (N = 300, range 18-55 years) evaluated 15 frozen savoury prepared meals. For each meal, participants rated their expected satiation and expected liking, and selected their ideal portion using a previously validated computer-based task. Dietary restraint was quantified using the Dutch Eating Behaviour Questionnaire (DEBQ-R). Hierarchical multiple regression was performed on self-selected portions with age, hunger level, and meal familiarity entered as control variables in the first step of the model, expected satiation and expected liking as predictor variables in the second step, and DEBQ-R and BMI as exploratory predictor variables in the third step. The second and third steps significantly explained variance in portion size selection (18% and 4%, respectively). Larger portion selections were significantly associated with lower dietary restraint and with lower expected satiation. There was a positive relationship between BMI and portion size selection (p = 0.06) and between expected liking and portion size selection (p = 0.06). Our discussion considers future research directions, the limited variance explained by our model, and the potential for portion size underreporting by overweight participants. Copyright © 2016 Nestec S.A. Published by Elsevier Ltd. All rights reserved.

  4. Identification of optimal mask size parameter for noise filtering in 99mTc-methylene diphosphonate bone scintigraphy images.

    PubMed

    Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-11-01

    99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise filtering method based on local statistics of the image produces better results than a linear filter. However, the mask size has a significant effect on image quality. In this study, we have identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set had the input image and its corresponding three processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians. They selected one good smooth image from each set. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences between the image quality processed with the 5 and 7-pixel masks at a 5% cut-off. A statistically significant difference was found between the image quality processed with the 5 and 7-pixel masks at P=0.00528. The identified optimal mask size to produce a good smooth image was found to be 7 pixels. The best mask size for the Jong-Sen Lee filter was found to be 7×7 pixels, which yielded 99mTc-MDP bone scan images with the highest acceptable smoothness.
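
    The local-statistics filter studied here is commonly implemented as a Lee filter, in which the mask size is the window over which the local mean and variance are computed. The sketch below is a generic version with a crude global noise-variance estimate, not the authors' exact implementation.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lee_filter(img, mask=7, noise_var=None):
            """Local-statistics smoothing; `mask` is the window size in pixels."""
            img = img.astype(float)
            local_mean = uniform_filter(img, mask)
            local_sq = uniform_filter(img ** 2, mask)
            local_var = np.maximum(local_sq - local_mean ** 2, 0.0)
            if noise_var is None:
                noise_var = np.median(local_var)  # assumed noise estimate
            gain = local_var / (local_var + noise_var)
            # flat regions (low local variance) are pulled toward the mean,
            # while edges (high local variance) are left largely untouched
            return local_mean + gain * (img - local_mean)

        smoothed = lee_filter(np.random.poisson(5.0, (128, 128)), mask=7)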

  5. Anticipatory Postural Adjustment During Self-Initiated, Cued, and Compensatory Stepping in Healthy Older Adults and Patients With Parkinson Disease.

    PubMed

    Schlenstedt, Christian; Mancini, Martina; Horak, Fay; Peterson, Daniel

    2017-07-01

    To characterize anticipatory postural adjustments (APAs) across a variety of step initiation tasks in people with Parkinson disease (PD) and healthy subjects. Cross-sectional study. Step initiation was analyzed during self-initiated gait, perceptually cued gait, and compensatory forward stepping after platform perturbation. People with PD were assessed on and off levodopa. University research laboratory. Participants (N=31) comprised people with PD (n=19) and healthy age-matched subjects (n=12). Not applicable. Mediolateral (ML) size of APAs (calculated from center of pressure recordings), step kinematics, and body alignment. With respect to self-initiated gait, the ML size of APAs was significantly larger during the cued condition and significantly smaller during the compensatory condition (P<.001). Healthy subjects and patients with PD did not differ in body alignment during the stance phase prior to stepping. No significant group effect was found for ML size of APAs between healthy subjects and patients with PD. However, the reduction in APA size from cued to compensatory stepping was significantly less pronounced in PD off medication compared with healthy subjects, as indicated by a significant group by condition interaction effect (P<.01). No significant differences were found comparing patients with PD on and off medications. Specific stepping conditions had a significant effect on the preparation and execution of step initiation. Therefore, APA size should be interpreted with respect to the specific stepping condition. Across-task changes in people with PD were less pronounced compared with healthy subjects. Antiparkinsonian medication did not significantly improve step initiation in this mildly affected PD cohort. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  6. Critical Motor Number for Fractional Steps of Cytoskeletal Filaments in Gliding Assays

    PubMed Central

    Li, Xin; Lipowsky, Reinhard; Kierfeld, Jan

    2012-01-01

    In gliding assays, filaments are pulled by molecular motors that are immobilized on a solid surface. By varying the motor density on the surface, one can control the number of motors that pull simultaneously on a single filament. Here, such gliding assays are studied theoretically using Brownian (or Langevin) dynamics simulations and taking the local force balance between motors and filaments as well as the force-dependent velocity of the motors into account. We focus on the filament stepping dynamics and investigate how single motor properties such as stalk elasticity and step size determine the presence or absence of fractional steps of the filaments. We show that each gliding assay can be characterized by a critical motor number, N_c. Because of thermal fluctuations, fractional filament steps are only detectable as long as N < N_c. The corresponding fractional filament step size is l/N, where l is the step size of a single motor. We first apply our computational approach to microtubules pulled by kinesin-1 motors. For elastic motor stalks that behave as linear springs with a zero rest length, the critical motor number is found to be N_c = 4, and the corresponding distributions of the filament step sizes are in good agreement with the available experimental data. In general, the critical motor number depends on the elastic stalk properties and is reduced to N_c = 3 for linear springs with a nonzero rest length. Furthermore, N_c is shown to depend quadratically on the motor step size l. Therefore, gliding assays consisting of actin filaments and myosin-V are predicted to exhibit fractional filament steps up to motor number N = 31. Finally, we show that fractional filament steps are also detectable for a fixed average motor number as determined by the surface density (or coverage) of the motors on the substrate surface. PMID:22927953

  7. Low Complexity Compression and Speed Enhancement for Optical Scanning Holography

    PubMed Central

    Tsang, P. W. M.; Poon, T.-C.; Liu, J.-P.; Kim, T.; Kim, Y. S.

    2016-01-01

    In this paper we report a low complexity compression method that is suitable for compact optical scanning holography (OSH) systems with different optical settings. Our proposed method can be divided into 2 major parts. First, an automatic decision maker is applied to select the rows of holographic pixels to be scanned. This process enhances the speed of acquiring a hologram, and also lowers the data rate. Second, each row of down-sampled pixels is converted into a one-bit representation with delta modulation (DM). Existing DM-based hologram compression techniques suffer from the disadvantage that a core parameter, commonly known as the step size, has to be determined in advance. However, the correct value of the step size for compressing each row of the hologram depends on the dynamic range of the pixels, which can deviate significantly with the object scene, as well as with OSH systems having different optical settings. We have overcome this problem by incorporating a dynamic step-size adjustment scheme. The proposed method is applied to the compression of holograms acquired with 2 different OSH systems, demonstrating a compression ratio of over two orders of magnitude while preserving favorable fidelity of the reconstructed images. PMID:27708410
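
    In delta modulation, each sample is encoded as one bit indicating whether the tracked signal stepped up or down by the current step size. A minimal sketch of such an encoder with a dynamic step-size rule follows; the adaptation factor and rule are assumptions for illustration, not the paper's scheme.

        import numpy as np

        def adaptive_dm_encode(row, step0=0.1, k=1.5):
            """One-bit delta modulation of a row of pixels with a
            dynamically adjusted step size."""
            bits, approx, step, prev = [], float(row[0]), step0, 0
            for x in row[1:]:
                bit = 1 if x >= approx else -1
                # grow the step on slope overload, shrink on granular noise
                step = step * k if bit == prev else step / k
                approx += bit * step
                bits.append(bit)
                prev = bit
            return np.array(bits)

        bits = adaptive_dm_encode(np.sin(np.linspace(0, 20, 500)))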

  8. Purification of complex samples: Implementation of a modular and reconfigurable droplet-based microfluidic platform with cascaded deterministic lateral displacement separation modules

    PubMed Central

    Pudda, Catherine; Boizot, François; Verplanck, Nicolas; Revol-Cavalier, Frédéric; Berthier, Jean; Thuaire, Aurélie

    2018-01-01

    Particle separation in microfluidic devices is a common problem in sample preparation for biology. Deterministic lateral displacement (DLD) is efficiently implemented as a size-based fractionation technique to separate two populations of particles around a specific size. However, real biological samples contain components of many different sizes, and a single DLD separation step is not sufficient to purify these complex samples. When connecting several DLD modules in series, pressure balancing at the DLD outlets of each step becomes critical to ensure optimal separation efficiency. A generic microfluidic platform is presented in this paper to optimize pressure balancing when DLD separation is connected either to another DLD module or to a different microfluidic function. This is made possible by generating droplets at T-junctions connected to the DLD outlets. The droplets act as pressure controllers, simultaneously encapsulating the DLD-sorted particles and balancing the output pressures. The optimized pressures to apply to the DLD modules and the T-junctions are determined by a general model that ensures the equilibrium of the entire platform. The proposed separation platform is completely modular and reconfigurable, since the same predictive model applies to any cascaded DLD modules of the droplet-based cartridge. PMID:29768490

  9. Individual-based modelling of population growth and diffusion in discrete time.

    PubMed

    Tkachenko, Natalie; Weissmann, John D; Petersen, Wesley P; Lake, George; Zollikofer, Christoph P E; Callegari, Simone

    2017-01-01

    Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model prediction indicates that the population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss how the agreement of model-based estimates of first-arrival dates with archaeological dates depends on the IBM parameter settings.
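
    A single discrete-time update of such a model can be written as independent binomial draws per lattice site. The sketch below is a one-dimensional toy with assumed parameter values, not the authors' parallel implementation; movement is drawn from the survivors to keep the toy bookkeeping simple.

        import numpy as np

        rng = np.random.default_rng(0)
        K, p_birth, p_death, p_move = 50, 0.02, 0.01, 0.1  # assumed values
        pop = np.zeros(200, dtype=int)
        pop[100] = 10  # founding group at the centre of the lattice

        def step(pop):
            # logistic growth: per-capita birth probability declines toward K
            births = rng.binomial(pop, np.clip(p_birth * (1 - pop / K), 0, 1))
            deaths = rng.binomial(pop, p_death)
            movers = rng.binomial(pop - deaths, p_move)
            left = rng.binomial(movers, 0.5)  # unbiased nearest-neighbour moves
            right = movers - left
            new = pop + births - deaths - movers
            new[:-1] += left[1:]    # movers stepping left
            new[1:] += right[:-1]   # movers stepping right (edges absorb)
            return new

        for _ in range(500):
            pop = step(pop)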

  10. Supercritical Fluid Atomic Layer Deposition: Base-Catalyzed Deposition of SiO2.

    PubMed

    Kalan, Roghi E; McCool, Benjamin A; Tripp, Carl P

    2016-07-19

    An in situ FTIR thin film technique was used to study the sequential atomic layer deposition (ALD) reactions of SiCl4, tetraethyl orthosilicate (TEOS) precursors, and water on nonporous silica powder using supercritical CO2 (sc-CO2) as the solvent. The IR work on nonporous powders was used to identify the reaction sequence for using a sc-CO2-based ALD to tune the pore size of a mesoporous silica. The IR studies showed that only trace adsorption of SiCl4 occurred on the silica, and this was due to the desiccating power of sc-CO2 to remove the adsorbed water from the surface. This was overcome by employing a three-step reaction scheme involving a first step of adsorption of triethylamine (TEA), followed by SiCl4 and then H2O. For TEOS, a three-step reaction sequence using TEA, TEOS, and then water offered no advantage, as the TEOS simply displaced the TEA from the silica surface. A two-step reaction involving the addition of TEOS followed by H2O in a second step did lead to silica film growth. However, higher growth rates were obtained when using a mixture of TEOS/TEA in the first step. The hydrolysis of the adsorbed TEOS was also much slower than that of the adsorbed SiCl4, and this was overcome by using a mixture of water/TEA during the second step. While the three-step process with SiCl4 showed a higher linear growth rate than that obtained with the two-step process using TEOS/TEA, its use was not practical, as the HCl generated led to corrosion of our sc-CO2 delivery system. However, when applying the two-step ALD reaction using TEOS on an MCM-41 powder, a 0.21 nm decrease in pore diameter was obtained after the first ALD cycle, whereas further ALD cycles did not lead to further pore size reduction. This was attributed to the difficulty in removing the H2O in the pores after the first cycle.

  11. An Assessment Program Designed To Improve Communication Instruction through a Competency-Based Core Curriculum.

    ERIC Educational Resources Information Center

    Aitken, Joan E.; Neer, Michael R.

    This paper provides an example procedure used to design and install a program of assessment to improve communication instruction through a competency-based core curriculum at a mid-sized, urban university. The paper models the various steps in the process, and includes specific tests, forms, memos, course description, sources, and procedures which…

  12. Comparison of prostate contours between conventional stepping transverse imaging and Twister-based sagittal imaging in permanent interstitial prostate brachytherapy.

    PubMed

    Kawakami, Shogo; Ishiyama, Hiromichi; Satoh, Takefumi; Tsumura, Hideyasu; Sekiguchi, Akane; Takenaka, Kouji; Tabata, Ken-Ichi; Iwamura, Masatsugu; Hayakawa, Kazushige

    2017-08-01

    To compare prostate contours on conventional stepping transverse image acquisitions with those on Twister-based sagittal image acquisitions. Twenty prostate cancer patients who were planned to have permanent interstitial prostate brachytherapy were prospectively accrued. A transrectal ultrasonography probe was inserted, with the patient in the lithotomy position. Transverse images were obtained with stepping movement of the transverse transducer. In the same patient, sagittal images were also obtained through rotation of the sagittal transducer using the "Twister" mode. The differences in prostate size between the two types of image acquisition were compared. The relationships among the differences between the two types of image acquisitions, dose-volume histogram (DVH) parameters on the post-implant computed tomography (CT) analysis, and other factors were analyzed. The sagittal image acquisitions showed a larger prostate size compared to the transverse image acquisitions, especially in the anterior-posterior (AP) direction (p < 0.05). Interestingly, the relative size of the prostate apex in the AP direction in sagittal image acquisitions compared to that in transverse image acquisitions was correlated with DVH parameters such as D90 (R = 0.518, p = 0.019) and V100 (R = 0.598, p = 0.005). There were small but significant differences in the prostate contours between the transverse and the sagittal planning image acquisitions. Furthermore, our study suggested that the differences between the two types of image acquisitions might correlate with dosimetric results on CT analysis.

  13. Modeling myosin VI stepping dynamics

    NASA Astrophysics Data System (ADS)

    Tehver, Riina

    Myosin VI is a molecular motor that transports intracellular cargo and also acts as an anchor. The motor has been measured to have unusually large step-size variation, and it has been reported to make both long forward steps and short inchworm-like forward steps, as well as to step backward. We have been developing a model that incorporates this diverse stepping behavior in a consistent framework. Our model allows us to predict the dynamics of the motor under different conditions and to investigate the evolutionary advantages of the large step-size variation.

  14. Synthesis and Size Dependent Reflectance Study of Water Soluble SnS Nanoparticles

    PubMed Central

    Xu, Ying; Al-Salim, Najeh; Tilley, Richard D.

    2012-01-01

    Near-monodisperse, water-soluble SnS nanoparticles in the diameter range of 3-6 nm are synthesized by a facile, solution-based one-step approach using ethanolamine ligands. The optimal amount of triethanolamine is investigated. The effect of further heat treatment on the size of these SnS nanoparticles is discussed. A diffuse reflectance study of the SnS nanoparticles agrees with predictions from the quantum confinement model. PMID:28348295

  15. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet(α, −β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 − α, α − β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  16. Preparation of cellulose based microspheres by combining spray coagulating with spray drying.

    PubMed

    Wang, Qiao; Fu, Aiping; Li, Hongliang; Liu, Jingquan; Guo, Peizhi; Zhao, Xiu Song; Xia, Lin Hua

    2014-10-13

    Porous microspheres of regenerated cellulose with sizes in the range of 1-2 μm and composite microspheres of chitosan-coated cellulose with sizes of 1-3 μm were obtained through a two-step spray-assisted approach. The spray-coagulating process must be combined with a spray-drying step to guarantee the formation of stable cellulose microspheres. The approach has two main virtues. First, the preparation uses an aqueous solution of cellulose as the precursor, in the absence of organic solvent and surfactant. Second, neither a crosslinking agent nor a separate crosslinking process is required for the formation of stable microspheres. Moreover, the spray-drying step also provides the chance to encapsulate guests into the resultant cellulose microspheres. The potential application of the cellulose microspheres as a drug delivery vector has been studied in two PBS (phosphate-buffered saline) solutions with pH values of 4.0 and 7.4, to mimic the environments of the stomach and intestine, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Evolutionary pattern search algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
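
    The success-driven step-size adaptation at the heart of EPSAs is closely related to the classical 1/5th-success rule of evolution strategies. The following minimal (1+1)-ES sketch illustrates the idea on a toy objective; the constants and the sphere function are assumptions for illustration, not the EPSA rule analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda x: np.sum(x ** 2)        # toy objective (sphere function)
        x, sigma = rng.normal(size=5), 1.0  # current point, mutation step size

        for _ in range(2000):
            cand = x + sigma * rng.standard_normal(5)
            if f(cand) < f(x):
                x, sigma = cand, sigma * 1.5  # success: enlarge the step
            else:
                sigma *= 1.5 ** (-1 / 4)      # failure: shrink (1/5th rule)

        print(f(x), sigma)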

  18. Assessing the potential of quartz crystal microbalance to estimate water vapor transfer in micrometric size cellulose particles.

    PubMed

    Thoury-Monbrun, Valentin; Gaucel, Sébastien; Rouessac, Vincent; Guillard, Valérie; Angellier-Coussy, Hélène

    2018-06-15

    This study aims at assessing the use of a quartz crystal microbalance (QCM) coupled with an adsorption system to measure water vapor transfer properties in micrometric-size cellulose particles. This apparatus successfully measures water vapor sorption kinetics at successive relative humidity (RH) steps on a dispersion of individual micrometric-size cellulose particles (1 μg), with a total acquisition duration on the order of one hour. Apparent diffusivity and water uptake at equilibrium were estimated at each RH step by considering two different particle geometries in the mass transfer modeling, i.e., sphere or finite cylinder, based on the results obtained from image analysis. Water vapor diffusivity values varied from 2.4 × 10⁻¹⁴ m² s⁻¹ to 4.2 × 10⁻¹² m² s⁻¹ over the tested RH range (0-80%) whatever the model used. A finite-cylinder or spherical geometry could be used equally for diffusivity identification for a particle-size aspect ratio lower than 2. Copyright © 2018 Elsevier Ltd. All rights reserved.
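
    For the spherical geometry, the apparent diffusivity at each RH step is typically identified by fitting the classical Crank series solution for sorption in a sphere of radius r; the expression below is the standard textbook result, stated here for orientation rather than taken from the paper:

        \frac{M_t}{M_\infty} \;=\; 1 \;-\; \frac{6}{\pi^2}
        \sum_{n=1}^{\infty} \frac{1}{n^2}
        \exp\!\left(-\frac{n^2 \pi^2 D_{\mathrm{app}}\, t}{r^2}\right)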

  19. Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples

    NASA Astrophysics Data System (ADS)

    Petit, Johan; Lallemant, Lucile

    2017-05-01

    In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the water concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to the drying step optimization, large-size spinel samples were obtained.

  20. STEP-TRAMM - A modeling interface for simulating localized rainfall induced shallow landslides and debris flow runout pathways

    NASA Astrophysics Data System (ADS)

    Or, D.; von Ruette, J.; Lehmann, P.

    2017-12-01

    Landslides and subsequent debris flows initiated by rainfall represent a common natural hazard in mountainous regions. We integrated a landslide hydro-mechanical triggering model with a simple model for debris flow runout pathways and developed a graphical user interface (GUI) to represent these natural hazards at catchment scale at any location. The STEP-TRAMM GUI provides process-based estimates of the initiation locations and sizes of landslide patterns based on digital elevation models (SRTM) linked with high-resolution global soil maps (SoilGrids, 250 m resolution) and satellite-based information on rainfall statistics for the selected region. In the preprocessing phase the STEP-TRAMM model estimates soil depth distribution to supplement other soil information for delineating key hydrological and mechanical properties relevant to representing local soil failure. We will illustrate this publicly available GUI and modeling platform by simulating effects of deforestation on landslide hazards in several regions and will compare model outcomes with satellite-based information.

  1. Adaptive Implicit Non-Equilibrium Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby; Wang, Zhen; Berrill, Mark A

    2013-01-01

    We describe methods for accurate and efficient long-term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long-term integration of stiff multiphysics systems, local control theory based step-size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.
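
    Control-theory-based step-size selection of the kind mentioned above is commonly realized as a PI controller acting on the local error estimate of the time integrator. The sketch below is the generic textbook form with assumed gains and clamping limits, not the authors' specific controller.

        def pi_step_size(dt, err, err_prev, tol, k_i=0.3, k_p=0.2):
            """PI step-size controller: grow dt while the local error
            estimate err stays below tol, shrink it when err exceeds tol;
            the previous step's error damps oscillations in dt."""
            factor = (tol / err) ** k_i * (err_prev / err) ** k_p
            return dt * min(5.0, max(0.1, 0.9 * factor))  # safety + clamps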

  2. Least-squares finite element solutions for three-dimensional backward-facing step flow

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Hou, Lin-Jun; Lin, Tsung-Liang

    1993-01-01

    Comprehensive numerical solutions of the steady-state incompressible viscous flow over a three-dimensional backward-facing step up to Re = 800 are presented. The results are obtained by the least-squares finite element method (LSFEM), which is based on the velocity-pressure-vorticity formulation. The computed model is of the same size as that of Armaly's experiment. Three-dimensional phenomena are observed even at low Reynolds numbers. The calculated values of the primary reattachment length are in good agreement with experimental results.

  3. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  4. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  5. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  6. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  7. Single-molecule fluorescence reveals the unwinding stepping mechanism of replicative helicase.

    PubMed

    Syed, Salman; Pandey, Manjula; Patel, Smita S; Ha, Taekjip

    2014-03-27

    Bacteriophage T7 gp4 serves as a model protein for replicative helicases that couples deoxythymidine triphosphate (dTTP) hydrolysis to directional movement and DNA strand separation. We employed single-molecule fluorescence resonance energy transfer methods to resolve steps during DNA unwinding by T7 helicase. We confirm that the unwinding rate of T7 helicase decreases with increasing base pair stability. For duplexes containing >35% guanine-cytosine (GC) base pairs, we observed stochastic pauses every 2-3 bp during unwinding. The dwells on each pause were distributed nonexponentially, consistent with two or three rounds of dTTP hydrolysis before each unwinding step. Moreover, we observed backward movements of the enzyme on GC-rich DNAs at low dTTP concentrations. Our data suggest a coupling ratio of 1:1 between base pairs unwound and dTTP hydrolysis, and they further support the concept that nucleic acid motors can have a hierarchy of different-sized steps or can accumulate elastic energy before transitioning to a subsequent phase. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  8. STEP-TRAMM - A modeling interface for simulating localized rainfall induced shallow landslides and debris flow runout pathways

    NASA Astrophysics Data System (ADS)

    von Ruette, Jonas; Lehmann, Peter; Fan, Linfeng; Bickel, Samuel; Or, Dani

    2017-04-01

    Landslides and subsequent debris flows initiated by rainfall represent a ubiquitous natural hazard in steep mountainous regions. We integrated a landslide hydro-mechanical triggering model and associated debris flow runout pathways with a graphical user interface (GUI) to represent these natural hazards in a wide range of catchments over the globe. The STEP-TRAMM GUI provides process-based locations and sizes of landslide patterns using digital elevation models (DEM) from the SRTM database (30 m resolution) linked with soil maps from the global database SoilGrids (250 m resolution) and satellite-based information on rainfall statistics for the selected region. In a preprocessing step STEP-TRAMM models the soil depth distribution and complements soil information to jointly capture key hydrological and mechanical properties relevant to representing local soil failure. In the presentation we will discuss features of this publicly available platform and compare landslide and debris flow patterns for different regions considering representative intense rainfall events. Model outcomes will be compared for different spatial and temporal resolutions to test the applicability of web-based information on elevation and rainfall for hazard assessment.

  9. A flow-free droplet-based device for high throughput polymorphic crystallization.

    PubMed

    Yang, Shih-Mo; Zhang, Dapeng; Chen, Wang; Chen, Shih-Chi

    2015-06-21

    Crystallization is one of the most crucial steps in the process of pharmaceutical formulation. In recent years, emulsion-based platforms have been developed and broadly adopted to generate high quality products. However, these conventional approaches such as stirring are still limited in several aspects, e.g., unstable crystallization conditions and broad size distribution; besides, only simple crystal forms can be produced. In this paper, we present a new flow-free droplet-based formation process for producing highly controlled crystallization with two examples: (1) NaCl crystallization reveals the ability to package saturated solution into nanoliter droplets, and (2) glycine crystallization demonstrates the ability to produce polymorphic crystallization forms by controlling the droplet size and temperature. In our process, the saturated solution automatically fills the microwell array powered by degassed bulk PDMS. A critical oil covering step is then introduced to isolate the saturated solution and control the water dissolution rate. Utilizing surface tension, the solution is uniformly packaged in the form of thousands of isolated droplets at the bottom of each microwell of 50-300 μm diameter. After water dissolution, individual crystal structures are automatically formed inside the microwell array. This approach facilitates the study of different glycine growth processes: the α-form generated inside the droplets and the γ-form generated at the edge of the droplets. With precise temperature control over nanoliter-sized droplets, the growth of ellipsoidal crystalline agglomerates of glycine was achieved for the first time. Optical and SEM images illustrate that the ellipsoidal agglomerates consist of 2-5 μm glycine clusters with inner spiral structures of ~35 μm screw pitch. Lastly, the size distribution of spherical crystalline agglomerates (SAs) produced from microwells of different sizes was measured to have a coefficient of variation (CV) of less than 5%, showing that crystal sizes can be precisely controlled by microwell sizes with high uniformity. This new method can be used to reliably fabricate monodisperse crystals for pharmaceutical applications.

  10. One-step estimation of networked population size: Respondent-driven capture-recapture with anonymity.

    PubMed

    Khan, Bilal; Lee, Hsuan-Wei; Fellows, Ian; Dombrowski, Kirk

    2018-01-01

    Size estimation is particularly important for populations whose members experience disproportionate health issues or pose elevated health risks to the ambient social structures in which they are embedded. Efforts to derive size estimates are often frustrated when the population is hidden or hard-to-reach in ways that preclude conventional survey strategies, as is the case when social stigma is associated with group membership or when group members are involved in illegal activities. This paper extends prior research on the problem of network population size estimation, building on established survey/sampling methodologies commonly used with hard-to-reach groups. Three novel one-step, network-based population size estimators are presented, for use in the context of uniform random sampling, respondent-driven sampling, and when networks exhibit significant clustering effects. We give provably sufficient conditions for the consistency of these estimators in large configuration networks. Simulation experiments across a wide range of synthetic network topologies validate the performance of the estimators, which also perform well on a real-world location-based social networking data set with significant clustering. Finally, the proposed schemes are extended to allow them to be used in settings where participant anonymity is required. Systematic experiments show favorable tradeoffs between anonymity guarantees and estimator performance. Taken together, we demonstrate that reasonable population size estimates are derived from anonymous respondent driven samples of 250-750 individuals, within ambient populations of 5,000-40,000. The method thus represents a novel and cost-effective means for health planners and those agencies concerned with health and disease surveillance to estimate the size of hidden populations. We discuss limitations and future work in the concluding section.

  11. Normalised subband adaptive filtering with extended adaptiveness on degree of subband filters

    NASA Astrophysics Data System (ADS)

    Samuyelu, Bommu; Rajesh Kumar, Pullakura

    2017-12-01

    This paper proposes an adaptive normalised subband adaptive filter (NSAF) scheme that improves NSAF performance. The proposed scheme extends the adaptiveness of existing NSAF variants in two ways: first, the step size is made adaptive, and second, the selection of subbands is made adaptive. The proposed scheme is therefore termed here the variable step-size-based NSAF with selected subbands (VS-SNSAF). Experimental investigations are carried out to demonstrate the performance (in terms of convergence) of VS-SNSAF against the conventional NSAF and its state-of-the-art adaptive variants. The results show the superior performance of VS-SNSAF over the traditional NSAF and its variants. Its stability and robustness against noise are also established, together with an analysis of its computational complexity.
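
    The variable step-size idea is easiest to see in the fullband NLMS special case: the step is driven by a smoothed estimate of the error energy, so it stays large during initial convergence and shrinks near steady state. The sketch below is a generic error-energy rule with assumed constants, not the VS-SNSAF update itself.

        import numpy as np

        def vss_nlms(x, d, taps=16, mu_min=0.01, mu_max=1.0, alpha=0.95):
            """NLMS identification of d from x with a variable step size."""
            w = np.zeros(taps)
            p = 0.0  # smoothed error energy
            for n in range(taps, len(x)):
                u = x[n - taps:n][::-1]               # regressor vector
                e = d[n] - w @ u                      # a-priori error
                p = alpha * p + (1 - alpha) * e ** 2
                mu = np.clip(p / (p + 1.0), mu_min, mu_max)
                w += mu * e * u / (u @ u + 1e-8)      # normalised update
            return w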

  12. Learning Rate Updating Methods Applied to Adaptive Fuzzy Equalizers for Broadband Power Line Communications

    NASA Astrophysics Data System (ADS)

    Ribeiro, Moisés V.

    2004-12-01

    This paper introduces adaptive fuzzy equalizers with variable step size for broadband power line (PL) communications. Based on delta-bar-delta and local Lipschitz estimation updating rules, feedforward, and decision feedback approaches, we propose singleton and nonsingleton fuzzy equalizers with variable step size to cope with the intersymbol interference (ISI) effects of PL channels and the hardness of the impulse noises generated by appliances and nonlinear loads connected to low-voltage power grids. The computed results show that the convergence rates of the proposed equalizers are higher than the ones attained by the traditional adaptive fuzzy equalizers introduced by J. M. Mendel and his students. Additionally, some interesting BER curves reveal that the proposed techniques are efficient for mitigating the above-mentioned impairments.
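
    The delta-bar-delta rule referenced above maintains one learning rate per coefficient and adapts it from the sign agreement between the current gradient and an exponential average of past gradients: consistent signs grow the rate additively, conflicting signs shrink it multiplicatively. A minimal sketch with assumed constants (kappa, phi, theta) follows.

        import numpy as np

        def delta_bar_delta(grads, lr0=0.01, kappa=1e-4, phi=0.5, theta=0.7):
            """Adapt per-parameter learning rates over a gradient sequence."""
            lr = np.full_like(grads[0], lr0, dtype=float)
            bar = np.zeros_like(lr)  # exponentially averaged past gradient
            for g in grads:
                agree = bar * g
                lr = np.where(agree > 0, lr + kappa,           # consistent signs
                     np.where(agree < 0, lr * (1 - phi), lr))  # conflicting signs
                bar = (1 - theta) * g + theta * bar
            return lr

        rates = delta_bar_delta([np.array([0.3, -0.2])] * 50)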

  13. Kinesin Steps Do Not Alternate in Size

    PubMed Central

    Fehr, Adrian N.; Asbury, Charles L.; Block, Steven M.

    2008-01-01

    Kinesin is a two-headed motor protein that transports cargo inside cells by moving stepwise on microtubules. Its exact trajectory along the microtubule is unknown: alternative pathway models predict either uniform 8-nm steps or alternating 7- and 9-nm steps. By analyzing single-molecule stepping traces from “limping” kinesin molecules, we were able to distinguish alternate fast- and slow-phase steps and thereby to calculate the step sizes associated with the motions of each of the two heads. We also compiled step distances from nonlimping kinesin molecules and compared these distributions against models predicting uniform or alternating step sizes. In both cases, we find that kinesin takes uniform 8-nm steps, a result that strongly constrains the allowed models. PMID:18083906

  14. Growth of group II-VI semiconductor quantum dots with strong quantum confinement and low size dispersion

    NASA Astrophysics Data System (ADS)

    Pandey, Praveen K.; Sharma, Kriti; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2003-11-01

    CdTe quantum dots embedded in a glass matrix are grown using a two-step annealing method. The results of the optical transmission characterization are analysed and compared with the results obtained from CdTe quantum dots grown using the conventional single-step annealing method. A theoretical model for the absorption spectra is used to quantitatively estimate the size dispersion in the two cases. In the present work, it is established that the quantum dots grown using the two-step annealing method have stronger quantum confinement, reduced size dispersion and a higher volume ratio compared to the single-step annealed samples.

  15. A note on sample size calculation for mean comparisons based on noncentral t-statistics.

    PubMed

    Chow, Shein-Chung; Shao, Jun; Wang, Hansheng

    2002-11-01

    One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
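
    Under the alternative hypothesis the two-sample t-statistic follows a noncentral t-distribution, so the required per-group sample size can be found by direct search over n. The sketch below is the standard computation for the two-sided equality test (ignoring the negligible lower rejection tail); it is a generic illustration, not the paper's specific formulas.

        from scipy import stats

        def n_per_group(delta, sigma, alpha=0.05, power=0.80):
            """Smallest per-group n for a two-sided two-sample t-test to
            detect a mean difference `delta` with the desired power."""
            n = 2
            while True:
                df = 2 * (n - 1)
                nc = delta / (sigma * (2.0 / n) ** 0.5)  # noncentrality
                tcrit = stats.t.ppf(1 - alpha / 2, df)
                if stats.nct.sf(tcrit, df, nc) >= power:
                    return n
                n += 1

        print(n_per_group(delta=5.0, sigma=10.0))  # about 64 per group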

  16. Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.

    PubMed

    Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng

    2013-01-01

    Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF, which accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory; and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. The preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets including ORL and PIE and two text corpora including Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with the representative GNMF solvers.
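
    The MUR baseline that L-FGD accelerates is, for the squared-Euclidean objective without the graph term, a pair of rescaled gradient steps with a fixed implicit step size. A minimal sketch of these standard multiplicative updates (not the L-FGD algorithm itself, and omitting the graph regularizer) is:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.random((100, 80))  # nonnegative data matrix
        r = 10
        W, H = rng.random((100, r)), rng.random((r, 80))

        for _ in range(200):
            # each update moves along the rescaled negative gradient;
            # the small constant guards against division by zero
            H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
            W *= (X @ H.T) / (W @ H @ H.T + 1e-12)

        print(np.linalg.norm(X - W @ H))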

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca

    Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard software without Hessians or constraint solvers. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice-ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift less than 1% on a 50 ps simulation.

  18. Modeling ultrasound propagation through material of increasing geometrical complexity.

    PubMed

    Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen

    2018-06-01

    Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
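
    Wiener deconvolution estimates the sample's response by inverting the measured impulse response in the frequency domain while damping noise-dominated frequencies. A generic sketch, with a constant assumed signal-to-noise ratio, is:

        import numpy as np

        def wiener_deconvolve(measured, impulse_response, snr=100.0):
            """Frequency-domain Wiener deconvolution of a 1-D signal."""
            n = len(measured)
            H = np.fft.rfft(impulse_response, n)
            Y = np.fft.rfft(measured, n)
            # Wiener filter: inverse filter regularized by the noise level
            G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
            return np.fft.irfft(G * Y, n)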

  19. A Novel WA-BPM Based on the Generalized Multistep Scheme in the Propagation Direction in the Waveguide

    NASA Astrophysics Data System (ADS)

    Ji, Yang; Chen, Hong; Tang, Hongwu

    2017-06-01

    A highly accurate wide-angle scheme, based on the generalized multistep scheme in the propagation direction, is developed for the finite difference beam propagation method (FD-BPM). Compared with the previously presented method, the simulation shows that our method yields a more accurate solution, and the step size can be much larger.

  20. Space-Time Joint Interference Cancellation Using Fuzzy-Inference-Based Adaptive Filtering Techniques in Frequency-Selective Multipath Channels

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng

    2006-12-01

    An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and the squared error variation, denoted by (e², Δe²), into a forgetting factor λ. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size μ. This receiver is capable of providing both fast convergence/tracking capability as well as small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB in bit-error-rate (BER) for multipath fading channels.

  1. Firefighter Hand Anthropometry and Structural Glove Sizing: A New Perspective.

    PubMed

    Hsiao, Hongwei; Whitestone, Jennifer; Kau, Tsui-Ying; Hildreth, Brooke

    2015-12-01

    We evaluated the current use and fit of structural firefighting gloves and developed an improved sizing scheme that better accommodates the U.S. firefighter population. Among surveys, 24% to 30% of men and 31% to 62% of women reported experiencing problems with the fit or bulkiness of their structural firefighting gloves. An age-, race/ethnicity-, and gender-stratified sample of 863 male and 88 female firefighters across the United States participated in the study. Fourteen hand dimensions relevant to glove design were measured. A cluster analysis of the hand dimensions was performed to explore options for an improved sizing scheme. The current national standard structural firefighting glove-sizing scheme underrepresents firefighter hand size range and shape variation. In addition, mismatch between existing sizing specifications and hand characteristics, such as hand dimensions, user selection of glove size, and the existing glove sizing specifications, is significant. An improved glove-sizing plan based on clusters of overall hand size and hand/finger breadth-to-length contrast has been developed. This study presents the most up-to-date firefighter hand anthropometry and a new perspective on glove accommodation. The new seven-size system contains narrower variations (standard deviations) for almost all dimensions for each glove size than the current sizing practices. The proposed science-based sizing plan for structural firefighting gloves provides a step-forward perspective (i.e., including two women hand model-based sizes and two wide-palm sizes for men) for glove manufacturers to advance firefighter hand protection. © 2015, Human Factors and Ergonomics Society.

  2. Controlling dental enamel-cavity ablation depth with optimized stepping parameters along the focal plane normal using a three axis, numerically controlled picosecond laser.

    PubMed

    Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Sun, Yuchun; Wang, Yong

    2015-02-01

    The purpose of this study was to establish a depth-control method in enamel-cavity ablation by optimizing the timing of the focal-plane-normal stepping and the single-step size of a three-axis, numerically controlled picosecond laser. Although it has been proposed that picosecond lasers may be used to ablate dental hard tissue, the viability of such a depth-control method in enamel-cavity ablation remains uncertain. Forty-two enamel slices with approximately level surfaces were prepared and subjected to two-dimensional ablation by a picosecond laser. The additive-pulse layer, n, was set to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, and 70. A three-dimensional microscope was then used to measure the ablation depth, d, to obtain a quantitative function relating n and d. Six enamel slices were then subjected to three-dimensional ablation to produce 10 cavities each, with the additive-pulse layer and single-step size set to corresponding values. The difference between the theoretical and measured values was calculated for both the cavity depth and the ablation depth of a single step. These were used to determine minimum-difference values for both the additive-pulse layer (n) and the single-step size (d). When the additive-pulse layer and the single-step size were set to 5 and 45, respectively, the depth error reached a minimum of 2.25 μm, and 450 μm deep enamel cavities were produced. When performing three-dimensional ablation of enamel with a picosecond laser, adjusting the timing of the focal-plane-normal stepping and the single-step size allows the ablation-depth error to be controlled to the order of micrometers.

  3. Preliminary Empirical Models for Predicting Shrinkage, Part Geometry and Metallurgical Aspects of Ti-6Al-4V Shaped Metal Deposition Builds

    NASA Astrophysics Data System (ADS)

    Escobar-Palafox, Gustavo; Gault, Rosemary; Ridgway, Keith

    2011-12-01

    Shaped Metal Deposition (SMD) is an additive manufacturing process which creates parts layer by layer by weld deposition. In this work, empirical models that predict part geometry (wall thickness and outer diameter) and some metallurgical aspects (i.e., surface texture, portion of finer Widmanstätten microstructure) of the SMD process were developed. The models are based on an orthogonal fractional factorial design of experiments with four factors at two levels. The factors considered were energy level (the relationship between heat-source power and the rate of raw-material input), step size, programmed diameter and travel speed. The models were validated using previous builds; the prediction error for part geometry was under 11%. Several relationships between the factors and responses were identified. Current had a significant effect on wall thickness; thickness increases with increasing current. Programmed diameter had a significant effect on the percentage of shrinkage; this decreased with increasing component size. Surface finish decreased with decreasing step size and current.

  4. Optimization Issues with Complex Rotorcraft Comprehensive Analysis

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Tarzanin, Frank J.; Hirsh, Joel E.; Young, Darrell K.

    1998-01-01

    This paper investigates the use of the general purpose automatic differentiation (AD) tool called Automatic Differentiation of FORTRAN (ADIFOR) as a means of generating sensitivity derivatives for use in Boeing Helicopter's proprietary comprehensive rotor analysis code (VII). ADIFOR transforms an existing computer program into a new program that performs a sensitivity analysis in addition to the original analysis. In this study both the pros (exact derivatives, no step-size problems) and cons (more CPU, more memory) of ADIFOR are discussed. The size (based on the number of lines) of the VII code after ADIFOR processing increased by 70 percent and resulted in substantial computer memory requirements at execution. The ADIFOR derivatives took about 75 percent longer to compute than the finite-difference derivatives. However, the ADIFOR derivatives are exact and are not functions of step-size. The VII sensitivity derivatives generated by ADIFOR are compared with finite-difference derivatives. The ADIFOR and finite-difference derivatives are used in three optimization schemes to solve a low vibration rotor design problem.

  5. Critical motor number for fractional steps of cytoskeletal filaments in gliding assays.

    PubMed

    Li, Xin; Lipowsky, Reinhard; Kierfeld, Jan

    2012-01-01

    In gliding assays, filaments are pulled by molecular motors that are immobilized on a solid surface. By varying the motor density on the surface, one can control the number N of motors that pull simultaneously on a single filament. Here, such gliding assays are studied theoretically using Brownian (or Langevin) dynamics simulations and taking the local force balance between motors and filaments as well as the force-dependent velocity of the motors into account. We focus on the filament stepping dynamics and investigate how single motor properties such as stalk elasticity and step size determine the presence or absence of fractional steps of the filaments. We show that each gliding assay can be characterized by a critical motor number, N(c). Because of thermal fluctuations, fractional filament steps are only detectable as long as N < N(c). The corresponding fractional filament step size is l/N where l is the step size of a single motor. We first apply our computational approach to microtubules pulled by kinesin-1 motors. For elastic motor stalks that behave as linear springs with a zero rest length, the critical motor number is found to be N(c) = 4, and the corresponding distributions of the filament step sizes are in good agreement with the available experimental data. In general, the critical motor number N(c) depends on the elastic stalk properties and is reduced to N(c) = 3 for linear springs with a nonzero rest length. Furthermore, N(c) is shown to depend quadratically on the motor step size l. Therefore, gliding assays consisting of actin filaments and myosin-V are predicted to exhibit fractional filament steps up to motor number N = 31. Finally, we show that fractional filament steps are also detectable for a fixed average motor number as determined by the surface density (or coverage) of the motors on the substrate surface.
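
    The simulation approach described here can be caricatured in a few lines: an overdamped Langevin update for the filament, elastic stalk forces, and force-dependent motor stepping. All parameter values below are assumptions for illustration, not the paper's.

        import numpy as np

        rng = np.random.default_rng(0)
        kT, gamma = 4.1, 1e-3     # pN·nm; filament drag (pN·s/nm)
        k_stalk, l = 0.3, 8.0     # stalk stiffness (pN/nm); step size (nm)
        v0, f_stall = 800.0, 6.0  # unloaded speed (nm/s); stall force (pN)
        N, dt = 3, 1e-5           # motors on the filament; time step (s)

        x = 0.0               # filament position (nm)
        heads = np.zeros(N)   # motor head positions along the filament (nm)

        for _ in range(20000):
            f = k_stalk * (heads - x)  # local force balance via the stalks
            # overdamped Langevin update of the filament position
            x += (f.sum() / gamma) * dt \
                 + np.sqrt(2 * kT * dt / gamma) * rng.standard_normal()
            # linear force-velocity relation for each motor's stepping rate
            rate = np.clip((v0 / l) * (1 - f / f_stall), 0.0, None)
            heads += l * (rng.random(N) < rate * dt)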

  6. Microstructure of room temperature ionic liquids at stepped graphite electrodes

    DOE PAGES

    Feng, Guang; Li, Song; Zhao, Wei; ...

    2015-07-14

    Molecular dynamics simulations of the room temperature ionic liquid (RTIL) [emim][TFSI] at stepped graphite electrodes were performed to investigate the influence of the thickness of the electrode surface step on the microstructure of interfacial RTILs. A strong correlation was observed between the interfacial RTIL structure and the step thickness in the electrode surface as well as the ion size. Specifically, when the step thickness is commensurate with the ion size, the interfacial layering of cation/anion is more evident, whereas the layering tends to be less defined when the step thickness is close to half of the ion size. Furthermore, the two-dimensional microstructure of ion layers exhibits different patterns and alignments of the counter-ion/co-ion lattice at neutral and charged electrodes. As the cation/anion layering could impose considerable effects on ion diffusion, the detailed information on interfacial RTILs at stepped graphite presented here should help in understanding the molecular mechanism of RTIL-electrode interfaces in supercapacitors.

  7. Design rules for vertical interconnections by reverse offset printing

    NASA Astrophysics Data System (ADS)

    Kusaka, Yasuyuki; Kanazawa, Shusuke; Ushijima, Hirobumi

    2018-03-01

    Formation of vertical interconnections by reverse offset printing was investigated, focusing particularly on the transfer step, in which an ink pattern is transferred from a polydimethylsiloxane (PDMS) sheet for the step coverage of contact holes. We systematically examined the coverage of contact holes made of a tapered photoresist layer by varying the hole size, hole depth, PDMS elasticity, PDMS thickness, printing speed, and printing indentation depth. Successful ink filling was achieved when the PDMS was softer, and the optimal PDMS thickness varied depending on the size of the contact holes. This behaviour is related to the bell-type uplift deformation of incompressible PDMS, which can be described by contact-mechanics numerical simulations. Based on direct observation of the PDMS/resist-hole contact behaviour, the step coverage of contact holes typically involves two stages of contact-area growth: (i) the PDMS first touches the bottom of the holes, and then (ii) the contact area gradually and radially widens toward the tapered sidewall. From an engineering perspective, it is pointed out that mechanical synchronisation mismatch in roll-to-sheet printing causes cracking of the ink layers at the edges of contact holes. According to the above design rule, ink filling into a contact hole with a thickness of 2.5 µm and a radius of 10 µm was achieved. Contact-chain patterns with 1386 points of vertical interconnections, with square hole sizes of up to 10 µm, successfully demonstrated the validity of the technique presented herein.

  8. Influence of fragment size and postoperative joint congruency on long-term outcome of posterior malleolar fractures.

    PubMed

    Drijfhout van Hooff, Cornelis Christiaan; Verhage, Samuel Marinus; Hoogendoorn, Jochem Maarten

    2015-06-01

    One of the factors contributing to the long-term outcome of posterior malleolar fractures is the development of osteoarthritis. Based on biomechanical, cadaveric, and small population studies, fixation of posterior malleolar fracture fragments (PMFFs) is usually performed when fragment size exceeds 25-33%. However, the influence of fragment size on long-term clinical and radiological outcome remains unclear. A retrospective cohort study of 131 patients treated for an isolated ankle fracture with involvement of the posterior malleolus was performed. Mean follow-up was 6.9 (range, 2.5-15.9) years. Patients were divided into groups depending on the size of the fragment, small (<5%, n = 20), medium (5-25%, n = 86), or large (>25%, n = 25), and the presence of step-off after operative treatment. We compared functional outcome measures (AOFAS, AAOS), pain (VAS), and dorsiflexion restriction relative to the contralateral ankle, as well as the incidence of osteoarthritis on X-ray. There were no nonunions, 56% of patients had no radiographic osteoarthritis, VAS was 10 of 100, and the median clinical score was 90 of 100. More osteoarthritis occurred in ankle fractures with medium and large PMFFs compared to small fragments (small 16%, medium 48%, large 54%; P = .006), and also when comparing small with medium-sized fragments (P = .02). Larger fragment size did not lead to significantly decreased function (median AOFAS 95 vs 88, P = .16). If the PMFF size was >5%, osteoarthritis occurred more frequently when there was a postoperative step-off ≥1 mm in the tibiotalar joint surface (41% vs 61%, P = .02), whether the posterior fragment had been fixed or not. In this group, fixing the PMFF did not influence the development of osteoarthritis. However, in 42% of the cases with fixation of the fragment a postoperative step-off remained (vs 45% in the group without fixation). Osteoarthritis is 1 component of the long-term outcome of malleolar fractures, and the results of this study demonstrate that there was more radiographic osteoarthritis in patients with medium and large posterior fragments than in those with small fragments. Radiographic osteoarthritis also occurred more frequently when the postoperative step-off was 1 mm or more, whether the posterior fragment was fixed or not. However, clinical scores were not different for these groups. Level IV, retrospective case series. © The Author(s) 2015.

  9. Lab-on-a-disc agglutination assay for protein detection by optomagnetic readout and optical imaging using nano- and micro-sized magnetic beads.

    PubMed

    Uddin, Rokon; Burger, Robert; Donolato, Marco; Fock, Jeppe; Creagh, Michael; Hansen, Mikkel Fougt; Boisen, Anja

    2016-11-15

    We present a biosensing platform for the detection of proteins based on agglutination of aptamer coated magnetic nano- or microbeads. The assay, from sample to answer, is integrated on an automated, low-cost microfluidic disc platform. This ensures fast and reliable results due to a minimum of manual steps involved. The detection of the target protein was achieved in two ways: (1) optomagnetic readout using magnetic nanobeads (MNBs); (2) optical imaging using magnetic microbeads (MMBs). The optomagnetic readout of agglutination is based on optical measurement of the dynamics of MNB aggregates, whereas the imaging method is based on direct visualization and quantification of the average size of MMB aggregates. By enhancing magnetic particle agglutination via application of strong magnetic field pulses, we obtained identical limits of detection of 25 pM with the same sample-to-answer time (15 min 30 s) using the two differently sized beads for the two detection methods. In both cases a sample volume of only 10 µl is required. The demonstrated automation, low sample-to-answer time and portability of both detection instruments as well as integration of the assay on a low-cost disc are important steps for the implementation of these as portable tools in an out-of-lab setting. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Compressed ECG biometric: a fast, secured and efficient method for identification of CVD patient.

    PubMed

    Sufi, Fahim; Khalil, Ibrahim; Mahmood, Abdun

    2011-12-01

    Adoption of compression technology is often required for wireless cardiovascular monitoring, due to the enormous size of Electrocardiography (ECG) signals and the limited bandwidth of the Internet. However, with present research on ECG based biometric techniques, compressed ECG must be decompressed before performing human identification. This additional decompression step creates a significant processing delay for the identification task. This becomes an obvious burden on a system if it needs to be done for trillions of compressed ECGs per hour by a hospital. Even though a hospital might be able to build an expensive infrastructure to tame the exuberant processing load, for small intermediate nodes in a multihop network, identification preceded by decompression is daunting. In this paper, we report a technique by which a person can be identified directly from his/her compressed ECG. This technique completely obviates the decompression step and therefore makes biometric identification less demanding for the smaller nodes in a multihop network. The biometric template created by this new technique is smaller than existing ECG based biometric templates as well as other forms of biometrics such as face, fingerprint, and retina (up to 8302 times smaller than a face template and 9 times smaller than an existing ECG based biometric template). The smaller template substantially reduces the one-to-many matching time for biometric recognition, resulting in a faster biometric authentication mechanism.

  11. Relative dosimetrical verification in high dose rate brachytherapy using two-dimensional detector array IMatriXX

    PubMed Central

    Manikandan, A.; Biplab, Sarkar; David, Perianayagam A.; Holla, R.; Vivek, T. R.; Sujatha, N.

    2011-01-01

    For high dose rate (HDR) brachytherapy, independent treatment verification is needed to ensure that the treatment is performed as per prescription. This study demonstrates dosimetric quality assurance of HDR brachytherapy using a commercially available two-dimensional ion chamber array called IMatriXX, which has a detector separation of 0.7619 cm. The reference isodose length, step size, and source dwell positional accuracy were verified. A total of 24 dwell positions were verified for positional accuracy, giving a total error (systematic and random) of –0.45 mm, with a standard deviation of 1.01 mm and a maximum error of 1.8 mm. Using a step size of 5 mm, the reference isodose length (the length of the 100% isodose line) was verified for single and multiple catheters of the same and different source loadings. An error ≤1 mm was measured in 57% of the tests analyzed. Step size verification for 2, 3, 4, and 5 cm was performed, and 70% of the step size errors were below 1 mm, with a maximum of 1.2 mm. Step sizes ≤1 cm could not be verified by the IMatriXX as it could not resolve the peaks in the dose profile. PMID:21897562

  12. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the intervention condition and the last sequence remains in the control condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
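
    The paper's design-effect expression and optimal-sequence formula are elided in this record ("[Formula: see text]"), so the sketch below only illustrates the standard quantities the comparison rests on: the textbook design effect of a parallel cluster-randomised trial, 1 + (m - 1)*ICC, and one common definition of the cluster-mean correlation. The function names are ours, not the paper's.

      def crt_design_effect(m: int, icc: float) -> float:
          """Standard design effect of a parallel cluster-randomised
          trial: 1 + (m - 1) * icc, with m the total cluster size."""
          return 1.0 + (m - 1) * icc

      def cluster_mean_correlation(m: int, icc: float) -> float:
          """Share of the variance of a cluster mean attributable to the
          cluster effect (one common definition of R)."""
          return m * icc / (1.0 + (m - 1) * icc)

      # the CRT penalty stays modest when ICC and cluster size are small,
      # which is when the CRT beats the optimised stepped wedge design
      for m, icc in [(20, 0.01), (20, 0.1), (200, 0.1)]:
          print(m, icc, round(crt_design_effect(m, icc), 2),
                round(cluster_mean_correlation(m, icc), 3))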

  13. Novel Anthropometry Based on 3D-Bodyscans Applied to a Large Population Based Cohort.

    PubMed

    Löffler-Wirth, Henry; Willscher, Edith; Ahnert, Peter; Wirkner, Kerstin; Engel, Christoph; Loeffler, Markus; Binder, Hans

    2016-01-01

    Three-dimensional (3D) whole body scanners are increasingly used as precise measuring tools for the rapid quantification of anthropometric measures in epidemiological studies. We analyzed 3D whole body scanning data of nearly 10,000 participants of a cohort collected from the adult population of Leipzig, one of the largest cities in Eastern Germany. We present a novel approach for the systematic analysis of this data which aims at identifying distinguishable clusters of body shapes called body types. In the first step, our method aggregates body measures provided by the scanner into meta-measures, each representing one relevant dimension of the body shape. In the next step, we stratified the cohort into body types and assessed their stability and dependence on the size of the underlying cohort. Using self-organizing maps (SOM) we identified thirteen robust meta-measures and fifteen body types comprising between 1 and 18 percent of the total cohort size. Thirteen of them are virtually gender specific (six for women and seven for men) and thus reflect the most abundant body shapes of women and men. Two body types include both women and men, and describe androgynous body shapes that lack typical gender specific features. The body types disentangle a large variability of body shapes enabling distinctions which go beyond traditional indices such as the body mass index, the waist-to-height ratio, the waist-to-hip ratio and the mortality-hazard ABSI-index. As a next step, we will link the identified body types with disease predispositions to study how size and shape of the human body impact health and disease.

  14. Linear micromechanical stepping drive for pinhole array positioning

    NASA Astrophysics Data System (ADS)

    Endrödy, Csaba; Mehner, Hannes; Grewe, Adrian; Hoffmann, Martin

    2015-05-01

    A compact linear micromechanical stepping drive for positioning a 7 × 5.5 mm² optical pinhole array is presented. The system features a step size of 13.2 µm and a full displacement range of 200 µm. The electrostatic inch-worm stepping mechanism shows a compact design capable of positioning a payload of 50% of its own weight. The stepping drive movement, step sizes, and position accuracy are characterized. The actuated pinhole array is integrated in a confocal chromatic hyperspectral imaging system, where coverage of the object plane, and therefore the useful picture data, can be multiplied by 14 in contrast to a non-actuated array.

  15. Evaluation of Second-Level Inference in fMRI Analysis

    PubMed Central

    Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs

    2016-01-01

    We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and on (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of 3 phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not take into account this variability. Second, one proceeds via inference based on parametric assumptions or via permutation-based inference. Third, we evaluate 3 commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with minimal cluster size. Based on a simulation study and real data we find that the two-step procedure with minimal cluster size results in the most stable results, followed by the familywise error rate correction. The FDR results in the most variable results, for both permutation-based inference and parametric inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference. PMID:26819578

  16. The role of particle jamming on the formation and stability of step-pool morphology: insight from a reduced-complexity model

    NASA Astrophysics Data System (ADS)

    Saletti, M.; Molnar, P.; Hassan, M. A.

    2017-12-01

    Granular processes have been recognized as key drivers in earth surface dynamics, especially in steep landscapes because of the large size of sediment found in channels. In this work we focus on step-pool morphologies, studying the effect of particle jamming on step formation. Starting from the jammed-state hypothesis, we assume that grains generate steps because of particle jamming and that those steps are inherently more stable because of additional force chains in the transversal direction. We test this hypothesis with a particle-based reduced-complexity model, CAST2, where sediment is organized in patches and entrainment, transport, and deposition of grains depend on flow stage and local topography through simplified phenomenological rules. The model operates with 2 grain sizes: fine grains, which can be mobilized by both large and moderate flows, and coarse grains, mobile only during large floods. First, we identify the minimum set of processes necessary to generate and maintain steps in a numerical channel: (a) occurrence of floods, (b) particle jamming, (c) low sediment supply, and (d) presence of sediment with different entrainment probabilities. Numerical results are compared with field observations collected in different step-pool channels in terms of step density, a variable that captures the proportion of the channel occupied by steps. Not only do the longitudinal profiles of numerical channels display step sequences similar to those observed in real step-pool streams, but the values of step density are also very similar when all the processes mentioned before are considered. Moreover, with CAST2 it is possible to run long simulations with repeated flood events, to test the effect of flood frequency on step formation. Numerical results indicate that larger step densities belong to systems more frequently perturbed by floods, compared to systems having a lower flood frequency. Our results highlight the important interactions between external hydrological forcing and internal geomorphic adjustment (e.g. jamming) on the response of step-pool streams, showing the potential of reduced-complexity models in fluvial geomorphology.

  17. A Cost Model for Testing Unmanned and Autonomous Systems of Systems

    DTIC Science & Technology

    2011-02-01

    those risks. In addition, the fundamental methods presented by Aranha and Borba to include the complexity and sizing of tests for UASoS, can be expanded...used as an input for test execution effort estimation models (Aranha & Borba, 2007). Such methodology is very relevant to this work because as a UASoS...calculate the test effort based on the complexity of the SoS. However, Aranha and Borba define test size as the number of steps required to complete

  18. The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation

    NASA Astrophysics Data System (ADS)

    Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.

    2018-04-01

    The study chooses standard stripe, dual-polarization SAR images from GF-3 as the basic data. Residential area extraction processes and methods based upon GF-3 image texture segmentation are compared and analyzed. GF-3 image processing includes radiometric calibration, complex data conversion, multi-look processing, and image filtering; a suitability analysis of different image filtering methods shows that the Kuan filter is efficient for extracting residential areas. We then calculated and analyzed texture feature vectors using the GLCM (Gray Level Co-occurrence Matrix), whose parameters include the moving window size, step size, and angle; the results show that a window size of 11*11, a step of 1, and an angle of 0° are effective and optimal for residential area extraction. With the FNEA (Fractal Net Evolution Approach), we segmented the GLCM texture images and extracted the residential areas by threshold setting. The result of residential area extraction was verified and assessed by confusion matrix: overall accuracy is 0.897 and kappa is 0.881. We also extracted the residential areas by SVM classification based on GF-3 images; its overall accuracy is 0.09 lower than that of the extraction method based on GF-3 texture image segmentation. We conclude that residential area extraction based on GF-3 SAR texture image multi-scale segmentation is simple and highly accurate. Since it is difficult to obtain multi-spectral remote sensing images in southern China, which is cloudy and rainy throughout the year, this paper has certain reference significance.
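
    As an illustration of the texture step, the sketch below computes GLCM features with the window size, step (distance), and angle the study reports as optimal. It uses scikit-image rather than the authors' software, and the random patch is a placeholder for a calibrated, Kuan-filtered GF-3 amplitude window.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def window_texture(win):
          """GLCM texture features for one 8-bit window, using the settings
          found optimal in the study: distance (step) 1, angle 0 degrees."""
          glcm = graycomatrix(win, distances=[1], angles=[0.0],
                              levels=256, symmetric=True, normed=True)
          return {p: float(graycoprops(glcm, p)[0, 0])
                  for p in ("contrast", "homogeneity", "energy", "correlation")}

      # hypothetical 11*11 window; real GF-3 data would be calibrated,
      # multi-looked, and Kuan-filtered before this step
      patch = (np.random.default_rng(1).random((11, 11)) * 255).astype(np.uint8)
      print(window_texture(patch))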

  19. 7 CFR 3052.520 - Major program determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Auditors § 3052.520 Major program determination. (a) General. The auditor shall use a risk-based approach... section shall be followed. (b) Step 1. (1) The auditor shall identify the larger Federal programs, which... providing loans significantly affects the number or size of Type A programs, the auditor shall consider this...

  20. Finite element mesh refinement criteria for stress analysis

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1990-01-01

    This paper discusses procedures for finite-element mesh selection and refinement. The objective is to improve accuracy. The procedures are based on (1) the minimization of the stiffness matrix trace (optimizing node location); (2) the use of h-version refinement (rezoning, element size reduction, and increasing the number of elements); and (3) the use of p-version refinement (increasing the order of polynomial approximation of the elements). A step-by-step procedure of mesh selection, improvement, and refinement is presented. The criteria for 'goodness' of a mesh are based on strain energy, displacement, and stress values at selected critical points of a structure. An analysis of an aircraft lug problem is presented as an example.

  1. Demodulation algorithm for optical fiber F-P sensor.

    PubMed

    Yang, Huadong; Tong, Xinglin; Cui, Zhang; Deng, Chengwei; Guo, Qian; Hu, Pan

    2017-09-10

    The demodulation algorithm is very important to improving the measurement accuracy of a sensing system. In this paper, a variable step size hill climbing search method is applied to the optical fiber Fabry-Perot (F-P) sensing demodulation algorithm for the first time. Compared with the traditional discrete gap transformation demodulation algorithm, the computation is greatly reduced by changing the step size of each climb, achieving nano-scale resolution, high measurement accuracy, high demodulation rates, and a large dynamic demodulation range. An optical fiber F-P pressure sensor based on a micro-electro-mechanical system (MEMS) has been fabricated to carry out the experiment, and the results show that the resolution of the algorithm can reach the nano-scale level and that the sensor's sensitivity is about 2.5 nm/kPa, which is similar to the theoretical value; the sensor also shows good reproducibility.
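
    A minimal sketch of a variable step size hill climbing search on a one-dimensional objective, in the spirit of the demodulation search described above: march while a neighbouring point improves, shrink the step otherwise, and stop at the target resolution. The objective, starting point, and shrink factor are illustrative assumptions, not the paper's.

      def hill_climb(f, x0, step0, min_step, shrink=0.5):
          """Variable-step hill climbing: move in the better direction with
          the current step; when neither neighbour improves, shrink the
          step. Stops once the step falls below the target resolution."""
          x, step = x0, step0
          while step >= min_step:
              here, left, right = f(x), f(x - step), f(x + step)
              if right > here and right >= left:
                  x += step
              elif left > here:
                  x -= step
              else:
                  step *= shrink        # refine around the current optimum
          return x

      # toy objective standing in for an F-P interference-match metric,
      # peaked at a hypothetical cavity length of 123.456 (arbitrary units)
      g = lambda L: -(L - 123.456) ** 2
      print(hill_climb(g, x0=100.0, step0=10.0, min_step=1e-6))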

  2. A film-rupture model of hydrogen-induced, slow crack growth in alpha-beta titanium

    NASA Technical Reports Server (NTRS)

    Nelson, H. G.

    1975-01-01

    The appearance of the terrace-like fracture morphology of gaseous-hydrogen-induced crack growth in acicular alpha-beta titanium alloys is discussed as a function of specimen configuration, magnitude of applied stress intensity, test temperature, and hydrogen pressure. Although the overall appearance of the terrace structure remained essentially unchanged, a distinguishable variation is found in the size of the individual terrace steps, and step size is found to be inversely dependent upon the rate of hydrogen-induced slow crack growth. Additionally, this inverse relationship is independent of all the variables investigated. These observations are quantitatively discussed in terms of the formation and growth of a thin hydride film along the alpha-beta boundaries, and a qualitative model for hydrogen-induced slow crack growth is presented, based on the film-rupture model of stress corrosion cracking.

  3. SPIP: A computer program implementing the Interaction Picture method for simulation of light-wave propagation in optical fibre

    NASA Astrophysics Data System (ADS)

    Balac, Stéphane; Fernandez, Arnaud

    2016-02-01

    The computer program SPIP is aimed at solving the Generalized Non-Linear Schrödinger equation (GNLSE), which arises in optics, e.g., in the modelling of light-wave propagation in an optical fibre, by the Interaction Picture method, an efficient new alternative to the Symmetric Split-Step method. In the SPIP program, a dedicated, essentially cost-free adaptive step-size control based on a 4th-order embedded Runge-Kutta method is implemented in order to speed up the resolution.
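
    The sketch below shows the generic accept/reject logic of such an adaptive step-size control. For self-containedness it estimates the local error by RK4 step doubling instead of SPIP's actual embedded fourth-order pair, and the tolerance rule and safety factors are conventional textbook choices, not taken from the program.

      import numpy as np

      def adaptive_rk(f, y0, t0, t1, h0, tol, step_pair):
          """Adaptive step-size loop around an error-estimating RK step.
          step_pair(f, t, y, h) returns (y_new, local_error); a step is
          accepted when the error is below tol, and the next h follows
          the classic (tol/err)^(1/5) rule for a 4th-order method."""
          t, y, h = t0, np.asarray(y0, dtype=float), h0
          while t < t1:
              h = min(h, t1 - t)
              y_new, err = step_pair(f, t, y, h)
              if err <= tol:                       # accept the step
                  t, y = t + h, y_new
              h *= min(5.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
          return y

      def rk4_doubling(f, t, y, h):
          """Step-doubling stand-in for an embedded pair: one full RK4 step
          versus two half steps gives a local error estimate (factor 1/15
          from 4th-order Richardson extrapolation)."""
          def rk4(t, y, h):
              k1 = f(t, y)
              k2 = f(t + h / 2, y + h / 2 * k1)
              k3 = f(t + h / 2, y + h / 2 * k2)
              k4 = f(t + h, y + h * k3)
              return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
          full = rk4(t, y, h)
          half = rk4(t + h / 2, rk4(t, y, h / 2), h / 2)
          return half, float(np.max(np.abs(half - full)) / 15.0)

      # linear test problem y' = -y; exact value exp(-5) ~ 0.0067379
      print(adaptive_rk(lambda t, y: -y, 1.0, 0.0, 5.0, 0.5, 1e-8, rk4_doubling))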

  4. Dependence of Hurricane intensity and structures on vertical resolution and time-step size

    NASA Astrophysics Data System (ADS)

    Zhang, Da-Lin; Wang, Xiaoxue

    2003-09-01

    In view of the growing interest in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step sizes on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with a finest grid size of 6 km. It is shown that changing the vertical resolution and time-step size has significant effects on hurricane intensity and inner-core cloud/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure and stronger three-dimensional winds, and more precipitation. Similar effects, but to a lesser extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on the hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable to model more realistically the intensity and inner-core structures and evolution of tropical storms as well as other convectively driven weather systems.

  5. An Ai Chi-based aquatic group improves balance and reduces falls in community-dwelling adults: A pilot observational cohort study.

    PubMed

    Skinner, Elizabeth H; Dinh, Tammy; Hewitt, Melissa; Piper, Ross; Thwaites, Claire

    2016-11-01

    Falls are associated with morbidity, loss of independence, and mortality. While land-based group exercise and Tai Chi programs reduce the risk of falls, aquatic therapy may allow patients to complete balance exercises with less pain and fear of falling; however, limited data exist. The objective of the study was to pilot the implementation of an aquatic group based on Ai Chi principles (Aquabalance) and to evaluate the safety, intervention acceptability, and intervention effect sizes. Pilot observational cohort study. Forty-two outpatients underwent a single 45-minute weekly group aquatic Ai Chi-based session for eight weeks (Aquabalance). Safety was monitored using organizational reporting systems. Patient attendance, satisfaction, and self-reported falls were also recorded. Balance measures included the Timed Up and Go (TUG) test, the Four Square Step Test (FSST), and the unilateral Step Tests. Forty-two patients completed the program. It was feasible to deliver Aquabalance, as evidenced by the median (IQR) attendance rate of 8.0 (7.8, 8.0) out of 8. No adverse events occurred and participants reported high satisfaction levels. Improvements were noted on the TUG, 10-meter walk test, the Functional Reach Test, the FSST, and the unilateral step tests (p < 0.05). The proportion of patients defined as high falls risk reduced from 38% to 21%. The study was limited by its small sample size, single-center nature, and the absence of a control group. Aquabalance was safe, well-attended, and acceptable to participants. A randomized controlled assessor-blinded trial is required.

  6. Nanoporous anodic aluminum oxide with a long-range order and tunable cell sizes by phosphoric acid anodization on pre-patterned substrates

    PubMed Central

    Surawathanawises, Krissada; Cheng, Xuanhong

    2014-01-01

    Nanoporous anodic aluminum oxide (AAO) has been explored for various applications due to its regular cell arrangement and relatively easy fabrication processes. However, conventional two-step anodization based on self-organization only allows the fabrication of a few discrete cell sizes and the formation of small domains of hexagonally packed pores. Recent efforts to pre-pattern aluminum followed by anodization significantly improve the regularity and available pore geometries in AAO, while a systematic study of the anodization conditions, especially the impact of acid composition on pore formation guided by nanoindentation, is still lacking. In this work, we pre-patterned aluminum thin films using ordered monolayers of silica beads and formed porous AAO in a single-step anodization in phosphoric acid. Controllable cell sizes ranging from 280 nm to 760 nm were obtained, matching the diameters of the silica nanobead molds used. This range of cell size is significantly greater than what has been reported for AAO formed in phosphoric acid in the literature. In addition, the relationships between the acid concentration, cell size, pore size, anodization voltage and film growth rate were studied quantitatively. The results are consistent with the theory of oxide formation through an electrochemical reaction. Not only does this study provide useful operational conditions of nanoindentation-induced anodization in phosphoric acid, it also generates significant information for fundamental understanding of AAO formation. PMID:24535886

  7. Effects of combined silicon and molybdenum alloying on the size and evolution of microalloy precipitates in HSLA steels containing niobium and titanium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlina, Erik J., E-mail: e.pavlina@deakin.edu.au; Van Tyne, C.J.; Speer, J.G.

    2015-04-15

    The effects of combined silicon and molybdenum alloying additions on microalloy precipitate formation in austenite after single- and double-step deformations below the austenite no-recrystallization temperature were examined in high-strength low-alloy (HSLA) steels microalloyed with titanium and niobium. The precipitation sequence in austenite was evaluated following an interrupted thermomechanical processing simulation using transmission electron microscopy. Large (~ 105 nm), cuboidal titanium-rich nitride precipitates showed no evolution in size during reheating and simulated thermomechanical processing. The average size and size distribution of these precipitates were also not affected by the combined silicon and molybdenum additions or by deformation. Relatively fine (< 20 nm), irregular-shaped niobium-rich carbonitride precipitates formed in austenite during isothermal holding at 1173 K. Based upon analysis that incorporated precipitate growth and coarsening models, the combined silicon and molybdenum additions were considered to increase the diffusivity of niobium in austenite by over 30% and result in coarser precipitates at 1173 K compared to the lower alloyed steel. Deformation decreased the size of the niobium-rich carbonitride precipitates that formed in austenite. - Highlights: • We examine combined Si and Mo additions on microalloy precipitation in austenite. • Precipitate size tends to decrease with increasing deformation steps. • Combined Si and Mo alloying additions increase the diffusivity of Nb in austenite.

  8. Evaluation of TOPLATS on three Mediterranean catchments

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel

    2016-08-01

    Physically based hydrological models are complex tools that provide a complete description of the different processes occurring on a catchment. The TOPMODEL-based Land-Atmosphere Transfer Scheme (TOPLATS) simulates water and energy balances at different time steps, in both lumped and distributed modes. In order to gain insight into the behavior of TOPLATS and its applicability in different conditions, a detailed evaluation needs to be carried out. This study aimed to develop a complete evaluation of TOPLATS including: (1) a detailed review of previous research works using this model; (2) a sensitivity analysis (SA) of the model with two contrasting methods (Morris and Sobol) of different complexity; (3) a 4-step calibration strategy based on a multi-start Powell optimization algorithm; and (4) an analysis of the influence of simulation time step (hourly vs. daily). The model was applied on three catchments of varying size (La Tejeria, Cidacos and Arga), located in Navarre (Northern Spain), and characterized by different levels of Mediterranean climate influence. Both the Morris and Sobol methods showed very similar results that identified the Brooks-Corey pore size distribution index (B), bubbling pressure (ψc), and hydraulic conductivity decay (f) as the three overall most influential parameters in TOPLATS. After calibration and validation, adequate streamflow simulations were obtained in the two wettest catchments, but the driest (Cidacos) gave poor results in validation, due to the large climatic variability between the calibration and validation periods. To overcome this issue, an alternative random and discontinuous method of cal/val period selection was implemented, improving model results.
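
    For readers unfamiliar with the Sobol step, a minimal sensitivity-analysis sketch using the SALib package follows. The parameter names match the three influential TOPLATS parameters identified above, but the bounds and the toy response function are placeholders, not the study's setup.

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["B", "psi_c", "f"],            # the three influential parameters
          "bounds": [[0.1, 1.0], [0.01, 0.5], [0.1, 5.0]],  # illustrative ranges
      }

      X = saltelli.sample(problem, 256)            # Sobol sampling design

      def toy_model(x):
          """Stand-in for a TOPLATS streamflow skill metric."""
          b, psi, f = x
          return np.sin(3 * b) + 0.5 * psi ** 2 + 0.2 * b * f

      Y = np.apply_along_axis(toy_model, 1, X)
      Si = sobol.analyze(problem, Y)               # first-order indices in Si["S1"]
      print(dict(zip(problem["names"], Si["S1"].round(3))))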

  9. Multiscale modeling of porous ceramics using movable cellular automaton method

    NASA Astrophysics Data System (ADS)

    Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.

    2017-10-01

    The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, which is a particle method in modern computational solid mechanics. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show the correct behavior of the model sample at the macroscale.
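
    The per-level statistics can be condensed as in the short sketch below, which fits a two-parameter Weibull distribution (location fixed at zero, as is usual for brittle-strength statistics) to sub-volume strengths with SciPy; the sample values are invented for illustration.

      from scipy.stats import weibull_min

      # hypothetical compressive strengths of representative sub-volume
      # samples at one scale level, in MPa
      strengths = weibull_min.rvs(c=8.0, scale=420.0, size=50, random_state=3)

      # fit the Weibull modulus (shape) and characteristic strength (scale)
      shape, loc, scale = weibull_min.fit(strengths, floc=0)
      print(f"Weibull modulus m = {shape:.1f}, "
            f"characteristic strength = {scale:.0f} MPa")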

  10. Firefighter Hand Anthropometry and Structural Glove Sizing: A New Perspective

    PubMed Central

    Hsiao, Hongwei; Whitestone, Jennifer; Kau, Tsui-Ying; Hildreth, Brooke

    2015-01-01

    Objective We evaluated the current use and fit of structural firefighting gloves and developed an improved sizing scheme that better accommodates the U.S. firefighter population. Background Among surveys, 24% to 30% of men and 31% to 62% of women reported experiencing problems with the fit or bulkiness of their structural firefighting gloves. Method An age-, race/ethnicity-, and gender-stratified sample of 863 male and 88 female firefighters across the United States participated in the study. Fourteen hand dimensions relevant to glove design were measured. A cluster analysis of the hand dimensions was performed to explore options for an improved sizing scheme. Results The current national standard structural firefighting glove-sizing scheme underrepresents firefighter hand size range and shape variation. In addition, mismatch between existing sizing specifications and hand characteristics, such as hand dimensions, user selection of glove size, and the existing glove sizing specifications, is significant. An improved glove-sizing plan based on clusters of overall hand size and hand/finger breadth-to-length contrast has been developed. Conclusion This study presents the most up-to-date firefighter hand anthropometry and a new perspective on glove accommodation. The new seven-size system contains narrower variations (standard deviations) for almost all dimensions for each glove size than the current sizing practices. Application The proposed science-based sizing plan for structural firefighting gloves provides a step-forward perspective (i.e., including two women hand model–based sizes and two wide-palm sizes for men) for glove manufacturers to advance firefighter hand protection. PMID:26169309

  11. A practical Bayesian stepped wedge design for community-based cluster-randomized clinical trials: The British Columbia Telehealth Trial.

    PubMed

    Cunanan, Kristen M; Carlin, Bradley P; Peterson, Kevin A

    2016-12-01

    Many clinical trial designs are impractical for community-based clinical intervention trials. Stepped wedge trial designs provide practical advantages, but few descriptions exist of their clinical implementational features, statistical design efficiencies, and limitations. Enhance efficiency of stepped wedge trial designs by evaluating the impact of design characteristics on statistical power for the British Columbia Telehealth Trial. The British Columbia Telehealth Trial is a community-based, cluster-randomized, controlled clinical trial in rural and urban British Columbia. To determine the effect of an Internet-based telehealth intervention on healthcare utilization, 1000 subjects with an existing diagnosis of congestive heart failure or type 2 diabetes will be enrolled from 50 clinical practices. Hospital utilization is measured using a composite of disease-specific hospital admissions and emergency visits. The intervention comprises online telehealth data collection and counseling provided to support a disease-specific action plan developed by the primary care provider. The planned intervention is sequentially introduced across all participating practices. We adopt a fully Bayesian, Markov chain Monte Carlo-driven statistical approach, wherein we use simulation to determine the effect of cluster size, sample size, and crossover interval choice on type I error and power to evaluate differences in hospital utilization. For our Bayesian stepped wedge trial design, simulations suggest moderate decreases in power when crossover intervals from control to intervention are reduced from every 3 to 2 weeks, and dramatic decreases in power as the numbers of clusters decrease. Power and type I error performance were not notably affected by the addition of nonzero cluster effects or a temporal trend in hospitalization intensity. Stepped wedge trial designs that intervene in small clusters across longer periods can provide enhanced power to evaluate comparative effectiveness, while offering practical implementation advantages in geographic stratification, temporal change, use of existing data, and resource distribution. Current population estimates were used; however, models may not reflect actual event rates during the trial. In addition, temporal or spatial heterogeneity can bias treatment effect estimates. © The Author(s) 2016.

  12. Joint Transform Correlation for face tracking: elderly fall detection application

    NASA Astrophysics Data System (ADS)

    Katz, Philippe; Aron, Michael; Alfalou, Ayman

    2013-03-01

    In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane in which the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i). Then, the reference image to be exploited in the next frame (frame i+1) is updated according to the previous one (frame i). To validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences with several situations and different JTC parameters is carried out in order to quantify their effects on the tracking performance (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest...); (ii) the tracking algorithm is integrated into an elderly fall detection application. The first reference image is a face detected by means of Haar descriptors, which is then localized in the new video image thanks to our tracking method. In order to avoid a bad update of the reference frame, a method based on a comparison of image intensity histograms is proposed and integrated into our algorithm. This step ensures robust tracking of the reference frame. This article focuses on the optimisation and evaluation of the face tracking step. A supplementary fall detection step, based on vertical acceleration and position, will be added and studied in further work.
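
    A compact numerical sketch of the non-linear JTC idea: reference and scene share one input plane, the joint power spectrum is raised to a power k < 1 (the non-linearity), and the inverse transform yields a correlation plane whose off-centre peaks encode the target position. The array sizes, the value of k, and the zero-order mask are illustrative assumptions, not the paper's parameters.

      import numpy as np

      def jtc_correlation(ref, scene, k=0.3):
          """Non-linear joint transform correlation: the reference and the
          scene share one input plane; the joint power spectrum is raised
          to the power k (k < 1 sharpens the correlation peaks)."""
          h, w = ref.shape
          plane = np.zeros((h, 2 * w))
          plane[:, :w] = ref                       # reference half
          plane[:, w:] = scene                     # scene half
          jps = np.abs(np.fft.fft2(plane)) ** 2    # joint power spectrum
          return np.fft.fftshift(np.abs(np.fft.ifft2(jps ** k)))

      rng = np.random.default_rng(2)
      face = rng.random((32, 32))
      scene = np.roll(face, (3, 5), axis=(0, 1))   # target displaced by (3, 5)
      corr = jtc_correlation(face, scene)
      # suppress the dominant zero-order (autocorrelation) term at the
      # centre; the strongest remaining peaks encode the target location
      cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
      corr[cy - 4:cy + 4, cx - 12:cx + 12] = 0.0
      print(np.unravel_index(corr.argmax(), corr.shape))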

  13. Patient-reported outcome measures versus inertial performance-based outcome measures: A prospective study in patients undergoing primary total knee arthroplasty.

    PubMed

    Bolink, S A A N; Grimm, B; Heyligers, I C

    2015-12-01

    Outcome assessment of total knee arthroplasty (TKA) by subjective patient reported outcome measures (PROMs) may not fully capture the functional (dis-)abilities of relevance. Objective performance-based outcome measures could provide distinct information. An ambulant inertial measurement unit (IMU) allows kinematic assessment of physical performance and could potentially be used for routine follow-up. To investigate the responsiveness of IMU measures in patients following TKA and compare outcomes with conventional PROMs. Patients with end stage knee OA (n=20, m/f=7/13; age=67.4 standard deviation 7.7 years) were measured preoperatively and one year postoperatively. IMU measures were derived during gait, sit-stand transfers and block step-up transfers. PROMs were assessed by using the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and Knee Society Score (KSS). Responsiveness was calculated by the effect size, correlations were calculated with Spearman's rho correlation coefficient. One year after TKA, patients performed significantly better at gait, sit-to-stand transfers and block step-up transfers. Measures of time and kinematic IMU measures demonstrated significant improvements postoperatively for each performance-based test. The largest improvement was found in block step-up transfers (effect size=0.56-1.20). WOMAC function score and KSS function score demonstrated moderate correlations (Spearman's rho=0.45-0.74) with some of the physical performance-based measures pre- and postoperatively. To characterize the changes in physical function after TKA, PROMs could be supplemented by performance-based measures, assessing function during different activities and allowing kinematic characterization with an ambulant IMU. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. A road map to the new frontier: finding ETI

    NASA Astrophysics Data System (ADS)

    Bertaux, J. L.

    2014-04-01

    An obvious New Frontier for humanity is to locate our nearest technically advanced neighbors (ETI, extra-terrestrial intelligence). This quest can be achieved in three steps: 1. find the nearest exoplanets in the habitable zone (HZ); 2. find biosignatures in their spectra; 3. find signs of advanced technology. We argue that steps 2 and 3 will require space telescopes that need to be oriented to targets already identified in step 1 as hosting exoplanets of Earth or super-Earth size in the habitable zone. We show that non-transiting planets in the HZ are 3 to 9 times nearer the Sun than transiting planets, the gain factor being a function of star temperature. The requirement for step 1 is within the reach of a network of 2.5 m diameter ground-based automated telescopes associated with HARPS-type spectrometers.

  15. Molecular simulation of small Knudsen number flows

    NASA Astrophysics Data System (ADS)

    Fei, Fei; Fan, Jing

    2012-11-01

    The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature through sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10^-3 to 10^-4 have been investigated. It is shown that the IP calculations are not only accurate, but also efficient because they make it possible to use a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.

  16. A new approach for bioassays based on frequency- and time-domain measurements of magnetic nanoparticles.

    PubMed

    Oisjöen, Fredrik; Schneiderman, Justin F; Astalan, Andrea Prieto; Kalabukhov, Alexey; Johansson, Christer; Winkler, Dag

    2010-01-15

    We demonstrate a one-step wash-free bioassay measurement system capable of tracking biochemical binding events. Our approach combines the high resolution of frequency- and high speed of time-domain measurements in a single device in combination with a fast one-step bioassay. The one-step nature of our magnetic nanoparticle (MNP) based assay reduces the time between sample extraction and quantitative results while mitigating the risks of contamination related to washing steps. Our method also enables tracking of binding events, providing the possibility of, for example, investigation of how chemical/biological environments affect the rate of a binding process or study of the action of certain drugs. We detect specific biological binding events occurring on the surfaces of fluid-suspended MNPs that modify their magnetic relaxation behavior. Herein, we extrapolate a modest sensitivity to analyte of 100 ng/ml with the present setup using our rapid one-step bioassay. More importantly, we determine the size-distributions of the MNP systems with theoretical fits to our data obtained from the two complementary measurement modalities and demonstrate quantitative agreement between them. Copyright 2009 Elsevier B.V. All rights reserved.

  17. A new theory for multistep discretizations of stiff ordinary differential equations: Stability with large step sizes

    NASA Technical Reports Server (NTRS)

    Majda, G.

    1985-01-01

    A large set of variable coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale, but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.

  18. Stopband-Extended and Size-Miniaturized Low-Pass Filter Based on Interdigital Capacitor Loaded Hairpin Resonator with Four Transmission Zeros

    NASA Astrophysics Data System (ADS)

    Wu, Jia-Jia; Li, Lin

    2018-04-01

    In this paper, a compact low-pass filter (LPF) with a wide stopband is proposed based on an interdigital capacitor loaded hairpin resonator. The structure is composed of an upper high-impedance transmission line, a middle interdigital capacitor, and a pair of inter-coupled symmetrical stepped-impedance stubs. A detailed investigation into this structure based on the even-odd mode approach reveals that up to four transmission zeros can be generated and reallocated by choosing proper circuit parameters. Owing to these transmission zeros, the fabricated quasi-elliptic LPFs experimentally demonstrate a wide 20 dB stopband from 1.4 fc to 5.1 fc with a compact size of only 0.005 λg².

  19. Automatic Solitary Lung Nodule Detection in Computed Tomography Images Slices

    NASA Astrophysics Data System (ADS)

    Sentana, I. W. B.; Jawas, N.; Asri, S. A.

    2018-01-01

    Lung nodules are an early indicator of some lung diseases, including lung cancer. In Computed Tomography (CT) images, a nodule appears as a shape that is brighter than the surrounding lung. This research aims to develop an application that automatically detects lung nodules in CT images. The algorithm consists of several steps: image acquisition and conversion, image binarization, lung segmentation, blob detection, and classification. Data acquisition takes the images slice by slice from the original *.dicom format, and each slice is then converted into the *.tif image format. Binarization, tailoring the Otsu algorithm, then separates the background and foreground parts of each slice. After removing the background part, the next step is to segment only the lung area, so that nodules can be localized more easily. Otsu's algorithm is used once again to detect nodule blobs in the localized lung area. The final step tailors a Support Vector Machine (SVM) to classify the nodules. The application has succeeded in detecting near-round nodules above a certain size threshold. The detection results show drawbacks in thresholding the size and shape of nodules, which need to be enhanced in the next part of the research. The algorithm also cannot detect nodules attached to the wall and lung channels, since the search depends only on colour differences.
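
    A minimal sketch of the binarization and blob-detection steps with scikit-image: Otsu thresholding, connected-component labelling, and a crude size-and-roundness filter. The area and circularity thresholds are invented for illustration and are not the study's values.

      import numpy as np
      from skimage.filters import threshold_otsu
      from skimage.measure import label, regionprops

      def candidate_nodules(lung, min_area=10, max_area=500, min_circ=0.7):
          """Otsu-threshold a (pre-segmented) lung region, label the bright
          blobs, and keep near-round blobs within a size window."""
          mask = lung > threshold_otsu(lung)
          hits = []
          for r in regionprops(label(mask)):
              circ = 4 * np.pi * r.area / max(r.perimeter, 1e-9) ** 2
              if min_area <= r.area <= max_area and circ >= min_circ:
                  hits.append(r.centroid)
          return hits

      # toy slice: dim lung field containing one bright, round "nodule"
      img = np.full((128, 128), 0.2)
      yy, xx = np.ogrid[:128, :128]
      img[(yy - 60) ** 2 + (xx - 70) ** 2 < 25] = 0.9
      print(candidate_nodules(img))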

  20. Image grating metrology using phase-stepping interferometry in scanning beam interference lithography

    NASA Astrophysics Data System (ADS)

    Li, Minkang; Zhou, Changhe; Wei, Chunlong; Jia, Wei; Lu, Yancong; Xiang, Changcheng; Xiang, XianSong

    2016-10-01

    Large-sized gratings are essential optical elements in laser fusion and space astronomy facilities. Scanning beam interference lithography is an effective method to fabricate large-sized gratings. To minimize the nonlinear phase written into the photo-resist, the image grating must be measured so that the left and right beams can be adjusted to interfere at their waists. In this paper, we propose a new method to conduct wavefront metrology based on phase-stepping interferometry. Firstly, a transmission grating is used to combine the two beams to form an interferogram, which is recorded by a charge coupled device (CCD). Phase steps are introduced by moving the grating with a linear stage monitored by a laser interferometer. A series of interferograms is recorded as the displacement is measured by the laser interferometer. Secondly, to eliminate the tilt and piston errors during the phase stepping, the iterative least-squares phase shift method is implemented to obtain the wrapped phase. Thirdly, we use the discrete cosine transform least-squares method to unwrap the phase map. Experimental results indicate that the measured wavefront has a nonlinear phase of around 0.05λ at 404.7 nm. Finally, once the image grating is acquired, we simulate the print error written into the photo-resist.
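
    For fixed, known phase steps the least-squares estimate reduces to the classic N-step formula sketched below; the paper's iterative algorithm additionally re-estimates the steps and the tilt/piston errors, which this minimal version omits.

      import numpy as np

      def phase_from_steps(frames, deltas):
          """N-step phase-shifting estimate for known phase steps d_k:
          phi = atan2(-sum_k I_k sin d_k, sum_k I_k cos d_k).
          Returns the wrapped phase, which still has to be unwrapped."""
          I = np.asarray(frames, dtype=float)
          d = np.asarray(deltas, dtype=float)[:, None, None]
          return np.arctan2(-(I * np.sin(d)).sum(axis=0),
                            (I * np.cos(d)).sum(axis=0))

      # synthetic 4-step test: recover a known phase ramp
      y, x = np.mgrid[:64, :64]
      true_phase = 0.05 * x
      deltas = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
      frames = [1 + 0.8 * np.cos(true_phase + d) for d in deltas]
      est = phase_from_steps(frames, deltas)
      print(np.allclose(np.angle(np.exp(1j * (est - true_phase))), 0, atol=1e-8))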

  1. Network meta-analysis: application and practice using Stata

    PubMed Central

    2017-01-01

    This review aimed to arrange the concepts of a network meta-analysis (NMA) and to demonstrate the analytical process of NMA using Stata software under frequentist framework. The NMA tries to synthesize evidences for a decision making by evaluating the comparative effectiveness of more than two alternative interventions for the same condition. Before conducting a NMA, 3 major assumptions—similarity, transitivity, and consistency—should be checked. The statistical analysis consists of 5 steps. The first step is to draw a network geometry to provide an overview of the network relationship. The second step checks the assumption of consistency. The third step is to make the network forest plot or interval plot in order to illustrate the summary size of comparative effectiveness among various interventions. The fourth step calculates cumulative rankings for identifying superiority among interventions. The last step evaluates publication bias or effect modifiers for a valid inference from results. The synthesized evidences through five steps would be very useful to evidence-based decision-making in healthcare. Thus, NMA should be activated in order to guarantee the quality of healthcare system. PMID:29092392

  2. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of calculation cells should be small and the calculation should be transient with small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that the simulation in practice is difficult. This study investigates the requirements of cell size and the time step size for accurate simulations, and the filtering effects caused by coarser mesh and longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.

  3. Analysis of stability for stochastic delay integro-differential equations.

    PubMed

    Zhang, Yu; Li, Longsuo

    2018-01-01

    In this paper, we are concerned with the stability of numerical methods applied to stochastic delay integro-differential equations. For linear stochastic delay integro-differential equations, it is shown that the split-step backward Euler method preserves mean-square stability without any restriction on the step size, while the Euler-Maruyama method reproduces mean-square stability only under a step-size constraint. We also confirm the mean-square stability of the split-step backward Euler method for nonlinear stochastic delay integro-differential equations. Numerical experiments further verify the theoretical results.
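
    As an illustration of the explicit method's step-size constraint, the sketch below applies Euler-Maruyama to a linear stochastic delay test equation (a plain delay term standing in for the integro-differential right-hand side) and estimates E[X(T)^2] for several step sizes; the coefficients are chosen, as an assumption, so that the problem is mean-square stable but stiff.

      import numpy as np

      def em_mean_square(a, b, c, tau, h, T, paths=2000, seed=0):
          """Euler-Maruyama for dX = (a*X(t) + b*X(t - tau)) dt + c*X(t) dW
          with unit history on [-tau, 0]; returns the estimate of E[X(T)^2].
          Growth vs. decay probes mean-square stability at step size h."""
          rng = np.random.default_rng(seed)
          lag = round(tau / h)                 # delay measured in steps
          n = round(T / h)
          X = np.ones((paths, n + lag + 1))
          for k in range(lag, lag + n):
              dW = rng.standard_normal(paths) * np.sqrt(h)
              X[:, k + 1] = (X[:, k] + (a * X[:, k] + b * X[:, k - lag]) * h
                             + c * X[:, k] * dW)
          return float((X[:, -1] ** 2).mean())

      # mean-square stable but stiff test problem: the explicit scheme
      # decays for small h and blows up once h is too large
      for h in (0.01, 0.1, 0.5):
          print(h, em_mean_square(a=-8.0, b=1.0, c=0.5, tau=1.0, h=h, T=10.0))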

  4. 29 CFR 99.520 - Major program determination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Auditors § 99.520 Major program determination. (a) General. The auditor shall use a risk-based approach to... followed. (b) Step 1. (1) The auditor shall identify the larger Federal programs, which shall be labeled... size of Type A programs, the auditor shall consider this Federal program as a Type A program and...

  6. Synthesis and characterization of fluorapatite-titania (FAp-TiO2) nanocomposite via mechanochemical process

    NASA Astrophysics Data System (ADS)

    Ebrahimi-Kahrizsangi, Reza; Nasiri-Tabrizi, Bahman; Chami, Akbar

    2010-09-01

    In this paper, the synthesis of a fluorapatite-titania (FAp-TiO2) bionanocomposite via a one-step mechanochemical process was studied. Characterization of the products was accomplished by X-ray diffraction (XRD), Fourier transform infrared (FT-IR) spectroscopy, energy dispersive X-ray spectroscopy (EDX), scanning electron microscopy (SEM), and transmission electron microscopy (TEM) techniques. Based on the XRD patterns and FT-IR spectroscopy, the correlation between the structural features of the nanostructured FAp-TiO2 and the process conditions was discussed. Variations in crystallite size, lattice strain, and volume fraction of grain boundary were investigated during milling and the subsequent heat treatment. Crystallization of the nanocomposite occurred after thermal treatment at 650 °C. The morphological features of the powders were influenced by the milling time. The resulting FAp-20 wt.%TiO2 nanocomposite powder exhibited an average particle size of 15 nm after 20 h of milling. The results show that the one-step mechanosynthesis technique is an effective route to prepare FAp-based nanocomposites with excellent morphological and structural features.

  7. Analysis of the track- and dose-averaged LET and LET spectra in proton therapy using the geant4 Monte Carlo code

    PubMed Central

    Guan, Fada; Peeler, Christopher; Bronk, Lawrence; Geng, Changran; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Grosshans, David; Mohan, Radhe; Titt, Uwe

    2015-01-01

    Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the geant4 Monte Carlo code. A further purpose was to provide a recommendation for selecting an appropriate LET quantity from geant4 simulations to correlate with the biological effectiveness of therapeutic protons. Methods: The authors developed a particle-tracking-step-based strategy to calculate the average LET quantities (track-averaged LET, LETt, and dose-averaged LET, LETd) using geant4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LETt and LETd of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information, including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra by combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LETt but significant for LETd. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in geant4 can result in incorrect LETd calculation results in the dose plateau region for small step limits. The erroneous LETd results can be attributed to the algorithm used to determine fluctuations in energy deposition along the tracking step in geant4. The incorrect LETd values lead to substantial differences in the calculated RBE. Conclusions: When the geant4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LETt in the dose plateau region and LETd around the Bragg peak. For a large step limit, i.e., 500 μm, LETd is recommended along the whole Bragg curve. The transition point depends on the beam parameters and can be found by determining the location where the gradient of the ratio of LETd and LETt becomes positive. PMID:26520716
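
    Given per-step energy deposits and step lengths, the two averages have simple estimators: the track-averaged LET weights each step's LET by its length, while the dose-averaged LET weights it by the energy the step deposits. A sketch with synthetic step data (illustrative numbers, not geant4 output):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic per-step tracking data: step lengths ell (um) and energy
    # deposited per step eps (keV).
    ell = rng.uniform(0.5, 1.0, 100_000)
    eps = rng.gamma(shape=2.0, scale=0.4, size=ell.size) * ell

    let_step = eps / ell                         # LET of each step (keV/um)

    # Track-averaged LET: length (fluence-like) weighting of the step LETs
    let_t = np.sum(eps) / np.sum(ell)

    # Dose-averaged LET: each step's LET weighted by the energy it deposits
    let_d = np.sum(eps * let_step) / np.sum(eps)

    print(f"LET_t = {let_t:.3f} keV/um, LET_d = {let_d:.3f} keV/um")
    ```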

  8. Evolution of Particle Size Distributions in Fragmentation Over Time

    NASA Astrophysics Data System (ADS)

    Charalambous, C. A.; Pike, W. T.

    2013-12-01

    We present a new model of fragmentation based on a probabilistic calculation of the repeated fracture of a particle population. The resulting continuous solution, which is in closed form, gives the evolution of fragmentation products from an initial block, through a scale-invariant power-law relationship, to a final comminuted powder. Models for the fragmentation of particles have been developed separately in mainly two different disciplines: the continuous integro-differential equations of batch mineral grinding (Reid, 1965) and the fractal analysis of geophysics (Turcotte, 1986) based on a discrete model with a single probability of fracture. The first gives a time-dependent development of the particle-size distribution but has resisted a closed-form solution, while the latter leads to the scale-invariant power laws, but with no time dependence. Bird (2009) recently introduced a bridge between these two approaches with a step-wise iterative calculation of the fragmentation products. The development of the particle-size distribution occurs in discrete steps: during each fragmentation event, the particles will repeatedly fracture probabilistically, cascading down the length scales to a final size distribution reached after all particles have failed to further fragment. We have identified this process as equivalent to a sequence of trials for each particle with a fixed probability of fragmentation. Although the resulting distribution is discrete, it can be reformulated as a continuous distribution in maturity over time and particle size. In our model, Turcotte's power-law distribution emerges at a unique maturation index that defines a regime boundary. Up to this index, the fragmentation is in an erosional regime with the initial particle size setting the scaling. Fragmentation beyond this index is in a regime of comminution with rebreakage of the particles down to the size limit of fracture. The maturation index can increment continuously, for example under grinding conditions, or in discrete steps, such as with impact events. In both cases our model gives the energy associated with the fragmentation in terms of the developing surface area of the population. We show the agreement of our model with the evolution of particle size distributions associated with episodic and continuous fragmentation, and how the evolution of some popular fractals may be represented using this approach. C. A. Charalambous and W. T. Pike (2013). Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model. Abstract submitted to the AGU 46th Fall Meeting. Bird, N. R. A., Watts, C. W., Tarquis, A. M., & Whitmore, A. P. (2009). Modeling dynamic fragmentation of soil. Vadose Zone Journal, 8(1), 197-201. Reid, K. J. (1965). A solution to the batch grinding equation. Chemical Engineering Science, 20(11), 953-963. Turcotte, D. L. (1986). Fractals and fragmentation. Journal of Geophysical Research: Solid Earth, 91(B2), 1921-1926.
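
    A minimal sketch of the trial-sequence picture described above: within one fragmentation event, every particle repeatedly faces a fixed fracture probability and splits in two until it fails a trial or reaches a size floor (standing in for the size limit of fracture). The binary split, the probability value, and the floor are illustrative assumptions, not the paper's parameterization.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fragmentation_event(sizes, p=0.6, s_min=1e-6):
        """One event: particles fracture probabilistically, cascading down
        the scales until every particle has failed to fragment further."""
        out, stack = [], list(sizes)
        while stack:
            s = stack.pop()
            if s > s_min and rng.random() < p:
                stack += [s / 2.0, s / 2.0]   # binary fracture, mass conserved
            else:
                out.append(s)                 # particle survives this event
        return out

    population = [1.0]                        # initial block
    for event in range(3):                    # maturity grows event by event
        population = fragmentation_event(population)
        sizes = np.array(population)
        print(f"event {event + 1}: N = {sizes.size}, "
              f"smallest = {sizes.min():.2e}, mean = {sizes.mean():.2e}")
    ```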

  9. Improving genetic evaluation of litter size and piglet mortality for both genotyped and nongenotyped individuals using a single-step method.

    PubMed

    Guo, X; Christensen, O F; Ostersen, T; Wang, Y; Lund, M S; Su, G

    2015-02-01

    A single-step method allows genetic evaluation using information of phenotypes, pedigree, and markers from genotyped and nongenotyped individuals simultaneously. This paper compared genomic predictions obtained from a single-step BLUP (SSBLUP) method, a genomic BLUP (GBLUP) method, a selection index blending (SELIND) method, and a traditional pedigree-based method (BLUP) for total number of piglets born (TNB), litter size at d 5 after birth (LS5), and mortality rate before d 5 (Mort; including stillbirth) in Danish Landrace and Yorkshire pigs. Data sets of 778,095 litters from 309,362 Landrace sows and 472,001 litters from 190,760 Yorkshire sows were used for the analysis. There were 332,795 Landrace and 207,255 Yorkshire animals in the pedigree data, among which 3,445 Landrace pigs (1,366 boars and 2,079 sows) and 3,372 Yorkshire pigs (1,241 boars and 2,131 sows) were genotyped with the Illumina PorcineSNP60 BeadChip. The results showed that the 3 methods with marker information (SSBLUP, GBLUP, and SELIND) produced more accurate predictions for genotyped animals than the pedigree-based method. For genotyped animals, the average of reliabilities for all traits in both breeds using traditional BLUP was 0.091, which increased to 0.171 when using GBLUP and to 0.179 when using SELIND and further increased to 0.209 when using SSBLUP. Furthermore, the average reliability of EBV for nongenotyped animals was increased from 0.091 for traditional BLUP to 0.105 for the SSBLUP. The results indicate that the SSBLUP is a good approach to practical genomic prediction of litter size and piglet mortality in Danish Landrace and Yorkshire populations.

  10. Novel Anthropometry Based on 3D-Bodyscans Applied to a Large Population Based Cohort

    PubMed Central

    Löffler-Wirth, Henry; Willscher, Edith; Ahnert, Peter; Wirkner, Kerstin; Engel, Christoph; Loeffler, Markus; Binder, Hans

    2016-01-01

    Three-dimensional (3D) whole body scanners are increasingly used as precise measuring tools for the rapid quantification of anthropometric measures in epidemiological studies. We analyzed 3D whole body scanning data of nearly 10,000 participants of a cohort collected from the adult population of Leipzig, one of the largest cities in Eastern Germany. We present a novel approach for the systematic analysis of these data which aims at identifying distinguishable clusters of body shapes called body types. In the first step, our method aggregates body measures provided by the scanner into meta-measures, each representing one relevant dimension of the body shape. In the next step, we stratified the cohort into body types and assessed their stability and dependence on the size of the underlying cohort. Using self-organizing maps (SOM) we identified thirteen robust meta-measures and fifteen body types comprising between 1 and 18 percent of the total cohort size. Thirteen of them are virtually gender specific (six for women and seven for men) and thus reflect the most abundant body shapes of women and men. Two body types include both women and men, and describe androgynous body shapes that lack typical gender specific features. The body types disentangle a large variability of body shapes, enabling distinctions which go beyond traditional indices such as the body mass index, the waist-to-height ratio, the waist-to-hip ratio and the mortality-hazard ABSI-index. As a next step, we will link the identified body types with disease predispositions to study how size and shape of the human body impact health and disease. PMID:27467550

  11. Ultra-small particles of iron oxide as peroxidase for immunohistochemical detection

    NASA Astrophysics Data System (ADS)

    Wu, Yihang; Song, Mengjie; Xin, Zhuang; Zhang, Xiaoqing; Zhang, Yu; Wang, Chunyu; Li, Suyi; Gu, Ning

    2011-06-01

    Dimercaptosuccinic acid (DMSA)-modified ultra-small particles of iron oxide (USPIO) were synthesized through a two-step process. In the first step, oleic acid (OA)-capped Fe3O4 nanoparticles (OA-USPIO) were synthesized by a novel oxidation coprecipitation method in an H2O/DMSO mixed system, where the DMSO simultaneously acts as an oxidant. In the second step, the OA was replaced by DMSA to obtain water-soluble nanoparticles. The as-synthesized nanoparticles were characterized by TEM, FTIR, TGA, VSM, DLS, EDS and UV-vis. The hydrodynamic sizes and peroxidase-like catalytic activity of the nanoparticles were investigated. The hydrodynamic sizes of the nanoparticles (around 24.4 nm) were well suited to developing stable nanoprobes for bio-detection. Kinetic studies were performed to quantitatively evaluate the catalytic ability of the peroxidase-like nanoparticles. The calculated kinetic parameters indicated that the DMSA-USPIO possess high catalytic activity. Based on this high activity, immunohistochemical experiments were established: using the low-cost nanoparticles as the enzyme instead of expensive HRP, Nimotuzumab was conjugated onto the surface of the nanoparticles to construct an ultra-small nanoprobe, which was employed to detect the epidermal growth factor receptor (EGFR) over-expressed on the membrane of esophageal cancer cells. The proper sizes of the probes and the result of membranous immunohistochemical staining suggest that the probes can serve as a useful diagnostic reagent for bio-detection.

  12. (I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.

    PubMed

    van Rijnsoever, Frank J

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
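
    For the "random chance" scenario, the simulation logic reduces to a coupon-collector-style experiment: keep sampling sources until every code has been observed at least once. A minimal sketch with a hypothetical population of 30 codes, each observed independently with a common mean probability:

    ```python
    import random

    random.seed(42)

    N_CODES, q = 30, 0.15   # number of codes; mean observation probability

    def samples_until_saturation():
        seen, n = set(), 0
        while len(seen) < N_CODES:      # saturation: every code seen once
            n += 1
            seen |= {c for c in range(N_CODES) if random.random() < q}
        return n

    runs = [samples_until_saturation() for _ in range(1000)]
    print("random chance: mean sample size =", sum(runs) / len(runs))
    ```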

  13. Adaptive interference cancel filter for evoked potential using high-order cumulants.

    PubMed

    Lin, Bor-Shyh; Lin, Bor-Shing; Chong, Fok-Ching; Lai, Feipei

    2004-01-01

    This paper presents evoked potential (EP) processing using an adaptive interference cancel (AIC) filter with second- and higher-order cumulants. In the conventional ensemble averaging method, experiments have to be conducted repeatedly to record the required data. Recently, the use of an AIC structure with second-order statistics for EP processing has proved more efficient than the traditional averaging method, but it is sensitive both to the reference signal statistics and to the choice of step size. Thus, we propose a higher-order-statistics-based AIC method to overcome these disadvantages. The method was tested on somatosensory EPs corrupted by EEG. A gradient-type algorithm is used in the AIC method. Comparisons of AIC filters based on second-, third-, and fourth-order statistics are also presented in this paper. We observed that the AIC filter with third-order statistics has better convergence performance for EP processing and is not sensitive to the selection of the step size and reference input.
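
    A sketch of the gradient-type AIC structure using the second-order-statistics (standard LMS) baseline that the paper compares against; the third-order-cumulant update itself is not reproduced here. The toy EP waveform, the reference channel, and the interference path are synthetic assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic single-trial recording: toy EP plus EEG-like interference,
    # with a reference channel carrying a filtered copy of the interference.
    n, fs = 1000, 1000.0
    t = np.arange(n) / fs
    ep = np.exp(-((t - 0.3) ** 2) / (2 * 0.02 ** 2))   # toy evoked potential
    ref = rng.normal(size=n)                           # reference input
    interference = (0.8 * np.roll(ref, 1) - 0.4 * np.roll(ref, 2)
                    + 0.2 * np.roll(ref, 3))           # causal FIR path
    primary = ep + interference

    # Gradient-type (LMS) update of the cancelling FIR filter: the filter
    # output estimates the interference and the error is the cleaned EP.
    L, mu = 8, 0.05
    w = np.zeros(L)
    out = np.zeros(n)
    for k in range(L, n):
        x = ref[k - L:k][::-1]          # ref[k-1] ... ref[k-L]
        out[k] = primary[k] - w @ x     # error = EP estimate
        w += mu * out[k] * x            # second-order-statistics update

    print("residual power:", np.mean((out[n // 2:] - ep[n // 2:]) ** 2))
    ```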

  14. Low Temperature Metal Free Growth of Graphene on Insulating Substrates by Plasma Assisted Chemical Vapor Deposition

    PubMed Central

    Muñoz, R.; Munuera, C.; Martínez, J. I.; Azpeitia, J.; Gómez-Aleixandre, C.; García-Hernández, M.

    2016-01-01

    Direct growth of graphene films on dielectric substrates (quartz and silica) is reported, by means of remote electron cyclotron resonance plasma assisted chemical vapor deposition (r-ECR-CVD) at low temperature (650°C). Using a two-step deposition process (nucleation and growth) in which the partial pressure of the gas precursors is changed at constant temperature, mostly monolayer continuous films with grain sizes up to 500 nm are grown, exhibiting transmittance larger than 92% and sheet resistance as low as 900 Ω·sq-1. The grain size and nucleation density of the resulting graphene sheets can be controlled by varying the deposition time and pressure. In addition, first-principles DFT-based calculations have been carried out in order to rationalize the oxygen reduction at the quartz surface observed experimentally. This method is easily scalable and avoids damaging and expensive transfer steps of graphene films, improving compatibility with current fabrication technologies. PMID:28070341

  15. Line roughness improvements on self-aligned quadruple patterning by wafer stress engineering

    NASA Astrophysics Data System (ADS)

    Liu, Eric; Ko, Akiteru; Biolsi, Peter; Chae, Soo Doo; Hsieh, Chia-Yun; Kagaya, Munehito; Lee, Choongman; Moriya, Tsuyoshi; Tsujikawa, Shimpei; Suzuki, Yusuke; Okubo, Kazuya; Imai, Kiyotaka

    2018-04-01

    In integrated circuit and memory devices, size shrinkage has been the most effective method to reduce production cost and enable the steady increase of the number of transistors per unit area over the past few decades. In order to reduce the die size and feature size, it is necessary to shrink the patterns formed in advanced node development. At the sub-10nm node, extreme ultraviolet lithography (EUV) and multi-patterning solutions based on 193nm immersion lithography are the two most common options to achieve the size requirement. In such small line and space patterns, line width roughness (LWR) and line edge roughness (LER) contribute a significant amount of process variation that impacts both physical and electrical performance. In this paper, we focus on optimizing the line roughness performance by using wafer stress engineering on a 30nm pitch line and space pattern. This pattern is generated by a self-aligned quadruple patterning (SAQP) technique for the potential application of fin formation. Our investigation starts by comparing film materials and stress levels in various processing steps and material selections in the SAQP integration scheme. From the cross-matrix comparison, we are able to determine the best film stack and stress combination to achieve the lowest line roughness while preserving pattern validity after fin etch. This stack is also used to study the step-by-step line roughness performance from SAQP to fin etch. Finally, we show a successful patterning of a 30nm pitch line and space SAQP scheme with 1nm line roughness performance.

  16. Design optimization and tolerance analysis of a spot-size converter for the taper-assisted vertical integration platform in InP.

    PubMed

    Tolstikhin, Valery; Saeidi, Shayan; Dolgaleva, Ksenia

    2018-05-01

    We report on the design optimization and tolerance analysis of a multistep lateral-taper spot-size converter based on indium phosphide (InP), performed using the Monte Carlo method. Being a natural fit to (and a key building block of) the regrowth-free taper-assisted vertical integration platform, such a spot-size converter enables efficient and displacement-tolerant fiber coupling to InP-based photonic integrated circuits at a wavelength of 1.31 μm. An exemplary four-step lateral-taper design is demonstrated, featuring 0.35 dB coupling loss at optimal alignment of a standard single-mode fiber, ≥7 μm 1-dB displacement tolerance in any direction in the facet plane, and great stability against manufacturing variances.

  17. How many steps/day are enough? For older adults and special populations

    PubMed Central

    2011-01-01

    Older adults and special populations (living with disability and/or chronic illness that may limit mobility and/or physical endurance) can benefit from practicing a more physically active lifestyle, typically by increasing ambulatory activity. Step counting devices (accelerometers and pedometers) offer an opportunity to monitor daily ambulatory activity; however, an appropriate translation of public health guidelines in terms of steps/day is unknown. Therefore, this review was conducted to translate public health recommendations in terms of steps/day. Normative data indicates that 1) healthy older adults average 2,000-9,000 steps/day, and 2) special populations average 1,200-8,800 steps/day. Pedometer-based interventions in older adults and special populations elicit a weighted increase of approximately 775 steps/day (or an effect size of 0.26) and 2,215 steps/day (or an effect size of 0.67), respectively. There is no evidence to inform a moderate intensity cadence (i.e., steps/minute) in older adults at this time. However, using the adult cadence of 100 steps/minute to demark the lower end of an absolutely-defined moderate intensity (i.e., 3 METs), and multiplying this by 30 minutes produces a reasonable heuristic (i.e., guiding) value of 3,000 steps. However, this cadence may be unattainable in some frail/diseased populations. Regardless, to truly translate public health guidelines, these steps should be taken over and above activities performed in the course of daily living, be of at least moderate intensity accumulated in minimally 10 minute bouts, and add up to at least 150 minutes over the week. Considering a daily background of 5,000 steps/day (which may actually be too high for some older adults and/or special populations), a computed translation approximates 8,000 steps on days that include a target of achieving 30 minutes of moderate-to-vigorous physical activity (MVPA), and approximately 7,100 steps/day if averaged over a week. Measured directly and including these background activities, the evidence suggests that 30 minutes of daily MVPA accumulated in addition to habitual daily activities in healthy older adults is equivalent to taking approximately 7,000-10,000 steps/day. Those living with disability and/or chronic illness (that limits mobility and/or physical endurance) display lower levels of background daily activity, and this will affect whole-day estimates of recommended physical activity. PMID:21798044
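
    The arithmetic behind the quoted heuristics, assuming the 30 minutes of MVPA are accumulated on 5 days per week (consistent with the 150 minutes/week guideline):

    ```python
    cadence = 100         # steps/min at the lower bound of moderate intensity
    mvpa_min = 30         # daily MVPA target, minutes
    background = 5000     # assumed habitual steps/day

    daily_target = background + cadence * mvpa_min        # 8,000 steps
    weekly_avg = (5 * daily_target + 2 * background) / 7  # MVPA on 5 days
    print(daily_target, round(weekly_avg))                # 8000, ~7143
    ```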

  18. Process for preparation of large-particle-size monodisperse latexes

    NASA Technical Reports Server (NTRS)

    Vanderhoff, J. W.; Micale, F. J.; El-Aasser, M. S.; Kornfeld, D. M. (Inventor)

    1981-01-01

    Monodisperse latexes having a particle size in the range of 2 to 40 microns are prepared by seeded emulsion polymerization in microgravity. A reaction mixture containing smaller monodisperse latex seed particles, predetermined amounts of monomer, emulsifier, initiator, inhibitor and water is placed in a microgravity environment, and polymerization is initiated by heating. The reaction is allowed to continue until the seed particles grow to a predetermined size, and the resulting enlarged particles are then recovered. A plurality of particle-growing steps can be used to reach larger sizes within the stated range, with enlarged particles from the previous steps being used as seed particles for the succeeding steps. Microgravity enables preparation of particles in the stated size range by avoiding the gravity-related problems of creaming and settling, and flocculation induced by mechanical shear, that have precluded their preparation in a normal gravity environment.

  19. A Voxel-Based Filtering Algorithm for Mobile LiDAR Data

    NASA Astrophysics Data System (ADS)

    Qin, H.; Guan, G.; Yu, Y.; Zhong, L.

    2018-04-01

    This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, the mobile LiDAR points are first partitioned, in the xy-plane, into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in terms of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
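
    A much-simplified sketch of the first step: 2-D blocking, vertical voxelization, and an upward-growing pass from the lowest occupied voxel of each block. The thresholds are single illustrative values, and the curvature-based refinement of the second step is omitted.

    ```python
    import numpy as np

    def rough_terrain_filter(points, cell=1.0, voxel=0.25, dz=0.3):
        """Label points as terrain by growing upward from the lowest
        vertical voxel of each 2-D cell while the gap stays below dz."""
        keep = np.zeros(len(points), dtype=bool)
        cells = np.floor(points[:, :2] / cell).astype(int)
        for key in np.unique(cells, axis=0):
            idx = np.where((cells == key).all(axis=1))[0]
            levels = np.floor(points[idx, 2] / voxel).astype(int)
            top = levels.min()
            for lv in np.unique(levels):        # upward growing
                if (lv - top) * voxel <= dz:
                    top = lv
                else:
                    break
            keep[idx[levels <= top]] = True
        return keep

    pts = np.random.default_rng(0).uniform(0, 10, (5000, 3))  # toy cloud
    pts[:, 2] *= 0.3                                          # flatten heights
    mask = rough_terrain_filter(pts)
    print(mask.sum(), "terrain points of", len(pts))
    ```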

  20. High efficient perovskite solar cell material CH3NH3PbI3: Synthesis of films and their characterization

    NASA Astrophysics Data System (ADS)

    Bera, Amrita Mandal; Wargulski, Dan Ralf; Unold, Thomas

    2018-04-01

    Hybrid organometal perovskites have emerged as promising solar cell materials and have exhibited solar cell efficiencies of more than 20%. Thin films of the methylammonium lead iodide CH3NH3PbI3 perovskite were synthesized by two different methods (one-step and two-step), and their morphological properties were studied by scanning electron microscopy and optical microscope imaging. The morphology of the perovskite layer is one of the most important parameters affecting solar cell efficiency. The morphology of the films revealed that the two-step method provides better surface coverage than the one-step method. However, the grain sizes were smaller in the case of the two-step method. Films prepared by the two-step method on different substrates revealed that the grain size also depends on the substrate: the grain size increased from the glass substrate to FTO with a TiO2 blocking layer to FTO, without any change in the surface coverage area. The present study reveals that improved film quality can be obtained by the two-step method through optimization of the synthesis process.

  1. Numerical Simulation of Nanostructure Growth

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Bose, Deepak; Govindan, T. R.; Meyyappan, M.

    2004-01-01

    Nanoscale structures, such as nanowires and carbon nanotubes (CNTs), are often grown in gaseous or plasma environments. Successful growth of these structures is defined by achieving a specified crystallinity or chirality, size or diameter, alignment, etc., which in turn depend on gas mixture ratios, pressure, flow rate, substrate temperature, and other operating conditions. To date, there has not been a rigorous growth model that addresses the specific concerns of crystalline nanowire growth while demonstrating the correct trends of the processing conditions on growth rates. Most crystal growth models are based on the Burton, Cabrera, and Frank (BCF) method, where adatoms are incorporated into a growing crystal at surface steps or spirals. When the supersaturation of the vapor is high, islands nucleate to form steps, and these steps subsequently spread (grow). The overall bulk growth rate is determined by solving for the evolving motion of the steps. Our approach is to use a phase field model to simulate the growth of finite-sized nanowire crystals, linking the free energy equation with the diffusion equation of the adatoms. The phase field method solves for an order parameter that defines the evolving steps in a concentration field. This eliminates the need for explicit front tracking/location, or complicated shadowing routines, both of which can be computationally expensive, particularly in higher dimensions. We will present results demonstrating the effect of process conditions, such as substrate temperature, vapor supersaturation, etc., on the evolving morphologies and overall growth rates of the nanostructures.

  2. On the stability of projection methods for the incompressible Navier-Stokes equations based on high-order discontinuous Galerkin discretizations

    NASA Astrophysics Data System (ADS)

    Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin

    2017-12-01

    The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.

  3. Fabrication of Size-Tunable Metallic Nanoparticles Using Plasmid DNA as a Biomolecular Reactor

    PubMed Central

    Samson, Jacopo; Piscopo, Irene; Yampolski, Alex; Nahirney, Patrick; Parpas, Andrea; Aggarwal, Amit; Saleh, Raihan; Drain, Charles Michael

    2011-01-01

    Plasmid DNA can be used as a template to yield gold, palladium, silver, and chromium nanoparticles of different sizes based on variations in incubation time at 70 °C with gold phosphine complexes, with the acetates of silver or palladium, or chromium acetylacetonate. The employment of mild synthetic conditions, minimal procedural steps, and aqueous solvents makes this method environmentally greener and ensures general feasibility. The use of plasmids exploits the capabilities of the biotechnology industry as a source of nanoreactor materials. PMID:28348280

  4. High-performance formamidinium-based perovskite solar cells via microstructure-mediated δ-to-α phase transformation

    DOE PAGES

    Liu, Tanghao; Zong, Yingxia; Zhou, Yuanyuan; ...

    2017-03-14

    The δ → α phase transformation is a crucial step in the solution-growth process of formamidinium-based lead triiodide (FAPbI3) hybrid organic–inorganic perovskite (HOIP) thin films for perovskite solar cells (PSCs). Because the addition of cesium (Cs) stabilizes the α phase of FAPbI3-based HOIPs, here our research focuses on FAPbI3(Cs) thin films. We show that having a large grain size in the δ-FAPbI3(Cs) non-perovskite intermediate films is essential for the growth of high-quality α-FAPbI3(Cs) HOIP thin films. Here grain coarsening and phase transformation occur simultaneously during the thermal annealing step. A large starting grain size in the δ-FAPbI3(Cs) thin films suppresses grain coarsening, precluding the formation of voids at the final α-FAPbI3(Cs)–substrate interfaces. PSCs based on the interface void-free α-FAPbI3(Cs) HOIP thin films are much more efficient and stable in the ambient atmosphere. This interesting finding inspired us to develop a simple room-temperature aging method for preparing coarse-grained δ-FAPbI3(Cs) intermediate films, which are subsequently converted to coarse-grained, high-quality α-FAPbI3(Cs) HOIP thin films. As a result, this study highlights the importance of microstructure mediation in the processing of formamidinium-based PSCs.

  5. [Influence on microstructure of dental zirconia ceramics prepared by two-step sintering].

    PubMed

    Jian, Chao; Li, Ning; Wu, Zhikai; Teng, Jing; Yan, Jiazhen

    2013-10-01

    To investigate the microstructure of dental zirconia ceramics prepared by two-step sintering, nanostructured zirconia powder was dry compacted, cold isostatic pressed, and pre-sintered. The pre-sintered discs were cut and processed into samples. Conventional sintering, single-step sintering, and two-step sintering were carried out, and the density and grain size of the samples were measured. Afterward, the ranges of T1 and T2 for two-step sintering were determined. The effects of the different routes, comprising two-step sintering and conventional sintering, on the microstructure were discussed. The influence of T1 and/or T2 on density and grain size was analyzed as well. The range of T1 was between 1450 degrees C and 1550 degrees C, and the range of T2 was between 1250 degrees C and 1350 degrees C. Compared with conventional sintering, a finer microstructure with higher density and smaller grains could be obtained by two-step sintering. Grain growth was dependent on T1, whereas density was not much related to T1. However, density was dependent on T2, while grain size was minimally influenced. Two-step sintering could ensure a sintered body with high density and small grains, which is good for optimizing the microstructure of dental zirconia ceramics.

  6. Flexible metasurface black nickel with stepped nanopillars.

    PubMed

    Qian, Qinyu; Yan, Ying; Wang, Chinhua

    2018-03-15

    We report on a monolithic, all-metallic, and flexible metasurface perfect absorber [black nickel (Ni)] based on coupled Mie resonances originating from vertically stepped Ni nanopillars homoepitaxially grown on an Ni substrate. Coupled Mie resonances are generated from Ni nanopillars of different sizes, such that the Mie resonances of the two stepped sets of Ni nanopillars occur complementarily at different wavelengths to realize polarization-independent broadband absorption over the entire visible wavelength band (400-760 nm) within an ultra-thin surface layer only 162 nm thick in total. Two-step double-beam interference lithography and electroplating are utilized to fabricate the proposed monolithic metasurface, which can be arbitrarily bent and pressed. A black nickel metasurface is experimentally demonstrated in which an average polarization-independent absorption of 0.972 (0.961, experiment) in the entire visible band is achieved and remains 0.838 (0.815, experiment) when the incident angle increases to 70°.

  7. NMR diffusion simulation based on conditional random walk.

    PubMed

    Gudbjartsson, H; Patz, S

    1995-01-01

    The authors introduce a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference (FD) method, the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, whereas in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
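
    For contrast, the conventional MC baseline that the authors improve upon can be sketched directly: random walkers accumulate phase in a constant gradient, and the attenuation is checked against the analytic free-diffusion result E(t) = exp(-γ²g²Dt³/3). Unlike the authors' method, this estimate does depend on the chosen time step.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    gamma = 2.675e8     # rad/s/T (proton gyromagnetic ratio)
    g = 0.01            # T/m, constant gradient
    D = 2.0e-9          # m^2/s, free diffusion coefficient
    T, n_steps, n_spins = 0.02, 1000, 10_000
    dt = T / n_steps

    x = np.zeros(n_spins)
    phase = np.zeros(n_spins)
    for _ in range(n_steps):
        x += rng.normal(0.0, np.sqrt(2 * D * dt), n_spins)  # Brownian step
        phase += gamma * g * x * dt                         # dephasing

    E_mc = abs(np.mean(np.exp(1j * phase)))
    E_th = np.exp(-(gamma * g) ** 2 * D * T ** 3 / 3)
    print(f"Monte Carlo: {E_mc:.4f}  analytic: {E_th:.4f}")
    ```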

  8. Real-time feedback control of twin-screw wet granulation based on image analysis.

    PubMed

    Madarász, Lajos; Nagy, Zsombor Kristóf; Hoffer, István; Szabó, Barnabás; Csontos, István; Pataki, Hajnalka; Démuth, Balázs; Szabó, Bence; Csorba, Kristóf; Marosi, György

    2018-06-04

    The present paper reports the first dynamic image-analysis-based feedback control of a continuous twin-screw wet granulation process. Granulation of a blend of lactose and starch was selected as the model process. The size and size distribution of the obtained particles were successfully monitored by a process camera coupled with image analysis software developed by the authors. Validation of the developed system showed that the particle size analysis tool can determine the size of the granules with an error of less than 5 µm. The next step was to implement real-time feedback control of the process by controlling the liquid feeding rate of the pump through a PC, based on the particle size results determined in real time. After the establishment of the feedback control, the system could correct for different real-life disturbances, creating a Process Analytically Controlled Technology (PACT) that guarantees real-time monitoring and control of granule quality. In the event of changes or adverse trends in the particle size, the system can automatically compensate for the effect of disturbances, ensuring proper product quality. This kind of quality assurance approach is especially important in the case of continuous pharmaceutical technologies.
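
    A toy closed loop in the spirit of the reported system: a hypothetical process model maps the liquid feed rate to a measured granule size, and a velocity-form PI update drives the size back to the set point. The gains, set point, and process model are illustrative assumptions, not the authors' values.

    ```python
    import random

    random.seed(1)

    setpoint, rate = 400.0, 10.0        # target size (um); feed rate (g/min)
    Kp, Ki, prev_err = 0.02, 0.005, 0.0

    def measured_size(feed_rate):
        """Hypothetical process: size grows with feed rate, plus camera noise."""
        return 150.0 + 24.0 * feed_rate + random.gauss(0.0, 5.0)

    for step in range(25):
        size = measured_size(rate)      # from the image-analysis software
        err = setpoint - size
        rate += Kp * (err - prev_err) + Ki * err   # velocity-form PI update
        prev_err = err
        if step % 5 == 0:
            print(f"step {step:2d}: size = {size:6.1f} um, rate = {rate:5.2f}")
    ```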

  9. Detection limits for nanoparticles in solution with classical turbidity spectra

    NASA Astrophysics Data System (ADS)

    Le Blevennec, G.

    2013-09-01

    Detection of nanoparticles in solution is required to manage safety and environmental problems. The spectral transmission turbidity method has been known for a long time. It is derived from Mie theory and can be applied to any number of spheres, randomly distributed and separated by large distances compared to the wavelength. Here, we describe a method for determining the size, distribution, and concentration of nanoparticles in solution using UV-Vis transmission measurements. The method combines Mie and Beer-Lambert computations integrated in a best-fit approximation. In a first step, a validation of the approach is completed on a silver nanoparticle solution. The results are verified against transmission electron microscopy measurements for the size distribution and inductively coupled plasma mass spectrometry for the concentration. In view of the good agreement obtained, a second step of the work focuses on how to choose the concentration so as to be most accurate on the size distribution. These efficient conditions are determined by simple computation. As we are dealing with nanoparticles, one of the key points is to know what size limits are reachable with this kind of approach based on classical electromagnetism. Taking into account the accuracy limit of the transmission spectrometer, we determine, for several types of materials (metals, dielectrics, semiconductors), the particle size limit detectable by such a turbidity method. These surprising results are situated at the quantum physics frontier.
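
    A sketch of the forward model being fitted: Beer-Lambert attenuation with the extinction efficiency taken from van de Hulst's anomalous-diffraction closed form, used here instead of a full Mie computation only to keep the example self-contained (the approximation is crude for particles much smaller than the wavelength, where the full Mie theory used in the paper is required).

    ```python
    import numpy as np

    def q_ext(radius, wavelength, n_rel):
        """Anomalous-diffraction approximation to the extinction efficiency."""
        rho = 4 * np.pi * radius * (n_rel - 1) / wavelength  # phase shift
        return 2 - (4 / rho) * np.sin(rho) + (4 / rho**2) * (1 - np.cos(rho))

    def transmission(wl, radius, number_density, path=0.01, n_rel=1.10):
        sigma = q_ext(radius, wl, n_rel) * np.pi * radius**2
        return np.exp(-number_density * sigma * path)        # Beer-Lambert

    wl = np.linspace(300e-9, 800e-9, 6)                      # wavelengths (m)
    T = transmission(wl, radius=50e-9, number_density=5e16)  # 50 nm spheres
    for w, t in zip(wl, T):
        print(f"{w * 1e9:5.0f} nm  T = {t:.3f}")
    ```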

  10. Evolution of size distribution, optical properties, and structure of Si nanoparticles obtained by laser-assisted fragmentation

    NASA Astrophysics Data System (ADS)

    Plautz, G. L.; Graff, I. L.; Schreiner, W. H.; Bezerra, A. G.

    2017-05-01

    We investigate the physical properties of Si-based nanoparticles produced by an environment-friendly three-step method relying on: (1) laser ablation of a solid target immersed in water, (2) centrifugation and separation, and (3) laser-assisted fragmentation. The evolution of size distribution is followed after each step by means of dynamic light scattering (DLS) measurements and crosschecked by transmission electron microscopy (TEM). The as-ablated colloidal suspension of Si nanoparticles presents a large size distribution, ranging from a few to hundreds of nanometers. Centrifugation drives the very large particles to the bottom, eliminating them from the remaining suspension. Subsequent irradiation of the height-separated suspensions with a second high-fluence (40 mJ/pulse) Nd:YAG laser operating at the fourth harmonic (λ = 266 nm) leads to size reduction, and ultra-small nanoparticles are obtainable depending on the starting size. Si nanoparticles as small as 1.5 nm with low dispersion (± 0.7 nm) are observed for the uppermost part after irradiation. These nanoparticles present a strong blue photoluminescence that remains stable for at least 8 weeks. Optical absorption (UV-Vis) measurements demonstrate an optical gap widening as a consequence of the size decrease. Raman spectra present features related to pure silicon and silicon oxides for the irradiated sample. Interestingly, a defect band associated with silicon oxide is also identified, indicating the possible formation of defect states, which, in turn, supports the idea that the blue photoluminescence has its origin in defects.

  11. Double emulsion formation through hierarchical flow-focusing microchannel

    NASA Astrophysics Data System (ADS)

    Azarmanesh, Milad; Farhadi, Mousa; Azizian, Pooya

    2016-03-01

    A microfluidic device is presented for creating double emulsions, controlling their sizes, and manipulating encapsulation processes. As a result of the interaction of three immiscible liquids under the dripping instability, double emulsions can be produced elegantly. The effects of three dimensionless numbers are investigated: the Weber number of the inner phase (Wein), the Capillary number of the inner droplet (Cain), and the Capillary number of the outer droplet (Caout). They affect the formation process, the inner and outer droplet sizes, and the separation frequency. Direct numerical simulation of the governing equations was performed using the volume-of-fluid method and an adaptive mesh refinement technique. Two kinds of double emulsion formation, two-step and one-step, were simulated, in which the thickness of the sheath of the double emulsions can be adjusted. Altering each dimensionless number changes the detachment location, the outer droplet size, and the droplet formation period. Moreover, the decussate double-emulsion/empty-droplet regime is observed at low Wein. This phenomenon can be obtained by adjusting the Wein, at which the maximum size of the sheath is found. The results also show that Cain has a significant influence on the outer droplet size in the two-step process, while Caout considerably affects the sheath in the one-step formation.
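
    The dimensionless groups named above, evaluated for illustrative property values ("Wein" in the abstract is the inner-phase Weber number We_in, and similarly for the two capillary numbers):

    ```python
    rho_in = 1000.0   # kg/m^3, inner-phase density (water-like, assumed)
    mu_mid = 0.05     # Pa*s, middle-phase viscosity (oil-like, assumed)
    sigma = 5e-3      # N/m, interfacial tension
    u = 0.1           # m/s, characteristic velocity
    d = 100e-6        # m, channel length scale

    we_in = rho_in * u**2 * d / sigma   # inertia vs interfacial tension
    ca = mu_mid * u / sigma             # viscous stress vs interfacial tension
    print(f"We_in = {we_in:.3f}, Ca = {ca:.3f}")
    ```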

  12. Estimating regional centile curves from mixed data sources and countries.

    PubMed

    van Buuren, Stef; Hayes, Daniel J; Stasinopoulos, D Mikis; Rigby, Robert A; ter Kuile, Feiko O; Terlouw, Dianne J

    2009-10-15

    Regional or national growth distributions can provide vital information on the health status of populations. In most resource-poor countries, however, the required anthropometric data from purpose-designed growth surveys are not readily available. We propose a practical method for estimating regional (multi-country) age-conditional weight distributions based on existing survey data from different countries. We developed a two-step method by which one is able to model data with widely different age ranges and sample sizes. The method produces references both at the country level and at the regional (multi-country) level. The first step models country-specific centile curves by Box-Cox t and Box-Cox power exponential distributions implemented in the generalized additive model for location, scale and shape (GAMLSS) through a common model. Individual countries may vary in location and spread. The second step defines the regional reference from a finite mixture of the country distributions, weighted by population size. To demonstrate the method we fitted the weight-for-age distribution of 12 countries in South East Asia and the Western Pacific, based on 273,270 observations. We modeled both the raw body weight and the corresponding Z score, and obtained a good fit between the final models and the original data for both solutions. We briefly discuss an application of the generated regional references to obtain appropriate, region-specific, age-based dosing regimens of drugs used in the tropics. The method is an affordable and efficient strategy for estimating regional growth distributions where the standard costly alternatives are not an option.

  13. Step-by-step magic state encoding for efficient fault-tolerant quantum computation

    PubMed Central

    Goto, Hayato

    2014-01-01

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387

  14. Step-by-step magic state encoding for efficient fault-tolerant quantum computation.

    PubMed

    Goto, Hayato

    2014-12-16

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.

  15. Double-plasma enhanced carbon shield for spatial/interfacial controlled electrodes in lithium ion batteries via micro-sized silicon from wafer waste

    NASA Astrophysics Data System (ADS)

    Chen, Bing-Hong; Chuang, Shang-I.; Duh, Jenq-Gong

    2016-11-01

    Through spatial and interfacial control, micro-sized silicon waste from wafer-slurry recycling can greatly increase its retention potential as a green resource for silicon-based anodes in lithium ion batteries. Through step-by-step spatial and interfacial control of the electrode, the cyclability of the recycled waste is raised from its originally poor retention to a promising level. In the spatial control stage, electrode stabilizers consisting of active, inactive, and conductive additives were mixed into the slurries to maintain the architecture and conductivity of the electrode. In addition, a fusion electrode modification for interfacial control combines an electrolyte additive with a double-plasma enhanced carbon shield (D-PECS) technique to convert the chemical bond states and to alter the formation of the solid electrolyte interphases (SEIs) in the first cycle. Depth profiles of the chemical composition from the external into the internal electrode illustrate that the fusion electrode modification not only forms a boundary that balances the interface between the internal and external electrode but also stabilizes SEI formation and alleviates the expansion of the micro-sized electrode. Through these effective approaches, the performance of the micro-sized Si waste electrode can be boosted from serious capacity degradation to promising retention (200 cycles, 1100 mAh/g), better meeting the requirements for facile and cost-effective industrial production.

  16. Description of the Baudet Surgical Technique and Introduction of a Systematic Method for Training Surgeons to Perform Male-to-Female Sex Reassignment Surgery.

    PubMed

    Leclère, Franck Marie; Casoli, Vincent; Baudet, Jacques; Weigert, Romain

    2015-12-01

    Male-to-female sex reassignment surgery involves three main procedures, namely clitoroplasty, new urethral meatoplasty and vaginopoiesis. Herein we describe the key steps of our surgical technique. Male-to-female sex reassignment surgery includes the following 14 key steps, which are documented in this article: (1) patient installation and draping, (2) urethral catheter placement, (3) scrotal incision and vaginal cavity formation, (4) bilateral orchidectomy, (5) penile skin inversion, (6) dismembering of the urethra from the corpora, (7) neoclitoris formation, (8) neoclitoris refinement, (9) neovaginal phallic cylinder formation, (10) fixation of the neoclitoris, (11) neovaginal phallic cylinder insertion, (12) contouring of the labia majora and positioning of the neoclitoris and urethra, (13) tie-over dressing and (14) compression dressing. The size and position of the neoclitoris, position of the urethra, adequacy of the neovaginal cavity, position and tension on the triangular flap, size of the neo labia minora, size of the labia majora, symmetry and ease of intromission are important factors when considering the immediate results of the surgery. We present our learning process of graduated responsibility for optimisation of these results. We describe our postoperative care and the possible complications. Herein, we have described the 14 steps of the Baudet technique for male-to-female sex reassignment surgery, which include clitoroplasty, new urethral meatoplasty and vaginopoiesis. The review of each key stage of the procedure represents the first step of our global teaching process.

  17. Remote sensing of environmental particulate pollutants - Optical methods for determinations of size distribution and complex refractive index

    NASA Technical Reports Server (NTRS)

    Fymat, A. L.

    1978-01-01

    A unifying approach, based on a generalization of Pearson's differential equation of statistical theory, is proposed for both the representation of particulate size distribution and the interpretation of radiometric measurements in terms of this parameter. A single-parameter gamma-type distribution is introduced, and it is shown that inversion can only provide the dimensionless parameter, r/ab (where r = particle radius, a = effective radius, b = effective variance), at least when the distribution vanishes at both ends. The basic inversion problem in reconstructing the particle size distribution is analyzed, and the existing methods are reviewed (with emphasis on their capabilities) and classified. A two-step strategy is proposed for simultaneously determining the complex refractive index and reconstructing the size distribution of atmospheric particulates.

  18. Study of micro piezoelectric vibration generator with added mass and capacitance suitable for broadband vibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Qing, E-mail: hqng@163.com; Mao, Xinhua, E-mail: 30400414@qq.com; Chu, Dongliang, E-mail: 569256386@qq.com

    This study proposes an optimized frequency adjustment method for a micro-cantilever-beam-based piezoelectric vibration generator, based on a combination of added mass and added capacitance. The central concept of the proposed method is that the frequency adjustment process is divided into two steps: the first is a rough adjustment step that changes the size of the mass added at the end of the cantilever to adjust the frequency in a large-scale and discontinuous manner; the second step is a continuous but short-range frequency adjustment via the adjustable added capacitance. Experimental results show that when the initial natural frequency of a micro piezoelectric vibration generator is 69.8 Hz, this natural frequency can be adjusted to any value in the range from 54.2 Hz to 42.1 Hz using the combination of the added mass and the capacitance. This method simply and effectively matches a piezoelectric vibration generator's natural frequency to the vibration source frequency.

  19. Visual saliency-based fast intracoding algorithm for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin

    2017-01-01

    Intraprediction has been significantly improved in high efficiency video coding (HEVC) over H.264/AVC, with a quad-tree-based coding unit (CU) structure ranging in size from 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode pruning algorithm is presented to selectively check the potential modes and effectively reduce the computational complexity. Experimental results show that our proposed fast method reduces the encoding time of the current HM reference software to about 57%, with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm has reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.

  20. Evaluation of several two-step scoring functions based on linear interaction energy, effective ligand size, and empirical pair potentials for prediction of protein-ligand binding geometry and free energy.

    PubMed

    Rahaman, Obaidur; Estrada, Trilce P; Doren, Douglas J; Taufer, Michela; Brooks, Charles L; Armen, Roger S

    2011-09-26

    The performances of several two-step scoring approaches for molecular docking were assessed for their ability to predict binding geometries and free energies. Two new scoring functions designed for "step 2 discrimination" were proposed and compared to our CHARMM implementation of the linear interaction energy (LIE) approach using the Generalized-Born with Molecular Volume (GBMV) implicit solvation model. A scoring function S1 was proposed by considering only "interacting" ligand atoms as the "effective size" of the ligand and extended to an empirical regression-based pair potential S2. The S1 and S2 scoring schemes were trained and 5-fold cross-validated on a diverse set of 259 protein-ligand complexes from the Ligand Protein Database (LPDB). The regression-based parameters for S1 and S2 also demonstrated reasonable transferability in the CSARdock 2010 benchmark using a new data set (NRC HiQ) of diverse protein-ligand complexes. The ability of the scoring functions to accurately predict ligand geometry was evaluated by calculating the discriminative power (DP) of the scoring functions to identify native poses. The parameters for the LIE scoring function with the optimal discriminative power (DP) for geometry (step 1 discrimination) were found to be very similar to the best-fit parameters for binding free energy over a large number of protein-ligand complexes (step 2 discrimination). Reasonable performance of the scoring functions in enrichment of active compounds in four different protein target classes established that the parameters for S1 and S2 provided reasonable accuracy and transferability. Additional analysis was performed to definitively separate scoring function performance from molecular weight effects. This analysis included the prediction of ligand binding efficiencies for a subset of the CSARdock NRC HiQ data set where the number of ligand heavy atoms ranged from 17 to 35. This range of ligand heavy atoms is where improved accuracy of predicted ligand efficiencies is most relevant to real-world drug design efforts.
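
    The step-2 LIE fit reduces to an ordinary least-squares regression of binding free energy against averaged interaction-energy components. A sketch on synthetic energies (the paper fits GBMV-based energies over the LPDB set; all numbers below are made up):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical averaged interaction energies (kcal/mol) and "observed"
    # binding free energies generated from assumed coefficients plus noise.
    n = 200
    E_vdw = rng.normal(-30.0, 8.0, n)
    E_elec = rng.normal(-12.0, 5.0, n)
    dG = 0.18 * E_vdw + 0.35 * E_elec - 1.5 + rng.normal(0.0, 0.8, n)

    # Fit dG ~ alpha*<E_vdw> + beta*<E_elec> + gamma by least squares
    A = np.column_stack([E_vdw, E_elec, np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, dG, rcond=None)
    rmse = np.sqrt(np.mean((A @ coef - dG) ** 2))
    print(f"alpha={coef[0]:.3f} beta={coef[1]:.3f} "
          f"gamma={coef[2]:.3f} RMSE={rmse:.2f}")
    ```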

  1. Thermodynamics, kinetics, and catalytic effect of dehydrogenation from MgH2 stepped surfaces and nanocluster: a DFT study

    NASA Astrophysics Data System (ADS)

    Reich, Jason; Wang, Linlin; Johnson, Duane

    2013-03-01

    We detail the results of a density functional theory (DFT) based study of hydrogen desorption, including thermodynamics and kinetics with(out) catalytic dopants, on stepped (110) rutile and nanocluster MgH2. We investigate competing configurations (optimal surface and nanoparticle configurations) using simulated annealing, with additional converged results at 0 K, as needed for finding the low-energy doped MgH2 nanostructures. Thermodynamics of hydrogen desorption from unique dopant sites will be shown, as well as activation energies obtained using the nudged elastic band algorithm. To compare to experiment, both stepped structures and nanoclusters are required to understand and predict the effects of ball milling. We demonstrate how these model systems relate to the intermediate-sized structures typically seen in ball-milling experiments.

  2. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JND models were built by adding white Gaussian noise or specific signal patterns into the original images, which is not appropriate for finding JND thresholds because the distortion involves an energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, this is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing.

  3. An improved VSS NLMS algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan

    2017-08-01

    In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared to other traditional adaptive filtering algorithms, but there is a trade-off between convergence speed and steady-state error that limits the performance of the NLMS algorithm. We therefore propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily tuned parameters, and it effectively resolves the trade-off in NLMS. The simulation results show that the proposed algorithm simultaneously achieves good tracking ability, a fast convergence rate, and low steady-state error.
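
    The abstract does not give the paper's exact step-size formula, so the sketch below pairs a standard NLMS update with a placeholder rule in which the step size grows with the current error magnitude and decays with the iteration count, just to illustrate the mechanism.

        import numpy as np

        def vss_nlms(x, d, num_taps=16, mu_max=1.0, mu_min=0.05, eps=1e-8):
            """NLMS with a variable step size that shrinks as the error
            decreases and as iterations accumulate. The shrink rule here
            is a placeholder; the paper's exact formula is not given in
            the abstract."""
            w = np.zeros(num_taps)
            y = np.zeros(len(x))
            e = np.zeros(len(x))
            for n in range(num_taps, len(x)):
                u = x[n - num_taps:n][::-1]         # most recent samples first
                y[n] = w @ u
                e[n] = d[n] - y[n]
                # Step size scales with |e| and decays with iteration count.
                mu = mu_min + (mu_max - mu_min) * np.tanh(abs(e[n])) / (1 + 1e-4 * n)
                w += mu * e[n] * u / (u @ u + eps)  # normalized update
            return w, e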

  4. Seismic Travel Time Tomography in Modeling Low Velocity Anomalies between the Boreholes

    NASA Astrophysics Data System (ADS)

    Octova, A.; Sule, R.

    2018-04-01

    Travel time cross-hole seismic tomography is applied to describe the structure of the subsurface. The sources are placed in one borehole and the receivers in the others. The first-arrival travel time recorded by each receiver is used as the input data for the seismic tomography method. This research is divided into three steps. The first step is reconstructing a synthetic model based on field parameters, with configurations of 24 and 45 receivers. The second step is applying the inversion process to the field data, which consist of five borehole pairs. The last step is testing the quality of the tomogram with a resolution test. Data processing using the FAST software produces an explicit shape that resembles the initial 45-receiver synthetic model reconstruction. The tomographic processing of the field data indicates cavities in several places between the boreholes. Cavities are identified on BH2A-BH1, BH4A-BH2A and BH4A-BH5, with elongated and rounded structures. In checkerboard resolution tests, anomalies can still be identified down to a size of 2 m x 2 m. The travel time cross-hole seismic tomography analysis proves that this method is very good at describing subsurface structures and boundary layers. The size and position of anomalies can be recognized and interpreted easily.

  5. Step-height standards based on the rapid formation of monolayer steps on the surface of layered crystals

    NASA Astrophysics Data System (ADS)

    Komonov, A. I.; Prinz, V. Ya.; Seleznev, V. A.; Kokh, K. A.; Shlegel, V. N.

    2017-07-01

    Metrology is essential for nanotechnology, especially for structures and devices with feature sizes going down to the nanometer scale. Scanning probe microscopes (SPMs) permit measurement of nanometer- and subnanometer-scale objects. The accuracy of size measurements performed using SPMs is largely defined by the accuracy of the calibration standards used. In the present publication, we demonstrate that monolayer step-height standards (∼1 and ∼0.6 nm) can be easily prepared by cleaving Bi2Se3 and ZnWO4 layered single crystals. It is shown that the conducting surface of Bi2Se3 crystals offers a height standard appropriate for calibrating STMs and for testing conductive SPM probes. Our AFM study of the morphology of freshly cleaved (0001) Bi2Se3 surfaces proved that such surfaces remain atomically smooth for a period of at least half a year. The (010) surfaces of ZnWO4 crystals remained atomically smooth for one day, but two days later an additional nanorelief of amplitude ∼0.3 nm appeared on those surfaces. This relief, however, did not grow further in height and did not hamper the calibration. The simplicity and speed with which these step-height standards can be fabricated, together with their high stability, make them accessible to the large and steadily growing number of users involved in 3D printing activities.

  6. Automatic extraction of blocks from 3D point clouds of fractured rock

    NASA Astrophysics Data System (ADS)

    Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen

    2017-12-01

    This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved RANSAC shape detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved flood-fill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
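
    Step 2 of the pipeline reduces to elementary plane geometry: the intersection line of two discontinuity planes. A minimal sketch, assuming each discontinuity is stored as a unit normal n and offset d with n·p = d (a representation chosen here for illustration):

        import numpy as np

        def plane_intersection(n1, d1, n2, d2):
            """Intersection line of planes n1.p = d1 and n2.p = d2.
            Returns (point_on_line, direction) or None if near-parallel."""
            direction = np.cross(n1, n2)
            if np.linalg.norm(direction) < 1e-9:
                return None  # parallel or coincident discontinuities
            # Solve for a point lying on both planes; the third constraint
            # pins the component along the line direction to zero.
            A = np.array([n1, n2, direction])
            b = np.array([d1, d2, 0.0])
            point = np.linalg.solve(A, b)
            return point, direction / np.linalg.norm(direction)

        # Two orthogonal joint planes intersect along the z axis:
        print(plane_intersection(np.array([1.0, 0, 0]), 0.0,
                                 np.array([0, 1.0, 0]), 0.0))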

  7. A diffusive information preservation method for small Knudsen number flows

    NASA Astrophysics Data System (ADS)

    Fei, Fei; Fan, Jing

    2013-06-01

    The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint and to obtain the flow velocity and temperature by sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10^-3-10^-4 have been investigated. It is shown that the IP calculations are not only accurate but also efficient, because they permit a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.

  8. Analysis Techniques for Microwave Dosimetric Data.

    DTIC Science & Technology

    1985-10-01

    [Fragment of a Fortran source listing; the recoverable content indicates that the program reads the starting frequency, the step size, and the number of steps in the frequency list, then calls FILE2().]

  9. Testing electroexplosive devices by programmed pulsing techniques

    NASA Technical Reports Server (NTRS)

    Rosenthal, L. A.; Menichelli, V. J.

    1976-01-01

    A novel method for testing electroexplosive devices is proposed wherein capacitor discharge pulses, with increasing energy in a step-wise fashion, are delivered to the device under test. The size of the energy increment can be programmed so that firing takes place after many, or after only a few, steps. The testing cycle is automatically terminated upon firing. An energy-firing contour relating the energy required to the programmed step size describes the single-pulse firing energy and the possible sensitization or desensitization of the explosive device.
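
    A hypothetical rendering of the programmed pulsing cycle described above; the firing threshold, starting energy, and increment are made-up stand-ins for the device under test and the program settings:

        def programmed_pulse_test(fires_at_joules, start=0.1, step=0.05, max_pulses=200):
            """Deliver capacitor-discharge pulses of step-wise increasing
            energy until the device fires; returns (firing_energy, n_pulses).
            'fires_at_joules' stands in for the real device threshold."""
            energy = start
            for n in range(1, max_pulses + 1):
                if energy >= fires_at_joules:   # device fires: stop the cycle
                    return round(energy, 3), n
                energy += step                  # programmed energy increment
            return None, max_pulses             # no fire within the program

        # A small step size means firing after many pulses; a large one, few:
        print(programmed_pulse_test(1.0, step=0.05))
        print(programmed_pulse_test(1.0, step=0.5))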

  10. Influence of Age, Maturity, and Body Size on the Spatiotemporal Determinants of Maximal Sprint Speed in Boys.

    PubMed

    Meyers, Robert W; Oliver, Jon L; Hughes, Michael G; Lloyd, Rhodri S; Cronin, John B

    2017-04-01

    Meyers, RW, Oliver, JL, Hughes, MG, Lloyd, RS, and Cronin, JB. Influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. J Strength Cond Res 31(4): 1009-1016, 2017-The aim of this study was to investigate the influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. Three hundred and seventy-five boys (age: 13.0 ± 1.3 years) completed a 30-m sprint test, during which maximal speed, step length, step frequency, contact time, and flight time were recorded using an optical measurement system. Body mass, height, leg length, and a maturity offset represented somatic variables. Step frequency accounted for the highest proportion of variance in speed (∼58%) in the pre-peak height velocity (pre-PHV) group, whereas step length explained the majority of the variance in speed (∼54%) in the post-PHV group. In the pre-PHV group, mass was negatively related to speed, step length, step frequency, and contact time; however, measures of stature had a positive influence on speed and step length yet a negative influence on step frequency. Speed and step length were also negatively influenced by mass in the post-PHV group, whereas leg length continued to positively influence step length. The results highlighted that pre-PHV boys may be deemed step-frequency reliant, whereas post-PHV boys may be marginally step-length reliant. Furthermore, the negative influence of body mass, both pre-PHV and post-PHV, suggests that training to optimize sprint performance in youth should include methods such as plyometric and strength training, where a high neuromuscular focus and the development of force production relative to body weight are key foci.

  11. Full-waveform data for building roof step edge localization

    NASA Astrophysics Data System (ADS)

    Słota, Małgorzata

    2015-08-01

    Airborne laser scanning data perfectly represent flat or gently sloped areas; to date, however, accurate breakline detection is the main drawback of this technique. This issue becomes particularly important in the case of modeling buildings, where accuracy higher than the footprint size is often required. This article covers several issues related to full-waveform data registered on building step edges. First, the full-waveform data simulator was developed and presented in this paper. Second, this article provides a full description of the changes in echo amplitude, echo width and returned power caused by the presence of edges within the laser footprint. Additionally, two important properties of step edge echoes, peak shift and echo asymmetry, were noted and described. It was shown that these properties lead to incorrect echo positioning along the laser center line and can significantly reduce the edge points' accuracy. For these reasons and because all points are aligned with the center of the beam, regardless of the actual target position within the beam footprint, we can state that step edge points require geometric corrections. This article presents a novel algorithm for the refinement of step edge points. The main distinguishing advantage of the developed algorithm is the fact that none of the additional data, such as emitted signal parameters, beam divergence, approximate edge geometry or scanning settings, are required. The proposed algorithm works only on georeferenced profiles of reflected laser energy. Another major advantage is the simplicity of the calculation, allowing for very efficient data processing. Additionally, the developed method of point correction allows for the accurate determination of points lying on edges and edge point densification. For this reason, fully automatic localization of building roof step edges based on LiDAR full-waveform data with higher accuracy than the size of the lidar footprint is feasible.

  12. Scanning tunneling microscope with a rotary piezoelectric stepping motor

    NASA Astrophysics Data System (ADS)

    Yakimov, V. N.

    1996-02-01

    A compact scanning tunneling microscope (STM) with a novel rotary piezoelectric stepping motor for coarse positioning has been developed. An inertial method of rotating the rotor with a pair of piezoplates is used in the piezomotor. The minimum angular step size is a few arcseconds, with a spindle working torque of up to 1 N·cm. The design of the STM is noticeably simplified by the use of a piezomotor with such a small step size. A shaft eccentrically attached to the piezomotor spindle makes it possible to push and pull back the cylindrical bush carrying the tubular piezoscanner. The linear step of coarse positioning is about 50 nm. The STM resolution in the vertical direction is better than 0.1 nm without external vibration isolation.

  13. Development of 3D Oxide Fuel Mechanics Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, B. W.; Casagranda, A.; Pitts, S. A.

    This report documents recent work to improve the accuracy and robustness of the mechanical constitutive models used in the BISON fuel performance code. These developments include migration of the fuel mechanics models to be based on the MOOSE Tensor Mechanics module, improving the robustness of the smeared cracking model, implementing a capability to limit the time step size based on material model response, and improving the robustness of the return mapping iterations used in creep and plasticity models.
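
    The time-step-limiting idea mentioned above can be sketched in a few lines. This is an illustrative cap on the next step based on a material response rate (here an inelastic strain rate), not BISON's actual cutback logic:

        def limit_time_step(dt_current, strain_rate, max_strain_increment=1e-4,
                            growth_factor=1.5):
            """Cap the next time step so the predicted inelastic strain
            increment stays below a tolerance, and never let the step
            grow by more than 'growth_factor' at once. All parameter
            values are illustrative placeholders."""
            dt_material = (max_strain_increment / abs(strain_rate)
                           if strain_rate else float("inf"))
            return min(growth_factor * dt_current, dt_material)

        print(limit_time_step(1.0, strain_rate=5e-4))  # material limit: 0.2
        print(limit_time_step(1.0, strain_rate=1e-5))  # growth limit: 1.5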

  14. Automated nodule location and size estimation using a multi-scale Laplacian of Gaussian filtering approach.

    PubMed

    Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I

    2009-01-01

    Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of the stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with average relative size differences of 5% and -5% for the LoG and template-based methods, respectively.
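
    A minimal sketch of the multi-scale LoG response computation, using SciPy's gaussian_laplace with the usual sigma-squared scale normalization; the pruning and candidate-selection logic of the published method is omitted:

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def log_blob_response(image, sigmas):
            """Scale-normalized LoG responses at several scales. Bright
            blobs give strongly negative LoG values at their matching
            scale, so the sign is flipped to make maxima correspond to
            nodule candidates."""
            return np.stack([-s**2 * gaussian_laplace(image.astype(float), s)
                             for s in sigmas])

        # Toy example: a Gaussian 'nodule' of sigma 4 peaks near scale 4.
        x, y = np.mgrid[0:64, 0:64]
        img = np.exp(-((x - 32)**2 + (y - 32)**2) / (2 * 4.0**2))
        sigmas = [2, 3, 4, 5, 6]
        stack = log_blob_response(img, sigmas)
        scale_idx, i, j = np.unravel_index(stack.argmax(), stack.shape)
        print("location:", (i, j), "estimated sigma:", sigmas[scale_idx])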

  15. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.

    PubMed

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (field-programmable gate array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (coordinate rotation digital computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and increase the network size while keeping the network execution speed close to real time and maintaining high precision. An implementation of a two mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies on neural control of cognitive robots and systems.
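
    The step-by-step integration mentioned above can be illustrated with one forward-Euler update of a single Hodgkin-Huxley gating variable. The sketch uses NumPy exponentials where the FPGA would use CORDIC circuits; the rate constants are the classic squid-axon forms, not the paper's specific network:

        import numpy as np

        def alpha_n(v):  # K+ activation rate (Hodgkin-Huxley, V in mV)
            return 0.01 * (10.0 - v) / (np.exp((10.0 - v) / 10.0) - 1.0)

        def beta_n(v):
            return 0.125 * np.exp(-v / 80.0)

        def step_n(n, v, dt=0.01):
            """One forward-Euler step of the K+ gating variable:
            dn/dt = alpha_n(V)(1 - n) - beta_n(V) n. On the FPGA the
            exponentials would be evaluated with CORDIC circuits."""
            return n + dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)

        n, v = 0.3177, 0.0          # resting values
        for _ in range(1000):       # 10 ms of simulated time
            n = step_n(n, v)
        print(round(n, 4))          # stays near the resting steady state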

  16. Effects of Turbulence Model and Numerical Time Steps on Von Karman Flow Behavior and Drag Accuracy of Circular Cylinder

    NASA Astrophysics Data System (ADS)

    Amalia, E.; Moelyadi, M. A.; Ihsan, M.

    2018-04-01

    The flow of air passing around a circular cylinder at a Reynolds number of 250,000 exhibits the von Karman vortex street phenomenon. This phenomenon is captured well only with a suitable turbulence model. In this study, several turbulence models available in ANSYS Fluent 16.0 were tested for their ability to simulate the von Karman vortex street, namely k-epsilon, SST k-omega, Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, the effect of time step size on the accuracy of the CFD simulation was examined. The simulations were carried out using two-dimensional and three-dimensional models and then compared with experimental data. For the two-dimensional model, the von Karman vortex street was captured successfully using the SST k-omega turbulence model; for the three-dimensional model, it was captured using the Reynolds Stress turbulence model. The time step size affects the smoothness of the drag coefficient curves over time, as well as the running time of the simulation: the smaller the time step size, the smoother the resulting drag coefficient curves, at the cost of a longer computation time.

  17. Simulation methods with extended stability for stiff biochemical kinetics.

    PubMed

    Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin

    2010-08-11

    With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
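
    A minimal sketch of the basic Poisson tau-leap step the paper builds on (the Runge-Kutta extension itself is not reproduced here), applied to a toy birth-death process:

        import numpy as np

        rng = np.random.default_rng(0)

        def tau_leap_step(x, stoichiometry, propensities, tau):
            """One Poisson tau-leap: each reaction channel fires a Poisson
            number of times with mean a_j(x) * tau, and the state is
            updated by the summed stoichiometric changes."""
            a = propensities(x)
            k = rng.poisson(a * tau)              # firings per channel
            return x + stoichiometry.T @ k

        # Toy birth-death process: 0 -> S (rate 10), S -> 0 (rate 0.1 * S).
        stoich = np.array([[1], [-1]])            # rows: reactions, cols: species
        props = lambda x: np.array([10.0, 0.1 * x[0]])

        x = np.array([0])
        for _ in range(1000):
            x = tau_leap_step(x, stoich, props, tau=0.05)
        print(x)  # fluctuates around the steady state of 100 molecules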

  18. Simultaneous digital quantification and fluorescence-based size characterization of massively parallel sequencing libraries.

    PubMed

    Laurie, Matthew T; Bertout, Jessica A; Taylor, Sean D; Burton, Joshua N; Shendure, Jay A; Bielas, Jason H

    2013-08-01

    Due to the high cost of failed runs and suboptimal data yields, quantification and determination of fragment size range are crucial steps in the library preparation process for massively parallel sequencing (or next-generation sequencing). Current library quality control methods commonly involve quantification using real-time quantitative PCR and size determination using gel or capillary electrophoresis. These methods are laborious and subject to a number of significant limitations that can make library calibration unreliable. Herein, we propose and test an alternative method for quality control of sequencing libraries using droplet digital PCR (ddPCR). By exploiting a correlation we have discovered between droplet fluorescence and amplicon size, we achieve the joint quantification and size determination of target DNA with a single ddPCR assay. We demonstrate the accuracy and precision of applying this method to the preparation of sequencing libraries.
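
    The joint readout can be pictured as a simple calibration: fit the reported fluorescence-size correlation once, then invert it for unknown libraries. The numbers below are illustrative placeholders, not data from the paper:

        import numpy as np

        # Hypothetical calibration data: amplicon size (bp) vs. mean droplet
        # fluorescence of positive droplets (arbitrary units). The direction
        # and magnitude of the trend are invented for illustration.
        size_bp = np.array([150, 250, 400, 600, 800])
        fluor = np.array([9800, 8900, 7600, 6100, 4900])

        slope, intercept = np.polyfit(fluor, size_bp, 1)   # linear calibration

        def estimate_size(mean_positive_fluorescence):
            """Infer library fragment size from droplet fluorescence using
            the fitted line; quantification comes from the droplet counts."""
            return slope * mean_positive_fluorescence + intercept

        print(round(estimate_size(7000)))  # estimated fragment size in bp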

  19. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps are highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering the possible consequences. A better understanding of the impact of different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g. when aggregating cattle number estimates from subcounty to district level, P < 0.009, based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy improves remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply on a lower spatial scale should be further investigated.

  20. Comparison of theoretical and experimental determination of the flexing of scratch drive actuator plates

    NASA Astrophysics Data System (ADS)

    Li, Lijie; Brown, James G.; Uttamchandani, Deepak G.

    2002-09-01

    The scratch drive actuator (SDA) is a key element in microelectromechanical systems (MEMS) technology. The actuator can be designed to travel very long distances with a precise step size. Various articles describe the characteristics of scratch drive actuators.3, 6, 8 MEMS designers need models of the SDA in order to incorporate it into their microsystems applications. The objective of our effort is to develop models of the SDA in its working state. In this paper, a suspended SDA plate actuated by electrostatic force is analyzed. A mathematical model is established based on coupled electrostatic-mechanical theory. Two phases are calculated, because the plate contacts the bottom surface under the electrostatic force: a non-contact mode and a contact mode. From these two models, the relationship between applied voltage and contact distance is obtained. A geometrical model of the bending plate is established to determine the relationship between contact distance and step size, and combining the two results yields the expected relationship between step size and applied voltage. Finally, a coupled-field electro-mechanical simulation was performed with the commercial software IntelliSuite. The dimensions of the SDA plate and bushing are assumed fixed, and all material properties are from JDSU Cronos MUMPs. A Veeco NT1000 surface profiler was used to investigate the bending of the SDA plate, and the experimental and theoretical results are compared.

  1. Process Parameters Optimization in Single Point Incremental Forming

    NASA Astrophysics Data System (ADS)

    Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh

    2016-04-01

    This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array, selected on the basis of degrees of freedom. The tests were carried out on a vertical machining center (DMC70V) using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius and three levels of sheet thickness, step size, tool rotational speed, feed rate and lubrication were considered as the input process parameters, with wall angle and surface roughness as the process responses. The influential process parameters for formability and surface roughness were identified with the help of statistical tools (response table, main effect plot and ANOVA). The parameter with the utmost influence on both formability and surface roughness is lubrication. For formability, lubrication is followed by tool rotational speed, feed rate, sheet thickness, step size and tool radius in descending order of influence, whereas for surface roughness, lubrication is followed by feed rate, step size, tool radius, sheet thickness and tool rotational speed. The predicted optimal values for the wall angle and surface roughness are 88.29° and 1.03225 µm. The confirmation experiments were conducted three times, and the wall angle and surface roughness were found to be 85.76° and 1.15 µm, respectively.

  2. One-step synthesis of hydrothermally stable mesoporous aluminosilicates with strong acidity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Dongjiang; Xu, Yao

    2008-09-15

    Using tetraethylorthosilicate (TEOS), polymethylhydrosiloxane (PMHS) and aluminium isopropoxide (AIP) as the reactants, through a one-step nonsurfactant route based on PMHS-TEOS-AIP co-polycondensation, hydrothermally stable mesoporous aluminosilicates with different Si/Al molar ratios were successfully prepared. All samples exclusively showed a narrow pore size distribution centered at 3.6 nm. To assess the hydrothermal stability, samples were subjected to 100 °C distilled water for 300 h. The boiled mesoporous aluminosilicates have nearly the same N2 adsorption-desorption isotherms and the same pore size distributions as the newly synthesized ones, indicating excellent hydrothermal stability. The 29Si MAS NMR spectra confirmed that PMHS and TEOS have jointly condensed and that CH3 groups have been introduced into the materials. The 27Al MAS NMR spectra indicated that Al atoms have been incorporated in the mesopore frameworks. NH3 temperature-programmed desorption showed strong acidity. Due to the large amount of CH3 groups, the mesoporous aluminosilicates exhibit good hydrophobicity. Owing to the relatively large pores and the strong acidity provided by the uniform four-coordinated Al atoms, excellent catalytic performance for 1,3,5-triisopropylbenzene cracking was easily achieved. The materials may be a profitable complement for the synthesis of solid acid catalysts. - Graphical abstract: Based on the nonsurfactant method, a facile one-step synthesis route has been developed to prepare methyl-modified mesoporous aluminosilicates that possess hydrothermal stability and strong acidity.

  3. Using Intervention Mapping to Develop an Oral Health e-Curriculum for Secondary Prevention of Eating Disorders.

    PubMed

    DeBate, Rita D; Bleck, Jennifer R; Raven, Jessica; Severson, Herb

    2017-06-01

    Preventing oral-systemic health issues relies on evidence-based interventions across various system-level target groups. Although the use of theory- and evidence-based approaches has been encouraged in developing oral health behavior change programs, the translation of theoretical constructs and principles to behavior change interventions has not been well described. Based on a series of six systematic steps, Intervention Mapping provides a framework for effective decision making with regard to developing, implementing, and evaluating theory- and evidence-informed, system-based behavior change programs. This article describes the application of the Intervention Mapping framework to develop the EAT (evaluating, assessing, and treating) evidence-based intervention with the goal of increasing the capacity of oral health providers to engage in secondary prevention of oral-systemic issues associated with disordered eating behaviors. Examples of data and deliverables for each step are described. In addition, results from evaluation of the intervention via randomized control trial are described, with statistically significant differences observed in behavioral outcomes in the intervention group with effect sizes ranging from r=0.62 to 0.83. These results suggest that intervention mapping, via the six systematic steps, can be useful as a framework for continued development of preventive interventions.

  4. Scalable Wavelet-Based Active Network Stepping Stone Detection

    DTIC Science & Technology

    2012-03-22

    [Fragmentary excerpt mixing table-of-contents entries (4.2.2 Synchronization Frame, 4.2.3 Frame Size) with body text. The recoverable content: pilot experiments led to the final algorithm (Figure 3.4) and detector (Figure 3.5); the detection statistic is the number of synchronization frames divided by the total number of frames, and comparing this statistic to the detection threshold γ determines whether a watermark is present.]

  5. Determination of the structures of small gold clusters on stepped magnesia by density functional calculations.

    PubMed

    Damianos, Konstantina; Ferrando, Riccardo

    2012-02-21

    The structural modifications of small supported gold clusters caused by realistic surface defects (steps) in the MgO(001) support are investigated by computational methods. The most stable gold cluster structures on a stepped MgO(001) surface are searched for in the size range up to 24 Au atoms and locally optimized by density-functional calculations. Several structural motifs are found within energy differences of 1 eV: inclined leaflets, arched leaflets, pyramidal hollow cages and compact structures. We show that the interaction with the step clearly modifies the structures with respect to adsorption on the flat defect-free surface. We find that leaflet structures clearly dominate at smaller sizes. These leaflets are either inclined and quasi-horizontal, or arched, at variance with the case of the flat surface, on which vertical leaflets prevail. With increasing cluster size, pyramidal hollow cages begin to compete against leaflet structures, and cage structures become more and more favourable as size increases. The only exception is size 20, at which the tetrahedron is found as the most stable isomer; this tetrahedron is, however, quite distorted. The comparison of two different exchange-correlation functionals (Perdew-Burke-Ernzerhof and local density approximation) shows the same qualitative trends. This journal is © The Royal Society of Chemistry 2012

  6. GIS-based approach for quantifying landscape connectivity of Javan Hawk-Eagle habitat

    NASA Astrophysics Data System (ADS)

    Nurfatimah, C.; Syartinilia; Mulyani, Y. A.

    2018-05-01

    The Javan Hawk-Eagle (Nisaetus bartelsi; JHE) is a law-protected endemic raptor that currently faces a decrease in the number and size of its habitat patches, which will lead to patch isolation and species extinction. This study assessed the degree of connectivity between remnant habitat patches in the central part of Java by utilizing the Conefor Sensinode software as an additional tool for ArcGIS. The connectivity index was determined from three fractions: intra, flux and connector. Using these connectivity indices, 4 patches were identified as core habitat, 9 patches as stepping-stone habitat and 6 patches as isolated habitat. The patches were then validated against a land cover map derived from Landsat 8 imagery of August 2014. Core habitat is 36% covered by natural forest, while stepping-stone habitat has 55% natural forest cover and isolated habitat 59%. Patches were classified as isolated because of zero connectivity (PCcon = 0) and patch sizes too small to support a viable JHE population. Yet the condition of the natural forest and the surrounding matrix landscape in the isolated patches actually supports the species' habitat needs. It is therefore very important to conduct the right conservation management based on the condition of each patch.

  7. The effect of external forces on discrete motion within holographic optical tweezers.

    PubMed

    Eriksson, E; Keen, S; Leach, J; Goksör, M; Padgett, M J

    2007-12-24

    Holographic optical tweezers is a widely used technique to manipulate the individual positions of optically trapped micron-sized particles in a sample. The trap positions are changed by updating the holographic image displayed on a spatial light modulator. The updating process takes a finite time, resulting in a temporary decrease in the intensity, and thus the stiffness, of the optical trap. We have investigated this change in trap stiffness during the updating process by studying the motion of an optically trapped particle in a fluid flow. We found that the change in trap stiffness depends highly nonlinearly on the step size: for step sizes up to approximately 300 nm the trap stiffness decreases, while above 300 nm the change in trap stiffness remains constant for all step sizes up to one particle radius. This information is crucial for optical force measurements using holographic optical tweezers.

  8. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influence of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid size and time step, which incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme with variable grid size and time step for elastic wave modeling in porous media. Variable finite-difference coefficients and wavefield interpolation are used to realize the transition of wave propagation between regions of different grid size. The accuracy and efficiency of the algorithm are shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.

  9. Rock sampling. [method for controlling particle size distribution

    NASA Technical Reports Server (NTRS)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  10. (I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research

    PubMed Central

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358
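
    The "random chance" scenario is straightforward to reproduce in simulation. A minimal sketch, assuming each code is observed independently with a fixed probability per sampled source (a simplification of the paper's setup):

        import numpy as np

        rng = np.random.default_rng(1)

        def sample_size_to_saturation(n_codes=30, p_observe=0.1, max_steps=10000):
            """'Random chance' scenario: each sampled source reveals each
            code independently with probability p_observe; saturation is
            reached when every code has been seen at least once."""
            seen = np.zeros(n_codes, dtype=bool)
            for step in range(1, max_steps + 1):
                seen |= rng.random(n_codes) < p_observe
                if seen.all():
                    return step
            return max_steps

        runs = [sample_size_to_saturation() for _ in range(200)]
        print("median sample size to saturation:", int(np.median(runs)))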

  11. A Coarse-to-Fine Geometric Scale-Invariant Feature Transform for Large Size High Resolution Satellite Image Registration

    PubMed Central

    Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui

    2018-01-01

    Large size high resolution (HR) satellite image matching is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for large size HR satellite image registration, based on a coarse-to-fine strategy and a geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method, scale restrict (SR) SIFT, is applied at a low resolution level. The matching results provide geometric constraints that then guide block division and geometric SIFT in the fine matching step. The block matching method overcomes the memory problem, and in geometric SIFT the area constraints help validate the candidate matches and decrease the search complexity. To further improve matching efficiency, the proposed method is parallelized using OpenMP. Finally, the sensing image is rectified to the coordinate system of the reference image via a Triangulated Irregular Network (TIN) transformation. Experiments were designed to test the performance of the proposed matching method. The experimental results show that the proposed method decreases the matching time and increases the number of matching points while maintaining high registration accuracy. PMID:29702589

  12. Bio-Inspired Aggregation Control of Carbon Nanotubes for Ultra-Strong Composites

    PubMed Central

    Han, Yue; Zhang, Xiaohua; Yu, Xueping; Zhao, Jingna; Li, Shan; Liu, Feng; Gao, Peng; Zhang, Yongyi; Zhao, Tong; Li, Qingwen

    2015-01-01

    High performance nanocomposites require good dispersion and high alignment of the nanometer-sized components, at a high mass or volume fraction as well. However, the road towards such a composite structure is severely hindered by the easy aggregation of these nanometer-sized components. Here we demonstrate a big step towards the ideal composite structure for carbon nanotubes (CNTs), in which all the CNTs are highly packed, aligned, and unaggregated, with the impregnated polymers acting as interfacial adhesives and mortar to build up the composite structure. The strategy is based on a bio-inspired aggregation control that limits CNT aggregation to sub-20–50 nm, a dimension determined by the CNT growth. After being stretched with full structural relaxation in a multi-step way, the CNT/polymer (bismaleimide) composite yielded super-high tensile strengths of 6.27–6.94 GPa, more than 100% higher than those of carbon fiber/epoxy composites, and toughnesses of 117–192 MPa. We anticipate that the present study can be generalized for developing multifunctional and smart nanocomposites where all the surfaces of the nanometer-sized components take part in the shear transfer of mechanical, thermal, and electrical signals. PMID:26098627

  13. Fine tuning of magnetite nanoparticle size distribution using dissymmetric potential pulses in the presence of biocompatible surfactants and the electrochemical characterization of the nanoparticles.

    PubMed

    Rodríguez-López, A; Cruz-Rivera, J J; Elías-Alfaro, C G; Betancourt, I; Ruiz-Silva, H; Antaño-López, R

    2015-01-01

    The effects of varying the surfactant concentration and the anodic pulse potential on the properties and electrochemical behavior of magnetite nanoparticles were investigated. The nanoparticles were synthesized with an electrochemical method based on applying dissymmetric potential pulses, which offers the advantage that the particle size distribution can be tuned very precisely in the range of 10 to 50 nm. Under the conditions studied, the surfactant concentration directly affects the size distribution, with higher concentrations producing narrower distributions. Linear voltammetry was used to characterize the electrochemical behavior of the synthesized nanoparticles in both the anodic and cathodic regions, which is attributed to the oxidation of Fe(2+) and the reduction of Fe(3+); these species are part of the spinel structure of magnetite. Electrochemical impedance spectroscopy data indicated that the reduction and oxidation reactions of the nanoparticles are controlled not by the mass transport step but by the charge transfer step. The sample with the highest saturation magnetization was the one synthesized in the presence of polyethylene glycol. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Soft Landing of Bare Nanoparticles with Controlled Size, Composition, and Morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Grant E.; Colby, Robert J.; Laskin, Julia

    2015-01-01

    A kinetically-limited physical synthesis method based on magnetron sputtering and gas aggregation has been coupled with size-selection and ion soft landing to prepare bare metal nanoparticles on surfaces with controlled coverage, size, composition, and morphology. Employing atomic force microscopy (AFM) and scanning electron microscopy (SEM), it is demonstrated that the size and coverage of bare nanoparticles soft landed onto flat glassy carbon and silicon as well as stepped graphite surfaces may be controlled through size-selection with a quadrupole mass filter and the length of deposition, respectively. The bare nanoparticles are observed with AFM to bind randomly to the flat glassy carbon surface when soft landed at relatively low coverage (10^12 ions). In contrast, on stepped graphite surfaces at intermediate coverage (10^13 ions) the soft landed nanoparticles are shown to bind preferentially along step edges, forming extended linear chains of particles. At the highest coverage examined in this study (5 x 10^13 ions), the nanoparticles are demonstrated with both AFM and SEM to form a continuous film on flat glassy carbon and silicon surfaces. On a graphite surface with defects, however, it is shown with SEM that the presence of localized surface imperfections results in agglomeration of nanoparticles onto these features and the formation of neighboring depletion zones that are devoid of particles. Employing high resolution scanning transmission electron microscopy in the high angular annular dark field imaging mode (STEM-HAADF) and electron energy loss spectroscopy (EELS), it is demonstrated that the magnetron sputtering/gas aggregation synthesis technique produces single metal particles with controlled morphology as well as bimetallic alloy nanoparticles with clearly defined core-shell structure. This kinetically-limited physical synthesis technique, when combined with ion soft landing, is therefore a versatile complementary method for preparing a wide range of bare supported nanoparticles with selected properties, free of the solvent, organic capping agents, and residual reactants present with nanoparticles synthesized in solution.

  15. Impact of implementation choices on quantitative predictions of cell-based computational models

    NASA Astrophysics Data System (ADS)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
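
    The sensitivity to time step can be illustrated with a toy overdamped relaxation (not the authors' vertex model): a shrinking cell edge triggers a rearrangement when it crosses a length threshold, and the detected trigger time shifts with the step size:

        def relax_edge(length0=1.0, k=1.0, dt=0.01, threshold=0.2, t_end=10.0):
            """Overdamped shrinkage of a single cell edge, dL/dt = -k L,
            with a T1-style rearrangement triggered when L drops below a
            length threshold. Large steps overshoot and shift the
            detected trigger time; all parameters are illustrative."""
            L, t = length0, 0.0
            while t < t_end:
                L += dt * (-k * L)   # forward-Euler position update
                t += dt
                if L < threshold:
                    return round(t, 3)   # rearrangement time
            return None                  # no rearrangement detected

        for dt in (0.01, 0.1, 0.5):
            print(dt, relax_edge(dt=dt))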

  16. Does acid-base equilibrium correlate with remnant liver volume during stepwise liver resection?

    PubMed

    Golriz, Mohammad; Abbasi, Sepehr; Fathi, Parham; Majlesara, Ali; Brenner, Thorsten; Mehrabi, Arianeb

    2017-10-01

    Small for size and flow syndrome (SFSF) is one of the most challenging complications following extended hepatectomy (EH). After EH, hepatic artery flow decreases and portal vein flow increases per 100 g of remnant liver volume (RLV). This causes hypoxia followed by metabolic acidosis. A correlation between acidosis and posthepatectomy liver failure has been postulated but not studied systematically in a large animal model or clinical setting. In our study, we performed stepwise liver resections on nine pigs to define SFSF limits as follows: step 1: segment II/III resection, step 2: segment IV resection, step 3: segment V/VIII resection (RLV: 75, 50, and 25%, respectively). Blood gas values were measured before and after each step using four catheters inserted into the carotid artery, internal jugular vein, hepatic artery, and portal vein. The pH, [Formula: see text], and base excess (BE) decreased, but [Formula: see text] values increased after 75% resection in the portal and jugular veins. EH correlated with reduced BE in the hepatic artery. PCO2 values increased after 75% resection in the jugular vein. In contrast, arterial PO2 increased after every resection, whereas the venous PO2 decreased slightly. There were differences in venous [Formula: see text], BE in the hepatic artery, and PCO2 in the jugular vein after 75% liver resection. Because 75% resection is the limit for SFSF, these noninvasive blood evaluations may be used to predict SFSF. Further studies with long-term follow-up are required to validate this correlation. NEW & NOTEWORTHY This is the first study to evaluate acid-base parameters in major central and hepatic vessels during stepwise liver resection. The pH, [Formula: see text], and base excess (BE) decreased, but [Formula: see text] values increased after 75% resection in the portal and jugular veins. Extended hepatectomy correlated with reduced BE in the hepatic artery. Because 75% resection is the limit for small for size and flow syndrome (SFSF), postresection blood gas evaluations may be used to predict SFSF. Copyright © 2017 the American Physiological Society.

  17. Study on the Effect of Diamond Grain Size on Wear of Polycrystalline Diamond Compact Cutter

    NASA Astrophysics Data System (ADS)

    Abdul-Rani, A. M.; Che Sidid, Adib Akmal Bin; Adzis, Azri Hamim Ab

    2018-03-01

    Drilling is one of the most crucial steps in the oil and gas industry, as it proves the availability of oil and gas under the ground. The polycrystalline diamond compact (PDC) bit is a type of bit that is gaining popularity due to its high rate of penetration (ROP). However, a PDC bit can wear quickly, especially when drilling hard rock. The purpose of this study is to identify the relationship between diamond grain size and the wear rate of the PDC cutter using a simulation-based study with FEA software (ABAQUS). The wear rates of PDC cutters with different diamond grain sizes were calculated from simulated cuts against granite. The results of this study show that the smaller the diamond grain size, the higher the wear resistance of the PDC cutter.

  18. Microstickies agglomeration by electric field.

    PubMed

    Du, Xiaotang Tony; Hsieh, Jeffery S

    2016-01-01

    Microstickies deposit on both the paper machine and paper products when they agglomerate under step changes in ionic strength, pH, temperature, or chemical additives. These stickies increase the downtime of the paper mill and decrease the quality of the paper. The key property of microstickies is their small size, which leads to low removal efficiency and difficulties in measurement; increasing the size of microstickies therefore helps improve both removal efficiency and ease of measurement. In this paper, a new agglomeration technology based on an electric field was investigated. The electric treatment could increase the size of stickies particles by around 100 times. The synergistic effect between the electric field treatment and detackifying chemicals/dispersants, including polyvinyl alcohol, poly(diallyldimethylammonium chloride) and lignosulfonate, was also studied.

  19. Analysis of the track- and dose-averaged LET and LET spectra in proton therapy using the GEANT4 Monte Carlo code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Fada; Peeler, Christopher; Taleei, Reza

    Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT4 Monte Carlo code. A further purpose was to provide a recommendation for selecting an appropriate LET quantity from GEANT4 simulations to correlate with the biological effectiveness of therapeutic protons. Methods: The authors developed a particle-tracking-step-based strategy to calculate the average LET quantities (track-averaged LET, LET_t, and dose-averaged LET, LET_d) using GEANT4 for different tracking step size limits. A step size limit refers to the maximum allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET_t and LET_d of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information, including fluence spectra and dose spectra of the energy deposition per step. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra by combining the Monte Carlo method and a deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of the different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET_t but significant for LET_d. This resulted from differences in the energy deposition per step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT4 can produce incorrect LET_d results in the dose plateau region for small step limits. The erroneous LET_d results can be attributed to the algorithm used to determine fluctuations in energy deposition along the tracking step in GEANT4. The incorrect LET_d values lead to substantial differences in the calculated RBE. Conclusions: When the GEANT4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LET_t in the dose plateau region and LET_d around the Bragg peak. For a large step limit, i.e., 500 μm, LET_d is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LET_d to LET_t becomes positive.
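
    The two averages themselves are simple weighted means over tracking steps: LET_t weights each step's LET by its track length and LET_d by its deposited energy. A minimal sketch:

        import numpy as np

        def track_and_dose_averaged_let(de, dx):
            """Track- and dose-averaged LET from per-step energy deposits
            de (keV) and step lengths dx (um): LET_t weights each step's
            LET by track length, LET_d weights it by deposited energy."""
            de, dx = np.asarray(de, float), np.asarray(dx, float)
            let = de / dx                          # per-step LET, keV/um
            let_t = np.sum(let * dx) / np.sum(dx)  # = sum(de)/sum(dx)
            let_d = np.sum(let * de) / np.sum(de)
            return let_t, let_d

        # Two steps of equal length but unequal deposits: LET_d > LET_t.
        print(track_and_dose_averaged_let(de=[1.0, 9.0], dx=[1.0, 1.0]))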

  20. Revision of the design of a standard for the dimensions of school furniture.

    PubMed

    Molenbroek, J F M; Kroon-Ramaekers, Y M T; Snijders, C J

    2003-06-10

    In this study an anthropometric design process was followed. The aim was to improve the fit of school furniture sizes for European children. It was demonstrated statistically that the draft of a European standard does not cover the target population. No literature on design criteria for sizes exists, and in practice it is common to calculate the fit for only the mean values (P50). The calculations reported here used body dimensions of Dutch children, measured by the authors' department, together with data from German and British national standards. A design process was followed that contains several steps, including: target group, anthropometric model, and percentage exclusion. The criteria developed in this study are (1) a fit on the basis of 1% exclusion (P1 or P99), and (2) a prescription based on popliteal height. Based on this new approach it was concluded that prescription of a set size should be based on popliteal height rather than body height. The draft standard, prEN 1729, can be improved with this approach. A European standard for school furniture should include the exception that for Dutch children an extra large size is required.
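
    The 1% exclusion criterion lends itself to a one-line percentile computation. A minimal sketch, assuming a hypothetical sample of popliteal heights rather than the authors' Dutch data:

        import numpy as np

        # Hypothetical popliteal heights (mm) for one age group.
        popliteal = np.random.default_rng(1).normal(380.0, 25.0, size=1000)

        # 1% exclusion at each tail: the size must accommodate P1..P99.
        p1, p99 = np.percentile(popliteal, [1, 99])
        print(f"design range for seat height: {p1:.0f}-{p99:.0f} mm")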

  1. Surface treated carbon catalysts produced from waste tires for fatty acids to biofuel conversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hood, Zachary D.; Adhikari, Shiba P.; Wright, Marcus W.

    A method of making solid acid catalysts includes the step of sulfonating waste tire pieces in a first sulfonation step. The sulfonated waste tire pieces are pyrolyzed to produce carbon composite pieces having a pore size less than 10 nm. The carbon composite pieces are then ground to produce carbon composite powders having a size less than 50 μm. The carbon composite powders are sulfonated in a second sulfonation step to produce sulfonated solid acid catalysts. A method of making biofuels with the solid acid catalysts is also disclosed.

  2. Programmable Digital Controller

    NASA Technical Reports Server (NTRS)

    Wassick, Gregory J.

    2012-01-01

    An existing three-channel analog servo loop controller has been redesigned as a digital servo loop controller for piezoelectric-transducer-based (PZT-based) etalon control applications. This change offers several improvements over the previous analog controller, including software control over the proportional-integral-derivative (PID) parameters, inclusion of other data of interest, such as temperature and pressure, in the control laws, improved ability to compensate for PZT hysteresis and mechanical mount fluctuations, the ability to provide pre-programmed scanning and stepping routines, an improved user interface, expanded data acquisition, and reduced size, weight, and power.
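
    For orientation, a discrete PID loop with software-settable gains, as the abstract describes, can be sketched as follows. This is a generic illustration, not the flight code; the gains, loop rate, and signal values are hypothetical.

        class PID:
            """Discrete PID controller with software-adjustable gains."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measurement):
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return (self.kp * error + self.ki * self.integral
                        + self.kd * derivative)

        # One servo channel driving a PZT at a 1 kHz loop rate (hypothetical):
        loop = PID(kp=0.8, ki=120.0, kd=0.0005, dt=1e-3)
        drive = loop.update(setpoint=1.25, measurement=1.19)  # volts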

  3. Method and apparatus for sizing and separating warp yarns using acoustical energy

    DOEpatents

    Sheen, Shuh-Haw; Chien, Hual-Te; Raptis, Apostolos C.; Kupperman, David S.

    1998-01-01

    A slashing process for preparing warp yarns for weaving operations including the steps of sizing and/or desizing the yarns in an acoustic resonance box and separating the yarns with a leasing apparatus comprised of a set of acoustically agitated lease rods. The sizing step includes immersing the yarns in a size solution contained in an acoustic resonance box. Acoustic transducers are positioned against the exterior of the box for generating an acoustic pressure field within the size solution. Ultrasonic waves that result from the acoustic pressure field continuously agitate the size solution to effect greater mixing and more uniform application and penetration of the size onto the yarns. The sized yarns are then separated by passing the warp yarns over and under lease rods. Electroacoustic transducers generate acoustic waves along the longitudinal axis of the lease rods, creating a shearing motion on the surface of the rods for splitting the yarns.

  4. Fundamental Fractal Antenna Design Process

    NASA Astrophysics Data System (ADS)

    Zhu, L. P.; Kim, T. C.; Kakas, G. D.

    2017-12-01

    Antenna designers are always looking for new ideas to push the envelope for new antennas, using a smaller volume while striving for wider bandwidth and higher gain. One proposed method of increasing bandwidth or shrinking antenna size is the use of fractal geometry, which gives rise to fractal antennas. Fractals are shapes that look the same whether one zooms in or zooms out. The design of a new type of antenna based on fractal geometry, utilizing Design of Experiments (DOE), is shown as part of the fractal antenna design process. Conformal fractal antenna designs are investigated for patterns, dimensions, and size while maintaining or improving antenna performance. This work shows an antenna designer how to establish the basic requirements of a fractal antenna through a step-by-step process, and how to optimize the antenna design by combining model prediction, lab measurement, and actual results from compact range measurements of the antenna patterns.
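
    As a hedged illustration of the geometry side only, the classic Koch generator below produces one self-similar antenna edge; it is not the authors' DOE-driven design, and the dimensions are hypothetical.

        import numpy as np

        def koch_curve(p0, p1, depth):
            """Recursively replace a segment by the 4-segment Koch generator."""
            p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
            if depth == 0:
                return [p0, p1]
            a = p0 + (p1 - p0) / 3.0
            b = p0 + 2.0 * (p1 - p0) / 3.0
            # Apex: middle third rotated by +60 degrees about point a.
            rot = np.array([[0.5, -np.sqrt(3) / 2], [np.sqrt(3) / 2, 0.5]])
            apex = a + rot @ (b - a)
            pts = []
            for q0, q1 in [(p0, a), (a, apex), (apex, b), (b, p1)]:
                pts.extend(koch_curve(q0, q1, depth - 1)[:-1])
            pts.append(p1)
            return pts

        outline = koch_curve((0, 0), (100, 0), depth=3)  # mm, one edge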

  5. Role of codeposited impurities during growth. II. Dependence of morphology on binding and barrier energies

    NASA Astrophysics Data System (ADS)

    Sathiyanarayanan, Rajesh; Hamouda, Ajmi Bh.; Pimpinelli, A.; Einstein, T. L.

    2011-01-01

    In an accompanying article we showed that the surface morphologies obtained through codeposition of a small quantity (2%) of impurities with Cu during growth (step-flow mode, θ = 40 ML) depend significantly on the lateral nearest-neighbor binding energy (ENN) of the impurity to a Cu adatom and the diffusion barrier (Ed) of the impurity atom on Cu(0 0 1). Based on these two energy parameters, ENN and Ed, we classify impurity atoms into four sets. We study island nucleation and growth in the presence of codeposited impurities from different sets in the submonolayer (θ ≤ 0.7 ML) regime. Similar to growth in the step-flow mode, we find different nucleation and growth behavior for impurities from different sets. We characterize these differences through variations of the number of islands (Ni) and the average island size with coverage (θ). Further, we compute the critical nucleus size (i) for all of these cases from the distribution of capture-zone areas using the generalized Wigner distribution.

  6. One-step femtosecond laser welding and internal machining of three glass substrates

    NASA Astrophysics Data System (ADS)

    Tan, Hua; Duan, Ji'an

    2017-05-01

    This paper demonstrates one-step femtosecond laser welding and internal machining of three fused silica substrates in the optical- and non-optical-contact regimes by focusing 1030-nm laser pulses at the middle of the second substrate. Focusing the laser pulses within the second glass of optical-contact and non-optical-contact samples induces permanent internal structural modification, leading to the three glass substrates bonding together simultaneously. The bonding mechanism is based on the internal modification of the glass, and it differs from that of ordinary glass welding at the interface. Examination of the welding-spot sizes on the four contact welding surfaces shows that welding-spot size is affected not only by the gap distance (ablation effect) and heat transmission, but also by gravity. The maximum bonding strength of the lower interface (56.2 MPa) in the optical-contact regime is more than double that (27.6 MPa) in the non-optical-contact regime.

  7. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Title 40 - Protection of Environment, Volume 23 (2014-07-01): Applicability of corrosion control treatment steps to small, medium-size and large water systems. Section 141.81, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), WATER PROGRAMS (CONTINUED), NATIONAL PRIMARY DRINKING WATER REGULATIONS, Control of Lead and Copper...

  8. An automated inner dimensional measurement system based on a laser displacement sensor for long-stepped pipes.

    PubMed

    Zhang, Fumin; Qu, Xinghua; Ouyang, Jianfei

    2012-01-01

    A novel measurement prototype based on a mobile vehicle that carries a laser scanning sensor is proposed. The prototype is intended for the automated measurement of the interior 3D geometry of large-diameter long-stepped pipes. The laser displacement sensor, which has a small measurement range, is mounted on an extended arm of known length and is scanned to improve the measurement accuracy for large-sized pipes. A fixing mechanism based on two sections is designed to ensure that the stepped pipe is concentric with the axis of rotation of the system. Data are acquired in a cylindrical coordinate system and fitted to a circle to determine the diameter. Systematic errors covering arm length, tilt, and offset errors are analyzed and calibrated. The proposed system is applied to sample parts and the results are discussed to verify its effectiveness. This technique measures a diameter of 600 mm with an uncertainty of 0.02 mm at a 95% confidence probability. A repeatability test is performed to examine precision, which is 1.1 μm. A laser tracker is used to verify the measurement accuracy of the system, which is evaluated as 9 μm within a diameter of 600 mm.
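
    The circle-fitting step maps naturally onto an algebraic least-squares (Kasa) fit. A minimal sketch, assuming one scanned ring of (x, y) points converted from the cylindrical coordinates, with synthetic data standing in for measurements:

        import numpy as np

        def fit_circle(x, y):
            """Kasa least-squares circle fit: solves x^2 + y^2 = 2ax + 2by + c,
            where c = r^2 - a^2 - b^2, and returns the center and diameter."""
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
            return (a, b), 2.0 * np.sqrt(c + a**2 + b**2)

        theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        x = 300.0 * np.cos(theta) + 0.01 * np.random.randn(theta.size)  # mm
        y = 300.0 * np.sin(theta) + 0.01 * np.random.randn(theta.size)
        center, diameter = fit_circle(x, y)   # diameter close to 600 mm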

  9. Effect of time step size and turbulence model on the open water hydrodynamic performance prediction of contra-rotating propellers

    NASA Astrophysics Data System (ADS)

    Wang, Zhan-zhi; Xiong, Ying

    2013-04-01

    A growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise performance and low hull vibration. Compared with the single-screw system, open water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS and the sliding mesh method, considering the effect of computational time step size and turbulence model. The validation study was performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Compared with the experimental data, it is shown that RANS with the sliding mesh method and the SST k-ω turbulence model has good precision in the open water performance prediction of contra-rotating propellers, and a small time step size can improve the level of accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.

  10. Tungsten Carbide Grain Size Computation for WC-Co Dissimilar Welds

    NASA Astrophysics Data System (ADS)

    Zhou, Dongran; Cui, Haichao; Xu, Peiquan; Lu, Fenggui

    2016-06-01

    A "two-step" image processing method based on electron backscatter diffraction in scanning electron microscopy was used to compute the tungsten carbide (WC) grain size distribution for tungsten inert gas (TIG) welds and laser welds. Twenty-four images were collected on randomly set fields per sample located at the top, middle, and bottom of a cross-sectional micrograph. Each field contained 500 to 1500 WC grains. The images were processed through clustering-based image segmentation and WC grain growth recognition. Based on the WC grain size computation and experiments, a simple WC-WC interaction model was developed to explain the WC dissolution, grain growth, and aggregation in welded joints. The WC-WC interaction and blunt corners were characterized using scanning and transmission electron microscopy. The WC grain size distribution and the effect of heat input E on the grain size distribution of the laser samples are discussed. The results indicate that (1) the grain size distribution follows a Gaussian distribution. Grain sizes at the top of the weld were larger than those near the middle and weld root because of power attenuation. (2) Significant WC grain growth occurred during welding, as observed in the as-welded micrographs. The average grain size was 11.47 μm in the TIG samples, much larger than that in base metal 1 (BM1, 2.13 μm). The grain size distribution curves for the TIG samples revealed a broad particle size distribution without fine grains. The average grain size in the laser samples (1.59 μm) was larger than that in base metal 2 (BM2, 1.01 μm). (3) WC-WC interactions exhibited complex plane, edge, and blunt-corner characteristics during grain growth: a WC $(1\bar{1}00)$ to WC $(01\bar{1}0)$ edge disappeared and became a blunt WC $(10\bar{1}0)$ plane, several grains with two- or three-sided planes and edges disappeared into a multi-edged grain, and adjacent WC grains merged.
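
    Where the reported distributions are Gaussian, the per-weld mean and spread can be recovered by a maximum-likelihood fit. A small sketch with synthetic grain diameters in place of the measured ones:

        import numpy as np
        from scipy.stats import norm

        # Hypothetical per-grain equivalent diameters (um) from one field.
        sizes_um = np.random.default_rng(7).normal(11.5, 3.0, size=800)
        mu, sigma = norm.fit(sizes_um)      # maximum-likelihood Gaussian fit
        print(f"mean grain size {mu:.2f} um, spread {sigma:.2f} um")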

  11. Stable and pH-responsive core-shell nanoparticles based on HEC and PMAA networks via template copolymerization

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Jin, Q.; Chen, Y.; Zhao, J.

    2011-10-01

    Taking advantage of specific hydrogen bonding interactions, stable and pH-responsive core-shell nanoparticles based on hydroxyethyl cellulose (HEC) and polymethacrylic acid (PMAA) networks, with hydrodynamic diameters ⟨Dh⟩ ranging from 190 to 250 nm, can be efficiently prepared via facile one-step copolymerization of methacrylic acid (MAA) and N,N'-methylenebisacrylamide (MBA) on an HEC template in water. Using dynamic light scattering, electrophoretic light scattering, fluorescence spectrometry, thermo-gravimetric analysis, TEM, and AFM observations, the influence of the crosslinker MBA as well as of the reaction parameters was studied. The results show that after the introduction of the crosslinker MBA, the nanoparticles became less compact; their size exhibited a smaller pH sensitivity, and their stability against pH was improved greatly. Furthermore, the size, structure, and pH response of the nanoparticles can be adjusted by varying the reaction parameters: nanoparticles of smaller size, more compact structure, and higher swelling capacity were produced as the pH value of the reaction medium increased or the HEC/MAA ratio decreased, while nanoparticles of smaller size, less compact structure and smaller swelling capacity were produced as the total feed concentration increased.

  12. Two-step size reduction and post-washing of steam exploded corn stover improving simultaneous saccharification and fermentation for ethanol production.

    PubMed

    Liu, Zhi-Hua; Chen, Hong-Zhang

    2017-01-01

    The simultaneous saccharification and fermentation (SSF) of corn stover biomass for ethanol production was performed by integrating steam explosion (SE) pretreatment, hydrolysis and fermentation. A higher SE pretreatment severity and two-step size reduction increased the specific surface area, swollen volume and water holding capacity of the steam exploded corn stover (SECS) and hence improved the efficiency of hydrolysis and fermentation. The ethanol production and yield in SSF increased with decreasing particle size and with post-washing of the SECS prior to fermentation to remove the inhibitors. Under SE conditions of 1.5 MPa and 9 min using a 2.0 cm particle size, glucan recovery and conversion to glucose by enzymes were 86.2% and 87.2%, respectively. The ethanol concentration and yield were 45.0 g/L and 85.6%, respectively. With this two-step size reduction and post-washing strategy, the water utilization efficiency, sugar recovery and conversion, and ethanol concentration and yield of the SSF process were improved. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Selectively Sized Graphene-Based Nanopores for in Situ Single Molecule Sensing

    PubMed Central

    2015-01-01

    The use of nanopore biosensors is set to be extremely important in developing precise single molecule detectors and providing highly sensitive advanced analysis of biological molecules. The precise tailoring of nanopore size is a significant step toward achieving this, as it would allow a nanopore to be tuned to a corresponding analyte. The work presented here details a methodology for selectively opening nanopores in real time. The tunable nanopores on a quartz nanopipette platform are fabricated using the electroetching of a graphene-based membrane constructed from individual graphene nanoflakes (ø ∼30 nm). The device design allows for in situ opening of the graphene membrane, from fully closed to fully opened (ø ∼25 nm), a feature that has yet to be reported in the literature. The translocation of DNA is studied as the pore size is varied, allowing subfeatures of DNA to be detected, with slower DNA translocations at smaller pore sizes, and trends to be observed as the pore is opened. This approach opens the door to creating a device that can be targeted to detect specific analytes. PMID:26204996

  14. Self-Templated Stepwise Synthesis of Monodispersed Nanoscale Metalated Covalent Organic Polymers for In Vivo Bioimaging and Photothermal Therapy.

    PubMed

    Shi, Yanshu; Deng, Xiaoran; Bao, Shouxin; Liu, Bei; Liu, Bin; Ma, Ping'an; Cheng, Ziyong; Pang, Maolin; Lin, Jun

    2017-09-05

    Size- and shape-controlled growth of nanoscale microporous organic polymers (MOPs) remains a major challenge, and the use of these materials for in vivo biomedical applications is still scarce. In this study, a monodispersed nanometalated covalent organic polymer (MCOP, M = Fe, Gd) with sizes around 120 nm was prepared by a self-templated two-step solution-phase synthesis method. The metal ions (Fe3+, Gd3+) played important roles in generating a small particle size and in the functionalization of the products during the reaction with p-phenylenediamine (Pa). The resultant Fe-Pa complex was used as a template for the subsequent formation of MCOP following the Schiff base reaction with 1,3,5-triformylphloroglucinol (Tp). A high tumor suppression efficiency for this Pa-based COP is reported for the first time. This study demonstrates the potential use of MCOP as a photothermal agent for photothermal therapy (PTT) and also provides an alternative route to fabricate nano-sized MCOPs. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.

  16. Optimized spray drying process for preparation of one-step calcium-alginate gel microspheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popeski-Dimovski, Riste

    Calcium-alginate microparticles have been used extensively in drug delivery systems. We therefore establish a one-step method for the preparation of internally gelated microparticles with spherical shape and narrow size distribution. We use four types of alginate with different G/M ratios and molar weights. The size of the particles is measured using light diffraction and scanning electron microscopy. Measurements showed that with this method microparticles with a size distribution around 4 μm can be prepared, and SEM imaging showed that these particles are spherical in shape.

  17. Shear Melting of a Colloidal Glass

    NASA Astrophysics Data System (ADS)

    Eisenmann, Christoph; Kim, Chanjoong; Mattsson, Johan; Weitz, David A.

    2010-01-01

    We use confocal microscopy to explore shear melting of colloidal glasses, which occurs at strains of ˜0.08, coinciding with a strongly non-Gaussian step size distribution. For larger strains, the particle mean square displacement increases linearly with strain and the step size distribution becomes Gaussian. The effective diffusion coefficient varies approximately linearly with shear rate, consistent with a modified Stokes-Einstein relationship in which thermal energy is replaced by shear energy and the length scale is set by the size of cooperatively moving regions consisting of ˜3 particles.

  18. Ferroelectric properties of composites containing BaTiO3 nanoparticles of various sizes

    NASA Astrophysics Data System (ADS)

    Adam, Jens; Lehnert, Tobias; Klein, Gabi; McMeeking, Robert M.

    2014-01-01

    Size effects, including the occurrence of superparaelectric phases associated with small scale, are a significant research topic for ferroelectrics. The relevant phenomena have been explored in detail, e.g., for homogeneous thin ferroelectric films, but the related effects in nanoparticles are usually only inferred from their structural properties. In contrast, this paper describes all the steps and concepts necessary for the direct characterization and quantitative assessment of the ferroelectric properties of as-synthesized and as-received nanoparticles. The method adopted uses electrical polarization measurements on polymer matrix composites containing ferroelectric nanoparticles. It is applied to ten different BaTiO3 particle types covering a size range from 10 nm to 0.8 μm. The influence of variations of particle characteristics such as tetragonality and dielectric constant is considered based on measurements of these properties. For composites containing different particle types a clearly differing polarization behaviour is found: for decreasing particle size, an increasing electric field is required to achieve a given level of polarization. The size dependence of a measure related to the coercive field revealed by this work is qualitatively in line with the state of knowledge for ferroelectrics having small dimensions. For the first time, such results and size effects are described based on data from experiments on collections of actual nanoparticles.

  19. Assessment of an electronic learning system for colon capsule endoscopy: a pilot study.

    PubMed

    Watabe, Hirotsugu; Nakamura, Tetsuya; Yamada, Atsuo; Kakugawa, Yasuo; Nouda, Sadaharu; Terano, Akira

    2016-06-01

    Training for colon capsule endoscopy (CCE) procedures is currently performed as a lecture and hands-on seminar. The aims of this pilot study were to assess the utility of an electronic learning system for CCE (ELCCE) designed for the Japanese Association for Capsule Endoscopy using an objective scoring engine, and to evaluate the efficacy of ELCCE on the acquisition of CCE reading competence. ELCCE is an Internet-based learning system with the following steps: step 1, introduction; step 2, CCE reading competence assessment test (CCAT), which evaluates the competence of CCE reading prior to training; step 3, learning reading theory; step 4, training with guidance; step 5, training without guidance; step 6, final assessment; and step 7, the same as step 2. The CCAT, step 5 and step 6 were scored automatically according to: lesion detection, diagnosis (location, size, shape of lesion), management recommendations, and quality of view. Ten trainee endoscopists were initially recruited (cohort 1), followed by a validating cohort of 11 trainee endoscopists (cohort 2). All but one participant finished ELCCE training within 7 weeks. In step 6, accuracy ranged from 53 to 98 % and was not impacted by step 2 pretest scores. The average CCAT scores significantly increased between step 2 pretest and step 7 in both cohorts, from 42 ± 18 % to 79 ± 15 % in cohort 1 (p = 0.0004), and from 52 ± 15 % to 79 ± 14 % in cohort 2 (p = 0.0003). ELCCE is useful and effective for improving CCE reading competence.

  20. Impurity effects in crystal growth from solutions: Steady states, transients and step bunch motion

    NASA Astrophysics Data System (ADS)

    Ranganathan, Madhav; Weeks, John D.

    2014-05-01

    We analyze a recently formulated model in which adsorbed impurities impede the motion of steps in crystals grown from solutions, while moving steps can remove or deactivate adjacent impurities. In this model, the chemical potential change of an atom on incorporation/desorption to/from a step is calculated for different step configurations and used in the dynamical simulation of step motion. The crucial difference between solution growth and vapor growth is related to the dependence of the driving force for growth of the main component on the size of the terrace in front of the step. This model has features resembling experiments in solution growth, which yields a dead zone with essentially no growth at low supersaturation and the motion of large coherent step bunches at larger supersaturation. The transient behavior shows a regime wherein steps bunch together and move coherently as the bunch size increases. The behavior at large line tension is reminiscent of the kink-poisoning mechanism of impurities observed in calcite growth. Our model unifies different impurity models and gives a picture of nonequilibrium dynamics that includes both steady states and time dependent behavior and shows similarities with models of disordered systems and the pinning/depinning transition.

  1. Statistical Analyses of Femur Parameters for Designing Anatomical Plates.

    PubMed

    Wang, Lin; He, Kunjin; Chen, Zhengming

    2016-01-01

    Femur parameters are key prerequisites for scientifically designing anatomical plates. Meanwhile, individual differences in femurs present a challenge to designing well-fitting anatomical plates. Therefore, to design anatomical plates more scientifically, analyses of femur parameters with statistical methods were performed in this study. The specific steps were as follows. First, taking eight anatomical femur parameters as variables, 100 femur samples were classified into three classes with factor analysis and Q-type cluster analysis. Second, based on the mean parameter values of the three classes of femurs, three sizes of average anatomical plates corresponding to the three classes of femurs were designed. Finally, based on Bayes discriminant analysis, a new femur could be assigned to the proper class, and the average anatomical plate suitable for that new femur was then selected from the three available sizes of plates. Experimental results showed that the classification of femurs was quite reasonable based on the anatomical aspects of the femurs. For instance, three sizes of condylar buttress plates were designed, 20 new femurs were assigned to their proper classes, and suitable condylar buttress plates were then determined and selected.
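
    A compact way to mirror this classify-then-assign pipeline is a clustering step followed by a discriminant classifier. In the sketch below, KMeans and linear discriminant analysis stand in for the paper's Q-type cluster and Bayes discriminant analyses, and the femur data are randomly generated placeholders.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 8))   # 100 femurs x 8 parameters (placeholder)

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
        clf = LinearDiscriminantAnalysis().fit(X, labels)  # discriminant rule

        new_femur = rng.normal(size=(1, 8))
        plate_class = clf.predict(new_femur)[0]  # pick the matching plate size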

  2. Knowledge Discovery from Vibration Measurements

    PubMed Central

    Li, Jian; Wang, Daoyao

    2014-01-01

    The framework, as well as the particular algorithms, of the pattern recognition process is widely adopted in structural health monitoring (SHM). However, as a part of the overall process of knowledge discovery from databases (KDD), the results of pattern recognition are only changes and patterns of changes of data features. In this paper, based on the similarity between KDD and SHM and considering the particularity of SHM problems, a four-step framework of SHM is proposed which extends the final goal of SHM from detecting damage to extracting knowledge to facilitate decision making. The purposes and proper methods of each step of this framework are discussed. To demonstrate the proposed SHM framework, a specific SHM method composed of second-order structural parameter identification, statistical control chart analysis, and system reliability analysis is then presented. To examine the performance of this SHM method, real sensor data measured from a lab-size steel bridge model structure are used. The developed four-step framework of SHM has the potential to clarify the process of SHM and to facilitate the further development of SHM techniques. PMID:24574933

  3. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model

    PubMed Central

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on the network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model as well as to increase the network size while keeping the network execution speed close to real time and maintaining high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies on the neural control of cognitive robots and systems as well. PMID:25484854
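
    The CORDIC kernel mentioned above reduces trigonometric evaluation to shift-and-add operations, which is why it suits FPGAs. A minimal rotation-mode sketch in Python (floating point for clarity; a hardware version would use fixed-point shifts):

        import math

        def cordic_sin_cos(angle_rad, iterations=16):
            """Compute (sin, cos) by CORDIC rotations; |angle| < ~1.74 rad."""
            atans = [math.atan(2.0 ** -i) for i in range(iterations)]
            k = 1.0                        # cumulative gain compensation
            for i in range(iterations):
                k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
            x, y, z = k, 0.0, angle_rad    # start from compensated unit vector
            for i in range(iterations):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * atans[i]
            return y, x                    # (sin, cos)

        s, c = cordic_sin_cos(0.5)         # ~ (0.4794, 0.8776)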

  4. Effective size selection of MoS2 nanosheets by a novel liquid cascade centrifugation: Influences of the flakes dimensions on electrochemical and photoelectrochemical applications.

    PubMed

    Kajbafvala, Marzieh; Farbod, Mansoor

    2018-05-14

    Although liquid phase exfoliation is a powerful method to produce MoS2 nanosheets on a large scale, its effectiveness is limited by the diversity of the produced nanosheet sizes. Here a novel approach for the separation of MoS2 flakes of various lateral sizes and thicknesses, based on cascaded centrifugation, is introduced. This method involves a pre-separation step performed through low-speed centrifugation to avoid the deposition of large-area single and few-layer flakes along with the heavier particles. The bulk MoS2 powders were dispersed in an aqueous solution of sodium cholate (SC) and sonicated for 12 h. The main separation step was performed using centrifugation speed intervals of 10-11, 8-10, 6-8, 4-6, 2-4 and 0.5-2 krpm, by which nanosheets containing 2, 4, 7, 8, 14, 18 and 29 layers were obtained, respectively. The samples were characterized using XRD, FESEM, AFM, TEM, DLS, and UV-vis, Raman and PL spectroscopy measurements. Dynamic light scattering (DLS) measurements confirmed the existence of a larger number of single- or few-layer MoS2 nanosheets compared to when the pre-separation step was not used. Finally, the photocurrent and cyclic voltammetry of the different samples were measured, and it was found that flakes with a larger surface area had a larger CV loop area. Our results provide a method for the preparation of a MoS2 monolayer-enriched suspension which can be used for different applications. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Functionality in Electrospun Nanofibrous Membranes Based on Fiber's Size, Surface Area, and Molecular Orientation

    PubMed Central

    Matsumoto, Hidetoshi; Tanioka, Akihiko

    2011-01-01

    Electrospinning is a versatile method for forming continuous thin fibers based on an electrohydrodynamic process. This method has the following advantages: (i) the ability to produce thin fibers with diameters in the micrometer and nanometer ranges; (ii) one-step forming of the two- or three-dimensional nanofiber network assemblies (nanofibrous membranes); and (iii) applicability for a broad spectrum of molecules, such as synthetic and biological polymers and polymerless sol-gel systems. Electrospun nanofibrous membranes have received significant attention in terms of their practical applications. The major advantages of nanofibers or nanofibrous membranes are the functionalities based on their nanoscaled-size, highly specific surface area, and highly molecular orientation. These functionalities of the nanofibrous membranes can be controlled by their fiber diameter, surface chemistry and topology, and internal structure of the nanofibers. This report focuses on our studies and describes fundamental aspects and applications of electrospun nanofibrous membranes. PMID:24957735

  6. Model calibration and validation for OFMSW and sewage sludge co-digestion reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esposito, G., E-mail: giovanni.esposito@unicas.it; Frunzo, L., E-mail: luigi.frunzo@unina.it; Panico, A., E-mail: anpanico@unina.it

    2011-12-15

    Highlights: > Disintegration is the limiting step of the anaerobic co-digestion process. > Disintegration kinetic constant does not depend on the waste particle size. > Disintegration kinetic constant depends only on the waste nature and composition. > The model calibration can be performed on organic waste of any particle size. - Abstract: A mathematical model has recently been proposed by the authors to simulate the biochemical processes that prevail in a co-digestion reactor fed with sewage sludge and the organic fraction of municipal solid waste. This model is based on the Anaerobic Digestion Model no. 1 of the International Water Association, which has been extended to include the co-digestion processes, using surface-based kinetics to model the organic waste disintegration and conversion to carbohydrates, proteins and lipids. When organic waste solids are present in the reactor influent, the disintegration process is the rate-limiting step of the overall co-digestion process. The main advantage of the proposed modeling approach is that the kinetic constant of such a process does not depend on the waste particle size distribution (PSD) and rather depends only on the nature and composition of the waste particles. The model calibration aimed to assess the kinetic constant of the disintegration process can therefore be conducted using organic waste samples of any PSD, and the resulting value will be suitable for all the organic wastes of the same nature as the investigated samples, independently of their PSD. This assumption was proven in this study by biomethane potential experiments that were conducted on organic waste samples with different particle sizes. The results of these experiments were used to calibrate and validate the mathematical model, resulting in a good agreement between the simulated and observed data for any investigated particle size of the solid waste. This study confirms the strength of the proposed model and calibration procedure, which can thus be used to assess the treatment efficiency and predict the methane production of full-scale digesters.

  7. STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies.

    PubMed

    Hepburn, Iain; Chen, Weiliang; Wils, Stefan; De Schutter, Erik

    2012-05-10

    Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. We describe STEPS, a stochastic reaction-diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction-diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. STEPS simulates models of cellular reaction-diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/
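
    For readers new to the underlying method: STEPS builds on a composition-and-rejection variant of the Gillespie SSA, and the plain direct method it descends from fits in a few lines. A minimal sketch for a reversible reaction A + B <-> C with hypothetical rate constants (illustrative only, not the STEPS engine):

        import numpy as np

        def gillespie_direct(x0, stoich, rate_fns, t_end, seed=0):
            """Direct-method SSA: exponential waiting times, propensity-
            weighted choice of which reaction fires."""
            rng = np.random.default_rng(seed)
            t, x = 0.0, np.array(x0, dtype=int)
            traj = [(0.0, tuple(x))]
            while t < t_end:
                props = np.array([f(x) for f in rate_fns])
                total = props.sum()
                if total == 0.0:
                    break
                t += rng.exponential(1.0 / total)       # time to next event
                j = rng.choice(len(rate_fns), p=props / total)
                x = x + stoich[j]
                traj.append((t, tuple(x)))
            return traj

        traj = gillespie_direct(
            x0=[100, 80, 0],
            stoich=np.array([[-1, -1, 1], [1, 1, -1]]),
            rate_fns=[lambda x: 0.001 * x[0] * x[1],    # A + B -> C
                      lambda x: 0.05 * x[2]],           # C -> A + B
            t_end=10.0,
        )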

  9. Visually Lossless JPEG 2000 for Remote Image Browsing

    PubMed Central

    Oh, Han; Bilgin, Ali; Marcellin, Michael

    2017-01-01

    Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
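
    The role of the step size Δ can be seen in the Part 1 scalar dead-zone quantizer. A short sketch with made-up coefficients; the resolution-dependent choice of Δ is exactly what the proposed codestream carries:

        import numpy as np

        def deadzone_quantize(coeffs, delta):
            """JPEG2000-style dead-zone quantizer: q = sign(c)*floor(|c|/delta)."""
            c = np.asarray(coeffs, dtype=float)
            return np.sign(c) * np.floor(np.abs(c) / delta)

        def dequantize(q, delta, r=0.5):
            """Mid-bin reconstruction of quantizer indices (r = 0.5)."""
            q = np.asarray(q, dtype=float)
            return np.sign(q) * (np.abs(q) + r) * delta * (q != 0)

        subband = np.array([-7.3, -0.4, 0.2, 3.9, 12.8])
        q = deadzone_quantize(subband, delta=2.0)  # larger delta at low resolution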

  10. Multi-passes warm rolling of AZ31 magnesium alloy, effect on evaluation of texture, microstructure, grain size and hardness

    NASA Astrophysics Data System (ADS)

    Kamran, J.; Hasan, B. A.; Tariq, N. H.; Izhar, S.; Sarwar, M.

    2014-06-01

    In this study the effect of multi-pass warm rolling of AZ31 magnesium alloy on the texture, microstructure, grain size variation and hardness of an as-cast sample (A) and two rolled samples (B and C), taken from different locations of the as-cast ingot, was investigated. The purpose was to enhance the formability of AZ31 alloy in order to aid manufacturability. It was observed that multi-pass warm rolling (250 °C to 350 °C) of samples B and C, with initial thicknesses of 7.76 mm and 7.73 mm, was successfully achieved up to 85% reduction without any edge or surface cracks in ten steps with a total of 26 passes. Steps 1 to 4 consisted of 5, 2, 11 and 3 passes, respectively; the remaining steps 5 to 10 were single-pass rolls. In each discrete step a fixed roll gap was used such that the cumulative true strain increased very slowly from 0.0067 in the first pass to 0.7118 in the 26th pass. Both samples B and C showed very similar behavior after the 26th pass and were successfully rolled up to 85% thickness reduction. However, during the 10th step (27th pass), at a true strain value of 0.772, sample B experienced very severe surface as well as edge cracks. Sample C was therefore not rolled for the 10th step and was retained after 26 passes. Both samples were studied in terms of their basal texture, microstructure, grain size and hardness. Sample C showed an equiaxed grain structure after 85% total reduction, which may be due to the effective involvement of dynamic recrystallization (DRX) that led to the formation of these grains with relatively low misorientations with respect to the parent as-cast grains. Sample B, on the other hand, showed a microstructure in which all the grains were elongated along the rolling direction (RD) after 90% total reduction, and DRX could not effectively play its role due to heavy strain and the lack of plastic deformation systems. The microstructure of the as-cast sample showed a near-random texture (mrd 4.3), with an average grain size of 44 μm and a micro-hardness of 52 Hv. The grain sizes of samples B and C were 14 μm and 27 μm, respectively, and the mrd intensities of the basal texture were 5.34 and 5.46, respectively. The hardness values of samples B and C came out to be 91 and 66 Hv, respectively, due to the reduction in grain size, following the well-known Hall-Petch relationship.

  11. Social Networks of Lesbian, Gay, Bisexual, and Transgender Older Adults

    PubMed Central

    Erosheva, Elena A.; Kim, Hyun-Jun; Emlet, Charles; Fredriksen-Goldsen, Karen I.

    2015-01-01

    Purpose This study examines global social networks—including friendship, support, and acquaintance networks—of lesbian, gay, bisexual, and transgender (LGBT) older adults. Design and Methods Utilizing data from a large community-based study, we employ multiple regression analyses to examine correlates of social network size and diversity. Results Controlling for background characteristics, network size was positively associated with being female, transgender identity, employment, higher income, having a partner or a child, identity disclosure to a neighbor, engagement in religious activities, and service use. Controlling in addition for network size, network diversity was positively associated with younger age, being female, transgender identity, identity disclosure to a friend, religious activity, and service use. Implications According to social capital theory, social networks provide a vehicle for social resources that can be beneficial for successful aging and well-being. This study is a first step toward understanding the correlates of social network size and diversity among LGBT older adults. PMID:25882129

  12. Correlation Equation of Fault Size, Moment Magnitude, and Height of Tsunami Case Study: Historical Tsunami Database in Sulawesi

    NASA Astrophysics Data System (ADS)

    Julius, Musa, Admiral; Pribadi, Sugeng; Muzli, Muzli

    2018-03-01

    Sulawesi, one of the biggest islands in Indonesia, is located at the convergence of two macro plates, Eurasia and Pacific. NOAA and the Novosibirsk Tsunami Laboratory show more than 20 tsunamis recorded in Sulawesi since 1820. Based on these data, the correlation between tsunami and earthquake parameters needs to be determined to verify all events in the past. The complete data on magnitudes, fault sizes and tsunami heights in this study were sourced from the NOAA and Novosibirsk tsunami databases, complemented with the Pacific Tsunami Warning Center (PTWC) catalog. This study aims to find the correlation between moment magnitude, fault size and tsunami height by simple regression. The steps of this research are data collection, processing, and regression analysis. Results show that moment magnitude, fault size and tsunami height are strongly correlated. This analysis is sufficient to confirm the accuracy of the historical tsunami database for Sulawesi from NOAA, the Novosibirsk Tsunami Laboratory and PTWC.
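
    A simple regression of the kind described reduces to a least-squares line fit. The sketch below uses made-up (Mw, height) pairs, not the catalog values, and fits log height against moment magnitude:

        import numpy as np

        # Hypothetical (moment magnitude, tsunami height in m) pairs:
        mw = np.array([6.2, 6.8, 7.1, 7.4, 7.9, 8.1])
        height = np.array([0.3, 0.8, 1.2, 2.0, 4.5, 6.0])

        # Simple linear regression: log10(H) = a*Mw + b
        a, b = np.polyfit(mw, np.log10(height), deg=1)
        r = np.corrcoef(mw, np.log10(height))[0, 1]   # correlation strength
        print(f"log10(H) = {a:.2f}*Mw + {b:.2f}, r = {r:.2f}")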

  13. Multiple stage miniature stepping motor

    DOEpatents

    Niven, William A.; Shikany, S. David; Shira, Michael L.

    1981-01-01

    A stepping motor comprising a plurality of stages which may be selectively activated to effect stepping movement of the motor, and which are mounted along a common rotor shaft to achieve considerable reduction in motor size and minimum diameter, whereby sequential activation of the stages results in successive rotor steps with direction being determined by the particular activating sequence followed.

  14. Capillary fluctuations of surface steps: An atomistic simulation study for the model Cu(111) system

    NASA Astrophysics Data System (ADS)

    Freitas, Rodrigo; Frolov, Timofey; Asta, Mark

    2017-10-01

    Molecular dynamics (MD) simulations are employed to investigate the capillary fluctuations of steps on the surface of a model metal system. The fluctuation spectrum, characterized by the wave number (k) dependence of the mean squared capillary-wave amplitudes and associated relaxation times, is calculated for 〈110〉 and 〈112〉 steps on the {111} surface of elemental copper near the melting temperature of the classical potential model considered. Step stiffnesses are derived from the MD results, yielding values from the largest system sizes of (37 ± 1) meV/Å for the different line orientations, implying that the stiffness is isotropic within the statistical precision of the calculations. The fluctuation lifetimes are found to vary by approximately four orders of magnitude over the range of wave numbers investigated, displaying a k dependence consistent with kinetics governed by step-edge mediated diffusion. The values for step stiffness derived from these simulations are compared to step free energies for the same system and temperature obtained in a recent MD-based thermodynamic-integration (TI) study [Freitas, Frolov, and Asta, Phys. Rev. B 95, 155444 (2017), 10.1103/PhysRevB.95.155444]. Results from the capillary-fluctuation analysis and TI calculations yield statistically significant differences that are discussed within the framework of statistical-mechanical theories for configurational contributions to step free energies.
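
    For context (a textbook relation, not a result of this paper): for a step of projected length L with Fourier displacement amplitudes x_k, equipartition of the capillary modes gives

        \langle |x_k|^2 \rangle = \frac{k_B T}{L \, \tilde{\beta} \, k^2},
        \qquad \tau(k) \propto k^{-4} \ \text{(step-edge mediated diffusion)},

    so fitting the measured mean squared amplitudes against 1/k^2 yields the stiffness \tilde{\beta}, and the relaxation-time exponent identifies the mass-transport mechanism.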

  15. The ELPA library: scalable parallel eigenvalue solutions for electronic structure theory and computational science.

    PubMed

    Marek, A; Blum, V; Johanni, R; Havu, V; Lang, B; Auckenthaler, T; Heinecke, A; Bungartz, H-J; Lederer, H

    2014-05-28

    Obtaining the eigenvalues and eigenvectors of large matrices is a key problem in electronic structure theory and many other areas of computational science. The computational effort formally scales as O(N^3) with the size of the investigated problem, N (e.g., the electron count in electronic structure theory), and thus often defines the system size limit that practical calculations cannot overcome. In many cases, more than just a small fraction of the possible eigenvalue/eigenvector pairs is needed, so that iterative solution strategies that focus only on a few eigenvalues become ineffective. Likewise, it is not always desirable or practical to circumvent the eigenvalue solution entirely. We here review some current developments regarding dense eigenvalue solvers and then focus on the Eigenvalue soLvers for Petascale Applications (ELPA) library, which facilitates the efficient algebraic solution of symmetric and Hermitian eigenvalue problems for dense matrices that have real-valued and complex-valued matrix entries, respectively, on parallel computer platforms. ELPA addresses standard as well as generalized eigenvalue problems, relying on the well-documented matrix layout of the Scalable Linear Algebra PACKage (ScaLAPACK) library but replacing all actual parallel solution steps with subroutines of its own. For these steps, ELPA significantly outperforms the corresponding ScaLAPACK routines and proprietary libraries that implement the ScaLAPACK interface (e.g., Intel's MKL). The most time-critical step is the reduction of the matrix to tridiagonal form and the corresponding backtransformation of the eigenvectors. ELPA offers both a one-step tridiagonalization (successive Householder transformations) and a two-step transformation that is more efficient especially towards larger matrices and larger numbers of CPU cores. ELPA is based on the MPI standard, with an early hybrid MPI-OpenMP implementation available as well. Scalability beyond 10,000 CPU cores for problem sizes arising in the field of electronic structure theory is demonstrated for current high-performance computer architectures such as Cray or Intel/Infiniband. For a matrix of dimension 260,000, scalability up to 295,000 CPU cores has been shown on BlueGene/P.
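
    ELPA itself targets massively parallel machines, but the problems it solves can be stated with any LAPACK-backed dense solver. A small serial sketch (SciPy here, purely illustrative, with random matrices):

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(0)
        n = 500
        a = rng.normal(size=(n, n)); a = (a + a.T) / 2            # symmetric A
        b = rng.normal(size=(n, n)); b = b @ b.T + n * np.eye(n)  # SPD B

        w, v = eigh(a)        # standard problem:     A v = w v
        wg, vg = eigh(a, b)   # generalized problem:  A v = w B v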

  16. SEMICONDUCTOR TECHNOLOGY: An efficient dose-compensation method for proximity effect correction

    NASA Astrophysics Data System (ADS)

    Ying, Wang; Weihua, Han; Xiang, Yang; Renping, Zhang; Yang, Zhang; Fuhua, Yang

    2010-08-01

    A novel, simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while the other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. This method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that the compensated dose factor is affected only by the nearest neighbors, for simplicity. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved.
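
    A minimal sketch of the two stated assumptions (linear dose-diameter relation; nearest-neighbor-only correction); every coefficient below is hypothetical, not taken from the paper:

        import numpy as np

        def compensated_dose(d_target, base_dose, slope, n_neighbors, coupling):
            """Dose factor for a hole of target diameter d_target under a
            linear dose-diameter relation, reduced by the proximity exposure
            contributed by nearest neighbors only."""
            dose = base_dose + slope * d_target      # isolated-feature dose
            return dose * (1.0 - n_neighbors * coupling)

        # Hole in a hexagonal photonic-crystal lattice, 6 nearest neighbors:
        dose = compensated_dose(d_target=180.0, base_dose=0.6, slope=0.002,
                                n_neighbors=6, coupling=0.015)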

  17. Phase-contrast x-ray computed tomography for biological imaging

    NASA Astrophysics Data System (ADS)

    Momose, Atsushi; Takeda, Tohoru; Itai, Yuji

    1997-10-01

    We have shown so far that 3D structures in biological soft tissues, such as cancer, can be revealed by phase-contrast x-ray computed tomography using an x-ray interferometer. As a next step, we aim at applications of this technique to in vivo observation, including radiographic applications. For this purpose, the field of view should be larger than a few centimeters; therefore, a larger x-ray interferometer should be used with x-rays of higher energy. We have evaluated the optimal x-ray energy from the aspect of dose as a function of sample size. Moreover, the spatial resolution required of an image sensor is discussed as a function of x-ray energy and sample size, based on a requirement in the analysis of interference fringes.

  18. Vitrification of zona-free rabbit expanded or hatching blastocysts: a possible model for human blastocysts.

    PubMed

    Cervera, R P; Garcia-Ximénez, F

    2003-10-01

    The purpose of this study was to test the effectiveness of one two-step (A) and two one-step (B1 and B2) vitrification procedures on denuded expanded or hatching rabbit blastocysts held in standard sealed plastic straws, as a possible model for human blastocysts. The effect of blastocyst size was also studied on the basis of three size categories (I: diameter <200 μm; II: diameter 200-299 μm; III: diameter ≥300 μm). Rabbit expanded or hatching blastocysts were vitrified at day 4 or 5. Before vitrification, the zona pellucida was removed using acidic phosphate buffered saline. For the two-step procedure, prior to vitrification, blastocysts were pre-equilibrated in a solution containing 10% dimethyl sulphoxide (DMSO) and 10% ethylene glycol (EG) for 1 min. Different final vitrification solutions were compared: 20% DMSO and 20% EG with (A and B1) or without (B2) 0.5 mol/l sucrose. Of 198 vitrified blastocysts, 181 (91%) survived, regardless of the vitrification procedure applied. Vitrification procedure A showed significantly higher re-expansion (88%), attachment (86%) and trophectoderm outgrowth (80%) rates than the two one-step vitrification procedures, B1 and B2 (46 and 21%, 20 and 33%, and 18 and 23%, respectively). After warming, blastocysts of greater size (II and III) showed significantly higher attachment (54 and 64%) and trophectoderm outgrowth (44 and 58%) rates than smaller blastocysts (I, attachment: 29%; trophectoderm outgrowth: 25%). These results demonstrate that denuded expanded or hatching rabbit blastocysts of greater size can be satisfactorily vitrified by use of a two-step procedure. The similarity to the vitrification solutions used in humans could make it feasible to test such a procedure on human denuded blastocysts of different sizes.

  19. Aircraft conceptual design - an adaptable parametric sizing methodology

    NASA Astrophysics Data System (ADS)

    Coleman, Gary John, Jr.

    Aerospace is a maturing industry with successful and refined baselines which work well for traditional baseline missions, markets and technologies. However, when new markets (space tourism), new constraints (environmental) or new technologies (composites, natural laminar flow) emerge, the conventional solution is not necessarily best for the new situation. This begs the question: how does a design team quickly screen and compare novel solutions to conventional solutions for new aerospace challenges? The answer is rapid and flexible conceptual design parametric sizing. In the product design life-cycle, parametric sizing is the first step in screening the total vehicle in terms of mission, configuration and technology to quickly assess first-order design and mission sensitivities. During this phase, various missions and technologies are assessed, and the designer identifies design solutions of concepts and configurations that meet combinations of mission and technology. This research undertaking contributes to the state-of-the-art in aircraft parametric sizing through (1) development of a dedicated conceptual design process and disciplinary methods library, (2) development of a novel and robust parametric sizing process based on 'best-practice' approaches found in the process and disciplinary methods library, and (3) application of the parametric sizing process to a variety of design missions (transonic, supersonic and hypersonic transports), different configurations (tail-aft, blended wing body, strut-braced wing, hypersonic blended bodies, etc.), and different technologies (composites, natural laminar flow, thrust-vectored control, etc.), in order to demonstrate the robustness of the methodology and unearth first-order design sensitivities to current and future aerospace design problems. This research undertaking demonstrates the importance of this early design step in selecting the correct combination of mission, technologies and configuration to meet current aerospace challenges. The overarching goal is to avoid the recurring situation of optimizing an already ill-fated solution.

  20. The accuracy of matrix population model projections for coniferous trees in the Sierra Nevada, California

    USGS Publications Warehouse

    van Mantgem, P.J.; Stephenson, N.L.

    2005-01-01

    (1) We assess the use of simple, size-based matrix population models for projecting population trends for six coniferous tree species in the Sierra Nevada, California. We used demographic data from 16 673 trees in 15 permanent plots to create 17 separate time-invariant, density-independent population projection models, and determined differences between trends projected from initial surveys with a 5-year interval and observed data during two subsequent 5-year time steps. (2) We detected departures from the assumptions of the matrix modelling approach in terms of strong growth autocorrelations. We also found evidence of observation errors for measurements of tree growth and, to a more limited degree, recruitment. Loglinear analysis provided evidence of significant temporal variation in demographic rates for only two of the 17 populations. (3) Total population sizes were strongly predicted by model projections, although population dynamics were dominated by carryover from the previous 5-year time step (i.e. there were few cases of recruitment or death). Fractional changes to overall population sizes were less well predicted. Compared with a null model and a simple demographic model lacking size structure, matrix model projections were better able to predict total population sizes, although the differences were not statistically significant. Matrix model projections were also able to predict short-term rates of survival, growth and recruitment. Mortality frequencies were not well predicted. (4) Our results suggest that simple size-structured models can accurately project future short-term changes for some tree populations. However, not all populations were well predicted and these simple models would probably become more inaccurate over longer projection intervals. The predictive ability of these models would also be limited by disturbance or other events that destabilize demographic rates. © 2005 British Ecological Society.
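
    As a concrete illustration of such a time-invariant, density-independent, size-based projection, the following sketch steps a three-class Lefkovitch matrix forward over two 5-year intervals. The stage classes, vital rates, and initial census are invented stand-ins, not the study's data.

```python
# Minimal sketch of a size-based matrix population projection of the kind
# evaluated above. All rates and counts are illustrative.
import numpy as np

# Rows/columns: small, medium, large size classes.
# A[i, j] = per-capita contribution of class j to class i per 5-yr step.
A = np.array([
    [0.85, 0.00, 0.40],   # stasis of small + recruitment from large
    [0.10, 0.90, 0.00],   # growth small -> medium, stasis of medium
    [0.00, 0.05, 0.95],   # growth medium -> large, stasis of large
])

n = np.array([500.0, 300.0, 200.0])   # initial census by size class
for step in range(2):                  # two subsequent 5-year time steps
    n = A @ n
    print(f"step {step + 1}: total = {n.sum():.0f}, by class = {n.round(0)}")

# The asymptotic growth rate lambda is the dominant eigenvalue of A.
lam = max(np.linalg.eigvals(A).real)
print(f"projected lambda = {lam:.3f}")
```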

  1. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
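
    The core bookkeeping behind LTS can be sketched compactly: each element is assigned a power-of-two refinement level from its own CFL limit, so that small elements sub-step while large ones advance with the global step. The element sizes, wave speed, and Courant number below are illustrative; the paper's multilevel LTS-Newmark scheme itself involves far more than this.

```python
# Hedged sketch of local time-step level assignment from per-element CFL
# limits. An element at level p advances with dt_local = dt_global / 2**p.
import numpy as np

def lts_levels(h, c, courant=0.5):
    """h: per-element sizes, c: wave speed -> (global dt, per-element level)."""
    dt_elem = courant * h / c                 # per-element CFL limit
    dt_global = dt_elem.max()                 # coarsest elements set the pace
    p = np.ceil(np.log2(dt_global / dt_elem)).astype(int)
    return dt_global, p

h = np.array([10.0, 10.0, 1.0, 0.08])         # one locally refined element
dt, p = lts_levels(h, c=3000.0)
print(dt, p)   # p = [0 0 4 7]: the tiny element sub-steps 2**7 times
```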

  2. Efficiency and flexibility using implicit methods within atmosphere dycores

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.

    2016-12-01

    A suite of explicit and implicit methods is evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of using explicit versus implicit, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about the same efficiency as an explicit Runge-Kutta method, without including a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; there is a speed-up for the residual relative to a CPU, but overwhelming transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional based on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability. We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion about the benefits of a time stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.

  3. A step by step selection method for the location and the size of a waste-to-energy facility targeting the maximum output energy and minimization of gate fee.

    PubMed

    Kyriakis, Efstathios; Psomopoulos, Constantinos; Kokkotis, Panagiotis; Bourtsalas, Athanasios; Themelis, Nikolaos

    2017-06-23

    This study attempts the development of an algorithm presenting a step-by-step selection method for the location and the size of a waste-to-energy facility targeting the maximum output energy, also considering what is in many cases the basic obstacle, the gate fee. Various parameters were identified and evaluated in order to formulate the proposed decision-making method in the form of an algorithm. The principal simulation input is the amount of municipal solid waste (MSW) available for incineration, which, along with its net calorific value, is the most important factor for the feasibility of the plant. Moreover, the research is focused both on the parameters that could increase the energy production and on those that affect the R1 energy efficiency factor. Estimation of the final gate fee is achieved through the economic analysis of the entire project by investigating both the expenses and the revenues which are expected according to the selected site and outputs of the facility. At this point, a number of common revenue methods were included in the algorithm. The developed algorithm has been validated using three case studies in Greece: Athens, Thessaloniki, and Central Greece, where the cities of Larisa and Volos have been selected for the application of the proposed decision-making tool. These case studies were selected based on a previous publication by two of the authors, in which these areas were examined. Results reveal that the development of a "solid" methodological approach to selecting the site and the size of a waste-to-energy (WtE) facility is feasible. However, the maximization of the energy efficiency factor R1 requires high utilization factors, while the minimization of the final gate fee requires high R1 and high metals recovery from the bottom ash, as well as economic exploitation of recovered raw materials, if any.

  4. Method and apparatus for sizing and separating warp yarns using acoustical energy

    DOEpatents

    Sheen, S.H.; Chien, H.T.; Raptis, A.C.; Kupperman, D.S.

    1998-05-19

    A slashing process is disclosed for preparing warp yarns for weaving operations including the steps of sizing and/or desizing the yarns in an acoustic resonance box and separating the yarns with a leasing apparatus comprised of a set of acoustically agitated lease rods. The sizing step includes immersing the yarns in a size solution contained in an acoustic resonance box. Acoustic transducers are positioned against the exterior of the box for generating an acoustic pressure field within the size solution. Ultrasonic waves that result from the acoustic pressure field continuously agitate the size solution to effect greater mixing and more uniform application and penetration of the size onto the yarns. The sized yarns are then separated by passing the warp yarns over and under lease rods. Electroacoustic transducers generate acoustic waves along the longitudinal axis of the lease rods, creating a shearing motion on the surface of the rods for splitting the yarns. 2 figs.

  5. Stepped Care Versus Direct Face-to-Face Cognitive Behavior Therapy for Social Anxiety Disorder and Panic Disorder: A Randomized Effectiveness Trial.

    PubMed

    Nordgreen, Tine; Haug, Thomas; Öst, Lars-Göran; Andersson, Gerhard; Carlbring, Per; Kvale, Gerd; Tangen, Tone; Heiervang, Einar; Havik, Odd E

    2016-03-01

    The aim of this study was to assess the effectiveness of a cognitive behavioral therapy (CBT) stepped care model (psychoeducation, guided Internet treatment, and face-to-face CBT) compared with direct face-to-face (FtF) CBT. Patients with panic disorder or social anxiety disorder were randomized to either stepped care (n=85) or direct FtF CBT (n=88). Recovery was defined as meeting two of the following three criteria: loss of diagnosis, below cut-off for self-reported symptoms, and functional improvement. No significant differences in intention-to-treat recovery rates were identified between stepped care (40.0%) and direct FtF CBT (43.2%). The majority of the patients who recovered in the stepped care did so at the less therapist-demanding steps (26/34, 76.5%). Moderate to large within-groups effect sizes were identified at posttreatment and 1-year follow-up. The attrition rates were high: 41.2% in the stepped care condition and 27.3% in the direct FtF CBT condition. These findings indicate that the outcome of a stepped care model for anxiety disorders is comparable to that of direct FtF CBT. The rates of improvement at the two less therapist-demanding steps indicate that stepped care models might be useful for increasing patients' access to evidence-based psychological treatments for anxiety disorders. However, attrition in the stepped care condition was high, and research regarding the factors that can improve adherence should be prioritized. Copyright © 2015. Published by Elsevier Ltd.

  6. Optimization of Surface Roughness and Wall Thickness in Dieless Incremental Forming Of Aluminum Sheet Using Taguchi

    NASA Astrophysics Data System (ADS)

    Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir

    2018-03-01

    Incremental sheet forming is a versatile sheet metal forming process in which a sheet metal is formed into its final shape by a series of localized deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability, and forming speed. This project focuses on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate, and step size. In addition, the effects of wall angle, feed rate, and step size on the surface roughness and thickness uniformity of aluminium sheet were investigated. From the results, it was observed that surface roughness and thickness uniformity varied inversely due to the formation of surface waviness. Increasing the feed rate and decreasing the step size produce a lower surface roughness, while a uniform thickness reduction was obtained by reducing the wall angle and step size. By using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of aluminium sheet were determined. The findings of this project help to reduce the time needed to optimise the surface roughness and thickness uniformity in incremental sheet forming.

  7. Single cardiac ventricular myosins are autonomous motors

    PubMed Central

    Wang, Yihua; Yuan, Chen-Ching; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta

    2018-01-01

    Myosin transduces ATP free energy into mechanical work in muscle. Cardiac muscle has dynamically wide-ranging power demands on the motor as the muscle changes modes in a heartbeat from relaxation, via auxotonic shortening, to isometric contraction. The cardiac power output modulation mechanism is explored in vitro by assessing single cardiac myosin step-size selection versus load. Transgenic mice express human ventricular essential light chain (ELC) in wild-type (WT), or hypertrophic cardiomyopathy-linked mutant forms, A57G or E143K, in a background of mouse α-cardiac myosin heavy chain. Ensemble motility and single myosin mechanical characteristics are consistent with an A57G that impairs ELC N-terminus actin binding and an E143K that impairs lever-arm stability, while both species down-shift average step-size with increasing load. Cardiac myosin in vivo down-shifts velocity/force ratio with increasing load by changed unitary step-size selections. Here, the loaded in vitro single myosin assay indicates quantitative complementarity with the in vivo mechanism. Both have two embedded regulatory transitions, one inhibiting ADP release and a second novel mechanism inhibiting actin detachment via strain on the actin-bound ELC N-terminus. Competing regulators filter unitary step-size selection to control force-velocity modulation without myosin integration into muscle. Cardiac myosin is muscle in a molecule. PMID:29669825

  8. Effect of reaction-step-size noise on the switching dynamics of stochastic populations

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Heller-Algazi, Metar; Assaf, Michael

    2016-05-01

    In genetic circuits, when the messenger RNA lifetime is short compared to the cell cycle, proteins are produced in geometrically distributed bursts, which greatly affects the cellular switching dynamics between different metastable phenotypic states. Motivated by this scenario, we study a general problem of switching or escape in stochastic populations, where influx of particles occurs in groups or bursts, sampled from an arbitrary distribution. The fact that the step size of the influx reaction is a priori unknown and, in general, may fluctuate in time with a given correlation time and statistics, introduces an additional nondemographic reaction-step-size noise into the system. Employing the probability-generating function technique in conjunction with Hamiltonian formulation, we are able to map the problem in the leading order onto solving a stationary Hamilton-Jacobi equation. We show that compared to the "usual case" of single-step influx, bursty influx exponentially decreases the population's mean escape time from its long-lived metastable state. In particular, close to bifurcation we find a simple analytical expression for the mean escape time which solely depends on the mean and variance of the burst-size distribution. Our results are demonstrated on several realistic distributions and compare well with numerical Monte Carlo simulations.
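
    The effect of burst-size noise on escape is easy to reproduce with a toy Gillespie simulation. The sketch below compares mean escape (extinction) times for a logistic-type population whose influx arrives either in geometric bursts or in fixed-size steps of the same mean; the specific rates and model are illustrative stand-ins, not the paper's general formulation.

```python
# Toy Gillespie comparison of bursty vs fixed-size influx in a population
# with a metastable state near N/2 and an absorbing state at n = 0.
# Rates: n -> n+k at lam*n (k geometric or fixed, mean kmean);
#        n -> n-1 at n*(1 + lam*kmean*n/N). All parameters illustrative.
import numpy as np

rng = np.random.default_rng(1)

def escape_time(N=16, lam=1.0, kmean=2, bursty=True, runs=100):
    times = []
    for _ in range(runs):
        n, t = N // 2, 0.0
        while n > 0:
            b, d = lam * n, n * (1 + lam * kmean * n / N)
            t += rng.exponential(1.0 / (b + d))     # time to next event
            if rng.random() < b / (b + d):          # influx event
                n += rng.geometric(1.0 / kmean) if bursty else kmean
            else:                                   # death event
                n -= 1
        times.append(t)
    return np.mean(times)

# Bursty influx should escape markedly sooner, as the paper predicts.
print(escape_time(bursty=True), escape_time(bursty=False))
```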

  9. Formation of HDL-like complexes from apolipoprotein A-I(M) and DMPC.

    PubMed

    Suurkuusk, M; Singh, S K

    2000-01-20

    Conditions for the preparation of reconstituted high density lipoproteins (HDLs) by incubation of the synthetic lipid dimyristoylphosphatidylcholine (DMPC) and recombinant apolipoprotein A-I(M) have been investigated as a function of the lipid-to-protein incubation ratio, the incubation temperature and the lipid form (multilamellar (MLV) or small unilamellar (SUV) vesicles). The size distributions of the resultant lipid-protein complex particles from various incubations have been evaluated by native gel electrophoresis. Structural changes of the protein after incorporation into these complex particles have been estimated by CD. Thermal characteristics of the particles have been examined by DSC and correlated with the CD results. Titration calorimetry has been used to obtain interaction parameters based on a simplified binding model. It is hypothesized that the major enthalpic step in the production of rHDLs is the primary association step between protein and lipid vesicles. It has been shown that by raising the temperature and the incubation ratio, the formation of rHDL particles can be directed towards a smaller size and a narrower size distribution. The results have been described on the basis of a model in which the formation of discoidal particles requires prior saturation of the vesicle surface area by adsorbed protein, thus explaining the differences between particles formed from MLVs and SUVs.

  10. An extension of the Saltykov method to quantify 3D grain size distributions in mylonites

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco A.; Llana-Fúnez, Sergio

    2016-12-01

    The estimation of 3D grain size distributions (GSDs) in mylonites is key to understanding the rheological properties of crystalline aggregates and to constraining dynamic recrystallization models. This paper investigates whether a common stereological method, the Saltykov method, is appropriate for the study of GSDs in mylonites. In addition, we present a new stereological method, named the two-step method, which estimates a lognormal probability density function describing the 3D GSD. Both methods are tested for reproducibility and accuracy using natural and synthetic data sets. The main conclusion is that both methods are accurate and simple enough to be systematically used in recrystallized aggregates with near-equant grains. The Saltykov method is particularly suitable for estimating the volume percentage of particular grain-size fractions with an absolute uncertainty of ±5 in the estimates. The two-step method is suitable for quantifying the shape of the actual 3D GSD in recrystallized rocks using a single value, the multiplicative standard deviation (MSD) parameter, and providing a precision in the estimate typically better than 5%. The novel method provides a MSD value in recrystallized quartz that differs from previous estimates based on apparent 2D GSDs, highlighting the inconvenience of using apparent GSDs for such tasks.
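
    The second step of the two-step method, fitting a lognormal and summarizing its shape with the multiplicative standard deviation (MSD), can be sketched in a few lines. Fitting apparent diameters directly (i.e., skipping a Saltykov-style unfolding first) and all numbers below are simplifications for illustration only.

```python
# Hedged sketch: describe a grain size distribution by a single lognormal,
# summarized by MSD = exp(std of log diameters). Synthetic data stand in
# for measured grain diameters.
import numpy as np
from scipy import stats

diam = np.random.default_rng(0).lognormal(mean=3.2, sigma=0.4, size=500)

shape, loc, scale = stats.lognorm.fit(diam, floc=0)  # lognormal MLE
median = scale              # geometric mean / median diameter
msd = np.exp(shape)         # multiplicative standard deviation
print(f"median = {median:.1f}, MSD = {msd:.2f}")   # MSD ~ e**0.4 ~ 1.49
```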

  11. Sample Size Calculations for Micro-randomized Trials in mHealth

    PubMed Central

    Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A.

    2015-01-01

    The use and development of mobile interventions are experiencing rapid growth. In “just-in-time” mobile interventions, treatments are provided via a mobile device and they are intended to help an individual make healthy decisions “in the moment,” and thus have a proximal, near future impact. Currently the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a “micro-randomized” trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the 100s or 1000s of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. PMID:26707831

  12. X-ray physico-chemical imaging during activation of cobalt-based Fischer-Tropsch synthesis catalysts

    NASA Astrophysics Data System (ADS)

    Beale, Andrew M.; Jacques, Simon D. M.; Di Michiel, Marco; Mosselmans, J. Frederick W.; Price, Stephen W. T.; Senecal, Pierre; Vamvakeros, Antonios; Paterson, James

    2017-11-01

    The imaging of catalysts and other functional materials under reaction conditions has advanced significantly in recent years. The combination of the computed tomography (CT) approach with methods such as X-ray diffraction (XRD), X-ray fluorescence (XRF) and X-ray absorption near-edge spectroscopy (XANES) now enables local chemical and physical state information to be extracted from within the interiors of intact materials which are, by accident or design, inhomogeneous. In this work, we follow the phase evolution during the initial reduction step(s) to form Co metal, for Co-containing particles employed as Fischer-Tropsch synthesis (FTS) catalysts; firstly, working at small length scales (approx. micrometre spatial resolution), a combination of sample size and density allows for transmission of comparatively low energy signals enabling the recording of `multimodal' tomography, i.e. simultaneous XRF-CT, XANES-CT and XRD-CT. Subsequently, we show high-energy XRD-CT can be employed to reveal extent of reduction and uniformity of crystallite size on millimetre-sized TiO2 trilobes. In both studies, the CoO phase is seen to persist or else evolve under particular operating conditions and we speculate as to why this is observed. This article is part of a discussion meeting issue 'Providing sustainable catalytic solutions for a rapidly changing world'.

  13. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared two commonly used numerical methods for the solution of the Navier-Stokes equations in terms of efficiency and accuracy. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can have the luxury of taking a large temporal integration step at the expense of higher memory requirements and larger operation counts per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them for the computation of a two-dimensional driven cavity flow with Reynolds numbers of 100 and 1000, respectively. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change in the dependent variables between two consecutive time steps had fallen below 10^-5.

  14. Intermediate surface structure between step bunching and step flow in SrRuO3 thin film growth

    NASA Astrophysics Data System (ADS)

    Bertino, Giulia; Gura, Anna; Dawber, Matthew

    We performed a systematic study of SrRuO3 (SRO) thin films grown on TiO2-terminated SrTiO3 substrates using off-axis magnetron sputtering. We investigated the step bunching formation and the evolution of the SRO film morphology by varying the step size of the substrate, the growth temperature and the film thickness. The thin films were characterized using Atomic Force Microscopy and X-Ray Diffraction. We identified single and multiple step bunching and step flow growth regimes as a function of the growth parameters. We also clearly observe a stronger influence of the step size of the substrate on the evolution of the SRO film surface with respect to the other growth parameters. Remarkably, we observe the formation of a smooth, regular and uniform "fish skin" structure at the transition between one regime and another. We believe that the fish skin structure results from the merging of 2D flat islands predicted by previous models. The direct observation of this transition structure allows us to better understand how and when step bunching develops in the growth of SrRuO3 thin films.

  15. Data-Driven Simulation-Enhanced Optimization of People-Based Print Production Service

    NASA Astrophysics Data System (ADS)

    Rai, Sudhendu

    This paper describes a systematic six-step data-driven simulation-based methodology for optimizing people-based service systems on a large distributed scale that exhibit high variety and variability. The methodology is exemplified through its application within the printing services industry, where it has been successfully deployed by Xerox Corporation across small, mid-sized and large print shops, generating over $250 million in profits across the customer value chain. Each step of the methodology is described in detail: co-development and testing of innovative concepts in partnership with customers; development of software and hardware tools to implement the innovative concepts; establishment of work processes and practices for customer engagement and service implementation; creation of training and infrastructure for large-scale deployment; integration of the innovative offering within the framework of existing corporate offerings; and, lastly, monitoring and deployment of the financial and operational metrics for estimating the return on investment and continually renewing the offering.

  16. Privacy protection in surveillance systems based on JPEG DCT baseline compression and spectral domain watermarking

    NASA Astrophysics Data System (ADS)

    Sablik, Thomas; Velten, Jörg; Kummert, Anton

    2015-03-01

    A novel system for automatic privacy protection in digital media based on spectral domain watermarking and JPEG compression is described in the present paper. In a first step, private areas are detected; a detection method is presented for this purpose. The implemented method uses Haar cascades to detect faces. Integral images are used to speed up the calculations and the detection. Multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared with regard to their robustness. Moreover, the performance of the method on tampered images is presented.

  17. A seminested PCR assay for detection and typing of human papillomavirus based on E1 gene sequences.

    PubMed

    Cavalcante, Gustavo Henrique O; de Araújo, Josélio M G; Fernandes, José Veríssimo; Lanza, Daniel C F

    2018-05-01

    HPV infection is considered one of the leading causes of cervical cancer in the world. To date, more than 180 types of HPV have been described, and viral typing is critical for defining the prognosis of cancer. In this work, a seminested PCR which allows fast and inexpensive detection and typing of HPV is presented. The system is based on the amplification of a variable-length region within the viral gene E1, using three primers that potentially anneal in all HPV genomes. The amplicons produced in the first step can be identified by high-resolution electrophoresis or direct sequencing. The seminested step includes nine specific primers which can be used in multiplex or individual reactions to discriminate the main types of HPV by amplicon size differentiation using agarose electrophoresis, reducing the time spent and the cost per analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Advanced Design Methodology for Robust Aircraft Sizing and Synthesis

    NASA Technical Reports Server (NTRS)

    Mavris, Dimitri N.

    1997-01-01

    Contract efforts are focused on refining the Robust Design Methodology for Conceptual Aircraft Design. Robust Design Simulation (RDS) was developed earlier as a potential solution to the need to do rapid trade-offs while accounting for risk, conflict, and uncertainty. The core of the simulation revolved around Response Surface Equations as approximations of bounded design spaces. An ongoing investigation is concerned with the advantages of using Neural Networks in conceptual design. Thought was also given to the development of a systematic way to choose or create a baseline configuration based on specific mission requirements. An expert system was developed that selects aerodynamics, performance and weights models from several configurations based on the user's mission requirements for a subsonic civil transport. The research has also resulted in a step-by-step illustration of how to use the AMV method for distribution generation and the search for robust design solutions to multivariate constrained problems.

  19. Fast time- and frequency-domain finite-element methods for electromagnetic analysis

    NASA Astrophysics Data System (ADS)

    Lee, Woochan

    Fast electromagnetic analysis in the time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of even the most powerful existing computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of these structural specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes it difficult for an iterative solution to converge quickly. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on the matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix exponential based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM into a symmetric positive definite one. We deduct the non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures that an iterative solution converges in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.

  20. Inferring the demographic history from DNA sequences: An importance sampling approach based on non-homogeneous processes.

    PubMed

    Ait Kaci Azzou, S; Larribe, F; Froda, S

    2016-10-01

    In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non-homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.

  1. Impact of voxel size variation on CBCT-based diagnostic outcome in dentistry: a systematic review.

    PubMed

    Spin-Neto, Rubens; Gotfredsen, Erik; Wenzel, Ann

    2013-08-01

    The objective of this study was to conduct a systematic review of the impact of voxel size in cone beam computed tomography (CBCT)-based image acquisition, retrieving evidence regarding the diagnostic outcome of those images. The MEDLINE bibliographic database was searched from 1950 to June 2012 for reports comparing diverse CBCT voxel sizes. The search strategy was limited to English-language publications using the following combined terms: (voxel or FOV or field of view or resolution) and (CBCT or cone beam CT). The review identified 20 publications that qualitatively or quantitatively assessed the influence of voxel size on CBCT-based diagnostic outcome, and in which the methodology/results comprised at least one of the expected parameters (image acquisition, reconstruction protocols, type of diagnostic task, and presence of a gold standard). The diagnostic tasks assessed in the studies were diverse, including the detection of root fractures, the detection of caries lesions, and the accuracy of 3D surface reconstruction and of bony measurements, among others. From the studies assessed, it is clear that no general protocol can yet be defined for CBCT examination of specific diagnostic tasks in dentistry. Work in this direction is an important step toward defining the utility of CBCT imaging.

  2. Micro-computed tomography characterization of tissue engineering scaffolds: effects of pixel size and rotation step.

    PubMed

    Cengiz, Ibrahim Fatih; Oliveira, Joaquim Miguel; Reis, Rui L

    2017-08-01

    Quantitative assessment of the micro-structure of materials is of key importance in many fields, including tissue engineering, biology, and dentistry. Micro-computed tomography (µ-CT) is an intensively used non-destructive technique. However, acquisition parameters such as pixel size and rotation step may have significant effects on the obtained results. In this study, a set of tissue engineering scaffolds, including examples of natural and synthetic polymers and ceramics, was analyzed. We comprehensively compared the quantitative results of µ-CT characterization using 15 acquisition scenarios that differ in the combination of pixel size and rotation step. The results showed that the acquisition parameters can statistically significantly affect the quantified mean porosity, mean pore size, and mean wall thickness of the scaffolds. The effects are also practically important, since the differences can be as high as 24% in mean porosity on average, and as high as 19.5 h and 166 GB in characterization time and data storage for a sample of relatively small volume. This study showed in a quantitative manner the effects of a wide range of acquisition scenarios on the final data, as well as on the characterization time and data storage per sample. Herein, a clear picture of the effects of pixel size and rotation step on the results is provided, which can be notably useful for refining the practice of µ-CT characterization of scaffolds and economizing the related resources.

  3. In situ formation deposited ZnO nanoparticles on silk fabrics under ultrasound irradiation.

    PubMed

    Khanjani, Somayeh; Morsali, Ali; Joo, Sang W

    2013-03-01

    Deposition of zinc(II) oxide (ZnO) nanoparticles on the surface of silk fabrics was carried out by sequential dipping steps in alternating baths of potassium hydroxide and zinc nitrate under ultrasound irradiation. This coating involves in situ generation and deposition of ZnO in one step. The effects of ultrasound irradiation, concentration and the number of sequential dipping steps on the growth of the ZnO nanoparticles have been studied. The results show a decrease in particle size with increasing power of ultrasound irradiation. Increasing the concentration and the number of sequential dipping steps, by contrast, increases the particle size. The physicochemical properties of the nanoparticles were determined by powder X-ray diffraction (XRD), scanning electron microscopy (SEM) and wavelength dispersive X-ray (WDX) spectroscopy. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Facile one-step construction of covalently networked, self-healable, and transparent superhydrophobic composite films

    NASA Astrophysics Data System (ADS)

    Lee, Yujin; You, Eun-Ah; Ha, Young-Geun

    2018-07-01

    Despite the considerable demand for bioinspired superhydrophobic surfaces with highly transparent, self-cleaning, and self-healable properties, a facile and scalable fabrication method for multifunctional superhydrophobic films with strong chemical networks has rarely been established. Here, we report a rationally designed facile one-step construction of covalently networked, transparent, self-cleaning, and self-healable superhydrophobic films via a one-step preparation and single-reaction process of multi-components. As coating materials for achieving the one-step fabrication of multifunctional superhydrophobic films, we included two different sizes of Al2O3 nanoparticles for hierarchical micro/nano dual-scale structures and transparent films, fluoroalkylsilane for both low surface energy and covalent binding functions, and aluminum nitrate for aluminum oxide networked films. On the basis of stability tests for the robust film composition, the optimized, covalently linked superhydrophobic composite films with a high water contact angle (>160°) and low sliding angle (<1°) showed excellent thermal stability (up to 400 °C), transparency (≈80%), self-healing, self-cleaning, and waterproof abilities. Therefore, the rationally designed, covalently networked superhydrophobic composite films, fabricated via a one-step solution-based process, can be further utilized for various optical and optoelectronic applications.

  5. Steps in the open space planning process

    Treesearch

    Stephanie B. Kelly; Melissa M. Ryan

    1995-01-01

    This paper presents the steps involved in developing an open space plan. The steps are generic in that the methods may be applied to communities of various sizes. The intent is to provide a framework for developing an open space plan that meets Massachusetts requirements for funding of open space acquisition.

  6. Variable-mesh method of solving differential equations

    NASA Technical Reports Server (NTRS)

    Van Wyk, R.

    1969-01-01

    Multistep predictor-corrector method for numerical solution of ordinary differential equations retains high local accuracy and convergence properties. In addition, the method was developed in a form conducive to the generation of effective criteria for the selection of subsequent step sizes in step-by-step solution of differential equations.
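
    A typical criterion of this kind accepts a step when a local error estimate meets the tolerance and scales the next step by (tol/err)^(1/(p+1)) for a method of order p. The sketch below applies that rule to RK4 with step doubling rather than the report's multistep predictor-corrector; the safety factor and scaling bounds are conventional, illustrative choices.

```python
# Hedged sketch of automatic step-size selection via step doubling: one
# full RK4 step is compared against two half-steps to estimate the local
# error, and the step size is rescaled accordingly.
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
    k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

def adaptive(f, t, y, t_end, h=0.1, tol=1e-6, order=4):
    while t < t_end:
        h = min(h, t_end - t)
        y_big = rk4_step(f, t, y, h)                          # one full step
        y_half = rk4_step(f, t + h/2, rk4_step(f, t, y, h/2), h/2)
        err = np.max(np.abs(y_big - y_half)) + 1e-300         # error estimate
        if err <= tol:                                        # accept the step
            t, y = t + h, y_half
        h *= min(4.0, max(0.1, 0.9 * (tol / err) ** (1 / (order + 1))))
    return t, y

print(adaptive(lambda t, y: -y, 0.0, np.array([1.0]), 5.0))   # ~ e**-5
```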

  7. A simple, compact, and rigid piezoelectric step motor with large step size.

    PubMed

    Wang, Qi; Lu, Qingyou

    2009-08-01

    We present a novel piezoelectric stepper motor featuring high compactness, rigidity, simplicity, and operability in any direction. Although tested only at room temperature, it is expected to work at low temperatures as well, owing to its loose operating conditions and large step size. The motor is implemented with a piezoelectric scanner tube that is axially cut into almost two halves and clamps a hollow shaft inside at both ends via the spring parts of the shaft. Two driving voltages that singly deform the two halves of the piezotube in one direction and recover simultaneously will move the shaft in the opposite direction, and vice versa.

  8. A simple, compact, and rigid piezoelectric step motor with large step size

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Lu, Qingyou

    2009-08-01

    We present a novel piezoelectric stepper motor featuring high compactness, rigidity, simplicity, and operability in any direction. Although tested only at room temperature, it is expected to work at low temperatures as well, owing to its loose operating conditions and large step size. The motor is implemented with a piezoelectric scanner tube that is axially cut into almost two halves and clamps a hollow shaft inside at both ends via the spring parts of the shaft. Two driving voltages that singly deform the two halves of the piezotube in one direction and recover simultaneously will move the shaft in the opposite direction, and vice versa.

  9. An improved affine projection algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm is a signal-reuse algorithm with a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect the performance of the algorithm: the step factor and the projection length. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it achieves a smaller steady-state error and a faster convergence speed. Simulation results show that its performance is superior to that of the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm can obtain very good results.
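
    For readers unfamiliar with the structure of such algorithms, here is a minimal affine projection update with a simple variable step size. The adaptation rule is an illustrative stand-in, not the VSS-APA rule proposed in the paper, and all parameter values are assumptions.

```python
# Minimal sketch of an affine projection filter with an illustrative
# variable step size: the step stays near its maximum while the projected
# error is large and decays near convergence.
import numpy as np

def vss_apa(x, d, L=32, P=4, mu_max=1.0, delta=1e-3):
    """x: input signal, d: desired signal, L: taps, P: projection order."""
    w = np.zeros(L)
    mu = mu_max
    for n in range(L + P, len(x)):
        # Data matrix of the last P input regressors (P x L).
        X = np.array([x[n - p - L + 1 : n - p + 1][::-1] for p in range(P)])
        e = d[n - P + 1 : n + 1][::-1] - X @ w      # a-priori error vector
        # Illustrative VSS rule: smoothed, normalized error energy.
        mu = min(mu_max, 0.9 * mu + 0.1 * (e @ e) / (e @ e + 1.0))
        w += mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(P), e)
    return w

# Usage: identify an unknown 32-tap system from noisy observations.
rng = np.random.default_rng(0)
h = rng.standard_normal(32)
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = vss_apa(x, d)
print(np.linalg.norm(w - h) / np.linalg.norm(h))    # small misalignment
```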

  10. Real-time inverse planning for Gamma Knife radiosurgery.

    PubMed

    Wu, Q Jackie; Chankong, Vira; Jitprapaikulsarn, Suradet; Wessels, Barry W; Einstein, Douglas B; Mathayomchan, Boonyanit; Kinsella, Timothy J

    2003-11-01

    The challenges of real-time Gamma Knife inverse planning are the large number of variables involved and the unknown search space a priori. With limited collimator sizes, shots have to be heavily overlapped to form a smooth prescription isodose line that conforms to the irregular target shape. Such overlaps greatly influence the total number of shots per plan, making pre-determination of the total number of shots impractical. However, this total number of shots usually defines the search space, a pre-requisite for most of the optimization methods. Since each shot only covers part of the target, a collection of shots in different locations and various collimator sizes selected makes up the global dose distribution that conforms to the target. Hence, planning or placing these shots is a combinatorial optimization process that is computationally expensive by nature. We have previously developed a theory of shot placement and optimization based on skeletonization. The real-time inverse planning process, reported in this paper, is an expansion and the clinical implementation of this theory. The complete planning process consists of two steps. The first step is to determine an optimal number of shots including locations and sizes and to assign initial collimator size to each of the shots. The second step is to fine-tune the weights using a linear-programming technique. The objective function is to minimize the total dose to the target boundary (i.e., maximize the dose conformity). Results of an ellipsoid test target and ten clinical cases are presented. The clinical cases are also compared with physician's manual plans. The target coverage is more than 99% for manual plans and 97% for all the inverse plans. The RTOG PITV conformity indices for the manual plans are between 1.16 and 3.46, compared to 1.36 to 2.4 for the inverse plans. All the inverse plans are generated in less than 2 min, making real-time inverse planning a reality.
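
    The second, weight-tuning step lends itself to a compact linear-programming sketch: minimize the total dose delivered to target-boundary points subject to every target point receiving at least the prescription dose. The random dose matrices below stand in for precomputed per-shot dose kernels, and this is a simplified reading of the paper's formulation, not its implementation.

```python
# Hedged LP sketch of shot-weight fine-tuning: boundary dose is minimized
# while target coverage is enforced. Dose matrices are random stand-ins.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n_shots, n_target, n_boundary = 8, 200, 80
A_t = rng.uniform(0.5, 1.5, (n_target, n_shots))    # dose to target points
A_b = rng.uniform(0.0, 0.6, (n_boundary, n_shots))  # dose to boundary points

c = A_b.sum(axis=0)                   # total boundary dose = c . w
res = linprog(c,
              A_ub=-A_t, b_ub=-np.ones(n_target),   # coverage: A_t w >= 1
              bounds=[(0, None)] * n_shots, method="highs")
print(res.x.round(3))                 # optimized nonnegative shot weights
```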

  11. A two-step super-Gaussian independent component analysis approach for fMRI data.

    PubMed

    Ge, Ruiyang; Yao, Li; Zhang, Hang; Long, Zhiying

    2015-09-01

    Independent component analysis (ICA) has been widely applied to functional magnetic resonance imaging (fMRI) data analysis. Although ICA assumes that the sources underlying the data are statistically independent, it usually ignores the sources' additional properties, such as sparsity. In this study, we propose a two-step super-Gaussian ICA (2SGICA) method that incorporates the sparse prior of the sources into the ICA model. 2SGICA uses the super-Gaussian ICA (SGICA) algorithm, which is based on a simplified Lewicki-Sejnowski model, to obtain the initial source estimate in the first step. Using a kernel estimator technique, the source density is acquired and fitted to the Laplacian function based on the initial source estimates. The fitted Laplacian prior is used for each source at the second SGICA step. Moreover, the automatic target generation process for initial value generation is used in 2SGICA to guarantee the stability of the algorithm. An adaptive step size selection criterion is also implemented in the proposed algorithm. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of 2SGICA and made a performance comparison between Infomax ICA, FastICA, mean field ICA (MFICA) with a Laplacian prior, sparse online dictionary learning (ODL), SGICA and 2SGICA. Both the simulated and real fMRI experiments showed that 2SGICA was the most robust to noise, and had the best spatial detection power and time course estimation among the six methods. Copyright © 2015. Published by Elsevier Inc.

  12. Optimal leveling of flow over one-dimensional topography by Marangoni stresses

    NASA Astrophysics Data System (ADS)

    Gramlich, C. M.; Kalliadasis, Serafim; Homsy, G. M.; Messer, C.

    2002-06-01

    A thin viscous film flowing over a step down in topography exhibits a capillary ridge preceding the step. In applications, a planar liquid surface is often desired and hence there is a need to level the ridge. This paper investigates optimal leveling of the ridge by means of a Marangoni stress such as might be produced by a localized heater creating temperature variations at the film surface. The differential equation for the free surface based on lubrication theory and incorporating the effects of topography and temperature gradients is solved numerically for steps down in topography with different temperature profiles. Both rectangular "top-hat" and parabolic profiles, chosen to model physically realizable heaters, were found to be effective in reducing the height of the capillary ridge. Leveling the ridge is formulated as an optimization problem to minimize the maximum free-surface height by varying the heater strength, position, and width. With the optimized heaters, the variation in surface height is reduced by more than 50% compared to the original isothermal ridge. For more effective leveling, we consider an asymmetric n-step temperature distribution. The optimal n-step heater in this case results in (n+1) ridges of equal size; 2- and 3-step heaters reduce the variation in surface height by about 70% and 77%, respectively. Finally, we explore the potential of coolers and step temperature profiles for still more effective leveling.

  13. Stability with large step sizes for multistep discretizations of stiff ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Majda, George

    1986-01-01

    One-leg and multistep discretizations of variable-coefficient linear systems of ODEs having both slow and fast time scales are investigated analytically. The stability properties of these discretizations are obtained independent of ODE stiffness and compared. The results of numerical computations are presented in tables, and it is shown that for large step sizes the stability of one-leg methods is better than that of the corresponding linear multistep methods.

  14. An Attempt to Find an A Priori Measure of Step Size. Comparative Studies of Principles for Programming Mathematics in Automated Instruction, Technical Report No. 13.

    ERIC Educational Resources Information Center

    Rosen, Ellen F.; Stolurow, Lawrence M.

    In order to find a good predictor of empirical difficulty, an operational definition of step size, ten programmer-judges rated change in complexity in two versions of a mathematics program, and these ratings were then compared with measures of empirical difficulty obtained from student response data. The two versions, a 54-frame booklet and a 35…

  15. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. This method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.

  16. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.

  17. An Automated Inner Dimensional Measurement System Based on a Laser Displacement Sensor for Long-Stepped Pipes

    PubMed Central

    Zhang, Fumin; Qu, Xinghua; Ouyang, Jianfei

    2012-01-01

    A novel measurement prototype based on a mobile vehicle that carries a laser scanning sensor is proposed. The prototype is intended for the automated measurement of the interior 3D geometry of large-diameter long-stepped pipes. The laser displacement sensor, which has a small measurement range, is mounted on an extended arm of known length and is scanned to improve the measurement accuracy for large-sized pipes. A fixing mechanism based on two sections is designed to ensure that the stepped pipe is concentric with the axis of rotation of the system. Data are acquired in a cylindrical coordinate system and fitted to a circle to determine the diameter. Systematic errors covering arm length, tilt, and offset errors are analyzed and calibrated. The proposed system is applied to sample parts, and the results are discussed to verify its effectiveness. This technique measures a diameter of 600 mm with an uncertainty of 0.02 mm at a 95% confidence probability. A repeatability test is performed to examine precision, which is 1.1 μm. A laser tracker is used to verify the measurement accuracy of the system, which is evaluated as 9 μm within a diameter of 600 mm. PMID:22778615
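
    The circle-fitting step can be illustrated with an algebraic ("Kasa") least-squares fit, which recovers the centre and diameter from the scanned ring of points; the noise level and point count below are illustrative, not the prototype's.

```python
# Least-squares circle fit: from x^2 + y^2 = 2ax + 2by + (r^2 - a^2 - b^2),
# solve a linear system for centre (a, b) and the constant k, then recover r.
import numpy as np

def fit_circle(x, y):
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    a, b, k = sol
    r = np.sqrt(k + a**2 + b**2)
    return a, b, 2 * r                       # centre and diameter

# Synthetic ring of scanned points on a 600 mm bore with 5 um noise.
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
x = 300.0 * np.cos(theta) + 0.005 * rng.standard_normal(360)
y = 300.0 * np.sin(theta) + 0.005 * rng.standard_normal(360)
print(fit_circle(x, y))                      # ~ (0, 0, 600.0) mm
```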

  18. Finite Memory Walk and Its Application to Small-World Network

    NASA Astrophysics Data System (ADS)

    Oshima, Hiraku; Odagaki, Takashi

    2012-07-01

    In order to investigate the effects of cycles on dynamical processes on both regular lattices and complex networks, we introduce the finite memory walk (FMW) as an extension of the simple random walk (SRW), in which a walker is prohibited from moving to sites visited during the m steps immediately preceding its current position. This walk interpolates between the SRW, which has no memory (m = 0), and the self-avoiding walk (SAW), which has infinite memory (m = ∞). We investigate the FMW on regular lattices and clarify the fundamental characteristics of the walk. We find that (1) the mean-square displacement (MSD) of the FMW shows a crossover from SAW-like behavior at short times to SRW-like behavior at long times, the crossover time being approximately equivalent to the number of steps remembered, and the MSD can be rescaled in terms of the time step and the size of the memory; (2) the mean first-return time (MFRT) of the FMW changes significantly at the number of remembered steps that corresponds to the size of the smallest cycle in the regular lattice, "smallest" meaning that the size of the cycle is the smallest in the network; (3) the relaxation time of the first-return time distribution (FRTD) decreases as the number of cycles increases. We also investigate the FMW on Watts-Strogatz networks, which can generate small-world networks, and show that the clustering coefficient of the Watts-Strogatz network is strongly related to the MFRT of an FMW that can remember two steps.
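
    A minimal simulation of the FMW on the square lattice, sketched below under stated assumptions, illustrates the SAW-to-SRW crossover in the MSD. The trapped-walker rule (a fully blocked walker simply stays put for that step) is an assumption of this sketch, not specified in the abstract.

    ```python
    import numpy as np

    def fmw_msd(m, n_steps=200, n_walkers=500, seed=0):
        # Mean-square displacement of a finite memory walk on Z^2.
        # A walker may not revisit any of the last m sites (m = 0 gives SRW).
        rng = np.random.default_rng(seed)
        moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        msd = np.zeros(n_steps)
        for _ in range(n_walkers):
            pos = (0, 0)
            memory = [pos]                     # recently visited sites
            for t in range(n_steps):
                options = [(pos[0] + dx, pos[1] + dy) for dx, dy in moves]
                allowed = [p for p in options if p not in memory[-(m + 1):]]
                if allowed:                    # if trapped, stay put this step
                    pos = allowed[rng.integers(len(allowed))]
                memory.append(pos)
                msd[t] += pos[0] ** 2 + pos[1] ** 2
        return msd / n_walkers

    # larger memory m inflates the MSD at short times (SAW-like regime)
    for m in (0, 2, 8):
        print(m, fmw_msd(m)[-1])
    ```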

  19. Ongoing Development of a Series Bosch Reactor System

    NASA Technical Reports Server (NTRS)

    Abney, Morgan; Mansell, Matt; DuMez, Sam; Thomas, John; Cooper, Charlie; Long, David

    2013-01-01

    Future manned missions to deep space or planetary surfaces will undoubtedly require highly robust, efficient, and regenerable life support systems that require minimal consumables. To meet this requirement, NASA continues to explore a Bosch-based carbon dioxide reduction system to recover oxygen from CO2. In order to improve the equivalent system mass of Bosch systems, we seek to design and test a "Series Bosch" system in which two reactors in series are optimized for the two steps of the reaction, as well as to explore the use of in situ materials as carbon deposition catalysts. Here we report recent developments in this effort including assembly and initial testing of a Reverse Water-Gas Shift reactor (RWGSr) and initial testing of two gas separation membranes. The RWGSr was sized to reduce CO2 produced by a crew of four to carbon monoxide as the first stage in a Series Bosch system. The gas separation membranes, necessary to recycle unreacted hydrogen and CO2, were similarly sized. Additionally, we report results of preliminary experiments designed to determine the catalytic properties of Martian and Lunar regolith simulant for the carbon deposition step.

  20. Hysteresis modeling of magnetic shape memory alloy actuator based on Krasnosel'skii-Pokrovskii model.

    PubMed

    Zhou, Miaolei; Wang, Shoubin; Gao, Wei

    2013-01-01

    As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator manufacturing applications. Compared with traditional actuators, the MSMA actuator has the advantages of fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. To demonstrate the validity of the proposed modeling approach, simulation experiments are performed with the improved gradient correction algorithm and the variable step-size recursive least squares estimation algorithm, respectively. The simulation results of both identification algorithms demonstrate that the proposed approach can establish an effective and accurate hysteresis model for the MSMA actuator, providing a foundation for improving its control precision.
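
    For intuition on the weight-identification step, the sketch below uses standard exponentially weighted recursive least squares for a linear-in-parameters model such as a weighted sum of hysteresis operator outputs. This is the textbook RLS, not the paper's variable step-size refinement; the forgetting factor lam and the synthetic data are assumptions of the sketch.

    ```python
    import numpy as np

    def rls_identify(phi, y, lam=0.99, delta=100.0):
        # Standard exponentially weighted RLS for y[n] = phi[n] @ w.
        n_params = phi.shape[1]
        w = np.zeros(n_params)
        P = delta * np.eye(n_params)            # inverse correlation matrix
        for n in range(len(y)):
            u = phi[n]
            k = P @ u / (lam + u @ P @ u)       # gain vector
            w += k * (y[n] - u @ w)             # update with a priori error
            P = (P - np.outer(k, u @ P)) / lam
        return w

    # usage on synthetic data: recover the true operator weights
    rng = np.random.default_rng(2)
    phi = rng.standard_normal((2000, 5))        # stacked operator outputs
    w_true = np.array([0.5, -1.0, 0.3, 2.0, -0.7])
    y = phi @ w_true + 0.01 * rng.standard_normal(2000)
    print(np.round(rls_identify(phi, y), 3))
    ```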

  1. Ongoing Development of a Series Bosch Reactor System

    NASA Technical Reports Server (NTRS)

    Abney, Morgan B; Mansell, J. Matthew; Stanley, Christine; Edmunson, Jennifer; DuMez, Samuel J.; Chen, Kevin

    2013-01-01

    Future manned missions to deep space or planetary surfaces will undoubtedly incorporate highly robust, efficient, and regenerable life support systems that require minimal consumables. To meet this requirement, NASA continues to explore a Bosch-based carbon dioxide reduction system to recover oxygen from CO2. In order to improve the equivalent system mass of Bosch systems, we seek to design and test a "Series Bosch" system in which two reactors in series are optimized for the two steps of the reaction, as well as to explore the use of in situ materials as carbon deposition catalysts. Here we report recent developments in this effort including assembly and initial testing of a Reverse Water-Gas Shift reactor (RWGSr) and initial testing of two gas separation membranes. The RWGSr was sized to reduce CO2 produced by a crew of four to carbon monoxide as the first stage in a Series Bosch system. The gas separation membranes, necessary to recycle unreacted hydrogen and CO2, were similarly sized. Additionally, we report results of preliminary experiments designed to determine the catalytic properties of Martian regolith simulant for the carbon formation step.

  2. Hysteresis Modeling of Magnetic Shape Memory Alloy Actuator Based on Krasnosel'skii-Pokrovskii Model

    PubMed Central

    Wang, Shoubin; Gao, Wei

    2013-01-01

    As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator manufacturing applications. Compared with traditional actuators, the MSMA actuator has the advantages of fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. To demonstrate the validity of the proposed modeling approach, simulation experiments are performed with the improved gradient correction algorithm and the variable step-size recursive least squares estimation algorithm, respectively. The simulation results of both identification algorithms demonstrate that the proposed approach can establish an effective and accurate hysteresis model for the MSMA actuator, providing a foundation for improving its control precision. PMID:23737730

  3. Varying behavior of different window sizes on the classification of static and dynamic physical activities from a single accelerometer.

    PubMed

    Fida, Benish; Bernabucci, Ivan; Bibbo, Daniele; Conforto, Silvia; Schmid, Maurizio

    2015-07-01

    The accuracy of systems that recognize activities of daily living in real time depends heavily on the signal segmentation step. So far, windowing approaches have been used to segment the data, with the window size usually chosen on the basis of previous studies. However, the literature says little about the effect of window size on recognition accuracy when both short- and long-duration activities are considered. In this work, we present the impact of window size on the recognition of daily living activities, where transitions between different activities are also taken into account. The study was conducted on nine participants who wore a tri-axial accelerometer on their waist and performed some short (sitting, standing, and transitions between activities) and long (walking, stair descending and stair ascending) duration activities. Five different classifiers were tested, and among the different window sizes, a 1.5 s window was found to represent the best trade-off in recognition across activities, with an accuracy well above 90%. Differences in recognition accuracy for each activity highlight the utility of developing adaptive segmentation criteria based on the duration of the activities. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
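
    The sketch below shows the windowing step in its simplest form: a tri-axial accelerometer stream is cut into 1.5 s windows and a feature row is produced per window. The sampling rate, overlap, and feature set (mean and standard deviation per axis) are assumptions of this sketch, not taken from the paper.

    ```python
    import numpy as np

    def segment_windows(acc, fs=50.0, win_s=1.5, overlap=0.5):
        # Split (n_samples, 3) accelerometer data into fixed-size windows
        # and extract simple per-window features.
        win = int(win_s * fs)                   # samples per window
        hop = int(win * (1 - overlap))          # hop between window starts
        feats = []
        for start in range(0, len(acc) - win + 1, hop):
            w = acc[start:start + win]          # shape (win, 3)
            feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
        return np.array(feats)

    # usage: 60 s of synthetic 3-axis data at 50 Hz -> one feature row per window
    acc = np.random.default_rng(3).standard_normal((3000, 3))
    X = segment_windows(acc)
    print(X.shape)   # (number of windows, 6 features)
    ```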

  4. Design Guidelines for High-Performance Particle-Based Photoanodes for Water Splitting: Lanthanum Titanium Oxynitride as a Model.

    PubMed

    Landsmann, Steve; Maegli, Alexandra E; Trottmann, Matthias; Battaglia, Corsin; Weidenkaff, Anke; Pokrant, Simone

    2015-10-26

    Semiconductor powders are perfectly suited for the scalable fabrication of particle-based photoelectrodes, which can be used to split water using the sun as a renewable energy source. This systematic study focuses on variation of the electrode design using LaTiO2N as a model system. We present the influence of particle morphology on charge separation and transport properties, combined with post-treatment procedures such as necking and size-dependent co-catalyst loading. Five rules are proposed to guide the design of high-performance particle-based photoanodes by adding or varying several process steps, and we quantify the efficiency improvement achievable with each step. For example, implementation of a connectivity network together with surface area enhancement leads to a thirty-fold improvement in efficiency, and co-catalyst loading achieves a seven-fold improvement. Some of these guidelines can be adapted to non-particle-based photoelectrodes. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. An illustration of new methods in machine condition monitoring, Part I: stochastic resonance

    NASA Astrophysics Data System (ADS)

    Worden, K.; Antoniadou, I.; Marchesiello, S.; Mba, C.; Garibaldi, L.

    2017-05-01

    There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, where diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The first paper in the pair will deal with feature extraction. Although some papers have appeared in the recent past considering stochastic resonance as a means of amplifying damage information in signals, they have largely relied on ad hoc specifications of the resonator used. In contrast, the current paper will adopt a principled optimisation-based approach to the resonator design. The paper will also show that a discrete dynamical system can provide all the benefits of a continuous system, but also provide a considerable speed-up in terms of simulation time in order to facilitate the optimisation approach.

  6. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    NASA Astrophysics Data System (ADS)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first is the Finite Element–Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to tackle atmospheric reconstruction efficiently and accurately. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction approach is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the gradient-based method have been developed. We present a detailed comparison of our reconstructors in terms of both quality and speed in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
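
    For readers unfamiliar with the Kaczmarz method named above, the sketch below shows the classical row-projection iteration for a linear system A x ≈ b. It is a minimal generic implementation, not the authors' tuned tomography solver; the relaxation parameter and sweep count are assumptions.

    ```python
    import numpy as np

    def kaczmarz(A, b, sweeps=50, relax=1.0):
        # Cyclic Kaczmarz: project the iterate onto one row's hyperplane
        # at a time; one sweep cycles through all rows.
        x = np.zeros(A.shape[1])
        row_norms = (A * A).sum(axis=1)
        for _ in range(sweeps):
            for i in range(A.shape[0]):
                if row_norms[i] > 0:
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x

    # usage on a small consistent system
    rng = np.random.default_rng(4)
    A = rng.standard_normal((200, 50))
    x_true = rng.standard_normal(50)
    x_hat = kaczmarz(A, A @ x_true)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```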

  7. Identification of column edges of DNA fragments by using K-means clustering and mean algorithm on lane histograms of DNA agarose gel electrophoresis images

    NASA Astrophysics Data System (ADS)

    Turan, Muhammed K.; Sehirli, Eftal; Elen, Abdullah; Karas, Ismail R.

    2015-07-01

    Gel electrophoresis (GE) is one of the most widely used methods to separate DNA, RNA and protein molecules according to size, weight and quantity in many areas such as genetics, molecular biology, biochemistry and microbiology. The main way to separate the molecules is to find the borders of each fragment. This paper presents a software application that shows the column edges of DNA fragments in three steps. In the first step, the application obtains lane histograms of agarose gel electrophoresis images by projection onto the x-axis. In the second step, it utilizes the k-means clustering algorithm to classify the point values of the lane histogram into left-side values, right-side values and undesired values. In the third step, the column edges of the DNA fragments are shown by using a mean algorithm and mathematical operations to separate the DNA fragments from the background in a fully automated way. In addition, the application reports the locations of the DNA fragments and how many DNA fragments exist in images captured by a scientific camera.
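
    The first two steps are simple to reproduce. The sketch below projects a grayscale gel image onto the x-axis and clusters the histogram values with a tiny hand-rolled 1D k-means. The two-cluster demo and all names are illustrative assumptions; the paper's third (mean-algorithm) step is not reproduced.

    ```python
    import numpy as np

    def lane_histogram(img):
        # Step 1: project a grayscale gel image onto the x-axis.
        return img.sum(axis=0)

    def kmeans_1d(values, k=3, iters=100):
        # Step 2: minimal k-means for 1D histogram values.
        centers = np.linspace(values.min(), values.max(), k)
        for _ in range(iters):
            labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
            new = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return labels, centers

    # usage: synthetic image with two bright lanes on a dark background
    img = np.zeros((100, 300))
    img[:, 60:90] = 1.0
    img[:, 180:220] = 1.0
    labels, centers = kmeans_1d(lane_histogram(img), k=2)
    print(np.round(centers, 1))   # background level vs lane level
    ```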

  8. Automated image segmentation-assisted flattening of atomic force microscopy images.

    PubMed

    Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin

    2018-01-01

    Atomic force microscopy (AFM) images normally exhibit various artifacts. As a result, image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally excluded manually using rectangular masks, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection, and the extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted as polynomial curves/surfaces, which were then subtracted from the raw images to obtain the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme is presented, followed by an investigation of the influence of the sliding-window size and polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images was verified with the proposed method.
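
    The second (background-fitting) step can be sketched line by line, as below: a polynomial is fitted to the background points of each scan line, with the foreground mask excluded, and subtracted. The polynomial order, synthetic tilt, and function name are assumptions of this sketch; the paper's segmentation and sliding-window variants are not reproduced.

    ```python
    import numpy as np

    def flatten_image(img, mask, order=2):
        # Fit a polynomial to background points of every scan line and
        # subtract it; `mask` is True on foreground (excluded from fit).
        out = np.empty_like(img, dtype=float)
        x = np.arange(img.shape[1])
        for i, row in enumerate(img):
            bg = ~mask[i]
            coef = np.polyfit(x[bg], row[bg], order)   # background trend
            out[i] = row - np.polyval(coef, x)         # flattened line
        return out

    # usage: synthetic tilted/bowed surface with one raised feature
    x = np.arange(256)
    img = 0.02 * x + 0.0001 * x**2 + np.zeros((64, 256))
    img[:, 100:130] += 5.0                             # protruding feature
    mask = np.zeros_like(img, dtype=bool)
    mask[:, 100:130] = True
    flat = flatten_image(img, mask)
    print(float(flat[0, :100].std()))                  # background ≈ 0
    ```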

  9. Homoepitaxial and Heteroepitaxial Growth on Step-Free SiC Mesas

    NASA Technical Reports Server (NTRS)

    Neudeck, Philip G.; Powell, J. Anthony

    2004-01-01

    This article describes the initial discovery and development of new approaches to SiC homoepitaxial and heteroepitaxial growth. These approaches are based upon the previously unanticipated ability to effectively suppress two-dimensional nucleation of 3C-SiC on large basal plane terraces that form between growth steps when epitaxy is carried out on 4H- and 6H-SiC nearly on-axis substrates. After subdividing the growth surface into mesa regions, pure step-flow homoepitaxy with no terrace nucleation was then used to grow all existing surface steps off the edges of screw-dislocation-free mesas, leaving behind perfectly on-axis (0001) basal plane mesa surfaces completely free of atomic-scale steps. Step-free mesa surfaces as large as 0.4 mm x 0.4 mm were experimentally realized, with the yield and size of step-free mesas being initially limited by substrate screw dislocations. Continued epitaxial growth following step-free surface formation leads to the formation of thin lateral cantilevers that extend the step-free surface area from the top edge of the mesa sidewalls. By selecting a proper pre-growth mesa shape and crystallographic orientation, the rate of cantilever growth can be greatly enhanced in a web growth process that has been used to (1) enlarge step-free surface areas and (2) overgrow and laterally relocate micropipes and screw dislocations. A new growth process, named step-free surface heteroepitaxy, has been developed to achieve 3C-SiC films on 4H- and 6H-SiC substrate mesas completely free of double positioning boundary and stacking fault defects. The process is based upon the controlled terrace nucleation and lateral expansion of a single island of 3C-SiC across a step-free mesa surface. Experimental results indicate that substrate–epilayer lattice mismatch is at least partially relieved parallel to the interface without dislocations that undesirably thread through the thickness of the epilayer. These results should enable the realization of improved SiC homojunction and heterojunction devices. In addition, these experiments offer important insights into the nature of polytypism during SiC crystal growth.

  10. [A focused sound field measurement system by LabVIEW].

    PubMed

    Jiang, Zhan; Bai, Jingfeng; Yu, Ying

    2014-05-01

    In this paper, according to the requirements of focused sound field measurement, a focused sound field measurement system was established on the LabVIEW virtual instrument platform. The system can automatically search for the focus position of the sound field and adjust the scanning path according to the size of the focal region. Three-dimensional sound field scanning time was reduced from 888 hours with a uniform step to 9.25 hours with a variable step, improving the efficiency of focused sound field measurement. There is a certain deviation between the measured and theoretically calculated results: the focal-plane −6 dB width differed by 3.691%, and the beam-axis −6 dB length differed by 12.937%.

  11. Affordable Hybrid Heat Pump Clothes Dryer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TeGrotenhuis, Ward E.; Butterfield, Andrew; Caldwell, Dustin D.

    This project was successful in demonstrating the feasibility of a step change in residential clothes dryer energy efficiency by demonstrating heat pump technology capable of 50% energy savings over conventional standard-size electric dryers with comparable drying times. A prototype system was designed from off-the-shelf components that can meet the project's efficiency goals and are affordable. An experimental prototype built to this design reached 50% energy savings. Improvements have been identified that will reduce drying times from over 60 minutes to the goal of 40 minutes. Nevertheless, the prototype represents a step change in efficiency over heat pump dryers recently introduced to the U.S. market, with a 30% improvement in energy efficiency at comparable drying times.

  12. Two-step activation of paper batteries for high power generation: design and fabrication of biofluid- and water-activated paper batteries

    NASA Astrophysics Data System (ADS)

    Lee, Ki Bang

    2006-11-01

    Two-step activation of paper batteries has been successfully demonstrated to provide quick activation and to supply high power to credit card-sized biosystems on a plastic chip. A stack of a magnesium layer (an anode), a fluid guide (absorbent paper), a filter paper highly doped with copper chloride (a cathode) and a copper layer as a current collector is laminated between two transparent plastic films into a high power biofluid- and water-activated battery. The battery is activated in two steps: (1) after a drop of biofluid/water-based solution is placed on the fluid inlet, surface tension first drives the fluid to soak the fluid guide; (2) the fluid in the fluid guide then penetrates the heavily doped filter paper to start the battery reaction. The fabricated half credit card-sized battery was activated by saliva, urine and tap water, and delivered a maximum voltage of 1.56 V within 10 s of activation and a maximum power of 15.6 mW. With 10 kΩ and 1 kΩ loads, the service time with water, urine and saliva was measured as more than 2 h. A 3 V battery of in-series cells has been successfully tested to power two LEDs (light emitting diodes) and an electric driving circuit. As such, this high power paper battery could be integrated with on-demand credit card-sized biosystems such as healthcare test kits, biochips, lab-on-a-chip, DNA chips, protein chips or even test chips for water-quality or chemical checking.

  13. On evaluating clustering procedures for use in classification

    NASA Technical Reports Server (NTRS)

    Pore, M. D.; Moritz, T. E.; Register, D. T.; Yao, S. S.; Eppler, W. G. (Principal Investigator)

    1979-01-01

    The problem of evaluating clustering algorithms and their respective computer programs for use in a preprocessing step for classification is addressed. In clustering for classification the probability of correct classification is suggested as the ultimate measure of accuracy on training data. A means of implementing this criterion and a measure of cluster purity are discussed. Examples are given. A procedure for cluster labeling that is based on cluster purity and sample size is presented.

  14. The detection of earth orbiting objects by IRAS

    NASA Technical Reports Server (NTRS)

    Dow, Kimberly L.; Sykes, Mark V.; Low, Frank J.; Vilas, Faith

    1990-01-01

    A systematic examination of 1836 images of the sky constructed from scans made by the Infrared Astronomical Satellite has resulted in the detection of 466 objects which are shown to be in earth orbit. Analysis of the spatial and size distribution and thermal properties of these objects, which may include payloads, rocket bodies and debris particles, is being conducted as one step in a feasibility study for space-based debris detection technologies.

  15. New trends in Taylor series based applications

    NASA Astrophysics Data System (ADS)

    Kocina, Filip; Šátek, Václav; Veigend, Petr; Nečasová, Gabriela; Valenta, Václav; Kunovský, Jiří

    2016-06-01

    The paper deals with the solution of large systems of linear ODEs when minimal communication among parallel processors is required. The Modern Taylor Series Method (MTSM) is used. The MTSM allows a higher order to be used during the computation, which means a larger integration step size while keeping the desired accuracy. The telegraph equation model is taken as an example of a complex system. Symbolic and numeric solutions are compared when a harmonic input signal is used.
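
    The idea of trading higher order for larger steps is easy to demonstrate. The sketch below is a minimal explicit Taylor-series step for a linear system y' = Ay: terms follow the recurrence t_{k+1} = hA t_k/(k+1) and the order is raised until the last term drops below a tolerance. It is a toy illustration, not the MTSM implementation.

    ```python
    import numpy as np

    def taylor_step(A, y, h, tol=1e-12, max_order=60):
        # One Taylor-series step for y' = A y; order grows until the
        # last added term is below tol, permitting large step sizes h.
        term = y.copy()
        out = y.copy()
        for k in range(1, max_order + 1):
            term = h * (A @ term) / k        # next Taylor term
            out += term
            if np.linalg.norm(term) < tol:
                break
        return out

    # usage: harmonic oscillator y'' = -y as a first-order system
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    y = np.array([1.0, 0.0])
    h = 1.0                                   # step far beyond RK4 comfort
    for _ in range(10):
        y = taylor_step(A, y, h)
    print(y, "exact:", [np.cos(10.0), -np.sin(10.0)])
    ```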

  16. Effective Capital Provision Within Government. Methodologies for Right-Sizing Base Infrastructure

    DTIC Science & Technology

    2005-01-01

    unknown distributions, since they more accurately represent the complexity of real-world problems. Forecasting uncertain future demand flows is critical to... ordering system with no time lags and no additional costs for instantaneous delivery, shortage and holding costs would be eliminated, because the... order a fixed quantity, Q. 4.1.4 Analyzed Time Step. Time is an important dimension in inventory models, since the way the system changes over time affects...

  17. A landmark-based 3D calibration strategy for SPM

    NASA Astrophysics Data System (ADS)

    Ritter, Martin; Dziomba, Thorsten; Kranzmann, Axel; Koenders, Ludger

    2007-02-01

    We present a new method for the complete three-dimensional (3D) calibration of scanning probe microscopes (SPM) and other high-resolution microscopes, e.g., scanning electron microscopes (SEM) and confocal laser scanning microscopes (CLSM), by applying a 3D micrometre-sized reference structure with the shape of a cascade slope-step pyramid. The 3D reference structure was produced by focused ion beam induced metal deposition. In contrast to pitch-based calibration procedures that require separate lateral and vertical reference standards such as gratings and step-height structures, the new method uses landmarks, which are well established in calibration and measurement tasks at larger scales. However, the landmarks applied to the new 3D reference structures are of sub-micrometre size, the so-called 'nanomarkers'. The nanomarker coordinates are used for a geometrical calibration of the scanning process of SPM as well as of other instrument types such as SEM and CLSM. For that purpose, a parameter estimation routine involving three scale factors and three coupling factors has been developed that allows lateral and vertical calibration in only one sampling step. With this new calibration strategy, we are able to detect SPM lateral scaling errors as well as coupling effects causing, e.g., a lateral coordinate shift depending on the measured height position of the probe.

  18. DECISION-MAKING ALIGNED WITH RAPID-CYCLE EVALUATION IN HEALTH CARE.

    PubMed

    Schneeweiss, Sebastian; Shrank, William H; Ruhl, Michael; Maclure, Malcolm

    2015-01-01

    Availability of real-time electronic healthcare data provides new opportunities for rapid-cycle evaluation (RCE) of health technologies, including healthcare delivery and payment programs. We aim to align decision-making processes with stages of RCE to optimize the usefulness and impact of rapid results. Rational decisions about program adoption depend on program effect size in relation to externalities, including implementation cost, sustainability, and likelihood of broad adoption. Drawing on case studies and experience from drug safety monitoring, we examine how decision makers have used scientific evidence on complex interventions in the past. We clarify how RCE alters the nature of policy decisions; develop the RAPID framework for synchronizing decision-maker activities with stages of RCE; and provide guidelines on evidence thresholds for incremental decision-making. In contrast to traditional evaluations, RCE provides early evidence on effectiveness and facilitates a stepped approach to decision making in expectation of regularly updated evidence. RCE allows for identification of trends in adjusted effect size. It supports adapting a program in midstream in response to interim findings, or adapting the evaluation strategy to identify true improvements earlier. The 5-step RAPID approach, which utilizes the accumulating evidence of program effectiveness over time, could increase policy-makers' confidence in expediting decisions. RCE enables a step-wise approach to health technology assessment (HTA) decision-making, based on gradually emerging evidence, reducing delays in decision-making processes after traditional one-time evaluations.

  19. One-Step Reverse-Transcription FRET-PCR for Differential Detection of Five Ebolavirus Species

    PubMed Central

    Lu, Guangwu; Zhang, Jilei; Zhang, Chuntao; Li, Xiaolu; Shi, Dawei; Yang, Zhaopeng; Wang, Chengming

    2015-01-01

    Ebola is an emerging infectious disease caused by a deadly virus belonging to the family Filoviridae, genus Ebolavirus. Based on their geographical distribution, ebolaviruses have so far been classified into five species: Zaire, Sudan, Taï Forest, Bundibugyo and Reston. It is important to be able to differentiate the Ebolavirus species, as they differ significantly in pathogenicity and more than one species can be present in an area. We have developed a one-step step-down RT-PCR detecting all five Ebolavirus species with high sensitivity (1 copy of Ebolavirus DNA, 10 copies of RNA, and 320 copies of RNA spiked in 1 ml whole blood). The primers and FRET probes we designed enabled us to differentiate the five Ebolavirus species by distinct Tm values (Zaire: flat peaks between 53.0°C and 56.9°C; Sudan: 51.6°C; Reston: flat peaks between 47.5°C and 54.9°C; Taï Forest: 52.8°C; Bundibugyo: dual peaks at 48.9°C and 53.5°C) and by different amplicon sizes (Zaire 255 bp, Sudan 211 bp, Reston 192 bp, Taï Forest 166 bp, Bundibugyo 146 bp). This one-size-fits-all assay enables the rapid detection and discrimination of the five Ebolavirus species in a single reaction. PMID:26017916

  20. Analysis of the Earthquake Impact towards water-based fire extinguishing system

    NASA Astrophysics Data System (ADS)

    Lee, J.; Hur, M.; Lee, K.

    2015-09-01

    Recently, fire-extinguishing systems installed in buildings have been subject to separate seismic performance requirements: they must maintain their extinguishing function before the building collapses during an earthquake. In particular, automatic sprinkler systems must maintain their integrity, with no damage to piping, even after a massive earthquake. In this study, the impact of earthquakes on the piping of a water-based fire-extinguishing system installed in a building was investigated experimentally. A water-based fire-extinguishing installation was constructed step by step according to seismic construction practice and subjected to shaking-table experiments, and the earthquake response of the extinguishing pipework in the building was measured. The magnitude of the applied vibration acceleration and the resulting displacement were measured and compared with the pipe response data from the table, yielding a seismic analysis of where water-extinguishing piping needs reinforcement. Seismic design categories (SDC) were defined for four groups of building structures designed according to the seismic criteria (KBC2009), based on the importance of the group and the seismic intensity. For buildings in seismic design categories A and B, the analysis determined that current fire-fighting facilities already provide the required seismic performance in the event of a real earthquake. For buildings in categories C and D, seismic retrofit design was determined to be required in order to preserve the extinguishing function at the required level.

  1. [Selection of patients for transcatheter aortic valve implantation].

    PubMed

    Tron, Christophe; Godin, Matthieu; Litzler, Pierre-Yves; Bauer, Fabrice; Caudron, Jérome; Dacher, Jean-Nicolas; Borz, Bogdan; Canville, Alexandre; Kurtz, Baptiste; Bessou, Jean-Paul; Cribier, Alain; Eltchaninoff, Hélène

    2012-06-01

    Careful patient selection is a crucial step before transcatheter aortic valve implantation (TAVI), in order to confirm the indication and choose the access route. TAVI should be considered only in patients with symptomatic severe aortic stenosis and either a contraindication to surgery or a high surgical risk. The indication for TAVI should be discussed in a multidisciplinary team meeting. Echocardiography and/or CT scanning are mandatory to evaluate the aortic annulus size and select the appropriate prosthesis size. The feasibility of transfemoral implantation is evaluated by angiography and CT scan, based on arterial diameters but also on the presence of tortuosity and arterial calcification. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  2. Some practical aspects of lossless and nearly-lossless compression of AVHRR imagery

    NASA Technical Reports Server (NTRS)

    Hogan, David B.; Miller, Chris X.; Christensen, Than Lee; Moorti, Raj

    1994-01-01

    Compression of Advanced Very High Resolution Radiometer (AVHRR) imagery operating in a lossless or nearly-lossless mode is evaluated. Several practical issues are analyzed, including: variability of compression over time and among channels, rate-smoothing buffer size, multi-spectral preprocessing of data, day/night handling, and impact on key operational data applications. This analysis is based on a DPCM algorithm employing the Universal Noiseless Coder, which is a candidate for inclusion in many future remote sensing systems. It is shown that compression rates of about 2:1 (daytime) can be achieved with modest buffer sizes (less than or equal to 2.5 Mbytes) and a relatively simple multi-spectral preprocessing step.

  3. Clustering of longitudinal data by using an extended baseline: A new method for treatment efficacy clustering in longitudinal data.

    PubMed

    Schramm, Catherine; Vial, Céline; Bachoud-Lévi, Anne-Catherine; Katsahian, Sandrine

    2018-01-01

    Heterogeneity in treatment efficacy is a major concern in clinical trials. Clustering may help to identify treatment responders and non-responders. In the context of longitudinal cluster analyses, sample size and variability of the times of measurement are the main issues with current methods. Here, we propose a new two-step method for the Clustering of Longitudinal data by using an Extended Baseline (CLEB). The first step relies on a piecewise linear mixed model for repeated measurements with a treatment-time interaction. The second step clusters the random predictions and considers several parametric (model-based) and non-parametric (partitioning, ascendant hierarchical clustering) algorithms. A simulation study compares all options of the CLEB method with the latent-class mixed model. The CLEB method with the two model-based algorithms was the most robust. The CLEB method with the non-parametric algorithms failed when there were unequal variances of treatment effect between clusters or when the subgroups had unbalanced sample sizes, while the latent-class mixed model failed when the between-patient slope variability was high. Two real data sets, on neurodegenerative disease and on obesity, illustrate the CLEB method and show how clustering may help to identify the marker(s) of treatment response. Applying the CLEB method in exploratory analysis, as a first stage before setting up stratified designs, can provide a better estimation of the treatment effect in future clinical trials.

  4. Physical pretreatment – woody biomass size reduction – for forest biorefinery

    Treesearch

    J.Y. Zhu

    2011-01-01

    Physical pretreatment of woody biomass or wood size reduction is a prerequisite step for further chemical or biochemical processing in forest biorefinery. However, wood size reduction is very energy intensive which differentiates woody biomass from herbaceous biomass for biorefinery. This chapter discusses several critical issues related to wood size reduction: (1)...

  5. Study on characteristics of printed circuit board liberation and its crushed products.

    PubMed

    Quan, Cui; Li, Aimin; Gao, Ningbo

    2012-11-01

    Recycling printed circuit board waste (PCBW) is a pressing issue in environmental protection and resource recycling. Mechanical and thermo-chemical methods are the two traditional recycling processes for PCBW. In the present research, a two-step crushing process combining a coarse-crushing step and a fine-pulverizing step was adopted, and the crushed products were then classified into seven size fractions with a standard sieve. The liberation situation and particle shape in the different size fractions were observed. Properties of the different size fractions, such as heating value, thermogravimetric behavior, and proximate, ultimate and chemical composition, were determined. The Rosin-Rammler model was applied to analyze the particle size distribution of the crushed material. The results indicated that complete liberation of metals from the PCBW was achieved at sizes below 0.59 mm, but the nonmetal particles in the smaller-than-0.15 mm fraction are liable to aggregate. Copper was the most prominent metal in PCBW and was mainly enriched in the 0.25-0.42 mm size fraction. The Rosin-Rammler equation adequately fitted the particle size distribution data of crushed PCBW, with a correlation coefficient of 0.9810. The heating value and proximate analysis revealed that the PCBW had a low heating value and a high ash content. The combustion and pyrolysis processes of PCBW differed, and there was an obvious Cu oxidation peak in the combustion runs.
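
    The Rosin-Rammler fit mentioned above is a one-line linear regression after a double-log transform: with cumulative passing P(d) = 1 − exp(−(d/d63)^n), one regresses ln(−ln(1 − P)) on ln(d). The sketch below shows this; the sieve data are made-up illustrations, not the paper's measurements.

    ```python
    import numpy as np

    def fit_rosin_rammler(d, passing):
        # Fit P(d) = 1 - exp(-(d/d63)^n) by linearization:
        # ln(-ln(1 - P)) = n*ln(d) - n*ln(d63).
        X = np.log(d)
        Y = np.log(-np.log(1.0 - passing))
        n, c = np.polyfit(X, Y, 1)              # slope n, intercept -n*ln(d63)
        d63 = np.exp(-c / n)
        r = np.corrcoef(X, Y)[0, 1]             # correlation of the fit
        return d63, n, r

    # usage with hypothetical sieve fractions (mm, cumulative passing)
    d = np.array([0.15, 0.25, 0.42, 0.59, 0.83, 1.20])
    passing = np.array([0.18, 0.35, 0.58, 0.74, 0.88, 0.96])
    d63, n, r = fit_rosin_rammler(d, passing)
    print(f"d63={d63:.3f} mm, n={n:.2f}, r={r:.4f}")
    ```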

  6. Workshop II On Unsteady Separated Flow Proceedings

    DTIC Science & Technology

    1988-07-28

    [OCR-fragmented abstract] ...the static stall angle of 12°... achieved by injecting diluted food coloring at the apex through a 1.5 mm diameter tube placed... The response of the wing... differences with uniform step size in q, and... three-point differences with uniform step size in... was used... The nonlinearity of the flow properties for slender 3D wings is addressed... "Kutta condition"... The present paper emphasizes recent progress in the study...

  7. The GRAM-3 model

    NASA Technical Reports Server (NTRS)

    Justus, C. G.

    1987-01-01

    The Global Reference Atmosphere Model (GRAM) is under continuous development and improvement. GRAM data were compared with Middle Atmosphere Program (MAP) predictions and with shuttle data. An important note: users should employ only altitude step sizes that give vertical density gradients consistent with shuttle-derived density data. Using too small a vertical step size (finer than 1 km) will produce what appear to be unreasonably high density shears but are in reality noise in the model.

  8. Sector-Based Detection for Hands-Free Speech Enhancement in Cars

    NASA Astrophysics Data System (ADS)

    Lathoud, Guillaume; Bourgeois, Julien; Freudenberger, Jürgen

    2006-12-01

    Adaptation control of beamforming interference cancellation techniques is investigated for in-car speech acquisition. Two efficient adaptation control methods are proposed that avoid target cancellation. The "implicit" method varies the step size continuously, based on the filtered output signal. The "explicit" method decides in a binary manner whether to adapt or not, based on a novel estimate of the target and interference energies. It estimates the average delay-sum power within a volume of space, at the same cost as the classical delay-sum. Experiments on real in-car data validate both methods, including a case with background road noise recorded at highway speed.
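
    For reference, the sketch below computes the classical delay-and-sum power for one steering point, the quantity on which the "explicit" control is built. Integer-sample delays and the synthetic two-microphone example are assumptions of this sketch; the paper's sector/volume averaging is not reproduced.

    ```python
    import numpy as np

    def delay_sum_power(frames, delays):
        # frames: (n_mics, n_samples); delays: steering delay per mic (samples).
        n_mics, n = frames.shape
        out = np.zeros(n)
        for m in range(n_mics):
            out += np.roll(frames[m], delays[m])   # align, then sum
            # (np.roll wraps around, which is fine for this synthetic demo)
        out /= n_mics
        return float(np.mean(out**2))

    # usage: a source whose wavefront reaches mic 1 three samples late;
    # correct steering delays maximize the output power
    rng = np.random.default_rng(5)
    s = rng.standard_normal(1024)
    frames = np.stack([s, np.roll(s, 3)])
    print(delay_sum_power(frames, [3, 0]), delay_sum_power(frames, [0, 0]))
    ```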

  9. Determining size and dispersion of minimum viable populations for land management planning and species conservation

    NASA Astrophysics Data System (ADS)

    Lehmkuhl, John F.

    1984-03-01

    The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on the maintenance of genetic heterozygosity and the reduction of inbreeding. A minimum effective population size (Ne) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust Ne to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and the period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population sizes of 50-500 individuals. This population range defines the time range of short- to long-term population fitness and evolutionary potential, the length of the term being a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.
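
    Two of the classical adjustments alluded to above have simple closed forms: for unequal sex ratios, Ne = 4·Nm·Nf/(Nm + Nf), and for fluctuating population sizes, Ne is the harmonic mean of per-generation sizes. The sketch below implements both standard formulas; whether the paper's procedure uses exactly these forms is an assumption.

    ```python
    def ne_unequal_sex_ratio(n_males, n_females):
        # Classical adjustment for unequal sex ratio.
        return 4.0 * n_males * n_females / (n_males + n_females)

    def ne_fluctuating(census_sizes):
        # Classical adjustment for fluctuating size: harmonic mean over generations.
        t = len(census_sizes)
        return t / sum(1.0 / n for n in census_sizes)

    # usage: both corrections push the required census number above Ne = 50
    print(ne_unequal_sex_ratio(10, 40))      # 32.0 -- far below 50
    print(ne_fluctuating([100, 20, 100]))    # the bottleneck dominates
    ```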

  10. Flowsheet Analysis of U-Pu Co-Crystallization Process as a New Reprocessing System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shunji Homma; Jun-ichi Ishii; Jiro Koga

    2006-07-01

    A new fuel reprocessing system by U-Pu co-crystallization process is proposed and examined by flowsheet analysis. This reprocessing system is based on the fact that hexavalent plutonium in nitric acid solution is co-crystallized with uranyl nitrate, whereas it is not crystallized when uranyl nitrate does not exist in the solution. The system consists of five steps: dissolution of spent fuel, plutonium oxidation, U-Pu co-crystallization as a co-decontamination, re-dissolution of the crystals, and U re-crystallization as a U-Pu separation. The system requires a recycling of the mother liquor from the U-Pu co-crystallization step, and the appropriate recycle ratio is determined by flowsheet analysis such that satisfactory decontamination is achieved. Further flowsheet study using four different compositions of LWR spent fuels demonstrates that a constant ratio of plutonium to uranium in mother liquor from the re-crystallization step is achieved for every composition by controlling the temperature. It is also demonstrated, by comparison with the Purex process, that the size of the plant based on the proposed system is significantly reduced. (authors)

  11. Biomass-based magnetic fluorescent nanoparticles: One-step scalable synthesis, application as drug carriers and mechanism study.

    PubMed

    Li, Lei; Wang, Feijun; Shao, Ziqiang

    2018-03-15

    Biomass-based magnetic fluorescent nanoparticles (MFNPs) were synthesized in situ via a one-step high-gravity approach; they are constructed from a magnetic core of Fe3O4 nanoparticles, a fluorescent marker of carbon dots (CDs), and shells of chitosan (CS). The obtained MFNPs had a 10 nm average diameter, a narrow particle size distribution, low cytotoxicity, superior fluorescent emission and superparamagnetic properties. Experiments on the encapsulation and release of 5-fluorouracil confirmed that the introduction of CS/CDs effectively improved the drug loading capacity. Mechanism and kinetic studies showed that: (i) monolayer adsorption was the main sorption mode under the studied conditions; (ii) the whole adsorption process was controlled by intra-liquid diffusion mass transfer and governed by chemisorption; and (iii) the release process was controlled by Fickian diffusion. These results demonstrate that this method can continuously produce MFNPs in one step and that the resulting non-toxic nanostructure offers clear advantages for current nano-delivery systems, with high application value in targeted drug delivery, magnetic fluid hyperthermia treatment, magnetic resonance imaging (MRI), in vitro testing and related research. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Modeling of Abrasion and Crushing of Unbound Granular Materials During Compaction

    NASA Astrophysics Data System (ADS)

    Ocampo, Manuel S.; Caicedo, Bernardo

    2009-06-01

    Unbound compacted granular materials are commonly used in engineering structures as layers in road pavements, railroad beds, highway embankments, and foundations. These structures are generally subjected to dynamic loading by construction operations, traffic and wheel loads. These repeated or cyclic loads cause abrasion and crushing of the granular materials. Abrasion changes a particle's shape, and crushing divides the particle into a mixture of many small particles of varying sizes. Particle breakage is important because the mechanical and hydraulic properties of these materials depend upon their grain size distribution. Therefore, it is important to evaluate the evolution of the grain size distribution of these materials. In this paper an analytical model for unbound granular materials is proposed in order to evaluate particle crushing of gravels and soils subjected to cyclic loads. The model is based on a Markov chain which describes the development of grading changes in the material as a function of stress level. In the proposed model, each particle size is a state of the system, and the evolution of the material is the movement of particles from one state to another in n steps. Each step is a load cycle, and movement between states occurs with a transition probability. The crushing of particles depends on the mechanical properties of each grain and the packing density of the granular material. The transition probability was calculated using both the survival probability defined by Weibull and the compressible packing model developed by De Larrard. Material mechanical properties are considered using Weibull probability theory; the size and shape of the grains, as well as the method of processing and the packing density, are considered using De Larrard's model. Results of the proposed analytical model show good agreement with experimental tests carried out using the gyratory compaction test.
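
    The Markov-chain mechanics are straightforward to sketch: a size distribution vector is propagated through one transition matrix per load cycle. The matrix entries below are made-up illustrations, not the Weibull/De Larrard-derived probabilities of the paper.

    ```python
    import numpy as np

    def evolve_grading(p0, T, n_cycles):
        # p0: mass fraction per size class (coarse -> fine);
        # T[i, j]: probability that a particle in class i ends a load
        # cycle in class j (particles only break into equal or finer classes).
        p = np.asarray(p0, dtype=float)
        for _ in range(n_cycles):
            p = p @ T                   # one Markov step per load cycle
        return p

    T = np.array([[0.95, 0.04, 0.01],   # coarse survives or breaks
                  [0.00, 0.97, 0.03],   # medium
                  [0.00, 0.00, 1.00]])  # fine is absorbing
    p0 = [0.6, 0.3, 0.1]
    for n in (0, 100, 1000):
        print(n, np.round(evolve_grading(p0, T, n), 3))
    ```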

  13. Skeletal maturation, fundamental motor skills and motor performance in preschool children.

    PubMed

    Freitas, D L; Lausen, B; Maia, J A; Gouveia, É R; Antunes, A M; Thomis, M; Lefevre, J; Malina, R M

    2018-06-01

    Relationships among skeletal age (SA), body size, fundamental motor skills (FMS) and motor performance were considered in 155 boys and 159 girls 3-6 years of age. Stature and body mass were measured. SA of the hand-wrist was assessed with the Tanner-Whitehouse II 20-bone method. The Test of Gross Motor Development, 2nd edition (TGMD-2) and the Preschool Test Battery were used, respectively, to assess FMS and motor performance. Based on hierarchical regression analyses, the standardized residuals of SA on chronological age (SAsr) explained a maximum of 6.1% of the variance in FMS and motor performance in boys (ΔR² at step 3, range 0.0% to 6.1%) and a maximum of 20.4% of the variance in girls (ΔR² at step 3, range 0.0% to 20.4%) over that explained by body size and interactions of SAsr with body size (step 3). The interactions of SAsr with stature and body mass (step 2) explained a maximum of 28.3% of the variance in boys (ΔR² at step 2, range 0.5% to 28.3%) and 16.7% of the variance in girls (ΔR² at step 2, range 0.7% to 16.7%) over that explained by body size alone. With the exception of balance, relationships among SAsr and FMS or motor performance differed between boys and girls. Overall, SA per se, or interacting with body size, had a relatively small influence on FMS and motor performance in children 3-6 years of age. This article is protected by copyright. All rights reserved.

  14. A simple, rapid, high-fidelity and cost-effective PCR-based two-step DNA synthesis method for long gene sequences.

    PubMed

    Xiong, Ai-Sheng; Yao, Quan-Hong; Peng, Ri-He; Li, Xian; Fan, Hui-Qin; Cheng, Zong-Ming; Li, Yi

    2004-07-07

    Chemical synthesis of DNA sequences provides a powerful tool for modifying genes and for studying gene function, structure and expression. Here, we report a simple, high-fidelity and cost-effective PCR-based two-step DNA synthesis (PTDS) method for synthesis of long segments of DNA. The method involves two steps. (i) Synthesis of individual fragments of the DNA of interest: ten to twelve 60-mer oligonucleotides with 20 bp overlaps are mixed and a PCR is carried out with the high-fidelity DNA polymerase Pfu to produce DNA fragments that are approximately 500 bp in length. (ii) Synthesis of the entire sequence of the DNA of interest: five to ten PCR products from the first step are combined and used as the template for a second PCR using the high-fidelity DNA polymerase Pyrobest, with the two outermost oligonucleotides as primers. Compared with previously published methods, the PTDS method is rapid (5-7 days) and suitable for synthesizing long segments of DNA (5-6 kb) with high G + C contents, repetitive sequences or complex secondary structures. Thus, the PTDS method provides an alternative tool for synthesizing and assembling long genes with complex structures. Using the newly developed PTDS method, we have successfully obtained several genes of interest with sizes ranging from 1.0 to 5.4 kb.
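
    The oligo layout of step (i) can be sketched directly: consecutive 60-mers sharing a 20 bp overlap tile a ~500 bp fragment. The sketch below returns all oligos in top-strand coordinates; in a real assembly PCR, alternating oligos would be reverse-complemented, which this illustration omits.

    ```python
    def design_oligos(seq, oligo_len=60, overlap=20):
        # Tile the target with 60-mers whose ends overlap by 20 bp;
        # the last oligo may be shorter if the length does not divide evenly.
        step = oligo_len - overlap
        return [seq[start:start + oligo_len]
                for start in range(0, len(seq) - overlap, step)]

    # usage: a 500 bp fragment needs ten to twelve 60-mers (here, twelve)
    import random
    random.seed(0)
    fragment = "".join(random.choice("ACGT") for _ in range(500))
    oligos = design_oligos(fragment)
    print(len(oligos), [len(o) for o in oligos[:3]])
    ```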

  15. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives: To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting: We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results: For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is infeasible. Conclusion: Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
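
    The inflation logic behind a design effect is simple to demonstrate with the standard parallel-CRT formula DE = 1 + (m − 1)·ICC, where m is the cluster size. The sketch below shows only this textbook case; the SW-CRT and CRT-BA design effects presented in the paper are more involved and are not reproduced here.

    ```python
    import math

    def design_effect_crt(m, icc):
        # Standard design effect for a parallel cluster randomized trial.
        return 1.0 + (m - 1) * icc

    def clusters_needed(n_individual, m, icc):
        # Inflate the individually randomized sample size by the design
        # effect, then divide by cluster size to get clusters per arm.
        return math.ceil(n_individual * design_effect_crt(m, icc) / m)

    # usage: 400 participants per arm needed under individual randomization
    for icc in (0.01, 0.05, 0.10):
        print(icc, clusters_needed(400, m=50, icc=icc))
    ```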

  16. Control of a three-dimensional turbulent shear layer by means of oblique vortices

    NASA Astrophysics Data System (ADS)

    Jürgens, Werner; Kaltenbach, Hans-Jakob

    2018-04-01

    The effect of local forcing on the separated, three-dimensional shear layer downstream of a backward-facing step is investigated by means of large-eddy simulation for a Reynolds number based on the step height of 10,700. The step edge is either oriented normal to the approaching turbulent boundary layer or swept at an angle of 40°. Oblique vortices with different orientation and spacing are generated by wavelike suction and blowing of fluid through an edge-parallel slot. The vortices exhibit a complex three-dimensional structure, but they can be characterized by a wavevector in a horizontal section plane. In order to determine the step-normal component of the wavevector, a method is developed based on phase averages. The dependence of the wavevector on the forcing parameters can be described in terms of a dispersion relation, the structure of which indicates that the disturbances are mainly convected through the fluid. The introduced vortices reduce the size of the recirculation region by up to 38%. In both the planar and the swept case, the most efficient of the studied forcings consists of vortices which propagate in a direction that deviates by more than 50° from the step normal. These vortices exhibit a spacing on the order of 2.5 step heights. The upstream shift of the reattachment line can be explained by increased mixing and momentum transport inside the shear layer, which is reflected in high levels of the Reynolds shear stress $-\rho\overline{u'v'}$. The position of the maximum of the coherent shear stress is found to depend linearly on the wavelength, similar to two-dimensional free shear layers.

  17. Robust adaptive 3-D segmentation of vessel laminae from fluorescence confocal microscope images and parallel GPU implementation.

    PubMed

    Narayanaswamy, Arunachalam; Dwarakapuram, Saritha; Bjornsson, Christopher S; Cutler, Barbara M; Shain, William; Roysam, Badrinath

    2010-03-01

    This paper presents robust 3-D algorithms to segment vasculature that is imaged by labeling laminae, rather than the lumenal volume. The signal is weak, sparse, noisy, nonuniform, low-contrast, and exhibits gaps and spectral artifacts, so adaptive thresholding and Hessian filtering based methods are not effective. The structure deviates from a tubular geometry, so tracing algorithms are not effective. We propose a four-step approach. The first step detects candidate voxels using a robust hypothesis test based on a model that assumes Poisson noise and locally planar geometry. The second step performs adaptive region growing to extract weakly labeled and fine vessels while rejecting spectral artifacts. The third step constructs an accurate mesh representation using marching tetrahedra, volume-preserving smoothing, and adaptive decimation algorithms, enabling interactive visualization and estimation of features such as statistical confidence, local curvature, local thickness, and local normal. The final step estimates vessel centerlines using a ray-casting and vote-accumulation algorithm, enabling topological analysis and efficient validation. Our algorithm lends itself to parallel processing, and yielded an 8× speedup on a graphics processor (GPU). On synthetic data, our meshes had average error per face (EPF) values of 0.1-1.6 voxels per mesh face for peak signal-to-noise ratios from 110 dB down to 28 dB. Separately, after decimating the mesh to less than 1% of its original size, the EPF was less than 1 voxel per face. When validated on real datasets, the average recall and precision values were 94.66% and 94.84%, respectively.

  18. Effects of two-step homogenization on precipitation behavior of Al{sub 3}Zr dispersoids and recrystallization resistance in 7150 aluminum alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Zhanying; Key Laboratory for Anisotropy and Texture of Materials, Northeastern University, Shenyang 110819, China,; Zhao, Gang

    2015-04-15

    The effect of two-step homogenization treatments on the precipitation behavior of Al3Zr dispersoids was investigated by transmission electron microscopy (TEM) in 7150 alloys. Two-step treatments with the first step in the temperature range of 300–400 °C followed by the second step at 470 °C were applied during homogenization. Compared with the conventional one-step homogenization, both a finer particle size and a higher number density of Al3Zr dispersoids were obtained with two-step homogenization treatments. The most effective dispersoid distribution was attained using the first step held at 300 °C. In addition, the two-step homogenization minimized the precipitate-free zones and greatly increased the number density of dispersoids near dendrite grain boundaries. The effect of two-step homogenization on recrystallization resistance of 7150 alloys with different Zr contents was quantitatively analyzed using the electron backscattered diffraction (EBSD) technique. It was found that the improved dispersoid distribution through the two-step treatment can effectively inhibit the recrystallization process during the post-deformation annealing for 7150 alloys containing 0.04–0.09 wt.% Zr, resulting in a remarkable reduction of the volume fraction and grain size of recrystallization grains. - Highlights: • Effect of two-step homogenization on Al3Zr dispersoids was investigated by TEM. • Finer and higher number of dispersoids obtained with two-step homogenization. • Minimized the precipitate-free zones and improved the dispersoid distribution. • Recrystallization resistance with varying Zr content was quantified by EBSD. • Effectively inhibited recrystallization through two-step treatments in 7150 alloy.

  19. Potential for adult-based epidemiological studies to characterize overall cancer risks associated with a lifetime of CT scans.

    PubMed

    Shuryak, Igor; Lubin, Jay H; Brenner, David J

    2014-06-01

    Recent epidemiological studies have suggested that radiation exposure from pediatric CT scanning is associated with small excess cancer risks. However, the majority of CT scans are performed on adults, and most radiation-induced cancers appear during middle or old age, in the same age range as background cancers. Consequently, a logical next step is to investigate the effects of CT scanning in adulthood on lifetime cancer risks by conducting adult-based, appropriately designed epidemiological studies. Here we estimate the sample size required for such studies to detect CT-associated risks. This was achieved by incorporating different age-, sex-, time- and cancer type-dependent models of radiation carcinogenesis into an in silico simulation of a population-based cohort study. This approach simulated individual histories of chest and abdominal CT exposures, deaths and cancer diagnoses. The resultant sample sizes suggest that epidemiological studies of realistically sized cohorts can detect excess lifetime cancer risks from adult CT exposures. For example, retrospective analysis of CT exposure and cancer incidence data from a population-based cohort of 0.4 to 1.3 million (depending on the carcinogenic model) CT-exposed UK adults, aged 25-65 in 1980 and followed until 2015, provides 80% power for detecting cancer risks from chest and abdominal CT scans.
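
    The sample-size logic can be mimicked with a toy in silico cohort. The Python sketch below uses placeholder rates and a crude two-proportion z-test rather than the age-, sex-, time- and cancer-type-dependent carcinogenesis models of the study; every numeric constant here is an assumption for illustration.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulated_power(n_cohort, p_baseline=0.04, err_per_scan=0.002,
                          mean_scans=2.0, n_trials=200):
          # Toy cohort: lifetime cancer probability rises linearly with scan count.
          hits = 0
          for _ in range(n_trials):
              scans = rng.poisson(mean_scans, n_cohort)
              cancer = rng.random(n_cohort) < p_baseline * (1 + err_per_scan * scans)
              exposed = scans > 0
              p1, p0 = cancer[exposed].mean(), cancer[~exposed].mean()
              n1, n0 = exposed.sum(), (~exposed).sum()
              pp = cancer.mean()
              se = np.sqrt(pp * (1 - pp) * (1.0 / n1 + 1.0 / n0))
              hits += (p1 - p0) / se > 1.645      # one-sided test at alpha = 0.05
          return hits / n_trials                  # fraction of trials detecting risk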

  20. Control of Alginate Core Size in Alginate-Poly (Lactic-Co-Glycolic) Acid Microparticles

    NASA Astrophysics Data System (ADS)

    Lio, Daniel; Yeo, David; Xu, Chenjie

    2016-01-01

    Core-shell alginate-poly (lactic-co-glycolic) acid (PLGA) microparticles are potential candidates for improving hydrophilic drug loading while facilitating controlled release. This report studies the influence of the alginate core size on the overall size and drug release profile of alginate-PLGA microparticles. Microparticles are synthesized through double-emulsion fabrication via concurrent ionotropic gelation and solvent extraction. The alginate core size is approximately 10, 50, or 100 μm when the first-step emulsification method is homogenization, vortexing, or magnetic stirring, respectively. The second-step emulsification for all three conditions is performed with magnetic stirring. Interestingly, although the alginate cores have different sizes, the alginate-PLGA microparticle diameter does not change. However, the drug release profiles are dramatically different for microparticles comprising different-sized alginate cores. Specifically, taking calcein as a model drug, microparticles containing the smallest alginate core (10 μm) show the slowest release over a period of 26 days, with a burst release of less than 1%.

  1. Monte Carlo modeling of single-molecule cytoplasmic dynein.

    PubMed

    Singh, Manoranjan P; Mallik, Roop; Gross, Steven P; Yu, Clare C

    2005-08-23

    Molecular motors are responsible for active transport and organization in the cell, underlying an enormous number of crucial biological processes. Dynein is more complicated in its structure and function than other motors. Recent experiments have found that, unlike other motors, dynein can take different size steps along microtubules depending on load and ATP concentration. We use Monte Carlo simulations to model the molecular motor function of cytoplasmic dynein at the single-molecule level. The theory relates dynein's enzymatic properties to its mechanical force production. Our simulations reproduce the main features of recent single-molecule experiments that found a discrete distribution of dynein step sizes, depending on load and ATP concentration. The model reproduces the large steps found experimentally under high ATP and no load by assuming that the ATP binding affinities at the secondary sites decrease as the number of ATP bound to these sites increases. Additionally, to capture the essential features of the step-size distribution at very low ATP concentration and no load, the ATP hydrolysis of the primary site must be dramatically reduced when none of the secondary sites have ATP bound to them. We make testable predictions that should guide future experiments related to dynein function.
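
    As a caricature of that occupancy-dependent mechanism, the Python sketch below draws step sizes from a toy model in which each ATP already bound at a secondary site weakens binding at the remaining sites, so that occupancy stays low even at high ATP. The dissociation constants, the geometric weakening factor, and the mapping from occupancy to 8-32 nm steps are all invented for illustration, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      def dynein_step_sizes(atp_conc, n_steps=10000, n_secondary=3, k_d0=1.0):
          # Toy anticooperative occupancy model: each bound ATP raises the
          # dissociation constant of the remaining secondary sites.
          sizes = []
          for _ in range(n_steps):
              bound = 0
              for _site in range(n_secondary):
                  k_d = k_d0 * 4.0 ** bound          # invented weakening factor
                  bound += rng.random() < atp_conc / (atp_conc + k_d)
              # Invented mapping: more occupied secondary sites, shorter step.
              sizes.append(32.0 - 8.0 * bound)       # 32, 24, 16 or 8 nm
          return np.array(sizes)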

  2. Two-Step Single Particle Mass Spectrometry for On-Line Monitoring of Polycyclic Aromatic Hydrocarbons Bound to Ambient Fine Particulate Matter

    NASA Astrophysics Data System (ADS)

    Zimmermann, R.; Bente, M.; Sklorz, M.

    2007-12-01

    Polycyclic aromatic hydrocarbons (PAH) are formed as trace products in combustion processes and are emitted to the atmosphere. Larger PAH have low vapour pressure and are predominantly bound to ambient fine particulate matter (PM). Upon inhalation, PAH show both chronic human toxicity (i.e., many PAH are potent carcinogens) and acute human toxicity (i.e., inflammatory effects due to oxidative stress) and are considered relevant to the observed health effects of ambient PM. Therefore a better understanding of the occurrence, dynamics and particle size dependence of particle-bound PAH is of great interest. On-line aerosol mass spectrometry is in principle the method of choice to investigate size-resolved changes in the chemical speciation of particles as well as the status of internal vs. external mixing of chemical constituents. However, the presently available aerosol mass spectrometers (ATOFMS and AMS) do not allow detection of PAH from ambient air PM. In order to allow single-particle-based monitoring of PAH from ambient PM, a new single particle laser ionisation mass spectrometer was built and applied. The system is based on the ATOFMS principle but uses a two-step photo-ionization. A tracked and sized particle is first laser desorbed (LD) by an IR laser pulse (CO2 laser, λ=10.2 μm) and subsequently the released PAH are selectively ionized by an intense UV laser pulse (ArF excimer, λ=248 nm) in a resonance enhanced multiphoton ionisation (REMPI) process. The PAH ions are detected in a time-of-flight mass spectrometer (TOFMS). A virtual impactor enrichment unit is used to increase the detection frequency of the ambient particles. With the current inlet system, particles from about 400 nm to 10 μm are accessible. Single-particle-based temporal profiles of PAH-containing particles (size distribution and PAH speciation) have been recorded from ambient air in Oberschleissheim, Germany. Furthermore, profiles of relevant emission sources (e.g. gasoline and diesel engines, wood combustion) were recorded, and the obtained chemical profiles were compared with those of the ambient PAH-containing particles.

  3. Outward Bound to the Galaxies--One Step at a Time

    ERIC Educational Resources Information Center

    Ward, R. Bruce; Miller-Friedmann, Jaimie; Sienkiewicz, Frank; Antonucci, Paul

    2012-01-01

    Less than a century ago, astronomers began to unlock the cosmic distances within and beyond the Milky Way. Understanding the size and scale of the universe is a continuing, step-by-step process that began with the remarkably accurate measurement of the distance to the Moon made by early Greeks. In part, the authors have ITEAMS (Innovative…

  4. SUPRAMOLECULAR COMPOSITE MATERIALS FROM CELLULOSE, CHITOSAN AND CYCLODEXTRIN: FACILE PREPARATION AND THEIR SELECTIVE INCLUSION COMPLEX FORMATION WITH ENDOCRINE DISRUPTORS

    PubMed Central

    Duri, Simon; Tran, Chieu D.

    2013-01-01

    We have successfully developed a simple, one-step method to prepare high-performance supramolecular polysaccharide composites from cellulose (CEL), chitosan (CS) and (2,3,6-tri-O-acetyl)-α-, β- and γ-cyclodextrin (α-, β- and γ-TCD). In this method, [BMIm+Cl−], an ionic liquid (IL), was used as the solvent to dissolve and prepare the composites. Since the majority (>88%) of the IL used was recovered for reuse, the method is recyclable. XRD, FT-IR, NIR and SEM were used to monitor the dissolution process and to confirm that the polysaccharides were regenerated without any chemical modifications. It was found that the unique properties of each component, including superior mechanical properties (from CEL), excellent adsorption of pollutants and toxins (from CS) and size/structure selectivity through inclusion complex formation (from the TCDs), remain intact in the composites. Specifically, results from kinetics and adsorption isotherms show that while CS-based composites can effectively adsorb endocrine disruptors (polychlorophenols, bisphenol-A), their adsorption is independent of the size and structure of the analytes. Conversely, adsorption by γ-TCD-based composites depends strongly on the size and structure of the analytes. For example, while all three TCD-based composites (i.e., α-, β- and γ-TCD) can effectively adsorb 2-, 3- and 4-chlorophenol, only the γ-TCD-based composite can adsorb analytes with bulky groups, including 3,4-dichloro- and 2,4,5-trichlorophenol. Furthermore, equilibrium sorption capacities for the analytes with bulky groups are much higher for the γ-TCD-based composite than for the CS-based composites. Together, these results indicate that the γ-TCD-based composite, with its relatively larger cavity size, can readily form inclusion complexes with analytes bearing bulky groups, and through inclusion complex formation it can adsorb substantially more analyte, with size/structure selectivity, compared with CS-based composites, which adsorb the analyte only by surface adsorption. PMID:23517477

  5. Leaf Morphology, Taxonomy and Geometric Morphometrics: A Simplified Protocol for Beginners

    PubMed Central

    Viscosi, Vincenzo; Cardini, Andrea

    2011-01-01

    Taxonomy relies greatly on morphology to discriminate groups. Computerized geometric morphometric methods for quantitative shape analysis measure, test and visualize differences in form in a highly effective, reproducible, accurate and statistically powerful way. Plant leaves are commonly used in taxonomic analyses and are particularly suitable for landmark-based geometric morphometrics. However, botanists do not yet seem to have taken advantage of this set of methods in their studies as much as zoologists have done. Using free software and an example dataset from two geographical populations of sessile oak leaves, we describe in detailed but simple terms how to: a) compute size and shape variables using Procrustes methods; b) test measurement error and the main levels of variation (population and trees) using a hierarchical design; c) estimate the accuracy of group discrimination; d) repeat this estimate after controlling for the effect of size differences on shape (i.e., allometry). Measurement error was completely negligible; individual variation in leaf morphology was large and differences between trees were generally bigger than within trees; differences between the two geographic populations were small in both size and shape; despite a weak allometric trend, controlling for the effect of size on shape slightly increased discrimination accuracy. Procrustes-based methods for the analysis of landmarks were highly efficient in measuring the hierarchical structure of differences in leaves and in revealing very small-scale variation. In taxonomy and many other fields of botany and biology, the application of geometric morphometrics contributes to increased scientific rigour in the description of important aspects of the phenotypic dimension of biodiversity. Easy-to-follow but detailed step-by-step example studies can promote a more extensive use of these numerical methods, as they provide an introduction to the discipline which, for many biologists, is less intimidating than the often inaccessible specialist literature. PMID:21991324
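
    Step a), the Procrustes superimposition, reduces to a few lines of linear algebra. A minimal numpy sketch follows, assuming 2-D landmark arrays and partial Procrustes (translation, unit centroid-size scaling, optimal rotation); the free programs the authors describe do considerably more.

      import numpy as np

      def procrustes_align(ref, lm):
          # Partial Procrustes superimposition of a landmark configuration
          # (k x 2 array) onto a reference: translate to the centroid, scale
          # to unit centroid size, then rotate optimally (Kabsch/SVD).
          def normalize(x):
              x = np.asarray(x, dtype=float)
              x = x - x.mean(axis=0)
              return x / np.linalg.norm(x)     # centroid size (Frobenius norm)
          a, b = normalize(ref), normalize(lm)
          u, _, vt = np.linalg.svd(b.T @ a)    # rotation minimizing ||b R - a||
          if np.linalg.det(u @ vt) < 0:        # exclude reflections
              u[:, -1] *= -1
          aligned = b @ (u @ vt)
          return aligned, a                    # shape distance: norm(aligned - a)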

  6. Smart Hydrogel Particles: Biomarker Harvesting: One-step affinity purification, size exclusion, and protection against degradation

    PubMed Central

    Luchini, Alessandra; Geho, David H.; Bishop, Barney; Tran, Duy; Xia, Cassandra; Dufour, Robert; Jones, Clint; Espina, Virginia; Patanarut, Alexis; Zhu, Weidong; Ross, Mark; Tessitore, Alessandra; Petricoin, Emanuel; Liotta, Lance A.

    2010-01-01

    Disease-associated blood biomarkers exist in exceedingly low concentrations within complex mixtures of high-abundance proteins such as albumin. We have introduced an affinity bait molecule into N-isopropylacrylamide to produce a particle that will perform three independent functions within minutes, in one step, in solution: a) molecular size sieving, b) affinity capture of all solution-phase target molecules, and c) complete protection of harvested proteins from enzymatic degradation. The captured analytes can be readily electroeluted for analysis. PMID:18076201

  7. Carotid artery phantom designment and simulation using field II

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Yang, Xin; Ding, Mingyue

    2013-10-01

    Carotid atherosclerosis is the major cause of ischemic stroke, a leading cause of mortality and disability. Morphology and structure features of carotid plaques are key to identifying plaques and monitoring the disease. Manual segmentation of ultrasonic images to obtain the best-fitted actual size of the carotid plaques based on a physician's personal experience, namely the "gold standard", is an important step in the study of plaque size. However, it is difficult to quantitatively measure the segmentation error caused by the operator's subjective factors. In order to reduce these subjective factors and the uncertainty of quantification, the experiments in this paper were carried out. In this study, we first designed a carotid artery phantom, and then used three different beam-forming algorithms of medical ultrasound to simulate the phantom. Finally, the obtained plaque areas were analyzed through manual segmentation of the simulated images. We could (1) directly evaluate the effect of the different beam-forming algorithms on the ultrasound imaging simulation of the carotid artery; (2) analyze the sensitivity of detection for different plaque sizes; and (3) indirectly assess the accuracy of the manual segmentation based on the segmentation results.

  8. A Time Integration Algorithm Based on the State Transition Matrix for Structures with Time Varying and Nonlinear Properties

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2003-01-01

    A variable-order method of integrating the structural dynamics equations, based on the state transition matrix, has been developed. The method has been evaluated for linear time-variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial, it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
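
    For the time-invariant special case, the underlying idea is compact: the state transition matrix advances the state exactly regardless of step size. The Python sketch below illustrates that special case only (LTI systems; the paper's variable-order treatment of time-varying and nonlinear terms is not reproduced here).

      import numpy as np
      from scipy.linalg import expm

      def stm_integrate(a, x0, h, n_steps):
          # For a linear time-invariant system x' = A x, the state transition
          # matrix Phi = expm(A h) advances the state exactly over any step h,
          # so accuracy does not degrade as the step size grows.
          phi = expm(a * h)
          xs = [np.asarray(x0, dtype=float)]
          for _ in range(n_steps):
              xs.append(phi @ xs[-1])
          return np.array(xs)

      # Example: lightly damped oscillator advanced with a coarse step.
      a = np.array([[0.0, 1.0], [-4.0, -0.1]])
      trajectory = stm_integrate(a, [1.0, 0.0], h=0.5, n_steps=20)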

  9. Mathematical Analysis for Non-reciprocal-interaction-based Model of Collective Behavior

    NASA Astrophysics Data System (ADS)

    Kano, Takeshi; Osuka, Koichi; Kawakatsu, Toshihiro; Ishiguro, Akio

    2017-12-01

    In many natural and social systems, collective behaviors emerge as a consequence of non-reciprocal interaction between their constituents. As a first step towards understanding the core principle that underlies these phenomena, we previously proposed a minimal model of collective behavior based on non-reciprocal interactions, drawing inspiration from friendship formation in human society, and demonstrated via simulations that various non-trivial patterns emerge as parameters change. In this study, a mathematical analysis of the proposed model is performed for the case where the system size is small. Through the analysis, the mechanism of the transition between several patterns is elucidated.

  10. Ultrafast axial scanning for two-photon microscopy via a digital micromirror device and binary holography.

    PubMed

    Cheng, Jiyi; Gu, Chenglin; Zhang, Dapeng; Wang, Dien; Chen, Shih-Chi

    2016-04-01

    In this Letter, we present an ultrafast nonmechanical axial scanning method for two-photon excitation (TPE) microscopy based on binary holography using a digital micromirror device (DMD), achieving a scanning rate of 4.2 kHz, scanning range of ∼180 μm, and scanning resolution (minimum step size) of ∼270 nm. Axial scanning is achieved by projecting the femtosecond laser onto a DMD programmed with binary holograms of spherical wavefronts of increasing/decreasing radii. To guide the scanner design, we have derived the parametric relationships between the DMD parameters (i.e., aperture and pixel size) and the axial scanning characteristics, including (1) maximum optical power, (2) minimum step size, and (3) scan range. To verify the results, the DMD scanner is integrated with a custom-built TPE microscope that operates at 60 frames per second. In the experiment, we scanned a pollen sample via both the DMD scanner and a precision z-stage. The results show the DMD scanner generates images of equal quality throughout the scanning range. The overall efficiency of the TPE system was measured to be ∼3%. With the high scanning rate, the DMD scanner may find important applications in random-access imaging or high-speed volumetric imaging that enables visualization of highly dynamic biological processes in 3D with submillisecond temporal resolution.

  11. Establishing intensively cultured hybrid poplar plantations for fuel and fiber.

    Treesearch

    Edward Hansen; Lincoln Moore; Daniel Netzer; Michael Ostry; Howard Phipps; Jaroslav Zavitkovski

    1983-01-01

    This paper describes a step-by-step procedure for establishing commercial-size, intensively cultured plantations of hybrid poplar and summarizes the state of knowledge developed during 10 years of field research at Rhinelander, Wisconsin.

  12. UT Austin Villa 2011: 3D Simulation Team Report

    DTIC Science & Technology

    2011-01-01

    An inverted pendulum model omnidirectional walk engine, based on one that was originally designed for the real Nao robot [7]. The omnidirectional walk is ... using a double linear inverted pendulum, where the center of mass is swinging over the stance foot. In addition, as in Graf et al.'s work [7], we use ... between the inverted pendulums formed by the respective stance feet. Notation: maxStep∗i denotes the maximum step sizes allowed for x, y, and θ.

  13. Social Media: More Than Just a Communications Medium

    DTIC Science & Technology

    2012-03-14

    video-hosting web services with the recognition that "Internet-based capabilities are integral to operations across the Department of Defense."10 ... as DoD and the government as a whole, the U.S. Army's recognition of social media's unique relationship to time and speed is a step forward toward ... populated size of social media entities, Alexa, the leader in free global web analytics, provides an updated list of the top 500 websites on the Internet

  14. A new algorithm for modeling friction in dynamic mechanical systems

    NASA Technical Reports Server (NTRS)

    Hill, R. E.

    1988-01-01

    A method of modeling friction forces that impede the motion of parts of dynamic mechanical systems is described. Conventional methods in which the friction effect is assumed a constant force, or torque, in a direction opposite to the relative motion, are applicable only to those cases where applied forces are large in comparison to the friction, and where there is little interest in system behavior close to the times of transitions through zero velocity. An algorithm is described that provides accurate determination of friction forces over a wide range of applied force and velocity conditions. The method avoids the simulation errors resulting from a finite integration interval used in connection with a conventional friction model, as is the case in many digital computer-based simulations. The algorithm incorporates a predictive calculation based on initial conditions of motion, externally applied forces, inertia, and integration step size. The predictive calculation in connection with an external integration process provides an accurate determination of both static and Coulomb friction forces and resulting motions in dynamic simulations. Accuracy of the results is improved over that obtained with conventional methods and a relatively large integration step size is permitted. A function block for incorporation in a specific simulation program is described. The general form of the algorithm facilitates implementation with various programming languages such as FORTRAN or C, as well as with other simulation programs.
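
    The zero-velocity transition it addresses is the classic stick-slip problem. The Python sketch below shows a common generic way to handle it in a fixed-step integrator (stick when at rest under sub-static load, clamp velocity sign reversals); it is an illustration of the problem, not the paper's predictive algorithm.

      import numpy as np

      def friction_force(v, f_applied, f_static, f_coulomb, v_eps=1e-6):
          # Stick: essentially at rest and the applied force cannot break
          # static friction, so friction exactly cancels the applied force.
          if abs(v) < v_eps and abs(f_applied) <= f_static:
              return -f_applied
          # Slip: Coulomb friction opposes the (impending) direction of motion.
          direction = v if abs(v) >= v_eps else f_applied
          return -np.sign(direction) * f_coulomb

      def integrate_step(x, v, f_applied, m, h, f_static, f_coulomb):
          f = f_applied + friction_force(v, f_applied, f_static, f_coulomb)
          v_new = v + h * f / m
          if v * v_new < 0.0:   # friction must not reverse the motion it opposes
              v_new = 0.0
          return x + h * v_new, v_new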

  15. Data analysis-based autonomic bandwidth adjustment in software defined multi-vendor optical transport networks.

    PubMed

    Li, Yajie; Zhao, Yongli; Zhang, Jie; Yu, Xiaosong; Jing, Ruiquan

    2017-11-27

    Network operators generally provide dedicated lightpaths for customers to meet the demand for high-quality transmission. Considering the variation of traffic load, customers usually rent peak bandwidth that exceeds their practical average traffic requirement. In this case, bandwidth provisioning is unmetered and customers have to pay according to peak bandwidth. If network operators could keep track of traffic load and allocate bandwidth dynamically, bandwidth could be provided as a metered service and customers would pay for the bandwidth that they actually use. To achieve cost-effective bandwidth provisioning, this paper proposes an autonomic bandwidth adjustment scheme based on data analysis of traffic load. The scheme is implemented in a software defined networking (SDN) controller and is demonstrated in a field trial on multi-vendor optical transport networks. The field trial shows that the proposed scheme can track traffic load and realize autonomic bandwidth adjustment. In addition, a simulation experiment is conducted to evaluate the performance of the proposed scheme, and we investigate the impact of different parameters on autonomic bandwidth adjustment. Simulation results show that the step size and adjustment period have significant influence on bandwidth savings and packet loss: small values of step size and adjustment period bring more benefit by tracking traffic variation with high accuracy. For network operators, the scheme can serve as technical support for realizing bandwidth as a metered service in the future.
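
    A minimal form of such an adjustment loop is sketched below in Python; the headroom factor, bounds, units, and rounding policy are placeholder assumptions, not values from the field trial.

      import math

      def next_bandwidth(traffic_samples, step_size, headroom=1.2,
                         b_min=10.0, b_max=10000.0):
          # Provision the recent average load plus headroom, rounded up to
          # the granularity the transport layer can actually allocate.
          target = headroom * sum(traffic_samples) / len(traffic_samples)
          quantized = step_size * math.ceil(target / step_size)
          return min(max(quantized, b_min), b_max)

    Consistent with the reported simulations, a smaller step_size (and a shorter adjustment period between calls) tracks the traffic more tightly, trading reconfiguration overhead for higher savings and lower packet loss.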

  16. Production of pure indinavir free base nanoparticles by a supercritical anti-solvent (SAS) method.

    PubMed

    Imperiale, Julieta C; Bevilacqua, Gabriela; Rosa, Paulo de Tarso Vieira E; Sosnik, Alejandro

    2014-12-01

    This work investigated the production of pure indinavir free base nanoparticles by a supercritical anti-solvent method to improve the drug's dissolution in an intestine-like medium. The aim was to increase the dissolution of the drug by means of a supercritical fluid processing method. Acetone was used as the solvent and supercritical CO2 as the antisolvent. Products were characterized by dynamic light scattering (size, size distribution), scanning electron microscopy (morphology), differential scanning calorimetry (thermal behaviour) and X-ray diffraction (crystallinity). Processed indinavir resulted in particles of significantly smaller size than the original drug. Particles showed at least one dimension at the nanometer scale, with needle or rod-like morphology. X-ray powder diffraction results suggested the formation of a mixture of polymorphs. Differential scanning calorimetry analysis showed a main melting endotherm at 152 °C. Less prominent transitions due to the presence of small amounts of bound water (in the raw drug) or an unstable polymorph (in processed IDV) were also visible. Finally, drug particle size reduction significantly increased the dissolution rate with respect to the raw drug. Conversely, the slight increase in the intrinsic solubility of the nanoparticles was not significant. A supercritical anti-solvent method enabled the nanonization of indinavir free base in a single step with high yield. The processing led to faster dissolution that would improve the oral bioavailability of the drug.

  17. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

    A particle motion considering thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem with thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step interval becomes very small when simulating the motion of a large particle. Thermophoretic forces calculated by the DSMC method have been reported previously, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with the particle in one collision event. The collision weight factor permits a large time step interval: about a million times longer than the conventional time step interval of the DSMC method when the particle size is 1 μm, so the computation time is reduced about a millionfold. We simulate graphite particle motion under thermophoretic force using DSMC-Neutrals (Particle-PLUS neutral module) with the above collision weight factor, where DSMC-Neutrals is commercial software adopting the DSMC method. The size and shape of the particle are 1 μm and spherical, respectively. Particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results. Note that Gallis' analytical result for the continuum limit is the same as Waldmann's result.

  18. Step-scan T cell-based differential Fourier transform infrared photoacoustic spectroscopy (DFTIR-PAS) for detection of ambient air contaminants

    NASA Astrophysics Data System (ADS)

    Liu, Lixian; Mandelis, Andreas; Huan, Huiting; Melnikov, Alexander

    2016-10-01

    A step-scan differential Fourier transform infrared photoacoustic spectroscopy (DFTIR-PAS) method using a commercial FTIR spectrometer was developed theoretically and experimentally for air contaminant monitoring. The configuration comprises two identical, small-size, low-resonance-frequency T cells, satisfying the conflicting requirements of low chopping frequency and limited space in the sample compartment. Carbon dioxide (CO2) IR absorption spectra were used to demonstrate the capability of the DFTIR-PAS method to detect ambient pollutants. A linear amplitude response to CO2 concentrations from 100 to 10,000 ppmv was observed, leading to a theoretical detection limit of 2 ppmv. The differential mode was able to suppress coherent noise, thereby giving the DFTIR-PAS method a better signal-to-noise ratio and a lower theoretical detection limit than the single-cell mode. The results indicate that it is possible to use step-scan DFTIR-PAS with T cells as a quantitative method for high-sensitivity analysis of ambient contaminants.

  19. Clustering on Magnesium Surfaces - Formation and Diffusion Energies.

    PubMed

    Chu, Haijian; Huang, Hanchen; Wang, Jian

    2017-07-12

    The formation and diffusion energies of atomic clusters on Mg surfaces determine the surface roughness and the formation of faulted structures, which in turn affect the mechanical deformation of Mg. This paper reports first-principles density functional theory (DFT) calculations of atomic clustering on the low-energy surfaces {0001} and [Formula: see text]. In parallel, molecular statics calculations serve to test the validity of two interatomic potentials and to extend the scope of the DFT studies. On a {0001} surface, a compact cluster consisting of fewer than three atoms energetically prefers face-centered-cubic stacking, serving as a stacking fault nucleus. On a [Formula: see text] surface, clusters of any size always prefer hexagonal-close-packed stacking. Adatom diffusion on the [Formula: see text] surface is highly anisotropic, while it is isotropic on the (0001) surface. Three-dimensional Ehrlich-Schwoebel barriers converge once the step height reaches three atomic layers. Adatom diffusion along steps proceeds via a hopping mechanism, while diffusion down steps proceeds via an exchange mechanism.

  20. Learning target masks in infrared linescan imagery

    NASA Astrophysics Data System (ADS)

    Fechner, Thomas; Rockinger, Oliver; Vogler, Axel; Knappe, Peter

    1997-04-01

    In this paper we propose a neural network based method for the automatic detection of ground targets in airborne infrared linescan imagery. Instead of using a dedicated feature extraction stage followed by a classification procedure, we propose the following three-step scheme: In the first step of the recognition process, the input image is decomposed into its pyramid representation, thus obtaining a multiresolution signal representation. At the lowest three levels of the Laplacian pyramid, a neural network filter of moderate size is trained to indicate the target location. The last step fuses the outputs of the several neural network filters to obtain the final result. To perform this fusion we use a belief network to combine the various filter outputs in a statistically meaningful way. In addition, the belief network allows the integration of further knowledge about the image domain. By applying this multiresolution recognition scheme, we obtain nearly scale- and rotation-invariant target recognition with a significantly decreased false alarm rate compared with a single-resolution target recognition scheme.
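
    The multiresolution front end can be approximated in a few lines. The Python sketch below builds a Laplacian-pyramid-style decomposition using assumed Gaussian smoothing and dyadic downsampling; the filter choices in the original system may differ.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def laplacian_pyramid(img, levels=3):
          # img: 2-D grayscale array. Each Laplacian level keeps the detail
          # lost between one Gaussian-blurred scale and the next coarser one.
          pyramid = []
          cur = np.asarray(img, dtype=float)
          for _ in range(levels):
              low = gaussian_filter(cur, sigma=1.0)
              pyramid.append(cur - low)        # band-pass detail at this scale
              cur = low[::2, ::2]              # dyadic downsampling
          pyramid.append(cur)                  # residual low-pass image
          return pyramid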

  1. Efficient preconditioning of the electronic structure problem in large scale ab initio molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiffmann, Florian; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch

    2015-06-28

    We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling's iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear-scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.
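
    Hotelling's (Newton-Schulz) iteration itself is short. A Python sketch follows; the tolerance, the iteration cap, and the use of dense matrices are assumptions for the sketch (the paper's version filters small elements of sparse matrices and seeds with the previous MD step's solution).

      import numpy as np

      def hotelling_inverse(a, x0, tol=1e-10, max_iter=50):
          # Newton-Schulz (Hotelling) iteration: X <- X (2I - A X), which
          # converges quadratically whenever ||I - A X0|| < 1. During MD the
          # previous step's inverse is typically close enough to serve as X0.
          n = a.shape[0]
          x = np.array(x0, dtype=float)
          for _ in range(max_iter):
              r = np.eye(n) - a @ x            # residual of the approximate inverse
              if np.linalg.norm(r) < tol:
                  break
              x = x + x @ r                    # algebraically equal to X (2I - A X)
          return x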

  2. A Mixed Approach to Similarity Metric Selection in Affinity Propagation-Based WiFi Fingerprinting Indoor Positioning.

    PubMed

    Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella

    2015-10-30

    The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is, cluster formation, cluster selection, and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms.
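
    For reference, the flat WkNN estimate at the core of both variants fits in a few lines. The Python sketch below uses inverse Euclidean distance in RSS space as the similarity metric, which is only one of the metrics the paper compares:

      import numpy as np

      def wknn_position(rss, fingerprints, positions, k=4, eps=1e-9):
          # rss: measured RSS vector (n_ap,); fingerprints: (n_rp, n_ap)
          # stored RSS vectors; positions: (n_rp, 2 or 3) RP coordinates.
          d = np.linalg.norm(fingerprints - rss, axis=1)
          idx = np.argsort(d)[:k]              # k most similar reference points
          w = 1.0 / (d[idx] + eps)             # inverse-distance weights
          return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()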

  3. A Mixed Approach to Similarity Metric Selection in Affinity Propagation-Based WiFi Fingerprinting Indoor Positioning

    PubMed Central

    Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella

    2015-01-01

    The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is, cluster formation, cluster selection, and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms. PMID:26528984

  4. Differential Effects of Monovalent Cations and Anions on Key Nanoparticle Attributes

    EPA Science Inventory

    Understanding the key particle attributes such as particle size, size distribution and surface charge of both the nano- and micron-sized particles is the first step in drug formulation as such attributes are known to directly influence several characteristics of drugs including d...

  5. A two step Bayesian approach for genomic prediction of breeding values.

    PubMed

    Shariati, Mohammad M; Sørensen, Peter; Janss, Luc

    2012-05-21

    In genomic models that assign an individual variance to each marker, the contribution of one marker to the posterior distribution of the marker variance is only one degree of freedom (df), which introduces many variance parameters with only little information per variance parameter. A better alternative could be to form clusters of markers with similar effects, where markers in a cluster share a common variance; the influence of a marker group of size p on the posterior distribution of the marker variances is then p df. The simulated data from the 15th QTL-MAS workshop were analyzed such that SNP markers were ranked based on their effects and markers with similar estimated effects were grouped together. In step 1, all markers with minor allele frequency greater than 0.01 were included in a SNP-BLUP prediction model. In step 2, markers were ranked based on their estimated variance on the trait in step 1, and every 150 markers were assigned to one group with a common variance. In further analyses, subsets of the 1500 and 450 markers with the largest effects in step 2 were kept in the prediction model. Grouping markers outperformed the SNP-BLUP model in terms of accuracy of predicted breeding values. However, the accuracies of predicted breeding values were lower than those of Bayesian methods with marker-specific variances. Grouping markers is less flexible than allowing each marker a specific variance but, by grouping, the power to estimate marker variances increases. Prior knowledge of the genetic architecture of the trait is necessary for clustering markers and appropriate prior parameterization.
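
    Step 2, the grouping itself, is straightforward to sketch. In the Python fragment below the shared variance is simply initialized to the group mean, a placeholder for the quantity the Bayesian model would actually re-estimate with p df per group:

      import numpy as np

      def group_markers(effect_var, group_size=150):
          # Rank markers by the variance estimated in the SNP-BLUP pass and
          # give each consecutive block of `group_size` markers one shared
          # variance parameter (here merely initialized to the block mean).
          order = np.argsort(effect_var)[::-1]           # largest effects first
          groups = []
          for start in range(0, len(order), group_size):
              members = order[start:start + group_size]
              groups.append({"markers": members,
                             "common_var": float(effect_var[members].mean())})
          return groups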

  6. Pellet-free isolation of human and bovine milk extracellular vesicles by size-exclusion chromatography.

    PubMed

    Blans, Kristine; Hansen, Maria S; Sørensen, Laila V; Hvam, Michael L; Howard, Kenneth A; Möller, Arne; Wiking, Lars; Larsen, Lotte B; Rasmussen, Jan T

    2017-01-01

    Studies have suggested that nanoscale extracellular vesicles (EV) in human and bovine milk carry immune modulatory properties which could provide beneficial health effects to infants. In order to assess the possible health effects of milk EV, it is essential to use isolates of high purity from other more abundant milk structures with well-documented bioactive properties. Furthermore, gentle isolation procedures are important for reducing the risk of generating vesicle artefacts, particularly when EV subpopulations are investigated. In this study, we present two isolation approaches accomplished in three steps based on size-exclusion chromatography (SEC), resulting in effective and reproducible EV isolation from raw milk. The approaches do not require any EV pelleting and can be applied to both human and bovine milk. We show that SEC effectively separates phospholipid membrane vesicles from the primary casein and whey protein components in two differently obtained casein-reduced milk fractions, one of which is obtained without the use of ultracentrifugation. Milk EV isolates were enriched in lactadherin, CD9, CD63 and CD81, compared with minimal levels of these EV-marker proteins in other relevant milk fractions such as milk fat globules. Nanoparticle tracking analysis and electron microscopy reveal the presence of heterogeneously sized vesicle structures in milk EV isolates. Lipid analysis by thin layer chromatography shows that EV isolates are devoid of triacylglycerides and present a phospholipid profile differing from that of milk fat globules surrounded by epithelial cell plasma membrane. Moreover, the milk EV fractions are enriched in RNA, with distinct profiles diverging from those of milk fat globules. Collectively, our data support that successful milk EV isolation can be accomplished in few steps without the use of ultracentrifugation, as the presented isolation approaches based on SEC effectively isolate EV from both human and bovine milk.

  7. Pellet-free isolation of human and bovine milk extracellular vesicles by size-exclusion chromatography

    PubMed Central

    Blans, Kristine; Hansen, Maria S.; Sørensen, Laila V.; Hvam, Michael L.; Howard, Kenneth A.; Möller, Arne; Wiking, Lars; Larsen, Lotte B.; Rasmussen, Jan T.

    2017-01-01

    Studies have suggested that nanoscale extracellular vesicles (EV) in human and bovine milk carry immune modulatory properties which could provide beneficial health effects to infants. In order to assess the possible health effects of milk EV, it is essential to use isolates of high purity from other more abundant milk structures with well-documented bioactive properties. Furthermore, gentle isolation procedures are important for reducing the risk of generating vesicle artefacts, particularly when EV subpopulations are investigated. In this study, we present two isolation approaches accomplished in three steps based on size-exclusion chromatography (SEC), resulting in effective and reproducible EV isolation from raw milk. The approaches do not require any EV pelleting and can be applied to both human and bovine milk. We show that SEC effectively separates phospholipid membrane vesicles from the primary casein and whey protein components in two differently obtained casein-reduced milk fractions, one of which is obtained without the use of ultracentrifugation. Milk EV isolates were enriched in lactadherin, CD9, CD63 and CD81, compared with minimal levels of these EV-marker proteins in other relevant milk fractions such as milk fat globules. Nanoparticle tracking analysis and electron microscopy reveal the presence of heterogeneously sized vesicle structures in milk EV isolates. Lipid analysis by thin layer chromatography shows that EV isolates are devoid of triacylglycerides and present a phospholipid profile differing from that of milk fat globules surrounded by epithelial cell plasma membrane. Moreover, the milk EV fractions are enriched in RNA, with distinct profiles diverging from those of milk fat globules. Collectively, our data support that successful milk EV isolation can be accomplished in few steps without the use of ultracentrifugation, as the presented isolation approaches based on SEC effectively isolate EV from both human and bovine milk. PMID:28386391

  8. Craters of the Pluto-Charon system

    NASA Astrophysics Data System (ADS)

    Robbins, Stuart J.; Singer, Kelsi N.; Bray, Veronica J.; Schenk, Paul; Lauer, Tod R.; Weaver, Harold A.; Runyon, Kirby; McKinnon, William B.; Beyer, Ross A.; Porter, Simon; White, Oliver L.; Hofgartner, Jason D.; Zangari, Amanda M.; Moore, Jeffrey M.; Young, Leslie A.; Spencer, John R.; Binzel, Richard P.; Buie, Marc W.; Buratti, Bonnie J.; Cheng, Andrew F.; Grundy, William M.; Linscott, Ivan R.; Reitsema, Harold J.; Reuter, Dennis C.; Showalter, Mark R.; Tyler, G. Len; Olkin, Catherine B.; Ennico, Kimberly S.; Stern, S. Alan; New Horizons Lorri, Mvic Instrument Teams

    2017-05-01

    NASA's New Horizons flyby mission of the Pluto-Charon binary system and its four moons provided humanity with its first spacecraft-based look at a large Kuiper Belt Object beyond Triton. Excluding this system, multiple Kuiper Belt Objects (KBOs) have been observed for only 20 years from Earth, and the KBO size distribution is unconstrained except among the largest objects. Because small KBOs will remain beyond the capabilities of ground-based observatories for the foreseeable future, one of the best ways to constrain the small KBO population is to examine the craters they have made on the Pluto-Charon system. The first step to understanding the crater population is to map it. In this work, we describe the steps undertaken to produce a robust crater database of impact features on Pluto, Charon, and their two largest moons, Nix and Hydra. These include an examination of different types of images and image processing, and we present an analysis of variability among the crater mapping team, where crater diameters were found to average ± 10% uncertainty across all sizes measured (∼0.5-300 km). We also present a few basic analyses of the crater databases, finding that Pluto's craters' differential size-frequency distribution across the encounter hemisphere has a power-law slope of approximately -3.1 ± 0.1 over diameters D ≈ 15-200 km, and Charon's has a slope of -3.0 ± 0.2 over diameters D ≈ 10-120 km; it is significantly shallower on both bodies at smaller diameters. We also better quantify the evidence of resurfacing recorded by Pluto's craters, in contrast with Charon's. With this work, we are also releasing our database of potential and probable impact craters: 5287 on Pluto, 2287 on Charon, 35 on Nix, and 6 on Hydra.
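
    A simple way to reproduce such a slope estimate is a least-squares fit to the binned differential distribution, sketched below in Python; the bins per decade and the fitting range are assumptions, and maximum-likelihood fitting is a common alternative to binned least squares.

      import numpy as np

      def differential_sfd_slope(diams, d_min, d_max, bins_per_decade=8):
          # diams: 1-D array of crater diameters (km). Counts per log-diameter
          # bin are normalized by bin width to approximate dN/dD, then fit
          # with a power law dN/dD ~ D^q by least squares in log-log space.
          d = diams[(diams >= d_min) & (diams <= d_max)]
          n_edges = int(np.log10(d_max / d_min) * bins_per_decade) + 2
          edges = np.logspace(np.log10(d_min), np.log10(d_max), n_edges)
          counts, _ = np.histogram(d, bins=edges)
          centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centers
          keep = counts > 0
          slope, _ = np.polyfit(np.log10(centers[keep]),
                                np.log10(counts[keep] / np.diff(edges)[keep]), 1)
          return slope      # roughly -3 for Pluto and Charon over the quoted ranges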

  9. Evaluation of flaws in carbon steel piping. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahoor, A.; Gamble, R.M.; Mehta, H.S.

    1986-10-01

    The objective of this program was to develop flaw evaluation procedures and allowable flaw sizes for ferritic piping used in light water reactor (LWR) power generation facilities. The program results provide relevant ASME Code groups with the information necessary to define flaw evaluation procedures, allowable flaw sizes, and their associated bases for Section XI of the code. Because there are several possible flaw-related failure modes for ferritic piping over the LWR operating temperature range, three analysis methods were employed to develop the evaluation procedures. These include limit load analysis for plastic collapse, elastic-plastic fracture mechanics (EPFM) analysis for ductile tearing, and linear elastic fracture mechanics (LEFM) analysis for non-ductile crack extension. To ensure the appropriate analysis method is used in an evaluation, a step-by-step procedure also is provided to identify the relevant acceptance standard or procedure on a case-by-case basis. The tensile strength and toughness properties required to complete the flaw evaluation for any of the three analysis methods are included in the evaluation procedure. The flaw evaluation standards are provided in tabular form for the plastic collapse and ductile tearing modes, where the allowable part-through flaw depth is defined as a function of load and flaw length. For non-ductile crack extension, linear elastic fracture mechanics analysis methods, similar to those in Appendix A of Section XI, are defined. Evaluation flaw sizes and procedures are developed for both longitudinal and circumferential flaw orientations and normal/upset and emergency/faulted operating conditions. The tables are based on margins on load of 2.77 and 1.39 for circumferential flaws and 3.0 and 1.5 for longitudinal flaws for normal/upset and emergency/faulted conditions, respectively.

  10. Craters of the Pluto-Charon System

    NASA Technical Reports Server (NTRS)

    Robbins, Stuart J.; Singer, Kelsi N.; Bray, Veronica J.; Schenk, Paul; Lauer, Todd R.; Weaver, Harold A.; Runyon, Kirby; Mckinnon, William B.; Beyer, Ross A.; Porter, Simon

    2016-01-01

    NASA's New Horizons flyby mission of the Pluto-Charon binary system and its four moons provided humanity with its first spacecraft-based look at a large Kuiper Belt Object beyond Triton. Excluding this system, multiple Kuiper Belt Objects (KBOs) have been observed for only 20 years from Earth, and the KBO size distribution is unconstrained except among the largest objects. Because small KBOs will remain beyond the capabilities of ground-based observatories for the foreseeable future, one of the best ways to constrain the small KBO population is to examine the craters they have made on the Pluto-Charon system. The first step to understanding the crater population is to map it. In this work, we describe the steps undertaken to produce a robust crater database of impact features on Pluto, Charon, and their two largest moons, Nix and Hydra. These include an examination of different types of images and image processing, and we present an analysis of variability among the crater mapping team, where crater diameters were found to average +/-10% uncertainty across all sizes measured (approx. 0.5-300 km). We also present a few basic analyses of the crater databases, finding that Pluto's craters' differential size-frequency distribution across the encounter hemisphere has a power-law slope of approximately -3.1 +/- 0.1 over diameters D approx. = 15-200 km, and Charon's has a slope of -3.0 +/- 0.2 over diameters D approx. = 10-120 km; it is significantly shallower on both bodies at smaller diameters. We also better quantify the evidence of resurfacing recorded by Pluto's craters, in contrast with Charon's. With this work, we are also releasing our database of potential and probable impact craters: 5287 on Pluto, 2287 on Charon, 35 on Nix, and 6 on Hydra.

  11. Accuracy of the Yamax CW-701 Pedometer for measuring steps in controlled and free-living conditions

    PubMed Central

    Coffman, Maren J; Reeve, Charlie L; Butler, Shannon; Keeling, Maiya; Talbot, Laura A

    2016-01-01

    Objective The Yamax Digi-Walker CW-701 (Yamax CW-701) is a low-cost pedometer that includes a 7-day memory, a 2-week cumulative memory, and automatically resets to zero at midnight. To date, the accuracy of the Yamax CW-701 has not been determined. The purpose of this study was to assess the accuracy of steps recorded by the Yamax CW-701 pedometer compared with actual steps and two other devices. Methods The study was conducted in a campus-based lab and in free-living settings with 22 students, faculty, and staff at a mid-sized university in the Southeastern US. While wearing a Yamax CW-701, Yamax Digi-Walker SW-200, and an ActiGraph GTX3 accelerometer, participants engaged in activities at variable speeds and conditions. To assess accuracy of each device, steps recorded were compared with actual step counts. Statistical tests included paired sample t-tests, percent accuracy, intraclass correlation coefficient, and Bland–Altman plots. Results The Yamax CW-701 demonstrated reliability and concurrent validity during walking at a fast pace and walking on a track, and in free-living conditions. Decreased accuracy was noted walking at a slow pace. Conclusions These findings are consistent with prior research. With most pedometers and accelerometers, adequate force and intensity must be present for a step to register. The Yamax CW-701 is accurate in recording steps taken while walking at a fast pace and in free-living settings. PMID:29942555

  12. Accuracy of the Yamax CW-701 Pedometer for measuring steps in controlled and free-living conditions.

    PubMed

    Coffman, Maren J; Reeve, Charlie L; Butler, Shannon; Keeling, Maiya; Talbot, Laura A

    2016-01-01

    The Yamax Digi-Walker CW-701 (Yamax CW-701) is a low-cost pedometer that includes a 7-day memory, a 2-week cumulative memory, and automatically resets to zero at midnight. To date, the accuracy of the Yamax CW-701 has not been determined. The purpose of this study was to assess the accuracy of steps recorded by the Yamax CW-701 pedometer compared with actual steps and two other devices. The study was conducted in a campus-based lab and in free-living settings with 22 students, faculty, and staff at a mid-sized university in the Southeastern US. While wearing a Yamax CW-701, Yamax Digi-Walker SW-200, and an ActiGraph GTX3 accelerometer, participants engaged in activities at variable speeds and conditions. To assess accuracy of each device, steps recorded were compared with actual step counts. Statistical tests included paired sample t-tests, percent accuracy, intraclass correlation coefficient, and Bland-Altman plots. The Yamax CW-701 demonstrated reliability and concurrent validity during walking at a fast pace and walking on a track, and in free-living conditions. Decreased accuracy was noted walking at a slow pace. These findings are consistent with prior research. With most pedometers and accelerometers, adequate force and intensity must be present for a step to register. The Yamax CW-701 is accurate in recording steps taken while walking at a fast pace and in free-living settings.

  13. Concentration, size, and excitation power effects on fluorescence from microdroplets and microparticles containing tryptophan and bacteria

    NASA Astrophysics Data System (ADS)

    Fell, Nicholas F., Jr.; Pinnick, Ronald G.; Hill, Steven C.; Videen, Gorden W.; Niles, Stanley; Chang, Richard K.; Holler, Stephen; Pan, Yongle; Bottiger, Jerold R.; Bronk, Burt V.

    1999-01-01

    Our group has been developing a system for single-particle fluorescence detection of aerosolized agents. This paper describes the most recent steps in the evolution of this system. The effects of fluorophore concentration, droplet size, and excitation power have been investigated with microdroplets containing tryptophan in water to determine the influence of these parameters on our previous results. The vibrating orifice droplet generator was chosen for this study based on its ability to generate particles of well-known and reproducible size. The power levels required to reach saturation and photodegradation were determined. In addition, the collection of fluorescence emission was optimized through the use of a UV achromatic photographic lens. This arrangement permitted collection of images of the droplet stream. Finally, the use of a dual-beam, conditional-firing scheme facilitated the collection of improved signal-to-noise single-shot spectra from individual biological particles.

  14. Audiovisual integration increases the intentional step synchronization of side-by-side walkers.

    PubMed

    Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A

    2017-12-01

    When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner with kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge for the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized point-light walkers and/or footstep sounds. Results revealed the highest synchronization performance with auditory and audiovisual cues, as quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2, human-sized virtual mannequins were implemented, and the audiovisual stimuli were rendered in real time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants, the results point toward optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking.
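
    The MLE benchmark itself is compact: for independent Gaussian cues, the optimal estimate weights each cue by its inverse variance, and the fused variance is never worse than that of the best single cue. A minimal Python sketch of this standard computation (not the study's full analysis pipeline):

      import numpy as np

      def mle_fuse(estimates, variances):
          # Inverse-variance weighting of independent Gaussian cues; the
          # fused variance is never larger than that of the best single cue.
          w = 1.0 / np.asarray(variances, dtype=float)
          fused = float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))
          return fused, 1.0 / float(np.sum(w))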

  15. Semi-automated hydrophobic interaction chromatography column scouting used in the two-step purification of recombinant green fluorescent protein.

    PubMed

    Stone, Orrin J; Biette, Kelly M; Murphy, Patrick J M

    2014-01-01

    Hydrophobic interaction chromatography (HIC) most commonly requires experimental determination (i.e., scouting) in order to select an optimal chromatographic medium for purifying a given target protein. Neither a two-step purification of untagged green fluorescent protein (GFP) from crude bacterial lysate using sequential HIC and size exclusion chromatography (SEC), nor HIC column scouting elution profiles of GFP, have been previously reported. Bacterial lysate expressing recombinant GFP was sequentially adsorbed to commercially available HIC columns containing butyl, octyl, and phenyl-based HIC ligands coupled to matrices of varying bead size. The lysate was fractionated using a linear ammonium phosphate salt gradient at constant pH. Collected HIC eluate fractions containing retained GFP were then pooled and further purified using high-resolution preparative SEC. Significant differences in presumptive GFP elution profiles were observed using in-line absorption spectrophotometry (A395) and post-run fluorimetry. SDS-PAGE and western blot demonstrated that fluorometric detection was the more accurate indicator of GFP elution in both HIC and SEC purification steps. Comparison of composite HIC column scouting data indicated that a phenyl ligand coupled to a 34 µm matrix produced the highest degree of target protein capture and separation. Conducting two-step protein purification using the preferred HIC medium followed by SEC resulted in a final, concentrated product with >98% protein purity. In-line absorbance spectrophotometry was not as precise an indicator of GFP elution as post-run fluorimetry. These findings demonstrate the importance of utilizing a combination of detection methods when evaluating purification strategies. GFP is a well-characterized model protein, used heavily in educational settings and by researchers with limited protein purification experience, and the data and strategies presented here may aid in the development of other HIC-compatible protein purification schemes.

  16. Study of mesoporous CdS-quantum-dot-sensitized TiO2 films by using X-ray photoelectron spectroscopy and AFM

    PubMed Central

    Wojcieszak, Robert; Raj, Gijo

    2014-01-01

    Summary CdS quantum dots were grown on mesoporous TiO2 films by successive ionic layer adsorption and reaction processes in order to obtain CdS particles of various sizes. AFM analysis shows that the growth of the CdS particles is a two-step process. The first step is the formation of new crystallites at each deposition cycle. In the next step, the pre-deposited crystallites grow to form larger aggregates. Special attention is paid to the estimation of the CdS particle size by X-ray photoelectron spectroscopy (XPS). Among the classical characterization methods, the XPS model is described in detail. In an attempt to validate the XPS model, the results are compared to those obtained from AFM analysis and to the evolution of the band gap energy of the CdS nanoparticles as obtained by UV–vis spectroscopy. The results showed that XPS is a powerful tool for estimating the CdS particle size. In addition, a very good correlation was found between the number of deposition cycles and the particle size. PMID:24605274

  17. Continuous-Flow In-Line Solvent-Swap Crystallization of Vitamin D3

    PubMed Central

    2017-01-01

    A continuous tandem in-line evaporation–crystallization process is presented. The process includes an in-line solvent-swap step, suitable to be coupled to a capillary-based cooler. As a proof of concept, this setup is tested in a direct in-line acetonitrile-mediated crystallization of Vitamin D3. This configuration is suitable to be coupled to a new end-to-end continuous microflow synthesis of Vitamin D3. By this procedure, vitamin particles can be crystallized in continuous flow and isolated using an in-line continuous filtration step. In a single run, with just 1 min of cooling time, ∼50% (w/w) of Vitamin D3 is directly obtained as crystals. Furthermore, the polymorphic form as well as crystal shape and size properties are described in this paper.

  18. A calibration mechanism based on worm drive for space telescope

    NASA Astrophysics Data System (ADS)

    Chong, Yaqin; Li, Chuang; Xia, Siyu; Zhong, Peifeng; Lei, Wang

    2017-08-01

    In this paper, a new type of calibration mechanism based on a worm drive is presented for a space telescope. This calibration mechanism has the advantages of compact size and self-locking. The mechanism mainly consists of thirty-six LEDs as the light source for flat calibration, a diffuse plate, a step motor, a worm gear reducer and a potentiometer. As the main part of the diffuse plate, a PTFE tablet is mounted in an aluminum alloy frame. The frame is fixed on the shaft of the worm gear, which is driven by the step motor through the worm. The shaft of the potentiometer is connected to that of the worm gear through a flexible coupler to measure the rotation angle of the diffuse plate. First, the calibration mechanism is designed, including the LED assembly, the worm gear reducer and the diffuse plate assembly. Counterweight blocks and two end stops are also designed for the diffuse plate assembly. Then a finite-element modal analysis of the diffuse plate assembly is completed.

  19. Modeling solute clustering in the diffusion layer around a growing crystal.

    PubMed

    Shiau, Lie-Ding; Lu, Yung-Fang

    2009-03-07

    The mechanism of crystal growth from solution is often thought to consist of a mass transfer diffusion step followed by a surface reaction step. Solute molecules might form clusters in the diffusion step before incorporating into the crystal lattice. A model is proposed in this work to simulate the evolution of the cluster size distribution due to the simultaneous aggregation and breakage of solute molecules in the diffusion layer around a growing crystal in the stirred solution. The crystallization of KAl(SO4)2·12H2O from aqueous solution is studied to illustrate the effect of supersaturation and diffusion layer thickness on the number-average degree of clustering and the size distribution of solute clusters in the diffusion layer.
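
    As a generic illustration of the kind of aggregation-breakage population balance this abstract describes, the sketch below evolves a discrete cluster size distribution under a constant aggregation kernel and uniform binary breakage. The kernels, rates and size cutoff are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

# Discrete aggregation-breakage balance for cluster sizes 1..N.
# Constant aggregation kernel K and uniform binary breakage rate b are
# illustrative choices, not the paper's kernels; sizes above N are cut off.
N, K, b, dt, steps = 20, 5e-3, 5e-3, 0.1, 2000
n = np.zeros(N + 1)            # n[k] = number density of k-mers
n[1] = 1.0                     # start from monomers only

for _ in range(steps):
    dn = np.zeros_like(n)
    # aggregation i + j -> i+j (ordered pairs; effectively kernel 2K)
    for i in range(1, N):
        for j in range(1, N - i + 1):
            r = K * n[i] * n[j]
            dn[i] -= r
            dn[j] -= r
            dn[i + j] += r
    # binary breakage k -> i + (k-i), split point uniform over 1..k-1
    for k in range(2, N + 1):
        r = b * n[k]
        dn[k] -= r
        dn[1:k] += 2.0 * r / (k - 1)   # two fragments per event
    n += dt * dn

mass = np.dot(np.arange(N + 1), n)
avg_size = mass / n[1:].sum()          # number-average degree of clustering
print(f"mass = {mass:.4f} (conserved), average cluster size = {avg_size:.3f}")
```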

  20. Monte-Carlo simulation of a stochastic differential equation

    NASA Astrophysics Data System (ADS)

    Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG

    2017-12-01

    For solving higher-dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered to be more effective than other algorithms, such as the finite element method or the finite difference method. The inhomogeneity of the diffusion coefficient strongly limits the choice of numerical techniques. For better convergence, higher-order methods have been put forward to allow MC codes to take large step sizes. The main focus of this work is to look for operators that can produce converging results for large step sizes. As a first step, our comparative analysis is applied to a general stochastic problem. Subsequently, our formulation is applied to the problem of pitch angle scattering resulting from Coulomb collisions of charged particles in toroidal devices.
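
    A minimal Euler-Maruyama Monte Carlo sketch with a position-dependent (inhomogeneous) diffusion coefficient illustrates the step-size question the abstract raises. The toy SDE and coefficients below are assumptions standing in for the pitch-angle scattering operator, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(dt, t_end=1.0, n_paths=50_000, x0=0.5):
    """Euler-Maruyama for dX = -X dt + sigma(X) dW with the
    position-dependent (inhomogeneous) diffusion sigma(x) = 0.2*(1 + x**2).
    A toy SDE standing in for the pitch-angle scattering operator."""
    x = np.full(n_paths, x0)
    for _ in range(int(round(t_end / dt))):
        sigma = 0.2 * (1.0 + x**2)
        x = x - x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

# Weak-convergence check: moments should settle as the step size shrinks,
# and the drift of E[X(1)] with dt shows the bias a large step introduces.
for dt in (0.1, 0.01, 0.001):
    x = simulate(dt)
    print(f"dt={dt:<6} E[X(1)]={x.mean():+.4f}  E[X(1)^2]={np.mean(x**2):.4f}")
```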

  1. Sealing properties of one-step root-filling fibre post-obturators vs. two-step delayed fibre post-placement.

    PubMed

    Monticelli, Francesca; Osorio, Raquel; Toledano, Manuel; Ferrari, Marco; Pashley, David H; Tay, Franklin R

    2010-07-01

    The sealing properties of a one-step obturation post-placement technique consisting of Resilon-capped fibre post-obturators were compared with a two-step technique based on initial Resilon root filling following by 24h-delayed fibre post-placement. Thirty root segments were shaped to size 40, 0.04 taper and filled with: (1) InnoEndo obturators; (2) Resilon/24h-delayed FibreKor post-cementation. Obturator, root filling and post-cementation procedures were performed using InnoEndo bonding agent/dual-cured root canal sealer. Fluid flow rate through the filled roots was evaluated at 10psi using a computerised fluid filtration model before root resection and after 3 and 9mm apical resections. Fluid flow data were analysed using two-way repeated measures ANOVA and Tukey test to examine the effects of root-filling post-placement techniques and root resection lengths on fluid leakage from the filled canals (alpha=0.05). A significantly greater amount of fluid leakage was observed with the one-step technique when compared with two-step technique. No difference in fluid leakage was observed among intact canals and canals resected at different lengths for both materials. The seal of root canals achieved with the one-step obturator is less effective than separate Resilon root fillings followed by a 24-h delay prior to the fibre post-placement. Incomplete setting of the sealer and restricted relief of polymerisation shrinkage stresses may be responsible for the inferior seal of the one-step root-filling/post-restoration technique. Copyright 2010 Elsevier Ltd. All rights reserved.

  2. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions

    NASA Astrophysics Data System (ADS)

    Song, Bongyong; Park, Justin C.; Song, William Y.

    2014-11-01

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast-converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3D CBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence properties compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs an image of standard, 364-projection FDK reconstruction quality using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces a visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.
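
    The core of any BB-type scheme is the 2-point step size computed from successive iterates and gradients. The sketch below applies it within gradient projection on a toy nonnegative least-squares problem; the objective is an assumption standing in for the smoothed-TV CBCT cost, and the selective-function-evaluation logic of GPBB-SFE is not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective f(x) = 0.5*||A x - b||^2 subject to x >= 0, standing in
# for the (much larger, smoothed-TV) CBCT cost; GP with BB step sizes.
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
grad = lambda x: A.T @ (A @ x - b)
project = lambda x: np.maximum(x, 0.0)    # projection onto the feasible set

x = project(rng.standard_normal(20))
g = grad(x)
alpha = 1e-3                               # conservative first step
for _ in range(100):
    x_new = project(x - alpha * g)
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    if abs(s @ y) > 1e-12:
        alpha = (s @ s) / (s @ y)          # BB 2-point step size
    x, g = x_new, g_new

print("constrained residual:", np.linalg.norm(A @ x - b))
```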

  3. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions.

    PubMed

    Song, Bongyong; Park, Justin C; Song, William Y

    2014-11-07

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast-converging gradient projection BB method that requires 'at most one function evaluation' in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a 'smoothed TV' or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3D CBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence properties compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs an image of standard, 364-projection FDK reconstruction quality using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces a visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.

  4. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
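
    The splitting idea can be seen in miniature with a first-order IMEX Euler step on a toy ODE whose stiff linear part (the analogue of the acoustic terms) is treated implicitly; the ARK methods studied in the paper are higher-order multi-stage versions of the same idea. The test equation below is an assumption for illustration only.

```python
import numpy as np

# Toy split ODE y' = L*y + N(y): the stiff linear part L (the analogue of
# the acoustic terms) is treated implicitly, the nonstiff N(y) explicitly.
L = -1000.0
N = lambda y: np.sin(y)

def imex_euler(y0, dt, t_end):
    y = y0
    for _ in range(int(round(t_end / dt))):
        # explicit on N, implicit on L:  (1 - dt*L) y_new = y + dt*N(y)
        y = (y + dt * N(y)) / (1.0 - dt * L)
    return y

# Stable even though dt is far above the explicit limit |dt*L| < 2
print(imex_euler(1.0, dt=0.05, t_end=1.0))
```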

  5. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  6. Is the size of the useful field of view affected by postural demands associated with standing and stepping?

    PubMed

    Reed-Jones, James G; Reed-Jones, Rebecca J; Hollands, Mark A

    2014-04-30

    The useful field of view (UFOV) is the visual area from which information is obtained at a brief glance. While studies have examined the effects of increased cognitive load on the visual field, no one has specifically looked at the effects of postural control or locomotor activity on the UFOV. The current study aimed to examine the effects of postural demand and locomotor activity on UFOV performance in healthy young adults. Eleven participants were tested on three modified UFOV tasks (central processing, peripheral processing, and divided-attention) while seated, standing, and stepping in place. Across all postural conditions, participants showed no difference in their central or peripheral processing. However, in the divided-attention task (reporting the letter in central vision and target location in peripheral vision amongst distracter items) a main effect of posture condition on peripheral target accuracy was found for targets at 57° of eccentricity (p=.037). The mean accuracy reduced from 80.5% (standing) to 74% (seated) to 56.3% (stepping). These findings show that postural demands do affect UFOV divided-attention performance. In particular, the size of the useful field of view significantly decreases when stepping. This finding has important implications for how the results of a UFOV test are used to evaluate the general size of the UFOV during varying activities, as the traditional seated test procedure may overestimate the size of the UFOV during locomotor activities. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  8. Thermally conductive of nanofluid from surfactant doped polyaniline nanoparticle and deep eutectic ionic liquid

    NASA Astrophysics Data System (ADS)

    Siong, Chew Tze; Daik, Rusli; Hamid, Muhammad Azmi Abdul

    2014-09-01

    A nanofluid is a colloidal suspension of nano-sized particles in a fluid. Spherical dodecylbenzenesulfonic acid doped polyaniline (DBSA-PANI) nanoparticles with an average size of 50-60 nm were synthesized via reverse micellar polymerization in isooctane. The aim of this study is to explore the possibility of using a deep eutectic ionic liquid (DES) as a new base fluid in heat transfer applications. The DES was prepared by heating up choline chloride and urea with stirring. DES-based nanofluids containing DBSA-PANI nanoparticles were prepared using the two-step method. The thermal conductivity of the nanofluids was measured using a KD2 Pro Thermal Properties Analyzer. When incorporated with DBSA-PANI nanoparticles, DES with water was found to exhibit a larger increase in thermal conductivity than the pure DES. The thermal conductivity of DES with water was increased by 4.67% when incorporated with 0.2 wt% of DBSA-PANI nanoparticles at 50°C. The enhancement in thermal conductivity of DES-based nanofluids is possibly related to the Brownian motion of the nanoparticles, micro-convection of the base fluid, and interactions between the dopants and DES ions.

  9. Preparation of epoxy-based macroporous monolithic columns for the fast and efficient immunofiltration of Staphylococcus aureus.

    PubMed

    Ott, Sonja; Niessner, Reinhard; Seidel, Michael

    2011-08-01

    Macroporous epoxy-based monolithic columns were used for the immunofiltration of bacteria. The prepared monolithic polymer support is hydrophilic and has large pore sizes of 21 μm without mesopores. A surface chemistry usually applied for the immobilization of antibodies on glass slides is successfully transferred to monolithic columns. Step by step, the surface of the epoxy-based monolith is hydrolyzed, silanized, coated with poly(ethylene glycol diamine) and activated with the homobifunctional crosslinker di(N-succinimidyl)carbonate for immobilization of antibodies on the monolithic columns. The functionalization steps are characterized to verify the coating of each monolayer. The prepared antibody-immobilized monolithic column is optimized for immunofiltration to enrich Staphylococcus aureus as an important food contaminant. Different monolithic column geometries, flow rates and elution buffers are tested with the goal of achieving high recoveries in the shortest possible enrichment time. Effective capture of S. aureus was achieved at a flow rate of 7.0 mL/min with low backpressures of 20.1±5.4 mbar, enabling a 1000-fold volumetric enrichment within 145 min. The bacteria were quantified by flow cytometry using a double-labeling approach. After immunofiltration the sensitivity was significantly increased and a detection limit of the total system of 42 S. aureus/mL was reached. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    DOE PAGES

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; ...

    2017-06-09

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps / cpu-time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. Lastly, as shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.

  11. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya

    2017-10-01

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
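
    A toy rejection-free KMC loop with order-of-magnitude rank throttling conveys the idea. The scaling rule below is a simplified stand-in, not the actual SQERTSS staggered quasi-equilibrium algorithm, and the rates are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative rejection-free KMC with order-of-magnitude rank throttling.
# The scaling rule is a simplified stand-in for SQERTSS; rates are made up.
rates = np.array([1e6, 1e6, 1e2, 1.0])    # two fast frivolous processes

ranks = np.floor(np.log10(rates / rates.min()))
throttle = 10.0 ** (-ranks)               # damp each rank toward the slowest
eff = rates * throttle                    # throttled transition rates

counts = np.zeros(len(rates))
t = 0.0
for _ in range(10_000):
    total = eff.sum()
    i = rng.choice(len(eff), p=eff / total)
    t += rng.exponential(1.0 / total)     # time advance (distorted by
    counts[i] += 1                        # throttling, as for transients)

print("event shares:", counts / counts.sum())
print("simulated (throttled) time:", t)
```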

  12. Generation of dense granular deposits for porosity analysis: assessment and application of large-scale non-smooth granular dynamics

    NASA Astrophysics Data System (ADS)

    Schruff, T.; Liang, R.; Rüde, U.; Schüttrumpf, H.; Frings, R. M.

    2018-01-01

    The knowledge of structural properties of granular materials such as porosity is highly important in many application-oriented and scientific fields. In this paper we present new results of computer-based packing simulations in which we use the non-smooth granular dynamics (NSGD) method to simulate gravitational random dense packing of spherical particles with various particle size distributions and two types of depositional conditions. A bin packing scenario was used to compare simulation results to laboratory porosity measurements and to quantify the sensitivity of NSGD to critical simulation parameters such as time step size. The results of the bin packing simulations agree well with laboratory measurements across all particle size distributions, with all absolute errors below 1%. A large-scale packing scenario with periodic side walls was used to simulate the packing of up to 855,600 spherical particles with various particle size distributions (PSD). Simulation outcomes are used to quantify the effect of the particle-to-domain-size ratio on packing compaction. A simple correction model, based on the coordination number, is employed to compensate for this effect on the porosity and to determine the relationship between PSD and porosity. Promising accuracy and stability results paired with excellent computational performance recommend the application of NSGD for large-scale packing simulations, e.g. to further enhance the generation of representative granular deposits.
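
    For a sense of the porosity measurement itself, the sketch below builds a loose periodic packing of equal spheres by random sequential addition and estimates porosity by Monte Carlo point sampling. RSA yields much looser packings than the gravitational NSGD deposits of the paper; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Loose periodic packing of equal spheres by random sequential addition
# (much looser than the paper's gravitational NSGD deposits), then a
# Monte Carlo porosity estimate; all parameters are illustrative.
L_box, r, n_target = 10.0, 0.5, 300
centers = []
while len(centers) < n_target:
    c = rng.uniform(0, L_box, 3)
    if centers:
        d = np.asarray(centers) - c
        d -= L_box * np.round(d / L_box)          # minimum-image convention
        if (np.einsum('ij,ij->i', d, d) < (2 * r) ** 2).any():
            continue                              # overlap: reject and retry
    centers.append(c)

pts = rng.uniform(0, L_box, (50_000, 3))          # porosity sample points
inside = np.zeros(len(pts), dtype=bool)
for c in centers:
    d = pts - c
    d -= L_box * np.round(d / L_box)
    inside |= (d * d).sum(axis=1) < r * r

print("porosity ~", 1.0 - inside.mean())          # expect ~0.84 here
```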

  13. 16 CFR 642.3 - Prescreen opt-out notice.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...

  14. 16 CFR 642.3 - Prescreen opt-out notice.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...

  15. 16 CFR 642.3 - Prescreen opt-out notice.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...

  16. 3D Product Development for Loose-Fitting Garments Based on Parametric Human Models

    NASA Astrophysics Data System (ADS)

    Krzywinski, S.; Siegmund, J.

    2017-10-01

    Researchers and commercial suppliers worldwide pursue the objective of achieving a more transparent garment construction process that is computationally linked to a virtual body, in order to save development costs over the long term. The current aim is not to transfer the complete pattern making step to a 3D design environment but to work out basic constructions in 3D that provide excellent fit due to their accurate construction and morphological pattern grading (automatic change of sizes in 3D) with respect to sizes and body types. After a computer-aided derivation of 2D pattern parts, these can be made available to the industry as a basis on which to create more fashionable variations.

  17. Scanning near-field optical microscopy.

    PubMed

    Vobornik, Dusan; Vobornik, Slavenka

    2008-02-01

    An average human eye can see details down to 0.07 mm in size. The ability to see ever smaller details of matter has gone hand in hand with the development of science and the comprehension of nature. Today's science needs eyes for the nano-world. Examples are easily found in biology and the medical sciences. There is a great need to determine the shape, size, chemical composition, molecular structure and dynamic properties of nano-structures. To do this, microscopes with high spatial, spectral and temporal resolution are required. Scanning Near-field Optical Microscopy (SNOM) is a new step in the evolution of microscopy. Conventional, lens-based microscopes have their resolution limited by diffraction. SNOM is not subject to this limitation and can offer up to 70 times better resolution.

  18. Statistical Estimation of Orbital Debris Populations with a Spectrum of Object Size

    NASA Technical Reports Server (NTRS)

    Xu, Y. -l; Horstman, M.; Krisko, P. H.; Liou, J. -C; Matney, M.; Stansbery, E. G.; Stokely, C. L.; Whitlock, D.

    2008-01-01

    Orbital debris is a real concern for the safe operations of satellites. In general, the hazard of debris impact is a function of the size and spatial distributions of the debris populations. To describe and characterize the debris environment as reliably as possible, the current NASA Orbital Debris Engineering Model (ORDEM2000) is being upgraded to a new version based on new and better quality data. The data-driven ORDEM model covers a wide range of object sizes from 10 μm to greater than 1 m. This paper reviews the statistical process for the estimation of the debris populations in the new ORDEM upgrade, and discusses the representation of large-size (≥1 m and ≥10 cm) populations by SSN catalog objects and the validation of the statistical approach. It also presents results for the populations with sizes ≥3.3 cm, ≥1 cm, ≥100 μm, and ≥10 μm. The orbital debris populations used in the new version of ORDEM are inferred from data based upon appropriate reference (or benchmark) populations instead of the binning of the multi-dimensional orbital-element space. This paper describes all of the major steps used in the population-inference procedure for each size range. Detailed discussions on data analysis, parameter definition, the correlation between parameters and data, and uncertainty assessment are included.

  19. WAKES: Wavelet Adaptive Kinetic Evolution Solvers

    NASA Astrophysics Data System (ADS)

    Mardirian, Marine; Afeyan, Bedros; Larson, David

    2016-10-01

    We are developing a general capability to adaptively solve phase space evolution equations, mixing particle and continuum techniques. The multi-scale approach is achieved using wavelet decompositions, which allow phase space density estimation to occur with scale-dependent increased accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step size control in explicit vs. implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair plasma generalization of the former and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have no fluid limit nor a linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a Grant from the AFOSR.

  20. Protein complex purification from Thermoplasma acidophilum using a phage display library.

    PubMed

    Hubert, Agnes; Mitani, Yasuo; Tamura, Tomohiro; Boicu, Marius; Nagy, István

    2014-03-01

    We developed a novel protein complex isolation method using a single-chain variable fragment (scFv) based phage display library in a two-step purification procedure. We adapted the antibody-based phage display technology which has been developed for single target proteins to a protein mixture containing about 300 proteins, mostly subunits of Thermoplasma acidophilum complexes. T. acidophilum protein specific phages were selected and corresponding scFvs were expressed in Escherichia coli. E. coli cell lysate containing the expressed His-tagged scFv specific against one antigen protein and T. acidophilum crude cell lysate containing intact target protein complexes were mixed, incubated and subjected to protein purification using affinity and size exclusion chromatography steps. This method was confirmed to isolate intact particles of thermosome and proteasome suitable for electron microscopy analysis and provides a novel protein complex isolation strategy applicable to organisms where no genetic tools are available. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    NASA Astrophysics Data System (ADS)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is fundamentally limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
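
    Once at least two beads are matched between neighboring tiles, a rigid transform can be recovered in closed form. The sketch below uses the standard Kabsch/Procrustes solution on synthetic 2D bead coordinates; it is a generic stand-in for the paper's registration step, with correspondences assumed known and all positions made up.

```python
import numpy as np

rng = np.random.default_rng(4)

def rigid_register(P, Q):
    """Least-squares rotation R and translation t with Q ~ P @ R.T + t
    (Kabsch/Procrustes), from matched bead coordinates in two tiles.
    Correspondences are assumed known; a stand-in for the paper's step."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - P.mean(0) @ R.T
    return R, t

# Synthetic bead positions in tile A and the same beads in tile B after a
# made-up stage rotation and translation (values are hypothetical).
beads_A = rng.uniform(0, 100, (6, 2))
th = np.deg2rad(3.0)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
beads_B = beads_A @ R_true.T + np.array([250.0, -4.0])

R, t = rigid_register(beads_A, beads_B)
print("recovered angle (deg):", np.rad2deg(np.arctan2(R[1, 0], R[0, 0])))
print("recovered shift:", t)
```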

  2. Advanced Extraction of Spatial Information from High Resolution Satellite Data

    NASA Astrophysics Data System (ADS)

    Pour, T.; Burian, J.; Miřijovský, J.

    2016-06-01

    In this paper, the authors processed five satellite images of five different Middle-European cities, each taken by a different sensor. The aim of the paper was to find methods and approaches leading to the evaluation and extraction of spatial data from areas of interest. For this reason, the data were first pre-processed using image fusion, mosaicking and segmentation processes. The results going into the next step were two polygon layers: the first representing single objects and the second representing city blocks. In the second step, the polygon layers were classified and exported into Esri shapefile format. Classification was partly hierarchical and expert-based, and partly based on the SEaTH tool for separability distinction and thresholding. Final results along with visual previews were attached to the original thesis. The results are evaluated visually and statistically in the last part of the paper. In the discussion, the authors describe the difficulties of working with large datasets taken by different sensors and differing thematically.

  3. Growth from Solutions: Kink dynamics, Stoichiometry, Face Kinetics and stability in turbulent flow

    NASA Technical Reports Server (NTRS)

    Chernov, A. A.; DeYoreo, J. J.; Rashkovich, L. N.; Vekilov, P. G.

    2005-01-01

    1. Kink dynamics. The first segment of a polygonized dislocation spiral step measured by AFM demonstrates up to 60% scatter in the critical length l*, the length at which the segment starts to propagate. On orthorhombic lysozyme, this length is shorter than the observed interkink distance. The step energy obtained from the critical segment length via the Gibbs-Thomson law (GTL), l* = 2Ωα/Δμ, is several times larger than the energy obtained from the 2D nucleation rate. Here Ω is the building block specific volume, α is the step riser specific free energy, and Δμ is the crystallization driving force. These new data support our earlier assumption that the classical Frenkel and Burton-Cabrera-Frank concept of abundant kink supply by fluctuations is not applicable to strongly polygonized steps. Step rate measurements on brushite confirm that statement. It is the 1D nucleation of kinks that controls step propagation. The GTL is valid only if l*
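
    For scale, the reconstructed Gibbs-Thomson expression can be evaluated directly; the inputs below are hypothetical order-of-magnitude values for a protein crystal, chosen only to show the arithmetic and units, not values from the abstract.

```python
# Gibbs-Thomson critical segment length, l* = 2*Omega*alpha/delta_mu.
# All inputs are hypothetical order-of-magnitude values for a protein
# crystal, chosen only to show the arithmetic and units.
omega = 3.0e-26     # building-block specific volume, m^3
alpha = 1.0e-3      # step riser specific free energy, J/m^2
d_mu = 2.0e-21      # crystallization driving force per block, J

l_star = 2.0 * omega * alpha / d_mu   # -> metres
print(f"l* = {l_star * 1e9:.0f} nm")  # 30 nm for these inputs
```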

  4. Dislocation-induced Charges in Quantum Dots: Step Alignment and Radiative Emission

    NASA Technical Reports Server (NTRS)

    Leon, R.; Okuno, J.; Lawton, R.; Stevens-Kalceff, M.; Phillips, M.; Zou, J.; Cockayne, D.; Lobo, C.

    1999-01-01

    A transition between two types of step alignment was observed in a multilayered InGaAs/GaAs quantum-dot (QD) structure. A change to larger QD sizes in smaller concentrations occurred after formation of a dislocation array.

  5. Solar kerosene from H2O and CO2

    NASA Astrophysics Data System (ADS)

    Furler, P.; Marxer, D.; Scheffe, J.; Reinalda, D.; Geerlings, H.; Falter, C.; Batteiger, V.; Sizmann, A.; Steinfeld, A.

    2017-06-01

    The entire production chain for renewable kerosene obtained directly from sunlight, H2O, and CO2 is experimentally demonstrated. The key component of the production process is a high-temperature solar reactor containing a reticulated porous ceramic (RPC) structure made of ceria, which enables the splitting of H2O and CO2 via a 2-step thermochemical redox cycle. In the 1st reduction step, ceria is endothermally reduced using concentrated solar radiation as the energy source of process heat. In the 2nd oxidation step, nonstoichiometric ceria reacts with H2O and CO2 to form H2 and CO (syngas), which is finally converted into kerosene by the Fischer-Tropsch process. The RPC features dual-scale porosity for enhanced heat and mass transfer: mm-size pores for volumetric radiation absorption during the reduction step and μm-size pores within its struts for fast kinetics during the oxidation step. We report on the engineering design of the solar reactor and the experimental demonstration of over 290 consecutive redox cycles producing high-quality syngas suitable for the processing of liquid hydrocarbon fuels.

  6. Software forecasting as it is really done: A study of JPL software engineers

    NASA Technical Reports Server (NTRS)

    Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.

    1993-01-01

    This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol Analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed a clustering of activities that is very suggestive of a forecasting lifecycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include: the identification of a core set of well-defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.

  7. Carbon dioxide sequestration using NaHSO4 and NaOH: A dissolution and carbonation optimisation study.

    PubMed

    Sanna, Aimaro; Steel, Luc; Maroto-Valer, M Mercedes

    2017-03-15

    The use of NaHSO4 to leach out Mg from lizardite-rich serpentinite (in the form of MgSO4) and the carbonation of CO2 (captured in the form of Na2CO3 using NaOH) to form MgCO3 and Na2SO4 was investigated. Unlike ammonium sulphate, sodium sulphate can be separated via precipitation during the recycling step, avoiding the energy-intensive evaporation process required in NH4-based processes. To determine the effectiveness of the NaHSO4/NaOH process when applied to lizardite, the dissolution and carbonation steps were optimised using a UK lizardite-rich serpentine. Temperature, solid/liquid ratio, particle size, concentration and molar ratio were evaluated. An optimal dissolution efficiency of 69.6% was achieved over 3 h at 100 °C using 1.4 M sodium bisulphate and 50 g/l serpentine with particle size 75-150 μm. An optimal carbonation efficiency of 95.4% was achieved over 30 min at 90 °C and a 1:1 magnesium:sodium carbonate molar ratio using non-synthesised solution. The CO2 sequestration capacity was 223.6 g carbon dioxide/kg serpentine (66.4% in terms of Mg bonded to hydromagnesite), which is comparable with those obtained using ammonium-based processes. Therefore, lizardite-rich serpentinites represent a valuable resource for the NaHSO4/NaOH based pH swing mineralisation process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.

    PubMed

    Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué

    2018-02-15

    We present a novel method for characterizing, in near real time, the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Due to the simplicity of the principle of operation, this method has the potential of circumventing potential biases of current real-time particle analyzers (e.g. Time of Flight analysis), while offering a cost-effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we have conducted benchmark experiments whereby aerodynamic particle size distributions are obtained from several commercially-available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) that are closely in line with those obtained from Time of Flight analysis and cascade impactors, suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
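
    In the Stokes regime, the aerodynamic diameter follows directly from the measured terminal settling velocity. The sketch below shows that inversion; slip correction is neglected and the velocities are made-up examples, so this illustrates the principle rather than the instrument's actual calibration.

```python
import numpy as np

def aerodynamic_diameter(v_ts, mu=1.81e-5, rho0=1000.0, g=9.81):
    """Aerodynamic diameter (m) from a terminal settling velocity (m/s)
    by inverting Stokes' law, v_ts = rho0 * g * d**2 / (18 * mu).
    Slip correction is neglected, which biases the smallest sizes."""
    return np.sqrt(18.0 * mu * v_ts / (rho0 * g))

# Made-up settling velocities as tracked from the image sequences (m/s)
v = np.array([3.0e-4, 7.5e-4, 1.2e-3])
print(aerodynamic_diameter(v) * 1e6, "um")   # ~3.2, 5.0, 6.3 um
```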

  9. Improved scaling of temperature-accelerated dynamics using localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shim, Yunsic; Amar, Jacques G.

    While temperature-accelerated dynamics (TAD) is a powerful method for carrying out non-equilibrium simulations of systems over extended time scales, the computational cost of serial TAD increases approximately as N^3 where N is the number of atoms. In addition, although a parallel TAD method based on domain decomposition [Y. Shim et al., Phys. Rev. B 76, 205439 (2007)] has been shown to provide significantly improved scaling, the dynamics in such an approach is only approximate while the size of activated events is limited by the spatial decomposition size. Accordingly, it is of interest to develop methods to improve the scaling of serial TAD. As a first step in understanding the factors which determine the scaling behavior, we first present results for the overall scaling of serial TAD and its components, which were obtained from simulations of Ag/Ag(100) growth and Ag/Ag(100) annealing, and compare with theoretical predictions. We then discuss two methods based on localization which may be used to address two of the primary “bottlenecks” to the scaling of serial TAD with system size. By implementing both of these methods, we find that for intermediate system sizes, the scaling is improved by almost a factor of N^(1/2). Some additional possible methods to improve the scaling of TAD are also discussed.

  10. Improved scaling of temperature-accelerated dynamics using localization

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Amar, Jacques G.

    2016-07-01

    While temperature-accelerated dynamics (TAD) is a powerful method for carrying out non-equilibrium simulations of systems over extended time scales, the computational cost of serial TAD increases approximately as N^3 where N is the number of atoms. In addition, although a parallel TAD method based on domain decomposition [Y. Shim et al., Phys. Rev. B 76, 205439 (2007)] has been shown to provide significantly improved scaling, the dynamics in such an approach is only approximate while the size of activated events is limited by the spatial decomposition size. Accordingly, it is of interest to develop methods to improve the scaling of serial TAD. As a first step in understanding the factors which determine the scaling behavior, we first present results for the overall scaling of serial TAD and its components, which were obtained from simulations of Ag/Ag(100) growth and Ag/Ag(100) annealing, and compare with theoretical predictions. We then discuss two methods based on localization which may be used to address two of the primary "bottlenecks" to the scaling of serial TAD with system size. By implementing both of these methods, we find that for intermediate system sizes, the scaling is improved by almost a factor of N^(1/2). Some additional possible methods to improve the scaling of TAD are also discussed.

  11. Mendelian Randomization.

    PubMed

    Grover, Sandeep; Del Greco M, Fabiola; Stein, Catherine M; Ziegler, Andreas

    2017-01-01

    Confounding and reverse causality have prevented us from drawing meaningful clinical interpretations even from well-powered observational studies. Confounding may be attributed to our inability to randomize the exposure variable in observational studies. Mendelian randomization (MR) is one approach to overcome confounding. It utilizes one or more genetic polymorphisms as a proxy for the exposure variable of interest. Polymorphisms are randomly distributed in a population and static throughout an individual's lifetime, and may thus help in inferring directionality in exposure-outcome associations. Genome-wide association studies (GWAS) or meta-analyses of GWAS are characterized by large sample sizes and the availability of many single nucleotide polymorphisms (SNPs), making GWAS-based MR an attractive approach. GWAS-based MR comes with specific challenges, including multiple causality. Despite these shortcomings, it still remains one of the most powerful techniques for inferring causality. With MR still an evolving concept with complex statistical challenges, the literature is relatively scarce in terms of providing working examples incorporating real datasets. In this chapter, we provide a step-by-step guide for causal inference based on the principles of MR with a real dataset, using both individual and summary data from unrelated individuals. We suggest best possible practices and give recommendations based on the current literature.
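
    As a working example of summary-data MR of the kind such a guide walks through, the sketch below combines per-SNP Wald ratios with inverse-variance weights. The summary statistics are made up for illustration, and a real analysis would add instrument-validity and pleiotropy diagnostics.

```python
import numpy as np

def ivw_estimate(beta_exp, beta_out, se_out):
    """Inverse-variance-weighted MR estimate from GWAS summary statistics.
    Per-SNP Wald ratios beta_out/beta_exp are combined with first-order
    weights; real analyses add pleiotropy diagnostics (e.g. MR-Egger)."""
    wald = beta_out / beta_exp
    se_wald = se_out / np.abs(beta_exp)      # first-order delta method
    w = se_wald**-2
    est = np.sum(w * wald) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

# Made-up summary statistics for four independent instruments
bx = np.array([0.12, 0.08, 0.15, 0.10])     # SNP-exposure effects
by = np.array([0.024, 0.018, 0.033, 0.019]) # SNP-outcome effects
sy = np.array([0.005, 0.005, 0.006, 0.004]) # outcome standard errors

est, se = ivw_estimate(bx, by, sy)
print(f"causal effect estimate: {est:.3f} +/- {se:.3f}")
```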

  12. Annealing of Solar Cells and Other Thin Film Devices

    NASA Technical Reports Server (NTRS)

    Escobar, Hector; Kuhlman, Franz; Dils, D. W.; Lush, G. B.; Mackey, Willie R. (Technical Monitor)

    2001-01-01

    Annealing is a key step in most semiconductor fabrication processes, especially for thin films, where annealing enhances performance by healing defects and increasing grain sizes. We have employed a new annealing oven for the annealing of CdTe-based solar cells and have been using this system in an attempt to grow CdS on top of CdTe by annealing in the presence of H2S gas. Preliminary results of this process on CdTe solar cells and other thin-film devices will be presented.

  13. Recursive Factorization of the Inverse Overlap Matrix in Linear-Scaling Quantum Molecular Dynamics Simulations.

    PubMed

    Negre, Christian F A; Mniszewski, Susan M; Cawkwell, Marc J; Bock, Nicolas; Wall, Michael E; Niklasson, Anders M N

    2016-07-12

    We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the diagonalization of overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.
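
    One concrete way to refine Z toward S^(-1/2) iteratively is the Newton-Schulz iteration, sketched densely below. This is an illustration of the general technique; the paper's scheme, its initial-guess propagation and its thresholded sparse linear-scaling implementation differ in detail.

```python
import numpy as np

def inv_sqrt_newton_schulz(S, tol=1e-10, max_iter=100):
    """Refine Z toward S^(-1/2) with the Newton-Schulz iteration
    Z <- 0.5 * Z @ (3I - Z.T @ S @ Z). A dense illustration only; the
    paper's scheme, initial guess and sparse thresholding differ."""
    n = S.shape[0]
    I = np.eye(n)
    Z = I / np.sqrt(np.linalg.norm(S, 2))   # scaled guess ensures contraction
    for _ in range(max_iter):
        Y = Z.T @ S @ Z
        if np.linalg.norm(Y - I) < tol:
            break
        Z = 0.5 * Z @ (3.0 * I - Y)
    return Z

# Random symmetric positive-definite stand-in for an overlap matrix S
rng = np.random.default_rng(5)
A = rng.standard_normal((50, 50))
S = A @ A.T + 50.0 * np.eye(50)

Z = inv_sqrt_newton_schulz(S)
print("||Z.T S Z - I|| =", np.linalg.norm(Z.T @ S @ Z - np.eye(50)))
```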

  14. Recursive Factorization of the Inverse Overlap Matrix in Linear Scaling Quantum Molecular Dynamics Simulations

    DOE PAGES

    Negre, Christian F. A; Mniszewski, Susan M.; Cawkwell, Marc Jon; ...

    2016-06-06

    We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive iterative refinement of an initial guess Z of the inverse overlap matrix S. The initial guess of Z is obtained beforehand either by using an approximate divide-and-conquer technique or dynamically, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under incomplete approximate iterative refinement of Z. Linear scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared memory parallelization. As we show in this article using self-consistent density functional based tight-binding MD, our approach is faster than conventional methods based on the direct diagonalization of the overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4,158 atom water-solvated polyalanine system we find an average speedup factor of 122 for the computation of Z in each MD step.

  15. Preparation of alumina-hercynite nano-spinel via one-step thermal conversion of Fe-doped metal-organic framework MIL-53(Al)

    NASA Astrophysics Data System (ADS)

    Chen, Shuyi; Lu, Huigong; Wu, Yi-nan; Gu, Yifan; Li, Fengting; Morlay, Catherine

    2016-09-01

    Alumina-hercynite nano-spinel powders were prepared via one-step pyrolysis of an iron-acetylacetone-doped Al-based metal-organic framework (MOF), i.e., MIL-53(Al). The organic ferric source, iron acetylacetone, was incorporated in situ into the framework of MIL-53(Al) during the solvothermal synthesis process. Under high-temperature pyrolysis, alumina derived from the MIL-53(Al) matrix and ferric oxides originating from the decomposition of the organic ferric precursor in the framework were thermally converted into hercynite (FeAl2O4). The prepared samples were characterized using transmission electron microscopy, X-ray diffraction, N2 sorption, thermogravimetry, Raman spectroscopy and X-ray photoelectron spectroscopy. The final products were identified to be composed of alumina, hercynite and trace amounts of carbon, depending on pyrolysis temperature. The experimental results showed that the hercynite phase can be obtained and stabilized at low temperatures between 900 and 1100 °C under an inert atmosphere. The final products were composed of nano-sized particles with an average individual crystal size below 100 nm and specific surface areas of 18-49 m2 g-1.

  16. Delivery of high intensity beams with large clad step-index fibers for engine ignition

    NASA Astrophysics Data System (ADS)

    Joshi, Sachin; Wilvert, Nick; Yalin, Azer P.

    2012-09-01

    We show, for the first time, that step-index silica fibers with a large clad (400 μm core and 720 μm clad) can be used to transmit nanosecond duration pulses in a way that allows reliable (consistent) spark formation in atmospheric pressure air by the focused output light from the fiber. The high intensity (>100 GW/cm2) of the focused output light is due to the combination of high output power (typical of fibers of this core size) with high output beam quality (better than that typical of fibers of this core size). The high output beam quality, which enables tight focusing, is due to the large clad which suppresses microbending-induced diffusion of modal power to higher order modes owing to the increased rigidity of the core-clad interface. We also show that extending the pulse duration provides a means to increase the delivered pulse energy (>20 mJ delivered for 50 ns pulses) without causing fiber damage. Based on this ability to deliver high energy sparks, we report the first reliable laser ignition of a natural gas engine including startup under typical procedures using silica fiber optics for pulse delivery.
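
    As a quick plausibility check on the reported figures, the following back-of-the-envelope computation combines the delivered energy and pulse duration from the record with an assumed focal spot radius (the ~11 μm value below is our illustrative assumption, not a reported number):

        import math

        E_pulse = 20e-3     # J, delivered pulse energy (from the record)
        tau = 50e-9         # s, pulse duration (from the record)
        r_spot = 11e-4      # cm, assumed focal spot radius (~11 um)

        peak_power = E_pulse / tau                       # 4.0e5 W
        intensity = peak_power / (math.pi * r_spot ** 2)
        print(f"{intensity / 1e9:.0f} GW/cm^2")          # ~105, consistent with >100 GW/cm^2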

  17. Residual urinary extracellular vesicles in ultracentrifugation supernatants after hydrostatic filtration dialysis enrichment.

    PubMed

    Musante, Luca; Tataruch-Weinert, Dorota; Kerjaschki, Dontscho; Henry, Michael; Meleady, Paula; Holthofer, Harry

    2017-01-01

    Urinary extracellular vesicles (UEVs) appear to be an ideal source of biomarkers for kidney and urogenital diseases. The majority of protocols designed for their isolation are based on differential centrifugation steps. However, little is still known of the type and amount of vesicles left in the supernatant. Here we used an isolation protocol for UEVs which uses hydrostatic filtration dialysis as a first pre-enrichment step, followed by differential centrifugation. Transmission electron microscopy (TEM), mass spectrometry (MS), western blot, ELISA assays and tuneable resistive pulse sensing (TRPS) were used to characterise and quantify UEVs in the ultracentrifugation supernatant. TEM showed the presence of a variety of small size vesicles in the supernatant while protein identification by MS matched accurately with the protein list available in Vesiclepedia. Screening and relative quantification for specific vesicle markers showed that the supernatant was preferentially positive for CD9 and TSG101. ELISA tests for quantification of exosomes revealed that 14% was left in the supernatant, with a particle diameter of 110 nm and concentration of 1.54 × 10^10/ml. Here we show a comprehensive characterisation of exosomes and other small size urinary vesicles which the conventional differential centrifugation protocol may lose.

  18. Residual urinary extracellular vesicles in ultracentrifugation supernatants after hydrostatic filtration dialysis enrichment

    PubMed Central

    Musante, Luca; Tataruch-Weinert, Dorota; Kerjaschki, Dontscho; Henry, Michael; Meleady, Paula; Holthofer, Harry

    2017-01-01

    Urinary extracellular vesicles (UEVs) appear to be an ideal source of biomarkers for kidney and urogenital diseases. The majority of protocols designed for their isolation are based on differential centrifugation steps. However, little is still known of the type and amount of vesicles left in the supernatant. Here we used an isolation protocol for UEVs which uses hydrostatic filtration dialysis as a first pre-enrichment step, followed by differential centrifugation. Transmission electron microscopy (TEM), mass spectrometry (MS), western blot, ELISA assays and tuneable resistive pulse sensing (TRPS) were used to characterise and quantify UEVs in the ultracentrifugation supernatant. TEM showed the presence of a variety of small size vesicles in the supernatant while protein identification by MS matched accurately with the protein list available in Vesiclepedia. Screening and relative quantification for specific vesicle markers showed that the supernatant was preferentially positive for CD9 and TSG101. ELISA tests for quantification of exosomes revealed that 14% was left in the supernatant, with a particle diameter of 110 nm and concentration of 1.54 × 10^10/ml. Here we show a comprehensive characterisation of exosomes and other small size urinary vesicles which the conventional differential centrifugation protocol may lose. PMID:28326167

  19. Lévy flight artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Sharma, Harish; Bansal, Jagdish Chand; Arya, K. V.; Yang, Xin-She

    2016-08-01

    The artificial bee colony (ABC) optimisation algorithm is a relatively simple and recent population-based probabilistic approach for global optimisation. The solution search equation of ABC is significantly influenced by a random quantity which helps in exploration at the cost of exploitation of the search space. In ABC, there is a high chance of skipping the true solution due to large step sizes. In order to balance diversity and convergence in ABC, a Lévy-flight-inspired search strategy is proposed and integrated with ABC. The proposed strategy, named Lévy Flight ABC (LFABC), has both local and global search capability simultaneously, which is achieved by tuning the Lévy flight parameters and thus automatically tuning the step sizes. In the LFABC, new solutions are generated around the best solution, which helps to enhance the exploitation capability of ABC. Furthermore, to improve the exploration capability, the number of scout bees is increased. The experiments on 20 test problems of different complexities and five real-world engineering optimisation problems show that the proposed strategy outperforms the basic ABC and recent variants of ABC, namely, Gbest-guided ABC, best-so-far ABC and modified ABC in most of the experiments.
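
    For illustration, Lévy-distributed step lengths are commonly drawn with Mantegna's algorithm; the sketch below generates such steps and applies a generic perturbation around the best solution. The update rule, scale and beta value are illustrative assumptions, not the exact LFABC equations:

        import math, random

        def levy_step(beta=1.5):
            """Draw one Levy-stable step length via Mantegna's algorithm."""
            sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                     / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = random.gauss(0.0, sigma)
            v = random.gauss(0.0, 1.0)
            return u / abs(v) ** (1 / beta)

        def candidate_near_best(best, scale=0.01, beta=1.5):
            """Mostly small moves (exploitation) with occasional long jumps (exploration)."""
            return [x + scale * levy_step(beta) for x in best]

        random.seed(1)
        print(candidate_near_best([0.0, 0.0]))

    Smaller beta produces heavier tails, i.e. more frequent long jumps, which is the knob the abstract describes for trading exploration against exploitation.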

  20. An exact and efficient first passage time algorithm for reaction-diffusion processes on a 2D-lattice

    NASA Astrophysics Data System (ADS)

    Bezzola, Andri; Bales, Benjamin B.; Alkire, Richard C.; Petzold, Linda R.

    2014-01-01

    We present an exact and efficient algorithm for reaction-diffusion-nucleation processes on a 2D-lattice. The algorithm makes use of first passage time (FPT) to replace the computationally intensive simulation of diffusion hops in KMC by larger jumps when particles are far away from step-edges or other particles. Our approach computes exact probability distributions of jump times and target locations in a closed-form formula, based on the eigenvectors and eigenvalues of the corresponding 1D transition matrix, maintaining atomic-scale resolution of resulting shapes of deposit islands. We have applied our method to three different test cases of electrodeposition: pure diffusional aggregation for large ranges of diffusivity rates and for simulation domain sizes of up to 4096×4096 sites, the effect of diffusivity on island shapes and sizes in combination with a KMC edge diffusion, and the calculation of an exclusion zone in front of a step-edge, confirming statistical equivalence to standard KMC simulations. The algorithm achieves significant speedup compared to standard KMC for cases where particles diffuse over long distances before nucleating with other particles or being captured by larger islands.
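
    The closed-form ingredient, a first-passage-time distribution obtained from the eigendecomposition of a 1D transition matrix, can be illustrated with a small toy: a continuous-time walk on a line with absorbing ends, whose survival function is inverted by bisection to draw exit times. This 1D reduction is our illustration, not the authors' 2D lattice implementation:

        import numpy as np

        def survival_factory(n_sites, hop_rate=1.0):
            """Survival probability S(t) for a 1D walk with absorbing boundaries."""
            Q = (np.diag(-2.0 * hop_rate * np.ones(n_sites))
                 + np.diag(hop_rate * np.ones(n_sites - 1), 1)
                 + np.diag(hop_rate * np.ones(n_sites - 1), -1))
            w, V = np.linalg.eigh(Q)                # symmetric tridiagonal generator

            def survival(t, start):
                c = V[start, :]                     # coefficients of the delta start state
                return float(np.sum(V @ (c * np.exp(w * t))))

            return survival

        def sample_fpt(survival, start, rng):
            """Invert S(t) = u by doubling plus bisection to draw one exit time."""
            u = rng.uniform(0.0, 1.0)
            lo, hi = 0.0, 1.0
            while survival(hi, start) > u:
                hi *= 2.0
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if survival(mid, start) > u else (lo, mid)
            return 0.5 * (lo + hi)

        rng = np.random.default_rng(0)
        S = survival_factory(n_sites=21)
        print(np.mean([sample_fpt(S, start=10, rng=rng) for _ in range(1000)]))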

  1. An exact and efficient first passage time algorithm for reaction–diffusion processes on a 2D-lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezzola, Andri, E-mail: andri.bezzola@gmail.com; Bales, Benjamin B., E-mail: bbbales2@gmail.com; Alkire, Richard C., E-mail: r-alkire@uiuc.edu

    2014-01-01

    We present an exact and efficient algorithm for reaction–diffusion–nucleation processes on a 2D-lattice. The algorithm makes use of first passage time (FPT) to replace the computationally intensive simulation of diffusion hops in KMC by larger jumps when particles are far away from step-edges or other particles. Our approach computes exact probability distributions of jump times and target locations in a closed-form formula, based on the eigenvectors and eigenvalues of the corresponding 1D transition matrix, maintaining atomic-scale resolution of resulting shapes of deposit islands. We have applied our method to three different test cases of electrodeposition: pure diffusional aggregation for large ranges of diffusivity rates and for simulation domain sizes of up to 4096×4096 sites, the effect of diffusivity on island shapes and sizes in combination with a KMC edge diffusion, and the calculation of an exclusion zone in front of a step-edge, confirming statistical equivalence to standard KMC simulations. The algorithm achieves significant speedup compared to standard KMC for cases where particles diffuse over long distances before nucleating with other particles or being captured by larger islands.

  2. Computer-based image analysis of one-dimensional electrophoretic gels used for the separation of DNA restriction fragments.

    PubMed Central

    Gray, A J; Beecher, D E; Olson, M V

    1984-01-01

    A stand-alone, interactive computer system has been developed that automates the analysis of ethidium bromide-stained agarose and acrylamide gels on which DNA restriction fragments have been separated by size. High-resolution digital images of the gels are obtained using a camera that contains a one-dimensional, 2048-pixel photodiode array that is mechanically translated through 2048 discrete steps in a direction perpendicular to the gel lanes. An automatic band-detection algorithm is used to establish the positions of the gel bands. A color-video graphics system, on which both the gel image and a variety of operator-controlled overlays are displayed, allows the operator to visualize and interact with critical stages of the analysis. The principal interactive steps involve defining the regions of the image that are to be analyzed and editing the results of the band-detection process. The system produces a machine-readable output file that contains the positions, intensities, and descriptive classifications of all the bands, as well as documentary information about the experiment. This file is normally further processed on a larger computer to obtain fragment-size assignments. PMID:6320097

  3. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data fidelity constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for data fidelity and non-negativity constraints and steepest descent for TV minimization. The novelty of this work is to determine iterative parameters automatically from data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are used to compare with the error bound to decide whether to perform ART so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
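
    A schematic of the alternating two-stage loop, with the smoothing step size tied to the magnitude of the preceding POCS update, is sketched below. All parameter choices are assumptions, and a quadratic smoothness prior stands in for true TV to keep the sketch short, so this illustrates the control flow rather than the published algorithm:

        import numpy as np

        def art_sweep(A, b, x, lam=1.0):
            """One ART pass: sequential row-wise projections toward Ax = b."""
            for a_i, b_i in zip(A, b):
                x = x + lam * (b_i - a_i @ x) / (a_i @ a_i) * a_i
            return np.clip(x, 0, None)               # non-negativity constraint

        def smooth_grad(img):
            """Gradient of a quadratic smoothness prior (stand-in for TV)."""
            lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                   + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
            return -lap

        def reconstruct(A, b, shape, n_outer=50, n_inner=5):
            x = np.zeros(A.shape[1])
            for _ in range(n_outer):
                x_new = art_sweep(A, b, x)
                dp = np.linalg.norm(x_new - x)       # size of the POCS update
                x = x_new
                img = x.reshape(shape)
                for _ in range(n_inner):
                    g = smooth_grad(img)
                    gn = np.linalg.norm(g)
                    if gn > 0:
                        img = img - 0.2 * dp * g / gn   # step scaled by the POCS change
                x = img.ravel()
            return x.reshape(shape)

        rng = np.random.default_rng(0)
        truth = np.zeros((16, 16)); truth[4:12, 4:12] = 1.0
        A = rng.uniform(size=(200, 256))              # toy "projection" matrix
        b = A @ truth.ravel()
        rec = reconstruct(A, b, truth.shape)
        print(np.linalg.norm(rec - truth) / np.linalg.norm(truth))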

  4. Height of a faceted macrostep for sticky steps in a step-faceting zone

    NASA Astrophysics Data System (ADS)

    Akutsu, Noriko

    2018-02-01

    The driving force dependence of the surface velocity and the average height of faceted merged steps, the terrace-surface slope, and the elementary step velocity are studied using the Monte Carlo method in the nonequilibrium steady state. The Monte Carlo study is based on a lattice model, the restricted solid-on-solid model with point-contact-type step-step attraction (p-RSOS model). The main focus of this paper is a change of the "kink density" on the vicinal surface. The temperature is selected to be in the step-faceting zone [N. Akutsu, AIP Adv. 6, 035301 (2016), 10.1063/1.4943400] where the vicinal surface is surrounded by the (001) terrace and the (111) faceted step at equilibrium. Long time simulations are performed at this temperature to obtain steady states for the different driving forces that influence the growth/recession of the surface. A Wulff figure of the p-RSOS model is produced through the anomalous surface tension calculated using the density-matrix renormalization group method. The characteristics of the faceted macrostep profile at equilibrium are classified with respect to the connectivity of the surface tension. This surface tension connectivity also leads to a faceting diagram, where the separated areas are, respectively, classified as a Gruber-Mullins-Pokrovsky-Talapov zone, step droplet zone, and step-faceting zone. Although the p-RSOS model is a simplified model, the model shows a wide variety of dynamics in the step-faceting zone. There are four characteristic driving forces: Δμ_y, Δμ_f, Δμ_co, and Δμ_R. When the absolute value of the driving force |Δμ| is smaller than Max[Δμ_y, Δμ_f], step attachment-detachment is inhibited, and the vicinal surface consists of (001) terraces and the (111) side surfaces of the faceted macrosteps. For Max[Δμ_y, Δμ_f] < |Δμ| < Δμ_co, the surface grows/recedes intermittently through two-dimensional (2D) heterogeneous nucleation at the facet edge of the macrostep. For Δμ_co < |Δμ| < Δμ_R, the surface grows/recedes with the successive attachment-detachment of steps to/from a macrostep. When |Δμ| exceeds Δμ_R, the macrostep vanishes and the surface roughens kinetically. Classical 2D heterogeneous multinucleation was determined to be valid with slight modifications based on the Monte Carlo results of the step velocity and the change in the surface slope of the "terrace." The finite-size effects were also determined to be distinctive near equilibrium.

  5. Combining gas-phase electrophoretic mobility molecular analysis (GEMMA), light scattering, field flow fractionation and cryo electron microscopy in a multidimensional approach to characterize liposomal carrier vesicles

    PubMed Central

    Gondikas, Andreas; von der Kammer, Frank; Hofmann, Thilo; Marchetti-Deschmann, Martina; Allmaier, Günter; Marko-Varga, György; Andersson, Roland

    2017-01-01

    For drug delivery, characterization of liposomes regarding size, particle number concentrations, occurrence of low-sized liposome artefacts and drug encapsulation is important to understand their pharmacodynamic properties. In our study, we aimed to demonstrate the applicability of the nano Electrospray Gas-Phase Electrophoretic Mobility Molecular Analyser (nES GEMMA) as a suitable technique for analyzing these parameters. We measured number-based particle concentrations, identified differences in size between nominally identical liposomal samples, and detected the presence of low-diameter material which yielded bimodal particle size distributions. Subsequently, we compared these findings to dynamic light scattering (DLS) data and results from light scattering experiments coupled to Asymmetric Flow-Field Flow Fractionation (AF4), the latter improving the detectability of smaller particles in polydisperse samples due to a size separation step prior to detection. However, the bimodal size distribution could not be detected due to method-inherent limitations. In contrast, cryo transmission electron microscopy corroborated nES GEMMA results. Hence, gas-phase electrophoresis proved to be a versatile tool for liposome characterization as it could analyze both vesicle size and size distribution. Finally, a correlation of nES GEMMA results with cell viability experiments was carried out to demonstrate the importance of liposome batch-to-batch control as low-sized sample components possibly impact cell viability. PMID:27639623

  6. Twisting and subunit rotation in single FOF1-ATP synthase

    PubMed Central

    Sielaff, Hendrik; Börsch, Michael

    2013-01-01

    FOF1-ATP synthases are ubiquitous proton- or ion-powered membrane enzymes providing ATP for all kinds of cellular processes. The mechanochemistry of catalysis is driven by two rotary nanomotors coupled within the enzyme. Their different step sizes have been observed by single-molecule microscopy including videomicroscopy of fluctuating nanobeads attached to single enzymes and single-molecule Förster resonance energy transfer. Here we review recent developments of approaches to monitor the step size of subunit rotation and the transient elastic energy storage mechanism in single FOF1-ATP synthases. PMID:23267178

  7. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    DTIC Science & Technology

    2013-11-01

    Simulations were compared to experiments described in reference ARL-TR-2219, 2000. The tile gap is found to increase the DoP as compared to one tile. The next step will be to run simulations on narrower and wider gap sizes. Smoothed-particle hydrodynamics (SPH) was used for all parts; SPH particle size = 0.40 mm, totaling 278k particles.

  8. Optical design of an athermalised dual field of view step zoom optical system in MWIR

    NASA Astrophysics Data System (ADS)

    Kucukcelebi, Doruk

    2017-08-01

    In this paper, the optical design of an athermalised dual field of view step zoom optical system in MWIR (3.7μm - 4.8μm) is described. The dual field of view infrared optical system is designed based on the principle of the passive athermalization method, not only to achieve an athermal optical system but also to maintain high image quality within the working temperature range of -40°C to +60°C. The optical system used a cooled MWIR focal plane array detector with 320 pixel x 256 pixel resolution and 20 μm pixel pitch. In this study, a step zoom mechanism based on the axial motion of a single lens group is adopted to simplify the mechanical structure. The optical design is based on moving a single lens along the optical axis to change the optical system's field of view, which not only reduces the number of moving parts but also athermalizes the optical system. The optical design began with an optimization process using paraxial optics once the first-order optical parameters were determined. During the optimization process, aspherical surfaces were used in order to reduce aberrations such as coma, astigmatism, and spherical and chromatic aberrations. As a result, an athermalised dual field of view step zoom optical design is proposed, and the performance of the design was verified through focus shift, spot diagram and MTF analysis plots.

  9. Inferring Regulatory Networks by Combining Perturbation Screens and Steady State Gene Expression Profiles

    PubMed Central

    Michailidis, George

    2014-01-01

    Reconstructing transcriptional regulatory networks is an important task in functional genomics. Data obtained from experiments that perturb genes by knockouts or RNA interference contain useful information for addressing this reconstruction problem. However, such data can be limited in size and/or are expensive to acquire. On the other hand, observational data of the organism in steady state (e.g., wild-type) are more readily available, but their informational content is inadequate for the task at hand. We develop a computational approach to appropriately utilize both data sources for estimating a regulatory network. The proposed approach is based on a three-step algorithm to estimate the underlying directed but cyclic network, that uses as input both perturbation screens and steady state gene expression data. In the first step, the algorithm determines causal orderings of the genes that are consistent with the perturbation data, by combining an exhaustive search method with a fast heuristic that in turn couples a Monte Carlo technique with a fast search algorithm. In the second step, for each obtained causal ordering, a regulatory network is estimated using a penalized likelihood based method, while in the third step a consensus network is constructed from the highest scored ones. Extensive computational experiments show that the algorithm performs well in reconstructing the underlying network and clearly outperforms competing approaches that rely only on a single data source. Further, it is established that the algorithm produces a consistent estimate of the regulatory network. PMID:24586224

  10. A New Material Mapping Procedure for Quantitative Computed Tomography-Based, Continuum Finite Element Analyses of the Vertebra

    PubMed Central

    Unnikrishnan, Ginu U.; Morgan, Elise F.

    2011-01-01

    Inaccuracies in the estimation of material properties and errors in the assignment of these properties into finite element models limit the reliability, accuracy, and precision of quantitative computed tomography (QCT)-based finite element analyses of the vertebra. In this work, a new mesh-independent, material mapping procedure was developed to improve the quality of predictions of vertebral mechanical behavior from QCT-based finite element models. In this procedure, an intermediate step, called the material block model, was introduced to determine the distribution of material properties based on bone mineral density, and these properties were then mapped onto the finite element mesh. A sensitivity study was first conducted on a calibration phantom to understand the influence of the size of the material blocks on the computed bone mineral density. It was observed that varying the material block size produced only marginal changes in the predictions of mineral density. Finite element (FE) analyses were then conducted on a square column-shaped region of the vertebra and also on the entire vertebra in order to study the effect of material block size on the FE-derived outcomes. The predicted values of stiffness for the column and the vertebra decreased with decreasing block size. When these results were compared to those of a mesh convergence analysis, it was found that the influence of element size on vertebral stiffness was less than that of the material block size. This mapping procedure allows the material properties in a finite element study to be determined based on the block size required for an accurate representation of the material field, while the size of the finite elements can be selected independently and based on the required numerical accuracy of the finite element solution. The mesh-independent, material mapping procedure developed in this study could be particularly helpful in improving the accuracy of finite element analyses of vertebroplasty and spine metastases, as these analyses typically require mesh refinement at the interfaces between distinct materials. Moreover, the mapping procedure is not specific to the vertebra and could thus be applied to many other anatomic sites. PMID:21823740
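
    The two-grid idea decouples the resolution of the material field from that of the mesh. The sketch below (NumPy, with a purely hypothetical density-to-modulus power law) block-averages voxel densities into a material block model and then assigns each element a modulus from the block containing its centroid, independently of element size:

        import numpy as np

        def block_average(voxels, block):
            """Average QCT voxel densities over cubic material blocks."""
            nx, ny, nz = (s // block for s in voxels.shape)
            v = voxels[:nx * block, :ny * block, :nz * block]
            return v.reshape(nx, block, ny, block, nz, block).mean(axis=(1, 3, 5))

        def modulus(rho, a=8.0, b=1.5):
            """Hypothetical density-to-modulus power law E = a * rho^b."""
            return a * np.power(rho, b)

        def element_moduli(centroids, blocks, voxel_mm, block_voxels):
            """Map block properties onto finite element centroids (any mesh size)."""
            idx = np.floor(centroids / (voxel_mm * block_voxels)).astype(int)
            idx = np.clip(idx, 0, np.array(blocks.shape) - 1)
            return modulus(blocks[idx[:, 0], idx[:, 1], idx[:, 2]])

        rng = np.random.default_rng(0)
        voxels = rng.uniform(0.1, 1.2, size=(60, 60, 90))    # toy scan, g/cm^3
        blocks = block_average(voxels, block=10)              # 6 x 6 x 9 block model
        centroids = rng.uniform(0, 30, size=(5, 3))           # mm, toy element centroids
        print(element_moduli(centroids, blocks, voxel_mm=0.5, block_voxels=10))

    Refining the mesh then changes only which block each centroid falls in, not the material field itself, which mirrors the separation of concerns the abstract describes.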

  11. National Stormwater Calculator: Low Impact Development ...

    EPA Pesticide Factsheets

    The National Stormwater Calculator (NSC) makes it easy to estimate runoff reduction when planning a new development or redevelopment site with low impact development (LID) stormwater controls. The Calculator is currently deployed as a Windows desktop application. The Calculator is organized as a wizard-style application that walks the user through the steps necessary to perform runoff calculations on a single urban sub-catchment of 10 acres or less in size. Using an interactive map, the user can select the sub-catchment location, and the Calculator automatically acquires hydrologic data for the site. A new LID cost estimation module has been developed for the Calculator. This project involved programming cost curves into the existing Calculator desktop application. The integration of cost components of LID controls into the Calculator increases functionality and will promote greater use of the Calculator as a stormwater management and evaluation tool. The addition of the cost estimation module allows planners and managers to evaluate LID controls based on comparison of project cost estimates and predicted LID control performance. Cost estimation is accomplished based on user-identified size (or auto-sizing based on achieving volume control or treatment of a defined design storm), configuration of the LID control infrastructure, and other key project and site-specific variables, including whether the project is being applied as part of new development or redevelopment.

  12. Simulation of Micron-Sized Debris Populations in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Xu, Y.-L.; Hyde, J. L.; Prior, T.; Matney, Mark

    2010-01-01

    The update of ORDEM2000, the NASA Orbital Debris Engineering Model, to its new version ORDEM2010, is nearly complete. As a part of the ORDEM upgrade, this paper addresses the simulation of micro-debris (greater than 10 μm and smaller than 1 mm in size) populations in low Earth orbit. The principal data used in the modeling of the micron-sized debris populations are in-situ hypervelocity impact records, accumulated in post-flight damage surveys on the space-exposed surfaces of returned spacecraft. The development of the micro-debris model populations follows the general approach to deriving other ORDEM2010-required input populations for various components and types of debris. This paper describes the key elements and major steps in the statistical inference of the ORDEM2010 micro-debris populations. A crucial step is the construction of a degradation/ejecta source model to provide prior information on the micron-sized objects (such as orbital and object-size distributions). Another critical step is to link model populations with data, which is rather involved. It demands detailed information on area-time/directionality for all the space-exposed elements of a shuttle orbiter and damage laws, which relate impact damage with the physical properties of a projectile and impact conditions such as impact angle and velocity. Also needed are model-predicted debris fluxes as a function of object size and impact velocity from all possible directions. In spite of the very limited quantity of the available shuttle impact data, the population-derivation process is satisfactorily stable. Final modeling results obtained from shuttle window and radiator impact data are reasonably convergent and consistent, especially for the debris populations with object-size thresholds at 10 and 100 μm.

  13. Simulation of Micron-Sized Debris Populations in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Xu, Y.-L.; Matney, M.; Liou, J.-C.; Hyde, J. L.; Prior, T. G.

    2010-01-01

    The update of ORDEM2000, the NASA Orbital Debris Engineering Model, to its new version, ORDEM2010, is nearly complete. As a part of the ORDEM upgrade, this paper addresses the simulation of micro-debris (greater than 10 micron and smaller than 1 mm in size) populations in low Earth orbit. The principal data used in the modeling of the micron-sized debris populations are in-situ hypervelocity impact records, accumulated in post-flight damage surveys on the space-exposed surfaces of returned spacecraft. The development of the micro-debris model populations follows the general approach to deriving other ORDEM2010-required input populations for various components and types of debris. This paper describes the key elements and major steps in the statistical inference of the ORDEM2010 micro-debris populations. A crucial step is the construction of a degradation/ejecta source model to provide prior information on the micron-sized objects (such as orbital and object-size distributions). Another critical step is to link model populations with data, which is rather involved. It demands detailed information on area-time/directionality for all the space-exposed elements of a shuttle orbiter and damage laws, which relate impact damage with the physical properties of a projectile and impact conditions such as impact angle and velocity. Also needed are model-predicted debris fluxes as a function of object size and impact velocity from all possible directions. In spite of the very limited quantity of the available shuttle impact data, the population-derivation process is satisfactorily stable. Final modeling results obtained from shuttle window and radiator impact data are reasonably convergent and consistent, especially for the debris populations with object-size thresholds at 10 and 100 micron.

  14. Tailoring plasmonic properties of metal nanoparticle-embedded dielectric thin films: the sandwich method of preparation

    NASA Astrophysics Data System (ADS)

    Laha, Ranjit; Malar, P.; Osipowicz, Thomas; Kasiviswanathan, S.

    2017-09-01

    Tailoring the plasmonic properties of metal nanoparticle-embedded dielectric thin films is crucial for many thin-film-based applications. We herein investigate various ways of tuning the plasmon resonance positions of gold nanoparticle (AuNP)-embedded indium oxide thin films (Au:IO) through a sequence-specific sandwich method. The sandwich method is a four-step process involving deposition of an In2O3 film by magnetron sputtering in the first and fourth steps, thermal evaporation of Au onto the In2O3 film in the second step, and annealing of the Au/In2O3 film in the third step. The Au:IO films were characterized by x-ray diffraction, spectrophotometry and transmission electron microscopy. The size and shape of the embedded nanoparticles were found from Rutherford back-scattering spectrometry. Based on dynamic Maxwell Garnett theory, the observed plasmon resonance position was ascribed to the oblate shape of the AuNPs formed in the sandwich method. Finally, through experimental data, it was shown that the plasmon resonance position of Au:IO thin films can be tuned by 125 nm. The method shown here can be used to tune the plasmon resonance position over the entire visible region for thin films made from other metal-dielectric combinations.

  15. Citrate-capped superparamagnetic iron oxide (Fe3O4-CA) nanocatalyst for synthesis of pyrimidine derivative compound as antioxidative agent

    NASA Astrophysics Data System (ADS)

    Cahyana, A. H.; Pratiwi, D.; Ardiansah, B.

    2017-04-01

    The development of recyclable catalysts based on magnetic nanoparticles has attracted increasing interest as an emerging application in the heterogeneous catalysis field. Superparamagnetic iron oxide nanoparticles with citric acid as a capping agent were successfully obtained from iron(III) chloride solution via a two-step synthesis. The first step involves the formation of magnetite nanoparticles by bioreduction using Sargassum sp.; in the second step, their surface was modified by adding citric acid solution. The structure, surface morphology and magnetic properties of the nanocatalyst were investigated by various instruments such as scanning electron microscopy with energy-dispersive spectroscopy (SEM-EDS) and particle size analysis (PSA). Fe3O4-CA was then applied as a reusable catalyst for the Knoevenagel condensation of barbituric acid and cinnamaldehyde to produce (E)-5-(3-phenylallylidene)pyrimidine-2,4,6(1H,3H,5H)-trione. The optimum condition for this reaction was achieved by using 7.5 mol% of catalyst at 50°C for 6 h to give 83% yield. Spectroscopic techniques such as UV-Vis, FTIR, LC-MS and 1H-NMR were used to confirm the product's structure. Furthermore, the synthesized compound shows attractive antioxidant activity based on in-vitro analysis using the DPPH method.

  16. Effect of initial shock wave voltage on shock wave lithotripsy-induced lesion size during step-wise voltage ramping.

    PubMed

    Connors, Bret A; Evan, Andrew P; Blomgren, Philip M; Handa, Rajash K; Willis, Lynn R; Gao, Sujuan

    2009-01-01

    To determine if the starting voltage in a step-wise ramping protocol for extracorporeal shock wave lithotripsy (SWL) alters the size of the renal lesion caused by the SWs. To address this question, one kidney from 19 juvenile pigs (aged 7-8 weeks) was treated in an unmodified Dornier HM-3 lithotripter (Dornier Medical Systems, Kennesaw, GA, USA) with either 2000 SWs at 24 kV (standard clinical treatment, 120 SWs/min), 100 SWs at 18 kV followed by 2000 SWs at 24 kV or 100 SWs at 24 kV followed by 2000 SWs at 24 kV. The latter protocols included a 3-4 min interval, between the 100 SWs and the 2000 SWs, used to check the targeting of the focal zone. The kidneys were removed at the end of the experiment so that lesion size could be determined by sectioning the entire kidney and quantifying the amount of haemorrhage in each slice. The average parenchymal lesion for each pig was then determined and a group mean was calculated. Kidneys that received the standard clinical treatment had a mean (sem) lesion size of 3.93 (1.29)% functional renal volume (FRV). The mean lesion size for the 18 kV ramping group was 0.09 (0.01)% FRV, while lesion size for the 24 kV ramping group was 0.51 (0.14)% FRV. The lesion size for both of these groups was significantly smaller than the lesion size in the standard clinical treatment group. The data suggest that initial voltage in a voltage-ramping protocol does not correlate with renal damage. While voltage ramping does reduce injury when compared with SWL with no voltage ramping, starting at low or high voltage produces lesions of the same approximate size. Our findings also suggest that the interval between the initial shocks and the clinical dose of SWs, in our one-step ramping protocol, is important for protecting the kidney against injury.

  17. A protected annealing strategy to enhanced light emission and photostability of YAG:Ce nanoparticle-based films

    NASA Astrophysics Data System (ADS)

    Revaux, Amelie; Dantelle, Geraldine; George, Nathan; Seshadri, Ram; Gacoin, Thierry; Boilot, Jean-Pierre

    2011-05-01

    A significant obstacle in the development of YAG:Ce nanoparticles as light converters in white LEDs and as biological labels is associated with the difficulty of finding preparative conditions that allow simultaneous control of structure, particle size and size distribution, while maintaining the optical properties of bulk samples. Preparation conditions frequently involve high-temperature treatments of precursors (up to 1400 °C), which result in increased particle size and aggregation, and lead to oxidation of Ce(iii) to Ce(iv). We report here a process that we term protected annealing, that allows the thermal treatment of preformed precursor particles at temperatures up to 1000 °C while preserving their small size and state of dispersion. In a first step, pristine nanoparticles are prepared by a glycothermal reaction, leading to a mixture of YAG and boehmite crystalline phases. The preformed nanoparticles are then dispersed in a porous silica. Annealing of the composite material at 1000 °C is followed by dissolution of the amorphous silica by hydrofluoric acid to recover the annealed particles as a colloidal dispersion. This simple process allows completion of YAG crystallization while preserving their small size. The redox state of Ce ions can be controlled through the annealing atmosphere. The obtained particles of YAG:Ce (60 +/- 10 nm in size) can be dispersed as nearly transparent aqueous suspensions, with a luminescence quantum yield of 60%. Transparent YAG:Ce nanoparticle-based films of micron thickness can be deposited on glass substrates using aerosol spraying. Films formed from particles prepared by the protected annealing strategy display significantly improved photostability over particles that have not been subject to such annealing.

  18. Medical image reconstruction algorithm based on the geometric information between sensor detector and ROI

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk

    2016-05-01

    In this paper, we propose a new image reconstruction algorithm that considers the geometric information of the acoustic sources and the sensor detector, and we review the previously proposed two-step reconstruction algorithm based on the geometrical information of the ROI (region of interest), which accounts for the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but the software implementation is also easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm with simulation results obtained using the MATLAB k-Wave toolbox.

  19. Distributed Environment Control Using Wireless Sensor/Actuator Networks for Lighting Applications

    PubMed Central

    Nakamura, Masayuki; Sakurai, Atsushi; Nakamura, Jiro

    2009-01-01

    We propose a decentralized algorithm to calculate the control signals for lights in wireless sensor/actuator networks. This algorithm uses an appropriate step size in the iterative process used for quickly computing the control signals. We demonstrate the accuracy and efficiency of this approach compared with the penalty method by using Mote-based mesh sensor networks. The estimation error of the new approach is one-eighth as large as that of the penalty method with one-fifth of its computation time. In addition, we describe our sensor/actuator node for distributed lighting control based on the decentralized algorithm and demonstrate its practical efficacy. PMID:22291525
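
    The flavor of such an iteration can be sketched as a projected gradient step on the illuminance error, with a step size chosen from the sensing matrix so the iteration contracts. The gain matrix G and the 1/||G||^2 step rule are illustrative assumptions, not the authors' algorithm:

        import numpy as np

        def control_iteration(u, G, target, step):
            """One synchronous update: each light nudges its dimming level
            using the illuminance error seen at the sensors."""
            error = G @ u - target           # per-sensor illuminance error
            u = u - step * G.T @ error       # gradient step on 0.5*||Gu - target||^2
            return np.clip(u, 0.0, 1.0)      # dimming levels stay in [0, 1]

        rng = np.random.default_rng(0)
        G = rng.uniform(0.1, 1.0, size=(4, 6))   # sensor x light gain matrix (lux per unit)
        target = np.array([1.2, 1.0, 1.1, 0.9])
        u = np.full(6, 0.5)
        step = 1.0 / np.linalg.norm(G, 2) ** 2   # safe step from the largest singular value
        for _ in range(200):
            u = control_iteration(u, G, target, step)
        print(np.round(G @ u - target, 3))        # residual illuminance error

    A step that is too large makes the iteration oscillate or diverge, while one that is too small wastes iterations, which is why the choice of step size drives both the accuracy and the computation time the abstract reports.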

  20. A Numerical Scheme for Ordinary Differential Equations Having Time Varying and Nonlinear Coefficients Based on the State Transition Matrix

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2002-01-01

    A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
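
    The core move of such a scheme, advancing the state with a transition matrix rather than a difference formula, can be sketched as follows. This sketch freezes A(t) at the step midpoint (a zeroth-order stand-in for the paper's variable-order polynomial modeling of the time variation) and uses SciPy's matrix exponential:

        import numpy as np
        from scipy.linalg import expm

        def stm_step(A_of_t, x, t, h):
            """Propagate x' = A(t) x across one step via the state transition matrix,
            holding A at its midpoint value (exact when A is constant)."""
            Phi = expm(A_of_t(t + 0.5 * h) * h)
            return Phi @ x

        # harmonic oscillator: A is constant, so the step is exact at any step size
        A = lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]])
        x, t, h = np.array([1.0, 0.0]), 0.0, 0.5
        for _ in range(20):
            x = stm_step(A, x, t, h)
            t += h
        print(x, np.array([np.cos(t), -np.sin(t)]))   # numerically identical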

  1. CR-39 track etching and blow-up method

    DOEpatents

    Hankins, Dale E.

    1987-01-01

    This invention is a method of etching tracks in CR-39 foil to obtain uniformly sized tracks. The invention comprises a step of electrochemically etching the foil at a low frequency and a "blow-up" step of electrochemically etching the foil at a high frequency.

  2. Simplified 4-Step Transportation Planning Process For Any Sized Area

    DOT National Transportation Integrated Search

    1999-01-01

    This paper presents a streamlined version of the Washington, D.C. region's 4-step travel demand forecasting model. The purpose for streamlining the model was to have a model that could replicate the regional model and be run in a new s...

  3. Saving Lives.

    ERIC Educational Resources Information Center

    Moon, Daniel

    2002-01-01

    Advises schools on how to establish an automated external defibrillator (AED) program. These laptop-size devices can save victims of sudden cardiac arrest by delivering an electrical shock to return the heartbeat to normal. Discusses establishing standards, developing a strategy, step-by-step advice towards establishing an AED program, and school…

  4. Two step continuous method to synthesize colloidal spheroid gold nanorods.

    PubMed

    Chandra, S; Doran, J; McCormack, S J

    2015-12-01

    This research investigated a two-step continuous process to synthesize a colloidal suspension of spheroid gold nanorods. In the first step, the gold precursor was reduced to seed-like particles in the presence of polyvinylpyrrolidone and ascorbic acid. In the continuous second step, silver nitrate and alkaline sodium hydroxide produced Au nanoparticles of various shapes and sizes. The shape was manipulated through the weight ratio of ascorbic acid to silver nitrate by varying the silver nitrate concentration. A specific weight ratio of 1.35-1.75 grew spheroid gold nanorods of aspect ratio ∼1.85 to ∼2.2. A lower weight ratio of 0.5-1.1 formed spherical nanoparticles. The alkaline medium increased the yield of gold nanorods and reduced the reaction time at room temperature. The synthesized gold nanorods retained their shape and size in ethanol. The surface plasmon resonance was red-shifted by ∼5 nm due to the higher refractive index of ethanol compared with water. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Research on the effect of coverage rate on the surface quality in laser direct writing process

    NASA Astrophysics Data System (ADS)

    Pan, Xuetao; Tu, Dawei

    2017-07-01

    The direct writing technique is commonly used in femtosecond laser two-photon micromachining. The size of the scanning step is an important factor affecting the surface quality and machining efficiency of microdevices. Based on the mechanism of two-photon polymerization, and combining the light intensity distribution function with free-radical concentration theory, we establish a mathematical model of the coverage of the solidification unit and then analyze the effect of coverage on machining quality and efficiency. Using the principle of exposure equivalence, we also obtain analytic expressions relating the surface quality parameters of microdevices to the scanning step, and carry out numerical simulations and experiments. The results show that the scanning step has little influence on the surface quality of a written line when it is much smaller than the size of the solidification unit. However, with increasing scanning step, the smoothness of the line surface decreases rapidly and the surface quality becomes much worse.
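
    The geometric intuition, that overlapping solidification units leave surface ripples whose depth grows rapidly with the scanning step, can be captured with an idealized circular voxel cross-section; this simplification is ours, not the paper's full exposure model:

        import math

        def ripple_depth(radius, step):
            """Peak-to-valley depth left by overlapping circular voxels of a given
            radius written at a given scanning step (requires step < 2*radius)."""
            return radius - math.sqrt(radius ** 2 - (step / 2.0) ** 2)

        r = 0.5  # um, assumed solidification unit radius
        for s in (0.05, 0.1, 0.2, 0.4):
            print(f"step {s:.2f} um -> ripple {1e3 * ripple_depth(r, s):.1f} nm")

    The output grows roughly quadratically with the step, matching the abstract's observation that small steps barely affect line smoothness while larger steps degrade it quickly.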

  6. Role of transient water pressure in quarrying: A subglacial experiment using acoustic emissions

    USGS Publications Warehouse

    Cohen, D.; Hooyer, T.S.; Iverson, N.R.; Thomason, J.F.; Jackson, M.

    2006-01-01

    Probably the most important mechanism of glacial erosion is quarrying: the growth and coalescence of cracks in subglacial bedrock and dislodgement of resultant rock fragments. Although evidence indicates that erosion rates depend on sliding speed, rates of crack growth in bedrock may be enhanced by changing stresses on the bed caused by fluctuating basal water pressure in zones of ice-bed separation. To study quarrying in real time, a granite step, 12 cm high with a crack in its stoss surface, was installed at the bed of Engabreen, Norway. Acoustic emission sensors monitored crack growth events in the step as ice slid over it. Vertical stresses, water pressure, and cavity height in the lee of the step were also measured. Water was pumped to the lee of the step several times over 8 days. Pumping initially caused opening of a leeward cavity, which then closed after pumping was stopped and water pressure decreased. During cavity closure, acoustic emissions emanating mostly from the vicinity of the base of the crack in the step increased dramatically. With repeated pump tests this crack grew with time until the step's lee surface was quarried. Our experiments indicate that fluctuating water pressure caused stress thresholds required for crack growth to be exceeded. Natural basal water pressure fluctuations should also concentrate stresses on rock steps, increasing rates of crack growth. Stress changes on the bed due to water pressure fluctuations will increase in magnitude and duration with cavity size, which may help explain the effect of sliding speed on erosion rates. Copyright 2006 by the American Geophysical Union.

  7. Step-down versus outpatient psychotherapeutic treatment for personality disorders: 6-year follow-up of the Ullevål personality project

    PubMed Central

    2014-01-01

    Background Although psychotherapy is considered the treatment of choice for patients with personality disorders (PDs), there is no consensus about the optimal level of care for this group of patients. This study reports the results from the 6-year follow-up of the Ullevål Personality Project (UPP), a randomized clinical trial comparing outpatient individual psychotherapy with a long-term step-down treatment program that included a short-term day hospital treatment followed by combined group and individual psychotherapy. Methods The UPP included 113 patients with PDs. Outcome was evaluated after 8 months, 18 months, 3 years and 6 years and was based on a wide range of clinical measures, such as psychosocial functioning, interpersonal problems, symptom severity, and axis I and II diagnoses. Results At the 6-year follow-up, there were no statistically significant differences in outcome between the treatment groups. Effect sizes ranged from medium to large for all outcome variables in both treatment arms. However, patients in the outpatient group had a marked decline in psychosocial functioning during the period between the 3- and 6-year follow-ups; while psychosocial functioning continued to improve in the step-down group during the same period. This difference between groups was statistically significant. Conclusions The findings suggest that both hospital-based long-term step-down treatment and long-term outpatient individual psychotherapy may improve symptoms and psychosocial functioning in poorly functioning PD patients. Social and interpersonal functioning continued to improve in the step-down group during the post-treatment phase, indicating that longer-term changes were stimulated during treatment. Trial registration NCT00378248. PMID:24758722

  8. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review

    PubMed Central

    Morris, Tom; Gray, Laura

    2017-01-01

    Objectives To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting Any, not limited to healthcare settings. Participants Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
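
    The primary outcome is simple to reproduce from a list of cluster sizes; the snippet below uses the common convention of sample standard deviation over mean, which we assume matches the review's definition, and the cluster sizes shown are hypothetical:

        import statistics

        def coefficient_of_variation(cluster_sizes):
            """CV = sample standard deviation / mean of the cluster sizes."""
            return statistics.stdev(cluster_sizes) / statistics.mean(cluster_sizes)

        sizes = [120, 85, 40, 210, 95, 60]   # hypothetical participants per cluster
        print(round(coefficient_of_variation(sizes), 2))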

  9. A pilot randomized clinical trial testing integrated 12-Step facilitation (iTSF) treatment for adolescent substance use disorder.

    PubMed

    Kelly, John F; Kaminer, Yifrah; Kahler, Christopher W; Hoeppner, Bettina; Yeterian, Julie; Cristello, Julie V; Timko, Christine

    2017-12-01

    The integration of 12-Step philosophy and practices is common in adolescent substance use disorder (SUD) treatment programs, particularly in North America. However, although numerous experimental studies have tested 12-Step facilitation (TSF) treatments among adults, no studies have tested TSF-specific treatments for adolescents. We tested the efficacy of a novel integrated TSF. Explanatory, parallel-group, randomized clinical trial comparing 10 sessions of either motivational enhancement therapy/cognitive-behavioral therapy (MET/CBT; n = 30) or a novel integrated TSF (iTSF; n = 29), with follow-up assessments at 3, 6 and 9 months following treatment entry. Out-patient addiction clinic in the United States. Adolescents [n = 59; mean age = 16.8 (1.7) years; range = 14-21; 27% female; 78% white]. The iTSF integrated 12-Step with motivational and cognitive-behavioral strategies, and was compared with state-of-the-art MET/CBT for SUD. Primary outcome: percentage days abstinent (PDA); secondary outcomes: 12-Step attendance, substance-related consequences, longest period of abstinence, proportion abstinent/mostly abstinent, psychiatric symptoms. Primary outcome: PDA was not significantly different across treatments [b = 0.08, 95% confidence interval (CI) = -0.08 to 0.24, P = 0.33; Bayes' factor = 0.28]. During treatment, iTSF patients had substantially greater 12-Step attendance, but this advantage declined thereafter (b = -0.87; 95% CI = -1.67 to 0.07, P = 0.03). iTSF did show a significant advantage at all follow-up points for substance-related consequences (b = -0.42; 95% CI = -0.80 to -0.04, P < 0.05; effect size range d = 0.26-0.71). Other secondary outcomes did not differ significantly between treatments, but effect sizes tended to favor iTSF. Throughout the entire sample, greater 12-Step meeting attendance was associated significantly with longer abstinence during (r = 0.39, P = 0.008), and early following (r = 0.30, P = 0.049), treatment. Compared with motivational enhancement therapy/cognitive-behavioral therapy (MET/CBT), in terms of abstinence, a novel integrated 12-Step facilitation treatment for adolescent substance use disorder (iTSF) showed no greater benefits, but showed benefits in terms of 12-Step attendance and consequences. Given widespread use of combinations of 12-Step, MET and CBT in adolescent community out-patient settings in North America, iTSF may provide an integrated evidence-based option that is compatible with existing practices. © 2017 Society for the Study of Addiction.

  10. Supersonic burning in separated flow regions

    NASA Technical Reports Server (NTRS)

    Zumwalt, G. W.

    1982-01-01

    The trough vortex phenomenon is used for combustion of hydrogen in a supersonic air stream. This was done at small sizes suitable for igniters in supersonic combustion ramjets, so long as the boundary layer displacement thickness is less than 25% of the trough step height. A simple electric spark, properly positioned, ignites the hydrogen in the trough corner. The resulting flame is self-sustaining and reignitable. Hydrogen can be injected at the base wall or immediately upstream of the trough. The hydrogen is introduced at low velocity to permit it to be drawn into the corner vortex system and thus experience a long residence time in the combustion region. The igniters can be placed on a skewed back step for angles at least up to 30 deg without significantly affecting igniter performance. Certain metals (platinum, copper) act catalytically to improve ignition.

  11. Sparse alignment for robust tensor learning.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods.

  12. Formulation and optimization by experimental design of eco-friendly emulsions based on d-limonene.

    PubMed

    Pérez-Mosqueda, Luis M; Trujillo-Cayado, Luis A; Carrillo, Francisco; Ramírez, Pablo; Muñoz, José

    2015-04-01

    d-Limonene is a naturally occurring solvent that can replace more pollutant chemicals in agrochemical formulations. In the present work, a comprehensive study of the influence of the dispersed phase mass fraction, ϕ, and of the surfactant/oil ratio, R, on the emulsion stability and droplet size distribution of d-limonene-in-water emulsions stabilized by a non-ionic triblock copolymer surfactant has been carried out. A full factorial 3² experimental design was conducted in order to optimize the emulsion formulation. The independent variables ϕ and R were studied in the ranges 10-50 wt% and 0.02-0.1, respectively. The emulsions studied were mainly destabilized by both creaming and Ostwald ripening. Therefore, initial droplet size and an overall destabilization parameter, the so-called turbiscan stability index, were used as dependent variables. The optimal formulation, comprising minimum droplet size and maximum stability, was achieved at ϕ = 50 wt%; R = 0.062. Furthermore, the response surface methodology allowed us to obtain a formulation yielding sub-micron emulsions by using a single-step rotor/stator homogenizer process instead of the more commonly used two-step emulsification methods. In addition, the optimal formulation was further improved against Ostwald ripening by adding silicone oil to the dispersed phase. The combination of these experimental findings allowed us to gain a deeper insight into the stability of these emulsions, which can be applied to the rational development of new formulations with potential application in agrochemicals. Copyright © 2015 Elsevier B.V. All rights reserved.
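
    The design and fitting steps can be made concrete. The sketch below enumerates the 3² design over the stated ranges and fits a quadratic response surface by least squares; the droplet-size responses are fabricated placeholders purely to make the sketch runnable:

        import itertools
        import numpy as np

        phi_levels = [10.0, 30.0, 50.0]      # wt%, dispersed phase mass fraction
        r_levels = [0.02, 0.06, 0.10]        # surfactant/oil ratio
        runs = list(itertools.product(phi_levels, r_levels))   # 9 runs of the 3^2 design

        def design_row(phi, r):
            """Quadratic response-surface model terms."""
            return [1.0, phi, r, phi * r, phi ** 2, r ** 2]

        X = np.array([design_row(p, r) for p, r in runs])
        y = np.array([1.9, 1.2, 1.4, 1.5, 0.9, 1.1, 1.3, 0.7, 0.9])  # placeholder droplet sizes (um)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)

        # locate the fitted minimum on a fine grid within the studied region
        grid = [(p, r) for p in np.linspace(10, 50, 81) for r in np.linspace(0.02, 0.10, 81)]
        pred = [np.dot(design_row(p, r), coef) for p, r in grid]
        print(grid[int(np.argmin(pred))])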

  13. Morphing Aircraft Structures: Research in AFRL/RB

    DTIC Science & Technology

    2008-09-01

    various iterative steps in the process, etc. The solver also internally controls the step size for integration, as this is independent of the step... "Coupling of Substructures for Dynamic Analyses," AIAA Journal, Vol. 6, No. 7, 1968, pp. 1313-1319. "Using the State-Dependent Modal Force (MFORCE)," AFL... an actuation system consisting of multiple internal actuators, centrally computer controlled to implement any commanded morphing configuration; and

  14. Optimized method of dispersion of titanium dioxide nanoparticles for evaluation of safety aspects in cosmetics

    NASA Astrophysics Data System (ADS)

    Carvalho, Karina Penedo; Martins, Nathalia Balthazar; Ribeiro, Ana Rosa Lopes Pereira; Lopes, Taliria Silva; de Sena, Rodrigo Caciano; Sommer, Pascal; Granjeiro, José Mauro

    2016-08-01

    Nanoparticles agglomerate when in contact with biological solutions, depending on the solutions' nature. The agglomeration state directly influences the cellular response, since free nanoparticles are prone to interact with cells and be absorbed into them. In sunscreens, titanium dioxide nanoparticles (TiO2-NPs) form mainly aggregates between 30 and 150 nm. Until now, no toxicological study with skin cells has covered this range of size distribution. Therefore, in order to reliably evaluate their safety, it is essential to prepare suspensions reproducibly, irrespective of the biological solution used, that represent the particle size distribution range of NPs (30-150 nm) found in sunscreens. Thus, the aim of this study was to develop a unique protocol of TiO2 dispersion, combining these features after dilution in different skin cell culture media, for in vitro tests. This new protocol was based on the physicochemical characteristics of TiO2, which led to the choice of the optimal pH condition for ultrasonication. The next step consisted of stabilization of the protein capping with acidified bovine serum albumin, followed by an adjustment of pH to 7.0. At each step, the solutions were analyzed by dynamic light scattering and transmission electron microscopy. The final concentration of NPs was determined by inductively coupled plasma-optical emission spectroscopy. Finally, when diluted in Dulbecco's modified Eagle medium, melanocyte growth medium, or keratinocyte growth medium, TiO2-NPs displayed a highly reproducible size distribution, within the desired size range and without significant differences among the media. Together, these results demonstrate the consistency achieved by this new methodology and its suitability for in vitro tests involving skin cell cultures.

  15. One-step preparation of antimicrobial silver nanoparticles in polymer matrix

    NASA Astrophysics Data System (ADS)

    Lyutakov, O.; Kalachyova, Y.; Solovyev, A.; Vytykacova, S.; Svanda, J.; Siegel, J.; Ulbrich, P.; Svorcik, V.

    2015-03-01

    A simple one-step procedure for in situ preparation of silver nanoparticles (AgNPs) in polymer thin films is described. Nanoparticles (NPs) were prepared by reaction of N-methyl pyrrolidone with a silver salt in a semi-dry polymer film and characterized by transmission electron microscopy, XPS, and UV-Vis spectroscopy. Direct synthesis of NPs in the polymer has several advantages: it avoids time-consuming mixing of NPs with the polymer matrix, and uniform silver distribution in polymethylmethacrylate (PMMA) films is achieved without the need for additional stabilization. The influence of silver concentration, reaction temperature, and reaction time on the conversion rate and on the size and size distribution of the AgNPs was investigated. Polymer films doped with AgNPs were tested for their antibacterial activity against Gram-negative bacteria. The antimicrobial properties of AgNPs/PMMA films were found to depend on NP concentration, size, and distribution. The proposed one-step synthesis of functional polymer containing AgNPs is environmentally friendly, experimentally simple, and extremely quick. It opens up new possibilities in the development of antimicrobial coatings for medical and sanitation applications.

  16. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the accuracy of TOF determination. An improved variable step size adaptive algorithm with a comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. The simulation results demonstrate the performance advantage of the proposed TOF determination method over existing ones. Compared with the conventional fixed step size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates of various thicknesses. They indicated that the proposed TOF determination method is robust even under low SNR conditions and can significantly improve ultrasonic thickness measurement accuracy.
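
    To make the idea concrete, here is a minimal sketch of variable step size LMS-style time delay estimation with cubic spline peak interpolation. The error-driven step size control function used here is a generic illustration, not the specific control function proposed in the paper, and the signals are synthetic.

        # a minimal sketch, assuming a generic sigmoid-type step size rule
        import numpy as np
        from scipy.interpolate import CubicSpline

        def estimate_tof(x, y, n_taps=64, mu_max=0.5, alpha=1.0):
            """Adapt w so that (w * x) tracks y; the weight peak locates the delay."""
            w = np.zeros(n_taps)
            for n in range(n_taps - 1, len(x)):
                xn = x[n - n_taps + 1:n + 1][::-1]         # xn[k] = x[n - k]
                e = y[n] - w @ xn                          # a priori error
                mu = mu_max * (1 - np.exp(-alpha * e**2))  # error-driven step size
                w += mu * e * xn / (xn @ xn + 1e-12)       # normalized LMS update
            # sub-sample peak location via a cubic spline through the weights
            cs = CubicSpline(np.arange(n_taps), w)
            fine = np.linspace(0, n_taps - 1, 10 * n_taps)
            return fine[np.argmax(cs(fine))]               # delay in samples

        rng = np.random.default_rng(0)
        x = rng.standard_normal(3000)
        t = np.arange(len(x))
        y = np.interp(t - 12.3, t, x, left=0.0) + 0.05 * rng.standard_normal(len(x))
        print(estimate_tof(x, y))                          # close to the true 12.3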

  17. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    NASA Astrophysics Data System (ADS)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is an expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural characteristics than normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As the main part of this study, the Haralick texture descriptor is applied with different spatial window sizes in the RGB and La*b* color spaces, so that the spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared for various sample sizes using Support Vector Machines with k-fold cross-validation. The results show that the separation accuracy between mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
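
    The pipeline described above can be sketched as follows: gray-level co-occurrence (Haralick-style) statistics over a square window around each candidate pixel, then an SVM scored by k-fold cross-validation. The window size, the chosen properties, and the toy data are illustrative assumptions, not the study's setup.

        # a minimal sketch on toy data, not the study's dataset
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def window_features(img, cy, cx, half=8):
            patch = img[cy - half:cy + half, cx - half:cx + half]
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            props = ["contrast", "homogeneity", "energy", "correlation"]
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])

        rng = np.random.default_rng(1)
        img = (rng.random((256, 256)) * 255).astype(np.uint8)   # toy image
        centers = rng.integers(16, 240, size=(60, 2))           # candidate pixels
        X = np.array([window_features(img, cy, cx) for cy, cx in centers])
        y = rng.integers(0, 2, size=60)                         # mitotic / non-mitotic

        print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())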

  18. Advantages offered by high average power picosecond lasers

    NASA Astrophysics Data System (ADS)

    Moorhouse, C.

    2011-03-01

    As electronic devices shrink in size to reduce material costs, device size, and weight, thinner materials are also utilized. Feature sizes are also decreasing, which is pushing manufacturers towards single-step laser direct-write processes as an attractive alternative to conventional multiple-step photolithography, eliminating process steps and the cost of chemicals. The fragile nature of these thin materials makes them difficult to machine either mechanically or with conventional nanosecond-pulsewidth, Diode Pumped Solid State (DPSS) lasers. Picosecond laser pulses can cut materials with reduced damage regions and selectively remove thin films, owing to the reduced thermal effects of the shorter pulsewidth. Also, the high repetition rate allows high-speed processing for industrial applications. Selective removal of thin films for OLED patterning, silicon solar cells, and flat panel displays is discussed, as well as laser cutting of transparent materials with low melting points such as Polyethylene Terephthalate (PET). For many of these thin film applications, where low pulse energy and high repetition rate are required, throughput can be increased by using multiple beams from a single laser source; a novel technique for doing so is outlined.

  19. Size-dependent axial instability of microtubules surrounded by cytoplasm of a living cell based on nonlocal strain gradient elasticity theory.

    PubMed

    Sahmani, S; Aghdam, M M

    2017-06-07

    Microtubules, consisting of tubulin heterodimers arranged in parallel to form a hollow cylinder, play an important role in the mechanical stiffness of a living cell. In the present study, the nonlocal strain gradient theory of elasticity, which includes simultaneously both nonlocality and strain gradient size dependency, is employed within the framework of a refined orthotropic shell theory with a hyperbolic distribution of shear deformation to analyze the size-dependent buckling and postbuckling characteristics of microtubules embedded in cytoplasm under axial compressive load. The non-classical governing differential equations are deduced via the boundary layer theory of shell buckling, incorporating the nonlinear prebuckling deformation and the microtubule-cytoplasm interaction in the living cell environment. Finally, with the aid of a two-step perturbation solution methodology, explicit analytical expressions for the nonlocal strain gradient stability paths of axially loaded microtubules are obtained. It is illustrated that taking the nonlocal size effect into consideration decreases the critical buckling load of the microtubule and the maximum deflection associated with the minimum postbuckling load, while the strain gradient size dependency increases them. Copyright © 2017 Elsevier Ltd. All rights reserved.
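
    For reference, nonlocal strain gradient elasticity of the kind invoked above is commonly written with the following combined constitutive relation; the symbols follow the usual convention in this literature rather than values from the paper, with e_0*a the nonlocal parameter and l the strain gradient length scale.

        % a sketch of the standard combined constitutive relation
        \[
        \left(1 - (e_0 a)^2 \nabla^2\right)\sigma_{ij}
          = C_{ijkl}\left(1 - l^2 \nabla^2\right)\varepsilon_{kl}
        \]
        % l = 0 recovers Eringen's nonlocal elasticity (softening trend);
        % e_0 a = 0 recovers pure strain gradient elasticity (stiffening trend).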

  20. Comparative analysis of cells and proteins of pumpkin plants for the control of fruit size.

    PubMed

    Nakata, Yumiko; Taniguchi, Go; Takazaki, Shinya; Oda-Ueda, Naoko; Miyahara, Kohji; Ohshima, Yasumi

    2012-09-01

    Common pumpkin plants (Cucurbita maxima) produce fruits of 1-2 kg on average, while special varieties of the same species, called Atlantic Giant (AG), are known to produce huge fruits of up to several hundred kilograms. As an approach to determining the factors controlling fruit size in C. maxima, we cultivated both AG and common control plants, and found that both cell number and cell size were increased in a large fruit, while the DNA content of the cells did not change significantly. We also compared protein patterns in the leaves, stems, and ripe and young fruits by two-dimensional (2D) gel electrophoresis, and identified those differentially expressed between them by mass spectrometry. Based on these results, we suggest that factors in photosynthesis such as ribulose-bisphosphate carboxylase, glycolysis pathway enzymes, heat-shock proteins, and ATP synthase play positive or negative roles in the growth of a pumpkin fruit. These results provide a step toward the development of plant biotechnology to control fruit size in the future. Copyright © 2012 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  1. Ab initio calculations of optical properties of silver clusters: cross-over from molecular to nanoscale behavior

    NASA Astrophysics Data System (ADS)

    Titantah, John T.; Karttunen, Mikko

    2016-05-01

    Electronic and optical properties of silver clusters were calculated using two different ab initio approaches: (1) based on all-electron full-potential linearized-augmented plane-wave method and (2) local basis function pseudopotential approach. Agreement is found between the two methods for small and intermediate sized clusters for which the former method is limited due to its all-electron formulation. The latter, due to non-periodic boundary conditions, is the more natural approach to simulate small clusters. The effect of cluster size is then explored using the local basis function approach. We find that as the cluster size increases, the electronic structure undergoes a transition from molecular behavior to nanoparticle behavior at a cluster size of 140 atoms (diameter ~1.7 nm). Above this cluster size the step-like electronic structure, evident as several features in the imaginary part of the polarizability of all clusters smaller than Ag147, gives way to a dominant plasmon peak localized at wavelengths 350 nm ≤ λ ≤ 600 nm. It is, thus, at this length-scale that the conduction electrons' collective oscillations that are responsible for plasmonic resonances begin to dominate the opto-electronic properties of silver nanoclusters.

  2. Facile control of silica nanoparticles using a novel solvent varying method for the fabrication of artificial opal photonic crystals

    NASA Astrophysics Data System (ADS)

    Gao, Weihong; Rigout, Muriel; Owens, Huw

    2016-12-01

    In this work, the Stöber process was applied to produce uniform silica nanoparticles (SNPs) in the meso-scale size range. The novel aspect of this work was to control the produced silica particle size by only varying the volume of the solvent ethanol used, whilst fixing the other reaction conditions. Using this one-step Stöber-based solvent varying (SV) method, seven batches of SNPs with target diameters ranging from 70 to 400 nm were repeatedly reproduced, and the size distribution in terms of the polydispersity index (PDI) was well maintained (within 0.1). An exponential equation was used to fit the relationship between the particle diameter and ethanol volume. This equation allows the prediction of the amount of ethanol required in order to produce particles of any target diameter within this size range. In addition, it was found that the reaction was completed in approximately 2 h for all batches regardless of the volume of ethanol. Structurally coloured artificial opal photonic crystals (PCs) were fabricated from the prepared SNPs by self-assembly under gravity sedimentation.
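
    The reported diameter-ethanol relationship invites a quick curve-fitting sketch. The exponential form and the data points below are illustrative assumptions (the paper's fitted equation and values are not reproduced here); the fit is then inverted to predict the ethanol volume for a target diameter.

        # a minimal sketch, assuming a generic exponential d = a*exp(-V/b) + c
        import numpy as np
        from scipy.optimize import curve_fit

        def model(V, a, b, c):
            return a * np.exp(-V / b) + c

        V = np.array([40, 60, 80, 100, 120, 140, 160], dtype=float)  # mL ethanol (hypothetical)
        d = np.array([297, 224, 172, 134, 108, 88, 75], dtype=float) # nm (hypothetical)

        (a, b, c), _ = curve_fit(model, V, d, p0=(500.0, 50.0, 50.0))

        # invert the fit: ethanol volume needed for a 250 nm target diameter
        target = 250.0
        V_needed = -b * np.log((target - c) / a)
        print(f"ethanol for {target:.0f} nm particles: ~{V_needed:.0f} mL")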

  3. Preparation of metallic nanoparticles by irradiation in starch aqueous solution

    NASA Astrophysics Data System (ADS)

    Nemţanu, Monica R.; Braşoveanu, Mirela; Iacob, Nicuşor

    2014-11-01

    Colloidal silver nanoparticles (AgNPs) were synthesized in a single step by electron beam irradiation reduction of silver ions in an aqueous solution containing starch. The nanoparticles were characterized by spectrophotocolorimetry and compared with those obtained by the chemical (thermal) reduction method. The results showed that smaller AgNPs were prepared with higher yields as the irradiation dose increased. The particle size distribution broadened with increasing irradiation dose and dose rate. Chromatic parameters such as b* (yellow-blue coordinate), C* (chroma), and ΔEab (total color difference) could characterize the nanoparticles with respect to their concentration. The hue angle h° was correlated with the particle size distribution. Experimental data for the irradiated samples were also subjected to factor analysis using principal component extraction and varimax rotation, in order to reveal the relations between dependent and independent variables and to reduce their number. The radiation-based method provided silver nanoparticles with higher concentration and narrower size distribution than those produced by the chemical reduction method. Therefore, electron beam irradiation is effective for the preparation of silver nanoparticles using a starch aqueous solution as the dispersion medium.

  4. Size distribution of extracellular vesicles by optical correlation techniques.

    PubMed

    Montis, Costanza; Zendrini, Andrea; Valle, Francesco; Busatto, Sara; Paolini, Lucia; Radeghieri, Annalisa; Salvatore, Annalisa; Berti, Debora; Bergese, Paolo

    2017-10-01

    Understanding the colloidal properties of extracellular vesicles (EVs) is key to advancing fundamental knowledge in this field and to developing effective EV-based diagnostics, therapeutics and devices. Determination of the size distribution and colloidal stability of purified EVs resuspended in buffered media is a complex and challenging issue - because of the wide range of EV diameters (from 30 to 2000 nm), concentrations of interest and membrane properties, and the possible presence of co-isolated contaminants with similar sizes and densities, such as protein aggregates and fat globules - which is still waiting to be fully addressed. We report here a fully detailed protocol for accurate and robust determination of the size distribution and stability of EV samples which leverages a dedicated combination of Fluorescence Correlation Spectroscopy (FCS) and Dynamic Light Scattering (DLS). The theoretical background, critical experimental steps and data analysis procedures are thoroughly presented and finally illustrated through the representative case study of EV formulations obtained from culture media of B16 melanoma cells, a murine tumor cell line used as a model for human skin cancers. Copyright © 2017 Elsevier B.V. All rights reserved.
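
    Both FCS and DLS ultimately yield a translational diffusion coefficient, which is converted to a hydrodynamic size through the Stokes-Einstein relation. A minimal sketch of that conversion step, with illustrative values:

        # a minimal sketch: diffusion coefficient -> hydrodynamic diameter
        import math

        def hydrodynamic_diameter_nm(D_m2_per_s, T=298.15, eta=8.9e-4):
            """D in m^2/s, T in K, eta in Pa*s (water at 25 C by default)."""
            k_B = 1.380649e-23                   # Boltzmann constant, J/K
            return 1e9 * k_B * T / (3 * math.pi * eta * D_m2_per_s)

        # e.g. a 100 nm-class vesicle diffuses at roughly 4.4e-12 m^2/s in water
        print(f"{hydrodynamic_diameter_nm(4.4e-12):.0f} nm")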

  5. Ultrahigh pressure fast size exclusion chromatography for top-down proteomics.

    PubMed

    Chen, Xin; Ge, Ying

    2013-09-01

    Top-down MS-based proteomics has seen solid growth over the past few years but still faces significant challenges in the LC separation of intact proteins. In top-down proteomics, it is essential to separate high mass proteins from low mass species because of the exponential decay in S/N as a function of increasing molecular mass. SEC is a favored LC method for size-based separation of proteins but suffers from notoriously low resolution and detrimental dilution. Herein, we report the use of ultrahigh pressure (UHP) SEC for rapid and high-resolution separation of intact proteins for top-down proteomics. Fast separation of intact proteins (6-669 kDa) was achieved in < 7 min with high resolution and high efficiency. More importantly, we have shown that UHP-SEC provides high-resolution separation of intact proteins using an MS-friendly volatile solvent system, allowing direct top-down MS analysis of SEC-eluted proteins without an additional desalting step. Taken together, we have demonstrated that UHP-SEC is an attractive LC strategy for the size separation of proteins with great potential for top-down proteomics. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Portfolio of automated trading systems: complexity and learning set size issues.

    PubMed

    Raudys, Sarunas

    2013-03-01

    In this paper, we consider using profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influence of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that the degradation in portfolio performance due to inexact estimation of N means and N(N - 1)/2 correlations is proportional to N/L; estimation of the N variances, however, does not worsen the result. To reduce unhelpful sample size/dimensionality effects, we perform a clustering of the N time series and split them into a small number of blocks, each composed of mutually correlated ATSs. Each block generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. At the output of the portfolio management system, a regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with real financial data (2003-2012) confirm the effectiveness of the suggested approach.
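
    The block-building step lends itself to a compact sketch: cluster the N profit/loss histories by correlation distance and apply the 1/N rule within each block. The data, the cluster count, and the equal weighting across blocks are illustrative assumptions.

        # a minimal sketch of correlation-based blocks with 1/N weighting
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(2)
        L, N = 250, 12                                # sample size, number of ATSs
        returns = 0.01 * rng.standard_normal((L, N))  # toy profit/loss histories

        corr = np.corrcoef(returns, rowvar=False)
        dist = squareform(1.0 - corr, checks=False)   # condensed correlation distance
        labels = fcluster(linkage(dist, method="average"), t=4, criterion="maxclust")

        # 1/N inside each block; blocks themselves equally weighted
        weights = np.zeros(N)
        n_blocks = len(np.unique(labels))
        for block in np.unique(labels):
            members = labels == block
            weights[members] = 1.0 / (members.sum() * n_blocks)
        print(weights.round(3), weights.sum())        # weights sum to 1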

  7. Streaming simplification of tetrahedral meshes.

    PubMed

    Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T

    2007-01-01

    Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.

  8. Cast-In-Situ, Large-Sized Monolithic Silica Xerogel Prepared in Aqueous System.

    PubMed

    Ding, Wenhui; Wang, Xiaodong; Chen, Dong; Li, Tiemin; Shen, Jun

    2018-05-15

    This paper reports the preparation of cast-in-situ, large-sized monolithic silica xerogels by a two-step acid-base catalyzed approach under ambient pressure drying. Low-cost industrial silica sol and deionized water were used as the silicon source and the solvent, respectively. Hexadecetyltrimethylammonium bromide (CTAB) was used as a modification agent. Different amounts of polyethylene glycol 400 (PEG400) were added as a pore-forming agent. The silica xerogels prepared under ambient pressure drying have a mesoporous structure with a low density of 221 mg·cm^-3 and a thermal conductivity of 0.0428 W·m^-1·K^-1. The low-cost and facile preparation process, as well as the superior performance of the monolithic silica xerogels, make them a promising candidate for industrial thermal insulation materials.

  9. SCANNING NEAR-FIELD OPTICAL MICROSCOPY

    PubMed Central

    Vobornik, Dušan; Vobornik, Slavenka

    2008-01-01

    An average human eye can see details down to 0.07 mm in size. The ability to see smaller details of matter is correlated with the development of science and the comprehension of nature. Today's science needs eyes for the nano-world. Examples are easily found in biology and the medical sciences. There is a great need to determine the shape, size, chemical composition, molecular structure and dynamic properties of nano-structures. To do this, microscopes with high spatial, spectral and temporal resolution are required. Scanning Near-field Optical Microscopy (SNOM) is a new step in the evolution of microscopy. Conventional, lens-based microscopes have their resolution limited by diffraction. SNOM is not subject to this limitation and can offer up to 70 times better resolution. PMID:18318675

  10. Steps Toward an EOS-Era Aerosol Air Mass Type Climatology

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph A.

    2012-01-01

    We still have a way to go to develop a global climatology of aerosol type from the EOS-era satellite data record that currently spans more than 12 years of observations. We have demonstrated the ability to retrieve aerosol type regionally, providing a classification based on the combined constraints on particle size, shape, and single-scattering albedo (SSA) from the MISR instrument. Under good but not necessarily ideal conditions, the MISR data can distinguish three to five size bins, two to four bins in SSA, and spherical vs. non-spherical particles. However, retrieval sensitivity varies enormously with scene conditions. So, for example, there is less information about aerosol type when the mid-visible aerosol optical depth (AOD) is less than about 0.15 or 0.2.

  11. Multiplexed Affinity-Based Separation of Proteins and Cells Using Inertial Microfluidics.

    PubMed

    Sarkar, Aniruddh; Hou, Han Wei; Mahan, Alison E; Han, Jongyoon; Alter, Galit

    2016-03-30

    Isolation of low abundance proteins or rare cells from complex mixtures, such as blood, is required for many diagnostic, therapeutic and research applications. Current affinity-based protein or cell separation methods use binary 'bind-elute' separations and are inefficient when applied to the isolation of multiple low-abundance proteins or cell types. We present a method for rapid and multiplexed, yet inexpensive, affinity-based isolation of both proteins and cells, using a size-coded mixture of multiple affinity-capture microbeads and an inertial microfluidic particle sorter device. In a single binding step, different targets (cells or proteins) bind to beads of different sizes, which are then sorted by flowing them through a spiral microfluidic channel. This technique performs continuous-flow, high throughput affinity-separation of milligram-scale protein samples or millions of cells in minutes after binding. We demonstrate the simultaneous isolation of multiple antibodies from serum and multiple cell types from peripheral blood mononuclear cells or whole blood. We use the technique to isolate low abundance antibodies specific to different HIV antigens and rare HIV-specific cells from blood obtained from HIV+ patients.

  12. Method of manufacturing corrosion resistant tubing from welded stock of titanium or titanium base alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meredith, S.E.; Benjamin, J.F.

    1993-07-13

    A method is described of manufacturing corrosion resistant tubing from seam welded stock of a titanium or titanium based alloy, comprising: cold pilgering a seam welded tube hollow of titanium or titanium based alloy in a single pass to a final sized tubing, the tube hollow comprising a strip which has been bent and welded along opposed edges thereof to form the tube hollow, the tube hollow optionally being heat treated prior to the cold pilgering step provided the tube hollow is not heated to a temperature which would transform the titanium or titanium alloy into the beta phase, the cold pilgering effecting a reduction in cross sectional area of the tube hollow of at least 50% and a reduction of wall thickness of at least 50%, in order to achieve a radially oriented crystal structure; and annealing the final sized tubing at a temperature and time sufficient to effect complete recrystallization and reform grains in a weld area along the seam into smaller, homogeneous grains.

  13. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support vector sufficiency. Experimental results on a wide variety of real-world small and large instance size applications, in the context of binary classification, multi-class problems and regression, are then reported to show that RKELM performs at a competitive level of generalization performance compared to SVM/LS-SVM at only a fraction of the computational effort incurred. Copyright © 2015 Elsevier Ltd. All rights reserved.
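
    The core of the method as described reduces to a few lines of linear algebra: pick a random subset of training points as kernel mapping samples, then solve a regularized least-squares problem for the output weights. The kernel choice and regularization constant below are illustrative assumptions, not the paper's exact formulation.

        # a minimal numpy sketch of the reduced-kernel ELM idea
        import numpy as np

        def rbf_kernel(A, B, gamma=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def rkelm_fit(X, y, n_support=50, C=10.0, gamma=1.0, seed=0):
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(X), size=min(n_support, len(X)), replace=False)
            S = X[idx]                            # randomly chosen mapping samples
            K = rbf_kernel(X, S, gamma)           # n x m reduced kernel matrix
            # beta = (K^T K + I/C)^-1 K^T y  (ridge-regularized least squares)
            beta = np.linalg.solve(K.T @ K + np.eye(len(S)) / C, K.T @ y)
            return S, beta

        def rkelm_predict(X, S, beta, gamma=1.0):
            return rbf_kernel(X, S, gamma) @ beta

        rng = np.random.default_rng(1)
        X = rng.uniform(-3, 3, size=(400, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(400)
        S, beta = rkelm_fit(X, y)
        print(np.abs(rkelm_predict(X, S, beta) - y).mean())  # small residual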

  14. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1,...,m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
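
    The flavor of such a successive-approximation scheme can be sketched for a two-component normal mixture: apply the usual fixed-point (EM-style) update with a relaxation factor omega playing the role of the step size, where omega = 1 is the plain iteration and local convergence is expected for 0 < omega < 2. This is an illustrative analogue, not the paper's exact procedure (which also exploits partially identified samples).

        # a minimal sketch: relaxed fixed-point iteration for a normal mixture
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(-2, 1.0, 300), rng.normal(3, 1.5, 700)])

        def fixed_point_step(x, pi, mu, sd):
            # posterior responsibility of component 1 (shared constants cancel)
            p1 = pi * np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
            p2 = (1 - pi) * np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
            r = p1 / (p1 + p2)
            mu_new = np.array([(r * x).sum() / r.sum(),
                               ((1 - r) * x).sum() / (1 - r).sum()])
            sd_new = np.sqrt(np.array([(r * (x - mu_new[0]) ** 2).sum() / r.sum(),
                                       ((1 - r) * (x - mu_new[1]) ** 2).sum() / (1 - r).sum()]))
            return r.mean(), mu_new, sd_new

        pi, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
        omega = 1.2                                   # step size in (0, 2)
        for _ in range(100):
            pi_t, mu_t, sd_t = fixed_point_step(x, pi, mu, sd)
            pi = pi + omega * (pi_t - pi)             # relaxed update
            mu = mu + omega * (mu_t - mu)
            sd = sd + omega * (sd_t - sd)
        print(round(pi, 3), mu.round(2), sd.round(2)) # near 0.3, [-2, 3], [1, 1.5]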

  15. Controlling Surface Chemistry to Deconvolute Corrosion Benefits Derived from SMAT Processing

    NASA Astrophysics Data System (ADS)

    Murdoch, Heather A.; Labukas, Joseph P.; Roberts, Anthony J.; Darling, Kristopher A.

    2017-07-01

    Grain refinement through surface plastic deformation processes such as surface mechanical attrition treatment has shown measurable benefits for mechanical properties, but its impact on corrosion behavior has been inconsistent. Many factors obfuscate the particular corrosion mechanisms at work, including grain size, but also texture, processing contamination, and surface roughness, and many studies attempting to link corrosion and grain size have not been able to decouple these effects. Here we introduce a preprocessing step to mitigate the surface contamination effects that have been a concern in previous corrosion studies on plastically deformed surfaces; this allows comparison of corrosion behavior across grain sizes while controlling for texture and surface roughness. Potentiodynamic polarization in aqueous NaCl solution suggests that different corrosion mechanisms are responsible for samples prepared with the preprocessing step.

  16. Improving stability of prediction models based on correlated omics data by using network approaches.

    PubMed

    Tissier, Renaud; Houwing-Duistermaat, Jeanine; Rodríguez-Girondo, Mar

    2018-01-01

    Building prediction models based on complex omics datasets such as transcriptomics, proteomics and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are 1) network construction, 2) clustering to empirically derive modules or pathways, and 3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results, we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally, we illustrate the advantages of our approach by applying the methodology to two problems, namely prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of the response of breast cancer cell lines to treatment with specific drugs using a breast cancer cell line pharmacogenomics dataset.
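
    The three-step strategy can be sketched compactly: (1) build a correlation network, (2) cut a hierarchical clustering of it into modules, and (3) feed module-level summaries into a regularized regression. Step 3 below uses a simple per-module mean plus ridge regression as an illustrative stand-in for the group-based selection and group-specific penalization the paper studies; the data are toy.

        # a minimal sketch of network -> modules -> prediction, on toy data
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform
        from sklearn.linear_model import RidgeCV

        rng = np.random.default_rng(3)
        n, p = 120, 200
        X = rng.standard_normal((n, p))                     # toy omics matrix
        y = X[:, :10].mean(axis=1) + 0.3 * rng.standard_normal(n)

        # steps 1-2: correlation network -> distance -> hierarchical modules
        corr = np.corrcoef(X, rowvar=False)
        dist = squareform(1 - np.abs(corr), checks=False)
        modules = fcluster(linkage(dist, method="average"), t=20, criterion="maxclust")

        # step 3: one summary feature (the mean) per module, then ridge
        Z = np.column_stack([X[:, modules == m].mean(axis=1)
                             for m in np.unique(modules)])
        model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(Z, y)
        print("training R^2:", round(model.score(Z, y), 3))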

  17. Measuring the costs and benefits of conservation programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Einhorn, M.A.

    1985-07-25

    A step-by-step analysis of the effects of utility-sponsored conservation-promoting programs begins by identifying several factors which will reduce a program's effectiveness. The framework for measuring cost savings and designing a conservation program needs to consider the size of appliance subsidies, what form incentives should take, and how customer behavior will change as a result of the incentives. Continual reevaluation is necessary to determine whether to change the size of rebates or whether to continue the program. Analytical tools for making these determinations are improving as conceptual breakthroughs in econometrics permit more rigorous analysis. 5 figures.

  18. A State Event Detection Algorithm for Numerically Simulating Hybrid Systems with Model Singularities

    DTIC Science & Technology

    2007-01-01

    the case of non-constant step sizes. Therefore the event dynamics after the predictor and corrector phases are, respectively, $g^p_{k+1} = g(x_k + h_{k+1}\{\,\dots$ ... the Extrapolation Polynomial. Using a Taylor series expansion of the predicted event function, eq. (6),
    \[
    g^p_{k+1} = g_k
      + h_{k+1}\left.\frac{dg^p}{dt}\right|_{(x,t)=(x_k,t_k)}
      + \frac{h_{k+1}^2}{2!}\left.\frac{d^2g^p}{dt^2}\right|_{(x,t)=(x_k,t_k)}
      + \dots, \qquad (8)
    \]
    we can determine the value of $g^p_{k+1}$ as a function of the, yet undetermined, step size $h_{k+1}$. Recalling

  19. A 10-step safety management framework for construction small and medium-sized enterprises.

    PubMed

    Gunduz, Murat; Laitinen, Heikki

    2017-09-01

    It is of great importance to develop an occupational health and safety management system (OHS MS) to form a systemized approach to improve health and safety. It is a known fact that thousands of accidents and injuries occur in the construction industry. Most of these accidents occur in small and medium-sized enterprises (SMEs). This article provides a 10-step user-friendly OHS MS for the construction industry. A quantitative OHS MS indexing method is also introduced in the article. The practical application of the system to real SMEs and its promising results are also presented.

  20. Analysis of human blood plasma cell-free DNA fragment size distribution using EvaGreen chemistry based droplet digital PCR assays.

    PubMed

    Fernando, M Rohan; Jiang, Chao; Krzyzanowski, Gary D; Ryan, Wayne L

    2018-04-12

    Plasma cell-free DNA (cfDNA) fragment size distribution provides important information required for diagnostic assay development. We have developed and optimized droplet digital PCR (ddPCR) assays that quantify short and long DNA fragments. These assays were used to analyze the plasma cfDNA fragment size distribution in human blood. Assays were designed to amplify 76, 135, 490 and 905 base pair fragments of the human β-actin gene, and were used for fragment size analysis of plasma cell-free, exosome and apoptotic body DNA obtained from normal and pregnant donors. The relative percentages for the 76, 135, 490 and 905 bp fragments from non-pregnant plasma and exosome DNA were 100%, 39%, 18%, 5.6% and 100%, 40%, 18%, 3.3%, respectively. The relative percentages for pregnant plasma and exosome DNA were 100%, 34%, 14%, 23% and 100%, 30%, 12%, 18%, respectively. The relative percentages for the non-pregnant plasma pellet (obtained after the second centrifugation step) were 100%, 100%, 87% and 83%, respectively. Non-pregnant plasma cell-free and exosome DNA share a unique fragment distribution pattern which differs from that of pregnant donor plasma and exosome DNA, indicating the effect of physiological status on cfDNA fragment size distribution. The fragment distribution pattern for the plasma pellet, which includes apoptotic bodies and nuclear DNA, was greatly different from that of plasma cell-free and exosome DNA. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
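
    The relative percentages above are each amplicon's measured concentration normalized to the shortest (76 bp) assay, which by design detects every fragment long enough to contain it. A minimal sketch with hypothetical copies/uL chosen to reproduce the non-pregnant plasma figures:

        # a minimal sketch of the normalization behind the percentages
        sizes = [76, 135, 490, 905]          # amplicon lengths, bp
        copies = {76: 1240.0, 135: 484.0, 490: 223.0, 905: 69.0}  # hypothetical copies/uL

        ref = copies[76]
        for bp in sizes:
            print(f"{bp:>4} bp: {100 * copies[bp] / ref:5.1f}%")  # 100.0, 39.0, 18.0, 5.6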
