Influence of BMI and dietary restraint on self-selected portions of prepared meals in US women.
Labbe, David; Rytz, Andréas; Brunstrom, Jeffrey M; Forde, Ciarán G; Martin, Nathalie
2017-04-01
The rise of obesity prevalence has been attributed in part to an increase in food and beverage portion sizes selected and consumed among overweight and obese consumers. Nevertheless, evidence from observations of adults is mixed and contradictory findings might reflect the use of small or unrepresentative samples. The objective of this study was i) to determine the extent to which BMI and dietary restraint predict self-selected portion sizes for a range of commercially available prepared savoury meals and ii) to consider the importance of these variables relative to two previously established predictors of portion selection, expected satiation and expected liking. A representative sample of female consumers (N = 300, range 18-55 years) evaluated 15 frozen savoury prepared meals. For each meal, participants rated their expected satiation and expected liking, and selected their ideal portion using a previously validated computer-based task. Dietary restraint was quantified using the Dutch Eating Behaviour Questionnaire (DEBQ-R). Hierarchical multiple regression was performed on self-selected portions with age, hunger level, and meal familiarity entered as control variables in the first step of the model, expected satiation and expected liking as predictor variables in the second step, and DEBQ-R and BMI as exploratory predictor variables in the third step. The second and third steps significantly explained variance in portion size selection (18% and 4%, respectively). Larger portion selections were significantly associated with lower dietary restraint and with lower expected satiation. There was a positive relationship between BMI and portion size selection (p = 0.06) and between expected liking and portion size selection (p = 0.06). Our discussion considers future research directions, the limited variance explained by our model, and the potential for portion size underreporting by overweight participants. Copyright © 2016 Nestec S.A. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar
2011-12-01
This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. This method uses prior knowledge of the channel impulse response statistic. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In the VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms offer good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
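As an illustration of the general idea only (not the MSD-optimal step-size vector derived in the paper), the following minimal Python sketch runs an NLMS identification loop whose scalar step size shrinks as a smoothed error power decays; the filter length, smoothing constant, and clipping bounds are assumptions.

```python
import numpy as np

def vss_nlms(x, d, num_taps=16, mu_max=1.0, mu_min=0.01, eps=1e-8):
    """NLMS with a generic variable step size (not the paper's MSD-optimal rule)."""
    w = np.zeros(num_taps)
    err_pow = 1.0                                    # smoothed error power
    for n in range(num_taps - 1, len(d)):
        u = x[n - num_taps + 1:n + 1][::-1]          # regressor, most recent sample first
        e = d[n] - w @ u
        err_pow = 0.97 * err_pow + 0.03 * e ** 2
        mu = np.clip(err_pow / (err_pow + 1.0), mu_min, mu_max)   # large error -> large step
        w += mu * e * u / (u @ u + eps)              # normalized LMS update
    return w

# toy system identification: recover a known FIR channel
rng = np.random.default_rng(0)
h_true = rng.standard_normal(16)
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_est = vss_nlms(x, d)
print("coefficient error norm:", np.linalg.norm(w_est - h_true))
```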
Single cardiac ventricular myosins are autonomous motors
Wang, Yihua; Yuan, Chen-Ching; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta
2018-01-01
Myosin transduces ATP free energy into mechanical work in muscle. Cardiac muscle has dynamically wide-ranging power demands on the motor as the muscle changes modes in a heartbeat from relaxation, via auxotonic shortening, to isometric contraction. The cardiac power output modulation mechanism is explored in vitro by assessing single cardiac myosin step-size selection versus load. Transgenic mice express human ventricular essential light chain (ELC) in wild-type (WT), or hypertrophic cardiomyopathy-linked mutant forms, A57G or E143K, in a background of mouse α-cardiac myosin heavy chain. Ensemble motility and single myosin mechanical characteristics are consistent with an A57G that impairs ELC N-terminus actin binding and an E143K that impairs lever-arm stability, while both species down-shift average step-size with increasing load. Cardiac myosin in vivo down-shifts velocity/force ratio with increasing load by changed unitary step-size selections. Here, the loaded in vitro single myosin assay indicates quantitative complementarity with the in vivo mechanism. Both have two embedded regulatory transitions, one inhibiting ADP release and a second novel mechanism inhibiting actin detachment via strain on the actin-bound ELC N-terminus. Competing regulators filter unitary step-size selection to control force-velocity modulation without myosin integration into muscle. Cardiac myosin is muscle in a molecule. PMID:29669825
NASA Astrophysics Data System (ADS)
Zhou, Yali; Zhang, Qizhi; Yin, Yixin
2015-05-01
In this paper, active control of impulsive noise with a symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x least mean square (FxLMS) algorithm is developed based on an analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm requires neither parameter selection and threshold estimation nor cost function selection and complex gradient computation. Computer simulations show that the proposed algorithm is effective in attenuating SαS impulsive noise, and the proposed algorithm has also been implemented in an experimental ANC system. Experimental results show that the proposed scheme has good performance for SαS impulsive noise attenuation.
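The paper's Gaussian-based step-size normalization is not reproduced here; the sketch below shows a generic filtered-x LMS loop whose update is normalized by the filtered-reference energy plus the instantaneous error power, one common way to keep impulsive samples from destabilizing the filter. The primary path P, secondary path S, the Student-t reference (standing in for an SαS source), and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 5000, 8
x = rng.standard_t(df=1.5, size=N)           # heavy-tailed reference, a stand-in for an SaS source
P = np.array([0.0, 0.8, 0.5, 0.2])           # primary path (assumed)
S = np.array([0.0, 0.6, 0.3])                # secondary path (assumed known exactly)
d = np.convolve(x, P)[:N]                    # disturbance at the error sensor
xf = np.convolve(x, S)[:N]                   # filtered reference
w, y = np.zeros(L), np.zeros(N)
mu0, eps = 0.05, 1e-8
e_hist = np.zeros(N)

for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]
    y[n] = w @ u                                        # anti-noise before the secondary path
    ys = y[n - len(S) + 1:n + 1][::-1] @ S              # anti-noise after the secondary path
    e = d[n] - ys
    e_hist[n] = e
    uf = xf[n - L + 1:n + 1][::-1]
    # normalizing by reference energy plus error power bounds the update under impulses
    w += mu0 * e * uf / (uf @ uf + e ** 2 + eps)

print("mean |e|, first half :", np.mean(np.abs(e_hist[L:N // 2])))
print("mean |e|, second half:", np.mean(np.abs(e_hist[N // 2:])))
```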
Multiple stage miniature stepping motor
Niven, William A.; Shikany, S. David; Shira, Michael L.
1981-01-01
A stepping motor comprising a plurality of stages which may be selectively activated to effect stepping movement of the motor, and which are mounted along a common rotor shaft to achieve considerable reduction in motor size and minimum diameter, whereby sequential activation of the stages results in successive rotor steps with direction being determined by the particular activating sequence followed.
Variable-mesh method of solving differential equations
NASA Technical Reports Server (NTRS)
Van Wyk, R.
1969-01-01
Multistep predictor-corrector method for numerical solution of ordinary differential equations retains high local accuracy and convergence properties. In addition, the method was developed in a form conducive to the generation of effective criteria for the selection of subsequent step sizes in step-by-step solution of differential equations.
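A textbook-style illustration of such a step-size criterion (not the specific method of this report): in an Adams-Bashforth-Moulton pair, the predictor-corrector difference estimates the local truncation error (Milne's device), and the next step is scaled so the estimate meets the tolerance. The 19/270 constant applies to the fourth-order pair; the safety factor and growth limits are assumptions.

```python
# Textbook-style criterion (Milne's device), not the method of the cited report: the
# predictor-corrector difference estimates the local error of an ABM4 pair, and the
# next step is scaled so that the estimate meets the tolerance.
def next_step_size(h, y_pred, y_corr, tol, order=4, safety=0.9):
    err_est = (19.0 / 270.0) * abs(y_corr - y_pred)   # Milne estimate for the 4th-order pair
    if err_est == 0.0:
        return 2.0 * h
    factor = safety * (tol / err_est) ** (1.0 / (order + 1))
    return h * min(2.0, max(0.1, factor))             # limit growth/shrinkage per step

print(next_step_size(h=0.1, y_pred=1.0012, y_corr=1.0010, tol=1e-6))
```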
Quail, Michael A; Gu, Yong; Swerdlow, Harold; Mayho, Matthew
2012-12-01
Size selection can be a critical step in preparation of next-generation sequencing libraries. Traditional methods employing gel electrophoresis lack reproducibility, are labour intensive, do not scale well and employ hazardous intercalating dyes. In a high-throughput setting, solid-phase reversible immobilisation beads are commonly used for size selection, but result in quite a broad fragment size range. We have evaluated and optimised the use of two semi-automated preparative DNA electrophoresis systems, the Caliper Labchip XT and the Sage Science Pippin Prep, for size selection of Illumina sequencing libraries. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α, we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, −β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 − α, α − β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
NASA Technical Reports Server (NTRS)
Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John
2016-01-01
In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact that limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are nearly aligned. Because of the shape of the curve relating input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size, computation of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, the stair-step artifact is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
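A toy 1-D illustration of the pixel-locking bias behind such a stair-step artifact (this is not the GOES-R/Landsat processing chain): a band-limited signal is shifted by known subpixel amounts, the offset is re-estimated from the correlation peak with three-point parabolic interpolation, and the estimate systematically pulls toward integer lags when both signals share the same sampling. Signal length, smoothing, and the shift range are assumptions.

```python
import numpy as np

def fourier_shift(sig, delta):
    """Shift a 1-D signal by a fractional number of samples (Fourier shift theorem)."""
    k = np.fft.fftfreq(len(sig))
    return np.real(np.fft.ifft(np.fft.fft(sig) * np.exp(-2j * np.pi * k * delta)))

def estimate_shift(a, b):
    """Offset of a relative to b from the correlation peak plus parabolic interpolation."""
    n = len(a)
    xc = np.fft.fftshift(np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))))
    p = int(np.argmax(xc))
    y0, y1, y2 = xc[p - 1], xc[p], xc[p + 1]
    return (p + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)) - n // 2

rng = np.random.default_rng(2)
ref = np.convolve(rng.standard_normal(512), np.ones(9) / 9, mode="same")   # band-limited "scene"
true = np.linspace(-1.5, 1.5, 61)
est = np.array([estimate_shift(fourier_shift(ref, t), ref) for t in true])
# Plotting est against true typically exposes the bias toward integer offsets (the
# stair-step shape described above); it shrinks when correlating on a finer grid.
print("max |estimate - truth| (pixels):", np.max(np.abs(est - true)))
```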
2016-02-10
Bolt hole eddy current (BHEC) inspection data were acquired for a wide range of crack sizes and shapes, including mid-bore, corner and through-thickness cracks, and used to select the most appropriate VIC-3D surrogate model for the subsequent crack sizing inversion step. Inversion results are reported for selected mid-bore, through-thickness and corner flaws. Subject terms: bolt hole eddy current (BHEC); mid-bore, corner and through-thickness crack types; VIC-3D generated surrogate models.
Variable Step-Size Selection Methods for Implicit Integration Schemes
2005-10-01
In this section we explore this variable step-size selection method for two problems, the Lotka-Volterra model and the Kepler problem. For this example we consider the Lotka-Volterra model of a simple predator-prey system, and the following variation of the Lotka-Volterra problem: u̇ = u²v(v − 2), v̇ = v²u(1 − u), i.e. (u̇, v̇) = f(u, v), for t ∈ [0, 50].
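The report's own step-size selection scheme is not reproduced here; as a stand-in, the sketch below integrates the quoted Lotka-Volterra variant with SciPy's implicit Radau method, whose built-in error control adapts the step size to the requested tolerances. The initial condition is an assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    u, v = y
    return [u ** 2 * v * (v - 2.0), v ** 2 * u * (1.0 - u)]

# Radau is an implicit Runge-Kutta method; its built-in error control adapts the step
# size to the requested tolerances (standing in for the report's selection scheme).
sol = solve_ivp(f, (0.0, 50.0), [1.0, 1.0], method="Radau", rtol=1e-6, atol=1e-9)
steps = np.diff(sol.t)
print(f"{sol.t.size} grid points, h_min = {steps.min():.3e}, h_max = {steps.max():.3e}")
```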
Kyriakis, Efstathios; Psomopoulos, Constantinos; Kokkotis, Panagiotis; Bourtsalas, Athanasios; Themelis, Nikolaos
2017-06-23
This study attempts the development of an algorithm that presents a step-by-step selection method for the location and the size of a waste-to-energy facility targeting the maximum output energy, while also considering the basic obstacle, which is in many cases the gate fee. Various parameters were identified and evaluated in order to formulate the proposed decision-making method in the form of an algorithm. The principal simulation input is the amount of municipal solid waste (MSW) available for incineration, which, together with its net calorific value, is the most important factor for the feasibility of the plant. Moreover, the research focuses both on the parameters that could increase the energy production and on those that affect the R1 energy efficiency factor. Estimation of the final gate fee is achieved through an economic analysis of the entire project, investigating both the expenses and the revenues expected according to the selected site and the outputs of the facility. At this point, a number of common revenue methods were included in the algorithm. The developed algorithm has been validated using three case studies in Greece: Athens, Thessaloniki, and Central Greece, where the cities of Larisa and Volos were selected for the application of the proposed decision-making tool. These case studies were selected based on a previous publication by two of the authors, in which these areas were examined. Results reveal that the development of a "solid" methodological approach to selecting the site and the size of a waste-to-energy (WtE) facility is feasible. However, maximization of the energy efficiency factor R1 requires high utilization factors, while minimization of the final gate fee requires a high R1 and high metals recovery from the bottom ash, as well as economic exploitation of recovered raw materials, if any.
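For the R1 energy efficiency factor mentioned above, a small sketch following the formula in Annex II of the EU Waste Framework Directive, R1 = (Ep − (Ef + Ei)) / (0.97 (Ew + Ef)), with electricity weighted by 2.6 and exported heat by 1.1; the plant figures below are purely illustrative and are not results from the study.

```python
def r1_factor(elec_GJ, heat_GJ, Ew_GJ, Ef_GJ=0.0, Ei_GJ=0.0):
    """R1 factor per Annex II of the Waste Framework Directive: electricity weighted by 2.6,
    exported heat by 1.1, and 0.97 accounting for losses through bottom ash and radiation.
    All quantities in GJ per year."""
    Ep = 2.6 * elec_GJ + 1.1 * heat_GJ
    return (Ep - (Ef_GJ + Ei_GJ)) / (0.97 * (Ew_GJ + Ef_GJ))

# Illustrative plant (not a result of the study): 200,000 t/yr of MSW with a net calorific
# value of 10 GJ/t, exporting 400,000 GJ of electricity and 300,000 GJ of heat per year.
Ew = 200_000 * 10.0
print(f"R1 = {r1_factor(400_000, 300_000, Ew):.2f}  (>= 0.65 qualifies a new plant as energy recovery)")
```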
Advantages offered by high average power picosecond lasers
NASA Astrophysics Data System (ADS)
Moorhouse, C.
2011-03-01
As electronic devices shrink in size to reduce material costs, device size and weight, thinner materials are also utilized. Feature sizes are also decreasing, which is pushing manufacturers towards single-step laser direct-write processes as an attractive alternative to conventional, multiple-step photolithography processes, eliminating process steps and the cost of chemicals. The fragile nature of these thin materials makes them difficult to machine either mechanically or with conventional nanosecond-pulsewidth, diode-pumped solid state (DPSS) lasers. Picosecond laser pulses can cut materials with reduced damage regions and selectively remove thin films due to the reduced thermal effects of the shorter pulsewidth. Also, the high repetition rate allows high-speed processing for industrial applications. Selective removal of thin films for OLED patterning, silicon solar cells and flat panel displays is discussed, as well as laser cutting of transparent materials with low melting points such as polyethylene terephthalate (PET). For many of these thin-film applications, where low pulse energy and high repetition rate are required, throughput can be increased by a novel technique using multiple beams from a single laser source, which is outlined.
Normalised subband adaptive filtering with extended adaptiveness on degree of subband filters
NASA Astrophysics Data System (ADS)
Samuyelu, Bommu; Rajesh Kumar, Pullakura
2017-12-01
This paper proposes an adaptive normalised subband adaptive filtering (NSAF) scheme to improve NSAF performance. The proposed NSAF extends the adaptiveness of existing variants in two ways: first, the step size is made adaptive, and second, the selection of subbands is made adaptive. Hence, the proposed NSAF is termed here the variable step-size NSAF with selected subbands (VS-SNSAF). Experimental investigations are carried out to demonstrate the performance (in terms of convergence) of the VS-SNSAF against the conventional NSAF and its state-of-the-art adaptive variants. The results show the superior performance of VS-SNSAF over the traditional NSAF and its variants. Its stability, robustness against noise, and computational complexity are also examined.
Equal-mobility bed load transport in a small, step-pool channel in the Ouachita Mountains
Daniel A. Marion; Frank Weirich
2003-01-01
Abstract: Equal-mobility transport (EMT) of bed load is more evident than size-selective transport during near-bankfull flow events in a small, step-pool channel in the Ouachita Mountains of central Arkansas. Bed load transport modes were studied by simulating five separate runoff events with peak discharges between 0.25 and 1.34 m3...
[Selection of patients for transcatheter aortic valve implantation].
Tron, Christophe; Godin, Matthieu; Litzler, Pierre-Yves; Bauer, Fabrice; Caudron, Jérome; Dacher, Jean-Nicolas; Borz, Bogdan; Canville, Alexandre; Kurtz, Baptiste; Bessou, Jean-Paul; Cribier, Alain; Eltchaninoff, Hélène
2012-06-01
Careful patient selection is a crucial step before transcatheter aortic valve implantation (TAVI), both to confirm the indication and to choose the access route. TAVI should be considered only in patients with symptomatic severe aortic stenosis and either a contraindication to surgery or a high surgical risk. The indication for TAVI should be discussed in a multidisciplinary team meeting. Echocardiography and/or CT scan are mandatory to evaluate the aortic annulus size and select the correct prosthesis size. The feasibility of transfemoral implantation is evaluated by angiography and CT scan, based not only on the arterial diameters but also on the presence of tortuosity and arterial calcification. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Selectivity Mechanism of the Nuclear Pore Complex Characterized by Single Cargo Tracking
Lowe, Alan R.; Siegel, Jake J.; Kalab, Petr; Siu, Merek; Weis, Karsten; Liphardt, Jan T.
2010-01-01
The Nuclear Pore Complex (NPC) mediates all exchange between the cytoplasm and the nucleus. Small molecules can passively diffuse through the NPC, while larger cargos require transport receptors to translocate. How the NPC facilitates the translocation of transport receptor/cargo complexes remains unclear. Here, we track single protein-functionalized Quantum Dot (QD) cargos as they translocate through the NPC. Import proceeds by successive sub-steps comprising cargo capture, filtering and translocation, and release into the nucleus. The majority of QDs are rejected at one of these steps and return to the cytoplasm, including very large cargos that abort at a size-selective barrier. Cargo movement in the central channel is subdiffusive, and cargos that can bind more transport receptors diffuse more freely. Without Ran, cargos still explore the entire NPC, but have a markedly reduced probability of exit into the nucleus, suggesting that NPC entry and exit steps are not equivalent and that the pore is functionally asymmetric to importing cargos. The overall selectivity of the NPC appears to arise from the cumulative action of multiple reversible sub-steps and a final irreversible exit step. PMID:20811366
Orlenko, Alena; Chi, Peter B; Liberles, David A
2017-05-25
Understanding the genotype-phenotype map is fundamental to our understanding of genomes. Genes do not function independently, but rather as part of networks or pathways. In the case of metabolic pathways, flux through the pathway is an important next layer of biological organization up from the individual gene or protein. Flux control in metabolic pathways, reflecting the importance of mutation to individual enzyme genes, may be evolutionarily variable due to the role of mutation-selection-drift balance. The evolutionary stability of rate limiting steps and the patterns of inter-molecular co-evolution were evaluated in a simulated pathway with a system out of equilibrium due to fluctuating selection, population size, or positive directional selection, to contrast with those under stabilizing selection. Depending upon the underlying population genetic regime, fluctuating population size was found to increase the evolutionary stability of rate limiting steps in some scenarios. This result was linked to patterns of local adaptation of the population. Further, during positive directional selection, as with more complex mutational scenarios, an increase in the observation of inter-molecular co-evolution was observed. Differences in patterns of evolution when systems are in and out of equilibrium, including during positive directional selection may lead to predictable differences in observed patterns for divergent evolutionary scenarios. In particular, this result might be harnessed to detect differences between compensatory processes and directional processes at the pathway level based upon evolutionary observations in individual proteins. Detecting functional shifts in pathways reflects an important milestone in predicting when changes in genotypes result in changes in phenotypes.
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are vulnerable to estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced to the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are also derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, several selected simulation results are shown to prove that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics.
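A minimal sketch of the sparsity-penalized idea only (a zero-attracting NLMS with a fixed step size; the paper's variable step-size rules and specific penalties are not reproduced): the l1 attractor pulls inactive taps toward zero, which suits sparse channels. The channel, filter length, and parameter values are assumptions.

```python
import numpy as np

def za_nlms(x, d, num_taps=64, mu=0.5, rho=5e-4, eps=1e-8):
    """Zero-attracting NLMS: an l1 penalty pulls inactive taps toward zero (fixed step
    size here; the paper's variable step-size rules are not reproduced)."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(d)):
        u = x[n - num_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        w += mu * e * u / (u @ u + eps) - rho * np.sign(w)   # NLMS term + zero attractor
    return w

rng = np.random.default_rng(3)
h = np.zeros(64)
h[[3, 17, 40]] = [1.0, -0.5, 0.3]                 # sparse channel: only three active taps
x = rng.standard_normal(20000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print("MSE of channel estimate:", np.mean((za_nlms(x, d) - h) ** 2))
```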
Finite element mesh refinement criteria for stress analysis
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.
1990-01-01
This paper discusses procedures for finite-element mesh selection and refinement. The objective is to improve accuracy. The procedures are based on (1) the minimization of the stiffness matrix trace (optimizing node location); (2) the use of h-version refinement (rezoning, element size reduction, and increasing the number of elements); and (3) the use of p-version refinement (increasing the order of polynomial approximation of the elements). A step-by-step procedure of mesh selection, improvement, and refinement is presented. The criteria for 'goodness' of a mesh are based on strain energy, displacement, and stress values at selected critical points of a structure. An analysis of an aircraft lug problem is presented as an example.
Compact and broadband antenna based on a step-shaped metasurface.
Li, Ximing; Yang, Jingjing; Feng, Yun; Yang, Meixia; Huang, Ming
2017-08-07
A metasurface (MS) is highly useful for improving the performance of patch antennae and reducing their size due to their inherent and unique electromagnetic properties. In this paper, a compact and broadband antenna based on a step-shaped metasurface (SMS) at an operating frequency of 4.3 GHz is presented, which is fed by a planar monopole and enabled by selecting an SMS with high selectivity. The SMS consists of an array of metallic step-shaped unit cells underneath the monopole, which provide footprint miniaturization and bandwidth expansion. Numerical results show that the SMS-based antenna with a maximum size of 0.42λ0² (where λ0 is the operating wavelength in free space) exhibits a 22.3% impedance bandwidth (S11 < -10 dB) and a high gain of more than 7.15 dBi within the passband. Experimental results at microwave frequencies verify the performance of the proposed antenna, demonstrating substantial consistency with the simulation results. The compact and broadband antenna therefore predicts numerous potential applications within modern wireless communication systems.
Soft Landing of Bare Nanoparticles with Controlled Size, Composition, and Morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Grant E.; Colby, Robert J.; Laskin, Julia
2015-01-01
A kinetically-limited physical synthesis method based on magnetron sputtering and gas aggregation has been coupled with size-selection and ion soft landing to prepare bare metal nanoparticles on surfaces with controlled coverage, size, composition, and morphology. Employing atomic force microscopy (AFM) and scanning electron microscopy (SEM), it is demonstrated that the size and coverage of bare nanoparticles soft landed onto flat glassy carbon and silicon as well as stepped graphite surfaces may be controlled through size-selection with a quadrupole mass filter and the length of deposition, respectively. The bare nanoparticles are observed with AFM to bind randomly to the flat glassy carbon surface when soft landed at relatively low coverage (10¹² ions). In contrast, on stepped graphite surfaces at intermediate coverage (10¹³ ions) the soft landed nanoparticles are shown to bind preferentially along step edges forming extended linear chains of particles. At the highest coverage (5 × 10¹³ ions) examined in this study the nanoparticles are demonstrated with both AFM and SEM to form a continuous film on flat glassy carbon and silicon surfaces. On a graphite surface with defects, however, it is shown with SEM that the presence of localized surface imperfections results in agglomeration of nanoparticles onto these features and the formation of neighboring depletion zones that are devoid of particles. Employing high resolution scanning transmission electron microscopy in the high-angle annular dark field imaging mode (STEM-HAADF) and electron energy loss spectroscopy (EELS) it is demonstrated that the magnetron sputtering/gas aggregation synthesis technique produces single metal particles with controlled morphology as well as bimetallic alloy nanoparticles with clearly defined core-shell structure. Therefore, this kinetically-limited physical synthesis technique, when combined with ion soft landing, is a versatile complementary method for preparing a wide range of bare supported nanoparticles with selected properties that are free of the solvent, organic capping agents, and residual reactants present with nanoparticles synthesized in solution.
Selection and Characterization of Vegetable Crop Cultivars for use in Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Langhans, Robert W.
1997-01-01
Cultivar evaluation for controlled environments is a lengthy and multifaceted activity. The chapters of this thesis cover eight steps preparatory to yield trials, and the final step of cultivar selection after data are collected. The steps are as follows: 1. Examination of the literature on the crop and crop cultivars to assess the state of knowledge. 2. Selection of standard cultivars with which to explore crop response to major growth factors and determine set points for screening and, later, production. 3. Determination of practical growing techniques for the crop in controlled environments. 4. Design of experiments for determination of crop responses to the major growth factors, with particular emphasis on photoperiod, daily light integral and air temperature. 5. Developing a way of measuring yield appropriate to the crop type by sampling through the harvest period and calculating a productivity function. 6. Narrowing down the pool of cultivars and breeding lines according to a set of criteria and breeding history. 7. Determination of environmental set points for cultivar evaluation through calculating production cost as a function of set points and size of target facility. 8. Design of screening and yield trial experiments emphasizing efficient use of space. 9. Final evaluation of cultivars after data collection, in terms of production cost and value to the consumer. For each of the steps, relevant issues are addressed. In selecting standards to determine set points for screening, set points that optimize cost of production for the standards may not be applicable to all cultivars. Production of uniform and equivalent-sized seedlings is considered as a means of countering possible differences in seed vigor. Issues of spacing and re-spacing are also discussed.
Simulation methods with extended stability for stiff biochemical Kinetics.
Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin
2010-08-11
With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
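A minimal fixed-τ Poisson tau-leap sketch for a toy reversible isomerisation A ⇌ B (the Runge-Kutta extensions proposed in the paper are not implemented); the stoichiometry, rates, and τ are assumptions.

```python
import numpy as np

def tau_leap(x0, stoich, propensity, t_end, tau, seed=4):
    """Basic fixed-step Poisson tau-leap (the RK extensions of the paper are not implemented)."""
    rng = np.random.default_rng(seed)
    x, t = np.asarray(x0, dtype=float), 0.0
    while t < t_end:
        a = propensity(x)                       # propensity of each reaction channel
        k = rng.poisson(a * tau)                # firings of each channel during (t, t + tau]
        x = np.maximum(x + stoich.T @ k, 0.0)   # update copy numbers, clamp at zero
        t += tau
    return x

# Toy reversible isomerisation A <-> B with rates 1.0 and 0.5.
stoich = np.array([[-1, +1],    # A -> B
                   [+1, -1]])   # B -> A
propensity = lambda x: np.array([1.0 * x[0], 0.5 * x[1]])
print("final state:", tau_leap([1000, 0], stoich, propensity, t_end=10.0, tau=0.01))
# expected equilibrium near A ~ 333, B ~ 667
```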
Linking seasonal home range size with habitat selection and movement in a mountain ungulate.
Viana, Duarte S; Granados, José Enrique; Fandos, Paulino; Pérez, Jesús M; Cano-Manuel, Francisco Javier; Burón, Daniel; Fandos, Guillermo; Aguado, María Ángeles Párraga; Figuerola, Jordi; Soriguer, Ramón C
2018-01-01
Space use by animals is determined by the interplay between movement and the environment, and is thus mediated by habitat selection, biotic interactions and intrinsic factors of moving individuals. These processes ultimately determine home range size, but their relative contributions and dynamic nature remain less explored. We investigated the role of habitat selection, movement unrelated to habitat selection and intrinsic factors related to sex in driving space use and home range size in Iberian ibex, Capra pyrenaica . We used GPS collars to track ibex across the year in two different geographical areas of Sierra Nevada, Spain, and measured habitat variables related to forage and roost availability. By using integrated step selection analysis (iSSA), we show that habitat selection was important to explain space use by ibex. As a consequence, movement was constrained by habitat selection, as observed displacement rate was shorter than expected under null selection. Selection-independent movement, selection strength and resource availability were important drivers of seasonal home range size. Both displacement rate and directional persistence had a positive relationship with home range size while accounting for habitat selection, suggesting that individual characteristics and state may also affect home range size. Ibex living at higher altitudes, where resource availability shows stronger altitudinal gradients across the year, had larger home ranges. Home range size was larger in spring and autumn, when ibex ascend and descend back, and smaller in summer and winter, when resources are more stable. Therefore, home range size decreased with resource availability. Finally, males had larger home ranges than females, which might be explained by differences in body size and reproductive behaviour. Movement, selection strength, resource availability and intrinsic factors related to sex determined home range size of Iberian ibex. Our results highlight the need to integrate and account for process dependencies, here the interdependence of movement and habitat selection, to understand how animals use space. This study contributes to understand how movement links environmental and geographical space use and determines home range behaviour in large herbivores.
Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.
Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A
2017-01-01
Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood to be invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses show that the proposed method performs well under a variety of data conditions, and optimal performance is observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures to implement the proposed method in applied research. The importance and influences are discussed regarding the choices of informative priors with zero mean and small variances. Extensions and limitations are also pointed out.
Method of electrode fabrication and an electrode for metal chloride battery
Bloom, I.D.; Nelson, P.A.; Vissers, D.R.
1993-03-16
A method of fabricating an electrode for use in a metal chloride battery and an electrode are provided. The electrode has relatively larger and more uniform pores than those found in typical electrodes. The fabrication method includes the steps of mixing sodium chloride particles selected from a predetermined size range with metal particles selected from a predetermined size range, and then rigidifying the mixture. The electrode exhibits lower resistivity values of approximately 0.5 Ω·cm² than those resistivity values of approximately 1.0-1.5 Ω·cm² exhibited by currently available electrodes.
Method of electrode fabrication and an electrode for metal chloride battery
Bloom, Ira D.; Nelson, Paul A.; Vissers, Donald R.
1993-01-01
A method of fabricating an electrode for use in a metal chloride battery and an electrode are provided. The electrode has relatively larger and more uniform pores than those found in typical electrodes. The fabrication method includes the steps of mixing sodium chloride particles selected from a predetermined size range with metal particles selected from a predetermined size range, and then rigidifying the mixture. The electrode exhibits lower resistivity values of approximately 0.5 Ω·cm² than those resistivity values of approximately 1.0-1.5 Ω·cm² exhibited by currently available electrodes.
Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew
2017-12-01
Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
Line roughness improvements on self-aligned quadruple patterning by wafer stress engineering
NASA Astrophysics Data System (ADS)
Liu, Eric; Ko, Akiteru; Biolsi, Peter; Chae, Soo Doo; Hsieh, Chia-Yun; Kagaya, Munehito; Lee, Choongman; Moriya, Tsuyoshi; Tsujikawa, Shimpei; Suzuki, Yusuke; Okubo, Kazuya; Imai, Kiyotaka
2018-04-01
In integrated circuit and memory devices, size shrinkage has been the most effective way to reduce production cost and enable the steady increase of the number of transistors per unit area over the past few decades. In order to reduce the die size and feature size, it is necessary to minimize pattern dimensions in advanced node development. At sub-10 nm nodes, extreme ultraviolet lithography (EUV) and multi-patterning solutions based on 193 nm immersion lithography are the two most common options to achieve the size requirement. In such small line-and-space features, line width roughness (LWR) and line edge roughness (LER) contribute a significant amount of process variation that impacts both physical and electrical performance. In this paper, we focus on optimizing the line roughness performance by using wafer stress engineering on a 30 nm pitch line-and-space pattern. This pattern is generated by a self-aligned quadruple patterning (SAQP) technique for the potential application of fin formation. Our investigation starts by comparing film materials and stress levels in various processing steps and material selections in the SAQP integration scheme. From this cross-matrix comparison, we are able to determine the best film stack and stress combination to achieve the lowest line roughness while maintaining pattern validity after fin etch. This stack is also used to study the step-by-step line roughness performance from SAQP to fin etch. Finally, we show successful patterning of a 30 nm pitch line-and-space SAQP scheme with 1 nm line roughness performance.
ERIC Educational Resources Information Center
Sturm, H. Pepper
In 1989, the Nevada Legislature enacted the Class-Size Reduction (CSR) Act. The measure was designed to reduce the pupil-teacher ratio in the public schools, particularly in the earliest grades. The program was scheduled to proceed in several phases. The first step reduced the student-teacher ratio in selected kindergartens and first grade classes…
Absolute phase estimation: adaptive local denoising and global unwrapping.
Bioucas-Dias, Jose; Katkovnik, Vladimir; Astola, Jaakko; Egiazarian, Karen
2008-10-10
The paper attacks absolute phase estimation with a two-step approach: the first step applies an adaptive local denoising scheme to the modulo-2π noisy phase; the second step applies a robust phase unwrapping algorithm to the denoised modulo-2π phase obtained in the first step. The adaptive local modulo-2π phase denoising is a new algorithm based on local polynomial approximations. The zero-order and the first-order approximations of the phase are calculated in sliding windows of varying size. The zero-order approximation is used for pointwise adaptive window size selection, whereas the first-order approximation is used to filter the phase in the obtained windows. For phase unwrapping, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [IEEE Trans. Image Process. 16, 698 (2007)] to the denoised wrapped phase. Simulations give evidence that the proposed algorithm yields state-of-the-art performance, enabling strong noise attenuation while preserving image details. (c) 2008 Optical Society of America
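A minimal two-step illustration of the denoise-then-unwrap idea (not the paper's adaptive local polynomial approximation or the PUMA unwrapper), assuming SciPy and scikit-image are available: the wrapped phase is smoothed in the complex domain with a fixed window, then unwrapped with a standard 2-D unwrapper. The synthetic phase surface and noise level are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.restoration import unwrap_phase

rng = np.random.default_rng(5)
row, col = np.mgrid[0:256, 0:256]
true_phase = 1e-3 * ((col - 128.0) ** 2 + (row - 128.0) ** 2)   # smooth phase spanning several wraps
noisy_wrapped = np.angle(np.exp(1j * (true_phase + 0.4 * rng.standard_normal(true_phase.shape))))

# Step 1: denoise the modulo-2*pi phase by smoothing in the complex domain
# (fixed 5x5 window here; the paper adapts the window size pointwise).
re = uniform_filter(np.cos(noisy_wrapped), size=5)
im = uniform_filter(np.sin(noisy_wrapped), size=5)
denoised_wrapped = np.arctan2(im, re)

# Step 2: unwrap the denoised modulo-2*pi phase (a standard 2-D unwrapper stands in for PUMA).
unwrapped = unwrap_phase(denoised_wrapped)

err = unwrapped - true_phase
err -= 2 * np.pi * np.round(np.mean(err) / (2 * np.pi))          # remove the global 2*pi ambiguity
print("RMS absolute-phase error (rad):", float(np.sqrt(np.mean(err ** 2))))
```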
Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes
NASA Astrophysics Data System (ADS)
Calvo, M.; González-Pinto, S.; Montijano, J. I.
2008-09-01
Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≃ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point tₙ a new step-size hₙ₊₁ = h(tₙ; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property is carried out under a discontinuous step-size policy that does not allow the step size to change if the step-size ratio between two consecutive steps is close to unity. This theory is applied to obtain global error estimations in a few problems that have been solved with the code Gauss2 [S. Gonzalez-Pinto, R. Rojas-Bello, Gauss2, a Fortran 90 code for second order initial value problems,
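Tolerance proportionality can be checked numerically with any adaptive code by solving a problem with a known solution at several tolerances and watching the global error scale roughly as a power of the tolerance. The sketch below uses SciPy's RK45 rather than the Gauss2 code discussed in the paper; the test problem is an assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

# y' = -y, y(0) = 1, exact solution y(t) = exp(-t): measure the global error at t = 5
# for a range of tolerances and check that it scales roughly like a power of the tolerance.
for tol in (1e-4, 1e-6, 1e-8, 1e-10):
    sol = solve_ivp(lambda t, y: -y, (0.0, 5.0), [1.0], method="RK45", rtol=tol, atol=tol)
    err = abs(sol.y[0, -1] - np.exp(-5.0))
    print(f"tol={tol:.0e}  global error={err:.2e}  error/tol={err / tol:.2f}")
```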
Sample size calculations for stepped wedge and cluster randomised trials: a unified approach
Hemming, Karla; Taljaard, Monica
2016-01-01
Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
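As a small illustration of how a design effect enters a sample size calculation, the sketch below uses only the standard parallel-CRT design effect 1 + (m − 1)ρ; the SW-CRT and CRT-BA design effects presented in the paper are not reproduced, and the numbers are purely illustrative.

```python
import math

def crt_design_effect(m, icc):
    """Design effect for a parallel cluster randomized trial with cluster size m."""
    return 1.0 + (m - 1.0) * icc

def clusters_per_arm(n_individual, m, icc):
    """Clusters per arm needed to match an individually randomized sample of n_individual per arm."""
    return math.ceil(n_individual * crt_design_effect(m, icc) / m)

# Illustrative numbers: 200 participants per arm under individual randomization,
# cluster size m = 50, ICC = 0.05.
print("design effect:", crt_design_effect(50, 0.05))        # 3.45
print("clusters per arm:", clusters_per_arm(200, 50, 0.05)) # 14
```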
Low-rank coal oil agglomeration product and process
Knudson, Curtis L.; Timpe, Ronald C.; Potas, Todd A.; DeWall, Raymond A.; Musich, Mark A.
1992-01-01
A selectively-sized, raw, low-rank coal is processed to produce a low ash and relative water-free agglomerate with an enhanced heating value and a hardness sufficient to produce a non-decrepitating, shippable fuel. The low-rank coal is treated, under high shear conditions, in the first stage to cause ash reduction and subsequent surface modification which is necessary to facilitate agglomerate formation. In the second stage the treated low-rank coal is contacted with bridging and binding oils under low shear conditions to produce agglomerates of selected size. The bridging and binding oils may be coal or petroleum derived. The process incorporates a thermal deoiling step whereby the bridging oil may be completely or partially recovered from the agglomerate; whereas, partial recovery of the bridging oil functions to leave as an agglomerate binder, the heavy constituents of the bridging oil. The recovered oil is suitable for recycling to the agglomeration step or can serve as a value-added product.
Low-rank coal oil agglomeration product and process
Knudson, C.L.; Timpe, R.C.; Potas, T.A.; DeWall, R.A.; Musich, M.A.
1992-11-10
A selectively-sized, raw, low-rank coal is processed to produce a low ash and relative water-free agglomerate with an enhanced heating value and a hardness sufficient to produce a non-degradable, shippable fuel. The low-rank coal is treated, under high shear conditions, in the first stage to cause ash reduction and subsequent surface modification which is necessary to facilitate agglomerate formation. In the second stage the treated low-rank coal is contacted with bridging and binding oils under low shear conditions to produce agglomerates of selected size. The bridging and binding oils may be coal or petroleum derived. The process incorporates a thermal deoiling step whereby the bridging oil may be completely or partially recovered from the agglomerate; whereas, partial recovery of the bridging oil functions to leave as an agglomerate binder, the heavy constituents of the bridging oil. The recovered oil is suitable for recycling to the agglomeration step or can serve as a value-added product.
Sexual selection on female ornaments in the sex-role-reversed Gulf pipefish (Syngnathus scovelli).
Flanagan, S P; Johnson, J B; Rose, E; Jones, A G
2014-11-01
Understanding how selection acts on traits individually and in combination is an important step in deciphering the mechanisms driving evolutionary change, but for most species, and especially those in which sexual selection acts more strongly on females than on males, we have no estimates of selection coefficients pertaining to the multivariate sexually selected phenotype. Here, we use a laboratory-based mesocosm experiment to quantify pre- and post-mating selection on female secondary sexual traits in the Gulf pipefish (Syngnathus scovelli), a sexually dimorphic, sex-role-reversed species in which ornamented females compete for access to choosy males. We calculate selection differentials and gradients on female traits, including ornament area, ornament number and body size for three episodes of selection related to female reproductive success (number of mates, number of eggs transferred and number of surviving embryos). Selection is strong on both ornament area and ornament size, and the majority of selection occurs during the premating episode of selection. Interestingly, selection on female body size, which has been detected in previous studies of Gulf pipefish, appears to be indirect, as evidenced by a multivariate analysis of selection gradients. Our results show that sexual selection favours either many bands or larger bands in female Gulf pipefish. © 2014 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.
Wang, Jiacheng; Liu, Qian
2014-04-21
A series of microporous carbons (MPCs) were successfully prepared by an efficient one-step condensation and activation strategy using commercially available dialdehyde and diamine as carbon sources. The resulting MPCs have large surface areas (up to 1881 m² g⁻¹), micropore volumes (up to 0.78 cm³ g⁻¹), and narrow micropore size distributions (0.7-1.1 nm). The CO₂ uptakes of the MPCs prepared at high temperatures (700-750 °C) are higher than those prepared under mild conditions (600-650 °C), because the former samples possess optimal micropore sizes (0.7-0.8 nm) that are highly suitable for CO₂ capture due to enhanced adsorbate-adsorbent interactions. At 1 bar, MPC-750 prepared at 750 °C demonstrates the best CO₂ capture performance and can efficiently adsorb CO₂ at 2.86 mmol g⁻¹ and 4.92 mmol g⁻¹ at 25 and 0 °C, respectively. In particular, the MPCs with optimal micropore sizes (0.7-0.8 nm) have extremely high CO₂/N₂ adsorption ratios (47 and 52 at 25 and 0 °C, respectively) at 1 bar, and initial CO₂/N₂ adsorption selectivities of up to 81 and 119 at 25 °C and 0 °C, respectively, which are far superior to previously reported values for various porous solids. These excellent results, combined with good adsorption capacities and efficient regeneration/recyclability, make these carbons amongst the most promising sorbents reported so far for selective CO₂ adsorption in practical applications.
Kang, Xinchen; Zhang, Jianling; Shang, Wenting; Wu, Tianbin; Zhang, Peng; Han, Buxing; Wu, Zhonghua; Mo, Guang; Xing, Xueqing
2014-03-12
Stable porous ionic liquid-water gel induced by inorganic salts was created for the first time. The porous gel was used to develop a one-step method to synthesize supported metal nanocatalysts. Au/SiO2, Ru/SiO2, Pd/Cu(2-pymo)2 metal-organic framework (Cu-MOF), and Au/polyacrylamide (PAM) were synthesized, in which the supports had hierarchical meso- and macropores, the size of the metal nanocatalysts could be very small (<1 nm), and the size distribution was very narrow even when the metal loading amount was as high as 8 wt %. The catalysts were extremely active, selective, and stable for oxidative esterification of benzyl alcohol to methyl benzoate, benzene hydrogenation to cyclohexane, and oxidation of benzyl alcohol to benzaldehyde because they combined the advantages of the nanocatalysts of small size and hierarchical porosity of the supports. In addition, this method is very simple.
Seed size selection by olive baboons.
Kunz, Britta Kerstin; Linsenmair, Karl Eduard
2008-10-01
Seed size is an important plant fitness trait that can influence several steps between fruiting and the establishment of a plant's offspring. Seed size varies considerably within many plant species, yet the relevance of the trait for intra-specific fruit choice by primates has received little attention. Primates may select certain seed sizes within a species for a number of reasons, e.g. to decrease indigestible seed load or increase pulp intake per fruit. Olive baboons (Papio anubis, Cercopithecidae) are known to select seed size in unripe and mature pods of Parkia biglobosa (Mimosaceae) differentially, so that pods with small seeds, and an intermediate seed number, contribute most to dispersal by baboons. We tested whether olive baboons likewise select for smaller ripe seeds within each of nine additional fruit species whose fruit pulp baboons commonly consume, and for larger seeds in one species in which baboons feed on the seeds. Species differed in fruit type and seed number per fruit. For five of these species, baboons dispersed seeds that were significantly smaller than seeds extracted manually from randomly collected fresh fruits. In contrast, for three species, baboons swallowed seeds that were significantly longer and/or wider than seeds from fresh fruits. In two species, sizes of ingested seeds and seeds from fresh fruits did not differ significantly. Baboons frequently spat out seeds of Drypetes floribunda (Euphorbiaceae) but not those of other plant species having seeds of equal size. Oral processing of D. floribunda seeds depended on seed size: seeds that were spat out were significantly larger and swallowed seeds smaller, than seeds from randomly collected fresh fruits. We argue that seed size selection in baboons is influenced, among other traits, by the amount of pulp rewarded per fruit relative to seed load, which is likely to vary with fruit and seed shape.
Signs of the Times: Signage in the Library.
ERIC Educational Resources Information Center
Johnson, Carolyn
1993-01-01
Discusses the use of signs in libraries and lists 12 steps to create successful signage. Highlights include consistency, location, color, size, lettering, types of material, user needs, signage policy, planning, in-house fabrication versus vendors, and evaluation. A selected bibliography of 24 sources of information on library signage is included.…
Mutational Effects and Population Dynamics During Viral Adaptation Challenge Current Models
Miller, Craig R.; Joyce, Paul; Wichman, Holly A.
2011-01-01
Adaptation in haploid organisms has been extensively modeled but little tested. Using a microvirid bacteriophage (ID11), we conducted serial passage adaptations at two bottleneck sizes (10⁴ and 10⁶), followed by fitness assays and whole-genome sequencing of 631 individual isolates. Extensive genetic variation was observed including 22 beneficial, several nearly neutral, and several deleterious mutations. In the three large bottleneck lines, up to eight different haplotypes were observed in samples of 23 genomes from the final time point. The small bottleneck lines were less diverse. The small bottleneck lines appeared to operate near the transition between isolated selective sweeps and conditions of complex dynamics (e.g., clonal interference). The large bottleneck lines exhibited extensive interference and less stochasticity, with multiple beneficial mutations establishing on a variety of backgrounds. Several leapfrog events occurred. The distribution of first-step adaptive mutations differed significantly from the distribution of second-steps, and a surprisingly large number of second-step beneficial mutations were observed on a highly fit first-step background. Furthermore, few first-step mutations appeared as second-steps and second-steps had substantially smaller selection coefficients. Collectively, the results indicate that the fitness landscape falls between the extremes of smooth and fully uncorrelated, violating the assumptions of many current mutational landscape models. PMID:21041559
High resolution electron microscopy study of crystal growth mechanisms in chicken bone composites
NASA Astrophysics Data System (ADS)
Cuisinier, F. J. G.; Steuer, P.; Brisson, A.; Voegel, J. C.
1995-12-01
The present study describes the early stages of chicken bone crystal growth, followed by high resolution electron microscopy (HREM). We have developed an original analysis procedure to determine the crystal structure. Images were first digitized and selected areas were fast Fourier transformed. Numerical masks were selected around the most intense spots and the filtered signal was transformed back to real space. The filtered images were then compared to computer calculated images to identify the inorganic mineral phase. Nanometer-sized particles were observed on amorphous areas. These particles have a structure loosely related to hydroxyapatite (HA) and a specific orientation. At a more advanced stage, the nanoparticles appeared to grow in two dimensions and to form plate-like crystals. These crystals seem, in a last growth step, to fuse by their (100) faces. These experimental observations allowed us to propose a four-step model for the development and growth of chicken bone crystals. The two initial stages are the ionic adsorption onto the organic substrate followed by the nucleation of nanometer-sized particles. The two following steps, i.e. two-dimensional growth of the nanoparticles leading to the formation of needle-like crystals, and the lateral fusion of these crystals by their (100) faces, are controlled only by spatial constraints inside the extracellular organic matrix.
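The Fourier-filtering step described above (transform, mask the strongest spots, transform back) can be illustrated with a minimal Python/NumPy sketch; this is not the authors' code, and the synthetic test image, spot count, and mask radius below are illustrative assumptions only.

    import numpy as np

    def fourier_filter(image, n_spots=8, mask_radius=2):
        """Keep only small masks around the strongest Fourier spots, then invert."""
        spectrum = np.fft.fftshift(np.fft.fft2(image))
        magnitude = np.abs(spectrum)
        mask = np.zeros(magnitude.shape, dtype=bool)
        ys, xs = np.indices(magnitude.shape)
        # indices of the n_spots strongest reflections (includes the central beam)
        for idx in np.argsort(magnitude.ravel())[-n_spots:]:
            cy, cx = np.unravel_index(idx, magnitude.shape)
            mask |= (ys - cy) ** 2 + (xs - cx) ** 2 <= mask_radius ** 2
        filtered = np.where(mask, spectrum, 0.0)
        return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

    # toy "lattice" image: two superposed cosine fringes plus noise
    y, x = np.mgrid[0:128, 0:128]
    image = np.cos(2 * np.pi * x / 8) + np.cos(2 * np.pi * (x + y) / 11)
    image += 0.5 * np.random.default_rng(0).standard_normal(image.shape)
    print(fourier_filter(image).shape)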
Adaptive time stepping for fluid-structure interaction solvers
Mayr, M.; Wall, W. A.; Gee, M. W.
2017-12-22
In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.
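A generic sketch of this kind of a posteriori step-size control is given below. It assumes a single scalar error indicator and a scheme of known order (the paper combines fluid, structure, and interface error estimates, which are not reproduced here); the safety factor and step-change limits are common but illustrative choices.

    def new_step_size(dt, error, tol, order, safety=0.9, fac_min=0.3, fac_max=2.0):
        """Propose the next time step from an a posteriori error estimate."""
        factor = safety * (tol / max(error, 1e-14)) ** (1.0 / (order + 1))
        factor = min(max(factor, fac_min), fac_max)   # limit step-size changes
        return dt * factor

    def adaptive_march(step_fn, t_end, dt0, tol, order):
        """step_fn(t, dt) -> error estimate of the step; rejected steps are repeated."""
        t, dt = 0.0, dt0
        while t < t_end:
            dt = min(dt, t_end - t)
            error = step_fn(t, dt)
            if error <= tol:          # accept the step
                t += dt
            dt = new_step_size(dt, error, tol, order)
        return t

    # toy usage: pretend the local error of a first-order scheme scales like dt**2
    print(adaptive_march(lambda t, dt: dt ** 2, t_end=1.0, dt0=0.1, tol=1e-4, order=1))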
Low Complexity Compression and Speed Enhancement for Optical Scanning Holography
Tsang, P. W. M.; Poon, T.-C.; Liu, J.-P.; Kim, T.; Kim, Y. S.
2016-01-01
In this paper we report a low complexity compression method that is suitable for compact optical scanning holography (OSH) systems with different optical settings. Our proposed method can be divided into 2 major parts. First, an automatic decision maker is applied to select the rows of holographic pixels to be scanned. This process enhances the speed of acquiring a hologram, and also lowers the data rate. Second, each row of down-sampled pixels is converted into a one-bit representation with delta modulation (DM). Existing DM-based hologram compression techniques suffer from the disadvantage that a core parameter, commonly known as the step size, has to be determined in advance. However, the correct value of the step size for compressing each row of the hologram depends on the dynamic range of the pixels, which can deviate significantly with the object scene, as well as with OSH systems with different optical settings. We have overcome this problem by incorporating a dynamic step-size adjustment scheme. The proposed method is applied in the compression of holograms that are acquired with 2 different OSH systems, demonstrating a compression ratio of over two orders of magnitude, while preserving favorable fidelity on the reconstructed images. PMID:27708410
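One-bit delta modulation with an adaptive step size can be sketched as follows. The adaptation rule shown (grow the step on consecutive equal bits, shrink it otherwise) is a generic stand-in; the paper derives its step size from the dynamic range of each hologram row, which is not reproduced here, and the sinusoidal test row is purely illustrative.

    import numpy as np

    def adaptive_dm_encode(row, step0=0.05, grow=1.5, shrink=0.66):
        bits, recon = [], []
        estimate, step, last_bit = 0.0, step0, None
        for sample in row:
            bit = 1 if sample >= estimate else 0
            if last_bit is not None:
                step = step * grow if bit == last_bit else step * shrink
            estimate += step if bit else -step
            bits.append(bit)
            recon.append(estimate)
            last_bit = bit
        return np.array(bits, dtype=np.uint8), np.array(recon)

    row = np.sin(np.linspace(0, 4 * np.pi, 200))        # stand-in for one hologram row
    bits, recon = adaptive_dm_encode(row)
    print(bits[:16], float(np.mean((row - recon) ** 2)))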
Performance analysis and kernel size study of the Lynx real-time operating system
NASA Technical Reports Server (NTRS)
Liu, Yuan-Kwei; Gibson, James S.; Fernquist, Alan R.
1993-01-01
This paper analyzes the Lynx real-time operating system (LynxOS), which has been selected as the operating system for the Space Station Freedom Data Management System (DMS). The features of LynxOS are compared to those of other Unix-based operating systems (OS). The tools for measuring the performance of LynxOS, which include a high-speed digital timer/counter board, a device driver program, and an application program, are analyzed. The timings for interrupt response, process creation and deletion, threads, semaphores, shared memory, and signals are measured. The memory size of the DMS Embedded Data Processor (EDP) is limited. In addition, virtual memory is not suitable for real-time applications because page swap timing may not be deterministic. Therefore, the DMS software, including LynxOS, has to fit in the main memory of an EDP. To reduce the LynxOS kernel size, the following steps are taken: analyzing the factors that influence the kernel size; identifying the modules of LynxOS that may not be needed in an EDP; adjusting the system parameters of LynxOS; reconfiguring the device drivers used in the LynxOS; and analyzing the symbol table. The reductions in kernel disk size, kernel memory size, and total kernel size from each step mentioned above are listed and analyzed.
Wakayama, Hideki; Henares, Terence G; Jigawa, Kaede; Funano, Shun-ichi; Sueyoshi, Kenji; Endo, Tatsuro; Hisamoto, Hideaki
2013-11-21
A combination of an enzyme-labeled antibody release coating and a novel fluorescent enzyme substrate-copolymerized hydrogel in a microchannel for a single-step, no-wash microfluidic immunoassay is demonstrated. This hydrogel discriminates the free enzyme-conjugated antibody from an antigen-enzyme-conjugated antibody immunocomplex based on the difference in molecular size. A selective and sensitive immunoassay, with a 10-1000 ng mL⁻¹ linear range, is reported.
Remediation of metal-contaminated urban soil using flotation technique.
Dermont, G; Bergeron, M; Richer-Laflèche, M; Mercier, G
2010-02-01
A soil washing process using froth flotation technique was evaluated for the removal of arsenic, cadmium, copper, lead, and zinc from a highly contaminated urban soil (brownfield) after crushing of the particle-size fractions >250 µm. The metal contaminants were in particulate forms and distributed in all the particle-size fractions. The particle-by-particle study with SEM-EDS showed that Zn was mainly present as sphalerite (ZnS), whereas Cu and Pb were mainly speciated as various oxide/carbonate compounds. The influence of surfactant collector type (non-ionic and anionic), collector dosage, pulp pH, a chemical activation step (sulfidization), particle size, and process time on metal removal efficiency and flotation selectivity was studied. Satisfactory results in metal recovery (42-52%), flotation selectivity (concentration factor>2.5), and volume reduction (>80%) were obtained with anionic collector (potassium amyl xanthate). The transportation mechanisms involved in the separation process (i.e., the true flotation and the mechanical entrainment) were evaluated by the pulp chemistry, the metal speciation, the metal distribution in the particle-size fractions, and the separation selectivity indices of Zn/Ca and Zn/Fe. The investigations showed that a great proportion of metal-containing particles were recovered in the froth layer by entrainment mechanism rather than by true flotation process. The non-selective entrainment mechanism of the fine particles (<20 µm) caused a flotation selectivity drop, especially with a long flotation time (>5 min) and when a high collector dose is used. The intermediate particle-size fraction (20-125 µm) showed the best flotation selectivity. Copyright 2009 Elsevier B.V. All rights reserved.
Cluster size selectivity in the product distribution of ethene dehydrogenation on niobium clusters.
Parnis, J Mark; Escobar-Cabrera, Eric; Thompson, Matthew G K; Jacula, J Paul; Lafleur, Rick D; Guevara-García, Alfredo; Martínez, Ana; Rayner, David M
2005-08-18
Ethene reactions with niobium atoms and clusters containing up to 25 constituent atoms have been studied in a fast-flow metal cluster reactor. The clusters react with ethene at about the gas-kinetic collision rate, indicating a barrierless association process as the cluster removal step. Exceptions are Nb8 and Nb10, for which a significantly diminished rate is observed, reflecting some cluster size selectivity. Analysis of the experimental primary product masses indicates dehydrogenation of ethene for all clusters save Nb10, yielding either Nb(n)C2H2 or Nb(n)C2. Over the range Nb-Nb6, the extent of dehydrogenation increases with cluster size, then decreases for larger clusters. For many clusters, secondary and tertiary product masses are also observed, showing varying degrees of dehydrogenation corresponding to net addition of C2H4, C2H2, or C2. With Nb atoms and several small clusters, formal addition of at least six ethene molecules is observed, suggesting a polymerization process may be active. Kinetic analysis of the Nb atom and several Nb(n) cluster reactions with ethene shows that the process is consistent with sequential addition of ethene units at rates corresponding approximately to the gas-kinetic collision frequency for several consecutive reacting ethene molecules. Some variation in the rate of ethene pick up is found, which likely reflects small energy barriers or steric constraints associated with individual mechanistic steps. Density functional calculations of structures of Nb clusters up to Nb(6), and the reaction products Nb(n)C2H2 and Nb(n)C2 (n = 1...6) are presented. Investigation of the thermochemistry for the dehydrogenation of ethene to form molecular hydrogen, for the Nb atom and clusters up to Nb6, demonstrates that the exergonicity of the formation of Nb(n)C2 species increases with cluster size over this range, which supports the proposal that the extent of dehydrogenation is determined primarily by thermodynamic constraints. Analysis of the structural variations present in the cluster species studied shows an increase in C-H bond lengths with cluster size that closely correlates with the increased thermodynamic drive to full dehydrogenation. This correlation strongly suggests that all steps in the reaction are barrierless, and that weakening of the C-H bonds is directly reflected in the thermodynamics of the overall dehydrogenation process. It is also demonstrated that reaction exergonicity in the initial partial dehydrogenation step must be carried through as excess internal energy into the second dehydrogenation step.
Scalable Production Method for Graphene Oxide Water Vapor Separation Membranes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fifield, Leonard S.; Shin, Yongsoon; Liu, Wei
Membranes for selective water vapor separation were assembled from graphene oxide suspension using techniques compatible with high volume industrial production. The large-diameter graphene oxide flake suspensions were synthesized from graphite materials via relatively efficient chemical oxidation steps with attention paid to maintaining flake size and achieving high graphene oxide concentrations. Graphene oxide membranes produced using scalable casting methods exhibited water vapor flux and water/nitrogen selectivity performance meeting or exceeding that of membranes produced using vacuum-assisted laboratory techniques. (PNNL-SA-117497)
Pavlopoulos, Nicholas G.; Dubose, Jeffrey T.; Hartnett, Erin D.; ...
2016-07-26
We report on a versatile synthetic route to colloidal polymers bearing dipolar Au@Co core-shell nanoparticles (NPs) in the backbone, along with semiconductor CdSe@CdS nanorod (NR) or tetrapod (TP) side-chain groups. A seven-step colloidal total synthesis enabled the preparation of well-defined colloidal comonomers composed of a dipolar Au@CoNP attached to a single CdSe@CdS NR or TP, where magnetic dipolar associations between Au@CoNP units promoted the formation of colloidal co- or terpolymers. The key step in this synthesis was the ability to photodeposit a single AuNP tip onto a CdSe@CdS NR or TP, which enables selective seeding of a dipolar CoNP onto the AuNP seed. In conclusion, we show that variation of the AuNP size directly controlled the size and dipolar character of the CoNP tip, where the size modulation of the Au and Au@CoNP tips is analogous to control of comonomer reactivity ratios in classical copolymerization processes.
Simulated space environmental effects on a polyetherimide and its carbon fiber-reinforced composites
NASA Technical Reports Server (NTRS)
Kern, Kristen T.; Stancil, Phillip C.; Harries, Wynford L.; Long, Edward R., Jr.; Thibeault, Sheila A.
1993-01-01
The selection of materials for spacecraft construction requires identification of candidate materials which can perform reliably in the space environment. Understanding the effects of the space environment on the materials is an important step in the selection of candidate materials. This work examines the effects of energetic electrons, thermal cycling, electron radiation in conjunction with thermal cycling, and atomic oxygen on a thermoplastic polyetherimide and its carbon-fiber-reinforced composites. Composite materials made with non-sized fibers as well as materials made with fibers sized with an epoxy were evaluated. The mechanical and thermomechanical properties of the materials were studied and spectroscopic techniques were used to investigate the mechanisms for the observed effects. Considerations for future material development are suggested.
A General Method for Solving Systems of Non-Linear Equations
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)
1995-01-01
The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root finding method not infrequently diverges if the starting point is far from the root. However, the current method in these regions merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root finding method since they both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for coefficients of linear equations. The current method which does not require the solution of linear equations requires more time for additional function and gradient evaluations. The classic trade off of time for space separates the two methods.
Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-11-01
(99m)Tc-methylene diphosphonate ((99m)Tc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise filtering method based on local statistics of the image produces better results than a linear filter. However, the mask size has a significant effect on image quality. In this study, we have identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set had the input image and its corresponding three processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians. They selected one good smooth image from each set of images. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences between images processed with the 5- and 7-pixel masks at a 5% cut-off. A statistically significant difference was found between images processed with the 5- and 7-pixel masks (P=0.00528). The identified optimal mask size to produce a good smooth image was found to be 7 pixels. The best mask size for the John-Sen Lee filter was found to be 7×7 pixels, which yielded (99m)Tc-MDP bone scan images with the highest acceptable smoothness.
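A local-statistics (Lee-type) filter parameterized by mask size can be sketched as below; the exact filter and parameters used in the study are not reproduced, the noise variance is crudely estimated as the median local variance, and the Poisson toy image only stands in for a count-limited bone scan.

    import numpy as np

    def lee_filter(image, mask_size=7):
        pad = mask_size // 2
        padded = np.pad(image, pad, mode="reflect")
        means = np.empty_like(image, dtype=float)
        local_var = np.empty_like(image, dtype=float)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                window = padded[i:i + mask_size, j:j + mask_size]
                means[i, j] = window.mean()
                local_var[i, j] = window.var()
        noise_var = np.median(local_var)                       # crude noise estimate
        gain = np.clip(1.0 - noise_var / np.maximum(local_var, 1e-12), 0.0, 1.0)
        return means + gain * (image - means)                  # smooth flat areas, keep edges

    rng = np.random.default_rng(1)
    noisy = rng.poisson(50, size=(64, 64)).astype(float)       # count-limited toy image
    print(lee_filter(noisy, mask_size=7).shape)

Larger masks smooth more aggressively but risk blurring clinically relevant detail, which is the trade-off the study quantifies when comparing the 3-, 5-, and 7-pixel masks.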
Strategy Guideline: HVAC Equipment Sizing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdick, A.
The heating, ventilation, and air conditioning (HVAC) system is arguably the most complex system installed in a house and is a substantial component of the total house energy use. A right-sized HVAC system will provide the desired occupant comfort and will run efficiently. This Strategy Guideline discusses the information needed to initially select the equipment for a properly designed HVAC system. Right-sizing of an HVAC system involves the selection of equipment and the design of the air distribution system to meet the accurate predicted heating and cooling loads of the house. Right-sizing the HVAC system begins with an accurate understanding of the heating and cooling loads on a space; however, a full HVAC design involves more than just the load estimate calculation - the load calculation is the first step of the iterative HVAC design procedure. This guide describes the equipment selection of a split system air conditioner and furnace for an example house in Chicago, IL as well as a heat pump system for an example house in Orlando, Florida. The required heating and cooling load information for the two example houses was developed in the Department of Energy Building America Strategy Guideline: Accurate Heating and Cooling Load Calculations.
Secrets to Writing Great Papers. The Study Smart Series.
ERIC Educational Resources Information Center
Kesselman-Turkel, Judi; Peterson, Franklynn
This book explains how to work with ideas to hone them into words, providing techniques and exercises for brainstorming, choosing the right approach, working with an unknown or boring assigned topic, and selecting the best point of view. It presents 10 steps, noting related problems: (1) "Decide on Size" (no specific length is assigned);…
Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella
2015-10-30
The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is cluster formation, cluster selection and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms.
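A minimal WkNN position estimate can be sketched as follows (not the authors' implementation). The similarity metric shown is the inverse Euclidean distance between RSS fingerprints; the paper's point is precisely that this choice, and its use in cluster formation, cluster selection, and RP weighting, can be varied and mixed. The synthetic reference points and access-point layout are illustrative assumptions.

    import numpy as np

    def wknn_position(rss, rp_fingerprints, rp_positions, k=4):
        """rss: (n_ap,), rp_fingerprints: (n_rp, n_ap), rp_positions: (n_rp, dim)."""
        dists = np.linalg.norm(rp_fingerprints - rss, axis=1)
        similarity = 1.0 / (dists + 1e-6)
        top_k = np.argsort(similarity)[-k:]
        weights = similarity[top_k] / similarity[top_k].sum()
        return weights @ rp_positions[top_k]

    rng = np.random.default_rng(0)
    rp_pos = rng.uniform(0, 20, size=(50, 3))                   # 3-D reference points
    ap_pos = rng.uniform(0, 20, size=(6, 3))                    # access points
    rp_fp = -40 - 2.0 * np.linalg.norm(rp_pos[:, None, :] - ap_pos, axis=2)
    measured = rp_fp[7] + rng.normal(0, 1.0, size=6)            # noisy fingerprint near RP 7
    print(wknn_position(measured, rp_fp, rp_pos, k=4), rp_pos[7])

In the two-step variant, the same similarity computation would first be applied to cluster representatives before restricting the search to the selected cluster's RPs.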
Quality testing of an innovative cascade separation system for multiple cell separation
NASA Astrophysics Data System (ADS)
Pierzchalski, Arkadiusz; Moszczynska, Aleksandra; Albrecht, Bernd; Heinrich, Jan-Michael; Tarnok, Attila
2012-03-01
Isolation of different cell types from mixed samples in one separation step by FACS is feasible but expensive and slow. Magnetic separation is cheaper and faster but still challenging. An innovative bead-based cascade-system (pluriSelect GmbH, Leipzig, Germany) relies on simultaneous physical separation of different cell types. It is based on antibody-mediated binding of cells to beads of different size and isolation with sieves of different mesh-size. We validated the pluriSelect system for single-parameter (CD3) and simultaneous separation of CD3 and CD15 cells from EDTA blood samples. Results were compared with those obtained by MACS (Miltenyi-Biotech) magnetic separation (CD3 separation). pluriSelect separation was done in whole blood, MACS on Ficoll gradient isolated leukocytes, according to the manufacturer's protocols. Isolated and residual cells were immunophenotyped (7-color 8-antibody panel (CD3; CD16/56; CD4; CD8; CD14; CD19; CD45; HLADR)) on a CyFlowML flow cytometer (Partec GmbH). Cell count (Coulter), purity, yield and viability (7-AAD exclusion) were determined. There were no significant differences between the two systems regarding purity (92-98%), yield (50-60%) and viability (92-98%) of isolated cells. pluriSelect separation was slightly faster than MACS (1.15 h versus 1.5 h). Moreover, no pre-enrichment steps were necessary. In conclusion, pluriSelect is a fast, simple and gentle system for efficient simultaneous separation of two cell subpopulations directly from whole blood and can provide a simple alternative to FACS. The isolated cells can be used for further research applications.
Wester, T; Borg, H; Naji, H; Stenström, P; Westbacke, G; Lilja, H E
2014-09-01
Serial transverse enteroplasty (STEP) was first described in 2003 as a method for lengthening and tapering of the bowel in short bowel syndrome. The aim of this multicentre study was to review the outcome of a Swedish cohort of children who underwent STEP. All children who had a STEP procedure at one of the four centres of paediatric surgery in Sweden between September 2005 and January 2013 were included in this observational cohort study. Demographic details, and data from the time of STEP and at follow-up were collected from the case records and analysed. Twelve patients had a total of 16 STEP procedures; four children underwent a second STEP. The first STEP was performed at a median age of 5·8 (range 0·9-19·0) months. There was no death at a median follow-up of 37·2 (range 3·0-87·5) months and no child had small bowel transplantation. Seven of the 12 children were weaned from parenteral nutrition at a median of 19·5 (range 2·3-42·9) months after STEP. STEP is a useful procedure for selected patients with short bowel syndrome and seems to facilitate weaning from parenteral nutrition. At mid-term follow-up a majority of the children had achieved enteral autonomy. The study is limited by the small sample size and lack of a control group. © 2014 The Authors. BJS published by John Wiley & Sons Ltd on behalf of BJS Society Ltd.
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng
2006-12-01
An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and squared-error variation, denoted by (e², Δe²), into a forgetting factor λ. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step-size μ. This receiver is capable of providing both fast convergence/tracking capability and small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB in bit-error-rate (BER) for multipath fading channels.
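The core idea of driving the step size from (e², Δe²) can be illustrated with a simplified variable step-size LMS identifier. The plain heuristic below (larger error, larger step within bounds) only stands in for the paper's fuzzy-inference controller, and the channel, step bounds, and smoothing factor are illustrative assumptions, not values from the paper.

    import numpy as np

    def vss_lms(x, d, n_taps=8, mu_min=0.001, mu_max=0.05, alpha=0.97):
        w = np.zeros(n_taps)
        mu, prev_e2 = mu_min, 0.0
        for n in range(n_taps, len(d)):
            u = x[n - n_taps + 1:n + 1][::-1]      # regressor, most recent sample first
            e = d[n] - w @ u
            e2, de2 = e * e, e * e - prev_e2
            # stand-in for the fuzzy mapping (e2, de2) -> step size
            mu = alpha * mu + (1 - alpha) * np.clip(e2 + abs(de2), mu_min, mu_max)
            w += mu * e * u
            prev_e2 = e2
        return w

    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    h = np.array([0.8, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])    # unknown toy channel
    d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    print(np.round(vss_lms(x, d), 2))                           # should approach h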
Zacharatos, Filimon; Karvounis, Panagiotis; Theodorakos, Ioannis; Hatziapostolou, Antonios; Zergioti, Ioanna
2018-06-19
Ag nanowire (NW) networks have exquisite optical and electrical properties which make them ideal candidate materials for flexible transparent conductive electrodes. Despite the compatibility of Ag NW networks with laser processing, few demonstrations of laser fabricated Ag NW based components currently exist. In this work, we report on a novel single step laser transferring and laser curing process of micrometer sized pixels of Ag NW networks on flexible substrates. This process relies on the selective laser heating of the Ag NWs induced by the laser pulse energy and the subsequent localized melting of the polymeric substrate. We demonstrate that a single laser pulse can induce both transfer and curing of the Ag NW network. The feasibility of the process is confirmed experimentally and validated by Finite Element Analysis simulations, which indicate that selective heating is carried out within a submicron-sized heat affected zone. The resulting structures can be utilized as fully functional flexible transparent electrodes with figures of merit even higher than 100. Low sheet resistance (<50 Ohm/sq) and high visible light transparency (>90%) make the reported process highly desirable for a variety of applications, including selective heating or annealing of nanocomposite materials and laser processing of nanostructured materials on a large variety of optically transparent substrates, such as Polydimethylsiloxane (PDMS).
Line profile analysis of ODS steels Fe20Cr5AlTiY milled powders at different Y2O3 concentrations
NASA Astrophysics Data System (ADS)
Afandi, A.; Nisa, R.; Thosin, K. A. Z.
2017-04-01
Mechanical properties of a material are largely dictated by constituent microstructure parameters such as dislocation density, lattice microstrain, crystallite size and its distribution. To develop ultra-fine grain alloys such as Oxide Dispersion Strengthened (ODS) alloys, mechanical alloying is a crucial step to introduce crystal defects and refine the crystallite size. In this research the ODS sample powders were mechanically alloyed with Y2O3 concentrations of 0.5, 1, 3, and 5 wt%, respectively. The MA process was conducted by High Energy Milling (HEM) with a ball-to-powder ratio of 15:1. The vial and the balls were made of alumina, and the milling speed was held constant at 200 r.p.m. The ODS powders were investigated by X-Ray Diffraction (XRD) in a Bragg-Brentano setup on a Rigaku SmartLab operated at 40 kV and 30 mA, with a step size of 0.02° and a scanning speed of 4° min⁻¹. Line Profile Analysis (LPA) with the classical Williamson-Hall method was carried out, with the aim of investigating the differences in crystallite size and microstrain that result from selecting the full width at half maximum (FWHM) or the integral breadth.
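The classical Williamson-Hall analysis fits beta*cos(theta) against 4*sin(theta), with the intercept giving the crystallite size (via the Scherrer relation) and the slope the microstrain. The sketch below assumes Cu K-alpha radiation, instrument-corrected breadths, and purely illustrative peak positions and widths; it is not the study's data.

    import numpy as np

    WAVELENGTH_NM = 0.15406   # Cu K-alpha
    K_SHAPE = 0.9             # Scherrer shape factor

    def williamson_hall(two_theta_deg, breadth_deg):
        theta = np.radians(np.asarray(two_theta_deg) / 2.0)
        beta = np.radians(np.asarray(breadth_deg))    # FWHM or integral breadth, in radians
        x = 4.0 * np.sin(theta)
        y = beta * np.cos(theta)
        slope, intercept = np.polyfit(x, y, 1)
        crystallite_size_nm = K_SHAPE * WAVELENGTH_NM / intercept
        microstrain = slope
        return crystallite_size_nm, microstrain

    # illustrative bcc-Fe-like peak positions and broadened widths
    two_theta = [44.7, 65.0, 82.3, 98.9]
    breadth = [0.45, 0.55, 0.70, 0.85]
    print(williamson_hall(two_theta, breadth))

Whether FWHM or integral breadth is fed into beta changes the fitted intercept and slope, which is exactly the sensitivity the abstract sets out to investigate.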
Adaptive interference cancel filter for evoked potential using high-order cumulants.
Lin, Bor-Shyh; Lin, Bor-Shing; Chong, Fok-Ching; Lai, Feipei
2004-01-01
This paper presents evoked potential (EP) processing using an adaptive interference cancel (AIC) filter with second- and higher-order cumulants. In the conventional ensemble-averaging method, experiments must be repeated many times to record the required data. Recently, the use of an AIC structure with second-order statistics for processing EPs has proved more efficient than the traditional averaging method, but it is sensitive to both the reference-signal statistics and the choice of step size. Thus, we propose a higher-order-statistics-based AIC method to overcome these disadvantages. The method was tested on somatosensory EPs corrupted with EEG. A gradient-type algorithm is used in the AIC method. Comparisons among AIC filters based on second-, third-, and fourth-order statistics are also presented in this paper. We observed that the AIC filter with third-order statistics has better convergence performance for EP processing and is not sensitive to the selection of the step size and reference input.
McGarvey, Daniel J.; Falke, Jeffrey A.; Li, Hiram W.; Li, Judith; Hauer, F. Richard; Lamberti, G.A.
2017-01-01
Methods to sample fishes in stream ecosystems and to analyze the raw data, focusing primarily on assemblage-level (all fish species combined) analyses, are presented in this chapter. We begin with guidance on sample site selection, permitting for fish collection, and information-gathering steps to be completed prior to conducting fieldwork. Basic sampling methods (visual surveying, electrofishing, and seining) are presented with specific instructions for estimating population sizes via visual, capture-recapture, and depletion surveys, in addition to new guidance on environmental DNA (eDNA) methods. Steps to process fish specimens in the field including the use of anesthesia and preservation of whole specimens or tissue samples (for genetic or stable isotope analysis) are also presented. Data analysis methods include characterization of size-structure within populations, estimation of species richness and diversity, and application of fish functional traits. We conclude with three advanced topics in assemblage-level analysis: multidimensional scaling (MDS), ecological networks, and loop analysis.
Yang, Cheng-Huei; Luo, Ching-Hsing; Yang, Cheng-Hong; Chuang, Li-Yeh
2004-01-01
Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, including mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as an adaptive communication device for disabled persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool. This restriction is a major hindrance. Therefore, a switch-adaptive automatic recognition method with a high recognition rate is needed. The proposed system combines counter-propagation networks with a variable degree variable step size LMS algorithm. It is divided into five stages: space recognition, tone recognition, learning process, adaptive processing, and character recognition. Statistical analyses demonstrated that the proposed method achieved a better recognition rate than alternative methods in the literature.
Schuler, Friedrich; Schwemmer, Frank; Trotter, Martin; Wadle, Simon; Zengerle, Roland; von Stetten, Felix; Paust, Nils
2015-07-07
Aqueous microdroplets provide miniaturized reaction compartments for numerous chemical, biochemical or pharmaceutical applications. We introduce centrifugal step emulsification for the fast and easy production of monodisperse droplets. Homogenous droplets with pre-selectable diameters in a range from 120 μm to 170 μm were generated with coefficients of variation of 2-4% and zero run-in time or dead volume. The droplet diameter depends on the nozzle geometry (depth, width, and step size) and interfacial tensions only. Droplet size is demonstrated to be independent of the dispersed phase flow rate between 0.01 and 1 μl s(-1), proving the robustness of the centrifugal approach. Centrifugal step emulsification can easily be combined with existing centrifugal microfluidic unit operations, is compatible to scalable manufacturing technologies such as thermoforming or injection moulding and enables fast emulsification (>500 droplets per second and nozzle) with minimal handling effort (2-3 pipetting steps). The centrifugal microfluidic droplet generation was used to perform the first digital droplet recombinase polymerase amplification (ddRPA). It was used for absolute quantification of Listeria monocytogenes DNA concentration standards with a total analysis time below 30 min. Compared to digital droplet polymerase chain reaction (ddPCR), with processing times of about 2 hours, the overall processing time of digital analysis was reduced by more than a factor of 4.
An improved maximum power point tracking method for a photovoltaic system
NASA Astrophysics Data System (ADS)
Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes
2016-06-01
In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for a photovoltaic (PV) system is proposed. To achieve simultaneously a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. An algorithm was then proposed to address the wrong decisions that may be made at an abrupt change of the irradiation. The proposed auto-scaling variable step-size approach was compared with various other approaches from the literature, such as the classical fixed step-size, variable step-size and a recent auto-scaling variable step-size maximum power point tracking approach. The simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.
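A generic variable step-size perturb-and-observe tracker is sketched below to make the idea concrete: the duty-cycle perturbation is scaled by the magnitude of the last power change, bounded between a minimum and maximum step. The scaling rule, bounds, and the toy PV/converter model are illustrative assumptions and not the authors' exact scaling function or irradiation-change handling.

    def vss_po_mppt(measure_pv, duty0=0.5, n_steps=200,
                    scale=0.5, d_min=0.001, d_max=0.05):
        """measure_pv(duty) -> (voltage, power). Returns the final duty cycle."""
        duty = duty0
        _, p_prev = measure_pv(duty)
        direction, dp = 1.0, 1.0
        for _ in range(n_steps):
            step = min(max(scale * abs(dp), d_min), d_max)   # auto-scaled perturbation
            duty = min(max(duty + direction * step, 0.0), 1.0)
            _, p = measure_pv(duty)
            dp = p - p_prev
            if dp < 0:                    # power dropped: reverse perturbation direction
                direction = -direction
            p_prev = p
        return duty

    # toy PV/converter model with a power maximum around duty = 0.62
    def toy_pv(duty):
        power = max(0.0, 1.0 - 12.0 * (duty - 0.62) ** 2)
        return 30.0 * duty, power

    print(round(vss_po_mppt(toy_pv), 3))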
One size fits all electronics for insole-based activity monitoring.
Hegde, Nagaraj; Bries, Matthew; Melanson, Edward; Sazonov, Edward
2017-07-01
Footwear-based wearable sensors are becoming prominent in many areas of monitoring health and wellness, such as gait and activity monitoring. In our previous research we introduced an insole-based wearable system, SmartStep, which is completely integrated in a socially acceptable package. From a manufacturing perspective, SmartStep's electronics had to be custom made for each shoe size, greatly complicating the manufacturing process. In this work we explore the possibility of making a universal electronics platform for SmartStep - SmartStep 3.0 - which can be used in the most common insole sizes without modification. A pilot human-subject experiment was run to compare the accuracy of the one-size-fits-all SmartStep 3.0 with that of the custom-sized SmartStep 2.0. A total of ~10 hours of data was collected in the pilot study involving three participants performing different activities of daily living while wearing SmartStep 2.0 and SmartStep 3.0. Leave-one-out cross-validation resulted in 98.5% average accuracy for SmartStep 2.0, while SmartStep 3.0 resulted in 98.3% accuracy, suggesting that SmartStep 3.0 can be as accurate as SmartStep 2.0 while fitting the most common shoe sizes.
Selectively Sized Graphene-Based Nanopores for in Situ Single Molecule Sensing
2015-01-01
The use of nanopore biosensors is set to be extremely important in developing precise single molecule detectors and providing highly sensitive advanced analysis of biological molecules. The precise tailoring of nanopore size is a significant step toward achieving this, as it would allow a nanopore to be tuned to a corresponding analyte. The work presented here details a methodology for selectively opening nanopores in real time. The tunable nanopores on a quartz nanopipette platform are fabricated using the electroetching of a graphene-based membrane constructed from individual graphene nanoflakes (ø ∼30 nm). The device design allows for in situ opening of the graphene membrane, from fully closed to fully opened (ø ∼25 nm), a feature that has yet to be reported in the literature. The translocation of DNA is studied as the pore size is varied, allowing subfeatures of DNA to be detected with slower DNA translocations at smaller pore sizes, and the ability to observe trends as the pore is opened. This approach opens the door to creating a device that can be targeted to detect specific analytes. PMID:26204996
NMR diffusion simulation based on conditional random walk.
Gudbjartsson, H; Patz, S
1995-01-01
The authors introduce here a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference method (FD), the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
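For contrast with the time-step-independent approach described above, a conventional fixed-time-step Monte Carlo simulation of free diffusion in a constant gradient is sketched below and compared with the analytic attenuation exp(-(gamma*G)^2 * D * T^3 / 3). The conditional-random-walk method itself is not reproduced, and the physical parameters are illustrative assumptions chosen to give a visible attenuation.

    import numpy as np

    def mc_attenuation(D, gamma_g, T, n_steps, n_spins=20000, seed=0):
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        x = np.zeros(n_spins)
        phase = np.zeros(n_spins)
        for _ in range(n_steps):
            x += np.sqrt(2 * D * dt) * rng.standard_normal(n_spins)   # free 1-D diffusion
            phase += gamma_g * x * dt                                  # precession in the gradient
        return abs(np.mean(np.exp(1j * phase)))

    D, gamma_g, T = 2.0e-9, 1.4e7, 0.02     # m^2/s, gamma*G lumped in rad s^-1 m^-1, s
    analytic = np.exp(-(gamma_g ** 2) * D * T ** 3 / 3.0)
    print(mc_attenuation(D, gamma_g, T, n_steps=200), analytic)

Halving n_steps shifts the Monte Carlo result slightly, which is exactly the time-step dependence the paper's method is designed to remove.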
Seo, Hogyu David; Lee, Daeyoup
2018-05-15
Random mutagenesis of a target gene is commonly used to identify mutations that yield the desired phenotype. Of the methods that may be used to achieve random mutagenesis, error-prone PCR is a convenient and efficient strategy for generating a diverse pool of mutants (i.e., a mutant library). Error-prone PCR is the method of choice when a researcher seeks to mutate a pre-defined region, such as the coding region of a gene while leaving other genomic regions unaffected. After the mutant library is amplified by error-prone PCR, it must be cloned into a suitable plasmid. The size of the library generated by error-prone PCR is constrained by the efficiency of the cloning step. However, in the fission yeast, Schizosaccharomyces pombe, the cloning step can be replaced by the use of a highly efficient one-step fusion PCR to generate constructs for transformation. Mutants of desired phenotypes may then be selected using appropriate reporters. Here, we describe this strategy in detail, taking as an example, a reporter inserted at centromeric heterochromatin.
Fabrication of Large Bulk High Temperature Superconducting Articles
NASA Technical Reports Server (NTRS)
Koczor, Ronald (Inventor); Hiser, Robert A. (Inventor)
2003-01-01
A method of fabricating large bulk high temperature superconducting articles which comprises the steps of selecting predetermined sizes of crystalline superconducting materials and mixing these specific sizes of particles into a homogeneous mixture which is then poured into a die. The die is placed in a press and pressurized to predetermined pressure for a predetermined time and is heat treated in the furnace at predetermined temperatures for a predetermined time. The article is left in the furnace to soak at predetermined temperatures for a predetermined period of time and is oxygenated by an oxygen source during the soaking period.
Cellular packing, mechanical stress and the evolution of multicellularity
NASA Astrophysics Data System (ADS)
Jacobeen, Shane; Pentz, Jennifer T.; Graba, Elyes C.; Brandys, Colin G.; Ratcliff, William C.; Yunker, Peter J.
2018-03-01
The evolution of multicellularity set the stage for sustained increases in organismal complexity [1-5]. However, a fundamental aspect of this transition remains largely unknown: how do simple clusters of cells evolve increased size when confronted by forces capable of breaking intracellular bonds? Here we show that multicellular snowflake yeast clusters [6-8] fracture due to crowding-induced mechanical stress. Over seven weeks (~291 generations) of daily selection for large size, snowflake clusters evolve to increase their radius 1.7-fold by reducing the accumulation of internal stress. During this period, cells within the clusters evolve to be more elongated, concomitant with a decrease in the cellular volume fraction of the clusters. The associated increase in free space reduces the internal stress caused by cellular growth, thus delaying fracture and increasing cluster size. This work demonstrates how readily natural selection finds simple, physical solutions to spatial constraints that limit the evolution of group size—a fundamental step in the evolution of multicellularity.
Connectivity among subpopulations of Louisiana black bears as estimated by a step selection function
Clark, Joseph D.; Jared S. Laufenberg,; Maria Davidson,; Jennifer L. Murrow,
2015-01-01
Habitat fragmentation is a fundamental cause of population decline and increased risk of extinction for many wildlife species; animals with large home ranges and small population sizes are particularly sensitive. The Louisiana black bear (Ursus americanus luteolus) exists only in small, isolated subpopulations as a result of land clearing for agriculture, but the relative potential for inter-subpopulation movement by Louisiana black bears has not been quantified, nor have characteristics of effective travel routes between habitat fragments been identified. We placed and monitored global positioning system (GPS) radio collars on 8 female and 23 male bears located in 4 subpopulations in Louisiana, which included a reintroduced subpopulation located between 2 of the remnant subpopulations. We compared characteristics of sequential radiolocations of bears (i.e., steps) with steps that were possible but not chosen by the bears to develop step selection function models based on conditional logistic regression. The probability of a step being selected by a bear increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. To characterize connectivity among subpopulations, we used the step selection models to create 4,000 hypothetical correlated random walks for each subpopulation representing potential dispersal events to estimate the proportion that intersected adjacent subpopulations (hereafter referred to as successful dispersals). Based on the models, movement paths for males intersected all adjacent subpopulations but paths for females intersected only the most proximate subpopulations. Cross-validation and genetic and independent observation data supported our findings. Our models also revealed that successful dispersals were facilitated by a reintroduced population located between 2 distant subpopulations. Successful dispersals for males were dependent on natural land cover in private ownership. The addition of hypothetical 1,000-m- or 3,000-m-wide corridors between the 4 study areas had minimal effects on connectivity among subpopulations. For females, our model suggested that habitat between subpopulations would probably have to be permanently occupied for demographic rescue to occur. Thus, the establishment of stepping-stone populations, such as the reintroduced population that we studied, may be a more effective conservation measure than long corridors without a population presence in between.
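The step-selection fitting described above compares each used step with the alternative steps available at the same time via a conditional-logistic likelihood. The minimal sketch below fits such coefficients by gradient ascent on simulated strata; the two covariates, their effect sizes, and the number of available steps per stratum are illustrative assumptions, not the bear data or the authors' model.

    import numpy as np

    def ssf_log_likelihood(beta, strata):
        """strata: list of (X, used_idx); X rows are the candidate steps' covariates."""
        ll = 0.0
        for X, used_idx in strata:
            scores = X @ beta
            m = scores.max()
            ll += scores[used_idx] - m - np.log(np.sum(np.exp(scores - m)))
        return ll

    rng = np.random.default_rng(0)
    true_beta = np.array([-1.0, 0.8])          # e.g. distance-to-natural-cover (-), distance-to-road (+)
    strata = []
    for _ in range(300):
        X = rng.standard_normal((11, 2))       # 1 used + 10 available candidate steps
        p = np.exp(X @ true_beta); p /= p.sum()
        strata.append((X, rng.choice(len(X), p=p)))

    beta = np.zeros(2)
    for _ in range(2000):                       # crude gradient ascent on the concave likelihood
        grad = np.zeros(2)
        for X, used_idx in strata:
            p = np.exp(X @ beta); p /= p.sum()
            grad += X[used_idx] - p @ X
        beta += 0.001 * grad
    print(np.round(beta, 2), round(ssf_log_likelihood(beta, strata), 1))

Once coefficients are fit, simulated correlated random walks can be scored step by step with the same model to estimate how often hypothetical dispersal paths reach an adjacent subpopulation.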
N-terminus of Cardiac Myosin Essential Light Chain Modulates Myosin Step-Size
Wang, Yihua; Ajtai, Katalin; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta; Burghardt, Thomas P.
2016-01-01
Muscle myosin cyclically hydrolyzes ATP to translate actin. Ventricular cardiac myosin (βmys) moves actin with three distinct unitary step-sizes resulting from its lever-arm rotation and with step-frequencies that are modulated in a myosin regulation mechanism. The lever-arm associated essential light chain (vELC) binds actin by its 43 residue N-terminal extension. Unitary steps were proposed to involve the vELC N-terminal extension with the 8 nm step engaging the vELC/actin bond facilitating an extra ~19 degrees of lever-arm rotation while the predominant 5 nm step forgoes vELC/actin binding. A minor 3 nm step is the unlikely conversion of the completed 5 to the 8 nm step. This hypothesis was tested using a 17 residue N-terminal truncated vELC in porcine βmys (Δ17βmys) and a 43 residue N-terminal truncated human vELC expressed in transgenic mouse heart (Δ43αmys). Step-size and step-frequency were measured using the Qdot motility assay. Both Δ17βmys and Δ43αmys had significantly increased 5 nm step-frequency and coincident loss in the 8 nm step-frequency compared to native proteins suggesting the vELC/actin interaction drives step-size preference. Step-size and step-frequency probability densities depend on the relative fraction of truncated vELC and relate linearly to pure myosin species concentrations in a mixture containing native vELC homodimer, two truncated vELCs in the modified homodimer, and one native and one truncated vELC in the heterodimer. Step-size and step-frequency, measured for native homodimer and at two or more known relative fractions of truncated vELC, are surmised for each pure species by using a new analytical method. PMID:26671638
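The statement that step-size and step-frequency densities relate linearly to the pure-species concentrations suggests a simple linear-unmixing picture: densities measured at several known mixture fractions can be decomposed into pure-species densities by least squares. The sketch below is only that generic illustration, not the paper's analytical method, and the fractions and densities are simulated.

    import numpy as np

    rng = np.random.default_rng(0)
    bins = 40
    pure = rng.dirichlet(np.ones(bins), size=3)          # 3 unknown pure-species densities
    # assumed species fractions (native homodimer, heterodimer, truncated homodimer) in 4 mixtures
    fractions = np.array([[1.00, 0.00, 0.00],
                          [0.49, 0.42, 0.09],
                          [0.25, 0.50, 0.25],
                          [0.04, 0.32, 0.64]])
    measured = fractions @ pure + 0.002 * rng.standard_normal((4, bins))

    # least-squares estimate of the pure-species densities from the measured mixtures
    recovered, *_ = np.linalg.lstsq(fractions, measured, rcond=None)
    print(np.max(np.abs(recovered - pure)))              # small residual recovery error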
Sharma, Deepti; Lee, Jongmin; Seo, Junyoung; Shin, Heungjoo
2017-01-01
We developed a versatile and highly sensitive biosensor platform. The platform is based on electrochemical-enzymatic redox cycling induced by selective enzyme immobilization on nano-sized carbon interdigitated electrodes (IDEs) decorated with gold nanoparticles (AuNPs). Without resorting to sophisticated nanofabrication technologies, we used batch wafer-level carbon microelectromechanical systems (C-MEMS) processes to fabricate 3D carbon IDEs reproducibly, simply, and cost effectively. In addition, AuNPs were selectively electrodeposited on specific carbon nanoelectrodes; the high surface-to-volume ratio and fast electron transfer ability of AuNPs enhanced the electrochemical signal across these carbon IDEs. Gold nanoparticle characteristics such as size and morphology were reproducibly controlled by modulating the step-potential and time period in the electrodeposition processes. To detect cholesterol selectively using AuNP/carbon IDEs, cholesterol oxidase (ChOx) was selectively immobilized via the electrochemical reduction of the diazonium cation. The sensitivity of the AuNP/carbon IDE-based biosensor was ensured by efficient amplification of the redox mediators, ferricyanide and ferrocyanide, between selectively immobilized enzyme sites and both combs of the AuNP/carbon IDEs. The presented AuNP/carbon IDE-based cholesterol biosensor exhibited a wide sensing range (0.005–10 mM) and high sensitivity (~993.91 µA mM−1 cm−2; limit of detection (LOD) ~1.28 µM). In addition, the proposed cholesterol biosensor was found to be highly selective for cholesterol detection. PMID:28914766
Benito-Peña, Elena; Navarro-Villoslada, Fernando; Carrasco, Sergio; Jockusch, Steffen; Ottaviani, M Francesca; Moreno-Bondi, Maria C
2015-05-27
The effect of the cross-linker on the shape and size of molecularly imprinted polymer (MIP) beads prepared by precipitation polymerization has been evaluated using a chemometric approach. Molecularly imprinted microspheres for the selective recognition of fluoroquinolone antimicrobials were prepared in a one-step precipitation polymerization procedure using enrofloxacin (ENR) as the template molecule, methacrylic acid as functional monomer, 2-hydroxyethyl methacrylate as hydrophilic comonomer, and acetonitrile as the porogen. The type and amount of cross-linker (ethylene glycol dimethacrylate, divinylbenzene or trimethylolpropane trimethacrylate) required to obtain monodisperse spherical MIP beads in the micrometer range were optimized using a simplex lattice design. Particle size and morphology were assessed by scanning electron microscopy, dynamic light scattering, and nitrogen adsorption measurements. Electron paramagnetic resonance spectroscopy in conjunction with a nitroxide as spin probe revealed information about the microviscosity and polarity of the binding sites in imprinted and nonimprinted polymer beads.
NASA Astrophysics Data System (ADS)
Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie
2016-01-01
It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details and the precise evolution is still a matter of debate since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming in addition to properly selecting representative morphological samples.Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 × 10^5) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications and we split the sample not only by disk- and bulge-dominated galaxies but also in finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric Maximum Likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.
NASA Astrophysics Data System (ADS)
Raut, Suyog A.; Mutadak, Pallavi R.; Kumar, Shiv; Kanhe, Nilesh S.; Huprikar, Sameer; Pol, Harshawardhan V.; Phase, Deodatta M.; Bhoraskar, Sudha V.; Mathe, Vikas L.
2018-03-01
In this paper we report a single-step, large-scale synthesis of highly crystalline iron oxide nanoparticles, viz. magnetite (Fe3O4) and maghemite (γ-Fe2O3), via a gas phase condensation process, where micron-sized iron metal powder was used as the precursor. Selective phases of iron oxide were obtained by varying the gas flow rate of oxygen, and hence the partial pressure of oxygen, inside the plasma reactor. Most of the particles were found to possess an average crystallite size of about 20-30 nm. The DC magnetization curves recorded indicate the almost super-paramagnetic nature of the iron oxide magnetic nanoparticles. Further, the iron oxide nanoparticles were analyzed using Raman spectroscopy, X-ray photoelectron spectroscopy and Mossbauer spectroscopy. In order to explore the feasibility of these nanoparticles for magnetic damper applications, rheological studies have been carried out and compared with commercially available Carbonyl Iron (CI) particles. The nanoparticles obtained by the thermal plasma route show improved dispersion, which is useful for rheological applications.
Biofuel manufacturing from woody biomass: effects of sieve size used in biomass size reduction.
Zhang, Meng; Song, Xiaoxu; Deines, T W; Pei, Z J; Wang, Donghai
2012-01-01
Size reduction is the first step for manufacturing biofuels from woody biomass. It is usually performed using milling machines and the particle size is controlled by the size of the sieve installed on a milling machine. There are reported studies about the effects of sieve size on energy consumption in milling of woody biomass. These studies show that energy consumption increased dramatically as sieve size became smaller. However, in these studies, the sugar yield (proportional to biofuel yield) in hydrolysis of the milled woody biomass was not measured. The lack of comprehensive studies about the effects of sieve size on energy consumption in biomass milling and sugar yield in hydrolysis process makes it difficult to decide which sieve size should be selected in order to minimize the energy consumption in size reduction and maximize the sugar yield in hydrolysis. The purpose of this paper is to fill this gap in the literature. In this paper, knife milling of poplar wood was conducted using sieves of three sizes (1, 2, and 4 mm). Results show that, as sieve size increased, energy consumption in knife milling decreased and sugar yield in hydrolysis increased in the tested range of particle sizes.
Steepest descent method implementation on unconstrained optimization problem using C++ program
NASA Astrophysics Data System (ADS)
Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.
2018-03-01
Steepest descent is known as the simplest gradient method. Recently, much research has been done on obtaining an appropriate step size in order to reduce the objective function value progressively. In this paper, the properties of the steepest descent method reported in the literature are reviewed together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method with respect to its step size procedure is discussed. In order to test the performance of each step size, we ran a steepest descent procedure in a C++ program. We applied it to an unconstrained optimization test problem with two variables and compared the numerical results of each step size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure for each problem case.
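As a concrete illustration of how a step size procedure enters the steepest descent iteration, the following sketch contrasts a fixed step size with a backtracking (Armijo) line search on a standard two-variable test problem. The paper's implementation is in C++; this Python sketch uses generic textbook rules and is not a reproduction of the procedures compared in the paper.

```python
import numpy as np

def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

def steepest_descent(f, grad, x0, step_rule="armijo", alpha0=1e-3,
                     tol=1e-6, max_iter=50000):
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # stop criterion on the gradient norm
            break
        d = -g                               # steepest descent direction
        if step_rule == "fixed":
            alpha = alpha0
        else:                                # backtracking (Armijo) line search
            alpha, c, rho = 1.0, 1e-4, 0.5
            while f(x + alpha * d) > f(x) + c * alpha * np.dot(g, d):
                alpha *= rho
        x = x + alpha * d
    return x, k

x_star, iters = steepest_descent(rosenbrock, rosenbrock_grad, [-1.2, 1.0])
print(x_star, iters)
```

Other rules (exact line search, Barzilai-Borwein) can be dropped into the same loop in place of the two shown here.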
Technical pitfalls and tips for the valve-in-valve procedure
2017-01-01
Transcatheter aortic valve implantation (TAVI) has emerged as a viable treatment modality for patients with severe aortic valve stenosis and multiple co-morbidities. More recent indications include the use of transcatheter heart valves (THV) to treat degenerated bioprosthetic surgical heart valves (SHV), which are failing due to stenosis or regurgitation. Valve-in-valve (VIV) procedures in the aortic position have been performed with a variety of THV devices, although the balloon-expandable SAPIEN valve platform (Edwards Lifesciences Ltd, Irvine, CA, USA) and the self-expandable CoreValve platform (Medtronic Inc., MN, USA) have been used in the majority of patients. VIV treatment is appealing as it is less invasive than conventional surgery, but optimal patient selection is vital to avoid complications such as malposition, residual high gradients and coronary obstruction. To minimize the risk of complications, thorough procedural planning is critical. The first step is identification of the degenerated SHV, including its model, size, and fluoroscopic appearance. Although label size and stent internal diameter (ID) are provided by the manufacturer, it is important to note the true ID. The true ID is the ID of a SHV after the leaflets are mounted and helps determine the optimal size of the THV. The second step is to determine the type and size of the THV. Although this is determined in the majority of cases by user preference, in certain situations one THV may be more suitable than another. As the procedure is performed under fluoroscopy, the third step is to become familiarized with the fluoroscopic appearance of both the SHV and the THV. This helps to determine the landmarks for optimal positioning, which in turn determines the gradients and fixation. The fourth step is to assess the risk of coronary obstruction. This is performed with either aortic root angiography or ECG-gated computerised tomography (CT). Finally, the route of approach must be carefully planned. Once these aspects are addressed, the procedure can be performed efficiently with a low risk of complications. PMID:29062752
NASA Technical Reports Server (NTRS)
1989-01-01
The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex (PTC) at Marshall Space Flight Center (MSFC). The PTC will train the space station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. In the first step of this task, a methodology was developed to ensure that all relevant design dimensions were addressed, and that all feasible designs could be considered. The development effort yielded the following method for generating and comparing designs in task 4: (1) Extract SCS system requirements (functions) from the system specification; (2) Develop design evaluation criteria; (3) Identify system architectural dimensions relevant to SCS system designs; (4) Develop conceptual designs based on the system requirements and architectural dimensions identified in step 1 and step 3 above; (5) Evaluate the designs with respect to the design evaluation criteria developed in step 2 above. The results of the method detailed in the above 5 steps are discussed. The results of the task 4 work provide the set of designs from which two or three candidate designs are to be selected by MSFC as input to task 5 (refine SCS conceptual designs). The designs selected for refinement will be developed to a lower level of detail, and further analyses will be done to begin to determine the size and speed of the components required to implement these designs.
A two-step initial mass function:. Consequences of clustered star formation for binary properties
NASA Astrophysics Data System (ADS)
Durisen, R. H.; Sterzik, M. F.; Pickett, B. K.
2001-06-01
If stars originate in transient bound clusters of moderate size, these clusters will decay due to dynamic interactions in which a hard binary forms and ejects most or all the other stars. When the cluster members are chosen at random from a reasonable initial mass function (IMF), the resulting binary characteristics do not match current observations. We find a significant improvement in the trends of binary properties from this scenario when an additional constraint is taken into account, namely that there is a distribution of total cluster masses set by the masses of the cloud cores from which the clusters form. Two distinct steps then determine final stellar masses - the choice of a cluster mass and the formation of the individual stars. We refer to this as a ``two-step'' IMF. Simple statistical arguments are used in this paper to show that a two-step IMF, combined with typical results from dynamic few-body system decay, tends to give better agreement between computed binary characteristics and observations than a one-step mass selection process.
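The two-step idea can be illustrated with a small sampling sketch: first draw a cluster (core) mass from an assumed cluster-mass distribution, then populate the cluster with stars drawn from an assumed IMF until the cluster mass is exhausted, so that the most massive members are statistically limited by the cluster mass. The power-law forms, mass limits, and exponents below are placeholders and are not the distributions used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_power_law(alpha, m_lo, m_hi, size):
    """Draw from p(m) proportional to m^(-alpha) on [m_lo, m_hi] by inverse transform."""
    u = rng.random(size)
    a = 1.0 - alpha
    return (m_lo**a + u * (m_hi**a - m_lo**a))**(1.0 / a)

def two_step_cluster(cluster_alpha=1.7, imf_alpha=2.35,
                     m_cl_lo=1.0, m_cl_hi=50.0, m_star_lo=0.1, m_star_hi=10.0):
    """Step 1: choose a cluster mass.  Step 2: fill it with stars drawn from the IMF."""
    m_cluster = sample_power_law(cluster_alpha, m_cl_lo, m_cl_hi, 1)[0]
    stars = []
    while sum(stars) < m_cluster:
        stars.append(sample_power_law(imf_alpha, m_star_lo, m_star_hi, 1)[0])
    return m_cluster, np.array(stars)

m_cl, members = two_step_cluster()
print(f"cluster mass {m_cl:.2f} Msun, {members.size} members, "
      f"most massive: {np.sort(members)[-2:]}")
```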
In Vitro and In Vivo Single Myosin Step-Sizes in Striated Muscle a
Burghardt, Thomas P.; Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin
2016-01-01
Myosin in muscle transduces ATP free energy into the mechanical work of moving actin. It has a motor domain transducer containing ATP and actin binding sites, and mechanical elements coupling motor impulse to the myosin filament backbone, providing transduction/mechanical coupling. The mechanical coupler is a lever-arm stabilized by bound essential and regulatory light chains. The lever-arm rotates cyclically to impel bound filamentous actin. Linear actin displacement due to lever-arm rotation is the myosin step-size. A high-throughput quantum dot labeled actin in vitro motility assay (Qdot assay) measures motor step-size in the context of an ensemble of actomyosin interactions. The ensemble context imposes a constant velocity constraint for myosins interacting with one actin filament. In a cardiac myosin producing multiple step-sizes, a “second characterization” is step-frequency, which adjusts a longer step-size to a lower frequency, maintaining a linear actin velocity identical to that from a shorter step-size, higher-frequency actomyosin cycle. The step-frequency characteristic involves and integrates myosin enzyme kinetics, mechanical strain, and other ensemble-affected characteristics. The high-throughput Qdot assay suits a new paradigm calling for wide surveillance of the vast number of disease- or aging-relevant myosin isoforms, in contrast with the alternative model calling for exhaustive research on a tiny subset of myosin forms. The zebrafish embryo assay (Z assay) performs single myosin step-size and step-frequency assaying in vivo, combining single myosin mechanical and whole muscle physiological characterizations in one model organism. The Qdot and Z assays cover “bottom-up” and “top-down” assaying of myosin characteristics. PMID:26728749
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
Jablonski, Paul D.; Larbalestier, David C.
1993-01-01
Superconductors formed by powder metallurgy have a matrix of niobium-titanium alloy with discrete pinning centers distributed therein which are formed of a compatible metal. The artificial pinning centers in the Nb-Ti matrix are reduced in size by processing steps to sizes on the order of the coherence length, typically in the range of 1 to 10 nm. To produce the superconductor, powders of body centered cubic Nb-Ti alloy and the second phase flux pinning material, such as Nb, are mixed in the desired percentages. The mixture is then isostatically pressed, sintered at a selected temperature and selected time to produce a cohesive structure having desired characteristics without undue chemical reaction, the sintered billet is reduced in size by deformation, such as by swaging, the swaged sample receives heat treatment and recrystallization and additional swaging, if necessary, and is then sheathed in a normal conducting sheath, and the sheathed material is drawn into a wire. The resulting superconducting wire has second phase flux pinning centers distributed therein which provide enhanced J_ct due to the flux pinning effects.
Statistical Analyses of Femur Parameters for Designing Anatomical Plates.
Wang, Lin; He, Kunjin; Chen, Zhengming
2016-01-01
Femur parameters are key prerequisites for scientifically designing anatomical plates. Meanwhile, individual differences in femurs present a challenge to designing well-fitting anatomical plates. Therefore, to design anatomical plates more scientifically, analyses of femur parameters with statistical methods were performed in this study. The specific steps were as follows. First, taking eight anatomical femur parameters as variables, 100 femur samples were classified into three classes with factor analysis and Q-type cluster analysis. Second, based on the mean parameter values of the three classes of femurs, three sizes of average anatomical plates corresponding to the three classes of femurs were designed. Finally, based on Bayes discriminant analysis, a new femur could be assigned to the proper class. Thereafter, the average anatomical plate suitable for that new femur was selected from the three available sizes of plates. Experimental results showed that the classification of femurs was quite reasonable based on the anatomical aspects of the femurs. For instance, three sizes of condylar buttress plates were designed. Meanwhile, 20 new femurs were assigned to the classes to which they belong, and suitable condylar buttress plates were then determined and selected.
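The three statistical steps can be mocked up in a few lines. In this sketch, purely for illustration, principal component analysis and agglomerative clustering stand in for the factor analysis and Q-type cluster analysis, and scikit-learn's linear discriminant analysis stands in for the Bayes discriminant step; the data are synthetic placeholders, not femur measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
femurs = rng.normal(size=(100, 8))          # 100 samples x 8 anatomical parameters

# Step 1: reduce the 8 correlated parameters to a few factors, then cluster into 3 classes
factors = PCA(n_components=3).fit_transform(femurs)
classes = AgglomerativeClustering(n_clusters=3).fit_predict(factors)

# Step 2: one "average" plate per class would be designed from the class mean parameters
class_means = np.vstack([femurs[classes == k].mean(axis=0) for k in range(3)])

# Step 3: assign a new femur to a class, i.e. pick the plate size to use
clf = LinearDiscriminantAnalysis().fit(femurs, classes)
new_femur = rng.normal(size=(1, 8))
print("recommended plate class:", clf.predict(new_femur)[0])
```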
Process Parameters Optimization in Single Point Incremental Forming
NASA Astrophysics Data System (ADS)
Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh
2016-04-01
This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array selected on the basis of DOF. The tests have been carried out on a vertical machining center (DMC70V) using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius and three levels of sheet thickness, step size, tool rotational speed, feed rate and lubrication have been considered as the input process parameters. Wall angle and surface roughness have been considered as process responses. The influential process parameters for the formability and surface roughness have been identified with the help of statistical tools (response table, main effect plot and ANOVA). The parameter that has the utmost influence on formability and surface roughness is lubrication. For formability, lubrication, followed by tool rotational speed, feed rate, sheet thickness, step size and tool radius, has influence in descending order, whereas for surface roughness, lubrication, followed by feed rate, step size, tool radius, sheet thickness and tool rotational speed, has influence in descending order. The predicted optimal values for the wall angle and surface roughness are found to be 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice and the values of wall angle and surface roughness were found to be 85.76° and 1.15 µm, respectively.
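The response-table analysis behind such a ranking can be sketched as follows, using the standard Taguchi signal-to-noise definitions (larger-the-better for wall angle, smaller-the-better for roughness). The factor levels and response values below are invented for illustration and are not the L18 data of the study.

```python
import numpy as np
import pandas as pd

# hypothetical subset of an L18-style experiment (levels and responses are made up)
df = pd.DataFrame({
    "lubrication": ["dry", "oil", "dry", "oil", "dry", "oil"],
    "step_size":   [0.2, 0.2, 0.5, 0.5, 1.0, 1.0],       # mm
    "wall_angle":  [82.0, 86.5, 80.5, 85.0, 78.0, 83.5],  # deg, larger is better
    "roughness":   [1.6, 1.1, 1.9, 1.3, 2.4, 1.5],        # um, smaller is better
})

df["sn_angle"] = -10 * np.log10(1.0 / df["wall_angle"] ** 2)   # larger-the-better S/N
df["sn_rough"] = -10 * np.log10(df["roughness"] ** 2)          # smaller-the-better S/N

# response table: mean S/N per factor level; the largest range marks the most influential factor
for factor in ["lubrication", "step_size"]:
    means = df.groupby(factor)[["sn_angle", "sn_rough"]].mean()
    print(factor, "\n", means, "\n range:", (means.max() - means.min()).values)
```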
Porous Architecture of SPS Thick YSZ Coatings Structured at the Nanometer Scale (~50 nm)
NASA Astrophysics Data System (ADS)
Bacciochini, Antoine; Montavon, Ghislain; Ilavsky, Jan; Denoirjean, Alain; Fauchais, Pierre
2010-01-01
Suspension plasma spraying (SPS) is a fairly recent technology that is able to process sub-micrometer-sized or nanometer-sized feedstock particles and permits the deposition of coatings thinner (from 20 to 100 μm) than those resulting from conventional atmospheric plasma spraying (APS). SPS consists of mechanically injecting within the plasma flow a liquid suspension of particles of average diameter varying between 0.02 and 1 μm. Due to the large volume fraction of the internal interfaces and reduced size of stacking defects, thick nanometer- or sub-micrometer-sized coatings exhibit better properties than conventional micrometer-sized ones (e.g., higher coefficients of thermal expansion, lower thermal diffusivity, higher hardness and toughness, better wear resistance, among other coating characteristics and functional properties). They could hence offer pertinent solutions to numerous emerging applications, particularly for energy production, energy saving, etc. Coatings structured at the nanometer scale exhibit nanometer-sized voids. Depending upon the selection of operating parameters, among which are plasma power parameters (operating mode, enthalpy, spray distance, etc.), suspension properties (particle size distribution, powder mass percentage, viscosity, etc.), and substrate characteristics (topology, temperature, etc.), different coating architectures can be manufactured, from dense to porous layers, from connected to non-connected networks. Nevertheless, the discrimination of porosity in different classes of criteria such as size, shape, orientation, specific surface area, etc., is essential to describe the coating architecture. Moreover, the primary steps of the coating manufacturing process affect the coating porous architecture significantly. These steps need to be further understood. Different types of imaging experiments were performed to understand, describe and quantify the pore level of thick finely structured ceramic coatings.
Gan, Lin; Rudi, Stefan; Cui, Chunhua; Heggen, Marc; Strasser, Peter
2016-06-01
Dealloyed Pt bimetallic core-shell catalysts derived from low-Pt bimetallic alloy nanoparticles (e.g., PtNi3) have recently shown unprecedented activity and stability for the cathodic oxygen reduction reaction (ORR) under realistic fuel cell conditions and have become today's catalysts of choice for commercialization of automobile fuel cells. A critical step toward this breakthrough is to control their particle size below a critical value (≈10 nm) to suppress nanoporosity formation and hence reduce significant base metal (e.g., Ni) leaching under the corrosive ORR condition. Fine size control of the sub-10 nm PtNi3 nanoparticles and understanding their size-dependent ORR electrocatalysis are crucial to further improving their ORR activity and stability, yet still remain unexplored. A robust synthetic approach is presented here for size-controlled PtNi3 nanoparticles between 3 and 10 nm while keeping a constant particle composition, and their size-selected growth mechanism is studied comprehensively. This enables us to address their size-dependent ORR activities and stabilities for the first time. Contrary to the previously established monotonic increase of ORR specific activity and stability with increasing particle size on Pt and Pt-rich bimetallic nanoparticles, the Pt-poor PtNi3 nanoparticles exhibit an unusual "volcano-shaped" size dependence, showing the highest ORR activity and stability at particle sizes between 6 and 8 nm due to their highest Ni retention during long-term catalyst aging. The results of this study provide important practical guidelines for the size selection of low-Pt bimetallic ORR electrocatalysts with further improved, durably high activity. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Automatic measurement of images on astrometric plates
NASA Astrophysics Data System (ADS)
Ortiz Gil, A.; Lopez Garcia, A.; Martinez Gonzalez, J. M.; Yershov, V.
1994-04-01
We present some results on the process of automatic detection and measurement of objects in overlapped fields of astrometric plates. The main steps of our algorithm are the following: determination of the scale and tilt between the charge coupled device (CCD) and microscope coordinate systems and estimation of the signal-to-noise ratio in each field; image identification and improvement of its position and size; final image centering; and image selection and storage. Several parameters allow the use of variable criteria for image identification, characterization and selection. Problems related to faint images and crowded fields will be approached by special techniques (morphological filters, histogram properties and fitting models).
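As an illustration of the image centering step, a generic intensity-weighted (moment) centroid refinement on a small stamp is sketched below; it is a standard centroider given for orientation only, not necessarily the algorithm applied to the plate scans.

```python
import numpy as np

def refine_center(image, x0, y0, half_width=5, background=None):
    """Refine an approximate image position with an intensity-weighted centroid."""
    y0, x0 = int(round(y0)), int(round(x0))
    stamp = image[y0 - half_width:y0 + half_width + 1,
                  x0 - half_width:x0 + half_width + 1].astype(float)
    if background is None:
        background = np.median(stamp)            # crude local sky estimate
    w = np.clip(stamp - background, 0, None)     # positive residual flux as weights
    yy, xx = np.mgrid[-half_width:half_width + 1, -half_width:half_width + 1]
    return x0 + (w * xx).sum() / w.sum(), y0 + (w * yy).sum() / w.sum()

# toy example: a Gaussian star near (x, y) = (20.3, 17.8) recovered from a rough guess
yy, xx = np.mgrid[0:40, 0:40]
star = 1000 * np.exp(-(((xx - 20.3) ** 2 + (yy - 17.8) ** 2) / (2 * 1.5 ** 2)))
print(refine_center(star + 5.0, 21, 18))
```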
Laser furnace and method for zone refining of semiconductor wafers
NASA Technical Reports Server (NTRS)
Griner, Donald B. (Inventor); zur Burg, Frederick W. (Inventor); Penn, Wayne M. (Inventor)
1988-01-01
A method of zone refining a crystal wafer (116 FIG. 1) comprising the steps of focusing a laser beam to a small spot (120) of selectable size on the surface of the crystal wafer (116) to melt a spot on the crystal wafer, scanning the small laser beam spot back and forth across the surface of the crystal wafer (116) at a constant velocity, and moving the scanning laser beam across a predetermined zone of the surface of the crystal wafer (116) in a direction normal to the laser beam scanning direction and at a selectable velocity to melt and refine the entire crystal wafer (116).
Delaine, Maxence; Bernard, Nadine; Gilbert, Daniel; Recourt, Philippe; Armynot du Châtelet, Eric
2017-06-01
Testate amoebae are free-living shelled protists that build a wide range of shells with various sizes, shapes, and compositions. Recent studies showed that xenosomic testate amoebae shells could be indicators of atmospheric particulate matter (PM) deposition. However, no study has yet been conducted to assess the intra-specific mineral, organic, and biologic grain diversity of a single xenosomic species in a natural undisturbed environment. This study aims at providing new information about grain selection to develop the potential use of xenosomic testate amoebae shells as bioindicators of the multiple-origin mineral/organic diversity of their proximal environment. To fulfil these objectives, we analysed the shell content of 38 Bullinularia indica individuals, a single xenosomic testate amoeba species living in Sphagnum capillifolium, by scanning electron microscope (SEM) coupled with X-ray spectroscopy. The shells exhibited high diversities of mineral, organic, and biomineral grains, which confirms their capability to recycle xenosomes. Mineral grain diversity and size of B. indica matched those of the atmospheric natural mineral PM deposited in the peatbog. Calculation of grain size sorting revealed a discrete selection of grains agglutinated by B. indica. These results are a first step towards understanding the mechanisms of particle selection by xenosomic testate amoebae in natural conditions. Copyright © 2017 Elsevier GmbH. All rights reserved.
Guan, Fada; Peeler, Christopher; Bronk, Lawrence; Geng, Changran; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Grosshans, David; Mohan, Radhe; Titt, Uwe
2015-01-01
Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the geant 4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from geant 4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LETt and dose-averaged LET, LETd) using geant 4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LETt and LETd of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LETt but significant for LETd. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in geant 4 can result in incorrect LETd calculation results in the dose plateau region for small step limits. The erroneous LETd results can be attributed to the algorithm to determine fluctuations in energy deposition along the tracking step in geant 4. The incorrect LETd values lead to substantial differences in the calculated RBE. Conclusions: When the geant 4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LETt in the dose plateau region and LETd around the Bragg peak. For a large step limit, i.e., 500 μm, LETd is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LETd and LETt becomes positive. PMID:26520716
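The distinction between the two averages is compact in code: both are means of the per-step stopping power ΔE/Δl, with LETt weighting each step by its track length and LETd weighting it by its energy deposit. The sketch below works on synthetic step data and is not a geant4 interface; the array values are placeholders.

```python
import numpy as np

def track_and_dose_averaged_let(de, dl):
    """de: energy deposited per tracking step (keV); dl: step length (um)."""
    de, dl = np.asarray(de, float), np.asarray(dl, float)
    let = de / dl                                   # stopping power of each step, keV/um
    let_t = np.sum(let * dl) / np.sum(dl)           # track-length weighted average (LETt)
    let_d = np.sum(let * de) / np.sum(de)           # dose (energy) weighted average (LETd)
    return let_t, let_d

# toy step data: mostly low-LET steps plus a few high-LET ones near the end of range
rng = np.random.default_rng(3)
dl = np.full(1000, 1.0)                             # 1 um steps
de = np.concatenate([rng.gamma(2.0, 0.4, 950),      # plateau-like deposits
                     rng.gamma(2.0, 4.0, 50)])      # Bragg-peak-like deposits
print(track_and_dose_averaged_let(de, dl))          # LETd exceeds LETt, as expected
```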
An innovative cascade system for simultaneous separation of multiple cell types.
Pierzchalski, Arkadiusz; Mittag, Anja; Bocsi, Jozsef; Tarnok, Attila
2013-01-01
Isolation of different cell types from one sample by fluorescence activated cell sorting is standard but expensive and time consuming. Magnetic separation is more cost-effective and faster but requires substantial effort. An innovative pluriBead-cascade cell isolation system (pluriSelect GmbH, Leipzig, Germany) simultaneously separates two or more different cell types. It is based on antibody-mediated binding of cells to beads of different size and their isolation with sieves of different mesh-size. For the first time, we validated the pluriSelect system for simultaneous separation of CD4+ and CD8+ cells from human EDTA-blood samples. Results were compared with those obtained by magnetic activated cell sorting (MACS; two steps - first isolation of CD4+ cells, then restaining of the residual cell suspension with anti-human CD8+ MACS antibody followed by the second isolation). pluriSelect separation was done in whole blood, MACS separation on density gradient isolated mononuclear cells. Isolated and residual cells were immunophenotyped by a 7-color 9-marker panel (CD3; CD16/56; CD4; CD8; CD14; CD19; CD45; HLA-DR) using flow cytometry. Cell count, purity, yield and viability (7-AAD exclusion) were determined. There were no significant differences between both systems regarding purity of CD4+ cells (MACS (median[range]): 92.4% [91.5-94.9] vs. pluriSelect 95% [94.9-96.8]); however, CD8+ isolation showed lower purity by MACS (74.8% [67.6-77.9], pluriSelect 89.9% [89.0-95.7]). Yield was not significantly different for CD4 (MACS 58.5% [54.1-67.5], pluriSelect 67.9% [56.8-69.8]) and for CD8 (MACS 57.2% [41.3-72.0], pluriSelect 67.2% [60.0-78.5]). Viability was slightly higher with MACS for CD4+ (98.4% [97.8-99.0], pluriSelect 94.1% [92.1-95.2]) and for CD8+ cells (98.8% [98.3-99.1], pluriSelect 86.7% [84.2-89.9]). pluriSelect separation was substantially faster than MACS (1h vs. 2.5h) and no pre-enrichment steps were necessary. In conclusion, pluriSelect is a fast, simple and gentle system for efficient simultaneous separation of two or more cell subpopulations directly from whole blood and provides a simple alternative to magnetic separation.
2016-01-01
This review aimed to arrange the process of a systematic review of genome-wide association studies in order to practice and apply a genome-wide meta-analysis (GWMA). The process has a series of five steps: searching and selection, extraction of related information, evaluation of validity, meta-analysis by type of genetic model, and evaluation of heterogeneity. In contrast to intervention meta-analyses, GWMA has to evaluate the Hardy–Weinberg equilibrium (HWE) in the third step and conduct meta-analyses by five potential genetic models, including dominant, recessive, homozygote contrast, heterozygote contrast, and allelic contrast in the fourth step. The ‘genhwcci’ and ‘metan’ commands of STATA software evaluate the HWE and calculate a summary effect size, respectively. A meta-regression using the ‘metareg’ command of STATA should be conducted to evaluate related factors of heterogeneities. PMID:28092928
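The review points to STATA commands for steps three and four; the same two calculations (a chi-square goodness-of-fit test for HWE and an inverse-variance fixed-effect pooled estimate) can be written generically, as in the sketch below, where the genotype counts and per-study log odds ratios are invented placeholders.

```python
import numpy as np
from scipy import stats

def hwe_chi2(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium (1 d.f.)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                      # frequency of allele A
    expected = np.array([p**2, 2 * p * (1 - p), (1 - p)**2]) * n
    observed = np.array([n_aa, n_ab, n_bb])
    chi2 = np.sum((observed - expected) ** 2 / expected)
    return chi2, stats.chi2.sf(chi2, df=1)

def fixed_effect_pool(log_or, se):
    """Inverse-variance fixed-effect pooling of per-study log odds ratios."""
    w = 1.0 / np.asarray(se) ** 2
    pooled = np.sum(w * np.asarray(log_or)) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

print(hwe_chi2(88, 94, 18))                               # placeholder genotype counts
print(fixed_effect_pool(np.log([1.3, 1.1, 1.5]), [0.12, 0.20, 0.25]))
```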
NASA Technical Reports Server (NTRS)
Janus, J. Mark; Whitfield, David L.
1990-01-01
Improvements are presented of a computer algorithm developed for the time-accurate flow analysis of rotating machines. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to orderly partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.
Hyperspectral data discrimination methods
NASA Astrophysics Data System (ADS)
Casasent, David P.; Chen, Xuewen
2000-12-01
Hyperspectral data provides spectral response information that provides detailed chemical, moisture, and other description of constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistic small training set sizes, and the need to employ discriminatory features and still achieve good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS) in which the level of a specific channel constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.
NASA Astrophysics Data System (ADS)
Ryu, Inkeon; Kim, Daekeun
2018-04-01
A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount where uncertainties for the translational and the rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, with a view to quantifying the constellations and measuring the distances between at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
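In the simplest (translation-only) case, the registration described above reduces to a least-squares estimate of the tile-to-tile offset from matched microsphere positions, i.e. the mean of the coordinate differences; a rotational component would additionally require a Procrustes/Kabsch-type fit. The sketch below uses synthetic bead coordinates and is only a structural illustration of that idea, not the authors' implementation.

```python
import numpy as np

def estimate_translation(beads_tile_a, beads_tile_b):
    """Least-squares translation mapping tile A coordinates onto tile B.

    beads_tile_a, beads_tile_b: (N, 2) arrays of the same microspheres,
    measured in the local coordinates of two overlapping tiles.
    """
    return np.mean(np.asarray(beads_tile_b) - np.asarray(beads_tile_a), axis=0)

# synthetic check: beads shifted by (412.7, -3.2) px with 0.5 px localization noise
rng = np.random.default_rng(7)
a = rng.uniform(0, 512, size=(6, 2))
true_shift = np.array([412.7, -3.2])
b = a + true_shift + rng.normal(0, 0.5, size=a.shape)

print(estimate_translation(a, b))       # close to the true shift
```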
Improving stability of prediction models based on correlated omics data by using network approaches.
Tissier, Renaud; Houwing-Duistermaat, Jeanine; Rodríguez-Girondo, Mar
2018-01-01
Building prediction models based on complex omics datasets such as transcriptomics, proteomics, and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection where the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are 1) network construction, 2) clustering to empirically derive modules or pathways, and 3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally, we illustrate the advantages of our approach by application of the methodology to two problems, namely prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell lines pharmacogenomics dataset.
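The three steps can be mocked up with standard Python tooling: a correlation-based distance, hierarchical clustering into modules, and a regression that uses the module structure. In the sketch below, module-mean features with ridge regression stand in for the group-based selection or group-specific penalization of the paper, and the simulated data carry no biological meaning.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 60))                       # 150 samples x 60 omics features
y = X[:, :5].sum(axis=1) + rng.normal(size=150)      # outcome driven by one module

# Steps 1-2: correlation-based distance, hierarchical clustering into modules
corr = np.corrcoef(X, rowvar=False)
dist = 1.0 - np.abs(corr)
modules = fcluster(linkage(dist[np.triu_indices(60, k=1)], method="average"),
                   t=10, criterion="maxclust")

# Step 3: summarize each module (here by its mean) and fit a penalized model
module_means = np.column_stack([X[:, modules == m].mean(axis=1)
                                for m in np.unique(modules)])
model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(module_means, y)
print("module-level coefficients:", np.round(model.coef_, 2))
```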
Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin
2017-01-01
Myosin motors in cardiac ventriculum convert ATP free energy to the work of moving blood volume under pressure. The actin bound motor cyclically rotates its lever-arm/light-chain complex linking motor generated torque to the myosin filament backbone and translating actin against resisting force. Previous research showed that the unloaded in vitro motor is described with high precision by single molecule mechanical characteristics including unitary step-sizes of approximately 3, 5, and 8 nm and their relative step-frequencies of approximately 13, 50, and 37%. The 3 and 8 nm unitary step-sizes are dependent on myosin essential light chain (ELC) N-terminus actin binding. Step-size and step-frequency quantitation specifies in vitro motor function including duty-ratio, power, and strain sensitivity metrics. In vivo, motors integrated into the muscle sarcomere form the more complex and hierarchically functioning muscle machine. The goal of the research reported here is to measure single myosin step-size and step-frequency in vivo to assess how tissue integration impacts motor function. A photoactivatable GFP tags the ventriculum myosin lever-arm/light-chain complex in the beating heart of a live zebrafish embryo. Detected single GFP emission reports time-resolved myosin lever-arm orientation interpreted as step-size and step-frequency providing single myosin mechanical characteristics over the active cycle. Following step-frequency of cardiac ventriculum myosin transitioning from low to high force in relaxed to auxotonic to isometric contraction phases indicates that the imposition of resisting force during contraction causes the motor to down-shift to the 3 nm step-size accounting for >80% of all the steps in the near-isometric phase. At peak force, the ATP initiated actomyosin dissociation is the predominant strain inhibited transition in the native myosin contraction cycle. The proposed model for motor down-shifting and strain sensing involves ELC N-terminus actin binding. Overall, the approach is a unique bottom-up single molecule mechanical characterization of a hierarchically functional native muscle myosin. PMID:28423017
NASA Astrophysics Data System (ADS)
Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong
2009-12-01
Proportionate adaptive algorithms have been proposed recently to accelerate convergence for the identification of sparse impulse responses. When the excitation signal is colored, especially speech, proportionate NLMS algorithms exhibit slow convergence. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more information in the input signals. However, its steady-state performance is limited by the constant step-size parameter. In this article we propose a variable step-size PAPA by canceling the a posteriori estimation error. This can result in high convergence speed using a large step size when the identification error is large, and can then considerably decrease the steady-state misalignment using a small step size after the adaptive filter has converged. Simulation results show that the proposed approach can greatly improve the steady-state misalignment without sacrificing the fast convergence of PAPA.
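The general idea of a variable step size (a large step while the identification error is large, a small step once the filter has converged) can be illustrated with a minimal NLMS-style system identification sketch. The error-driven step-size heuristic below is generic and for illustration only; it is not the a posteriori error-cancelling rule derived in the article and it omits the proportionate and affine-projection machinery.

```python
import numpy as np

def vss_nlms(x, d, filter_len=64, mu_max=1.0, mu_min=0.05, eps=1e-8):
    """NLMS identification of an unknown channel with an error-driven variable step size."""
    w = np.zeros(filter_len)
    e_smooth = 1.0
    errors = []
    for n in range(filter_len, len(x)):
        u = x[n - filter_len + 1:n + 1][::-1]          # current input regressor
        e = d[n] - w @ u                               # a priori estimation error
        e_smooth = 0.99 * e_smooth + 0.01 * e * e      # smoothed error power
        mu = np.clip(mu_max * e_smooth / (e_smooth + 0.01), mu_min, mu_max)
        w += mu * e * u / (u @ u + eps)                # NLMS update
        errors.append(e * e)
    return w, np.array(errors)

# system identification demo: sparse 64-tap echo path, white noise excitation
rng = np.random.default_rng(0)
h = np.zeros(64); h[[3, 10, 30]] = [0.8, -0.5, 0.2]
x = rng.normal(size=20000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.normal(size=len(x))
w, err = vss_nlms(x, d)
print("misalignment (dB):", 10 * np.log10(np.sum((h - w)**2) / np.sum(h**2)))
```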
Pauw, Anton; Kahnt, Belinda; Kuhlmann, Michael; Michez, Denis; Montgomery, Graham A; Murray, Elizabeth; Danforth, Bryan N
2017-09-13
Adaptation is evolution in response to natural selection. Hence, an adaptation is expected to originate simultaneously with the acquisition of a particular selective environment. Here we test whether long legs evolve in oil-collecting Rediviva bees when they come under selection by long-spurred, oil-secreting flowers. To quantify the selective environment, we drew a large network of the interactions between Rediviva species and oil-secreting plant species. The selective environment of each bee species was summarized as the average spur length of the interacting plant species weighted by interaction frequency. Using phylogenetically independent contrasts, we calculated divergence in selective environment and evolutionary divergence in leg length between sister species (and sister clades) of Rediviva. We found that change in the selective environment explained 80% of evolutionary change in leg length, with change in body size contributing an additional 6% of uniquely explained variance. The result is one of four proposed steps in testing for plant-pollinator coevolution. © 2017 The Author(s).
Zhang, Junqiu; Yan, Juping; Wang, Yingte; Zhang, Yong
2018-07-01
A facile and economical approach to synthesize highly fluorescent carbon dots (CDs) via one-step hydrothermal treatment of D-sorbitol is presented. The as-synthesized CDs were characterized by good water solubility, good monodispersity, and excellent biocompatibility. The spherical CDs had a particle size of about 5 nm and exhibited a quantum yield of 8.85% at an excitation wavelength of 360 nm. In addition, the CDs can serve as a fluorescent probe for sensitive and selective detection of Fe3+ ions with a detection limit of 1.16 μM. Moreover, the potential of the as-prepared carbon dots for biological applications was confirmed by employing them for fluorescence imaging in MCF-7 cells.
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L; Armour, Wes; Waterman, David G; Iwata, So; Evans, Gwyndaf
2013-08-01
The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.
Speciation of nanoscale objects by nanoparticle imprinted matrices
NASA Astrophysics Data System (ADS)
Hitrik, Maria; Pisman, Yamit; Wittstock, Gunther; Mandler, Daniel
2016-07-01
The toxicity of nanoparticles is not only a function of the constituting material but depends largely on their size, shape and stabilizing shell. Hence, the speciation of nanoscale objects, namely, their detection and separation based on the different species, similarly to heavy metals, is of utmost importance. Here we demonstrate the speciation of gold nanoparticles (AuNPs) and their electrochemical detection using the concept of ``nanoparticles imprinted matrices'' (NAIM). Negatively charged AuNPs are adsorbed as templates on a conducting surface previously modified with polyethylenimine (PEI). The selective matrix is formed by the adsorption of either oleic acid (OA) or poly(acrylic acid) (PAA) on the non-occupied areas. The AuNPs are removed by electrooxidation to form complementary voids. These voids are able to recognize the AuNPs selectively based on their size. Furthermore, the selectivity could be improved by adsorbing an additional layer of 1-hexadecylamine, which deepened the voids. Interestingly, silver nanoparticles (AgNPs) were also recognized if their size matched those of the template AuNPs. The steps in assembling the NAIMs and the reuptake of the nanoparticles were characterized carefully. The prospects for the analytical use of NAIMs, which are simple, of small dimension, cost-efficient and portable, are in the sensing and separation of nanoobjects.
An anthropometric analysis of Korean male helicopter pilots for helicopter cockpit design.
Lee, Wonsup; Jung, Kihyo; Jeong, Jeongrim; Park, Jangwoon; Cho, Jayoung; Kim, Heeeun; Park, Seikwon; You, Heecheon
2013-01-01
This study measured 21 anthropometric dimensions (ADs) of 94 Korean male helicopter pilots in their 20s to 40s and compared them with corresponding measurements of Korean male civilians and the US Army male personnel. The ADs and the sample size of the anthropometric survey were determined by a four-step process: (1) selection of ADs related to helicopter cockpit design, (2) evaluation of the importance of each AD, (3) calculation of required sample sizes for selected precision levels and (4) determination of an appropriate sample size by considering both the AD importance evaluation results and the sample size requirements. The anthropometric comparison reveals that the Korean helicopter pilots are larger (ratio of means = 1.01-1.08) and less dispersed (ratio of standard deviations = 0.71-0.93) than the Korean male civilians and that they are shorter in stature (0.99), have shorter upper limbs (0.89-0.96) and lower limbs (0.93-0.97), but are taller on sitting height, sitting eye height and acromial height (1.01-1.03), and less dispersed (0.68-0.97) than the US Army personnel. The anthropometric characteristics of Korean male helicopter pilots were compared with those of Korean male civilians and US Army male personnel. The sample size determination process and the anthropometric comparison results presented in this study are useful to design an anthropometric survey and a helicopter cockpit layout, respectively.
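Step (3) of the survey design, the sample size required to estimate a mean dimension to a chosen precision, follows the standard normal-approximation formula n ≥ (zσ/E)², where E is the allowable error; the sketch below uses illustrative numbers, not the study's values.

```python
import math
from scipy import stats

def required_sample_size(sd, allowable_error, confidence=0.95):
    """n >= (z * sigma / E)^2 for estimating a mean to within +/- E."""
    z = stats.norm.ppf(0.5 + confidence / 2.0)
    return math.ceil((z * sd / allowable_error) ** 2)

# illustrative values: stature with SD 60 mm, precision of +/- 1% of a 1730 mm mean
print(required_sample_size(sd=60.0, allowable_error=0.01 * 1730.0))   # about 47
```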
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Qiuxia; Wang, Jianguo; Wang, Yang-Gang
The effects of structure and size on the selectivity of catalytic furfural conversion over supported Pt catalysts in the presence of hydrogen have been studied using first principles density functional theory (DFT) calculations and microkinetic modeling. Four Pt model systems, i.e., periodic Pt(111), Pt(211) surfaces, as well as small nanoclusters (Pt13 and Pt55) are chosen to represent the terrace, step, and corner sites of Pt nanoparticles. Our DFT results show that the reaction routes for furfural hydrogenation and decarbonylation are strongly dependent on the type of reactive sites, which lead to the different selectivity. On the basis of the size-dependent site distribution rule, we correlate the site distributions as a function of the Pt particle size. Our microkinetic results indicate the critical particle size that controls the furfural selectivity is about 1.0 nm, which is in good agreement with the reported experimental value under reaction conditions. This work was supported by National Basic Research Program of China (973 Program) (2013CB733501) and the National Natural Science Foundation of China (NSFC-21306169, 21176221, 21136001, 21101137 and 91334103). This work was also partially supported by the US Department of Energy (DOE), the Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL). EMSL is a national scientific user facility located at Pacific Northwest National Laboratory (PNNL) and sponsored by DOE's Office of Biological and Environmental Research.
Preparation of the apical part of the root canal by the Lightspeed and step-back techniques.
Portenier, I; Lutz, F; Barbakow, F
1998-03-01
This study measured in vitro the displacement of natural canal centres in 18 human teeth before and after shaping by the step-back or Lightspeed techniques. Experimental roots (n = 9 per group), embedded in clear plastic, were cross-sectioned using a 0.1-mm-thick band saw at distances 1.25 mm, 3.25 mm and 5.25 mm from the apices. A stereo microscope was used to take 35 mm slides of the cut surfaces of the sectioned roots and canals. The slides of the uninstrumented canals were scanned into a computer and saved. Each sectioned root was then reassembled and the canals shaped by the step-back or Lightspeed technique. File size 40 and instrument size 50 were selected as the master apical file and master apical rotary for the step-back and Lightspeed groups, respectively. The 18 prepared canals were photographed, and the 35 mm slides scanned and computer stored as previously. This allowed the positions of the pre- and postinstrumented roots to be electronically superimposed for subsequent analyses. Displacements of the root canal centres before and after preparation were assessed in relation to the cross-sectional diameter of the files or instruments used. In addition, increases in cross-sectional area of the root canals after preparation were evaluated in relation to the cross-sectional area of the files or instruments used. Engine-driven nickel-titanium Lightspeed instruments caused significantly less (P < 0.001) displacement of the canal centres, so roots in the Lightspeed group remained better centred than those in the step-back group. The mean cross-sectional area after preparation in the Lightspeed group was significantly less (P < 0.001) than that recorded in the step-back group. Clinically, this implies less apical transportation and less dentine destruction with the Lightspeed technique than with the step-back technique.
Night Operations - The Soviet Approach
1978-06-09
... sized engineer equipment. Passive infrared field glasses are provided to Soviet troops and selected marksmen are armed with the Dragunov ... up to 300 m; conversation of a few men, up to 300 m; steps of a single man, up to 40 m; axe blow or sound of a saw, up to 500 m; blows of shovels and pickaxes ... rocking frame simulators. Electric lightbulbs are popped to simulate the dazzle from the tank's main gun muzzle flash. At Site Two, individual crew
Determination of Individual Temperatures and Luminosities in Eclipsing Binary Star Systems.
1983-06-20
one to select one of five aperture sizes, ranging from .01" to .199". Each position has two stops: one sets the aperture in the center of the field of...comparing stellar field patterns to the finder chart. This step was the most surprisingly difficult of the entire observational procedure, since star... fields never quite seemed to agree exactly with those published on the SAO atlas. Once the system is located, it is centered in the smallest aperture which
Sexual selection and allometry: a critical reappraisal of the evidence and ideas.
Bonduriansky, Russell
2007-04-01
One of the most pervasive ideas in the sexual selection literature is the belief that sexually selected traits almost universally exhibit positive static allometries (i.e., within a sample of conspecific adults, larger individuals have disproportionally larger traits). In this review, I show that this idea is contradicted by empirical evidence and theory. Although positive allometry is a typical attribute of some sexual traits in certain groups, the preponderance of positively allometric sexual traits in the empirical literature apparently results from a sampling bias reflecting a fascination with unusually exaggerated (bizarre) traits. I review empirical examples from a broad range of taxa illustrating the diversity of allometric patterns exhibited by signal, weapon, clasping and genital traits, as well as nonsexual traits. This evidence suggests that positive allometry may be the exception rather than the rule in sexual traits, that directional sexual selection does not necessarily lead to the evolution of positive allometry and, conversely, that positive allometry is not necessarily a consequence of sexual selection, and that many sexual traits exhibit sex differences in allometric intercept rather than slope. Such diversity in the allometries of secondary sexual traits is to be expected, given that optimal allometry should reflect resource allocation trade-offs, and patterns of sexual and viability selection on both trait size and body size. An unbiased empirical assessment of the relation between sexual selection and allometry is an essential step towards an understanding of this diversity.
Kajbafvala, Marzieh; Farbod, Mansoor
2018-05-14
Although liquid phase exfoliation is a powerful method to produce MoS2 nanosheets at large scale, its effectiveness is limited by the diversity of the produced nanosheet sizes. Here a novel approach for separation of MoS2 flakes having various lateral sizes and thicknesses, based on cascaded centrifugation, has been introduced. This method involves a pre-separation step which is performed through low-speed centrifugation to avoid the co-deposition of large-area single- and few-layer flakes with the heavier particles. The bulk MoS2 powders were dispersed in an aqueous solution of sodium cholate (SC) and sonicated for 12 h. The main separation step was performed using different centrifugation speed intervals of 10-11, 8-10, 6-8, 4-6, 2-4 and 0.5-2 krpm, by which nanosheets containing 2, 4, 7, 8, 14, 18 and 29 layers were obtained, respectively. The samples were characterized using XRD, FESEM, AFM, TEM, DLS and also UV-vis, Raman and PL spectroscopy measurements. Dynamic light scattering (DLS) measurements confirmed the existence of a larger number of single- or few-layer MoS2 nanosheets compared to when the pre-separation step was not used. Finally, photocurrent and cyclic voltammetry (CV) of the different samples were measured, and it was found that flakes with a bigger surface area had a larger CV loop area. Our results provide a method for the preparation of a MoS2 monolayer-enriched suspension which can be used for different applications. Copyright © 2018 Elsevier Inc. All rights reserved.
Assessing the concept of structure sensitivity or insensitivity for sub-nanometer catalyst materials
NASA Astrophysics Data System (ADS)
Crampton, Andrew S.; Rötzer, Marian D.; Ridge, Claron J.; Yoon, Bokwon; Schweinberger, Florian F.; Landman, Uzi; Heiz, Ueli
2016-10-01
The nature of the nano-catalyzed hydrogenation of ethylene, yielding benchmark information pertaining to the concept of structure sensitivity/insensitivity and its applicability at the bottom of the catalyst particle size-range, is explored with experiments on size-selected Ptn (n = 7-40) clusters soft-landed on MgO, in conjunction with first-principles simulations. As in the case of larger particles, both the direct ethylene hydrogenation channel and the parallel hydrogenation-dehydrogenation ethylidyne-producing route must be considered, with the fundamental finding that at the < 1 nm size-scale the reaction exhibits characteristics consistent with structure sensitivity, in contrast to the structure insensitivity found for larger particles. In this size-regime, the chemical properties can be modulated and tuned by a single atom, reflected by the onset of low temperature hydrogenation at T > 150 K catalyzed by Ptn (n ≥ 10) clusters, with maximum room temperature reactivity observed for Pt13 using a pulsed molecular beam technique. Structure insensitive behavior, inherent for specific cluster sizes at ambient temperatures, can be induced in the more active sizes, e.g. Pt13, by a temperature increase, up to 400 K, which opens dehydrogenation channels leading to ethylidyne formation. This reaction channel was, however, found to be attenuated on Pt20, as catalyst activity remained elevated after the 400 K step. Pt30 displayed behavior which can be understood from extrapolating bulk properties, in particular the calculated d-band center, to this size range. In the non-scalable sub-nanometer size regime, however, precise control of particle size may be used for atom-by-atom tuning and manipulation of catalyzed hydrogenation activity and selectivity.
NASA Astrophysics Data System (ADS)
Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes
2004-12-01
In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.
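The adaptive step-size logic typical of a multidirectional/compass pattern search, in which the step is enlarged after a successful poll, shrunk after a failed one, and the search stops once the step falls below a tolerance, can be sketched as follows. The paired-comparison judgments of listening comfort are replaced here by a numerical objective, so this is a structural illustration only, not the study's procedure.

```python
import numpy as np

def compass_search(f, x0, step=1.0, expand=2.0, contract=0.5,
                   step_min=1e-3, max_evals=2000):
    """Pattern search along coordinate directions with an adaptive step size."""
    x, fx, evals = np.asarray(x0, float), f(x0), 1
    dirs = np.vstack([np.eye(len(x0)), -np.eye(len(x0))])   # +/- each coordinate
    while step > step_min and evals < max_evals:
        for d in dirs:
            cand = x + step * d
            fc = f(cand); evals += 1
            if fc < fx:                     # success: accept the point and enlarge the step
                x, fx = cand, fc
                step *= expand
                break
        else:                               # no direction improved: shrink the step
            step *= contract
    return x, fx

# toy "listening comfort" surrogate: optimum at (noise_red, enhancement) = (3, 1)
f = lambda p: (p[0] - 3.0) ** 2 + 2.0 * (p[1] - 1.0) ** 2
print(compass_search(f, [0.0, 0.0]))
```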
A Conformational Transition in the Myosin VI Converter Contributes to the Variable Step Size
Ovchinnikov, V.; Cecchini, M.; Vanden-Eijnden, E.; Karplus, M.
2011-01-01
Myosin VI (MVI) is a dimeric molecular motor that translocates backwards on actin filaments with a surprisingly large and variable step size, given its short lever arm. A recent x-ray structure of MVI indicates that the large step size can be explained in part by a novel conformation of the converter subdomain in the prepowerstroke state, in which a 53-residue insert, unique to MVI, reorients the lever arm nearly parallel to the actin filament. To determine whether the existence of the novel converter conformation could contribute to the step-size variability, we used a path-based free-energy simulation tool, the string method, to show that there is a small free-energy difference between the novel converter conformation and the conventional conformation found in other myosins. This result suggests that MVI can bind to actin with the converter in either conformation. Models of MVI/MV chimeric dimers show that the variability in the tilting angle of the lever arm that results from the two converter conformations can lead to step-size variations of ∼12 nm. These variations, in combination with other proposed mechanisms, could explain the experimentally determined step-size variability of ∼25 nm for wild-type MVI. Mutations to test the findings by experiment are suggested. PMID:22098742
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Zhanying; Department of Applied Science, University of Québec at Chicoutimi, Saguenay, QC G7H 2B1; Zhao, Gang
2016-04-15
The effects of two homogenization treatments applied to the direct chill (DC) cast billet on the recrystallization behavior in 7150 aluminum alloy during post-rolling annealing have been investigated using the electron backscatter diffraction (EBSD) technique. Following hot and cold rolling to the sheet, measured orientation maps, the recrystallization fraction and grain size, the misorientation angle and the subgrain size were used to characterize the recovery and recrystallization processes at different annealing temperatures. The results were compared between the conventional one-step homogenization and the new two-step homogenization, with the first step being pretreated at 250 °C. Al3Zr dispersoids with higher densities and smaller sizes were obtained after the two-step homogenization, which strongly retarded subgrain/grain boundary mobility and inhibited recrystallization. Compared with the conventional one-step homogenized samples, a significantly lower recrystallized fraction and a smaller recrystallized grain size were obtained under all annealing conditions after cold rolling in the two-step homogenized samples. - Highlights: • Effects of two homogenization treatments on recrystallization in 7150 Al sheets • Quantitative study on the recrystallization evolution during post-rolling annealing • Al3Zr dispersoids with higher densities and smaller sizes after two-step treatment • Higher recrystallization resistance of 7150 sheets with two-step homogenization.
NASA Astrophysics Data System (ADS)
von Ruette, Jonas; Lehmann, Peter; Fan, Linfeng; Bickel, Samuel; Or, Dani
2017-04-01
Landslides and subsequent debris-flows initiated by rainfall represent a ubiquitous natural hazard in steep mountainous regions. We integrated a landslide hydro-mechanical triggering model and associated debris flow runout pathways with a graphical user interface (GUI) to represent these natural hazards in a wide range of catchments over the globe. The STEP-TRAMM GUI provides process-based locations and sizes of landslide patterns using digital elevation models (DEM) from the SRTM database (30 m resolution) linked with soil maps from the global SoilGrids database (250 m resolution) and satellite-based information on rainfall statistics for the selected region. In a preprocessing step, STEP-TRAMM models the soil depth distribution and complements soil information to jointly capture key hydrological and mechanical properties relevant to local soil failure representation. In the presentation we will discuss features of this publicly available platform and compare landslide and debris flow patterns for different regions considering representative intense rainfall events. Model outcomes will be compared for different spatial and temporal resolutions to test the applicability of web-based information on elevation and rainfall for hazard assessment.
Schlenstedt, Christian; Mancini, Martina; Horak, Fay; Peterson, Daniel
2017-07-01
To characterize anticipatory postural adjustments (APAs) across a variety of step initiation tasks in people with Parkinson disease (PD) and healthy subjects. Cross-sectional study. Step initiation was analyzed during self-initiated gait, perceptual cued gait, and compensatory forward stepping after platform perturbation. People with PD were assessed on and off levodopa. University research laboratory. People (N=31) with PD (n=19) and healthy age-matched subjects (n=12). Not applicable. Mediolateral (ML) size of APAs (calculated from center of pressure recordings), step kinematics, and body alignment. With respect to self-initiated gait, the ML size of APAs was significantly larger during the cued condition and significantly smaller during the compensatory condition (P<.001). Healthy subjects and patients with PD did not differ in body alignment during the stance phase prior to stepping. No significant group effect was found for ML size of APAs between healthy subjects and patients with PD. However, the reduction in APA size from cued to compensatory stepping was significantly less pronounced in PD off medication compared with healthy subjects, as indicated by a significant group by condition interaction effect (P<.01). No significant differences were found comparing patients with PD on and off medications. Specific stepping conditions had a significant effect on the preparation and execution of step initiation. Therefore, APA size should be interpreted with respect to the specific stepping condition. Across-task changes in people with PD were less pronounced compared with healthy subjects. Antiparkinsonian medication did not significantly improve step initiation in this mildly affected PD cohort. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Critical Motor Number for Fractional Steps of Cytoskeletal Filaments in Gliding Assays
Li, Xin; Lipowsky, Reinhard; Kierfeld, Jan
2012-01-01
In gliding assays, filaments are pulled by molecular motors that are immobilized on a solid surface. By varying the motor density on the surface, one can control the number N of motors that pull simultaneously on a single filament. Here, such gliding assays are studied theoretically using Brownian (or Langevin) dynamics simulations and taking the local force balance between motors and filaments as well as the force-dependent velocity of the motors into account. We focus on the filament stepping dynamics and investigate how single motor properties such as stalk elasticity and step size determine the presence or absence of fractional steps of the filaments. We show that each gliding assay can be characterized by a critical motor number, N_c. Because of thermal fluctuations, fractional filament steps are only detectable as long as N < N_c. The corresponding fractional filament step size is l/N, where l is the step size of a single motor. We first apply our computational approach to microtubules pulled by kinesin-1 motors. For elastic motor stalks that behave as linear springs with a zero rest length, the critical motor number is found to be N_c = 4, and the corresponding distributions of the filament step sizes are in good agreement with the available experimental data. In general, the critical motor number depends on the elastic stalk properties and is reduced to N_c = 3 for linear springs with a nonzero rest length. Furthermore, N_c is shown to depend quadratically on the motor step size l. Therefore, gliding assays consisting of actin filaments and myosin-V are predicted to exhibit fractional filament steps up to motor number N = 31. Finally, we show that fractional filament steps are also detectable for a fixed average motor number as determined by the surface density (or coverage) of the motors on the substrate surface. PMID:22927953
Vessally, Esmail; Siadati, Seyyed Amir; Hosseinian, Akram; Edjlali, Ladan
2017-01-01
Ozone is a key atmospheric species: the ozone layer in the stratosphere shields the Earth's ecosystem from harmful solar radiation and thereby supports complex life. As long as this three-atom molecule remains in the stratosphere it sustains the natural order of life, but when it appears in our immediate environment it becomes harmful. In this project, we sought a new way of detecting ozone in the environment via physical adsorption on a C20 fullerene and a graphene segment acting as sensors. To assess the selectivity of these nano-sized segments in sensing ozone (O3) relative to the common chemically active gases of the troposphere such as O2, N2, CO2, H2O, CH4, H2, and CO, density of states (DOS) plots were analyzed for each interacting species. The results showed that ozone significantly changes the electrical conductivity of the C20 fullerene at each adsorption step. Thus, this fullerene can clearly sense ozone in different adsorption steps, whereas the graphene segment can do so only at the second adsorption step (|ΔEg-B| = 0.016 eV; at the first adsorption step |ΔEg-A| = 0.00 eV). Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Deyhle, Hans; Schmidli, Fredy; Krastl, Gabriel; Müller, Bert
2010-09-01
Direct composite fillings belong to the widespread tooth restoration techniques in dental medicine. The procedure consists of successive steps, which include etching of the prepared tooth surface, bonding and placement of composite in incrementally built up layers. Durability and lifespan of the composite inlays strongly depend on the accurate completion of the individual steps, to be realized also by students in dental medicine. Improper handling or nonconformity in the bonding procedure often leads to air enclosures (bubbles) as well as to significant gaps between the composite layers or at the margins of the restoration. Traditionally one analyzes the quality of the restoration by cutting the tooth in an arbitrarily selected plane and inspecting this plane by conventional optical microscopy. Although the precision of this established method is satisfactory, it is restricted to the selected two-dimensional plane. Rather simple micro computed tomography (μCT) systems, such as SkyScan 1174™, allow for the non-destructive three-dimensional imaging of restored teeth ex vivo and for virtually cutting the tomographic data in any desired direction, offering a powerful tool for inspection of the restored tooth with micrometer resolution before cutting, and thus also for selecting a two-dimensional plane with potential defects. In order to study the influence of the individual steps on the resulting tooth restoration, direct composite fillings were placed in MOD cavities of extracted teeth. After etching, an adhesive was applied in half of the specimens. From the tomographic datasets, it becomes clear that gaps occur more frequently when bonding is omitted. The visualization of air enclosures makes it possible to determine the probability of finding a micrometer-sized defect using an arbitrarily selected cutting plane for inspection.
Aircraft conceptual design - an adaptable parametric sizing methodology
NASA Astrophysics Data System (ADS)
Coleman, Gary John, Jr.
Aerospace is a maturing industry with successful and refined baselines which work well for traditional baseline missions, markets and technologies. However, when new markets (space tourism), new constraints (environmental), or new technologies (composite, natural laminar flow) emerge, the conventional solution is not necessarily best for the new situation. This begs the question: how does a design team quickly screen and compare novel solutions to conventional solutions for new aerospace challenges? The answer is rapid and flexible conceptual design parametric sizing. In the product design life-cycle, parametric sizing is the first step in screening the total vehicle in terms of mission, configuration and technology to quickly assess first-order design and mission sensitivities. During this phase, various missions and technologies are assessed, and the designer identifies design solutions of concepts and configurations to meet combinations of mission and technology. This research undertaking contributes to the state of the art in aircraft parametric sizing through (1) development of a dedicated conceptual design process and disciplinary methods library, (2) development of a novel and robust parametric sizing process based on 'best-practice' approaches found in the process and disciplinary methods library, and (3) application of the parametric sizing process to a variety of design missions (transonic, supersonic and hypersonic transports), different configurations (tail-aft, blended wing body, strut-braced wing, hypersonic blended bodies, etc.), and different technologies (composite, natural laminar flow, thrust vectored control, etc.), in order to demonstrate the robustness of the methodology and unearth first-order design sensitivities to current and future aerospace design problems. This research undertaking demonstrates the importance of this early design step in selecting the correct combination of mission, technologies and configuration to meet current aerospace challenges. The overarching goal is to avoid the recurring situation of optimizing an already ill-fated solution.
Modeling myosin VI stepping dynamics
NASA Astrophysics Data System (ADS)
Tehver, Riina
Myosin VI is a molecular motor that transports intracellular cargo and also acts as an anchor. The motor has been measured to have unusually large step size variation: it has been reported to make both long forward and short inchworm-like forward steps, as well as to step backwards. We have been developing a model that incorporates this diverse stepping behavior in a consistent framework. Our model allows us to predict the dynamics of the motor under different conditions and investigate the evolutionary advantages of the large step size variation.
Engineering multi-stage nanovectors for controlled degradation and tunable release kinetics
Martinez, Jonathan O.; Chiappini, Ciro; Ziemys, Arturas; Faust, Ari M.; Kojic, Milos; Liu, Xuewu; Ferrari, Mauro; Tasciotti, Ennio
2013-01-01
Nanovectors hold substantial promise in abating the off-target effects of therapeutics by providing a means to selectively accumulate payloads at the target lesion, resulting in an increase in the therapeutic index. A sophisticated understanding of the factors that govern the degradation and release dynamics of these nanovectors is imperative to achieve these ambitious goals. In this work, we elucidate the relationship that exists between variations in pore size and the impact on the degradation, loading, and release of multistage nanovectors. Larger pored vectors displayed faster degradation and higher loading of nanoparticles, while exhibiting the slowest release rate. The degradation of these particles was characterized to occur in a multi-step progression where they initially decreased in size leaving the porous core isolated, while the pores gradually increased in size. Empirical loading and release studies of nanoparticles along with diffusion modeling revealed that this prolonged release was modulated by the penetration within the porous core of the vectors regulated by their pore size. PMID:23911070
Microstructural Characterization and Modeling of SLM Superalloy 718
NASA Technical Reports Server (NTRS)
Smith, Tim M.; Sudbrack, Chantal K.; Bonacuse, Pete; Rogers, Richard
2017-01-01
Superalloy 718 is an excellent candidate for selective laser melting (SLM) fabrication due to a combination of excellent mechanical properties and workability. Predicting and validating the microstructure of SLM-fabricated Superalloy 718 after potential post heat-treatment paths is an important step towards producing components comparable to those made using conventional methods. At present, obtaining accurate volume fraction and size measurements of gamma-double-prime, gamma-prime and delta precipitates has been challenging due to their size, low volume fractions, and similar chemistries. A technique combining high-resolution, distortion-corrected SEM imaging with x-ray energy dispersive spectroscopy has been developed to accurately and independently measure the size and volume fractions of the three precipitates. These results were further validated using x-ray diffraction and phase extraction methods and compared to the precipitation kinetics predicted by PANDAT and JMatPro. Discrepancies are discussed in context of materials properties, model assumptions, sampling, and experimental errors.
The complete "how to" guide for selecting a disease management vendor.
Linden, Ariel; Roberts, Nancy; Keck, Kevin
2003-01-01
Decision-makers in health plans, large medical groups, and self-insured employers face many challenges in selecting and implementing disease management programs. One strategy is the "buy" approach, utilizing one or more of the many vendors to provide disease management services for the purchasing organization. As a relatively new field, the disease management vendor landscape is continually changing, uncovering the many uncertainties about demonstrating outcomes, corporate stability, or successful business models. Given the large investment an organization may make in each disease management program (many cost 1 million dollars or more in annual fees for a moderately sized population), careful consideration must be given in selecting a disease management partner. This paper describes, in detail, the specific steps necessary and the issues to consider in achieving a successful contract with a vendor for full-service disease management.
Gill, Arran M; Hinde, Christopher S; Leary, Rowan K; Potter, Matthew E; Jouve, Andrea; Wells, Peter P; Midgley, Paul A; Thomas, John M; Raja, Robert
2016-03-08
Highly active and selective aerobic oxidation of KA-oil to cyclohexanone (precursor for adipic acid and ɛ-caprolactam) has been achieved in high yields using continuous-flow chemistry by utilizing uncapped noble-metal (Au, Pt & Pd) nanoparticle catalysts. These are prepared using a one-step in situ methodology, within three-dimensional porous molecular architectures, to afford robust heterogeneous catalysts. Detailed spectroscopic characterization of the nature of the active sites at the molecular level, coupled with aberration-corrected scanning transmission electron microscopy, reveals that the synthetic methodology and associated activation procedures play a vital role in regulating the morphology, shape and size of the metal nanoparticles. These active centers have a profound influence on the activation of molecular oxygen for selective catalytic oxidations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Fada; Peeler, Christopher; Taleei, Reza
Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT 4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from GEANT 4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LET_t, and dose-averaged LET, LET_d) using GEANT 4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET_t and LET_d of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET_t but significant for LET_d. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT 4 can result in incorrect LET_d calculation results in the dose plateau region for small step limits. The erroneous LET_d results can be attributed to the algorithm to determine fluctuations in energy deposition along the tracking step in GEANT 4. The incorrect LET_d values lead to substantial differences in the calculated RBE. Conclusions: When the GEANT 4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LET_t in the dose plateau region and LET_d around the Bragg peak. For a large step limit, i.e., 500 μm, LET_d is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LET_d and LET_t becomes positive.
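For reference, the two LET averages compared in the study can be computed from per-step energy deposits as in the following Python sketch. Variable names and units are illustrative, and the snippet does not emulate GEANT 4's step limiting or energy-loss fluctuation sampling, which the authors identify as the source of the LET_d artefact at small step limits.

```python
import numpy as np

def average_lets(dE, dl):
    """Track- and dose-averaged LET from per-step energy deposits.

    dE : energy deposited in each tracking step (e.g. keV)
    dl : length of each tracking step (e.g. micrometers)
    Illustrative post-processing of Monte Carlo step data only.
    """
    dE = np.asarray(dE, float)
    dl = np.asarray(dl, float)
    let = dE / dl                        # LET of each individual step
    let_t = let.mean()                   # track-averaged: every step counts equally
    let_d = np.sum(dE * let) / dE.sum()  # dose-averaged: steps weighted by deposited energy
    return let_t, let_d
```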
Learn, R; Feigenbaum, E
2016-06-01
Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
Tank Investigation of a Powered Dynamic Model of a Large Long-Range Flying Boat
NASA Technical Reports Server (NTRS)
Parkinson, John B; Olson, Roland E; Harr, Marvin I
1947-01-01
Principles for designing the optimum hull for a large long-range flying boat to meet the requirements of seaworthiness, minimum drag, and ability to take off and land at all operational gross loads were incorporated in a 1/12-size powered dynamic model of a four-engine transport flying boat having a design gross load of 165,000 pounds. These design principles included the selection of a moderate beam loading, ample forebody length, sufficient depth of step, and close adherence to the form of a streamline body. The aerodynamic and hydrodynamic characteristics of the model were investigated in Langley tank no. 1. Tests were made to determine the minimum allowable depth of step for adequate landing stability, the suitability of the fore-and-aft location of the step, the take-off performance, the spray characteristics, and the effects of simple spray-control devices. The application of the design criterions used and test results should be useful in the preliminary design of similar large flying boats.
NASA Astrophysics Data System (ADS)
Or, D.; von Ruette, J.; Lehmann, P.
2017-12-01
Landslides and subsequent debris-flows initiated by rainfall represent a common natural hazard in mountainous regions. We integrated a landslide hydro-mechanical triggering model with a simple model for debris flow runout pathways and developed a graphical user interface (GUI) to represent these natural hazards at catchment scale at any location. The STEP-TRAMM GUI provides process-based estimates of the initiation locations and sizes of landslide patterns based on digital elevation models (SRTM) linked with high-resolution global soil maps (SoilGrids 250 m resolution) and satellite-based information on rainfall statistics for the selected region. In the preprocessing phase, the STEP-TRAMM model estimates soil depth distribution to supplement other soil information for delineating key hydrological and mechanical properties relevant to representing local soil failure. We will illustrate this publicly available GUI and modeling platform to simulate effects of deforestation on landslide hazards in several regions and compare model outcomes with satellite-based information.
Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples
NASA Astrophysics Data System (ADS)
Petit, Johan; Lallemant, Lucile
2017-05-01
In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the water concentration gradient induces cracks limiting the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to the drying step optimization, large-size spinel samples were obtained.
Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir
2016-05-01
Improving the efficiency of photovoltaic systems with new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and its easy implementation without equipment updating. Many MPPT methods with fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size incremental conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional IC method and the proposed method under different operating conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a flyback converter and a control circuit using a dsPIC30F4011. Both the simulation and the experimental design are presented in several aspects. A comparative study between the proposed variable step size and the fixed step size IC MPPT methods under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of MPP tracking speed and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
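The variable-step idea can be illustrated with a compact Python sketch of one incremental-conductance update in which the duty-cycle step is scaled by |dP/dV|. The scaling constant, step bounds, and the sign convention relating duty cycle to operating voltage are assumptions for illustration, not the tuning used on the authors' flyback/dsPIC30F4011 hardware.

```python
def ic_mppt_update(v, i, v_prev, i_prev, d_prev,
                   scaling=0.01, d_min=0.001, d_max=0.05, tol=1e-6):
    """One variable-step incremental-conductance MPPT update.

    Returns the new converter duty cycle. The step is proportional to
    |dP/dV|: large far from the maximum power point, small near it.
    The direction in which the duty cycle moves the PV voltage depends
    on the converter topology and is assumed here.
    """
    dv = v - v_prev
    di = i - i_prev
    dp = v * i - v_prev * i_prev
    # Variable step size, clamped between d_min and d_max
    step = abs(dp / dv) * scaling if abs(dv) > tol else d_min
    step = min(max(step, d_min), d_max)
    if abs(dv) <= tol:                       # voltage unchanged
        if abs(di) <= tol:
            return d_prev                    # operating point unchanged: hold
        return d_prev + step if di > 0 else d_prev - step
    g = di / dv + i / v                      # zero at the MPP (dI/dV = -I/V)
    if abs(g) <= tol:
        return d_prev                        # at the maximum power point
    return d_prev + step if g > 0 else d_prev - step
```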
Aging effect on step adjustments and stability control in visually perturbed gait initiation.
Sun, Ruopeng; Cui, Chuyi; Shea, John B
2017-10-01
Gait adaptability is essential for fall avoidance during locomotion. It requires the ability to rapidly inhibit original motor planning and to select and execute alternative motor commands, while also maintaining the stability of locomotion. This study investigated the aging effect on gait adaptability and dynamic stability control during a visually perturbed gait initiation task. A novel approach was used such that the anticipatory postural adjustments (APAs) during gait initiation were used to trigger the unpredictable relocation of a foot-size stepping target. Participants (10 young adults and 10 older adults) completed visually perturbed gait initiation in three adjustment timing conditions (early, intermediate, late; all extracted from the stereotypical APA pattern) and two adjustment direction conditions (medial, lateral). Stepping accuracy, foot rotation at landing, and Margin of Dynamic Stability (MDS) were analyzed and compared across test conditions and groups using a linear mixed model. Stepping accuracy decreased as a function of adjustment timing as well as stepping direction, with older subjects exhibiting a significantly greater undershoot in foot placement to late lateral stepping. Late adjustment also elicited a reaching-like movement (i.e. foot rotation prior to landing in order to step on the target), regardless of stepping direction. MDS measures in the medial-lateral and anterior-posterior directions revealed that both young and older adults exhibited reduced stability in the adjustment step and subsequent steps. However, young adults returned to stable gait faster than older adults. These findings could be useful for future studies screening deficits in gait adaptability and preventing falls. Copyright © 2017 Elsevier B.V. All rights reserved.
Code of Federal Regulations, 2011 CFR
2011-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...
Code of Federal Regulations, 2013 CFR
2013-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...
Code of Federal Regulations, 2012 CFR
2012-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...
Code of Federal Regulations, 2010 CFR
2010-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...
Effects of an aft facing step on the surface of a laminar flow glider wing
NASA Technical Reports Server (NTRS)
Sandlin, Doral R.; Saiki, Neal
1993-01-01
A motor glider was used to perform a flight test study on the effects of aft facing steps in a laminar boundary layer. This study focuses on two dimensional aft facing steps oriented spanwise to the flow. The size and location of the aft facing steps were varied in order to determine the critical size that will force premature transition. Transition over a step was found to be primarily a function of Reynolds number based on step height. Both of the step height Reynolds numbers for premature and full transition were determined. A hot film anemometry system was used to detect transition.
Kinesin Steps Do Not Alternate in Size
Fehr, Adrian N.; Asbury, Charles L.; Block, Steven M.
2008-01-01
Kinesin is a two-headed motor protein that transports cargo inside cells by moving stepwise on microtubules. Its exact trajectory along the microtubule is unknown: alternative pathway models predict either uniform 8-nm steps or alternating 7- and 9-nm steps. By analyzing single-molecule stepping traces from “limping” kinesin molecules, we were able to distinguish alternate fast- and slow-phase steps and thereby to calculate the step sizes associated with the motions of each of the two heads. We also compiled step distances from nonlimping kinesin molecules and compared these distributions against models predicting uniform or alternating step sizes. In both cases, we find that kinesin takes uniform 8-nm steps, a result that strongly constrains the allowed models. PMID:18083906
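A minimal version of the model comparison described above is sketched below in Python; the 8-nm and 7/9-nm hypotheses come from the abstract, while the Gaussian measurement noise and the equal mixture weights are assumptions made purely for illustration.

```python
import numpy as np
from scipy import stats

def log_likelihoods(steps, sigma=2.0):
    """Compare uniform-8-nm vs alternating-7/9-nm step models.

    `steps` are measured step sizes (nm) from stepping traces and `sigma`
    is an assumed measurement noise. Each observed step is modelled either
    as a noisy 8-nm step or as a 50/50 mixture of noisy 7- and 9-nm steps.
    """
    steps = np.asarray(steps, float)
    ll_uniform = stats.norm.logpdf(steps, loc=8.0, scale=sigma).sum()
    mix = 0.5 * (stats.norm.pdf(steps, 7.0, sigma) + stats.norm.pdf(steps, 9.0, sigma))
    ll_alternating = np.log(mix).sum()
    return ll_uniform, ll_alternating
```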
NASA Astrophysics Data System (ADS)
Pandey, Praveen K.; Sharma, Kriti; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.
2003-11-01
CdTe quantum dots embedded in a glass matrix are grown using a two-step annealing method. The results of the optical transmission characterization are analysed and compared with the results obtained from CdTe quantum dots grown using the conventional single-step annealing method. A theoretical model for the absorption spectra is used to quantitatively estimate the size dispersion in the two cases. In the present work, it is established that the quantum dots grown using the two-step annealing method have stronger quantum confinement, reduced size dispersion and higher volume ratio as compared to the single-step annealed samples.
Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2011-01-01
A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and sets up initial conditions and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
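The direct step-size selection idea can be illustrated on a toy problem. The Python sketch below integrates y' = -y with Taylor coefficients generated by a recurrence and a step chosen so that the last retained term stays below a tolerance; it is a didactic stand-in under those assumptions, not SNAP's implementation or its recurrence arithmetic for the full equations of motion.

```python
import numpy as np

def taylor_step(y, order=20, tol=1e-12):
    """One Taylor-series step for the test equation y' = -y.

    Coefficients follow from the recurrence c_{k+1} = -c_k / (k + 1).
    The step size is computed directly from the last retained coefficient
    so that its contribution stays below `tol`; no step is ever rejected
    and repeated.
    """
    c = np.empty(order + 1)
    c[0] = y
    for k in range(order):
        c[k + 1] = -c[k] / (k + 1)              # derivative recurrence
    h = (tol / abs(c[order])) ** (1.0 / order)  # direct step-size selection
    powers = h ** np.arange(order + 1)
    return np.dot(c, powers), h                 # series sum and the step taken

# Usage: integrate y' = -y from y(0) = 1 over [0, 10]
y, t = 1.0, 0.0
while t < 10.0:
    y, h = taylor_step(y)
    t += h
```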
Modeling ultrasound propagation through material of increasing geometrical complexity.
Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen
2018-06-01
Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
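A bare-bones version of the Wiener deconvolution step used to predict the UAR can be sketched as follows; the regularisation constant and the signal names are placeholders, not values or conventions taken from the study.

```python
import numpy as np

def wiener_deconvolve(received, reference, noise_to_signal=1e-2):
    """Estimate a sample's impulse response by Wiener deconvolution.

    `received` is the pulse measured through the sample, `reference` the
    pulse measured without it, and `noise_to_signal` a regularisation
    constant; all values are illustrative, not those of the NDT setup.
    """
    n = len(received)
    R = np.fft.rfft(received, n)
    S = np.fft.rfft(reference, n)
    wiener = np.conj(S) / (np.abs(S) ** 2 + noise_to_signal)  # Wiener filter
    return np.fft.irfft(R * wiener, n)  # estimated ultrasound acoustic response
```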
Unraveling the Water Impermeability Discrepancy in CVD-Grown Graphene.
Kwak, Jinsung; Kim, Se-Yang; Jo, Yongsu; Kim, Na Yeon; Kim, Sung Youb; Lee, Zonghoon; Kwon, Soon-Yong
2018-06-11
Graphene has recently attracted particular interest as a flexible barrier film preventing permeation of gases and moistures. However, it has been proved to be exceptionally challenging to develop large-scale graphene films with little oxygen and moisture permeation suitable for industrial uses, mainly due to the presence of nanometer-sized defects of obscure origins. Here, the origins of water permeable routes on graphene-coated Cu foils are investigated by observing the micrometer-sized rusts in the underlying Cu substrates, and a site-selective passivation method of the nanometer-sized routes is devised. It is revealed that nanometer-sized holes or cracks are primarily concentrated on graphene wrinkles rather than on other structural imperfections, resulting in severe degradation of its water impermeability. They are found to be predominantly induced by the delamination of graphene bound to Cu as a release of thermal stress during the cooling stage after graphene growth, especially at the intersection of the Cu step edges and wrinkles owing to their higher adhesion energy. Furthermore, the investigated routes are site-selectively passivated by an electron-beam-induced amorphous carbon layer, thus a substantial improvement in water impermeability is achieved. This approach is likely to be extended for offering novel barrier properties in flexible films based on graphene and on other atomic crystals. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
International Launch Vehicle Selection for Interplanetary Travel
NASA Technical Reports Server (NTRS)
Ferrone, Kristine; Nguyen, Lori T.
2010-01-01
In developing a mission strategy for interplanetary travel, the first step is to consider launch capabilities which provide the basis for fundamental parameters of the mission. This investigation focuses on the numerous launch vehicles of various characteristics available and in development internationally with respect to upmass, launch site, payload shroud size, fuel type, cost, and launch frequency. This presentation will describe launch vehicles available and in development worldwide, then carefully detail a selection process for choosing appropriate vehicles for interplanetary missions focusing on international collaboration, risk management, and minimization of cost. The vehicles that fit the established criteria will be discussed in detail with emphasis on the specifications and limitations related to interplanetary travel. The final menu of options will include recommendations for overall mission design and strategy.
StochKit2: software for discrete stochastic simulation of biochemical systems with events.
Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R
2011-09-01
StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. petzold@engineering.ucsb.edu.
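For readers unfamiliar with the underlying method, a minimal Gillespie direct-method SSA is sketched below in Python. It illustrates the algorithm family StochKit2 implements but none of StochKit2's optimizations, event handling, or tau-leaping step-size selection; the birth-death example rates are arbitrary.

```python
import numpy as np

def ssa(x0, stoich, propensities, t_end, rng=np.random.default_rng()):
    """Minimal Gillespie direct-method stochastic simulation algorithm.

    `x0` is the initial state vector, `stoich` a (reactions x species)
    array of state changes, and `propensities(x)` returns the reaction
    rates for state `x`.
    """
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0:
            break                               # no reaction can fire
        t += rng.exponential(1.0 / a0)          # time to next reaction
        j = rng.choice(len(a), p=a / a0)        # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Usage: simple birth-death process, 0 -> X (rate 5), X -> 0 (rate 0.1*X)
stoich = np.array([[+1], [-1]])
props = lambda x: np.array([5.0, 0.1 * x[0]])
times, states = ssa([0], stoich, props, t_end=100.0)
```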
Anand, Madhu; McLeod, M Chandler; Bell, Philip W; Roberts, Christopher B
2005-12-08
This paper presents an environmentally friendly, inexpensive, rapid, and efficient process for size-selective fractionation of polydisperse metal nanoparticle dispersions into multiple narrow size populations. The dispersibility of ligand-stabilized silver and gold nanoparticles is controlled by altering the ligand tails-solvent interaction (solvation) by the addition of carbon dioxide (CO2) gas as an antisolvent, thereby tailoring the bulk solvent strength. This is accomplished by adjusting the CO2 pressure over the liquid, resulting in a simple means to tune the nanoparticle precipitation by size. This study also details the influence of various factors on the size-separation process, such as the types of metal, ligand, and solvent, as well as the use of recursive fractionation and the time allowed for settling during each fractionation step. The pressure range required for the precipitation process is the same for both the silver and gold particles capped with dodecanethiol ligands. A change in ligand or solvent length has an effect on the interaction between the solvent and the ligand tails and therefore the pressure range required for precipitation. Stronger interactions between solvent and ligand tails require greater CO2 pressure to precipitate the particles. Temperature is another variable that impacts the dispersibility of the nanoparticles through changes in the density and the mole fraction of CO2 in the gas-expanded liquids. Recursive fractionation for a given system within a particular pressure range (solvent strength) further reduces the polydispersity of the fraction obtained within that pressure range. Specifically, this work utilizes the highly tunable solvent properties of organic/CO2 solvent mixtures to selectively size-separate dispersions of polydisperse nanoparticles (2 to 12 nm) into more monodisperse fractions (+/-2 nm). In addition to providing efficient separation of the particles, this process also allows all of the solvent and antisolvent to be recovered, thereby rendering it a green solvent process.
Sex and Caste-Specific Variation in Compound Eye Morphology of Five Honeybee Species
Streinzer, Martin; Brockmann, Axel; Nagaraja, Narayanappa; Spaethe, Johannes
2013-01-01
Ranging from dwarfs to giants, the species of honeybees show remarkable differences in body size that have placed evolutionary constraints on the size of sensory organs and the brain. Colonies comprise three adult phenotypes, drones and two female castes, the reproductive queen and sterile workers. The phenotypes differ with respect to tasks and thus selection pressures which additionally constrain the shape of sensory systems. In a first step to explore the variability and interaction between species size-limitations and sex and caste-specific selection pressures in sensory and neural structures in honeybees, we compared eye size, ommatidia number and distribution of facet lens diameters in drones, queens and workers of five species (Apis andreniformis, A. florea, A. dorsata, A. mellifera, A. cerana). In these species, male and female eyes show a consistent sex-specific organization with respect to eye size and regional specialization of facet diameters. Drones possess distinctly enlarged eyes with large dorsal facets. Aside from these general patterns, we found signs of unique adaptations in eyes of A. florea and A. dorsata drones. In both species, drone eyes are disproportionately enlarged. In A. dorsata the increased eye size results from enlarged facets, a likely adaptation to crepuscular mating flights. In contrast, the relative enlargement of A. florea drone eyes results from an increase in ommatidia number, suggesting strong selection for high spatial resolution. Comparison of eye morphology and published mating flight times indicates a correlation between overall light sensitivity and species-specific mating flight times. The correlation suggests an important role of ambient light intensities in the regulation of species-specific mating flight times and the evolution of the visual system. Our study further deepens insights into visual adaptations within the genus Apis and opens up future perspectives for research to better understand the timing mechanisms and sensory physiology of mating related signals. PMID:23460896
Real-time inverse planning for Gamma Knife radiosurgery.
Wu, Q Jackie; Chankong, Vira; Jitprapaikulsarn, Suradet; Wessels, Barry W; Einstein, Douglas B; Mathayomchan, Boonyanit; Kinsella, Timothy J
2003-11-01
The challenges of real-time Gamma Knife inverse planning are the large number of variables involved and the unknown search space a priori. With limited collimator sizes, shots have to be heavily overlapped to form a smooth prescription isodose line that conforms to the irregular target shape. Such overlaps greatly influence the total number of shots per plan, making pre-determination of the total number of shots impractical. However, this total number of shots usually defines the search space, a pre-requisite for most of the optimization methods. Since each shot only covers part of the target, a collection of shots in different locations and various collimator sizes selected makes up the global dose distribution that conforms to the target. Hence, planning or placing these shots is a combinatorial optimization process that is computationally expensive by nature. We have previously developed a theory of shot placement and optimization based on skeletonization. The real-time inverse planning process, reported in this paper, is an expansion and the clinical implementation of this theory. The complete planning process consists of two steps. The first step is to determine an optimal number of shots including locations and sizes and to assign initial collimator size to each of the shots. The second step is to fine-tune the weights using a linear-programming technique. The objective function is to minimize the total dose to the target boundary (i.e., maximize the dose conformity). Results of an ellipsoid test target and ten clinical cases are presented. The clinical cases are also compared with physician's manual plans. The target coverage is more than 99% for manual plans and 97% for all the inverse plans. The RTOG PITV conformity indices for the manual plans are between 1.16 and 3.46, compared to 1.36 to 2.4 for the inverse plans. All the inverse plans are generated in less than 2 min, making real-time inverse planning a reality.
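The weight fine-tuning step described above is a linear program. The following Python sketch shows one way such a problem could be posed with SciPy; the dose matrices are assumed to come from a separate dose model, and the constraint set (boundary-dose objective with a minimum target dose) is a simplified reading of the formulation rather than the planning system's actual implementation.

```python
import numpy as np
from scipy.optimize import linprog

def optimize_shot_weights(dose_target, dose_boundary, prescription=1.0):
    """Fine-tune shot weights with linear programming.

    `dose_target[i, j]` and `dose_boundary[k, j]` hold the dose delivered
    by shot j at unit weight to target point i and boundary point k
    (assumed to come from a separate dose model). The LP minimizes total
    boundary dose subject to every target point receiving at least the
    prescription dose, with nonnegative weights.
    """
    n_shots = dose_target.shape[1]
    c = dose_boundary.sum(axis=0)             # total boundary dose per unit weight
    res = linprog(c,
                  A_ub=-dose_target,          # -D w <= -p  <=>  D w >= p
                  b_ub=-prescription * np.ones(dose_target.shape[0]),
                  bounds=[(0, None)] * n_shots,
                  method="highs")
    return res.x                              # optimized shot weights
```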
El Bakkali, Ahmed; Haouane, Hicham; Moukhli, Abdelmajid; Costes, Evelyne; Van Damme, Patrick; Khadari, Bouchaib
2013-01-01
Phenotypic characterisation of germplasm collections is a decisive step towards association mapping analyses, but it is particularly expensive and tedious for woody perennial plant species. Characterisation could be more efficient if focused on a reasonably sized subset of accessions, or so-called core collection (CC), reflecting the geographic origin and variability of the germplasm. The questions that arise concern the sample size to use and genetic parameters that should be optimized in a core collection to make it suitable for association mapping. Here we investigated these questions in olive (Olea europaea L.), a perennial fruit species. By testing different sampling methods and sizes in a worldwide olive germplasm bank (OWGB Marrakech, Morocco) containing 502 unique genotypes characterized by nuclear and plastid loci, a two-step sampling method was proposed. The Shannon-Weaver diversity index was found to be the best criterion to be maximized in the first step using the Core Hunter program. A primary core collection of 50 entries (CC50) was defined that captured more than 80% of the diversity. This latter was subsequently used as a kernel with the Mstrat program to capture the remaining diversity. 200 core collections of 94 entries (CC94) were thus built for flexibility in the choice of varieties to be studied. Most entries of both core collections (CC50 and CC94) were revealed to be unrelated due to the low kinship coefficient, whereas a genetic structure spanning the eastern and western/central Mediterranean regions was noted. Linkage disequilibrium was observed in CC94 which was mainly explained by a genetic structure effect as noted for OWGB Marrakech. Since they reflect the geographic origin and diversity of olive germplasm and are of reasonable size, both core collections will be of major interest to develop long-term association studies and thus enhance genomic selection in olive species. PMID:23667437
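To make the sampling criterion concrete, the sketch below computes a mean Shannon-Weaver index over loci and builds a core subset greedily. It only illustrates the criterion being maximized; it is not a reimplementation of Core Hunter or MSTRAT, and the genotype coding and the greedy search are assumptions.

```python
import numpy as np

def shannon_index(genotypes):
    """Mean Shannon-Weaver diversity over loci.

    `genotypes` is an (accessions x loci) array of allele codes, a
    simplified stand-in for multi-allelic marker data.
    """
    h = []
    for locus in genotypes.T:
        _, counts = np.unique(locus, return_counts=True)
        p = counts / counts.sum()
        h.append(-(p * np.log(p)).sum())
    return float(np.mean(h))

def greedy_core(genotypes, size):
    """Greedy core-collection builder maximizing the Shannon-Weaver index."""
    selected = [0]                              # arbitrary starting accession
    remaining = set(range(1, len(genotypes)))
    while len(selected) < size:
        best = max(remaining,
                   key=lambda i: shannon_index(genotypes[selected + [i]]))
        selected.append(best)
        remaining.remove(best)
    return selected
```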
Guo, X; Christensen, O F; Ostersen, T; Wang, Y; Lund, M S; Su, G
2015-02-01
A single-step method allows genetic evaluation using information of phenotypes, pedigree, and markers from genotyped and nongenotyped individuals simultaneously. This paper compared genomic predictions obtained from a single-step BLUP (SSBLUP) method, a genomic BLUP (GBLUP) method, a selection index blending (SELIND) method, and a traditional pedigree-based method (BLUP) for total number of piglets born (TNB), litter size at d 5 after birth (LS5), and mortality rate before d 5 (Mort; including stillbirth) in Danish Landrace and Yorkshire pigs. Data sets of 778,095 litters from 309,362 Landrace sows and 472,001 litters from 190,760 Yorkshire sows were used for the analysis. There were 332,795 Landrace and 207,255 Yorkshire animals in the pedigree data, among which 3,445 Landrace pigs (1,366 boars and 2,079 sows) and 3,372 Yorkshire pigs (1,241 boars and 2,131 sows) were genotyped with the Illumina PorcineSNP60 BeadChip. The results showed that the 3 methods with marker information (SSBLUP, GBLUP, and SELIND) produced more accurate predictions for genotyped animals than the pedigree-based method. For genotyped animals, the average of reliabilities for all traits in both breeds using traditional BLUP was 0.091, which increased to 0.171 when using GBLUP and to 0.179 when using SELIND and further increased to 0.209 when using SSBLUP. Furthermore, the average reliability of EBV for nongenotyped animals was increased from 0.091 for traditional BLUP to 0.105 for the SSBLUP. The results indicate that the SSBLUP is a good approach to practical genomic prediction of litter size and piglet mortality in Danish Landrace and Yorkshire populations.
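The genomic ingredient that distinguishes the marker-based methods from pedigree BLUP is the genomic relationship matrix. A minimal VanRaden-type construction is sketched below; the 0/1/2 genotype coding is assumed, and the single-step blending of this matrix with the pedigree-based relationship matrix is not shown.

```python
import numpy as np

def vanraden_grm(M):
    """Genomic relationship matrix (VanRaden method 1) from SNP genotypes.

    `M` is an (individuals x markers) matrix coded 0/1/2. The resulting G
    matrix covers the genotyped animals only; combining it with pedigree
    information, as in single-step BLUP, is a separate step.
    """
    p = M.mean(axis=0) / 2.0                        # allele frequencies
    Z = M - 2.0 * p                                 # centred genotypes
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))  # VanRaden scaling
```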
Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Sun, Yuchun; Wang, Yong
2015-02-01
The purpose of this study was to establish a depth-control method in enamel-cavity ablation by optimizing the timing of the focal-plane-normal stepping and the single-step size of a three axis, numerically controlled picosecond laser. Although it has been proposed that picosecond lasers may be used to ablate dental hard tissue, the viability of such a depth-control method in enamel-cavity ablation remains uncertain. Forty-two enamel slices with approximately level surfaces were prepared and subjected to two-dimensional ablation by a picosecond laser. The additive-pulse layer, n, was set to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70. A three-dimensional microscope was then used to measure the ablation depth, d, to obtain a quantitative function relating n and d. Six enamel slices were then subjected to three dimensional ablation to produce 10 cavities, respectively, with additive-pulse layer and single-step size set to corresponding values. The difference between the theoretical and measured values was calculated for both the cavity depth and the ablation depth of a single step. These were used to determine minimum-difference values for both the additive-pulse layer (n) and single-step size (d). When the additive-pulse layer and the single-step size were set 5 and 45, respectively, the depth error had a minimum of 2.25 μm, and 450 μm deep enamel cavities were produced. When performing three-dimensional ablating of enamel with a picosecond laser, adjusting the timing of the focal-plane-normal stepping and the single-step size allows for the control of ablation-depth error to the order of micrometers.
Critical motor number for fractional steps of cytoskeletal filaments in gliding assays.
Li, Xin; Lipowsky, Reinhard; Kierfeld, Jan
2012-01-01
In gliding assays, filaments are pulled by molecular motors that are immobilized on a solid surface. By varying the motor density on the surface, one can control the number N of motors that pull simultaneously on a single filament. Here, such gliding assays are studied theoretically using Brownian (or Langevin) dynamics simulations and taking the local force balance between motors and filaments as well as the force-dependent velocity of the motors into account. We focus on the filament stepping dynamics and investigate how single motor properties such as stalk elasticity and step size determine the presence or absence of fractional steps of the filaments. We show that each gliding assay can be characterized by a critical motor number, N(c). Because of thermal fluctuations, fractional filament steps are only detectable as long as N < N(c). The corresponding fractional filament step size is l/N where l is the step size of a single motor. We first apply our computational approach to microtubules pulled by kinesin-1 motors. For elastic motor stalks that behave as linear springs with a zero rest length, the critical motor number is found to be N(c) = 4, and the corresponding distributions of the filament step sizes are in good agreement with the available experimental data. In general, the critical motor number N(c) depends on the elastic stalk properties and is reduced to N(c) = 3 for linear springs with a nonzero rest length. Furthermore, N(c) is shown to depend quadratically on the motor step size l. Therefore, gliding assays consisting of actin filaments and myosin-V are predicted to exhibit fractional filament steps up to motor number N = 31. Finally, we show that fractional filament steps are also detectable for a fixed average motor number as determined by the surface density (or coverage) of the motors on the substrate surface.
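The central relation reduces to a one-liner, illustrated below with the kinesin-1 values quoted in the abstract (l = 8 nm, N(c) = 4); the helper function itself is only illustrative.

```python
def fractional_step(l, n_motors, n_critical):
    """Fractional filament step size in a gliding assay.

    Returns l / n_motors when fractional steps are predicted to be
    resolvable (n_motors < n_critical) and None otherwise. Values for
    l (nm) and n_critical are taken from the abstract; everything else
    is illustrative.
    """
    return l / n_motors if n_motors < n_critical else None

# Kinesin-1 on microtubules: l = 8 nm, critical motor number N(c) = 4
print(fractional_step(8.0, 2, 4))   # 4.0 nm fractional steps expected
print(fractional_step(8.0, 5, 4))   # None: thermal fluctuations wash fractional steps out
```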
Microstructure of room temperature ionic liquids at stepped graphite electrodes
Feng, Guang; Li, Song; Zhao, Wei; ...
2015-07-14
Molecular dynamics simulations of room temperature ionic liquid (RTIL) [emim][TFSI] at stepped graphite electrodes were performed to investigate the influence of the thickness of the electrode surface step on the microstructure of interfacial RTILs. A strong correlation was observed between the interfacial RTIL structure and the step thickness in electrode surface as well as the ion size. Specifically, when the step thickness is commensurate with ion size, the interfacial layering of cation/anion is more evident; whereas, the layering tends to be less defined when the step thickness is close to the half of ion size. Furthermore, two-dimensional microstructure of ion layers exhibits different patterns and alignments of counter-ion/co-ion lattice at neutral and charged electrodes. As the cation/anion layering could impose considerable effects on ion diffusion, the detailed information of interfacial RTILs at stepped graphite presented here would help to understand the molecular mechanism of RTIL-electrode interfaces in supercapacitors.
Firefighter Hand Anthropometry and Structural Glove Sizing: A New Perspective.
Hsiao, Hongwei; Whitestone, Jennifer; Kau, Tsui-Ying; Hildreth, Brooke
2015-12-01
We evaluated the current use and fit of structural firefighting gloves and developed an improved sizing scheme that better accommodates the U.S. firefighter population. Among surveys, 24% to 30% of men and 31% to 62% of women reported experiencing problems with the fit or bulkiness of their structural firefighting gloves. An age-, race/ethnicity-, and gender-stratified sample of 863 male and 88 female firefighters across the United States participated in the study. Fourteen hand dimensions relevant to glove design were measured. A cluster analysis of the hand dimensions was performed to explore options for an improved sizing scheme. The current national standard structural firefighting glove-sizing scheme underrepresents the range of firefighter hand sizes and shape variation. In addition, there is a significant mismatch between the existing glove sizing specifications and hand characteristics such as hand dimensions and user selection of glove size. An improved glove-sizing plan based on clusters of overall hand size and hand/finger breadth-to-length contrast has been developed. This study presents the most up-to-date firefighter hand anthropometry and a new perspective on glove accommodation. The new seven-size system contains narrower variations (standard deviations) for almost all dimensions for each glove size than the current sizing practices. The proposed science-based sizing plan for structural firefighting gloves provides a step-forward perspective (i.e., including two sizes based on women's hand models and two wide-palm sizes for men) for glove manufacturers to advance firefighter hand protection. © 2015, Human Factors and Ergonomics Society.
A discourse on sensitivity analysis for discretely-modeled structures
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Haftka, Raphael T.
1991-01-01
A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
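A minimal numerical illustration of the finite-difference step-size selection issue mentioned above: a central-difference sensitivity is computed over a range of step sizes, showing that a large step incurs truncation error while a very small step amplifies round-off error. The test function, evaluation point, and step values are assumptions for illustration only, not taken from the reviewed methods.

```python
import numpy as np

def central_difference(f, x, h):
    """Central-difference estimate of df/dx at x with step size h."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Hypothetical response function standing in for a structural response quantity.
f = np.sin
x0 = 1.0
exact = np.cos(x0)

# Scan step sizes over several orders of magnitude to expose the trade-off.
for h in [1e-1, 1e-3, 1e-5, 1e-7, 1e-9, 1e-11]:
    approx = central_difference(f, x0, h)
    print(f"h = {h:8.1e}   derivative = {approx:.10f}   error = {abs(approx - exact):.2e}")
```

The error is smallest at an intermediate step size, which is why step-size selection is treated as its own topic in finite-difference sensitivity analysis.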
Ivezic, Nenad; Potok, Thomas E.
2003-09-30
A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
High flow rate nozzle system with production of uniform size droplets
Stockel, I.H.
1990-10-16
Method steps for production of substantially uniform size droplets from a flow of liquid include forming the flow of liquid, periodically modulating the momentum of the flow of liquid in the flow direction at controlled frequency, generating a cross flow direction component of momentum and modulation of the cross flow momentum of liquid at substantially the same frequency and phase as the modulation of flow direction momentum, and spraying the so formed modulated flow through a first nozzle outlet to form a desired spray configuration. A second modulated flow through a second nozzle outlet is formed according to the same steps, and the first and second modulated flows impinge upon each other generating a liquid sheet. Nozzle apparatus for modulating each flow includes rotating valving plates interposed in the annular flow of liquid. The plates are formed with radial slots. Rotation of the rotating plates is separably controlled at differential angular velocities for a selected modulating frequency to achieve the target droplet size and production rate for a given flow. The counter rotating plates are spaced to achieve a desired amplitude of modulation in the flow direction, and the angular velocity of the downstream rotating plate is controlled to achieve the desired amplitude of modulation of momentum in the cross flow direction. Amplitude of modulation is set according to liquid viscosity. 5 figs.
NASA Astrophysics Data System (ADS)
Wang, Xingrui; Zhao, Yang; Liu, Jie; Chen, Jie; Li, Tongbao; Cheng, Xinbin
2016-09-01
One-dimensional multilayer gratings were prepared in four steps. A periodic Si/SiO2 multilayer was first deposited on a Si substrate using a magnetron sputtering coating process. The multilayer was then bonded and split into small pieces by diamond wire cutting. The side-wall of the cut sample was subsequently ground and polished until the surface roughness was less than 1 nm. Finally, the SiO2 layers were selectively etched using hydrofluoric acid to form the grating structure. In the above steps, special attention was given to optimizing the etching processes to achieve a uniform and smooth grating pattern. Transmission electron microscopy (TEM) was used to characterize the multilayer gratings. The pitch size of the grating was evaluated by an offline image analysis algorithm, and the optimized processes are discussed.
Method for removing metal ions from solution with titanate sorbents
Lundquist, Susan H.; White, Lloyd R.
1999-01-01
A method for removing metal ions from solution comprises the steps of providing titanate particles by spray-drying a solution or slurry comprising sorbent titanates having a particle size up to 20 micrometers, optionally in the presence of polymer free of cellulose functionality as binder, said sorbent being active towards heavy metals from Periodic Table (CAS version) Groups IA, IIA, IB, IIB, IIIB, and VIII, to provide monodisperse, substantially spherical particles in a yield of at least 70 percent of theoretical yield and having a particle size distribution in the range of 1 to 500 micrometers. The particles can be used free flowing in columns or beds, or entrapped in a nonwoven, fibrous web or matrix or a cast porous membrane, to selectively remove metal ions from aqueous or organic liquid.
Manikandan, A.; Biplab, Sarkar; David, Perianayagam A.; Holla, R.; Vivek, T. R.; Sujatha, N.
2011-01-01
For high dose rate (HDR) brachytherapy, independent treatment verification is needed to ensure that the treatment is performed as per prescription. This study demonstrates dosimetric quality assurance of HDR brachytherapy using a commercially available two-dimensional ion chamber array called IMatriXX, which has a detector separation of 0.7619 cm. The reference isodose length, step size, and source dwell positional accuracy were verified. A total of 24 dwell positions were verified for positional accuracy, giving a total error (systematic and random) of –0.45 mm, with a standard deviation of 1.01 mm and a maximum error of 1.8 mm. Using a step size of 5 mm, the reference isodose length (the length of the 100% isodose line) was verified for single and multiple catheters with the same and different source loadings. An error ≤1 mm was measured in 57% of the tests analyzed. Step size verification for 2, 3, 4, and 5 cm was performed, and 70% of the step size errors were below 1 mm, with a maximum of 1.2 mm. Step sizes ≤1 cm could not be verified by the IMatriXX, as it could not resolve the peaks in the dose profile. PMID:21897562
Large forging manufacturing process
Thamboo, Samuel V.; Yang, Ling
2002-01-01
A process for forging large components of Alloy 718 material so that the components do not exhibit abnormal grain growth includes the steps of: a) providing a billet with an average grain size between ASTM 0 and ASTM 3; b) heating the billet to a temperature of between 1750.degree. F. and 1800.degree. F.; c) upsetting the billet to obtain a component part with a minimum strain of 0.125 in at least selected areas of the part; d) reheating the component part to a temperature between 1750.degree. F. and 1800.degree. F.; e) upsetting the component part to a final configuration such that said selected areas receive no strains between 0.01 and 0.125; f) solution treating the component part at a temperature of between 1725.degree. F. and 1750.degree. F.; and g) aging the component part over predetermined times at different temperatures. A modified process achieves abnormal grain growth in selected areas of a component where desirable.
Selectively manipulable acoustic-powered microswimmers
Ahmed, Daniel; Lu, Mengqian; Nourhani, Amir; Lammert, Paul E.; Stratton, Zak; Muddana, Hari S.; Crespi, Vincent H.; Huang, Tony Jun
2015-01-01
Selective actuation of a single microswimmer from within a diverse group would be a first step toward collaborative guided action by a group of swimmers. Here we describe a new class of microswimmer that accomplishes this goal. Our swimmer design overcomes the commonly-held design paradigm that microswimmers must use non-reciprocal motion to achieve propulsion; instead, the swimmer is propelled by oscillatory motion of an air bubble trapped within the swimmer's polymer body. This oscillatory motion is driven by the application of a low-power acoustic field, which is biocompatible with biological samples and with the ambient liquid. This acoustically-powered microswimmer accomplishes controllable and rapid translational and rotational motion, even in highly viscous liquids (with viscosity 6,000 times higher than that of water). And by using a group of swimmers each with a unique bubble size (and resulting unique resonance frequencies), selective actuation of a single swimmer from among the group can be readily achieved. PMID:25993314
Zhang, Patrick; Liang, Haijun; Jin, Zhen; ...
2017-11-01
We report phosphate beneficiation in Florida generates more than one tonne of phosphatic clay, or slime, per tonne of phosphate rock produced. Since the start of the practice of large-scale washing and desliming for phosphate beneficiation, more than 2 Gt of slime has accumulated, containing approximately 600 Mt of phosphate rock, 600 kt of rare earth elements (REEs) and 80 million kilograms of uranium. The recovery of these valuable elements from the phosphatic clay is one of the most challenging endeavors in mineral processing, because the clay is extremely dilute, with an average solids concentration of 3 percent, and fine in size, with more than 50 percent having particle size smaller than 2 μm, and it contains nearly 50 percent clay minerals as well as large amounts of magnesium, iron and aluminum. With industry support and under funding from the Critical Materials Institute, the Florida Industrial and Phosphate Research Institute in conjunction with the Oak Ridge National Laboratory undertook the task to recover phosphorus, rare earths and uranium from Florida phosphatic clay. This paper presents the results from the preliminary testing of two approaches. The first approach involves three-stage cycloning using cyclones with diameters of 12.4 cm (5 in.), 5.08 cm (2 in.) and 2.54 cm (1 in.), respectively, to remove clay minerals followed by flotation and leaching. The second approach is a two-step leaching process. In the first step, selective leaching was conducted to remove magnesium, thus allowing the production of phosphoric acid suitable for the manufacture of diammonium phosphate (DAP) in the second leaching step. The results showed that multistage cycloning with small cyclones is necessary to remove clay minerals. Finally, selective leaching at about pH 3.2 using sulfuric acid was found to be effective for removing more than 80 percent of magnesium from the feed with minimal loss of phosphorus.
Linear micromechanical stepping drive for pinhole array positioning
NASA Astrophysics Data System (ADS)
Endrödy, Csaba; Mehner, Hannes; Grewe, Adrian; Hoffmann, Martin
2015-05-01
A compact linear micromechanical stepping drive for positioning a 7 × 5.5 mm2 optical pinhole array is presented. The system features a step size of 13.2 µm and a full displacement range of 200 µm. The electrostatic inch-worm stepping mechanism shows a compact design capable of positioning a payload 50% of its own weight. The stepping drive movement, step sizes and position accuracy are characterized. The actuated pinhole array is integrated in a confocal chromatic hyperspectral imaging system, where coverage of the object plane, and therefore the useful picture data, can be multiplied by 14 in contrast to a non-actuated array.
Blind beam-hardening correction from Poisson measurements
NASA Astrophysics Data System (ADS)
Gu, Renliang; Dogandžić, Aleksandar
2016-02-01
We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
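For readers unfamiliar with the step-size mechanism referred to above, the sketch below shows the generic backtracking idea behind proximal-gradient steps with a locally estimated Lipschitz constant, applied to a toy smooth-plus-nonnegativity problem. This is not the authors' implementation; the quadratic objective, the doubling factor, and the projection used as the proximal operator are all assumptions made only to illustrate the principle.

```python
import numpy as np

def backtracking_prox_gradient(grad, obj, prox, x0, L0=1.0, eta=2.0, iters=50):
    """Proximal-gradient iterations with backtracking estimation of a local
    Lipschitz constant L: the step 1/L is shrunk (L grown) until a quadratic
    upper bound on the smooth part holds at the candidate point."""
    x, L = x0, L0
    for _ in range(iters):
        g = grad(x)
        while True:
            z = prox(x - g / L)                      # candidate step of size 1/L
            dz = z - x
            # Sufficient-decrease test against the local quadratic model.
            if obj(z) <= obj(x) + g @ dz + 0.5 * L * dz @ dz:
                break
            L *= eta                                 # step too long; increase L
        x, L = z, max(L / eta, 1e-12)                # allow L to relax again
    return x

# Toy problem: min 0.5*||A x - b||^2 subject to x >= 0 (projection as prox).
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
b = rng.normal(size=30)
obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
prox = lambda v: np.maximum(v, 0.0)

x_hat = backtracking_prox_gradient(grad, obj, prox, np.zeros(10))
print("objective:", obj(x_hat))
```

Allowing the local constant to relax again after a successful step is what lets such schemes adapt to the varying curvature of a Poisson-type likelihood rather than using one conservative global step size.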
Ye, Rong; Zhao, Jie; Yuan, Bing; ...
2016-12-14
The Hayashi–Ito aldol reaction of methyl isocyanoacetate (MI) and benzaldehydes, a classic homogeneous Au(I)-catalyzed reaction, was studied with heterogenized homogeneous catalysts. Among dendrimer encapsulated nanoparticles (NPs) of Au, Pd, Rh, or Pt loaded in mesoporous supports and the homogeneous analogues, the Au NPs led to the highest yield and highest diastereoselectivity of products in toluene at room temperature. The Au catalyst was stable and was recycled for at least six runs without substantial deactivation. Moreover, larger pore sizes of the support and the use of a hydrophobic solvent led to a high selectivity for the trans diastereomer of the product. The activation energy is sensitive to neither the size of Au NPs nor the support. A linear Hammett plot was obtained with a positive slope, suggesting an increased electron density on the carbonyl carbon atom in the rate-limiting step. As a result, IR studies revealed a strong interaction between MI and the gold catalyst, supporting the proposed mechanism, in which the rate-limiting step involves an electrophilic attack of the aldehyde on the enolate formed from the deprotonated MI.
Development of automatic body condition scoring using a low-cost 3-dimensional Kinect camera.
Spoliansky, Roii; Edan, Yael; Parmet, Yisrael; Halachmi, Ilan
2016-09-01
Body condition scoring (BCS) is a farm-management tool for estimating dairy cows' energy reserves. Today, BCS is performed manually by experts. This paper presents a 3-dimensional algorithm that provides a topographical understanding of the cow's body to estimate BCS. An automatic BCS system consisting of a Kinect camera (Microsoft Corp., Redmond, WA) triggered by a passive infrared motion detector was designed and implemented. Image processing and regression algorithms were developed and included the following steps: (1) image restoration, the removal of noise; (2) object recognition and separation, identification and separation of the cows; (3) movie and image selection, selection of movies and frames that include the relevant data; (4) image rotation, alignment of the cow parallel to the x-axis; and (5) image cropping and normalization, removal of irrelevant data, setting the image size to 150×200 pixels, and normalizing image values. All steps were performed automatically, including image selection and classification. Fourteen individual features per cow, derived from the cows' topography, were automatically extracted from the movies and from the farm's herd-management records. These features appear to be measurable in a commercial farm. Manual BCS was performed by a trained expert and compared with the output of the training set. A regression model was developed, correlating the features with the manual BCS references. Data were acquired for 4 d, resulting in a database of 422 movies of 101 cows. Movies containing cows' back ends were automatically selected (389 movies). The data were divided into a training set of 81 cows and a test set of 20 cows; both sets included the identical full range of BCS classes. Accuracy tests gave a mean absolute error of 0.26, median absolute error of 0.19, and coefficient of determination of 0.75, with 100% correct classification within 1 step and 91% correct classification within a half step for BCS classes. Results indicated good repeatability, with all standard deviations under 0.33. The algorithm is independent of the background and requires 10 cows for training with approximately 30 movies of 4 s each. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
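The accuracy figures quoted above (mean and median absolute error, coefficient of determination, and correct classification within a half or whole BCS step) can all be computed from predicted versus manual scores with a few lines. The sketch below assumes hypothetical score arrays and a 0.25-point BCS class width; both are illustrative assumptions rather than the authors' data or code.

```python
import numpy as np

def bcs_evaluation(manual, predicted, step=0.25):
    """Summary metrics for automatic body condition scoring against manual
    reference scores. `step` is the assumed width of one BCS class."""
    err = np.abs(predicted - manual)
    ss_res = np.sum((predicted - manual) ** 2)
    ss_tot = np.sum((manual - manual.mean()) ** 2)
    return {
        "mean_abs_error": err.mean(),
        "median_abs_error": np.median(err),
        "r_squared": 1.0 - ss_res / ss_tot,
        "within_half_step": np.mean(err <= 0.5 * step),
        "within_one_step": np.mean(err <= step),
    }

# Hypothetical scores for 20 cows, for illustration only.
rng = np.random.default_rng(1)
manual = np.round(rng.uniform(2.0, 4.0, size=20) * 4) / 4      # 0.25-point classes
predicted = manual + rng.normal(0.0, 0.2, size=20)
print(bcs_evaluation(manual, predicted))
```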
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghobadi, Kimia; Ghaffari, Hamid R.; Aleman, Dionne M.
2013-09-15
Purpose: The purpose of this work is to advance the two-step approach for Gamma Knife® Perfexion™ (PFX) optimization to account for dose homogeneity and overlap between the planning target volume (PTV) and organs-at-risk (OARs). Methods: In the first step, a geometry-based algorithm is used to quickly select isocentre locations while explicitly accounting for PTV-OARs overlaps. In this approach, the PTV is divided into subvolumes based on the PTV-OARs overlaps and the distance of voxels to the overlaps. Only a few isocentres are selected in the overlap volume, and a higher number of isocentres are carefully selected among voxels that are immediately close to the overlap volume. In the second step, a convex optimization is solved to find the optimal combination of collimator sizes and their radiation duration for each isocentre location. Results: This two-step approach is tested on seven clinical cases (comprising 11 targets) for which the authors assess coverage, OARs dose, and homogeneity index and relate these parameters to the overlap fraction for each case. In terms of coverage, the mean V99 for the gross target volume (GTV) was 99.8% while the V95 for the PTV averaged at 94.6%, thus satisfying the clinical objectives of 99% for GTV and 95% for PTV, respectively. The mean relative dose to the brainstem was 87.7% of the prescription dose (with maximum 108%), while on average, 11.3% of the PTV overlapped with the brainstem. The mean beam-on time per fraction per dose was 8.6 min with a calibration dose rate of 3.5 Gy/min, and the computational time averaged at 205 min. Compared with previous work involving single-fraction radiosurgery, the resulting plans were more homogeneous, with an average homogeneity index of 1.18 compared to 1.47. Conclusions: PFX treatment plans with homogeneous dose distribution can be achieved by inverse planning using geometric isocentre selection and mathematical modeling and optimization techniques. The quality of the obtained treatment plans is clinically satisfactory while the homogeneity index is improved compared to conventional PFX plans.
Measuring perceived video quality of MPEG enhancement by people with impaired vision
Fullerton, Matthew; Woods, Russell L.; Vera-Diaz, Fuensanta A.; Peli, Eli
2007-01-01
We used a new method to measure the perceived quality of contrast-enhanced motion video. Patients with impaired vision (n = 24) and normally-sighted subjects (n = 6) adjusted the level of MPEG-based enhancement of 8 videos (4 minutes each) drawn from 4 categories. They selected the level of enhancement that provided the preferred view of the videos, using a reducing-step-size staircase procedure. Most patients made consistent selections of the preferred level of enhancement, indicating an appreciation of and a perceived benefit from the MPEG-based enhancement. The selections varied between patients and were correlated with letter contrast sensitivity, but the selections were not affected by training, experience or video category. We measured just noticeable differences (JNDs) directly for videos, and mapped the image manipulation (enhancement in our case) onto an approximately linear perceptual space. These tools and approaches will be of value in other evaluations of the image quality of motion video manipulations. PMID:18059909
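A reducing-step-size adjustment procedure of the kind described above can be sketched in a few lines: the level moves according to the observer's response and the step size is halved at each reversal. The simulated observer, starting level, step schedule, and stopping rule below are illustrative assumptions, not the parameters used in the study.

```python
import random

def reducing_staircase(prefers_higher, start=50.0, step=16.0, min_step=1.0, max_trials=60):
    """Simple adjustment staircase: the level moves up or down according to the
    observer's response, and the step size is halved at every reversal until it
    falls below `min_step`."""
    level, last_direction, history = start, None, []
    for _ in range(max_trials):
        direction = +1 if prefers_higher(level) else -1
        if last_direction is not None and direction != last_direction:
            step /= 2.0                      # reversal: reduce the step size
            if step < min_step:
                break
        level += direction * step
        level = max(0.0, min(100.0, level))  # keep the level on a 0-100 scale
        last_direction = direction
        history.append(level)
    return level, history

# Hypothetical observer who prefers enhancement levels near 35 (with response noise).
observer = lambda level: (level + random.gauss(0, 5)) < 35
final_level, history = reducing_staircase(observer)
print("selected enhancement level ~", round(final_level, 1))
```

The converged level is taken as the observer's preferred setting, which is the quantity compared across patients in the study.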
Selection of regularization parameter in total variation image restoration.
Liao, Haiyong; Li, Fang; Ng, Michael K
2009-11-01
We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic regularization parameter selection scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for testing different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 × 256 in approximately 20 s in the MATLAB computing environment.
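For context, the GCV principle that the paper adapts to TV restoration is easiest to see on a linear Tikhonov problem, where the GCV score has a closed form via the SVD. The sketch below evaluates GCV(λ) for a simple ridge-regularized deblurring problem; the blur model, noise level, and λ grid are assumptions, and this is not the authors' TV algorithm.

```python
import numpy as np

def gcv_score(s, Utb, lam, m):
    """GCV score for Tikhonov regularization, computed from the singular values s
    of the blur matrix and the projected data U^T b."""
    filt = s**2 / (s**2 + lam)                  # Tikhonov filter factors
    residual = np.sum(((1.0 - filt) * Utb) ** 2)
    trace_term = m - np.sum(filt)               # trace of (I - influence matrix)
    return residual / trace_term**2

# Toy 1D blurring problem with additive noise (all values are illustrative).
rng = np.random.default_rng(2)
n = 100
A = np.array([[np.exp(-0.5 * ((i - j) / 3.0) ** 2) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + 0.05 * rng.normal(size=n)

U, s, Vt = np.linalg.svd(A)
Utb = U.T @ b
lams = np.logspace(-8, 2, 60)
scores = [gcv_score(s, Utb, lam, n) for lam in lams]
print("GCV-selected lambda:", lams[int(np.argmin(scores))])
```

In the paper's iterative setting the same idea is applied per restoration step, so the regularization parameter is re-selected as the iterate changes rather than fixed once.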
2017-01-01
A triazine based disc shaped molecule with two hydrolyzable units, imine and ester groups, was polymerized via acyclic diene metathesis in the columnar hexagonal (Colhex) LC phase. Fabrication of a cationic nanoporous polymer (pore diameter ∼1.3 nm) lined with ammonium groups at the pore surface was achieved by hydrolysis of the imine linkage. Size selective aldehyde uptake by the cationic porous polymer was demonstrated. The anilinium groups in the pores were converted to azide as well as phenyl groups by further chemical treatment, leading to porous polymers with neutral functional groups in the pores. The pores were enlarged by further hydrolysis of the ester groups to create ∼2.6 nm pores lined with −COONa surface groups. The same pores could be obtained in a single step without first hydrolyzing the imine linkage. XRD studies demonstrated that the Colhex order of the monomer was preserved after polymerization as well as in both the nanoporous polymers. The porous anionic polymer lined with −COOH groups was further converted to the −COOLi, −COONa, −COOK, −COOCs, and −COONH4 salts. The porous polymer lined with −COONa groups selectively adsorbs a cationic dye, methylene blue, over an anionic dye. PMID:28416888
Statistical Modeling of Robotic Random Walks on Different Terrain
NASA Astrophysics Data System (ADS)
Naylor, Austin; Kinnaman, Laura
Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
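A minimal simulation of the two walk types mentioned above (uniform step size; a correlated walk draws each turning angle from a narrow distribution around the previous heading) and of their mean square displacement is sketched below. The step size, angular concentration, ensemble size, and walk length are illustrative assumptions, not the robot parameters from the study.

```python
import numpy as np

def walk(n_steps, step_size=1.0, kappa=None, rng=None):
    """2D walk with uniform step size. If `kappa` is None the heading is re-drawn
    uniformly each step (RW); otherwise each turning angle is drawn from a
    von Mises distribution about the previous heading (CRW)."""
    rng = rng or np.random.default_rng()
    heading = rng.uniform(0, 2 * np.pi)
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        if kappa is None:
            heading = rng.uniform(0, 2 * np.pi)
        else:
            heading += rng.vonmises(0.0, kappa)
        pos[i + 1] = pos[i] + step_size * np.array([np.cos(heading), np.sin(heading)])
    return pos

def mean_square_displacement(walks):
    """MSD versus step number, averaged over an ensemble of walks."""
    return np.mean([np.sum(p**2, axis=1) for p in walks], axis=0)

rng = np.random.default_rng(3)
rw = [walk(200, rng=rng) for _ in range(500)]
crw = [walk(200, kappa=4.0, rng=rng) for _ in range(500)]
print("final MSD, RW :", mean_square_displacement(rw)[-1])   # ~ n * step^2
print("final MSD, CRW:", mean_square_displacement(crw)[-1])  # larger, due to persistence
```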
NASA Astrophysics Data System (ADS)
Omar, M. A.; Parvataneni, R.; Zhou, Y.
2010-09-01
The proposed manuscript describes the implementation of a two-step processing procedure composed of self-referencing and Principal Component Thermography (PCT). The combined approach enables the processing of thermograms from transient (flash), steady (halogen) and selective (induction) thermal perturbations. Firstly, the research discusses the three basic processing schemes typically applied for thermography: mathematical transformation based processing, curve-fitting processing, and direct contrast based calculations. The proposed algorithm utilizes the self-referencing scheme to create a sub-sequence that contains the maximum contrast information and also computes the anomalies' depth values. The Principal Component Thermography then operates on the sub-sequence frames by re-arranging their data content (pixel values) spatially and temporally, thereby highlighting the data variance. The PCT is mainly used as a mathematical means to enhance the defects' contrast, thus enabling shape and size retrieval. The results show that the proposed combined scheme is effective in processing multiple-size defects in a sandwich steel structure in real time (<30 Hz) and with full spatial coverage, without the need for an a priori defect-free area.
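The spatial-temporal rearrangement at the heart of PCT amounts to reshaping the thermogram sequence into a pixels-by-frames matrix and taking its singular value decomposition; the leading components are then viewed as images. The sketch below shows that rearrangement on a synthetic sequence; the frame count, image size, standardization choice, and the synthetic defect are assumptions for illustration, not the authors' data or exact algorithm.

```python
import numpy as np

def pct(frames, n_components=3):
    """Principal-component decomposition of an (n_frames, H, W) thermogram stack:
    reshape to a (pixels x frames) matrix, standardize each frame, and return the
    leading spatial components as images."""
    n, h, w = frames.shape
    A = frames.reshape(n, h * w).T.astype(float)         # columns = frames
    A -= A.mean(axis=0)
    A /= A.std(axis=0) + 1e-12
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :n_components].T.reshape(n_components, h, w)

# Synthetic sequence: decaying background plus a slower-cooling defect region.
n, h, w = 40, 64, 64
t = np.arange(1, n + 1)[:, None, None]
frames = 1.0 / np.sqrt(t) * np.ones((n, h, w))
frames[:, 20:30, 20:30] += 0.3 / t                        # anomaly region
components = pct(frames)
print("component image stack shape:", components.shape)   # (3, 64, 64)
```

The defect shows up as a localized feature in one of the leading component images, which is the contrast-enhancement effect the abstract describes.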
Method for analyzing soil structure according to the size of structural elements
NASA Astrophysics Data System (ADS)
Wieland, Ralf; Rogasik, Helmut
2015-02-01
The soil structure in situ is the result of cropping history and soil development over time. It can be assessed by the size distribution of soil structural elements such as air-filled macro-pores, aggregates and stones, which are responsible for important water and solute transport processes, gas exchange, and the stability of the soil against compacting and shearing forces exerted by agricultural machinery. A method was developed to detect structural elements of the soil in selected horizontal slices of soil core samples with different soil structures in order for them to be implemented accordingly. In the second step, a fitting tool (Eureqa) based on artificial programming was used to find a general function to describe ordered sets of detected structural elements. It was shown that all the samples obey a hyperbolic function: Y(k) = A/(B + k), k ∈ {0, 1, 2, …}. This general behavior can be used to develop a classification method based on parameters {A, B}. An open source software program in Python was developed, which can be downloaded together with a selection of soil samples.
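Fitting the hyperbolic law reported above, Y(k) = A/(B + k), to an ordered set of detected structural-element sizes can be done with a standard nonlinear least-squares routine; the data values below are placeholders, not measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(k, A, B):
    """Y(k) = A / (B + k), the size-ordering law reported for the soil samples."""
    return A / (B + k)

# Placeholder ordered sizes of detected structural elements (largest first).
k = np.arange(0, 20)
y = 350.0 / (2.5 + k) + np.random.default_rng(4).normal(0, 2, size=k.size)

(A_hat, B_hat), _ = curve_fit(hyperbolic, k, y, p0=(100.0, 1.0))
print(f"A = {A_hat:.1f}, B = {B_hat:.2f}")   # the {A, B} pair used for classification
```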
Corl, Ammon; Davis, Alison R; Kuchta, Shawn R; Sinervo, Barry
2010-03-02
Polymorphism may play an important role in speciation because new species could originate from the distinctive morphs observed in polymorphic populations. However, much remains to be understood about the process by which morphs found new species. To detail the steps of this mode of speciation, we studied the geographic variation and evolutionary history of a throat color polymorphism that distinguishes the "rock-paper-scissors" mating strategies of the side-blotched lizard, Uta stansburiana. We found that the polymorphism is geographically widespread and has been maintained for millions of years. However, there are many populations with reduced numbers of throat color morphs. Phylogenetic reconstruction showed that the polymorphism is ancestral, but it has been independently lost eight times, often giving rise to morphologically distinct subspecies/species. Changes to the polymorphism likely involved selection because the allele for one particular male strategy, the "sneaker" morph, has been lost in all cases. Polymorphism loss was associated with accelerated evolution of male size, female size, and sexual dimorphism, which suggests that polymorphism loss can promote rapid divergence among populations and aid species formation.
Phansalkar, Rasika S; Nam, Joo-Won; Chen, Shao-Nong; McAlpine, James B; Leme, Ariene A; Aydin, Berdan; Bedran-Russo, Ana-Karina; Pauli, Guido F
2018-02-02
Proanthocyanidins (PACs) find wide applications for human use including food, cosmetics, dietary supplements, and pharmaceuticals. The chemical complexity associated with PACs has triggered the development of various chromatographic techniques, with countercurrent separation (CCS) gaining in popularity. This study applied the recently developed DESIGNER (Depletion and Enrichment of Select Ingredients Generating Normalized Extract Resources) approach for the selective enrichment of trimeric and tetrameric PACs using centrifugal partition chromatography (CPC). This CPC method aims at developing PAC-based biomaterials, particularly for their application in restoring and repairing dental hard tissue. A general separation scheme beginning with the depletion of polymeric PACs, followed by the removal of monomeric flavan-3-ols and a final enrichment step, produced PAC trimer- and tetramer-enriched fractions. A successful application of this separation scheme is demonstrated for four polyphenol-rich plant sources: grape seeds, pine bark, cinnamon bark, and cocoa seeds. Minor modifications to the generic DESIGNER CCS method were sufficient to accommodate the varying chemical complexities of the individual source materials. The step-wise enrichment of PAC trimers and tetramers was monitored using normal phase TLC and Diol-HPLC-UV analyses. CPC proved to be a reliable tool for the selective enrichment of medium-size oligomeric PACs (OPACs). This method plays a key role in the development of dental biomaterials considering its reliability and reproducibility, as well as its scale-up capabilities for possible larger-scale manufacturing. Copyright © 2017 Elsevier B.V. All rights reserved.
Halstead, S B; Marchette, N J; Diwan, A R; Palumbo, N E; Putvatana, R
1984-07-01
Uncloned dengue (DEN) 4 (H-241) which had been passaged 15, 30 and 50 times in primary dog kidney (PDK) cells were subjected to two successive terminal dilution procedures. In the first (3Cl), virus was diluted in 10-fold steps in 10 replicate tubes. An infected tube from a dilution row with three or fewer virus-infected tubes was selected for two further passages. In the second (TD3), virus was triple terminal diluted using 2-fold dilution steps and selecting one positive tube out of 10. Both procedures selected virus population which differed from antecedents. Plaque size of PDK 15 was medium, PDK 30, small and PDK 50, pin-point. PDK 19-3Cl were medium and 56-3Cl, 24-TD3, 35-TD3 and 61-TD3 were all small. All cloned virus replication was completely shut-off at 38.5 degrees C; PDK 15 and 30 continued to replicate at this temperature. Uncloned viruses showed a graduated decrease in monkey virulence with PDK passage; cloned viruses were either avirulent for monkeys (19-3Cl, 56-31Cl, 24-TD3 and 35-TD3) or produced revertant large plaque parental-type viremia (35-3Cl and 61-TD3). Those cloned viruses which exhibited temperature sensitivity, reduced monkey virulence and stability after monkey passage may be suitable as vaccine candidates for evaluation in human beings.
Dependence of Hurricane intensity and structures on vertical resolution and time-step size
NASA Astrophysics Data System (ADS)
Zhang, Da-Lin; Wang, Xiaoxue
2003-09-01
In view of the growing interest in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step sizes on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with a finest grid size of 6 km. It is shown that changing vertical resolution and time-step size has significant effects on hurricane intensity and inner-core cloud/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure and stronger three-dimensional winds, and more precipitation. Similar effects, but to a lesser extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on the hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable to model more realistically the intensity and inner-core structures and evolution of tropical storms as well as other convectively driven weather systems.
How Haptic Size Sensations Improve Distance Perception
Battaglia, Peter W.; Kersten, Daniel; Schrater, Paul R.
2011-01-01
Determining distances to objects is one of the most ubiquitous perceptual tasks in everyday life. Nevertheless, it is challenging because the information from a single image confounds object size and distance. Though our brains frequently judge distances accurately, the underlying computations employed by the brain are not well understood. Our work illuminates these computations by formulating a family of probabilistic models that encompass a variety of distinct hypotheses about distance and size perception. We compare these models' predictions to a set of human distance judgments in an interception experiment and use Bayesian analysis tools to quantitatively select the best hypothesis on the basis of its explanatory power and robustness over experimental data. The central question is whether, and how, human distance perception incorporates size cues to improve accuracy. Our conclusions are: 1) humans incorporate haptic object size sensations for distance perception, 2) the incorporation of haptic sensations is suboptimal given their reliability, 3) humans use environmentally accurate size and distance priors, 4) distance judgments are produced by perceptual “posterior sampling”. In addition, we compared our model's estimated sensory and motor noise parameters with previously reported measurements in the perceptual literature and found good correspondence between them. Taken together, these results represent a major step forward in establishing the computational underpinnings of human distance perception and the role of size information. PMID:21738457
Assembly, growth, and catalytic activity of gold nanoparticles in hollow carbon nanofibers.
La Torre, Alessandro; Giménez-López, Maria del Carmen; Fay, Michael W; Rance, Graham A; Solomonsz, William A; Chamberlain, Thomas W; Brown, Paul D; Khlobystov, Andrei N
2012-03-27
Graphitized carbon nanofibers (GNFs) act as efficient templates for the growth of gold nanoparticles (AuNPs) adsorbed on the interior (and exterior) of the tubular nanostructures. Encapsulated AuNPs are stabilized by interactions with the step-edges of the individual graphitic nanocones, of which GNFs are composed, and their size is limited to approximately 6 nm, while AuNPs adsorbed on the atomically flat graphitic surfaces of the GNF exterior continue their growth to 13 nm and beyond under the same heat treatment conditions. The corrugated structure of the GNF interior imposes a significant barrier for the migration of AuNPs, so that their growth mechanism is restricted to Ostwald ripening. Conversely, nanoparticles adsorbed on smooth GNF exterior surfaces are more likely to migrate and coalesce into larger nanoparticles, as revealed by in situ transmission electron microscopy imaging. The presence of alkyl thiol surfactant within the GNF channels changes the dynamics of the AuNP transformations, as surfactant molecules adsorbed on the surface of the AuNPs diminished the stabilization effect of the step-edges, thus allowing nanoparticles to grow until their diameters reach the internal diameter of the host nanofiber. Nanoparticles thermally evolved within the GNF channel exhibit alignment, perpendicular to the GNF axis due to interactions with the step-edges and parallel to the axis because of graphitic facets of the nanocones. Despite their small size, AuNPs in GNF possess high stability and remain unchanged at temperatures up to 300 °C in ambient atmosphere. Nanoparticles immobilized at the step-edges within GNF are shown to act as effective catalysts promoting the transformation of dimethylphenylsilane to bis(dimethylphenyl)disiloxane with a greater than 10-fold enhancement of selectivity as compared to free-standing or surface-adsorbed nanoparticles. © 2012 American Chemical Society
Stone, Orrin J; Biette, Kelly M; Murphy, Patrick J M
2014-01-01
Hydrophobic interaction chromatography (HIC) most commonly requires experimental determination (i.e., scouting) in order to select an optimal chromatographic medium for purifying a given target protein. Neither a two-step purification of untagged green fluorescent protein (GFP) from crude bacterial lysate using sequential HIC and size exclusion chromatography (SEC), nor HIC column scouting elution profiles of GFP, has been previously reported. Bacterial lysate expressing recombinant GFP was sequentially adsorbed to commercially available HIC columns containing butyl, octyl, and phenyl-based HIC ligands coupled to matrices of varying bead size. The lysate was fractionated using a linear ammonium phosphate salt gradient at constant pH. Collected HIC eluate fractions containing retained GFP were then pooled and further purified using high-resolution preparative SEC. Significant differences in presumptive GFP elution profiles were observed using in-line absorption spectrophotometry (A395) and post-run fluorimetry. SDS-PAGE and western blot demonstrated that fluorometric detection was the more accurate indicator of GFP elution in both HIC and SEC purification steps. Comparison of composite HIC column scouting data indicated that a phenyl ligand coupled to a 34 µm matrix produced the highest degree of target protein capture and separation. Conducting two-step protein purification using the preferred HIC medium followed by SEC resulted in a final, concentrated product with >98% protein purity. In-line absorbance spectrophotometry was not as precise an indicator of GFP elution as post-run fluorimetry. These findings demonstrate the importance of utilizing a combination of detection methods when evaluating purification strategies. GFP is a well-characterized model protein, used heavily in educational settings and by researchers with limited protein purification experience, and the data and strategies presented here may aid in the development of other HIC-compatible protein purification schemes.
Gregorini, P; Waghorn, G C; Kuhn-Sherlock, B; Romera, A J; Macdonald, K A
2015-09-01
The aim of this study was to investigate and assess differences in the grazing pattern of 2 groups of mature dairy cows selected as calves for divergent residual feed intake (RFI). Sixteen Holstein-Friesian cows (471±31kg of body weight, 100 d in milk), comprising 8 cows selected as calves (6-8 mo old) for low (most efficient: CSCLowRFI) and 8 cows selected as calves for high (least efficient: CSCHighRFI) RFI, were used for the purpose of this study. Cows (n=16) were managed as a single group, and strip-grazed (24-h pasture allocation at 0800h) a perennial ryegrass sward for 31 d, with measurements taken during the last 21 d. All cows were equipped with motion sensors for the duration of the study, and jaw movements were measured for three 24-h periods during 3 random nonconsecutive days. Measurements included number of steps and jaw movements during grazing and rumination, plus fecal particle size distribution. Jaw movements were analyzed to identify bites, mastication (oral processing of ingesta) during grazing bouts, chewing during rumination, and to calculate grazing and rumination times for 24-h periods. Grazing and walking behavior were also analyzed in relation to the first meal of the day after the new pasture was allocated. Measured variables were subjected to multivariate analysis. Cows selected for low RFI as calves appeared to (a) prioritize grazing and rumination over idling; (b) take fewer steps, but with a higher proportion of grazing steps at the expense of nongrazing steps; and (c) increase the duration of the first meal and commenced their second meal earlier than CSCHighRFI. The CSCLowRFI had fewer jaw movements during eating (39,820 vs. 45,118 for CSCLowRFI and CSCHighRFI, respectively), more intense rumination (i.e., 5 more chews per bolus), and their feces had 30% less large particles than CSCHighRFI. These results suggest that CSCLowRFI concentrate their grazing activity to the time when fresh pasture is allocated, and graze more efficiently by walking and masticating less, hence they are more efficient grazers than CSCHighRFI. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Balabin, Roman M; Smirnov, Sergey V
2011-04-29
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm(-1)) of a sample is typically measured by modern instruments at a few hundred of wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic techniques application, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopies, can be greatly improved by an appropriate feature selection choice. Copyright © 2011 Elsevier B.V. All rights reserved.
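Of the sixteen methods compared above, the simplest family is stepwise selection of wavelengths for a linear model (the MLR-step approach). The sketch below shows a generic greedy forward-selection loop with a cross-validated least-squares criterion; the synthetic spectra, candidate pool size, and stopping rule are assumptions, not the paper's implementation or data.

```python
import numpy as np

def cv_rmse(X, y, folds=5):
    """Cross-validated RMSE of an ordinary least-squares model on the columns of X."""
    n = len(y)
    idx = np.arange(n)
    fold_mse = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        fold_mse.append(np.mean((X[test] @ coef - y[test]) ** 2))
    return np.sqrt(np.mean(fold_mse))

def forward_selection(X, y, max_vars=10):
    """Greedy forward selection: repeatedly add the wavelength that most reduces
    cross-validated RMSE, stopping when no candidate improves it."""
    selected, best_rmse = [], np.inf
    while len(selected) < max_vars:
        scores = {j: cv_rmse(X[:, selected + [j]], y)
                  for j in range(X.shape[1]) if j not in selected}
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best_rmse:
            break
        selected.append(j_best)
        best_rmse = scores[j_best]
    return selected, best_rmse

# Synthetic "spectra": 80 samples x 200 wavelengths; the property depends on 3 bands.
rng = np.random.default_rng(5)
X = rng.normal(size=(80, 200))
y = 2 * X[:, 10] - 1.5 * X[:, 57] + 0.8 * X[:, 120] + 0.1 * rng.normal(size=80)
vars_, rmse = forward_selection(X, y)
print("selected wavelength indices:", sorted(vars_), " CV-RMSE:", round(rmse, 3))
```

Interval-based methods such as iPLS follow the same logic but score contiguous wavelength windows instead of individual variables.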
Design of a side coupled standing wave accelerating tube for NSTRI e-Linac
NASA Astrophysics Data System (ADS)
Zarei, S.; Abbasi Davani, F.; Lamehi Rachti, M.; Ghasemi, F.
2017-09-01
The design and construction of a 6 MeV electron linear accelerator (e-Linac) was defined in the Institute of Nuclear Science and Technology (NSTRI) for cargo inspection and medical applications. For this accelerator, a side coupled standing wave tube resonant at a frequency of 2998.5 MHz in π/2 mode was selected. In this article, the authors provide a step-by-step explanation of the design process for this tube. The design and simulation of the accelerating and coupling cavities were carried out in five steps: (1) separate design of the accelerating and coupling cavities, (2) design of the coupling aperture between the cavities, (3) design of the entire structure for resonance at the nominal frequency, (4) design of the buncher, and (5) design of the power coupling port. At all design stages, in addition to finding the dimensions of the cavity, the impact of construction tolerances and simulation errors on the electromagnetic parameters was investigated. The values obtained for the coupling coefficient, coupling constant, quality factor and capture efficiency are 2.11, 0.011, 16203 and 36%, respectively. The results of the beam dynamics study of the simulated tube in ASTRA have yielded a value of 5.14 π-mm-mrad for the horizontal emittance, 5.06 π-mm-mrad for the vertical emittance, 1.17 mm for the horizontal beam size, 1.16 mm for the vertical beam size and 1090 keV for the energy spread of the output beam.
Evaluation of TOPLATS on three Mediterranean catchments
NASA Astrophysics Data System (ADS)
Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel
2016-08-01
Physically based hydrological models are complex tools that provide a complete description of the different processes occurring on a catchment. The TOPMODEL-based Land-Atmosphere Transfer Scheme (TOPLATS) simulates water and energy balances at different time steps, in both lumped and distributed modes. In order to gain insight on the behavior of TOPLATS and its applicability in different conditions a detailed evaluation needs to be carried out. This study aimed to develop a complete evaluation of TOPLATS including: (1) a detailed review of previous research works using this model; (2) a sensitivity analysis (SA) of the model with two contrasted methods (Morris and Sobol) of different complexity; (3) a 4-step calibration strategy based on a multi-start Powell optimization algorithm; and (4) an analysis of the influence of simulation time step (hourly vs. daily). The model was applied on three catchments of varying size (La Tejeria, Cidacos and Arga), located in Navarre (Northern Spain), and characterized by different levels of Mediterranean climate influence. Both Morris and Sobol methods showed very similar results that identified Brooks-Corey Pore Size distribution Index (B), Bubbling pressure (ψc) and Hydraulic conductivity decay (f) as the three overall most influential parameters in TOPLATS. After calibration and validation, adequate streamflow simulations were obtained in the two wettest catchments, but the driest (Cidacos) gave poor results in validation, due to the large climatic variability between calibration and validation periods. To overcome this issue, an alternative random and discontinuous method of cal/val period selection was implemented, improving model results.
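The Morris screening used in step (2) above ranks parameters by the mean and spread of elementary effects computed along one-at-a-time trajectories. A compact, library-free sketch of that idea follows; the placeholder response function, parameter ranges, and trajectory count are illustrative assumptions, not the TOPLATS parameters or the study's setup.

```python
import numpy as np

def morris_screening(model, bounds, n_traj=20, delta=0.25, rng=None):
    """Elementary-effects (Morris) screening: along each random trajectory, perturb
    one parameter at a time by `delta` (in normalized units) and record the change
    in the model output; report mu* (mean absolute effect) and sigma per parameter."""
    rng = rng or np.random.default_rng()
    k = len(bounds)
    lo, hi = np.array(bounds).T
    effects = [[] for _ in range(k)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=k)          # normalized base point
        y0 = model(lo + x * (hi - lo))
        for i in rng.permutation(k):                   # one-at-a-time moves
            x_new = x.copy()
            x_new[i] += delta
            y1 = model(lo + x_new * (hi - lo))
            effects[i].append((y1 - y0) / delta)
            x, y0 = x_new, y1
    mu_star = [np.mean(np.abs(e)) for e in effects]
    sigma = [np.std(e) for e in effects]
    return mu_star, sigma

# Placeholder "hydrological" response with one dominant and two interacting parameters.
model = lambda p: 3.0 * p[0] + p[1] * p[2] + 0.1 * p[3]
bounds = [(0, 1)] * 4
mu_star, sigma = morris_screening(model, bounds, rng=np.random.default_rng(6))
print("mu* :", np.round(mu_star, 2))
print("sigma:", np.round(sigma, 2))
```

A large mu* flags an influential parameter, while a large sigma relative to mu* flags interactions or nonlinearity, which is how the Morris and Sobol rankings are compared in studies of this kind.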
Alba, Annia; Marcet, Ricardo; Otero, Oscar; Hernández, Hilda M; Figueredo, Mabel; Sarracent, Jorge
2016-02-01
Purification of immunoglobulin M (IgM) antibodies can be challenging, and is often characterized by optimization of the purification protocol to best suit the particular features of the molecule. Here, two different schemes were compared to purify, from ascites, the 1E4 IgM monoclonal antibody (mAb) previously raised against the redia stage of the trematode Fasciola hepatica. This immunoglobulin is used as the capture antibody in an immunoenzymatic assay to detect ongoing parasite infection in its intermediate hosts. The first purification protocol for the 1E4 mAb involved two chromatographic steps: affinity chromatography on a Concanavalin A matrix followed by size exclusion chromatography. Immunoaffinity chromatography was selected as the second protocol for one-step purification of the antibody, using the crude extract of adult parasites coupled to a commercial matrix. Immunoreactivity of the fractions during the purification schemes was assessed by indirect immunoenzymatic assays against the crude extract of F. hepatica rediae, while purity was estimated by protein electrophoresis. Losses in the recovery of the antibody isolated by the first purification protocol occurred due to protein precipitation during concentration of the sample and to the low resolution of the size exclusion chromatography step for this particular immunoglobulin. Immunoaffinity chromatography using F. hepatica antigens as ligands proved to be the most suitable protocol, yielding a pure and immunoreactive antibody. The purification protocols used are discussed with regard to efficiency and difficulties.
Control Circuit For Two Stepping Motors
NASA Technical Reports Server (NTRS)
Ratliff, Roger; Rehmann, Kenneth; Backus, Charles
1990-01-01
Control circuit operates two independent stepping motors, one at a time. Provides following operating features: After selected motor stepped to chosen position, power turned off to reduce dissipation; Includes two up/down counters that remember at which one of eight steps each motor set. For selected motor, step indicated by illumination of one of eight light-emitting diodes (LED's) in ring; Selected motor advanced one step at time or repeatedly at rate controlled; Motor current - 30 mA at 90 degree positions, 60 mA at 45 degree positions - indicated by high or low intensity of LED that serves as motor-current monitor; Power-on reset feature provides trouble-free starts; To maintain synchronism between control circuit and motors, stepping of counters inhibited when motor power turned off.
NASA Astrophysics Data System (ADS)
Roslindar Yaziz, Siti; Zakaria, Roslinazairimah; Hura Ahmad, Maizah
2017-09-01
The Box-Jenkins - GARCH model has been shown to be a promising tool for forecasting highly volatile time series. In this study, a framework for determining the optimal sample size using the Box-Jenkins model with GARCH is proposed for practical application in analysing and forecasting highly volatile data. The proposed framework is applied to the daily world gold price series from 1971 to 2013. The data are divided into 12 different sample sizes (from 30 to 10200). Each sample is tested using different combinations of the hybrid Box-Jenkins - GARCH model. Our study shows that the optimal sample size for forecasting the gold price using the framework of the hybrid model is 1250 data points, i.e., a 5-year sample. Hence, the empirical results of the model selection criteria and 1-step-ahead forecasting evaluations suggest that the most recent 12.25% (5 years) of the 10200 data points is sufficient to be employed in the Box-Jenkins - GARCH model, with forecasting performance similar to that obtained using the full 41-year data set.
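The sample-size framework described above boils down to re-fitting the same model on windows of different lengths and comparing their one-step-ahead forecast errors. The loop below demonstrates that structure with a plain AR(1) fit standing in for the full Box-Jenkins - GARCH model, so the model, the synthetic series, and the candidate window lengths are all illustrative assumptions rather than the study's data or code.

```python
import numpy as np

def ar1_one_step_rmse(series, window):
    """Rolling one-step-ahead forecast RMSE of an AR(1) model refitted on the
    most recent `window` observations (a simple stand-in for Box-Jenkins-GARCH)."""
    errs = []
    for t in range(window, len(series)):
        y = series[t - window:t]
        x, z = y[:-1], y[1:]
        xc, zc = x - x.mean(), z - z.mean()
        phi = np.dot(xc, zc) / np.dot(xc, xc)    # least-squares AR(1) coefficient
        c = z.mean() - phi * x.mean()
        errs.append(series[t] - (c + phi * y[-1]))
    return float(np.sqrt(np.mean(np.square(errs))))

# Synthetic "price" series standing in for the daily gold data.
rng = np.random.default_rng(7)
series = 400.0 + np.cumsum(rng.normal(0.02, 1.0, size=3000))

for window in (30, 125, 250, 1250, 2500):        # candidate sample sizes
    print(f"window = {window:5d}   1-step RMSE = {ar1_one_step_rmse(series, window):.3f}")
```

The window giving the smallest out-of-sample error plays the role of the "optimal sample size"; in the study the same comparison is made with the full hybrid model and formal model selection criteria.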
Towards a Mars base - Critical steps for life support on the moon and beyond
NASA Technical Reports Server (NTRS)
Rummel, John D.
1992-01-01
In providing crew life support for future exploration missions, overall exploration objectives will drive the life support solutions selected. Crew size, mission tasking, and exploration strategy will determine the performance required from life support systems. Human performance requirements, for example, may be offset by the availability of robotic assistance. Once established, exploration requirements for life support will be weighed against the financial and technical risks of developing new technologies and systems. Other considerations will include the demands that a particular life support strategy will make on planetary surface site selection, and the availability of precursor mission data to support EVA and in situ resource recovery planning. As space exploration progresses, the diversity of life support solutions that are implemented is bound to increase.
Six mode selective fiber optic spatial multiplexer.
Velazquez-Benitez, A M; Alvarado, J C; Lopez-Galmiche, G; Antonio-Lopez, J E; Hernández-Cordero, J; Sanchez-Mondragon, J; Sillard, P; Okonkwo, C M; Amezcua-Correa, R
2015-04-15
Low-loss all-fiber photonic lantern (PL) mode multiplexers (MUXs) capable of selectively exciting the first six fiber modes of a multimode fiber (LP01, LP11a, LP11b, LP21a, LP21b, and LP02) are demonstrated. Fabrication of the spatial mode multiplexers was successfully achieved employing a combination of either six step-index or six graded-index fibers of four different core sizes. Insertion losses of 0.2-0.3 dB and mode purities above 9 dB are achieved. Moreover, it is demonstrated that the use of graded-index fibers in a PL eases the length requirements of the adiabatic tapered transition and could enable scaling to larger numbers of modes.
Stoller, Marco; Pulido, Javier Miguel Ochando; Palma, Luca Di
2014-08-04
Membrane fouling is one of the main issues in membrane processes, leading to a progressive decrease of permeability. High fouling rates strongly reduce the productivity of the membrane plant and negatively affect the survival rate of the membrane modules, especially when real wastewater is treated. On the other hand, since selectivity must meet certain target requirements, fouling may lead to unexpected selectivity improvements due to the formation of an additional surface layer of foulants that acts as a selective secondary membrane. In this case, a certain amount of fouling may be beneficial, provided selectivity targets are reached and productivity is not significantly affected. In this work, the secondary clarifier of a step sludge recirculation bioreactor treating municipal wastewater was replaced by a membrane unit, aiming at recovering return sludge and producing purified water. Fouling issues of such a system were checked by boundary flux measurements. A simple model describing the observed productivity and selectivity values as a function of membrane fouling is proposed.
NASA Technical Reports Server (NTRS)
Majda, G.
1985-01-01
A large set of variable coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale, but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
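A small illustration of the "stability with large step sizes" idea from the preceding abstract (not taken from the paper): a one-leg method, here the implicit midpoint rule (the one-leg twin of the trapezoidal rule), applied to a simple stiff scalar test problem with a step size that resolves only the slow time scale. All parameter values are illustrative.

```python
# y' = -(y - cos t)/eps, eps << 1, integrated with h >> eps
import numpy as np

eps, h, T = 1e-4, 0.1, 10.0
t, y = 0.0, 3.0                       # start off the slow solution y ~ cos t
while t < T:
    c = h / (2.0 * eps)
    # one-leg midpoint step: y_new = y + h*f(t + h/2, (y + y_new)/2),
    # solved exactly since the problem is linear in y_new
    y = ((1.0 - c) * y + 2.0 * c * np.cos(t + h / 2.0)) / (1.0 + c)
    t += h
print(y)  # stays bounded for any eps; explicit Euler with this h would blow up
```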
Firefighter Hand Anthropometry and Structural Glove Sizing: A New Perspective
Hsiao, Hongwei; Whitestone, Jennifer; Kau, Tsui-Ying; Hildreth, Brooke
2015-01-01
Objective We evaluated the current use and fit of structural firefighting gloves and developed an improved sizing scheme that better accommodates the U.S. firefighter population. Background Across surveys, 24% to 30% of men and 31% to 62% of women reported experiencing problems with the fit or bulkiness of their structural firefighting gloves. Method An age-, race/ethnicity-, and gender-stratified sample of 863 male and 88 female firefighters across the United States participated in the study. Fourteen hand dimensions relevant to glove design were measured. A cluster analysis of the hand dimensions was performed to explore options for an improved sizing scheme. Results The current national standard structural firefighting glove-sizing scheme underrepresents firefighter hand size range and shape variation. In addition, the mismatch between existing sizing specifications and hand characteristics, such as hand dimensions and user selection of glove size, is significant. An improved glove-sizing plan based on clusters of overall hand size and hand/finger breadth-to-length contrast has been developed. Conclusion This study presents the most up-to-date firefighter hand anthropometry and a new perspective on glove accommodation. The new seven-size system contains narrower variations (standard deviations) for almost all dimensions for each glove size than the current sizing practices. Application The proposed science-based sizing plan for structural firefighting gloves provides a step-forward perspective (i.e., including two women hand model–based sizes and two wide-palm sizes for men) for glove manufacturers to advance firefighter hand protection. PMID:26169309
Input variable selection and calibration data selection for storm water quality regression models.
Sun, Siao; Bertrand-Krajewski, Jean-Luc
2013-01-01
Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems are mutually dependent. A procedure is developed to fulfil the two selection tasks sequentially. The procedure first selects model input variables using a cross validation method. An appropriate number of variables are identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. Uncertainty of model performance due to calibration data selection is investigated with a random selection method. A cluster-based approach is then applied to enhance model calibration, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data is important in addition to its size.
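A minimal sketch of the two selection ideas in the preceding abstract (illustrative only, not the authors' procedure): (1) choose the number of input variables by cross-validated error, (2) choose calibration samples as cluster representatives rather than at random. The function names and model choice (plain linear regression) are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.cluster import KMeans

def select_inputs(X, y, max_vars):
    """Return the boolean input mask that maximises cross-validated R^2."""
    best = None
    for k in range(1, max_vars + 1):
        sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=k).fit(X, y)
        mask = sfs.get_support()
        score = cross_val_score(LinearRegression(), X[:, mask], y, cv=5).mean()
        if best is None or score > best[0]:
            best = (score, mask)
    return best[1]

def select_calibration(X, n_cal):
    """Pick one representative sample per cluster instead of a random split."""
    km = KMeans(n_clusters=n_cal, n_init=10).fit(X)
    return [int(np.argmin(((X - c) ** 2).sum(axis=1))) for c in km.cluster_centers_]
```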
Persistent directional selection on body size and a resolution to the paradox of stasis.
Rollinson, Njal; Rowe, Locke
2015-09-01
Directional selection on size is common but often fails to result in microevolution in the wild. Similarly, macroevolutionary rates in size are low relative to the observed strength of selection in nature. We show that many estimates of selection on size have been measured on juveniles, not adults. Further, parents influence juvenile size by adjusting investment per offspring. In light of these observations, we help resolve this paradox by suggesting that the observed upward selection on size is balanced by selection against investment per offspring, resulting in little or no net selection gradient on size. We find that trade-offs between fecundity and juvenile size are common, consistent with the notion of selection against investment per offspring. We also find that median directional selection on size is positive for juveniles but no net directional selection exists for adult size. This is expected because parent-offspring conflict exists over size, and juvenile size is more strongly affected by investment per offspring than adult size. These findings provide qualitative support for the hypothesis that upward selection on size is balanced by selection against investment per offspring, where parent-offspring conflict over size is embodied in the opposing signs of the two selection gradients. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.
Duri, Simon; Tran, Chieu D.
2013-01-01
We have successfully developed a simple and one step method to prepare high performance supramolecular polysaccharide composites from cellulose (CEL), chitosan (CS) and (2,3,6-tri-O-acetyl)-α-, β- and γ-cyclodextrin (α-, β- and γ-TCD). In this method, [BMIm+Cl−], an ionic liquid (IL), was used as a solvent to dissolve and prepare the composites. Since the majority (>88%) of the IL used was recovered for reuse, the method is recyclable. XRD, FT-IR, NIR and SEM were used to monitor the dissolution process and to confirm that the polysaccharides were regenerated without any chemical modifications. It was found that unique properties of each component, including superior mechanical properties (from CEL), excellent adsorption of pollutants and toxins (from CS) and size/structure selectivity through inclusion complex formation (from TCDs), remain intact in the composites. Specifically, results from kinetics and adsorption isotherms show that while CS-based composites can effectively adsorb the endocrine disruptors (polychlorophenols, bisphenol-A), their adsorption is independent of the size and structure of the analytes. Conversely, the adsorption by γ-TCD-based composites exhibits a strong dependency on the size and structure of the analytes. For example, while all three TCD-based composites (i.e., α-, β- and γ-TCD) can effectively adsorb 2-, 3- and 4-chlorophenol, only the γ-TCD-based composite can adsorb analytes with bulky groups including 3,4-dichloro- and 2,4,5-trichlorophenol. Furthermore, equilibrium sorption capacities for the analytes with bulky groups by the γ-TCD-based composite are much higher than those by CS-based composites. Together, these results indicate that the γ-TCD-based composite, with its relatively larger cavity size, can readily form inclusion complexes with analytes bearing bulky groups, and through inclusion complex formation it can adsorb substantially more analyte, and with size/structure selectivity, compared with CS-based composites, which adsorb the analytes only by surface adsorption. PMID:23517477
Respiratory mechanics to understand ARDS and guide mechanical ventilation.
Mauri, Tommaso; Lazzeri, Marta; Bellani, Giacomo; Zanella, Alberto; Grasselli, Giacomo
2017-11-30
As precision medicine is becoming a standard of care in selecting tailored rather than average treatments, physiological measurements might represent the first step in applying personalized therapy in the intensive care unit (ICU). A systematic assessment of respiratory mechanics in patients with the acute respiratory distress syndrome (ARDS) could represent a step in this direction, for two main reasons. Approach and Main results: On the one hand, respiratory mechanics are a powerful physiological method to understand the severity of this syndrome in each single patient. Decreased respiratory system compliance, for example, is associated with low end expiratory lung volume and more severe lung injury. On the other hand, respiratory mechanics might guide protective mechanical ventilation settings. Improved gravitationally dependent regional lung compliance could support the selection of positive end-expiratory pressure and maximize alveolar recruitment. Moreover, the association between driving airway pressure and mortality in ARDS patients potentially underlines the importance of sizing tidal volume on respiratory system compliance rather than on predicted body weight. The present review article aims to describe the main alterations of respiratory mechanics in ARDS as a potent bedside tool to understand severity and guide mechanical ventilation settings, thus representing a readily available clinical resource for ICU physicians.
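A minimal numerical illustration of the point made in the preceding abstract about sizing tidal volume on respiratory system compliance rather than predicted body weight. The values below are illustrative assumptions only, not clinical recommendations.

```python
def driving_pressure(tidal_volume_ml: float, compliance_ml_per_cmH2O: float) -> float:
    """Driving pressure = plateau pressure - PEEP = VT / Crs (in cmH2O)."""
    return tidal_volume_ml / compliance_ml_per_cmH2O

# The same 6 mL/kg tidal volume puts very different strain on a low-compliance lung:
print(driving_pressure(420, 50))   # ~8 cmH2O with near-normal compliance
print(driving_pressure(420, 20))   # ~21 cmH2O with severely reduced ARDS compliance
```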
Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems
NASA Astrophysics Data System (ADS)
Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo
With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of calculation cells should be small and the calculation should be transient with small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that the simulation in practice is difficult. This study investigates the requirements of cell size and the time step size for accurate simulations, and the filtering effects caused by coarser mesh and longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.
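A back-of-the-envelope check of the coupling between cell size and time step discussed in the preceding abstract: a generic convective CFL estimate with illustrative numbers, not the study's actual mesh or solver settings.

```python
def max_time_step(cell_size_m: float, velocity_m_s: float, cfl: float = 1.0) -> float:
    """Largest time step allowed by a convective CFL condition for a given cell size."""
    return cfl * cell_size_m / velocity_m_s

# Resolving cluster-scale structures with ~5 mm cells at ~5 m/s gas velocity forces
# millisecond time steps, which is why transient full-furnace simulations are so costly.
print(max_time_step(0.005, 5.0))   # -> 0.001 s
```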
NASA Astrophysics Data System (ADS)
Nanev, Christo N.; Petrov, Kostadin P.
2017-12-01
The use of the classical nucleation-growth-separation principle (NGSP) has hitherto been restricted to nucleation kinetics studies. A novel application of the NGSP is proposed. To reduce crystal polydispersity, internal seeding with equally-sized crystals is suggested, the advantage being the avoidance of crystal grinding, sieving and any introduction of impurities. In the present study, size distributions of grown insulin crystals are interpreted retrospectively to select the proper nucleation stage parameters. The conclusion is that when steering a crystallization process aimed at reducing crystal polydispersity, the shortest possible nucleation stage duration should be chosen because it yields the most closely clustered size distribution of the nucleated crystal seeds. Causes of the inherent propensity toward increasing crystal polydispersity during prolonged growth are also explored. Step sources of increased activity, present in some crystals while absent in others, are identified as the major cause of polydispersity. Insulin crystal morphology is also considered since it determines the dissolution rate of a crystalline medicine.
Visual saliency-based fast intracoding algorithm for high efficiency video coding
NASA Astrophysics Data System (ADS)
Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin
2017-01-01
Intraprediction has been significantly improved in high efficiency video coding over H.264/AVC, with a quad-tree-based coding unit (CU) structure from size 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode pruning algorithm is presented to selectively check the potential modes and effectively reduce the complexity of computation. Experimental results show that our proposed fast method reduces the computational complexity of the current HM to about 57% in encoding time with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm has reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.
Riekel, C.; Burghammer, M.; Davies, R. J.; Di Cola, E.; König, C.; Lemke, H.T.; Putaux, J.-L.; Schöder, S.
2010-01-01
X-ray radiation damage propagation is explored for hydrated starch granules in order to reduce the step resolution in raster-microdiffraction experiments to the nanometre range. Radiation damage was induced by synchrotron radiation microbeams of 5, 1 and 0.3 µm size with ∼0.1 nm wavelength in B-type potato, Canna edulis and Phajus grandifolius starch granules. A total loss of crystallinity of granules immersed in water was found at a dose of ∼1.3 photons nm−3. The temperature dependence of radiation damage suggests that primary radiation damage prevails up to about 120 K while secondary radiation damage becomes effective at higher temperatures. Primary radiation damage remains confined to the beam track at 100 K. Propagation of radiation damage beyond the beam track at room temperature is assumed to be due to reactive species generated principally by water radiolysis induced by photoelectrons. By careful dose selection during data collection, raster scans with 500 nm step-resolution could be performed for granules immersed in water. PMID:20975219
NASA Astrophysics Data System (ADS)
Gómez, José J. Arroyo; Zubieta, Carolina; Ferullo, Ricardo M.; García, Silvana G.
2016-02-01
The electrochemical formation of Au nanoparticles on a highly ordered pyrolytic graphite (HOPG) substrate using conventional electrochemical techniques and ex-situ AFM is reported. From the potentiostatic current transient studies, the Au electrodeposition process on HOPG surfaces was described, within the potential range considered, by a model involving instantaneous nucleation and diffusion-controlled 3D growth, which was corroborated by the microscopic analysis. Initially, three-dimensional (3D) hemispherical nanoparticles distributed on surface defects (step edges) of the substrate were observed, with increasing particle size at more negative potentials. The double potential pulse technique allowed the formation of rounded deposits at low deposition potentials, which tend to form lines of nuclei aligned in defined directions, leading to 3D ordered structures. By choosing suitable nucleation and growth pulses, one-dimensional (1D) deposits were possible, preferentially located on step edges of the HOPG substrate. Quantum-mechanical calculations confirmed the tendency of Au atoms to attach selectively at surface defects, such as the HOPG step edges, in the early stages of Au electrodeposition.
Supramolecular "Step Polymerization" of Preassembled Micelles: A Study of "Polymerization" Kinetics.
Yang, Chaoying; Ma, Xiaodong; Lin, Jiaping; Wang, Liquan; Lu, Yingqing; Zhang, Liangshun; Cai, Chunhua; Gao, Liang
2018-03-01
In nature, sophisticated functional materials are created through hierarchical self-assembly of nanoscale motifs, which has inspired the fabrication of man-made materials with complex architectures for a variety of applications. Herein, a kinetic study on the self-assembly of spindle-like micelles preassembled from polypeptide graft copolymers is reported. The addition of dimethylformamide and, subsequently, a selective solvent (water) can generate a "reactive point" at both ends of the spindles as a result of the existence of structural defects, which induces the "polymerization" of the spindles into nanowires. Experimental results combined with dissipative particle dynamics simulations show that the polymerization of the micellar subunits follows a step-growth polymerization mechanism with a second-order reaction characteristic. The assembly rate of the micelles is dependent on the subunit concentration and on the activity of the reactive points. The present work reveals a law governing the self-assembly kinetics of micelles with structural defects and opens the door for the construction of hierarchical structures with a controllable size through supramolecular step polymerization. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Stepwise and stagewise approaches for spatial cluster detection
Xu, Jiale
2016-01-01
Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performance of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power of detection. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. PMID:27246273
Stepwise and stagewise approaches for spatial cluster detection.
Xu, Jiale; Gangnon, Ronald E
2016-05-01
Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performances of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bera, P.; Wędrychowicz, D.
2016-09-01
The article presents the influence of the number and values of gear ratios in a stepped gearbox on the mileage fuel consumption of a city passenger car. The simulations were conducted for a particular vehicle characterized by its mass, body shape and tire size, and equipped with a combustion engine for which the characteristic of fuel consumption in dynamic states had already been determined on the basis of engine test bed measurements. Several transmission designs with different numbers of gears and different ratios were used in virtual simulations of road traffic, particularly in the NEDC test, to calculate mileage fuel consumption. This allows for a quantitative assessment of transmission parameters in terms of both vehicle economy and dynamic properties. Also, based on the obtained results, recommendations for the selection of a particular vehicle for a specific type of exploitation have been formulated.
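A minimal kinematic helper for the kind of simulation described in the preceding abstract (illustrative only; the wheel radius, final-drive ratio and gear ratios are made-up values, and the measured engine fuel-consumption map is not reproduced here).

```python
import math

def engine_rpm(speed_kmh: float, gear_ratio: float,
               final_drive: float = 3.7, wheel_radius_m: float = 0.30) -> float:
    """Engine speed implied by vehicle speed for a given gear of a stepped gearbox."""
    v = speed_kmh / 3.6                                    # m/s
    wheel_rpm = v / (2.0 * math.pi * wheel_radius_m) * 60.0
    return wheel_rpm * gear_ratio * final_drive

# Different ratio sets shift the NEDC operating points across the engine fuel map:
for ratio in (3.5, 2.1, 1.4, 1.0, 0.8):
    print(ratio, round(engine_rpm(90.0, ratio)))
```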
Microlens fabrication by replica molding of frozen laser-printed droplets
NASA Astrophysics Data System (ADS)
Surdo, Salvatore; Diaspro, Alberto; Duocastella, Martí
2017-10-01
In this work, we synergistically combine laser-induced forward transfer (LIFT) and replica molding for the fabrication of microlenses with control of their geometry and size independent of the material or substrate used. Our approach is based on a multistep process in which liquid microdroplets of an aqueous solution are first printed on a substrate by LIFT. Following a freezing step, the microdroplets are used as a master to fabricate a polydimethylsiloxane (PDMS) mold. A subsequent replica molding step enables the creation of microlenses and microlens arrays on arbitrarily selected substrates and by using different curable polymers. Thus, our method combines the rapid fabrication capabilities of LIFT and the perfectly smooth surface quality of the generated microdroplets with the advantages of replica molding in terms of parallelization and materials flexibility. We demonstrate our strategy by generating microlenses of different photocurable polymers and by characterizing their optical and morphological properties.
Frequency optimization in the eddy current test for high purity niobium
NASA Astrophysics Data System (ADS)
Joung, Mijoung; Jung, Yoochul; Kim, Hyungjin
2017-01-01
The eddy current test (ECT) is frequently used as a non-destructive method to check for defects in the high purity niobium (RRR300, Residual Resistivity Ratio) used in superconducting radio frequency (SRF) cavities. Determining an optimal frequency corresponding to the specific material properties and probe specification is a very important step. ECT experiments on high purity Nb were performed to determine the optimal frequency using a standard sample of high purity Nb containing artificial defects. The target depth was set considering the treatment steps that the niobium receives as the SRF cavity material. The results were analysed in terms of the selectivity of the response to defects of different sizes. According to the results, the optimal frequency was determined to be 200 kHz, and a few features of the ECT for high purity Nb were observed.
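A rough companion calculation (not from the paper): the standard skin depth sets the depth an eddy-current test interrogates at a given frequency. The conductivity value assumed for high-purity Nb at room temperature is an illustrative estimate.

```python
import numpy as np

MU0 = 4e-7 * np.pi                     # vacuum permeability (H/m)

def skin_depth(freq_hz: float, conductivity_s_per_m: float, mu_r: float = 1.0) -> float:
    """delta = 1 / sqrt(pi * f * mu * sigma), in metres."""
    return 1.0 / np.sqrt(np.pi * freq_hz * MU0 * mu_r * conductivity_s_per_m)

sigma_nb = 6.7e6                       # assumed room-temperature conductivity of Nb, S/m
print(skin_depth(200e3, sigma_nb) * 1e3, "mm at 200 kHz")   # ~0.4 mm
```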
Advanced Design Methodology for Robust Aircraft Sizing and Synthesis
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.
1997-01-01
Contract efforts are focused on refining the Robust Design Methodology for Conceptual Aircraft Design. Robust Design Simulation (RDS) was developed earlier as a potential solution to the need to perform rapid trade-offs while accounting for risk, conflict, and uncertainty. The core of the simulation revolved around Response Surface Equations as approximations of bounded design spaces. An ongoing investigation is concerned with the advantages of using Neural Networks in conceptual design. Thought was also given to the development of a systematic way to choose or create a baseline configuration based on specific mission requirements. An expert system was developed that selects aerodynamics, performance, and weights models from several configurations based on the user's mission requirements for a subsonic civil transport. The research has also resulted in a step-by-step illustration of how to use the AMV method for distribution generation and the search for robust design solutions to multivariate constrained problems.
Analysis of stability for stochastic delay integro-differential equations.
Zhang, Yu; Li, Longsuo
2018-01-01
In this paper, we are concerned with the stability of numerical methods applied to stochastic delay integro-differential equations. For linear stochastic delay integro-differential equations, it is shown that the split-step backward Euler method reproduces mean-square stability without any restriction on the step size, while the Euler-Maruyama method reproduces mean-square stability only under a step-size constraint. We also confirm the mean-square stability of the split-step backward Euler method for nonlinear stochastic delay integro-differential equations. The numerical experiments further verify the theoretical results.
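For orientation, a simple Euler-Maruyama discretisation of a linear stochastic delay equation is sketched below (illustrative only; the coefficients and the omission of the integral term are simplifying assumptions, and the paper's split-step backward Euler variant would instead treat the drift implicitly).

```python
# dX(t) = (a*X(t) + b*X(t - tau)) dt + c*X(t - tau) dW(t), constant history on [-tau, 0]
import numpy as np

def euler_maruyama_sdde(a, b, c, tau, h, T, x0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n_delay = int(round(tau / h))
    n_steps = int(round(T / h))
    x = np.full(n_delay + n_steps + 1, x0)
    for n in range(n_delay, n_delay + n_steps):
        dW = rng.normal(0.0, np.sqrt(h))          # Brownian increment
        x_del = x[n - n_delay]                    # delayed state X(t - tau)
        x[n + 1] = x[n] + (a * x[n] + b * x_del) * h + c * x_del * dW
    return x[n_delay:]

path = euler_maruyama_sdde(a=-2.0, b=0.5, c=0.3, tau=1.0, h=0.01, T=10.0)
```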
Peng, Ting; Sun, Xiaochun; Mumm, Rita H
2014-01-01
From a breeding standpoint, multiple trait integration (MTI) is a four-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) using backcross breeding, ultimately regaining the performance attributes of the target hybrid along with reliable expression of the value-added traits. In the light of the overarching goal of recovering equivalent performance in the finished conversion, this study focuses on the first step of MTI, single event introgression, exploring the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events, incorporating eight events into the female hybrid parent and seven into the male parent. Single event introgression is conducted in parallel streams to convert the recurrent parent (RP) for individual events, with the primary objective of minimizing residual non-recurrent parent (NRP) germplasm, especially in the chromosomal proximity to the event (i.e. linkage drag). In keeping with a defined lower limit of 96.66 % overall RP germplasm recovery (i.e. ≤120 cM NRP germplasm given a genome size of 1,788 cM), a breeding goal for each of the 15 single event conversions was developed: <8 cM of residual NRP germplasm across the genome with ~1 cM in the 20 cM region flanking the event. Using computer simulation, we aimed to identify optimal breeding strategies for single event introgression to achieve this breeding goal, measuring efficiency in terms of number of backcross generations required, marker data points needed, and total population size across generations. Various selection schemes classified as three-stage, modified two-stage, and combined selection conducted from BC1 through BC3, BC4, or BC5 were compared. The breeding goal was achieved with a selection scheme involving five generations of marker-aided backcrossing, with BC1 through BC3 selected for the event of interest and minimal linkage drag at population size of 600, and BC4 and BC5 selected for the event of interest and recovery of the RP germplasm across the genome at population size of 400, with selection intensity of 0.01 for all generations. In addition, strategies for choice of donor parent to facilitate conversion efficiency and quality were evaluated. Two essential criteria for choosing an optimal donor parent for a given RP were established: introgression history showing reduction of linkage drag to ~1 cM in the 20 cM region flanking the event and genetic similarity between the RP and potential donor parents. Computer simulation demonstrated that single event conversions with <8 cM residual NRP germplasm can be accomplished by BC5 with no genetic similarity, by BC4 with 30 % genetic similarity, and by BC3 with 86 % genetic similarity using previously converted RPs as event donors. This study indicates that MTI to produce a 'quality' 15-event-stacked hybrid conversion is achievable. Furthermore, it lays the groundwork for a comprehensive approach to MTI by outlining a pathway to produce appropriate starting materials with which to proceed with event pyramiding and trait fixation before version testing.
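As context for the marker-aided simulation described in the preceding abstract (a textbook expectation, not the authors' model): with unselected backcrossing, the expected recurrent-parent fraction after n backcrosses is 1 - (1/2)^(n+1), which is why marker-aided selection is needed to guarantee the much tighter <8 cM residual target per conversion.

```python
genome_cm = 1788.0                      # genome size used in the study
for n in range(1, 6):
    rp_fraction = 1.0 - 0.5 ** (n + 1)  # expected RP recovery at BC_n without selection
    print(f"BC{n}: expected RP recovery {rp_fraction:.1%}, "
          f"residual NRP ~{(1 - rp_fraction) * genome_cm:.0f} cM")
```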
Real-time feedback control of twin-screw wet granulation based on image analysis.
Madarász, Lajos; Nagy, Zsombor Kristóf; Hoffer, István; Szabó, Barnabás; Csontos, István; Pataki, Hajnalka; Démuth, Balázs; Szabó, Bence; Csorba, Kristóf; Marosi, György
2018-06-04
The present paper reports the first dynamic image analysis-based feedback control of a continuous twin-screw wet granulation process. Granulation of a blend of lactose and starch was selected as the model process. The size and size distribution of the obtained particles were successfully monitored by a process camera coupled with image analysis software developed by the authors. Validation of the developed system showed that the particle size analysis tool can determine the size of the granules with an error of less than 5 µm. The next step was to implement real-time feedback control of the process by controlling the liquid feeding rate of the pump through a PC, based on the real-time particle size results. After the establishment of the feedback control, the system could correct different real-life disturbances, creating a Process Analytically Controlled Technology (PACT), which guarantees real-time monitoring and control of granule quality. In the event of changes or adverse trends in the particle size, the system can automatically compensate for the effect of disturbances, ensuring proper product quality. This kind of quality assurance approach is especially important in the case of continuous pharmaceutical technologies. Copyright © 2018 Elsevier B.V. All rights reserved.
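A conceptual sketch of the image-analysis feedback loop described in the preceding abstract (the actual controller, gains and hardware interface are not public here; the function name and all numbers are placeholders).

```python
def update_liquid_feed(measured_d50_um: float,
                       target_d50_um: float,
                       current_feed_ml_min: float,
                       gain_ml_min_per_um: float = 0.002) -> float:
    """Simple proportional controller: more liquid grows granules, less shrinks them."""
    error = target_d50_um - measured_d50_um
    new_feed = current_feed_ml_min + gain_ml_min_per_um * error
    return max(new_feed, 0.0)            # pump rate cannot go negative

# e.g. granules measured 40 um too small -> nudge the liquid pump up slightly
print(update_liquid_feed(460.0, 500.0, 2.0))
```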
Construction of human chromosome 21-specific yeast artificial chromosomes
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCormick, M.K.; Shero, J.H.; Hieter, P.A.
1989-12-01
Chromosome 21-specific yeast artificial chromosomes (YACs) have been constructed by a method that performs all steps in agarose, allowing size selection by pulsed-field gel electrophoresis and the use of nanogram to microgram quantities of DNA. The DNA sources used were hybrid cell line WAV-17, containing chromosome 21 as the only human chromosome, and flow-sorted chromosome 21. The transformation efficiency of ligation products was similar to that obtained in aqueous transformations and yielded YACs with sizes ranging from 100 kilobases (kb) to > 1 megabase when polyamines were included in the transformation procedure. Twenty-five YACs containing human DNA have been obtained from a mouse-human hybrid, ranging in size from 200 to > 1000 kb, with an average size of 410 kb. Ten of these YACs were localized to subregions of chromosome 21 by hybridization of RNA probes to a panel of somatic cell hybrid DNA. Twenty-one human YACs, ranging in size from 100 to 500 kb, with an average size of 150 kb, were obtained from ~50 ng of flow-sorted chromosome 21 DNA. Three were localized to subregions of chromosome 21. YACs will aid the construction of a physical map of human chromosome 21 and the study of disorders associated with chromosome 21 such as Alzheimer disease and Down syndrome.
Process for preparation of large-particle-size monodisperse latexes
NASA Technical Reports Server (NTRS)
Vanderhoff, J. W.; Micale, F. J.; El-Aasser, M. S.; Kornfeld, D. M. (Inventor)
1981-01-01
Monodisperse latexes having a particle size in the range of 2 to 40 microns are prepared by seeded emulsion polymerization in microgravity. A reaction mixture containing smaller monodisperse latex seed particles, predetermined amounts of monomer, emulsifier, initiator, inhibitor and water is placed in a microgravity environment, and polymerization is initiated by heating. The reaction is allowed to continue until the seed particles grow to a predetermined size, and the resulting enlarged particles are then recovered. A plurality of particle-growing steps can be used to reach larger sizes within the stated range, with enlarged particles from the previous steps being used as seed particles for the succeeding steps. Microgravity enables preparation of particles in the stated size range by avoiding gravity-related problems of creaming and settling, and flocculation induced by mechanical shear, that have precluded their preparation in a normal gravity environment.
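A simple mass-balance sketch of why several successive seeded-growth steps are needed to span the stated size range (illustrative only; the densities, swelling ratio and assumption of full conversion are made-up values, and none of the microgravity-specific colloid stability issues are modelled).

```python
def grown_diameter(seed_diameter_um: float, monomer_per_seed_mass_ratio: float,
                   rho_polymer: float = 1.05, rho_seed: float = 1.05,
                   conversion: float = 1.0) -> float:
    """Final diameter if all converted monomer adds to the existing seed volume."""
    volume_growth = 1.0 + conversion * monomer_per_seed_mass_ratio * rho_seed / rho_polymer
    return seed_diameter_um * volume_growth ** (1.0 / 3.0)

# Each step multiplies the diameter by roughly the cube root of the swelling ratio:
d = 2.0
for step in range(3):
    d = grown_diameter(d, monomer_per_seed_mass_ratio=30.0)
    print(f"after step {step + 1}: {d:.1f} um")
```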
SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
2015-06-15
Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and high computation burden from extensive atlas collection, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy as the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and medium DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
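A schematic of the two-stage selection idea from the preceding abstract (illustrative only; `cheap_score` and `full_score` are hypothetical callables standing in for the preliminary and full-fledged relevance metrics, and the paper derives the augmented-subset size from an inference model rather than fixing it by hand).

```python
def two_stage_select(atlases, target, cheap_score, full_score,
                     augmented_size=10, fusion_size=4):
    # Stage 1: rank all atlases with a cheap registration/metric, keep a safety margin
    stage1 = sorted(atlases, key=lambda a: cheap_score(a, target), reverse=True)
    augmented = stage1[:augmented_size]
    # Stage 2: full-fledged registration only on the augmented subset
    stage2 = sorted(augmented, key=lambda a: full_score(a, target), reverse=True)
    return stage2[:fusion_size]
```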
Size-dependent avoidance of a strong magnetic anomaly in Caribbean spiny lobsters.
Ernst, David A; Lohmann, Kenneth J
2018-03-01
On a global scale, the geomagnetic field varies predictably across the Earth's surface, providing animals that migrate long distances with a reliable source of directional and positional information that can be used to guide their movements. In some locations, however, magnetic minerals in the Earth's crust generate an additional field that enhances or diminishes the overall field, resulting in unusually steep gradients of field intensity within a limited area. How animals respond to such magnetic anomalies is unclear. The Caribbean spiny lobster, Panulirus argus, is a benthic marine invertebrate that possesses a magnetic sense and is likely to encounter magnetic anomalies during migratory movements and homing. As a first step toward investigating whether such anomalies affect the behavior of lobsters, a two-choice preference experiment was conducted in which lobsters were allowed to select one of two artificial dens, one beneath a neodymium magnet and the other beneath a non-magnetic weight of similar size and mass (control). Significantly more lobsters selected the control den, demonstrating avoidance of the magnetic anomaly. In addition, lobster size was found to be a significant predictor of den choice: lobsters that selected the anomaly den were significantly smaller as a group than those that chose the control den. Taken together, these findings provide additional evidence for magnetoreception in spiny lobsters, raise the possibility of an ontogenetic shift in how lobsters respond to magnetic fields, and suggest that magnetic anomalies might influence lobster movement in the natural environment. © 2018. Published by The Company of Biologists Ltd.
NASA Astrophysics Data System (ADS)
Bera, Amrita Mandal; Wargulski, Dan Ralf; Unold, Thomas
2018-04-01
Hybrid organometal perovskites have emerged as promising solar cell materials and have exhibited solar cell efficiencies of more than 20%. Thin films of methylammonium lead iodide (CH3NH3PbI3) perovskite were synthesized by two different (one-step and two-step) methods and their morphological properties were studied by scanning electron microscopy and optical microscope imaging. The morphology of the perovskite layer is one of the most important parameters affecting solar cell efficiency. The morphology of the films revealed that the two-step method provides better surface coverage than the one-step method. However, the grain sizes were smaller in the case of the two-step method. The films prepared by the two-step method on different substrates revealed that the grain size also depends on the substrate: an increase in grain size was found from the glass substrate to FTO with a TiO2 blocking layer to FTO, without any change in the surface coverage. The present study reveals that improved film quality can be obtained with the two-step method by optimizing the synthesis process.
[Influence on microstructure of dental zirconia ceramics prepared by two-step sintering].
Jian, Chao; Li, Ning; Wu, Zhikai; Teng, Jing; Yan, Jiazhen
2013-10-01
To investigate the microstructure of dental zirconia ceramics prepared by two-step sintering. Nanostructured zirconia powder was dry compacted, cold isostatically pressed, and pre-sintered. The pre-sintered discs were cut and processed into samples. Conventional sintering, single-step sintering, and two-step sintering were carried out, and the density and grain size of the samples were measured. Afterward, the ranges of T1 and T2 for two-step sintering were determined. The effects of the different routes, two-step sintering and conventional sintering, on the microstructure were discussed, and the influence of T1 and/or T2 on density and grain size was analyzed as well. The range of T1 was between 1450 degrees C and 1550 degrees C, and the range of T2 was between 1250 degrees C and 1350 degrees C. Compared with conventional sintering, a finer microstructure with higher density and smaller grains could be obtained by two-step sintering. Grain growth was dependent on T1, whereas density was not strongly related to T1. In contrast, density was dependent on T2, while grain size was minimally influenced by it. Two-step sintering can produce a sintered body with high density and small grain size, which is good for optimizing the microstructure of dental zirconia ceramics.
Hierarchical nanostructured WO3-SnO2 for selective sensing of volatile organic compounds
NASA Astrophysics Data System (ADS)
Nayak, Arpan Kumar; Ghosh, Ruma; Santra, Sumita; Guha, Prasanta Kumar; Pradhan, Debabrata
2015-07-01
It remains a challenge to find a suitable gas sensing material that shows a high response and shows selectivity towards various gases simultaneously. Here, we report a mixed metal oxide WO3-SnO2 nanostructured material synthesized in situ by a simple, single-step, one-pot hydrothermal method at 200 °C in 12 h, and demonstrate its superior sensing behavior towards volatile organic compounds (VOCs) such as ammonia, ethanol and acetone. SnO2 nanoparticles with controlled size and density were uniformly grown on WO3 nanoplates by varying the tin precursor. The density of the SnO2 nanoparticles on the WO3 nanoplates plays a crucial role in the VOC selectivity. The responses of the present mixed metal oxides are found to be much higher than the previously reported results based on single/mixed oxides and noble metal-doped oxides. In addition, the VOC selectivity is found to be highly temperature-dependent, with optimum performance obtained at 200 °C, 300 °C and 350 °C for ammonia, ethanol and acetone, respectively. The present results on the cost-effective noble metal-free WO3-SnO2 sensor could find potential application in human breath analysis by non-invasive detection.
Evans, Christopher M; Love, Alyssa M; Weiss, Emily A
2012-10-17
This article reports control of the competition between step-growth and living chain-growth polymerization mechanisms in the formation of cadmium chalcogenide colloidal quantum dots (QDs) from CdSe(S) clusters by varying the concentration of anionic surfactant in the synthetic reaction mixture. The growth of the particles proceeds by step-addition from initially nucleated clusters in the absence of excess phosphinic or carboxylic acids, which adsorb as their anionic conjugate bases, and proceeds indirectly by dissolution of clusters, and subsequent chain-addition of monomers to stable clusters (Ostwald ripening) in the presence of excess phosphinic or carboxylic acid. Fusion of clusters by step-growth polymerization is an explanation for the consistent observation of so-called "magic-sized" clusters in QD growth reactions. Living chain-addition (chain addition with no explicit termination step) produces QDs over a larger range of sizes with better size dispersity than step-addition. Tuning the molar ratio of surfactant to Se(2-)(S(2-)), the limiting ionic reagent, within the living chain-addition polymerization allows for stoichiometric control of QD radius without relying on reaction time.
Seven Steps to Responsible Software Selection. ERIC Digest.
ERIC Educational Resources Information Center
Komoski, P. Kenneth; Plotnick, Eric
Microcomputers in schools contribute significantly to the learning process, and software selection is taken as seriously as the selection of textbooks. The seven-step process for responsible software selection is: (1) analyzing needs, including the differentiation between needs and objectives; (2) specification of requirements; (3) identifying…
Optimal setups for forced-choice staircases with fixed step sizes.
García-Pérez, M A
2000-01-01
Forced-choice staircases with fixed step sizes are used in a variety of formats whose relative merits have never been studied. This paper presents a comparative study aimed at determining their optimal format. Factors included in the study were the up/down rule, the length (number of reversals), and the size of the steps. The study also addressed the issue of whether a protocol involving three staircases running for N reversals each (with a subsequent average of the estimates provided by each individual staircase) has better statistical properties than an alternative protocol involving a single staircase running for 3N reversals. In all cases the size of a step up was different from that of a step down, in the appropriate ratio determined by García-Pérez (Vision Research, 1998, 38, 1861 - 1881). The results of a simulation study indicate that a) there are no conditions in which the 1-down/1-up rule is advisable; b) different combinations of up/down rule and number of reversals appear equivalent in terms of precision and cost; c) using a single long staircase with 3N reversals is more efficient than running three staircases with N reversals each; d) to avoid bias and attain sufficient accuracy, threshold estimates should be based on at least 30 reversals; and e) to avoid excessive cost and imprecision, the size of the step up should be between 2/3 and 3/3 the (known or presumed) spread of the psychometric function. An empirical study with human subjects confirmed the major characteristics revealed by the simulations.
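A simulation sketch of a fixed-step-size forced-choice staircase in the spirit of the study above (illustrative only: a generic 3-down/1-up rule, a logistic 2AFC psychometric function, and a down/up step ratio of roughly 0.74, which is approximately the value appropriate for that rule; none of the numbers are the paper's exact settings).

```python
import numpy as np

def run_staircase(threshold=0.0, spread=1.0, step_up=0.6, ratio=0.74,
                  start=2.0, n_reversals=30, seed=1):
    rng = np.random.default_rng(seed)
    step_down = step_up * ratio               # down step smaller than up step
    level, correct_run, direction = start, 0, 0
    reversals = []
    def p_correct(x):                         # 2AFC logistic psychometric function
        return 0.5 + 0.5 / (1.0 + np.exp(-(x - threshold) / (spread / 4)))
    while len(reversals) < n_reversals:
        if rng.random() < p_correct(level):   # correct response
            correct_run += 1
            if correct_run == 3:              # 3-down rule: move down after 3 corrects
                correct_run = 0
                if direction == +1:
                    reversals.append(level)
                direction = -1
                level -= step_down
        else:                                 # any error moves the level up
            correct_run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step_up
    return np.mean(reversals[4:])             # discard early reversals, average the rest

print(run_staircase())
```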
Double emulsion formation through hierarchical flow-focusing microchannel
NASA Astrophysics Data System (ADS)
Azarmanesh, Milad; Farhadi, Mousa; Azizian, Pooya
2016-03-01
A microfluidic device is presented for creating double emulsions, controlling their sizes and manipulating the encapsulation process. As a result of the interaction of three immiscible liquids through the dripping instability, double emulsions can be produced elegantly. The effects of three dimensionless numbers are investigated: the Weber number of the inner phase (Wein), the Capillary number of the inner droplet (Cain), and the Capillary number of the outer droplet (Caout). They affect the formation process, the inner and outer droplet sizes, and the separation frequency. Direct numerical simulation of the governing equations was performed using the volume of fluid method and an adaptive mesh refinement technique. Two kinds of double emulsion formation, two-step and one-step, were simulated, in which the thickness of the sheath of the double emulsions can be adjusted. Altering each dimensionless number changes the detachment location, the outer droplet size and the droplet formation period. Moreover, a decussate double-emulsion/empty-droplet regime is observed at low Wein. This phenomenon can be obtained by adjusting Wein, for which the maximum sheath size is found. Also, the results show that Cain has a significant influence on the outer droplet size in the two-step process, while Caout affects the sheath considerably in the one-step formation.
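For quick reference, the two dimensionless groups used in the preceding abstract are defined below (standard definitions; the property values in the example are made-up placeholders for a roughly 100 µm channel).

```python
def weber(rho: float, velocity: float, length: float, sigma: float) -> float:
    """We = rho * U^2 * L / sigma  (inertia vs interfacial tension)."""
    return rho * velocity**2 * length / sigma

def capillary(mu: float, velocity: float, sigma: float) -> float:
    """Ca = mu * U / sigma  (viscous stress vs interfacial tension)."""
    return mu * velocity / sigma

print(weber(rho=1000, velocity=0.05, length=100e-6, sigma=5e-3))   # -> 0.05
print(capillary(mu=1e-3, velocity=0.05, sigma=5e-3))               # -> 0.01
```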
Ultrafast learning in a hard-limited neural network pattern recognizer
NASA Astrophysics Data System (ADS)
Hu, Chia-Lun J.
1996-03-01
As we have published over the last five years, supervised learning in a hard-limited perceptron system can be accomplished in a noniterative manner if the input-output mapping to be learned satisfies a certain positive-linear-independency (or PLI) condition. When this condition is satisfied (for most practical pattern recognition applications, this condition should be satisfied), the connection matrix required to meet this mapping can be obtained noniteratively in one step. Generally, there exist infinitely many solutions for the connection matrix when the PLI condition is satisfied. We can then select an optimum solution such that the recognition of any untrained patterns will become optimally robust in the recognition mode. The learning speed is very fast and close to real-time because the learning process is noniterative and one-step. This paper reports the theoretical analysis and the design of a practical character recognition system for recognizing hand-written alphabets. The experimental result is recorded in real-time on an unedited video tape for demonstration purposes. It is seen from this real-time movie that the recognition of the untrained hand-written alphabets is invariant to size, location, orientation, and writing sequence, even though the training is done with standard size, standard orientation, central location and standard writing sequence.
NASA Astrophysics Data System (ADS)
Taurino, Irene; Sanzó, Gabriella; Mazzei, Franco; Favero, Gabriele; de Micheli, Giovanni; Carrara, Sandro
2015-10-01
Novel methods to obtain Pt nanostructured electrodes have raised particular interest due to their high performance in electrochemistry. Several nanostructuration methods proposed in the literature use costly and bulky equipment or are time-consuming due to the numerous steps they involve. Here, Pt nanostructures were produced for the first time by one-step template-free electrodeposition on Pt bare electrodes. The change in size and shape of the nanostructures is proven to be dependent on the deposition parameters and on the ratio between sulphuric acid and chloride-complexes (i.e., hexachloroplatinate or tetrachloroplatinate). To further improve the electrochemical properties of electrodes, depositions of Pt nanostructures on previously synthesised Pt nanostructures are also performed. The electroactive surface areas exhibit a two order of magnitude improvement when Pt nanostructures with the smallest size are used. All the biosensors based on Pt nanostructures and immobilised glucose oxidase display higher sensitivity as compared to bare Pt electrodes. Pt nanostructures retained an excellent electrocatalytic activity towards the direct oxidation of glucose. Finally, the nanodeposits were proven to be an excellent solid contact for ion measurements, significantly improving the time-stability of the potential. The use of these new nanostructured coatings in electrochemical sensors opens new perspectives for multipanel monitoring of human metabolism.
Herculano-Houzel, Suzana; Messeder, Débora J.; Fonseca-Azevedo, Karina; Pantoja, Nilma A.
2015-01-01
There is a strong trend toward increased brain size in mammalian evolution, with larger brains composed of more and larger neurons than smaller brains across species within each mammalian order. Does the evolution of increased numbers of brain neurons, and thus larger brain size, occur simply through the selection of individuals with more and larger neurons, and thus larger brains, within a population? That is, do individuals with larger brains also have more, and larger, neurons than individuals with smaller brains, such that allometric relationships across species are simply an extension of intraspecific scaling? Here we show that this is not the case across adult male mice of a similar age. Rather, increased numbers of neurons across individuals are accompanied by increased numbers of other cells and smaller average cell size of both types, in a trade-off that explains how increased brain mass does not necessarily ensue. Fundamental regulatory mechanisms thus must exist that tie numbers of neurons to numbers of other cells and to average cell size within individual brains. Finally, our results indicate that changes in brain size in evolution are not an extension of individual variation in numbers of neurons, but rather occur through step changes that must simultaneously increase numbers of neurons and cause cell size to increase, rather than decrease. PMID:26082686
Development of Portable Aerosol Mobility Spectrometer for Personal and Mobile Aerosol Measurement
Kulkarni, Pramod; Qi, Chaolong; Fukushima, Nobuhiko
2017-01-01
We describe the development of a Portable Aerosol Mobility Spectrometer (PAMS) for size distribution measurement of submicrometer aerosol. The spectrometer is designed for use in personal or mobile aerosol characterization studies, measures approximately 22.5 × 22.5 × 15 cm, and weighs about 4.5 kg including the battery. PAMS uses an electrical mobility technique to measure the number-weighted particle size distribution of aerosol in the 10–855 nm range. Aerosol particles are electrically charged using a dual-corona bipolar corona charger, followed by classification in a cylindrical miniature differential mobility analyzer. A condensation particle counter is used to detect and count particles. The mobility classifier was operated at an aerosol flow rate of 0.05 L/min, and at two different user-selectable sheath flows of 0.2 L/min (for the wider size range of 15–855 nm) and 0.4 L/min (for higher size resolution over the size range of 10.6–436 nm). The instrument was operated in voltage stepping mode to retrieve the size distribution, which took approximately 1–2 minutes, depending on the configuration. Sizing accuracy and resolution were probed and found to be within the 25% limit of the NIOSH criterion for direct-reading instruments (NIOSH 2012). Comparison of size distribution measurements from PAMS and other commercial mobility spectrometers showed good agreement. The instrument offers unique measurement capability for on-person or mobile size distribution measurements of ultrafine and nanoparticle aerosol. PMID:28413241
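As a rough illustration of how voltage stepping in a cylindrical differential mobility analyzer maps to particle size, the Python sketch below inverts the standard centroid-mobility expression for diameter using the Cunningham slip correction. The geometry, flow, and gas values are illustrative placeholders, not the PAMS design parameters, and the function names are hypothetical.

```python
import numpy as np

E_CHARGE = 1.602e-19   # elementary charge, C
MU_AIR = 1.81e-5       # dynamic viscosity of air, Pa.s
MFP_AIR = 68e-9        # mean free path of air, m

def slip_correction(d):
    """Cunningham slip correction for a particle of diameter d (m)."""
    kn = 2 * MFP_AIR / d
    return 1 + kn * (1.257 + 0.4 * np.exp(-1.1 / kn))

def dma_selected_diameter(voltage, q_sheath, length, r1, r2, n_charges=1):
    """Invert the centroid mobility selected by a cylindrical DMA at a given
    voltage to a mobility-equivalent diameter, via damped fixed-point
    iteration on the slip correction. Geometry/flow arguments are placeholders."""
    z_star = q_sheath * np.log(r2 / r1) / (2 * np.pi * length * voltage)
    d = 100e-9                              # initial guess, m
    for _ in range(100):
        d_new = n_charges * E_CHARGE * slip_correction(d) / (3 * np.pi * MU_AIR * z_star)
        d = 0.5 * (d + d_new)               # damped update for stable convergence
    return d

# stepping the classifier voltage sweeps the selected size
for v in (100.0, 1000.0, 5000.0):
    d = dma_selected_diameter(v, q_sheath=0.2e-3 / 60,  # 0.2 L/min in m3/s
                              length=0.05, r1=0.009, r2=0.019)
    print(f"{v:6.0f} V -> {d * 1e9:6.1f} nm")
```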
Correlative feature analysis of FFDM images
NASA Astrophysics Data System (ADS)
Yuan, Yading; Giger, Maryellen L.; Li, Hui; Sennett, Charlene
2008-03-01
Identifying the corresponding image pair of a lesion is an essential step for combining information from different views of the lesion to improve the diagnostic ability of both radiologists and CAD systems. Because of the non-rigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this study, we present a computerized framework that differentiates the corresponding images from different views of a lesion from non-corresponding ones. A dual-stage segmentation method, which employs an initial radial gradient index (RGI)-based segmentation and an active contour model, was initially applied to extract mass lesions from the surrounding tissues. Then various lesion features were automatically extracted from each of the two views of each lesion to quantify the characteristics of margin, shape, size, texture, and context of the lesion, as well as its distance to the nipple. We employed a two-step method to select an effective subset of features, and combined it with a BANN to obtain a discriminant score, which yielded an estimate of the probability that the two images are of the same physical lesion. ROC analysis was used to evaluate the performance of the individual features and the selected feature subset in the task of distinguishing between corresponding and non-corresponding pairs. By using an FFDM database with 124 corresponding image pairs and 35 non-corresponding pairs, the distance feature yielded an AUC (area under the ROC curve) of 0.8 with leave-one-out evaluation by lesion, and the feature subset, which includes the distance feature, lesion size, and lesion contrast, yielded an AUC of 0.86. The improvement from using multiple features was statistically significant compared to single-feature performance (p < 0.001).
Pootakham, Wirulda; Sonthirod, Chutima; Naktang, Chaiwat; Jomchai, Nukoon; Sangsrakru, Duangjai; Tangphatsornruang, Sithichoke
2016-01-01
Advances in next generation sequencing have facilitated large-scale single nucleotide polymorphism (SNP) discovery in many crop species. The genotyping-by-sequencing (GBS) approach couples next generation sequencing with genome complexity reduction techniques to simultaneously identify and genotype SNPs. The choice of enzymes used in GBS library preparation depends on several factors including the number of markers required, the desired level of multiplexing, and whether the enrichment of genic SNPs is preferred. We evaluated various combinations of methylation-sensitive (AatII, PstI, MspI) and methylation-insensitive (SphI, MseI) enzymes for their effectiveness in genome complexity reduction and enrichment of genic SNPs. We discovered that the use of two methylation-sensitive enzymes effectively reduced genome complexity and did not require a size selection step. In contrast, the genome coverage of libraries constructed with methylation-insensitive enzymes was quite high, and an additional size selection step may be required to increase the overall read depth. We also demonstrated the effectiveness of methylation-sensitive enzymes in enriching for SNPs located in genic regions. When two methylation-insensitive enzymes were used, only 16% of SNPs identified were located in genes and 18% in the vicinity (± 5 kb) of the genic regions, while most SNPs resided in the intergenic regions. In contrast, a remarkable degree of enrichment was observed when two methylation-sensitive enzymes were employed. Almost two thirds of the SNPs were located either inside (32-36%) or in the vicinity (28-31%) of the genic regions. These results provide useful information to help researchers choose appropriate GBS enzymes in oil palm and other crop species.
NASA Astrophysics Data System (ADS)
Green, Kim; Brardinoni, Francesco; Alila, Younes
2014-05-01
We monitor bedload transport and water discharge at six stations in two forested headwater streams of the Columbia Mountains, Canada. The monitoring network of sediment traps is designed to examine the effects of channel bed texture, and the influence of alluvial (i.e., step pools and riffle pools) and semi-alluvial morphologies (i.e., boulder cascades and forced step pools) on bedload entrainment and transport. Results suggest that patterns of bedload entrainment are influenced by flow resistance, while the value of the critical dimensionless shear stress for mobilization of the surface D50 varies due to channel gradient, grain sheltering effects and, to a lesser extent, flow resistance. Regardless of channel morphology we observe: (i) equal-threshold entrainment for all mobile grains in channels with high grain and/or form resistance; and (ii) initial equal-threshold entrainment of calibers ≤ 22 mm, and subsequent size-selective entrainment of coarser material in channels with low form resistance (e.g., riffle pool). Scaled fractional analysis reveals that in reaches with high flow resistance most bedload transport occurs in partial mobility fashion relative to the available bed material and that only material finer than 16 mm attains full mobility during over-bank flows. Equal mobility transport for a wider range of grain sizes is achieved in reaches with reduced flow resistance. Evaluation of bedload rating curves across sites identifies that grain effects predominate with respect to bedload flux, whereas morphological effects (i.e., form resistance) play a secondary role. Application of selected empirical formulae developed in steep alpine channels presents variable success in predicting transport rates in the study reaches.
Fortuna, Sara; Fogolari, Federico; Scoles, Giacinto
2015-01-01
The design of new strong and selective binders is a key step towards the development of new sensing devices and effective drugs. Both affinity and selectivity can be increased through chelation, and here we theoretically explore the possibility of coupling two binders through a flexible linker. We prove the enhanced ability of double binders to keep hold of their target with a simple model in which a polymer composed of hard spheres interacts with a spherical macromolecule, such as a protein, through two sticky spots. By Monte Carlo simulations and thermodynamic integration we show the chelating effect to hold for coupling polymers whose radius of gyration is comparable to the size of the chelated particle. We show the binding free energy of flexible double binders to be higher than that of two single binders and to be maximized when the binding sites are at distances comparable to the mean free polymer end-to-end distance. The affinity of two coupled binders is therefore predicted to increase nonlinearly and in turn, by targeting two non-equivalent binding sites, this will lead to higher selectivity. PMID:26496975
Contact Electrification of Individual Dielectric Microparticles Measured by Optical Tweezers in Air.
Park, Haesung; LeBrun, Thomas W
2016-12-21
We measure charging of single dielectric microparticles after interaction with a glass substrate using optical tweezers to control the particle, measure its charge with a sensitivity of a few electrons, and precisely contact the particle with the substrate. Polystyrene (PS) microparticles adhered to the substrate can be selected based on size, shape, or optical properties and repeatedly loaded into the optical trap using a piezoelectric (PZT) transducer. Separation from the substrate leads to charge transfer through contact electrification. The charge on the trapped microparticles is measured from the response of the particle motion to a step excitation of a uniform electric field. The particle is then placed onto a target location of the substrate in a controlled manner. Thus, the triboelectric charging profile of the selected PS microparticle can be measured and controlled through repeated cycles of trap loading followed by charge measurement. Reversible optical trap loading and manipulation of the selected particle leads to new capabilities to study and control successive and small changes in surface interactions.
Shen, Chongfei; Liu, Hongtao; Xie, Xb; Luk, Keith Dk; Hu, Yong
2007-01-01
An adaptive noise canceller (ANC) has been used to improve the signal-to-noise ratio (SNR) of somatosensory evoked potentials (SEP). For efficient hardware implementation of the ANC, a fixed-point algorithm allows fast, cost-efficient construction and low power consumption in an FPGA design. However, it remains questionable whether the SNR improvement achieved by the fixed-point algorithm is as good as that achieved by the floating-point algorithm. This study compares the outputs of floating-point and fixed-point ANC algorithms applied to SEP signals. The selection of the step-size parameter (µ) was found to differ between the fixed-point and floating-point algorithms. In this simulation study, the outputs of the fixed-point ANC showed greater distortion from the real SEP signals than those of the floating-point ANC; however, the difference decreased with increasing µ. With an optimal selection of µ, the fixed-point ANC can achieve results as good as the floating-point algorithm.
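To make the comparison concrete, here is a minimal Python sketch of an LMS-based noise canceller in which fixed-point arithmetic is mimicked by quantizing the weights after each update. The quantization scheme, the step size µ, and the surrogate SEP signal are illustrative assumptions, not the hardware design or the data used in the study.

```python
import numpy as np

def lms_anc(ref, primary, mu, order=8, qbits=None):
    """LMS adaptive noise canceller. When qbits is given, weights are rounded
    to a fixed-point grid with qbits fractional bits after every update, a
    crude stand-in for an FPGA fixed-point implementation (assumption)."""
    w = np.zeros(order)
    e = np.zeros_like(primary)
    scale = 2 ** qbits if qbits else None
    for n in range(order - 1, len(ref)):
        u = ref[n - order + 1:n + 1][::-1]   # most recent reference samples
        e[n] = primary[n] - w @ u            # canceller output ~ recovered SEP
        w = w + mu * e[n] * u
        if scale:
            w = np.round(w * scale) / scale  # fixed-point quantization of weights
    return e

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 2000)
sep = np.sin(2 * np.pi * 30 * t)                       # surrogate evoked potential
ref = rng.standard_normal(t.size)                      # reference noise channel
primary = sep + np.convolve(ref, [0.7, 0.3], mode="same")
out_float = lms_anc(ref, primary, mu=0.05)
out_fixed = lms_anc(ref, primary, mu=0.05, qbits=8)    # a larger mu narrows the gap
```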
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumuluru, Jaya Shankar; McCulloch, Richard Chet James
In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent) computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest ascent algorithm each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite of the previous direction with the most benefit then the step size is reduced by a factor of 2, thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature) pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
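A minimal Python sketch of the adaptive steepest-ascent component described above is given below; the evolutionary half of the hybrid, the MATLAB GUI, and the pelleting objective functions are omitted, and the toy objective is an assumption for illustration only.

```python
import numpy as np

def adaptive_steepest_ascent(f, x0, step=0.1, iters=200):
    """Sketch of the adaptive hill-climbing step: each variable is perturbed by
    a small amount, the move giving the most improvement is taken, and the step
    size is halved whenever the best direction reverses (or no move improves)."""
    x = np.asarray(x0, dtype=float)
    best = f(x)
    prev_move = None                       # (variable index, sign) of last move
    for _ in range(iters):
        gains, moves = [], []
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                xt = x.copy()
                xt[i] += sign * step
                gains.append(f(xt) - best)
                moves.append((i, sign))
        k = int(np.argmax(gains))
        if gains[k] <= 0:
            step *= 0.5                    # stuck: refine the step and retry
            continue
        i, sign = moves[k]
        if prev_move == (i, -sign):
            step *= 0.5                    # direction reversed: adapt to terrain
        x[i] += sign * step
        best = f(x)
        prev_move = (i, sign)
    return x, best

# toy usage: maximize a concave surrogate of a pellet-quality objective
x_opt, f_opt = adaptive_steepest_ascent(lambda v: -np.sum((v - 1.5) ** 2),
                                        x0=[0.0, 0.0])
```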
Uemura, Kazuhiro; Yamasaki, Yukari; Onishi, Fumiaki; Kita, Hidetoshi; Ebihara, Masahiro
2010-11-01
A preliminary study of isopropanol (IPA) adsorption/desorption isotherms on a jungle-gym-type porous coordination polymer, [Zn(2)(bdc)(2)(dabco)](n) (1, H(2)bdc = 1,4-benzenedicarboxylic acid, dabco =1,4-diazabicyclo[2.2.2]octane), showed unambiguous two-step profiles via a highly shrunk intermediate framework. The results of adsorption measurements on 1, using probing gas molecules of alcohol (MeOH and EtOH) for the size effect and Me(2)CO for the influence of hydrogen bonding, show that alcohol adsorption isotherms are gradual two-step profiles, whereas the Me(2)CO isotherm is a typical type-I isotherm, indicating that a two-step adsorption/desorption is involved with hydrogen bonds. To further clarify these characteristic adsorption/desorption behaviors, selecting nitroterephthalate (bdc-NO(2)), bromoterephthalate (bdc-Br), and 2,5-dichloroterephthalate (bdc-Cl(2)) as substituted dicarboxylate ligands, isomorphous jungle-gym-type porous coordination polymers, {[Zn(2)(bdc-NO(2))(2)(dabco)]·solvents}(n) (2 ⊃ solvents), {[Zn(2)(bdc-Br)(2)(dabco)]·solvents}(n) (3 ⊃ solvents), and {[Zn(2)(bdc-Cl(2))(2)(dabco)]·solvents}(n) (4 ⊃ solvents), were synthesized and characterized by single-crystal X-ray analyses. Thermal gravimetry, X-ray powder diffraction, and N(2) adsorption at 77 K measurements reveal that [Zn(2)(bdc-NO(2))(2)(dabco)](n) (2), [Zn(2)(bdc-Br)(2)(dabco)](n) (3), and [Zn(2)(bdc-Cl(2))(2)(dabco)](n) (4) maintain their frameworks without guest molecules with Brunauer-Emmett-Teller (BET) surface areas of 1568 (2), 1292 (3), and 1216 (4) m(2) g(-1). As found in results of MeOH, EtOH, IPA, and Me(2)CO adsorption/desorption on 2-4, only MeOH adsorption on 2 shows an obvious two-step profile. Considering the substituent effects and adsorbate sizes, the hydrogen bonds, which are triggers for two-step adsorption, are formed between adsorbates and carboxylate groups at the corners in the pores, inducing wide pores to become narrow pores. Interestingly, such a two-step MeOH adsorption on 2 depends on the temperature, attributed to the small free-energy difference (ΔF(host)) between the two guest-free forms, wide and narrow pores.
An improved VSS NLMS algorithm for active noise cancellation
NASA Astrophysics Data System (ADS)
Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan
2017-08-01
In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared to other traditional adaptive filtering algorithms, but there is a contradiction between convergence speed and steady-state error that affects its performance. Here, we propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily tuned parameters, and effectively resolves the contradiction in NLMS. Simulation results show that the proposed algorithm simultaneously achieves good tracking ability, a fast convergence rate, and low steady-state error.
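Since the abstract does not give the exact step-size rule, the Python sketch below shows a generic NLMS filter whose step size is scheduled from the current error and the iteration count, purely to illustrate the structure of such an algorithm; the particular schedule and all parameter values are assumptions.

```python
import numpy as np

def vss_nlms(x, d, order=16, mu_max=1.0, mu_min=0.05, eps=1e-6):
    """NLMS with an illustrative variable step size: the step decays with the
    iteration count and scales with the normalized magnitude of the error."""
    w = np.zeros(order)
    y = np.zeros_like(d)
    e = np.zeros_like(d)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]      # most recent samples first
        y[n] = w @ u
        e[n] = d[n] - y[n]
        mu = mu_min + (mu_max - mu_min) * np.exp(-n / len(x)) * min(1.0, abs(e[n]))
        w += mu * e[n] * u / (u @ u + eps)    # normalized update
    return w, e

# toy system-identification usage
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, h, mode="full")[:x.size] + 0.01 * rng.standard_normal(x.size)
w_est, err = vss_nlms(x, d, order=8)
```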
Analysis Techniques for Microwave Dosimetric Data.
1985-10-01
[Fortran listing fragment: comment lines describing inputs for the starting frequency, the step size, and the number of steps in the frequency list, followed by a call to subroutine FILE2().]
Peltola, Tomi; Marttinen, Pekka; Vehtari, Aki
2012-01-01
High-dimensional datasets with large amounts of redundant information are nowadays available for hypothesis-free exploration of scientific questions. A particular case is genome-wide association analysis, where variations in the genome are searched for effects on disease or other traits. Bayesian variable selection has been demonstrated as a possible analysis approach, which can account for the multifactorial nature of the genetic effects in a linear regression model. Yet, the computation presents a challenge and application to large-scale data is not routine. Here, we study aspects of the computation using the Metropolis-Hastings algorithm for the variable selection: finite adaptation of the proposal distributions, multistep moves for changing the inclusion state of multiple variables in a single proposal, and multistep move size adaptation. We also experiment with a delayed rejection step for the multistep moves. Results on simulated and real data show an increase in sampling efficiency. We also demonstrate that with application-specific proposals, the approach can overcome a specific mixing problem in real data with 3822 individuals and 1,051,811 single nucleotide polymorphisms and uncover a variant pair with a synergistic effect on the studied trait. Moreover, we illustrate multimodality in the real dataset related to a restrictive prior distribution on the genetic effect sizes and advocate a more flexible alternative. PMID:23166669
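As a minimal illustration of the multistep moves discussed above, the Python sketch below flips the inclusion state of several variables in a single Metropolis-Hastings proposal and adapts the move size from the acceptance rate. The log_post function and the adaptation rule are placeholders, not the marginalized posterior or the finite-adaptation scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def multistep_flip_move(gamma, move_size, log_post, state):
    """One multistep add/remove move: the inclusion indicators of `move_size`
    randomly chosen variables are flipped in a single Metropolis-Hastings
    proposal. log_post stands in for the model's (marginalized) log posterior."""
    proposal = gamma.copy()
    idx = rng.choice(gamma.size, size=move_size, replace=False)
    proposal[idx] ^= True                      # flip inclusion state
    log_alpha = log_post(proposal, state) - log_post(gamma, state)
    if np.log(rng.random()) < log_alpha:
        return proposal, True
    return gamma, False

def adapt_move_size(move_size, accept_rate, target=0.3):
    """Crude move-size adaptation: grow the move when acceptance is high."""
    return max(1, move_size + (1 if accept_rate > target else -1))

# toy usage with a dummy posterior favouring exactly two included variables
gamma = np.zeros(100, dtype=bool)
for _ in range(500):
    gamma, _accepted = multistep_flip_move(
        gamma, move_size=3, state=None,
        log_post=lambda g, s: -abs(g.sum() - 2.0))
```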
Testing electroexplosive devices by programmed pulsing techniques
NASA Technical Reports Server (NTRS)
Rosenthal, L. A.; Menichelli, V. J.
1976-01-01
A novel method for testing electroexplosive devices is proposed wherein capacitor discharge pulses, with increasing energy in a step-wise fashion, are delivered to the device under test. The size of the energy increment can be programmed so that firing takes place after many, or after only a few, steps. The testing cycle is automatically terminated upon firing. An energy-firing contour relating the energy required to the programmed step size describes the single-pulse firing energy and the possible sensitization or desensitization of the explosive device.
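The testing cycle lends itself to a simple simulation. The Python sketch below delivers pulses of step-wise increasing energy until a hypothetical firing-probability curve triggers, recording the point on the energy-firing contour for each programmed step size; the probability model and all energy values are illustrative assumptions, not data from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

def programmed_pulse_test(step_joules, fire_prob, max_steps=1000):
    """Deliver capacitor-discharge pulses of step-wise increasing energy until
    the device fires; fire_prob(E) is a stand-in for the device's sensitivity."""
    energy = 0.0
    for n in range(1, max_steps + 1):
        energy += step_joules
        if rng.random() < fire_prob(energy):
            return n, energy           # testing cycle terminates upon firing
    return max_steps, energy

# energy-firing contour: firing energy vs. programmed step size
for step in (0.5e-3, 1e-3, 5e-3):
    n, e = programmed_pulse_test(step, fire_prob=lambda E: min(1.0, (E / 20e-3) ** 4))
    print(f"step={step * 1e3:.1f} mJ  fired after {n} pulses at {e * 1e3:.2f} mJ")
```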
Meyers, Robert W; Oliver, Jon L; Hughes, Michael G; Lloyd, Rhodri S; Cronin, John B
2017-04-01
Meyers, RW, Oliver, JL, Hughes, MG, Lloyd, RS, and Cronin, JB. Influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. J Strength Cond Res 31(4): 1009-1016, 2017-The aim of this study was to investigate the influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. Three hundred and seventy-five boys (age: 13.0 ± 1.3 years) completed a 30-m sprint test, during which maximal speed, step length, step frequency, contact time, and flight time were recorded using an optical measurement system. Body mass, height, leg length, and a maturity offset represented somatic variables. Step frequency accounted for the highest proportion of variance in speed (∼58%) in the pre-peak height velocity (pre-PHV) group, whereas step length explained the majority of the variance in speed (∼54%) in the post-PHV group. In the pre-PHV group, mass was negatively related to speed, step length, step frequency, and contact time; however, measures of stature had a positive influence on speed and step length yet a negative influence on step frequency. Speed and step length were also negatively influenced by mass in the post-PHV group, whereas leg length continued to positively influence step length. The results highlighted that pre-PHV boys may be deemed step frequency reliant, whereas post-PHV boys may be marginally step length reliant. Furthermore, the negative influence of body mass, both pre-PHV and post-PHV, suggests that training to optimize sprint performance in youth should include methods such as plyometric and strength training, where a high neuromuscular focus and the development of force production relative to body weight are key foci.
Creating ligand-free silicon germanium alloy nanocrystal inks.
Erogbogbo, Folarin; Liu, Tianhang; Ramadurai, Nithin; Tuccarione, Phillip; Lai, Larry; Swihart, Mark T; Prasad, Paras N
2011-10-25
Particle size is widely used to tune the electronic, optical, and catalytic properties of semiconductor nanocrystals. This contrasts with bulk semiconductors, where properties are tuned based on composition, either through doping or through band gap engineering of alloys. Ideally, one would like to control both size and composition of semiconductor nanocrystals. Here, we demonstrate production of silicon-germanium alloy nanoparticles by laser pyrolysis of silane and germane. We have used FTIR, TEM, XRD, EDX, SEM, and TOF-SIMS to conclusively determine their structure and composition. Moreover, we show that upon extended sonication in selected solvents, these bare nanocrystals can be stably dispersed without ligands, thereby providing the possibility of using them as an ink to make patterned films, free of organic surfactants, for device fabrication. The engineering of these SiGe alloy inks is an important step toward the low-cost fabrication of group IV nanocrystal optoelectronic, thermoelectric, and photovoltaic devices.
Bulk Preparation of Holey Graphene via Controlled Catalytic Oxidation
NASA Technical Reports Server (NTRS)
Connell, John (Inventor); Watson, Kent (Inventor); Ghose, Sayata (Inventor); Lin, Yi (Inventor)
2015-01-01
A scalable method allows preparation of bulk quantities of holey carbon allotropes with holes ranging from a few to over 100 nm in diameter. Carbon oxidation catalyst nanoparticles are first deposited onto a carbon allotrope surface in a facile, controllable, and solvent-free process. The catalyst-loaded carbons are then subjected to thermal treatment in air. The carbons in contact with the carbon oxidation catalyst nanoparticles are selectively oxidized into gaseous byproducts such as CO or CO2, leaving the surface with holes. The catalyst is then removed via refluxing in diluted nitric acid to obtain the final holey carbon allotropes. The average size of the holes correlates strongly with the size of the catalyst nanoparticles and is controlled by adjusting the catalyst precursor concentration. The temperature and time of the air oxidation step, and the catalyst removal treatment conditions, strongly affect the morphology of the holes.
Scanning tunneling microscope with a rotary piezoelectric stepping motor
NASA Astrophysics Data System (ADS)
Yakimov, V. N.
1996-02-01
A compact scanning tunneling microscope (STM) with a novel rotary piezoelectric stepping motor for coarse positioning has been developed. An inertial method of rotating the rotor by a pair of piezoplates is used in the piezomotor. The minimal angular step size was a few arcseconds, with a spindle working torque of up to 1 N·cm. The design of the STM was noticeably simplified by using a piezomotor with such a small step size. A shaft eccentrically attached to the piezomotor spindle made it possible to push and pull back the cylindrical bush carrying the tubular piezoscanner. The linear step of coarse positioning was about 50 nm. STM resolution in the vertical direction was better than 0.1 nm without external vibration isolation.
Lee, Yoo-Jung; Seo, Tae Hoon; Lee, Seula; Jang, Wonhee; Kim, Myung Jong; Sung, Jung-Suk
2018-01-01
Graphene is a noncytotoxic monolayer platform with unique physical, chemical, and biological properties. It has been demonstrated that graphene substrates may provide a promising biocompatible scaffold for stem cell therapy. Because chemical-vapor-deposited graphene has a two-dimensional polycrystalline structure, it is important to control the individual domain size to obtain desirable properties for the nanomaterial. However, the biological effects mediated by differences in the domain size of graphene have not yet been reported. On the basis of the control of graphene domains achieved by a one-step growth (1step-G, small domain) and a two-step growth (2step-G, large domain) process, we found that the neuronal differentiation of bone marrow-derived human mesenchymal stem cells (hMSCs) highly depended on the graphene domain size. 1step-G graphene had more defects at the domain boundaries (by a factor of 8.5) and a relatively low (13% lower) water-droplet contact angle compared to 2step-G graphene, leading to enhanced cell-substrate adhesion and upregulated neuronal differentiation of hMSCs. We confirmed that the strong interactions between cells and defects at the domain boundaries in 1step-G graphene arise from their relatively high surface energy, and are stronger than interactions between cells and graphene surfaces. Our results may provide valuable information for the development of graphene-based scaffolds by clarifying which properties of graphene domains influence cell adhesion efficacy and stem cell differentiation. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 106A: 43-51, 2018.
A Selection Method That Succeeds!
ERIC Educational Resources Information Center
Weitman, Catheryn J.
Provided a structural selection method is carried out, it is possible to find quality early childhood personnel. The hiring process involves five definite steps, each of which establishes a base for the next. A needs assessment formulating basic minimal qualifications is the first step. The second step involves review of current job descriptions…
Jing, X; Cimino, J J
2014-01-01
Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method to provide high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information, by applying it to large single and paired data sets taken from patient and bibliographic databases. Four case studies are used to illustrate the method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modification [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. The setting and selection of thresholds, such as thresholds for node counts, class counts, ratio values, p values (for diff data sets), and percentiles of selected class count thresholds, are demonstrated in detail in the case studies. The main steps include data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for different types of thresholds and the considerations for threshold selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with the most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about adverse effects of the medication rosiglitazone and new areas of interest in the ICD9-CM hierarchy associated with patients taking pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes. The graphical method provides summary views based on computation of usage frequency and semantic context of hierarchical terminologies. The method is applicable to large data sets (such as a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.
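The core thresholding step can be pictured with a few lines of Python: usage counts of hierarchical codes are computed, a count threshold is either given directly or derived from a percentile of the counts, and nodes below it are dropped before the summary graph is drawn. The parameter names and defaults are illustrative, not those of the published method.

```python
from collections import Counter

def filter_nodes(codes, min_count=None, percentile=99):
    """Keep only the codes whose usage frequency meets a count threshold; if no
    threshold is given, derive one from the requested percentile of counts."""
    counts = Counter(codes)
    if min_count is None:
        ordered = sorted(counts.values())
        k = min(int(len(ordered) * percentile / 100), len(ordered) - 1)
        min_count = ordered[k]
    return {code: n for code, n in counts.items() if n >= min_count}

kept = filter_nodes(["428.0", "401.9", "428.0", "250.00", "401.9", "401.9"],
                    percentile=50)   # keeps the more heavily used ICD9-CM codes
```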
NASA Astrophysics Data System (ADS)
Amalia, E.; Moelyadi, M. A.; Ihsan, M.
2018-04-01
The flow of air around a circular cylinder at a Reynolds number of 250,000 exhibits the von Kármán vortex street phenomenon, which is captured well only with an appropriate turbulence model. In this study, several turbulence models available in ANSYS Fluent 16.0 were tested for their ability to simulate the von Kármán vortex street, namely k-epsilon, SST k-omega, Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, the effect of time step size on the accuracy of the CFD simulation was examined. The simulations were carried out with two-dimensional and three-dimensional models and then compared with experimental data. For the two-dimensional model, the von Kármán vortex street was captured successfully using the SST k-omega turbulence model; for the three-dimensional model, it was captured using the Reynolds Stress model. The time step size affects the smoothness of the drag coefficient curves over time, as well as the running time of the simulation. The smaller the time step size, the smoother the resulting drag coefficient curves; a smaller time step size also gave a faster computation time.
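A back-of-the-envelope way to choose the time step size for such a simulation is to resolve the vortex shedding period with a fixed number of steps, using an assumed Strouhal number of roughly 0.2 for a circular cylinder in the subcritical regime. The sketch below is generic reasoning with placeholder values, not the settings used in the paper.

```python
def shedding_timestep(diameter, velocity, strouhal=0.2, steps_per_period=100):
    """Pick a time step so that one vortex shedding period (from an assumed
    Strouhal number) is resolved by steps_per_period time steps."""
    f_shed = strouhal * velocity / diameter     # shedding frequency, Hz
    return 1.0 / (f_shed * steps_per_period)

dt = shedding_timestep(diameter=0.1, velocity=36.5)   # roughly Re ~ 2.5e5 in air
```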
Multipinhole SPECT helical scan parameters and imaging volume
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang
Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is half of, and the angular step size is about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
NASA Astrophysics Data System (ADS)
Song, Bongyong; Park, Justin C.; Song, William Y.
2014-11-01
The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.
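The Barzilai-Borwein ingredient is compact enough to sketch. The Python fragment below runs a gradient-projection loop with the BB1 two-point step size on a toy non-negative least-squares problem; the objective, projection, and iteration count are placeholders for the CBCT data-fidelity plus smoothed-TV objective, and the selective function evaluation safeguard of GPBB-SFE is deliberately not included.

```python
import numpy as np

def gp_bb(grad, project, x0, iters=50):
    """Gradient projection with the Barzilai-Borwein (BB1) 2-point step size.
    grad and project are problem-specific callables supplied by the user."""
    x = project(np.asarray(x0, dtype=float))
    g = grad(x)
    step = 1e-2                                      # conservative initial step
    for _ in range(iters):
        x_new = project(x - step * g)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = s @ y
        step = (s @ s) / denom if denom > 0 else 1e-2   # BB1 step size
        x, g = x_new, g_new
    return x

# toy usage: non-negative least squares as a stand-in for the reconstruction
rng = np.random.default_rng(0)
A = rng.random((20, 10))
b = A @ rng.random(10)
sol = gp_bb(grad=lambda v: A.T @ (A @ v - b),
            project=lambda v: np.clip(v, 0.0, None),
            x0=np.zeros(10))
```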
NASA Astrophysics Data System (ADS)
Syamsuri, B. S.; Anwar, S.; Sumarna, O.
2017-09-01
This research aims to develop oxidation-reduction (redox) teaching material using the Four Steps Teaching Material Development (4S TMD) method, which consists of four steps: selection, structuring, characterization, and didactical reduction. This paper is the first part of the development and covers the selection and structuring steps. In the selection step, development of the teaching material begins with the development of redox concepts based on curriculum demands, followed by the development of fundamental concepts sourced from international textbooks, and finally the development of values or skills that can be integrated with the redox concepts. The results of this selection step are the subject matter of the redox concept and the values that can be integrated with it. In the structuring step, a concept map was developed to show the relationships between redox concepts; a macro structure to guide the systematic writing of the teaching material; and multiple representations connecting the macroscopic, submicroscopic, and symbolic levels. The result of these two steps is a draft of the teaching material. The draft was evaluated by an expert lecturer in the field of chemical education to assess its feasibility.
2017-01-01
Area-selective atomic layer deposition (ALD) is rapidly gaining interest because of its potential application in self-aligned fabrication schemes for next-generation nanoelectronics. Here, we introduce an approach for area-selective ALD that relies on the use of chemoselective inhibitor molecules in a three-step (ABC-type) ALD cycle. A process for area-selective ALD of SiO2 was developed comprising acetylacetone inhibitor (step A), bis(diethylamino)silane precursor (step B), and O2 plasma reactant (step C) pulses. Our results show that this process allows for selective deposition of SiO2 on GeO2, SiNx, SiO2, and WO3, in the presence of Al2O3, TiO2, and HfO2 surfaces. In situ Fourier transform infrared spectroscopy experiments and density functional theory calculations underline that the selectivity of the approach stems from the chemoselective adsorption of the inhibitor. The selectivity between different oxide starting surfaces and the compatibility with plasma-assisted or ozone-based ALD are distinct features of this approach. Furthermore, the approach offers the opportunity of tuning the substrate-selectivity by proper selection of inhibitor molecules. PMID:28850774
Genomic prediction in a nuclear population of layers using single-step models.
Yan, Yiyuan; Wu, Guiqin; Liu, Aiqiao; Sun, Congjiao; Han, Wenpeng; Li, Guangqi; Yang, Ning
2018-02-01
A single-step genomic prediction method has been proposed to improve the accuracy of genomic prediction by incorporating information from both genotyped and ungenotyped animals. The objective of this study is to compare the prediction performance of the single-step model with 2-step models and pedigree-based models in a nuclear population of layers. A total of 1,344 chickens across 4 generations were genotyped by a 600 K SNP chip. Four traits were analyzed, i.e., body weight at 28 wk (BW28), egg weight at 28 wk (EW28), laying rate at 38 wk (LR38), and Haugh unit at 36 wk (HU36). In predicting offspring, individuals from generations 1 to 3 were used as training data and females from generation 4 were used as the validation set. The accuracies of breeding values predicted by pedigree BLUP (PBLUP), genomic BLUP (GBLUP), SSGBLUP, and single-step blending (SSBlending) were compared for both genotyped and ungenotyped individuals. For genotyped females, GBLUP performed no better than PBLUP because of the small size of the training data, while the 2 single-step models predicted more accurately than the PBLUP model. The average predictive abilities of SSGBLUP and SSBlending were 16.0% and 10.8% higher than the PBLUP model across traits, respectively. Furthermore, the predictive abilities for ungenotyped individuals were also enhanced. The average improvements in prediction ability were 5.9% and 1.5% for the SSGBLUP and SSBlending models, respectively. It was concluded that single-step models, especially the SSGBLUP model, can yield more accurate prediction of genetic merits and are preferable for practical implementation of genomic selection in layers. © 2017 Poultry Science Association Inc.
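For readers unfamiliar with the single-step machinery, its distinctive ingredient is the combined relationship matrix H, whose inverse augments the pedigree-based A inverse with genomic information for the genotyped block. The Python sketch below shows that construction under the standard ssGBLUP formula; the blending and scaling of G applied in practice (and in SSBlending) are omitted, and the variable names are placeholders.

```python
import numpy as np

def h_inverse(A_inv, A22_inv, G_inv, genotyped_idx):
    """Standard single-step construction: H^-1 equals A^-1 with (G^-1 - A22^-1)
    added to the block corresponding to genotyped animals. A_inv is the inverse
    pedigree relationship matrix, A22_inv the inverse of its genotyped block,
    and G_inv the inverse genomic relationship matrix."""
    H_inv = A_inv.copy()
    block = np.ix_(genotyped_idx, genotyped_idx)
    H_inv[block] += G_inv - A22_inv
    return H_inv
```

This H inverse then replaces the pedigree A inverse in the usual mixed-model equations, which is how genomic information propagates to ungenotyped relatives and is reflected in the improved accuracies reported above.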
Biolistic- and Agrobacterium-mediated transformation protocols for wheat.
Tamás-Nyitrai, Cecília; Jones, Huw D; Tamás, László
2012-01-01
After rice, wheat is considered to be the most important world food crop, and the demand for high-quality wheat flour is increasing. Although there are no GM varieties currently grown, wheat is an important target for biotechnology, and we anticipate that GM wheat will be commercially available in 10-15 years. In this chapter, we summarize the main features and challenges of wheat transformation and then describe detailed protocols for the production of transgenic wheat plants both by biolistic and Agrobacterium-mediated DNA-delivery. Although these methods are used mainly for bread wheat (Triticum aestivum L.), they can also be successfully applied, with slight modifications, to tetraploid durum wheat (T. turgidum L. var. durum). The appropriate size and developmental stage of explants (immature embryo-derived scutella), the conditions to produce embryogenic callus tissues, and the methods to regenerate transgenic plants under increasing selection pressure are provided in the protocol. To illustrate the application of herbicide selection system, we have chosen to describe the use of the plasmid pAHC25 for biolistic transformation, while for Agrobacterium-mediated transformation the binary vector pAL156 (incorporating both the bar gene and the uidA gene) has been chosen. Beside the step-by-step methodology for obtaining stably transformed and normal fertile plants, procedures for screening and testing transgenic wheat plants are also discussed.
Adsorbent for metal ions and method of making and using
White, Lloyd R.; Lundquist, Susan H.
1999-01-01
A method comprises the step of spray-drying a solution or slurry comprising (alkali metal or ammonium) (metal) hexacyanoferrate particles in a liquid, to provide monodisperse, substantially spherical particles in a yield of at least 70 percent of theoretical yield and having a particle size in the range of 1 to 500 micrometers, said particles being active towards Cs ions. The particles, which can be of a single salt or a combination of salts, can be used free flowing, in columns or beds, or entrapped in a nonwoven, fibrous web or matrix or a cast porous membrane, to selectively remove Cs ions from aqueous solutions.
Adsorbent for metal ions and method of making and using
White, L.R.; Lundquist, S.H.
1999-08-10
A method comprises the step of spray-drying a solution or slurry comprising (alkali metal or ammonium) (metal) hexacyanoferrate particles in a liquid, to provide monodisperse, substantially spherical particles in a yield of at least 70 percent of theoretical yield and having a particle size in the range of 1 to 500 micrometers, said particles being active towards Cs ions. The particles, which can be of a single salt or a combination of salts, can be used free flowing, in columns or beds, or entrapped in a nonwoven, fibrous web or matrix or a cast porous membrane, to selectively remove Cs ions from aqueous solutions. 2 figs.
Adsorbent for metal ions and method of making and using
White, Lloyd R.; Lundquist, Susan H.
2000-01-01
A method comprises the step of spray-drying a solution or slurry comprising (alkali metal or ammonium) (metal) hexacyanoferrate particles in a liquid, to provide monodisperse, substantially spherical particles in a yield of at least 70 percent of theoretical yield and having a particle size in the range of 1 to 500 micrometers, said particles being active towards Cs ions. The particles, which can be of a single salt or a combination of salts, can be used free flowing, in columns or beds, or entrapped in a nonwoven, fibrous web or matrix or a cast porous membrane, to selectively remove Cs ions from aqueous solutions.
Photoluminescent carbon dots synthesized by microwave treatment for selective image of cancer cells.
Yang, Xudong; Yang, Xue; Li, Zhenyu; Li, Shouying; Han, Yexuan; Chen, Yang; Bu, Xinyuan; Su, Chunyan; Xu, Hong; Jiang, Yingnan; Lin, Quan
2015-10-15
In this work, a simple, low-cost, one-step microwave approach has been demonstrated for the synthesis of water-soluble carbon dots (C-dots). The average size of the resulting C-dots is about 4 nm. From the photoluminescence (PL) measurements, the C-dots exhibit excellent biocompatibility and intense PL with a high quantum yield (QY) of ca. 25%. Significantly, the C-dots have excellent biocompatibility and the capacity to specifically target cells overexpressing the folate receptor (FR). These exciting results indicate the as-prepared C-dots are a promising biocompatible probe for cancer diagnosis and treatment. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Smith, G. E.; Springer, G. S.; Rimon, Y.
1983-01-01
A method is presented for formulating the boundary conditions in implicit finite-difference form needed for obtaining solutions to the compressible Navier-Stokes equations by the Beam and Warming implicit factored method. The usefulness of the method was demonstrated (a) by establishing the boundary conditions applicable to the analysis of the flow inside an axisymmetric piston-cylinder configuration and (b) by calculating velocities and mass fractions inside the cylinder for different geometries and different operating conditions. Stability, selection of time step and grid sizes, and computer time requirements are discussed in reference to the piston-cylinder problem analyzed.
A random rule model of surface growth
NASA Astrophysics Data System (ADS)
Mello, Bernardo A.
2015-02-01
Stochastic models of surface growth are usually based on randomly choosing a substrate site to perform iterative steps, as in the etching model, Mello et al. (2001) [5]. In this paper I modify the etching model to perform a sequential, instead of random, substrate scan. The randomness is introduced not in the site selection but in the choice of the rule to be followed at each site. The change positively affects the study of dynamic and asymptotic properties, by reducing the finite-size effect and the short-time anomaly and by increasing the saturation time. It also has computational benefits: better use of the cache memory and the possibility of parallel implementation.
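The following Python sketch illustrates the idea of a sequential substrate scan with randomness only in the rule applied at each site. The two rules used (an etching-model-like growth rule versus no change) are placeholder choices, not the exact rule set of the paper, which is assumed here only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sequential_scan_growth(L=256, steps=200):
    """Sweep the substrate sequentially; at each site choose a rule at random.
    The growth rule raises the site by one unit and pulls lower neighbours up
    to its height (an etching-model-like rule, assumed for illustration)."""
    h = np.zeros(L, dtype=int)
    for _ in range(steps):
        for i in range(L):                      # sequential, not random, scan
            if rng.random() < 0.5:              # random choice of rule
                left, right = (i - 1) % L, (i + 1) % L
                h[left] = max(h[left], h[i])
                h[right] = max(h[right], h[i])
                h[i] += 1
    return h

roughness = sequential_scan_growth().std()     # interface width of the surface
```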
Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I
2009-01-01
Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of the stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods, respectively.
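A bare-bones version of the multi-scale LoG estimation can be written in a few lines with SciPy: the scale-normalized LoG response is maximized over a handful of scales in a window around the seed point, and the best scale gives a radius estimate. The candidate pruning and size-based sorting of the published method are omitted, and the sqrt(3) radius relation and all parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def estimate_nodule(volume, seed, sigmas=(1, 2, 4, 8), search_radius=10):
    """Return an estimated nodule center and radius near the given seed point
    by maximizing the scale-normalized LoG response over a set of scales."""
    seed = np.asarray(seed)
    lo = np.maximum(seed - search_radius, 0)
    hi = np.minimum(seed + search_radius, np.array(volume.shape) - 1)
    window = volume[tuple(slice(int(a), int(b) + 1) for a, b in zip(lo, hi))]
    best_resp, best_center, best_sigma = -np.inf, None, None
    for s in sigmas:
        # minus sign: bright blobs give a negative LoG response at their center
        resp = -(s ** 2) * gaussian_laplace(window.astype(float), sigma=s)
        idx = np.unravel_index(np.argmax(resp), resp.shape)
        if resp[idx] > best_resp:
            best_resp, best_center, best_sigma = resp[idx], lo + np.array(idx), s
    radius = best_sigma * np.sqrt(3)        # assumed 3-D LoG blob-radius relation
    return best_center, radius

# toy usage: a synthetic bright sphere of radius ~5 voxels, seed slightly off-center
zz, yy, xx = np.mgrid[:40, :40, :40]
vol = ((zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 25).astype(float)
center, radius = estimate_nodule(vol, seed=(22, 19, 21))
```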
Damianos, Konstantina; Ferrando, Riccardo
2012-02-21
The structural modifications of small supported gold clusters caused by realistic surface defects (steps) in the MgO(001) support are investigated by computational methods. The most stable gold cluster structures on a stepped MgO(001) surface are searched for in the size range up to 24 Au atoms, and locally optimized by density-functional calculations. Several structural motifs are found within energy differences of 1 eV: inclined leaflets, arched leaflets, pyramidal hollow cages and compact structures. We show that the interaction with the step clearly modifies the structures with respect to adsorption on the flat defect-free surface. We find that leaflet structures clearly dominate for smaller sizes. These leaflets are either inclined and quasi-horizontal, or arched, at variance with the case of the flat surface in which vertical leaflets prevail. With increasing cluster size pyramidal hollow cages begin to compete against leaflet structures. Cage structures become more and more favourable as size increases. The only exception is size 20, at which the tetrahedron is found as the most stable isomer. This tetrahedron is however quite distorted. The comparison of two different exchange-correlation functionals (Perdew-Burke-Ernzerhof and local density approximation) show the same qualitative trends. This journal is © The Royal Society of Chemistry 2012
The effect of external forces on discrete motion within holographic optical tweezers.
Eriksson, E; Keen, S; Leach, J; Goksör, M; Padgett, M J
2007-12-24
Holographic optical tweezers is a widely used technique to manipulate the individual positions of optically trapped micron-sized particles in a sample. The trap positions are changed by updating the holographic image displayed on a spatial light modulator. The updating process takes a finite time, resulting in a temporary decrease of the intensity, and thus the stiffness, of the optical trap. We have investigated this change in trap stiffness during the updating process by studying the motion of an optically trapped particle in a fluid flow. We found a highly nonlinear behavior of the change in trap stiffness vs. changes in step size. For step sizes up to approximately 300 nm the trap stiffness is decreasing. Above 300 nm the change in trap stiffness remains constant for all step sizes up to one particle radius. This information is crucial for optical force measurements using holographic optical tweezers.
Finite-difference modeling with variable grid-size and adaptive time-step in porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Wu, Guochen
2014-04-01
Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influence of rock properties on characteristics of the seismic wavefield. However, finite-difference forward modeling is usually implemented with a global spatial grid size and time step, which incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this limitation, this paper develops a staggered-grid finite-difference scheme with variable grid size and time step for elastic wave modeling in porous media. Variable finite-difference coefficients and wavefield interpolation are used to realize the transition of wave propagation between regions of different grid size. The accuracy and efficiency of the algorithm are shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
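The basic bookkeeping behind a variable grid-size / variable time-step scheme can be sketched in a few lines: each region gets a time step from its own CFL-type condition, and the coarse-region step is kept an integer multiple of the fine-region step so the grids can be synchronized when wavefields are interpolated at the interface. This is generic finite-difference reasoning with placeholder values, not the paper's staggered-grid scheme for porous media.

```python
import numpy as np

def local_cfl_timestep(dx_regions, vp_regions, cfl=0.5):
    """Return the fine-region time step and, for each region, the integer
    number of fine steps per local step (so region clocks stay synchronized)."""
    dts = [cfl * dx / vp for dx, vp in zip(dx_regions, vp_regions)]
    dt_fine = min(dts)
    ratios = [max(1, int(np.floor(dt / dt_fine))) for dt in dts]
    return dt_fine, ratios

# fine grid (2.5 m) around a small reservoir target, coarse grid (10 m) elsewhere
dt_fine, ratios = local_cfl_timestep(dx_regions=[2.5, 10.0],
                                     vp_regions=[2500.0, 3500.0])
```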
A two-step super-Gaussian independent component analysis approach for fMRI data.
Ge, Ruiyang; Yao, Li; Zhang, Hang; Long, Zhiying
2015-09-01
Independent component analysis (ICA) has been widely applied to functional magnetic resonance imaging (fMRI) data analysis. Although ICA assumes that the sources underlying the data are statistically independent, it usually ignores additional properties of the sources, such as sparsity. In this study, we propose a two-step super-Gaussian ICA (2SGICA) method that incorporates the sparse prior of the sources into the ICA model. 2SGICA uses the super-Gaussian ICA (SGICA) algorithm, which is based on a simplified Lewicki-Sejnowski model, to obtain the initial source estimate in the first step. Using a kernel estimator technique, the source density is acquired and fitted to a Laplacian function based on the initial source estimates. The fitted Laplacian prior is used for each source in the second SGICA step. Moreover, the automatic target generation process for initial value generation is used in 2SGICA to guarantee the stability of the algorithm. An adaptive step size selection criterion is also implemented in the proposed algorithm. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of 2SGICA and made a performance comparison between Infomax ICA, FastICA, mean field ICA (MFICA) with a Laplacian prior, sparse online dictionary learning (ODL), SGICA, and 2SGICA. Both simulated and real fMRI experiments showed that 2SGICA was the most robust to noise, and had the best spatial detection power and time course estimation among the six methods. Copyright © 2015. Published by Elsevier Inc.
Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures
NASA Astrophysics Data System (ADS)
Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.
2013-05-01
An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow the linearization of the forward- and back-projection formulae. The algorithm processes the data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history for the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data and extracts the absolute phase value, thereby removing phase ambiguities and reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated by applying it to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets, which are then used to focus the scene and determine relative target-target distances.
Proposed variations of the stepped-wedge design can be used to accommodate multiple interventions.
Lyons, Vivian H; Li, Lingyu; Hughes, James P; Rowhani-Rahbar, Ali
2017-06-01
Stepped-wedge design (SWD) cluster-randomized trials have traditionally been used for evaluating a single intervention. We aimed to explore design variants suitable for evaluating multiple interventions in an SWD trial. We identified four specific variants of the traditional SWD that would allow two interventions to be conducted within a single cluster-randomized trial: concurrent, replacement, supplementation, and factorial SWDs. These variants were chosen to flexibly accommodate study characteristics that limit a one-size-fits-all approach for multiple interventions. In the concurrent SWD, each cluster receives only one intervention, unlike the other variants. The replacement SWD supports two interventions that will not or cannot be used at the same time. The supplementation SWD is appropriate when the second intervention requires the presence of the first intervention, and the factorial SWD supports the evaluation of intervention interactions. The precision for estimating intervention effects varies across the four variants. Selection of the appropriate design variant should be driven by the research question while considering the trade-off between the number of steps, number of clusters, restrictions for concurrent implementation of the interventions, lingering effects of each intervention, and precision of the intervention effect estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
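To make the "concurrent" variant concrete, the Python sketch below builds a cluster-by-period assignment matrix in which the clusters are split between two interventions and each half follows its own staggered roll-out. The allocation rule and the 0/1/2 coding are illustrative assumptions, not the designs analyzed in the paper.

```python
import numpy as np

def concurrent_swd(n_clusters, n_steps):
    """Rows = clusters, columns = periods; 0 = control, 1 = intervention A,
    2 = intervention B. Each half of the clusters crosses over at staggered
    steps, and no cluster ever receives both interventions."""
    periods = n_steps + 1
    half = n_clusters // 2
    design = np.zeros((n_clusters, periods), dtype=int)
    for c in range(n_clusters):
        step = (c % half) * n_steps // half + 1   # staggered crossover time
        label = 1 if c < half else 2
        design[c, step:] = label
    return design

print(concurrent_swd(n_clusters=6, n_steps=3))
```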
Rock sampling. [method for controlling particle size distribution
NASA Technical Reports Server (NTRS)
Blum, P. (Inventor)
1971-01-01
A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.
Tellegen, Cassandra L; Sanders, Matthew R
2013-05-01
This systematic review and meta-analysis evaluated the treatment effects of a behavioral family intervention, Stepping Stones Triple P (SSTP) for parents of children with disabilities. SSTP is a system of five intervention levels of increasing intensity and narrowing population reach. Twelve studies, including a total of 659 families, met eligibility criteria. Studies needed to have evaluated SSTP, be written in English or German, contribute original data, and have sufficient data for analyses. No restrictions were placed on study design. A series of meta-analyses were performed for seven different outcome categories. Analyses were conducted on the combination of all four levels of SSTP for which evidence exists (Levels 2-5), and were also conducted separately for each level of SSTP. Significant moderate effect sizes were found for all levels of SSTP for reducing child problems, the primary outcome of interest. On secondary outcomes, significant overall effect sizes were found for parenting styles, parenting satisfaction and efficacy, parental adjustment, parental relationship, and observed child behaviors. No significant treatment effects were found for observed parenting behaviors. Moderator analyses showed no significant differences in effect sizes across the levels of SSTP intervention, with the exception of child observations. Risk of bias within and across studies was assessed. Analyses suggested that publication bias and selective reporting bias were not likely to have heavily influenced the findings. The overall evidence base supported the effectiveness of SSTP as an intervention for improving child and parent outcomes in families of children with disabilities. Limitations and future research directions are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...
2016-08-09
Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to the CFL restriction on the ratio of the time step size to the spatial step size that is typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.
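For orientation, the CFL bound that explicit field solvers must respect, and that an unconditionally stable solver is free to exceed, can be written out in a few lines; the grid spacing below is purely illustrative.

```python
import numpy as np

c = 3.0e8            # wave speed (speed of light), m/s
dx = dy = 1.0e-3     # spatial step sizes, m (illustrative values)

# 2-D explicit scheme stability bound: c * dt * sqrt(1/dx^2 + 1/dy^2) <= 1.
dt_cfl = 1.0 / (c * np.sqrt(1.0 / dx**2 + 1.0 / dy**2))
print(f"largest stable explicit time step: {dt_cfl:.3e} s")

# An unconditionally stable implicit wave solver lets dt be chosen for accuracy
# (e.g., resolving the plasma period) rather than stability, so dt >> dt_cfl
# becomes admissible.
dt_chosen = 50 * dt_cfl
print(f"example time step at 50x the CFL limit: {dt_chosen:.3e} s")
```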
A microfluidic separation platform using an array of slanted ramps
NASA Astrophysics Data System (ADS)
Risbud, Sumedh; Bernate, Jorge; Drazer, German
2013-03-01
The separation of the different components of a sample is a crucial step in many micro- and nano-fluidic applications, including the detection of infections, the capture of circulating tumor cells, the isolation of proteins, RNA and DNA, to mention but a few. Vector chromatography, in which different species migrate in different directions in a planar microfluidic device thus achieving spatial as well as temporal resolution, offers the promise of high selectivity along with high throughput. In this work, we present a microfluidic vector chromatography platform consisting of slanted ramps in a microfluidic channel for the separation of suspended particles. We construct these ramps using inclined UV lithography, such that the inclined portion of the ramps is upstream. We show that particles of different size displace laterally to a different extent when driven by a flow field over a slanted ramp. The flow close to the ramp reorients along the ramp, causing the size-dependent deflection of the particles. The cumulative effect of an array of these ramps would cause particles of different size to migrate in different directions, thus allowing their passive and continuous separation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vyas, S.N.; Patwardhan, S.R.; Vijayalakshmi, S.
Adsorption on carbon molecular sieves (CMS) prepared by coke deposition has become an area of growing interest because of their microporous nature and favorable separation factors, based on size and shape selectivity, for many gaseous systems. In the present work, CMS was synthesized from coconut shell through three major steps, namely carbonization, activation, and coke deposition by hydrocarbon cracking. The crushed, washed, and sieved granules of coconut shell (particle size 2-3 mm) were pretreated with sodium silicate solution and oven-dried at 150 °C to create the inorganic sites necessary for coke deposition. Carbonization and activation of the dried granules were carried out at 800 °C, for 30 min each. The activated char thus produced was subjected to hydrocarbon cracking at 600 °C for periods varying from 30 to 180 min. The product samples were characterized in terms of adsorption isotherm, kinetic adsorption curve, surface area, pore volume, pore size distribution, and characteristic energy for adsorption using O2, N2, C2H2, CO2, C3H6, and CH4.
A role for selective contraception of individuals in conservation.
Cope, Holly R; Hogg, Carolyn J; White, Peter J; Herbert, Catherine A
2018-06-01
Contraception has an established role in managing overabundant populations and preventing undesirable breeding in zoos. We propose that it can also be used strategically and selectively in conservation to increase the genetic and behavioral quality of the animals. In captive breeding programs, it is becoming increasingly important to maximize the retention of genetic diversity by managing the reproductive contribution of each individual and preventing genetically suboptimal breeding through the use of selective contraception. Reproductive suppression of selected individuals in conservation programs has further benefits of allowing animals to be housed as a group in extensive enclosures without interfering with breeding recommendations, which reduces adaptation to captivity and facilitates the expression of wild behaviors and social structures. Before selective contraception can be incorporated into a breeding program, the most suitable method of fertility control must be selected, and this can be influenced by factors such as species life history, age, ease of treatment, potential for reversibility, and desired management outcome for the individual or population. Contraception should then be implemented in the population following a step-by-step process. In this way, it can provide crucial, flexible control over breeding to promote the physical and genetic health and sustainability of a conservation dependent species held in captivity. For Tasmanian devils (Sarcophilus harrisii), black-flanked rock wallabies (Petrogale lateralis), and burrowing bettongs (Bettongia lesueur), contraception can benefit their conservation by maximizing genetic diversity and behavioral integrity in the captive breeding program, or, in the case of the wallabies and bettongs, by reducing populations to a sustainable size when they become locally overabundant. In these examples, contraceptive duration relative to reproductive life, reversibility, and predictability of the contraceptive agent being used are important to ensure the potential for individuals to reproduce following cessation of contraception, as exemplified by the wallabies when their population crashed and needed females to resume breeding. © 2017 Society for Conservation Biology.
An, Kunsik; Hong, Sukjoon; Han, Seungyong; Lee, Hyungman; Yeo, Junyeob; Ko, Seung Hwan
2014-02-26
We demonstrate selective laser sintering of silver (Ag) nanoparticle (NP) ink using a digital micromirror device (DMD) for the facile fabrication of 2D electrode patterns without any conventional lithographic means or scanning procedure. An arbitrary 2D pattern with a lateral size of 25 μm × 25 μm and a height of 160 nm is readily produced on a glass substrate by a short exposure to a 532 nm Nd:YAG continuous-wave laser. The resultant metal pattern exhibits a low electrical resistivity of 10.8 μΩ·cm and also shows fine edge sharpness by virtue of the low thermal conductivity of the Ag NP ink. Furthermore, 10 × 10 star-shaped micropattern arrays are fabricated through a step-and-repeat scheme to demonstrate the potential of this process for large-area metal pattern fabrication.
Towards high-resolution neutron imaging on IMAT
NASA Astrophysics Data System (ADS)
Minniti, T.; Tremsin, A. S.; Vitucci, G.; Kockelmann, W.
2018-01-01
IMAT is a new cold-neutron imaging facility at the ISIS neutron spallation source at the Rutherford Appleton Laboratory, U.K. The ISIS pulsed source enables energy-selective and energy-resolved neutron imaging via time-of-flight (TOF) techniques, which are available in addition to the white-beam neutron radiography and tomography options. A spatial resolution of about 50 μm for white-beam neutron radiography was achieved early in the IMAT commissioning phase. In this work we have made the first steps towards achieving higher spatial resolution. White-beam radiography with 18 μm spatial resolution was achieved in this experiment. This result was made possible by using an event-counting neutron pixel detector based on micro-channel plates (MCPs) coupled to a Timepix readout chip with 55 μm pixels, and by employing an event-centroiding technique. The prospects for energy-selective neutron radiography in this centroiding mode are discussed.
Murphy, Patrick J. M.
2014-01-01
Background: Hydrophobic interaction chromatography (HIC) most commonly requires experimental determination (i.e., scouting) in order to select an optimal chromatographic medium for purifying a given target protein. Neither a two-step purification of untagged green fluorescent protein (GFP) from crude bacterial lysate using sequential HIC and size exclusion chromatography (SEC), nor HIC column scouting elution profiles of GFP, has been previously reported. Methods and Results: Bacterial lysate expressing recombinant GFP was sequentially adsorbed to commercially available HIC columns containing butyl, octyl, and phenyl-based HIC ligands coupled to matrices of varying bead size. The lysate was fractionated using a linear ammonium phosphate salt gradient at constant pH. Collected HIC eluate fractions containing retained GFP were then pooled and further purified using high-resolution preparative SEC. Significant differences in presumptive GFP elution profiles were observed using in-line absorption spectrophotometry (A395) and post-run fluorimetry. SDS-PAGE and western blot demonstrated that fluorometric detection was the more accurate indicator of GFP elution in both the HIC and SEC purification steps. Comparison of composite HIC column scouting data indicated that a phenyl ligand coupled to a 34 µm matrix produced the highest degree of target protein capture and separation. Conclusions: Conducting two-step protein purification using the preferred HIC medium followed by SEC resulted in a final, concentrated product with >98% protein purity. In-line absorbance spectrophotometry was not as precise an indicator of GFP elution as post-run fluorimetry. These findings demonstrate the importance of utilizing a combination of detection methods when evaluating purification strategies. GFP is a well-characterized model protein, used heavily in educational settings and by researchers with limited protein purification experience, and the data and strategies presented here may aid in the development of other HIC-compatible protein purification schemes. PMID:25254496
Surface treated carbon catalysts produced from waste tires for fatty acids to biofuel conversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hood, Zachary D.; Adhikari, Shiba P.; Wright, Marcus W.
A method of making solid acid catalysts includes the step of sulfonating waste tire pieces in a first sulfonation step. The sulfonated waste tire pieces are pyrolyzed to produce carbon composite pieces having a pore size of less than 10 nm. The carbon composite pieces are then ground to produce carbon composite powders having a size of less than 50 μm. The carbon composite particles are sulfonated in a second sulfonation step to produce sulfonated solid acid catalysts. Methods of making biofuels and solid acid catalysts are also disclosed.
NASA Astrophysics Data System (ADS)
Yang, GuanYa; Wu, Jiang; Chen, ShuGuang; Zhou, WeiJun; Sun, Jian; Chen, GuanHua
2018-06-01
A neural network-based first-principles method for predicting the heat of formation (HOF) was previously demonstrated to achieve chemical accuracy across a broad spectrum of target molecules [L. H. Hu et al., J. Chem. Phys. 119, 11501 (2003)]. However, its accuracy deteriorates with increasing molecular size. A closer inspection reveals a systematic correlation between the prediction error and the molecular size, which appears correctable by further statistical analysis, calling for a more sophisticated machine learning algorithm. Despite the apparent difference between simple and complex molecules, all the essential physical information is already present in a carefully selected set of small-molecule representatives. A model that captures the fundamental physics would be able to predict large and complex molecules from information extracted only from a small-molecule database. To this end, a size-independent, multi-step, multi-variable linear regression-neural network-B3LYP method is developed in this work, which successfully improves the overall prediction accuracy by training with smaller molecules only. In particular, the calculation errors for larger molecules are drastically reduced to the same magnitude as those of the smaller molecules. Specifically, the method is based on a 164-molecule database consisting of molecules made of hydrogen and carbon. Four molecular descriptors were selected to encode a molecule's characteristics, among which the raw HOF calculated from B3LYP and the molecular size are included. Upon the size-independent machine learning correction, the mean absolute deviation (MAD) of the B3LYP/6-311+G(3df,2p)-calculated HOF is reduced from 16.58 to 1.43 kcal/mol and from 17.33 to 1.69 kcal/mol for the training and testing sets (small molecules), respectively. Furthermore, the MAD of the testing set (large molecules) is reduced from 28.75 to 1.67 kcal/mol.
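The correction strategy, raw DFT values plus simple descriptors fed through a regression/neural-network correction trained only on small molecules, can be sketched generically. Everything below (descriptors, synthetic data, model sizes) is an illustrative assumption rather than the authors' 164-molecule database or exact architecture.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic stand-ins: features x = [raw DFT HOF, n_C, n_H, molecular "size"],
# target y = reference HOF. A size-correlated systematic error is built in.
def make(n, size_range):
    size = rng.uniform(*size_range, n)
    n_c = np.round(size).astype(float)
    n_h = 2 * n_c + 2
    y_true = -20.0 * n_c + 5.0 * rng.standard_normal(n)
    raw = y_true + 0.8 * size + 3.0          # size-dependent DFT bias
    return np.column_stack([raw, n_c, n_h, size]), y_true

X_small, y_small = make(120, (2, 8))         # train on small molecules only
X_large, y_large = make(40, (10, 25))        # test on larger molecules

# Step 1: linear regression models the size-correlated systematic error.
lin = LinearRegression().fit(X_small, y_small - X_small[:, 0])
# Step 2: a small neural network models whatever residual remains.
res = (y_small - X_small[:, 0]) - lin.predict(X_small)
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                   random_state=0).fit(X_small, res)

pred = X_large[:, 0] + lin.predict(X_large) + net.predict(X_large)
print("MAD of raw values      :", np.mean(np.abs(X_large[:, 0] - y_large)))
print("MAD after ML correction:", np.mean(np.abs(pred - y_large)))
```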
Soós, Reka; Whiteman, Andrew D; Wilson, David C; Briciu, Cosmin; Nürnberger, Sofia; Oelz, Barbara; Gunsilius, Ellen; Schwehn, Ekkehard
2017-08-01
This is the second of two papers reporting the results of a major study considering 'operator models' for municipal solid waste management (MSWM) in emerging and developing countries. Part A documents the evidence base, while Part B presents a four-step decision support system for selecting an appropriate operator model in a particular local situation. Step 1 focuses on understanding local problems and framework conditions; Step 2 on formulating and prioritising local objectives; and Step 3 on assessing capacities and conditions, and thus identifying strengths and weaknesses, which underpin selection of the operator model. Step 4A addresses three generic questions, including public versus private operation, inter-municipal co-operation and integration of services. For steps 1-4A, checklists have been developed as decision support tools. Step 4B helps choose locally appropriate models from an evidence-based set of 42 common operator models (coms); decision support tools here are a detailed catalogue of the coms, setting out advantages and disadvantages of each, and a decision-making flowchart. The decision-making process is iterative, repeating steps 2-4 as required. The advantages of a more formal process include avoiding pre-selection of a particular com known to and favoured by one decision maker, and also its assistance in identifying the possible weaknesses and aspects to consider in the selection and design of operator models. To make the best of whichever operator models are selected, key issues which need to be addressed include the capacity of the public authority as 'client', management in general and financial management in particular.
Method and apparatus for sizing and separating warp yarns using acoustical energy
Sheen, Shuh-Haw; Chien, Hual-Te; Raptis, Apostolos C.; Kupperman, David S.
1998-01-01
A slashing process for preparing warp yarns for weaving operations including the steps of sizing and/or desizing the yarns in an acoustic resonance box and separating the yarns with a leasing apparatus comprised of a set of acoustically agitated lease rods. The sizing step includes immersing the yarns in a size solution contained in an acoustic resonance box. Acoustic transducers are positioned against the exterior of the box for generating an acoustic pressure field within the size solution. Ultrasonic waves that result from the acoustic pressure field continuously agitate the size solution to effect greater mixing and more uniform application and penetration of the size onto the yarns. The sized yarns are then separated by passing the warp yarns over and under lease rods. Electroacoustic transducers generate acoustic waves along the longitudinal axis of the lease rods, creating a shearing motion on the surface of the rods for splitting the yarns.
Protein complex purification from Thermoplasma acidophilum using a phage display library.
Hubert, Agnes; Mitani, Yasuo; Tamura, Tomohiro; Boicu, Marius; Nagy, István
2014-03-01
We developed a novel protein complex isolation method using a single-chain variable fragment (scFv) based phage display library in a two-step purification procedure. We adapted the antibody-based phage display technology which has been developed for single target proteins to a protein mixture containing about 300 proteins, mostly subunits of Thermoplasma acidophilum complexes. T. acidophilum protein specific phages were selected and corresponding scFvs were expressed in Escherichia coli. E. coli cell lysate containing the expressed His-tagged scFv specific against one antigen protein and T. acidophilum crude cell lysate containing intact target protein complexes were mixed, incubated and subjected to protein purification using affinity and size exclusion chromatography steps. This method was confirmed to isolate intact particles of thermosome and proteasome suitable for electron microscopy analysis and provides a novel protein complex isolation strategy applicable to organisms where no genetic tools are available. Copyright © 2013 Elsevier B.V. All rights reserved.
ASCI visualization tool evaluation, Version 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kegelmeyer, P.
1997-04-01
The charter of the ASCI Visualization Common Tools subgroup was to investigate and evaluate 3D scientific visualization tools. As part of that effort, a Tri-Lab evaluation effort was launched in February of 1996. The first step was to agree on a thoroughly documented list of 32 features against which all tool candidates would be evaluated. These evaluation criteria were both gleaned from a user survey and determined from informed extrapolation into the future, particularly as concerns the 3D nature and extremely large size of ASCI data sets. The second step was to winnow a field of 41 candidate tools down to 11. The selection principle was to be as inclusive as practical, retaining every tool that seemed to hold any promise of fulfilling all of ASCI's visualization needs. These 11 tools were then closely investigated by volunteer evaluators distributed across LANL, LLNL, and SNL. This report contains the results of those evaluations, as well as a discussion of the evaluation philosophy and criteria.
Dang, Jing-Shuang; Wang, Wei-Wei; Zheng, Jia-Jia; Nagase, Shigeru; Zhao, Xiang
2017-10-05
Although the existence of the Stone-Wales (5-7) defect at graphene edges has been demonstrated experimentally, theoretical understanding of its formation mechanism is still incomplete. In particular, the regioselectivity of multistep reactions at the edge (self-reconstruction and growth with a foreign carbon feedstock) is essential for understanding the kinetic behavior of reactive boundaries, but such investigations are still lacking. Herein, using finite-sized models, multistep reconstructions and carbon-dimer additions at a bare zigzag edge are studied with density functional theory calculations. The zigzag to 5-7 transformation is shown to be a site-selective process that generates alternating 5-7 pairs sequentially, and the first step, which has the largest barrier, is suggested to be the rate-determining step. Conversely, successive C2 insertions on the active edge are calculated to elucidate the formation of the 5-7 edge during graphene growth. A metastable intermediate containing a fragment of three sequentially fused pentagons is identified as the key structure for 5-7 edge formation. © 2017 Wiley Periodicals, Inc.
Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions
NASA Astrophysics Data System (ADS)
Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.
2016-09-01
Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution owing to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, the growth in computation time is just below linear as the total number of grid points increases, whereas other methods achieved this only on selected test problems or not at all.
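The kernel whose cost the paper targets, a Krylov-projection approximation of a matrix function acting on a vector, can be sketched with a plain Arnoldi iteration. This is the standard approach the abstract takes as its baseline, not the KSS modification itself, and the test operator below is an illustrative scaled 1-D Laplacian.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expmv(A, v, m):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace (Arnoldi)."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # exp(A) v ~ beta * V_m exp(H_m) e_1
    return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]

# Example: a mildly stiff 1-D diffusion-like operator (illustrative grid size).
n = 200
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A *= (n + 1) ** 2 * 1e-4                       # scaled discrete Laplacian
v = np.random.default_rng(3).standard_normal(n)

approx = krylov_expmv(A, v, m=30)
exact = expm(A) @ v
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```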
Two types of amorphous protein particles facilitate crystal nucleation.
Yamazaki, Tomoya; Kimura, Yuki; Vekilov, Peter G; Furukawa, Erika; Shirai, Manabu; Matsumoto, Hiroaki; Van Driessche, Alexander E S; Tsukamoto, Katsuo
2017-02-28
Nucleation, the primary step in crystallization, dictates the number of crystals, the distribution of their sizes, the polymorph selection, and other crucial properties of the crystal population. We used time-resolved liquid-cell transmission electron microscopy (TEM) to perform an in situ examination of the nucleation of lysozyme crystals. Our TEM images revealed that mesoscopic clusters, which are similar to those previously assumed to consist of a dense liquid and serve as nucleation precursors, are actually amorphous solid particles (ASPs) and act only as heterogeneous nucleation sites. Crystalline phases never form inside them. We demonstrate that a crystal appears within a noncrystalline particle assembling lysozyme on an ASP or a container wall, highlighting the role of heterogeneous nucleation. These findings represent a significant departure from the existing formulation of the two-step nucleation mechanism while reaffirming the role of noncrystalline particles. The insights gained may have significant implications in areas that rely on the production of protein crystals, such as structural biology, pharmacy, and biophysics, and for the fundamental understanding of crystallization mechanisms.
Systemic safety project selection tool.
DOT National Transportation Integrated Search
2013-07-01
"The Systemic Safety Project Selection Tool presents a process for incorporating systemic safety planning into traditional safety management processes. The Systemic Tool provides a step-by-step process for conducting systemic safety analysis; conside...
Interspecific competition alters nonlinear selection on offspring size in the field.
Marshall, Dustin J; Monro, Keyne
2013-02-01
Offspring size is one of the most important life-history traits with consequences for both the ecology and evolution of most organisms. Surprisingly, formal estimates of selection on offspring size are rare, and the degree to which selection (particularly nonlinear selection) varies among environments remains poorly explored. We estimate linear and nonlinear selection on offspring size, module size, and senescence rate for a sessile marine invertebrate in the field under three different intensities of interspecific competition. The intensity of competition strongly modified the strength and form of selection acting on offspring size. We found evidence for differences in nonlinear selection across the three environments. Our results suggest that the fitness returns of a given offspring size depend simultaneously on the environmental context and on the context of other offspring traits. Offspring size effects can be more pervasive with regard to their influence on the fitness returns of other traits than previously recognized, and we suggest that the evolution of offspring size cannot be understood in isolation from other traits. Overall, variability in the form and strength of selection on offspring size in nature may reduce the efficacy of selection on offspring size and maintain variation in this trait. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.
1990-08-01
the guidance in this report. 1-4. Scope This guidance covers selection of projects suitable for a One-Step or Two-Step approach, development of design...conducted, focus on resolving proposal deficiencies; prices are not "negotiated" in the common use of the term. A Request for Proposal (RFP) states project ...carefully examines experience and past performance in the design of similar projects and building types. Quality of
Applications of step-selection functions in ecology and conservation.
Thurfjell, Henrik; Ciuti, Simone; Boyce, Mark S
2014-01-01
Recent progress in positioning technology facilitates the collection of massive amounts of sequential spatial data on animals. This has led to new opportunities and challenges when investigating animal movement behaviour and habitat selection. Tools like Step Selection Functions (SSFs) are relatively new, powerful models for studying resource selection by animals moving through the landscape. SSFs compare environmental attributes of observed steps (the linear segment between two consecutive observations of position) with alternative random steps taken from the same starting point. SSFs have been used to study habitat selection, human-wildlife interactions, movement corridors, and dispersal behaviours in animals. SSFs also have the potential to depict resource selection at multiple spatial and temporal scales. There are several aspects of SSFs where consensus has not yet been reached, such as how to analyse the data, when to consider habitat covariates along linear paths between observations rather than at their endpoints, how many random steps should be considered to measure availability, and how to account for individual variation. In this review we aim to address all these issues, as well as to highlight weak features of this modelling approach that should be developed by further research. Finally, we suggest that SSFs could be integrated with state-space models to classify behavioural states when estimating SSFs.
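The core used-versus-available construction behind an SSF can be sketched in a few lines. The step-length and turn-angle distributions, the covariate, and the coordinates below are all placeholder assumptions; a real analysis would draw random steps from empirically fitted distributions, extract covariates from GIS layers, and fit a conditional logistic regression across many strata.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two consecutive GPS fixes define an observed step; an SSF pairs each observed
# step with random alternatives drawn from the same starting point, typically by
# resampling step lengths and turning angles.
start = np.array([0.0, 0.0])
observed_end = np.array([120.0, 40.0])          # metres (illustrative)

n_random = 10
step_lengths = rng.gamma(shape=2.0, scale=60.0, size=n_random)   # assumed distribution
turn_angles = rng.uniform(-np.pi, np.pi, size=n_random)
random_ends = start + np.column_stack([step_lengths * np.cos(turn_angles),
                                       step_lengths * np.sin(turn_angles)])

def habitat_covariate(xy):
    """Placeholder covariate (e.g., distance to some feature); a real analysis
    would sample a raster at the step end point or along the step."""
    return np.linalg.norm(xy, axis=-1)

used = habitat_covariate(observed_end)
available = habitat_covariate(random_ends)
print("used covariate      :", round(float(used), 1))
print("available covariates:", np.round(available, 1))
# Each used/available stratum would then enter a conditional logistic regression
# to estimate the step-selection coefficients.
```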
Code of Federal Regulations, 2014 CFR
2014-07-01
40 CFR 141.81 (2014): Applicability of corrosion control treatment steps to small, medium-size and large water systems. Protection of Environment; Environmental Protection Agency; National Primary Drinking Water Regulations; Control of Lead and Copper.
NASA Astrophysics Data System (ADS)
Wang, Zhan-zhi; Xiong, Ying
2013-04-01
Growing interest has been devoted to contra-rotating propellers (CRPs) owing to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise and low hull vibration. Compared with a single-screw system, the open water performance of CRPs is more difficult to predict because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS with a sliding mesh method, considering the effects of the computational time step size and the turbulence model. A validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R & D Center. Compared with the experimental data, it is shown that RANS with the sliding mesh method and the SST k-ω turbulence model gives good precision in the open water performance prediction of contra-rotating propellers, and that a small time step size improves the accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.
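In sliding-mesh propeller computations the time step is often expressed as a rotation increment per step, which makes the trade-off easy to see. The rotation rate and angular increments below are illustrative values, not those used in the cited study.

```python
# Relating the sliding-mesh time step to a target rotation increment per step.
# The rotation rate and angular increments are illustrative assumptions.
n_rps = 12.0                        # propeller rotation rate, revolutions per second
for deg_per_step in (0.5, 1.0, 2.0):
    dt = deg_per_step / (360.0 * n_rps)        # seconds per time step
    steps_per_rev = 360.0 / deg_per_step
    print(f"{deg_per_step:>4} deg/step -> dt = {dt:.3e} s, "
          f"{steps_per_rev:.0f} steps per revolution")
```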
Steps to consider for effective decision making when selecting and prioritizing eHealth services.
Vimarlund, Vivian; Davoody, Nadia; Koch, Sabine
2013-01-01
Making the best choice for an organization when selecting IT applications or eHealth services is not always easy, as many parameters must be taken into account. The aim of this paper is to explore steps to support effective decision making when selecting and prioritizing eHealth services prior to implementation and/or procurement. The steps presented in this paper were identified by interviewing nine key stakeholders at Stockholm County Council. They are intended to serve as a guide for decision making and aim to identify the objectives and expected effects, the technical, organizational, and economic requirements, and the opportunities that are important to consider before decisions are taken. The steps and their respective issues and variables are concretized in a number of templates to be filled in by decision makers when selecting and prioritizing eHealth services.
Method and system for radioisotope generation
Toth, James J.; Soderquist, Chuck Z.; Greenwood, Lawrence R.; Mattigod, Shas V.; Fryxell, Glen E.; O'Hara, Matthew J.
2014-07-15
A system and a process for producing selected isotopic daughter products from parent materials characterized by the steps of loading the parent material upon a sorbent having a functional group configured to selectively bind the parent material under designated conditions, generating the selected isotopic daughter products, and eluting said selected isotopic daughter products from the sorbent. In one embodiment, the process also includes the step of passing an eluent formed by the elution step through a second sorbent material that is configured to remove a preselected material from said eluent. In some applications a passage of the material through a third sorbent material after passage through the second sorbent material is also performed.
Khara, Dinesh C; Berger, Yaron; Ouldridge, Thomas E
2018-01-01
We present a detailed coarse-grained computer simulation and single molecule fluorescence study of the walking dynamics and mechanism of a DNA bipedal motor striding on a DNA origami. In particular, we study the dependency of the walking efficiency and stepping kinetics on step size. The simulations accurately capture and explain three different experimental observations. These include a description of the maximum possible step size, a decrease in the walking efficiency over short distances and a dependency of the efficiency on the walking direction with respect to the origami track. The former two observations were not expected and are non-trivial. Based on this study, we suggest three design modifications to improve future DNA walkers. Our study demonstrates the ability of the oxDNA model to resolve the dynamics of complex DNA machines, and its usefulness as an engineering tool for the design of DNA machines that operate in the three spatial dimensions. PMID:29294083
Liu, Zhi-Hua; Chen, Hong-Zhang
2017-01-01
The simultaneous saccharification and fermentation (SSF) of corn stover biomass for ethanol production was performed by integrating steam explosion (SE) pretreatment, hydrolysis and fermentation. Higher SE pretreatment severity and a two-step size reduction increased the specific surface area, swollen volume and water holding capacity of the steam exploded corn stover (SECS) and hence facilitated the efficiency of hydrolysis and fermentation. The ethanol production and yield in SSF increased with decreasing particle size and with post-washing of the SECS prior to fermentation to remove inhibitors. Under SE conditions of 1.5 MPa and 9 min using a 2.0 cm particle size, glucan recovery and enzymatic conversion to glucose were 86.2% and 87.2%, respectively. The ethanol concentration and yield were 45.0 g/L and 85.6%, respectively. With this two-step size reduction and post-washing strategy, the water utilization efficiency, sugar recovery and conversion, and ethanol concentration and yield of the SSF process were improved. Copyright © 2016 Elsevier Ltd. All rights reserved.
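The reported yield can be related to the glucan fed to the SSF through the usual stoichiometric factors (1.111 g glucose per g glucan, 0.511 g ethanol per g glucose). The glucan loading below is an assumed placeholder chosen only to make the arithmetic concrete, not a value reported in the study.

```python
# Ethanol-yield bookkeeping with standard stoichiometric factors.
glucan_loading_g_per_l = 92.0    # assumed glucan fed to the SSF, g/L (placeholder)
ethanol_g_per_l = 45.0           # reported ethanol titre, g/L

theoretical = glucan_loading_g_per_l * 1.111 * 0.511   # maximum ethanol, g/L
yield_pct = 100.0 * ethanol_g_per_l / theoretical
print(f"theoretical maximum ethanol: {theoretical:.1f} g/L")
print(f"ethanol yield: {yield_pct:.1f}% of theoretical")
```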
Koli, Sunil H; Mohite, Bhavana V; Suryawanshi, Rahul K; Borase, Hemant P; Patil, Satish V
2018-05-01
The development of safe and eco-friendly methods for metal nanoparticle synthesis is in increasing demand, owing to the emerging environmental and biological harms of the hazardous chemicals used in existing nanosynthesis methods. The present investigation reports a rapid, one-step, eco-friendly and green approach for the formation of nanosized silver particles (AgNPs) using extracellular, non-toxic, colored fungal metabolites (Monascus pigments, MPs). The formation of nanosized silver particles utilizing Monascus pigments was confirmed after exposure of the reaction mixture to sunlight, visually by a color change and further established by spectrophotometric analysis. The size, shape, and topography of the synthesized MPs-AgNPs were characterized using different microscopic and spectroscopic techniques, i.e., FE-SEM, HR-TEM, and DLS. The average size of the MPs-AgNPs was found to be 10-40 nm with a spherical shape; the particles were highly stable and well dispersed in solution. HR-TEM and XRD confirmed the crystalline nature of the MPs-AgNPs. The biocidal potential of the MPs-AgNPs was evaluated against three bacterial pathogens, Pseudomonas aeruginosa, Escherichia coli, and Staphylococcus aureus, and the MPs-AgNPs significantly inhibited the growth of all three. Anti-biofilm activity of the MPs-AgNPs was recorded against antibiotic-resistant P. aeruginosa. In addition, colorimetric metal sensing using MPs-AgNPs was studied; among the metals tested, selective Hg2+ sensing at micromolar concentrations was observed. In conclusion, this is a rapid (within 12-15 min), one-step, environment-friendly method for the synthesis of AgNPs, and the synthesized MPs-AgNPs could be used as a potential antibacterial agent against antibiotic-resistant bacterial pathogens.
NASA Astrophysics Data System (ADS)
Durech, Josef; Hanus, Josef; Delbo, Marco; Ali-Lagoa, Victor; Carry, Benoit
2014-11-01
Convex shape models and spin vectors of asteroids are now routinely derived from their disk-integrated lightcurves by the lightcurve inversion method of Kaasalainen et al. (2001, Icarus 153, 37). These shape models can then be used in combination with thermal infrared data and a thermophysical model to derive other physical parameters - size, albedo, macroscopic roughness and thermal inertia of the surface. In this classical two-step approach, the shape and spin parameters are kept fixed during the thermophysical modeling, in which the emitted thermal flux is computed from the surface temperature obtained by solving a 1-D heat diffusion equation in the sub-surface layers. A novel method of simultaneous inversion of optical and infrared data was presented by Durech et al. (2012, LPI Contribution No. 1667, id.6118). The new algorithm uses the same convex shape representation as the lightcurve inversion but optimizes all relevant physical parameters simultaneously (including the shape, size, rotation vector, thermal inertia, albedo, surface roughness, etc.), which leads to a better fit to the thermal data and a reliable estimation of model uncertainties. We applied this method to selected asteroids using their optical lightcurves from archives and thermal infrared data observed by the Wide-field Infrared Survey Explorer (WISE) satellite. We will (i) show several examples of how well our model fits both the optical and infrared data, (ii) discuss the uncertainty of the derived parameters (namely the thermal inertia), (iii) compare results obtained with the two-step approach with those obtained by our method, (iv) discuss the advantages of this simultaneous approach with respect to the classical two-step approach, and (v) highlight the possibility of applying this approach to the tens of thousands of asteroids for which enough WISE and optical data exist.
Aldridge Whitehead, Jennifer M; Wolf, Erik J; Scoville, Charles R; Wilken, Jason M
2014-10-01
Stair ascent can be difficult for individuals with transfemoral amputation because of the loss of knee function. Most individuals with transfemoral amputation use either a step-to-step (nonreciprocal, advancing one stair at a time) or skip-step strategy (nonreciprocal, advancing two stairs at a time), rather than a step-over-step (reciprocal) strategy, because step-to-step and skip-step allow the leading intact limb to do the majority of work. A new microprocessor-controlled knee (Ottobock X2®) uses flexion/extension resistance to allow step-over-step stair ascent. We compared self-selected stair ascent strategies between conventional and X2® prosthetic knees, examined between-limb differences, and differentiated stair ascent mechanics between X2® users and individuals without amputation. We also determined which factors are associated with differences in knee position during initial contact and swing within X2® users. Fourteen individuals with transfemoral amputation participated in stair ascent sessions while using conventional and X2® knees. Ten individuals without amputation also completed a stair ascent session. Lower-extremity stair ascent joint angles, moments, and powers and ground reaction forces were calculated using inverse dynamics during the self-selected strategy and cadence and during controlled cadence using a step-over-step strategy. One individual with amputation self-selected a step-over-step strategy while using a conventional knee, whereas 10 individuals self-selected a step-over-step strategy while using X2® knees. Individuals with amputation used greater prosthetic knee flexion during initial contact (32.5°, p = 0.003) and swing (68.2°, p = 0.001), with higher intersubject variability, while using X2® knees compared to conventional knees (initial contact: 1.6°, swing: 6.2°). The increased prosthetic knee flexion while using X2® knees normalized knee kinematics to individuals without amputation during swing (88.4°, p = 0.179) but not during initial contact (65.7°, p = 0.002). Prosthetic knee flexion during initial contact and swing was positively correlated with prosthetic limb hip power during pull-up (r = 0.641, p = 0.046) and push-up/early swing (r = 0.993, p < 0.001), respectively. Participants with transfemoral amputation were more likely to self-select a step-over-step strategy, similar to individuals without amputation, while using X2® knees than conventional prostheses. Additionally, the increased prosthetic knee flexion used with X2® knees placed large power demands on the hip during pull-up and push-up/early swing. A modified strategy that uses less knee flexion can be used to allow step-over-step ascent in individuals with less hip strength.
Activity Monitors Step Count Accuracy in Community-Dwelling Older Adults.
Johnson, Marquell
2015-01-01
Objective: To examine the step count accuracy of activity monitors in community-dwelling older adults. Method: Twenty-nine participants aged 67.70 ± 6.07 years participated. Three pedometers and the Actical accelerometer step count function were compared with the actual steps taken during a 200-m walk around an indoor track and during treadmill walking at three different speeds. Results: There was no statistical difference between activity monitor step counts and actual steps during self-selected pace walking. During treadmill walking at 0.67 m∙s⁻¹, all activity monitor step counts were significantly different from actual steps. During treadmill walking at 0.894 m∙s⁻¹, the Omron HJ-112 pedometer step counts were not significantly different from actual steps. During treadmill walking at 1.12 m∙s⁻¹, the Yamax SW-200 pedometer steps were significantly different from actual steps. Discussion: Activity monitor selection should be deliberate when examining the walking behaviors of community-dwelling older adults, especially for those who walk at a slower pace.
Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.
2004-01-01
Accurate computation of sensitivity derivatives is becoming an important topic in Computational Fluid Dynamics (CFD) because of the recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with the choice of step sizes and can be cumbersome for geometric variables. The cost per design variable of computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice because of the difficulty of choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement, because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached a full maturity level for production codes, especially in parallel computing environments.
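The complex-variable technique referred to in the title sidesteps the subtractive cancellation that limits finite-difference step sizes: the input is perturbed along the imaginary axis and the imaginary part of the output yields a first derivative with no differencing. A minimal sketch on a scalar test function (any analytic function works; this one is only an illustration):

```python
import numpy as np

def f(x):
    """Stand-in for a cost function returned by a flow solver."""
    return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

x0 = 1.5
h_fd = 1e-8        # central-difference step: too small amplifies round-off,
                   # too large amplifies truncation error
h_cs = 1e-20       # complex step: can be made tiny with no cancellation penalty

central_diff = (f(x0 + h_fd) - f(x0 - h_fd)) / (2.0 * h_fd)
complex_step = np.imag(f(x0 + 1j * h_cs)) / h_cs

print("central difference   :", central_diff)
print("complex step         :", complex_step)
print("relative disagreement:", abs(central_diff - complex_step) / abs(complex_step))
```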
Vasilev, Nikolay; Schmitz, Christian; Grömping, Ulrike; Fischer, Rainer; Schillberg, Stefan
2014-01-01
A large-scale statistical experimental design was used to determine essential cultivation parameters that affect biomass accumulation and geraniol production in transgenic tobacco (Nicotiana tabacum cv. Samsun NN) cell suspension cultures. The carbohydrate source played a major role in determining the geraniol yield and factors such as filling volume, inoculum size and light were less important. Sucrose, filling volume and inoculum size had a positive effect on geraniol yield by boosting growth of plant cell cultures whereas illumination of the cultures stimulated the geraniol biosynthesis. We also found that the carbohydrates sucrose and mannitol showed polarizing effects on biomass and geraniol accumulation. Factors such as shaking frequency, the presence of conditioned medium and solubilizers had minor influence on both plant cell growth and geraniol content. When cells were cultivated under the screened conditions for all the investigated factors, the cultures produced ∼5.2 mg/l geraniol after 12 days of cultivation in shaking flasks which is comparable to the yield obtained in microbial expression systems. Our data suggest that industrial experimental designs based on orthogonal arrays are suitable for the selection of initial cultivation parameters prior to the essential medium optimization steps. Such designs are particularly beneficial in the early optimization steps when many factors must be screened, increasing the statistical power of the experiments without increasing the demand on time and resources. PMID:25117009
Optimized spray drying process for preparation of one-step calcium-alginate gel microspheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popeski-Dimovski, Riste
Calcium-alginate microparticles have been used extensively in drug delivery systems. Here we establish a one-step method for the preparation of internally gelated microparticles with spherical shape and narrow size distribution. We use four types of alginate with different G/M ratios and molar weights. The size of the particles is measured using light diffraction and scanning electron microscopy. The measurements showed that microparticles with a size distribution around 4 micrometers can be prepared with this method, and SEM imaging showed that the particles are spherical in shape.
Shear Melting of a Colloidal Glass
NASA Astrophysics Data System (ADS)
Eisenmann, Christoph; Kim, Chanjoong; Mattsson, Johan; Weitz, David A.
2010-01-01
We use confocal microscopy to explore shear melting of colloidal glasses, which occurs at strains of ~0.08, coinciding with a strongly non-Gaussian step size distribution. For larger strains, the particle mean square displacement increases linearly with strain and the step size distribution becomes Gaussian. The effective diffusion coefficient varies approximately linearly with shear rate, consistent with a modified Stokes-Einstein relationship in which the thermal energy is replaced by the shear energy and the length scale is set by the size of cooperatively moving regions consisting of ~3 particles.
Comparison of two integration methods for dynamic causal modeling of electrophysiological data.
Lemaréchal, Jean-Didier; George, Nathalie; David, Olivier
2018-06-01
Dynamic causal modeling (DCM) is a methodological approach to study effective connectivity among brain regions. Based on a set of observations and a biophysical model of brain interactions, DCM uses a Bayesian framework to estimate the posterior distribution of the free parameters of the model (e.g. modulation of connectivity) and to infer architectural properties of the most plausible model (i.e. model selection). When modeling electrophysiological event-related responses, estimation of the model relies on integrating the system of delay differential equations (DDEs) that describes the dynamics of the system. In this technical note, we compared two numerical schemes for the integration of DDEs. The first, and standard, scheme approximates the DDEs (more precisely, the state of the system with respect to conduction delays among brain regions) using ordinary differential equations (ODEs) and solves them with a fixed step size. The second scheme uses a dedicated DDE solver with adaptive step sizes to control the error, making it theoretically more accurate. To highlight the effects of the approximation used by the first integration scheme on parameter estimation and Bayesian model selection, we performed simulations of local field potentials using, first, a simple model comprising two regions and, second, a more complex model comprising six regions. In these simulations, the second integration scheme served as the standard to which the first one was compared. The performance of the two integration schemes was then directly compared by fitting a public mismatch negativity EEG dataset with different models. The simulations revealed that the use of the standard DCM integration scheme was acceptable for Bayesian model selection but underestimated the connectivity parameters and did not allow an accurate estimation of conduction delays. Fitting the empirical data showed that the models systematically achieved higher accuracy when using the second integration scheme. We conclude that inference on connectivity strength and delay based on DCM for EEG/MEG requires an accurate integration scheme. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
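The practical difference between approximating the delayed state on a fixed grid and resolving it can be illustrated on a scalar delay differential equation. This toy system is not the DCM neural-mass model, and the fine-step run merely stands in for the adaptive, error-controlled solver.

```python
import numpy as np

def integrate_dde(dt, t_end=10.0, tau=1.0, a=-1.0):
    """Fixed-step Euler integration of x'(t) = a * x(t - tau) with x(t <= 0) = 1.
    The delayed state is read from a history buffer; when tau is not an exact
    multiple of dt, the nearest grid point is used, which is the kind of
    approximation a fixed-step scheme introduces."""
    n_steps = int(round(t_end / dt))
    lag = max(1, int(round(tau / dt)))
    x = np.ones(n_steps + 1)
    for k in range(n_steps):
        x_delayed = x[k - lag] if k >= lag else 1.0
        x[k + 1] = x[k] + dt * a * x_delayed
    return x[-1]

coarse = integrate_dde(dt=0.1)     # coarse fixed step
fine = integrate_dde(dt=0.001)     # fine step, standing in for an adaptive solver
print("x(t_end), coarse step:", coarse)
print("x(t_end), fine step  :", fine)
print("difference           :", abs(coarse - fine))
```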
Design and grayscale fabrication of beamfanners in a silicon substrate
NASA Astrophysics Data System (ADS)
Ellis, Arthur Cecil
2001-11-01
This dissertation addresses important first steps in the development of a grayscale fabrication process for multiple-phase diffractive optical elements (DOEs) in silicon. Specifically, this process was developed through the design, fabrication, and testing of 1-2 and 1-4 beamfanner arrays for 5-micron illumination. The 1-2 beamfanner arrays serve as a test of concept and a basic developmental step toward the construction of the 1-4 beamfanners. The beamfanners are 50 microns wide and have features with dimensions between 2 and 10 microns. The Iterative Annular Spectrum Approach (IASA) method, developed by Steve Mellin of UAH, and the Boundary Element Method (BEM) are the design and testing tools used to create the beamfanner profiles and predict their performance. Fabrication of the beamfanners required the techniques of grayscale photolithography and reactive ion etching (RIE). A 2-3 micron feature-size 1-4 silicon beamfanner array was fabricated, but the small features and the contact photolithographic techniques available prevented its construction to specifications. A second and more successful attempt was made, in which both 1-4 and 1-2 beamfanner arrays were fabricated with a 5-micron minimum feature size. Photolithography for the UAH array was contracted to MEMS-Optical of Huntsville, Alabama. A repeatability study of 14 photoresist arrays and the subsequent RIE process used to etch the arrays in silicon was performed using statistical techniques. The variance in selectivity between the 14 processes was far greater than the variance between the individual etched features within each process. Specifically, the ratio of the variance of the selectivities averaged over each of the 14 etch processes to the variance of individual feature selectivities within the processes yielded a significance level below 0.1% by F-test, indicating that good etch-to-etch process repeatability was not attained. One of the 14 arrays had feature etch depths close enough to the design specifications for optical testing, but 5-micron IR illumination of the 1-4 and 1-2 beamfanners yielded no convincing evidence of beam splitting in the detector plane 340 microns from the surface of the beamfanner array.
Impurity effects in crystal growth from solutions: Steady states, transients and step bunch motion
NASA Astrophysics Data System (ADS)
Ranganathan, Madhav; Weeks, John D.
2014-05-01
We analyze a recently formulated model in which adsorbed impurities impede the motion of steps in crystals grown from solutions, while moving steps can remove or deactivate adjacent impurities. In this model, the chemical potential change of an atom on incorporation/desorption to/from a step is calculated for different step configurations and used in the dynamical simulation of step motion. The crucial difference between solution growth and vapor growth is related to the dependence of the driving force for growth of the main component on the size of the terrace in front of the step. This model has features resembling experiments in solution growth, which yields a dead zone with essentially no growth at low supersaturation and the motion of large coherent step bunches at larger supersaturation. The transient behavior shows a regime wherein steps bunch together and move coherently as the bunch size increases. The behavior at large line tension is reminiscent of the kink-poisoning mechanism of impurities observed in calcite growth. Our model unifies different impurity models and gives a picture of nonequilibrium dynamics that includes both steady states and time dependent behavior and shows similarities with models of disordered systems and the pinning/depinning transition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dix, Sean T.; Scott, Joseph K.; Getman, Rachel B.
2016-01-01
Metal nanoparticles encapsulated within metal organic frameworks (MOFs) offer steric restrictions near the catalytic metal that can improve selectivity, much like in enzymes. A microkinetic model is developed for the regio-selective oxidation of n-butane to 1-butanol with O2 over a model for MOF-encapsulated bimetallic nanoparticles. The model consists of a Ag3Pd(111) surface decorated with a 2-atom-thick ring of (immobile) helium atoms, which creates an artificial pore of similar size to that in common MOFs and sterically constrains the adsorbed reaction intermediates. The kinetic parameters are based on energies calculated using density functional theory (DFT). The microkinetic model was analysed at 423 K to determine the dominant pathways and which species (adsorbed intermediates and transition states in the reaction mechanism) have energies that most sensitively affect the reaction rates to the different products, using degree-of-rate-control (DRC) analysis. This analysis revealed that activation of the C-H bond is assisted by adsorbed oxygen atoms, O*. Unfortunately, O* also abstracts H from adsorbed 1-butanol and butoxy, leading to butanal as the only significant product. This suggested to (1) add water to produce more OH*, thus inhibiting these undesired steps which produce OH*, and (2) eliminate most of the O2 pressure to reduce the O* coverage, thus also inhibiting these steps. Combined with increasing the butane pressure, this dramatically improved the 1-butanol selectivity (from 0 to 95%) and the rate (to 2 molecules per site per s). Moreover, 40% less O2 was consumed per oxygen atom in the products. Under these conditions, a terminal H in butane is directly eliminated to the Pd site, and the resulting adsorbed butyl combines with OH* to give the desired 1-butanol. These results demonstrate that DRC analysis provides a powerful approach for optimizing catalytic process conditions, and that highly selective oxidation can sometimes be achieved by using a mixture of O2 and H2O as the oxidant. This was further demonstrated by DRC analysis of a second microkinetic model based on a related but hypothetical catalyst, in which the activation energies for two of the steps were modified.
Homoepitaxial and Heteroepitaxial Growth on Step-Free SiC Mesas
NASA Technical Reports Server (NTRS)
Neudeck, Philip G.; Powell, J. Anthony
2004-01-01
This article describes the initial discovery and development of new approaches to SiC homoepitaxial and heteroepitaxial growth. These approaches are based upon the previously unanticipated ability to effectively suppress two-dimensional nucleation of 3C-SiC on large basal plane terraces that form between growth steps when epitaxy is carried out on 4H- and 6H-SiC nearly on-axis substrates. After subdividing the growth surface into mesa regions, pure step-flow homoepitaxy with no terrace nucleation was then used to grow all existing surface steps off the edges of screw-dislocation-free mesas, leaving behind perfectly on-axis (0001) basal plane mesa surfaces completely free of atomic-scale steps. Step-free mesa surfaces as large as 0.4 mm x 0.4 mm were experimentally realized, with the yield and size of step-free mesas being initially limited by substrate screw dislocations. Continued epitaxial growth following step-free surface formation leads to the formation of thin lateral cantilevers that extend the step-free surface area from the top edge of the mesa sidewalls. By selecting a proper pre-growth mesa shape and crystallographic orientation, the rate of cantilever growth can be greatly enhanced in a web growth process that has been used to (1) enlarge step-free surface areas and (2) overgrow and laterally relocate micropipes and screw dislocations. A new growth process, named step-free surface heteroepitaxy, has been developed to achieve 3C-SiC films on 4H- and 6H-SiC substrate mesas completely free of double positioning boundary and stacking fault defects. The process is based upon the controlled terrace nucleation and lateral expansion of a single island of 3C-SiC across a step-free mesa surface. Experimental results indicate that substrate-epilayer lattice mismatch is at least partially relieved parallel to the interface without dislocations that undesirably thread through the thickness of the epilayer. These results should enable realization of improved SiC homojunction and heterojunction devices. In addition, these experiments offer important insights into the nature of polytypism during SiC crystal growth.
Language change in a multiple group society
NASA Astrophysics Data System (ADS)
Pop, Cristina-Maria; Frey, Erwin
2013-08-01
The processes leading to change in languages are manifold. In order to reduce ambiguity in the transmission of information, agreement on a set of conventions for recurring problems is favored. In addition to that, speakers tend to use particular linguistic variants associated with the social groups they identify with. The influence of other groups, propagating across the speech community as new variant forms, sustains the competition between linguistic variants. With the utterance selection model, an evolutionary description of language change, Baxter et al. [Phys. Rev. E 73, 046118 (2006)] have provided a mathematical formulation of the interactions inside a group of speakers, exploring the mechanisms that lead to or inhibit the fixation of linguistic variants. In this paper, we take the utterance selection model one step further by describing a speech community consisting of multiple interacting groups. Tuning the interaction strength between groups allows us to gain deeper understanding about the way in which linguistic variants propagate and how their distribution depends on the group partitioning. Both for the group size and the number of groups we find scaling behaviors with two asymptotic regimes. If groups are strongly connected, the dynamics is that of the standard utterance selection model, whereas if their coupling is weak, its magnitude along with the system size governs the way consensus is reached. Furthermore, we find that a high influence of the interlocutor on a speaker's utterances can act as a counterweight to group segregation.
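A minimal, heavily simplified toy version of an utterance-selection-type dynamics with groups can be sketched as follows. This is not Baxter et al.'s exact formulation: it tracks a single binary variant, uses an ad hoc update rule, and all parameters (group sizes, learning rate, interlocutor weight, between-group interaction probability) are assumptions chosen only to illustrate how tuning the between-group coupling changes whether groups drift apart or converge.

```python
import numpy as np

rng = np.random.default_rng(0)

n_groups, n_per_group = 4, 25     # assumed community structure
n_steps, tokens = 20000, 10       # number of interactions; utterances per interaction
lam, h = 0.02, 0.5                # learning rate; weight given to the interlocutor
p_between = 0.05                  # probability that an interaction crosses group boundaries

N = n_groups * n_per_group
group = np.repeat(np.arange(n_groups), n_per_group)
x = rng.random(N)                 # each speaker's probability of using variant "1"

for _ in range(n_steps):
    i = rng.integers(N)
    if rng.random() < p_between:                       # occasional between-group contact
        j = rng.integers(N)
    else:                                              # otherwise talk within the group
        j = rng.choice(np.flatnonzero(group == group[i]))
    if i == j:
        continue
    ui = rng.binomial(tokens, x[i]) / tokens           # sampled utterance frequencies
    uj = rng.binomial(tokens, x[j]) / tokens
    x[i] += lam * ((1 - h) * ui + h * uj - x[i])       # nudge behaviour toward perceived usage
    x[j] += lam * ((1 - h) * uj + h * ui - x[j])

for g in range(n_groups):
    print(f"group {g}: mean usage of variant 1 = {x[group == g].mean():.2f}")
```

Increasing p_between couples the groups more strongly and pushes the group means toward a common value, mirroring the strongly connected regime described above.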
Electronic-carrier-controlled photochemical etching process in semiconductor device fabrication
Ashby, C.I.H.; Myers, D.R.; Vook, F.L.
1988-06-16
An electronic-carrier-controlled photochemical etching process for carrying out patterning and selective removal of material in semiconductor device fabrication includes the steps of selective ion implanting, photochemical dry etching, and thermal annealing, in that order. In the selective ion implanting step, regions of the semiconductor material in a desired pattern are damaged and the remainder of the regions of the material not implanted are left undamaged. The rate of recombination of electrons and holes is increased in the damaged regions of the pattern compared to undamaged regions. In the photochemical dry etching step, which follows the ion implanting step, the material in the undamaged regions of the semiconductor is removed substantially faster than in the damaged regions representing the pattern, leaving the ion-implanted, damaged regions as raised surface structures on the semiconductor material. After completion of the photochemical dry etching step, the thermal annealing step is used to restore the electrical conductivity of the damaged regions of the semiconductor material.
Electronic-carrier-controlled photochemical etching process in semiconductor device fabrication
Ashby, Carol I. H.; Myers, David R.; Vook, Frederick L.
1989-01-01
An electronic-carrier-controlled photochemical etching process for carrying out patterning and selective removal of material in semiconductor device fabrication includes the steps of selective ion implanting, photochemical dry etching, and thermal annealing, in that order. In the selective ion implanting step, regions of the semiconductor material in a desired pattern are damaged and the remainder of the regions of the material not implanted are left undamaged. The rate of recombination of electrons and holes is increased in the damaged regions of the pattern compared to undamaged regions. In the photochemical dry etching step, which follows the ion implanting step, the material in the undamaged regions of the semiconductor is removed substantially faster than in the damaged regions representing the pattern, leaving the ion-implanted, damaged regions as raised surface structures on the semiconductor material. After completion of the photochemical dry etching step, the thermal annealing step is used to restore the electrical conductivity of the damaged regions of the semiconductor material.
Study of CdTe quantum dots grown using a two-step annealing method
NASA Astrophysics Data System (ADS)
Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.
2006-02-01
High size dispersion, large average quantum dot radius, and low volume ratio have been major hurdles in the development of quantum-dot-based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and a theoretical model of the absorption spectra show that quantum dots grown using two-step annealing have a lower average radius, lower size dispersion, higher volume ratio, and larger decrease in bulk free energy compared with quantum dots grown conventionally.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
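A minimal sketch of the kind of step-size-controlled iteration discussed in these two reports, applied to a 1D two-component normal mixture on synthetic data, is shown below. The standard EM update serves as the search direction and the parameter move is scaled by a step size omega, so omega = 1 recovers plain EM; this is an illustrative over/under-relaxation scheme, not necessarily the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic data: two well-separated normal components
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])

def em_relaxed(x, omega=1.0, iters=200):
    """EM for a 2-component 1D normal mixture in which the move suggested by a
    standard EM step is scaled by the step size omega (omega = 1 is plain EM)."""
    w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(iters):
        # E-step: responsibilities
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        n_k = resp.sum(axis=0)
        # M-step targets, i.e. the step-size-1 update
        w_t = n_k / len(x)
        mu_t = (resp * x[:, None]).sum(axis=0) / n_k
        var_t = (resp * (x[:, None] - mu_t) ** 2).sum(axis=0) / n_k
        # relaxed update: move a fraction omega of the way toward the EM target
        w = np.clip(w + omega * (w_t - w), 1e-6, None); w /= w.sum()
        mu = mu + omega * (mu_t - mu)
        var = np.maximum(var + omega * (var_t - var), 1e-6)
    return w, mu, var

for omega in (0.5, 1.0, 1.5):
    w, mu, var = em_relaxed(x, omega)
    print(f"omega = {omega}: weights = {np.round(w, 2)}, means = {np.round(mu, 2)}")
```

With well-separated components, all three step sizes recover essentially the same fit, consistent with the local convergence result for step sizes between 0 and 2.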
Visually Lossless JPEG 2000 for Remote Image Browsing
Oh, Han; Bilgin, Ali; Marcellin, Michael
2017-01-01
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
NASA Astrophysics Data System (ADS)
Kamran, J.; Hasan, B. A.; Tariq, N. H.; Izhar, S.; Sarwar, M.
2014-06-01
In this study, the effect of multi-pass warm rolling of AZ31 magnesium alloy on the texture, microstructure, grain size variation, and hardness of an as-cast sample (A) and two rolled samples (B and C) taken from different locations of the as-cast ingot was investigated. The purpose was to enhance the formability of the AZ31 alloy in order to aid manufacturability. Multi-pass warm rolling (250°C to 350°C) of samples B and C, with initial thicknesses of 7.76 mm and 7.73 mm, was successfully achieved up to 85% reduction without any edge or surface cracks in ten steps comprising a total of 26 passes. Steps 1 to 4 consisted of 5, 2, 11, and 3 passes, respectively; the remaining steps 5 to 10 were single-pass rolls. In each discrete step a fixed roll gap was used, such that the true strain per pass increased very slowly from 0.0067 in the first pass to 0.7118 in the 26th pass. Both samples B and C showed very similar behavior up to the 26th pass and were successfully rolled to 85% thickness reduction. However, during the 10th step (27th pass), with a true strain value of 0.772, sample B experienced very severe surface and edge cracks. Sample C was therefore not rolled for the 10th step and was retained after 26 passes. Both samples were studied in terms of their basal texture, microstructure, grain size, and hardness. Sample C showed an equiaxed grain structure after 85% total reduction, which may be due to the effective involvement of dynamic recrystallization (DRX), leading to grains with relatively low misorientations with respect to the parent as-cast grains. Sample B, on the other hand, showed a microstructure in which all the grains were elongated along the rolling direction (RD) after 90% total reduction; DRX could not effectively play its role owing to the heavy strain and the lack of active plastic deformation systems. The microstructure of the as-cast sample showed a near-random texture (mrd 4.3), with an average grain size of 44 μm and a micro-hardness of 52 Hv. The grain sizes of samples B and C were 14 μm and 27 μm, respectively, and the mrd intensities of the basal texture were 5.34 and 5.46, respectively. The hardness of samples B and C was 91 and 66 Hv, respectively, owing to the reduction in grain size, following the well-known Hall-Petch relationship.
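As a small worked check of the rolling arithmetic, an 85% thickness reduction from the quoted 7.76 mm starting thickness corresponds to a final thickness of about 1.16 mm and a cumulative true (logarithmic) strain of about 1.9. The sketch below also shows the per-pass strain under the simplifying assumption of a uniform schedule over 26 passes, which is not the actual pass schedule used in the study.

```python
import numpy as np

t0 = 7.76                              # mm, initial thickness quoted for sample B
reduction = 0.85                       # 85% total thickness reduction
t_final = t0 * (1.0 - reduction)
eps_total = np.log(t0 / t_final)       # cumulative true (logarithmic) strain

print(f"final thickness ~ {t_final:.2f} mm, cumulative true strain ~ {eps_total:.2f}")
# Per-pass strain under a hypothetical uniform 26-pass schedule (not the schedule used):
print(f"uniform per-pass true strain ~ {eps_total / 26:.3f}")
```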
Switchable Chiral Selection of Aspartic Acids by Dynamic States of Brushite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Wenge; Pan, Haihua; Zhang, Zhisen
Here, we show the chiral recognition and separation of aspartic acid (Asp) enantiomers by achiral brushite due to the asymmetries of their dynamical steps in its nonequilibrium states. Growing brushite has a higher adsorption affinity to d-Asp, while l-Asp is predominant on the dissolving brushite surface. Microstructural characterization reveals that chiral selection is mainly attributed to brushite [101] steps, which exhibit two different configurations during crystal growth and dissolution, respectively, with each preferring a distinct enantiomer due to this asymmetry. Because these transition step configurations have different stabilities, they subsequently result in asymmetric adsorption. Furthermore, by varying free energy barriers through the solution thermodynamic driving force (i.e., supersaturation), the dominant nonequilibrium intermediate states can be switched and chiral selection regulated. This finding highlights that the dynamic steps can be vital for chiral selection, which may provide a potential pathway for chirality generation through the dynamic nature.
Switchable Chiral Selection of Aspartic Acids by Dynamic States of Brushite
Jiang, Wenge; Pan, Haihua; Zhang, Zhisen; ...
2017-06-15
Here, we show the chiral recognition and separation of aspartic acid (Asp) enantiomers by achiral brushite due to the asymmetries of their dynamical steps in its nonequilibrium states. Growing brushite has a higher adsorption affinity to d-Asp, while l-Asp is predominant on the dissolving brushite surface. Microstructural characterization reveals that chiral selection is mainly attributed to brushite [101] steps, which exhibit two different configurations during crystal growth and dissolution, respectively, with each preferring a distinct enantiomer due to this asymmetry. Because these transition step configurations have different stabilities, they subsequently result in asymmetric adsorption. Furthermore, by varying free energy barriers through the solution thermodynamic driving force (i.e., supersaturation), the dominant nonequilibrium intermediate states can be switched and chiral selection regulated. This finding highlights that the dynamic steps can be vital for chiral selection, which may provide a potential pathway for chirality generation through the dynamic nature.
NASA Astrophysics Data System (ADS)
Anand, Madhu
Nanoparticles have received significant attention because of their unusual characteristics including high surface area to volume ratios. Materials built from nanoparticles possess unique chemical, physical, mechanical and optical properties. Due to these properties, they hold potential in application areas such as catalysts, sensors, semiconductors and optics. At the same time, CO2 in the form of supercritical fluid or CO2 gas-expanded liquid mixtures has gained significant attention in the area of processing nanostructures. This dissertation focuses on the synthesis and processing of nanoparticles using CO2-tunable solvent systems. Nanoparticle properties depend heavily on their size and, as such, the ability to finely control the size and uniformity of nanoparticles is of utmost importance. Solution-based nanoparticle formation techniques are attractive due to their simplicity, but they often result in the synthesis of particles with a wide size range. To address this limitation, a post-synthesis technique has been developed in this dissertation to fractionate polydisperse nanoparticles (σ = 30%) into monodisperse fractions (σ = 8%) using the tunable physicochemical properties of CO2-expanded liquids, where CO2 is employed as an antisolvent. This work demonstrates that by controlling the addition of CO2 (pressurization) to an organic dispersion of nanoparticles, the ligand-stabilized nanoparticles can be size-selectively precipitated within a novel high-pressure apparatus that confines the particle precipitation to a specified location on a surface. Unlike current techniques, this CO2-expanded liquid approach provides faster and more efficient particle size separation, reduction in organic solvent usage, and pressure-tunable size selection in a single process. To improve our fundamental understanding and to further refine the size separation process, a detailed study has been performed to identify the key parameters enabling size separation of various nanoparticle populations. This study details the influence of various factors on the size separation process, such as the types of nanoparticles, ligand type and solvent type, as well as the use of recursive fractionation and the time allowed for settling during each fractionation step. This size-selective precipitation technique was also applied to fractionate and separate polydisperse dispersions of CdSe/ZnS semiconductor nanocrystals into very distinct size and color fractions based solely on the pressure-tunable solvent properties of CO2-expanded liquids. This size-selective precipitation of nanoparticles is achieved by finely tuning the solvent strength of the CO2/organic solvent medium by simply adjusting the applied CO2 pressure. These subtle changes affect the balance between osmotic repulsive and van der Waals attractive forces, thereby allowing fractionation of the nanocrystals into multiple narrow size populations. Thermodynamic analysis of nanoparticle size-selective fractionation was performed to develop a theoretical model based on the thermodynamic properties of gas-expanded liquids. We have used the general phenomenon of nanoparticle precipitation with CO2-expanded liquids to create dodecanethiol-stabilized gold nanoparticle thin films. This method utilizes CO2 as an anti-solvent for low-defect, wide-area gold nanoparticle film formation employing monodisperse gold nanoparticles. Dodecanethiol-stabilized gold particles are precipitated from hexane by controllably expanding the solution with carbon dioxide.
Subsequent addition of carbon dioxide as a dense supercritical fluid then provides for removal of the organic solvent while avoiding the dewetting effects common to evaporating solvents. Unfortunately, the use of carbon dioxide as a neat solvent in nanoparticle synthesis and processing is limited by the very poor solvent strength of dense-phase CO2. As a result, most current techniques employed to synthesize and disperse nanoparticles in neat carbon dioxide require the use of environmentally persistent fluorinated compounds as metal precursors and/or stabilizing ligands. This dissertation presents the first report of the simultaneous synthesis and stabilization of metallic nanoparticles in carbon dioxide solvent without the use of any fluorinated compounds, thereby further enabling the use of CO2 as a green solvent medium in nanomaterials synthesis and processing.
Fisher, E R; Sass, R; Fisher, B
1985-09-01
Investigation of the biologic significance of delay between biopsy and mastectomy was performed upon women with invasive carcinoma of the breast in protocol four of the NSABP. Since the period of delay was two weeks or less in approximately 75 per cent, no comment concerning the possible effects of longer periods can be made. Life table analyses failed to reveal any difference in ten year survival rates between patients undergoing radical mastectomy management by the one and two step procedures. Similarly, no difference in adjusted ten year survival rate was observed between women managed by the two step procedure who did or did not have residual tumor identified in the mastectomy specimen after the first step or biopsy. Importantly, the clinical or pathologic stages, sizes of tumor or histologic grades were similar in women managed by the one and two step procedures minimizing selection bias. The material used also allowed for study of the possible causative role of biopsy of the breast on the development of sinus histiocytosis in regional axillary lymph nodes. No difference in degree or types of this nodal reaction could be discerned in the lymph nodes of the mastectomy specimens obtained from patients who had undergone the one and two step procedures. This finding indicates that nodal sinus histiocytosis is indeed related to the neoplastic process, albeit in an undefined manner, rather than the trauma of biopsy per se as has been suggested. These results do not invalidate the use of the one step procedure in the management of patients with carcinoma of the breast. Indeed, it is highly likely that it will be commonly used now that breast-conserving operations appear to represent a viable alternative modality for the primary surgical treatment of carcinoma of the breast. Yet, it is apparent that the one step procedure will be performed for technical and practical rather than biologic reasons.
Shea, Michael E; Juárez, Oscar; Cho, Jonathan; Barquera, Blanca
2013-10-25
The Na(+)-pumping NADH:quinone complex is found in Vibrio cholerae and other marine and pathogenic bacteria. NADH:ubiquinone oxidoreductase oxidizes NADH and reduces ubiquinone, using the free energy released by this reaction to pump sodium ions across the cell membrane. In a previous report, a conserved aspartic acid residue in the NqrB subunit at position 397, located in the cytosolic face of this protein, was proposed to be involved in the capture of sodium. Here, we studied the role of this residue through the characterization of mutant enzymes in which this aspartic acid was substituted by other residues that change charge and size, such as arginine, serine, lysine, glutamic acid, and cysteine. Our results indicate that NqrB-Asp-397 forms part of one of the at least two sodium-binding sites and that both size and charge at this position are critical for the function of the enzyme. Moreover, we demonstrate that this residue is involved in cation selectivity, has a critical role in the communication between sodium-binding sites, by promoting cooperativity, and controls the electron transfer step involved in sodium uptake (2Fe-2S → FMNC).
Shea, Michael E.; Juárez, Oscar; Cho, Jonathan; Barquera, Blanca
2013-01-01
The Na+-pumping NADH:quinone complex is found in Vibrio cholerae and other marine and pathogenic bacteria. NADH:ubiquinone oxidoreductase oxidizes NADH and reduces ubiquinone, using the free energy released by this reaction to pump sodium ions across the cell membrane. In a previous report, a conserved aspartic acid residue in the NqrB subunit at position 397, located in the cytosolic face of this protein, was proposed to be involved in the capture of sodium. Here, we studied the role of this residue through the characterization of mutant enzymes in which this aspartic acid was substituted by other residues that change charge and size, such as arginine, serine, lysine, glutamic acid, and cysteine. Our results indicate that NqrB-Asp-397 forms part of one of the at least two sodium-binding sites and that both size and charge at this position are critical for the function of the enzyme. Moreover, we demonstrate that this residue is involved in cation selectivity, has a critical role in the communication between sodium-binding sites, by promoting cooperativity, and controls the electron transfer step involved in sodium uptake (2Fe-2S → FMNC). PMID:24030824
Generating Daily Synthetic Landsat Imagery by Combining Landsat and MODIS Data
Wu, Mingquan; Huang, Wenjiang; Niu, Zheng; Wang, Changyao
2015-01-01
Owing to low temporal resolution and cloud interference, there is a shortage of high spatial resolution remote sensing data. To address this problem, this study introduces a modified spatial and temporal data fusion approach (MSTDFA) to generate daily synthetic Landsat imagery. This algorithm was designed to avoid the limitations of the conditional spatial temporal data fusion approach (STDFA), including the constant window for disaggregation and the sensor difference. An adaptive window size selection method is proposed in this study to select the best window size and moving steps for the disaggregation of coarse pixels. The linear regression method is used to remove the influence of differences in sensor systems using disaggregated mean coarse reflectance, with testing and validation in two study areas located in Xinjiang Province, China. The results show that the MSTDFA algorithm can generate daily synthetic Landsat imagery with a high correlation coefficient (R) ranging from 0.646 to 0.986 between synthetic images and the actual observations. We further show that MSTDFA can be applied to 250 m 16-day MODIS MOD13Q1 products and the Landsat Normalized Difference Vegetation Index (NDVI) data by generating a synthetic NDVI image highly similar to the actual Landsat NDVI observation, with a high R of 0.97. PMID:26393607
Generating Daily Synthetic Landsat Imagery by Combining Landsat and MODIS Data.
Wu, Mingquan; Huang, Wenjiang; Niu, Zheng; Wang, Changyao
2015-09-18
Owing to low temporal resolution and cloud interference, there is a shortage of high spatial resolution remote sensing data. To address this problem, this study introduces a modified spatial and temporal data fusion approach (MSTDFA) to generate daily synthetic Landsat imagery. This algorithm was designed to avoid the limitations of the conditional spatial temporal data fusion approach (STDFA), including the constant window for disaggregation and the sensor difference. An adaptive window size selection method is proposed in this study to select the best window size and moving steps for the disaggregation of coarse pixels. The linear regression method is used to remove the influence of differences in sensor systems using disaggregated mean coarse reflectance, with testing and validation in two study areas located in Xinjiang Province, China. The results show that the MSTDFA algorithm can generate daily synthetic Landsat imagery with a high correlation coefficient (R) ranging from 0.646 to 0.986 between synthetic images and the actual observations. We further show that MSTDFA can be applied to 250 m 16-day MODIS MOD13Q1 products and the Landsat Normalized Difference Vegetation Index (NDVI) data by generating a synthetic NDVI image highly similar to the actual Landsat NDVI observation, with a high R of 0.97.
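Of the MSTDFA ingredients described above, the easiest to illustrate in isolation is the linear-regression step that removes systematic sensor differences between the disaggregated coarse reflectance and the fine-resolution reflectance. The sketch below uses purely synthetic stand-ins for Landsat and disaggregated MODIS values and an assumed gain/offset model; it is not the MSTDFA implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-ins: "landsat" is the fine-scale reflectance, "modis_disagg" is the
# disaggregated coarse reflectance seen with an assumed gain/offset difference plus noise.
landsat = rng.uniform(0.05, 0.45, 1000)
modis_disagg = 0.92 * landsat + 0.03 + rng.normal(0.0, 0.01, landsat.size)

# Fit the sensor-difference model modis = a*landsat + b and invert it so that coarse
# observations can be mapped onto the Landsat radiometric scale.
a, b = np.polyfit(landsat, modis_disagg, 1)
corrected = (modis_disagg - b) / a
rmse = np.sqrt(np.mean((corrected - landsat) ** 2))
print(f"gain a = {a:.3f}, offset b = {b:.3f}, RMSE after correction = {rmse:.4f}")
```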
Using step and path selection functions for estimating resistance to movement: Pumas as a case study
Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce
2015-01-01
GPS telemetry collars and their ability to acquire accurate and consistently frequent locations have increased the use of step selection functions (SSFs) and path selection functions (PathSFs) for studying animal movement and estimating resistance. However, previously published SSFs and PathSFs often do not accommodate multiple scales or multiscale modeling....
DOT National Transportation Integrated Search
2007-10-01
The goal of Selective Traffic Enforcement Programs (STEPs) is to induce motorists to drive safely. To achieve this goal, the STEP model combines intensive enforcement of a specific traffic safety law with extensive communication, education, and outre...
Index Fund Selections with GAs and Classifications Based on Turnover
NASA Astrophysics Data System (ADS)
Orito, Yukiko; Motoyama, Takaaki; Yamazaki, Genji
It is well known that index fund selection is important for hedging the risk of investment in a stock market. Here, 'selection' means that, for stock index futures, n companies out of all those in the market are selected. For index fund selection, Orito et al. (6) proposed a method consisting of the following two steps: Step 1 selects N companies in the market with a heuristic rule based on the coefficient of determination between the return rate of each company and the increasing rate of the stock price index. Step 2 constructs a group of n companies by applying genetic algorithms to the set of N companies. We note that the rule of Step 1 is not unique. The accuracy of the results using their method depends on the length of the time data (price data) in the experiments. The main purpose of this paper is to introduce a more effective rule for Step 1, based on turnover. The method consisting of the turnover-based Step 1 and Step 2 is examined with numerical experiments for the 1st Section of the Tokyo Stock Exchange. The results show that with our method it is possible to construct a more effective index fund than with the method of Orito et al. (6). The accuracy of the results using our method depends little on the length of the time data (turnover data). The method works especially well when the increasing rate of the stock price index over a period can be viewed as linear time series data.
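A minimal sketch of the two-step idea on synthetic data is given below: Step 1 keeps the N companies with the highest turnover, and Step 2 runs a bare-bones genetic algorithm over n-company subsets to minimize the tracking error against an equal-weighted index. All data, the fitness definition, and the GA settings are assumptions for illustration and do not reproduce the method of the paper or of Orito et al.

```python
import numpy as np

rng = np.random.default_rng(3)
M, T_days = 200, 250                     # synthetic market size and number of trading days
returns = rng.normal(0.0004, 0.01, (T_days, M))
turnover = rng.lognormal(0.0, 1.0, M)    # synthetic turnover per company
index_ret = returns.mean(axis=1)         # equal-weighted "index" used only for illustration

# Step 1: pre-select the N companies with the highest turnover
N, n = 60, 10
candidates = np.argsort(turnover)[-N:]

def tracking_error(subset):
    """Standard deviation of the difference between the n-stock portfolio and the index."""
    return np.std(returns[:, subset].mean(axis=1) - index_ret)

# Step 2: a bare-bones GA searching for the n-of-N subset with the lowest tracking error
pop = [rng.choice(candidates, n, replace=False) for _ in range(40)]
for _ in range(100):
    pop.sort(key=tracking_error)
    survivors = pop[:20]
    children = []
    while len(children) < 20:
        p1, p2 = rng.choice(len(survivors), 2, replace=False)
        genes = np.union1d(survivors[p1], survivors[p2])       # crossover: pool parents' genes
        child = rng.choice(genes, n, replace=False)
        if rng.random() < 0.3:                                  # mutation: swap in a new company
            child[rng.integers(n)] = rng.choice(np.setdiff1d(candidates, child))
        children.append(child)
    pop = survivors + children

best = min(pop, key=tracking_error)
print("selected companies:", np.sort(best), " tracking error:", round(tracking_error(best), 5))
```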
McGaghie, William C; Cohen, Elaine R; Wayne, Diane B
2011-01-01
United States Medical Licensing Examination (USMLE) scores are frequently used by residency program directors when evaluating applicants. The objectives of this report are to study the chain of reasoning and evidence that underlies the use of USMLE Step 1 and 2 scores for postgraduate medical resident selection decisions and to evaluate the validity argument about the utility of USMLE scores for this purpose. This is a research synthesis using the critical review approach. The study first describes the chain of reasoning that underlies a validity argument about using test scores for a specific purpose. It continues by summarizing correlations of USMLE Step 1 and 2 scores and reliable measures of clinical skill acquisition drawn from nine studies involving 393 medical learners from 2005 to 2010. The integrity of the validity argument about using USMLE Step 1 and 2 scores for postgraduate residency selection decisions is tested. The research synthesis shows that USMLE Step 1 and 2 scores are not correlated with reliable measures of medical students', residents', and fellows' clinical skill acquisition. The validity argument about using USMLE Step 1 and 2 scores for postgraduate residency selection decisions is neither structured, coherent, nor evidence based. The USMLE score validity argument breaks down on grounds of extrapolation and decision/interpretation because the scores are not associated with measures of clinical skill acquisition among advanced medical students, residents, and subspecialty fellows. Continued use of USMLE Step 1 and 2 scores for postgraduate medical residency selection decisions is discouraged.
Setting health priorities in a community: a case example
Sousa, Fábio Alexandre Melo do Rego; Goulart, Maria José Garcia; Braga, Antonieta Manuela dos Santos; Medeiros, Clara Maria Oliveira; Rego, Débora Cristina Martins; Vieira, Flávio Garcia; Pereira, Helder José Alves da Rocha; Tavares, Helena Margarida Correia Vicente; Loura, Marta Maria Puim
2017-01-01
ABSTRACT OBJECTIVE To describe the methodology used in the process of setting health priorities for community intervention in a community of older adults. METHODS Based on the results of a health diagnosis related to active aging, a prioritization process was conceived to select the priority intervention problem. The process comprised four successive phases of problem analysis and classification: (1) grouping by level of similarity, (2) classification according to epidemiological criteria, (3) ordering by experts, and (4) application of the Hanlon method. These stages combined, in an integrated manner, the views of health team professionals, community nursing and gerontology experts, and the actual community. RESULTS The first stage grouped the identified problems by level of similarity, comprising a body of 19 issues for analysis. In the second stage these problems were classified by the health team members by epidemiological criteria (size, vulnerability, and transcendence). The nine most relevant problems resulting from the second stage of the process were submitted to expert analysis and the five most pertinent problems were selected. The last step identified the priority issue for intervention in this specific community with the participation of formal and informal community leaders: Low Social Interaction in Community Participation. CONCLUSIONS The prioritization process is a key step in health planning, enabling the identification of priority problems to intervene in a given community at a given time. There are no default formulas for selecting priority issues. It is up to each community intervention team to define its own process with different methods/techniques that allow the identification of and intervention in needs classified as priority by the community. PMID:28273229
Cervera, R P; Garcia-Ximénez, F
2003-10-01
The purpose of this study was to test the effectiveness of one two-step (A) and two one-step (B1 and B2) vitrification procedures on denuded expanded or hatching rabbit blastocysts held in standard sealed plastic straws as a possible model for human blastocysts. The effect of blastocyst size was also studied on the basis of three size categories (I: diameter <200 μm; II: diameter 200-299 μm; III: diameter ≥300 μm). Rabbit expanded or hatching blastocysts were vitrified at day 4 or 5. Before vitrification, the zona pellucida was removed using acidic phosphate buffered saline. For the two-step procedure, prior to vitrification, blastocysts were pre-equilibrated in a solution containing 10% dimethyl sulphoxide (DMSO) and 10% ethylene glycol (EG) for 1 min. Different final vitrification solutions were compared: 20% DMSO and 20% EG with (A and B1) or without (B2) 0.5 mol/l sucrose. Of 198 vitrified blastocysts, 181 (91%) survived, regardless of the vitrification procedure applied. Vitrification procedure A showed significantly higher re-expansion (88%), attachment (86%) and trophectoderm outgrowth (80%) rates than the two one-step vitrification procedures, B1 and B2 (46 and 21%, 20 and 33%, and 18 and 23%, respectively). After warming, blastocysts of greater size (II and III) showed significantly higher attachment (54 and 64%) and trophectoderm outgrowth (44 and 58%) rates than smaller blastocysts (I, attachment: 29%; trophectoderm outgrowth: 25%). These results demonstrate that denuded expanded or hatching rabbit blastocysts of greater size can be satisfactorily vitrified by use of a two-step procedure. The similarity of vitrification solutions used in humans could make it feasible to test such a procedure on human denuded blastocysts of different sizes.
Winne, Christopher T; Willson, John D; Whitfield Gibbons, J
2010-04-01
The causes and consequences of body size and sexual size dimorphism (SSD) have been central questions in evolutionary ecology. Two, often opposing selective forces are suspected to act on body size in animals: survival selection and reproductive (fecundity and sexual) selection. We have recently identified a system where a small aquatic snake species (Seminatrix pygaea) is capable of surviving severe droughts by aestivating within dried, isolated wetlands. We tested the hypothesis that the lack of aquatic prey during severe droughts would impose significant survivorship pressures on S. pygaea, and that the largest individuals, particularly females, would be most adversely affected by resource limitation. Our findings suggest that both sexes experience selection against large body size during severe drought when prey resources are limited, as nearly all S. pygaea are absent from the largest size classes and maximum body size and SSD are dramatically reduced following drought. Conversely, strong positive correlations between maternal body size and reproductive success in S. pygaea suggest that females experience fecundity selection for large size during non-drought years. Collectively, our study emphasizes the dynamic interplay between selection pressures that act on body size and supports theoretical predictions about the relationship between body size and survivorship in ectotherms under conditions of resource limitation.
Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S
2014-10-01
This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
Closed-loop optimization of chromatography column sizing strategies in biopharmaceutical manufacture
Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S
2014-01-01
BACKGROUND This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. RESULTS An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. CONCLUSION This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry. PMID:25506115
Electrokinetic Response of Charge-Selective Nanostructured Polymeric Membranes
NASA Astrophysics Data System (ADS)
Schiffbauer, Jarrod; Li, Diya; Gao, Feng; Phillip, William; Chang, Hsueh-Chia
2017-11-01
Nanostructured polymeric membranes, with a tunable pore size and ease of surface molecular functionalization, are a promising material for separations, filtration, and sensing applications. Recently, such membranes have been fabricated wherein the ion selectivity is imparted by self-assembled functional groups through a two-step process. Amine groups are used to provide a positive surface charge and acid groups are used to yield a negative charge. The membranes can be fabricated as either singly-charged or patterned/mosaic membranes, where there are alternating regions of amine-lined or acid-lined pores. We demonstrate that such membranes, in addition to having many features in common with other charge-selective membranes (e.g. AMX or Nafion), display a unique single-membrane rectification behavior. This is due to the asymmetric distribution of charged functional groups during the fabrication process. We demonstrate this rectification effect using both dc current-voltage characteristics as well as dc-biased electrical impedance spectroscopy. Furthermore, surface charge changes due to dc concentration polarization and generation of localized pH shifts are monitored using electrical impedance spectroscopy.
Slow and fast solar wind - data selection and statistical analysis
NASA Astrophysics Data System (ADS)
Wawrzaszek, Anna; Macek, Wiesław M.; Bruno, Roberto; Echim, Marius
2014-05-01
In this work we consider the important problem of selecting slow and fast solar wind data measured in situ by the Ulysses spacecraft during two solar minima (1995-1997, 2007-2008) and a solar maximum (1999-2001). To recognise different types of solar wind we use the following set of parameters: radial velocity, proton density, proton temperature, the distribution of charge states of oxygen ions, and the compressibility of the magnetic field. We present how this data selection scheme works on Ulysses data. In the next step we consider the chosen intervals of fast and slow solar wind and perform a statistical analysis of the fluctuating magnetic field components. In particular, we check the possibility of identifying the inertial range by considering the scale dependence of the third- and fourth-order scaling exponents of the structure functions. We try to verify the size of the inertial range as a function of heliographic latitude, heliocentric distance, and phase of the solar cycle. Research supported by the European Community's Seventh Framework Programme (FP7/2007 - 2013) under grant agreement no 313038/STORM.
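The structure-function analysis mentioned above can be sketched compactly: the q-th order structure function is S_q(tau) = <|B(t + tau) - B(t)|^q>, and the scaling exponent zeta(q) is the log-log slope of S_q over the (assumed) inertial range. The Python sketch below uses a synthetic Brownian-walk signal in place of Ulysses magnetic field data, for which zeta(q) should come out close to q/2.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2 ** 16
b = np.cumsum(rng.normal(size=n))      # Brownian-walk stand-in for a magnetic field component

lags = np.unique(np.logspace(0, 3, 20).astype(int))

def structure_function(sig, q, lag):
    """S_q(lag) = <|sig(t + lag) - sig(t)|^q>."""
    d = np.abs(sig[lag:] - sig[:-lag])
    return np.mean(d ** q)

for q in (3, 4):
    Sq = np.array([structure_function(b, q, lag) for lag in lags])
    zeta = np.polyfit(np.log(lags), np.log(Sq), 1)[0]   # slope over the assumed scaling range
    print(f"q = {q}: estimated zeta(q) ~ {zeta:.2f} (Brownian expectation {q / 2:.1f})")
```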
ChemScreener: A Distributed Computing Tool for Scaffold based Virtual Screening.
Karthikeyan, Muthukumarasamy; Pandit, Deepak; Vyas, Renu
2015-01-01
In this work we present ChemScreener, a Java-based application to perform virtual library generation combined with virtual screening in a platform-independent distributed computing environment. ChemScreener comprises a scaffold identifier, a distinct scaffold extractor, an interactive virtual library generator as well as a virtual screening module for subsequently selecting putative bioactive molecules. The virtual libraries are annotated with chemophore-, pharmacophore- and toxicophore-based information for compound prioritization. The hits selected can then be further processed using QSAR, docking and other in silico approaches which can all be interfaced within the ChemScreener framework. As a sample application, in this work scaffold selectivity, diversity, connectivity and promiscuity towards six important therapeutic classes have been studied. In order to illustrate the computational power of the application, 55 scaffolds extracted from 161 anti-psychotic compounds were enumerated to produce a virtual library comprising 118 million compounds (17 GB) and annotated with chemophore, pharmacophore and toxicophore based features in a single step which would be non-trivial to perform with many standard software tools today on libraries of this size.
Simultaneous fabrication of a microcavity absorber-emitter on a Ni-W alloy film
NASA Astrophysics Data System (ADS)
Nashun; Kagimoto, Masahiro; Iwami, Kentaro; Umeda, Norihiro
2017-10-01
A process for the simultaneous fabrication of microcavity structures on both sides of a film was proposed and demonstrated to develop a free-standing-type integrated absorber-emitter for use in solar thermophotovoltaic power generation systems. The absorber-emitter-integrated film comprised a heat-resistant Ni-W alloy deposited by electroplating. A two-step silicon mould was fabricated using deep reactive-ion etching and electron beam lithography. Cavity arrays with different unit sizes were successfully fabricated on both sides of the film; these arrays are suitable for use as a solar spectrum absorber and an infrared-selective emitter. Their emissivity spectra were characterised through UV-vis-NIR and Fourier transform infrared spectroscopy.
Subtraction of cap-trapped full-length cDNA libraries to select rare transcripts.
Hirozane-Kishikawa, Tomoko; Shiraki, Toshiyuki; Waki, Kazunori; Nakamura, Mari; Arakawa, Takahiro; Kawai, Jun; Fagiolini, Michela; Hensch, Takao K; Hayashizaki, Yoshihide; Carninci, Piero
2003-09-01
The normalization and subtraction of highly expressed cDNAs from relatively large tissues before cloning dramatically enhanced the gene discovery by sequencing for the mouse full-length cDNA encyclopedia, but these methods have not been suitable for limited RNA materials. To normalize and subtract full-length cDNA libraries derived from limited quantities of total RNA, here we report a method to subtract plasmid libraries excised from size-unbiased amplified lambda phage cDNA libraries that avoids heavily biasing steps such as PCR and plasmid library amplification. The proportion of full-length cDNAs and the gene discovery rate are high, and library diversity can be validated by in silico randomization.
Mercury orbiter transport study
NASA Technical Reports Server (NTRS)
Friedlander, A. L.; Feingold, H.
1977-01-01
A data base and comparative performance analyses of alternative flight mode options for delivering a range of payload masses to Mercury orbit are provided. Launch opportunities over the period 1980-2000 are considered. Extensive data trades are developed for the ballistic flight mode option utilizing one or more swingbys of Venus. Advanced transport options studied include solar electric propulsion and solar sailing. Results show the significant performance tradeoffs among such key parameters as trip time, payload mass, propulsion system mass, orbit size, launch year sensitivity and relative cost-effectiveness. Handbook-type presentation formats, particularly in the case of ballistic mode data, provide planetary program planners with an easily used source of reference information essential in the preliminary steps of mission selection and planning.
Garson, Christopher D; Li, Bing; Acton, Scott T; Hossack, John A
2008-06-01
The active surface technique using gradient vector flow allows semi-automated segmentation of ventricular borders. The accuracy of the algorithm depends on the optimal selection of several key parameters. We investigated the use of conservation of myocardial volume for quantitative assessment of each of these parameters using synthetic and in vivo data. We predicted that for a given set of model parameters, strong conservation of volume would correlate with accurate segmentation. The metric was most useful when applied to the gradient vector field weighting and temporal step-size parameters, but less effective in guiding an optimal choice of the active surface tension and rigidity parameters.
Poliovirus Mutants Resistant to Neutralization with Soluble Cell Receptors
NASA Astrophysics Data System (ADS)
Kaplan, Gerardo; Peters, David; Racaniello, Vincent R.
1990-12-01
Poliovirus mutants resistant to neutralization with soluble cellular receptor were isolated. Replication of soluble receptor-resistant (srr) mutants was blocked by a monoclonal antibody directed against the HeLa cell receptor for poliovirus, indicating that the mutants use this receptor to enter cells. The srr mutants showed reduced binding to HeLa cells and cell membranes. However, the reduced binding phenotype did not have a major impact on viral replication, as judged by plaque size and one-step growth curves. These results suggest that the use of soluble receptors as antiviral agents could lead to the selection of neutralization-resistant mutants that are able to bind cell surface receptors, replicate, and cause disease.
Multi Bus DC-DC Converter in Electric Hybrid Vehicles
NASA Astrophysics Data System (ADS)
Krithika, V.; Subramaniam, C.; Sridharan, R.; Geetha, A.
2018-04-01
This paper is concerned with the design, simulation and fabrication of a prototype multi-bus DC-DC converter operating from 42 V DC and delivering 14 V DC and 260 V DC. As a result, three DC buses are interconnected through a single power electronic circuit. Such a requirement arises in the development of a hybrid electric automobile that uses fuel cell technology. This is implemented using a bidirectional DC-DC converter configuration, which is ideally suited to multiple outputs with mutual electrical isolation. To reduce the size and cost of the step-up transformer, a high switching frequency of 10 kHz was selected.
Controlled sub-nanometer tuning of photonic crystal resonator by carbonaceous nano-dots.
Seo, Min-Kyo; Park, Hong-Gyu; Yang, Jin-Kyu; Kim, Ju-Young; Kim, Se-Heon; Lee, Yong-Hee
2008-06-23
We propose and demonstrate a scheme that enables spectral tuning of a photonic crystal high-quality resonant mode, in steps finer than 0.2 nm, via electron beam induced deposition of carbonaceous nano-dots. The position and size of a nano-dot with a diameter of <100 nm are controlled to an accuracy on the order of nanometers. The possibility of selective modal tuning is also demonstrated by placing nano-dots at locations pre-determined by theoretical computation. The lasing threshold of a photonic crystal mode tends to increase when a nano-dot is grown at the point of strong electric field, showing the absorptive nature of the nano-dot.
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
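One piece of the LTS machinery that is easy to show in isolation is the bookkeeping that assigns elements to refinement levels: each element's stable explicit step follows from a CFL condition, and elements are grouped so that level k takes substeps of dt_global/2^k. The sketch below uses an assumed wave speed, CFL number, and a synthetic element-size distribution; it illustrates only the level assignment, not the Newmark-LTS update itself.

```python
import numpy as np

rng = np.random.default_rng(5)
c = 3000.0                                   # assumed wave speed, m/s
cfl = 0.5                                    # assumed CFL number
h = rng.lognormal(mean=np.log(50.0), sigma=1.0, size=2000)   # element sizes (m), strongly contrasted

dt_elem = cfl * h / c                        # stable explicit step per element
dt_global = dt_elem.max()                    # coarse step, set by the largest elements
# level k elements sub-step with dt_global / 2**k, chosen so the substep respects their CFL limit
levels = np.clip(np.ceil(np.log2(dt_global / dt_elem)).astype(int), 0, None)

for k in range(levels.max() + 1):
    print(f"level {k}: {np.sum(levels == k):4d} elements, substep = dt_global/2^{k} "
          f"= {dt_global / 2 ** k:.2e} s")
```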
Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters
NASA Astrophysics Data System (ADS)
Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi
A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by parsing only the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.
Evolution of brain region volumes during artificial selection for relative brain size.
Kotrschal, Alexander; Zeng, Hong-Li; van der Bijl, Wouter; Öhman-Mägi, Caroline; Kotrschal, Kurt; Pelckmans, Kristiaan; Kolm, Niclas
2017-12-01
The vertebrate brain shows an extremely conserved layout across taxa. Still, the relative sizes of separate brain regions vary markedly between species. One interesting pattern is that larger brains seem associated with increased relative sizes only of certain brain regions, for instance telencephalon and cerebellum. Till now, the evolutionary association between separate brain regions and overall brain size is based on comparative evidence and remains experimentally untested. Here, we test the evolutionary response of brain regions to directional selection on brain size in guppies (Poecilia reticulata) selected for large and small relative brain size. In these animals, artificial selection led to a fast response in relative brain size, while body size remained unchanged. We use microcomputer tomography to investigate how the volumes of 11 main brain regions respond to selection for larger versus smaller brains. We found no differences in relative brain region volumes between large- and small-brained animals and only minor sex-specific variation. Also, selection did not change allometric scaling between brain and brain region sizes. Our results suggest that brain regions respond similarly to strong directional selection on relative brain size, which indicates that brain anatomy variation in contemporary species most likely stem from direct selection on key regions. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
NASA Astrophysics Data System (ADS)
Lopez-Sanchez, Marco; Llana-Fúnez, Sergio
2016-04-01
The understanding of creep behaviour in rocks requires knowledge of the 3D grain size distributions (GSD) that result from dynamic recrystallization processes during deformation. The methods to directly estimate the 3D grain size distribution (serial sectioning, synchrotron- or X-ray-based tomography) are expensive, time-consuming and, at best, challenging. This means that in practice grain size distributions are mostly derived from 2D sections. Although there are a number of methods in the literature to derive the actual 3D grain size distributions from 2D sections, the most popular in highly deformed rocks is the so-called Saltykov method. It has, however, two major drawbacks: the method assumes no interaction between grains, which is not true in the case of recrystallised mylonites; and it uses histograms to describe distributions, which limits the quantification of the GSD. The first aim of this contribution is to test whether the interaction between grains in mylonites, i.e. random grain packing, significantly affects the GSDs estimated by the Saltykov method. We test this using the random resampling technique on a large data set (n = 12298). The full data set is built from several parallel thin sections that cut a completely dynamically recrystallized quartz aggregate in a rock sample from a Variscan shear zone in NW Spain. The results prove that the Saltykov method is reliable as long as the number of grains is large (n > 1000). Assuming that a lognormal distribution is an optimal approximation for the GSD in a completely dynamically recrystallized rock, we introduce an additional step to the Saltykov method, which allows estimating a continuous probability distribution function of the 3D grain size population. The additional step takes the midpoints of the classes obtained by the Saltykov method and fits a lognormal distribution with a trust region using a non-linear least squares algorithm. The new protocol is named the two-step method. The conclusion of this work is that both the Saltykov and the two-step methods are accurate and simple enough to be useful in practice in rocks, alloys or ceramics with near-equant grains and expected lognormal distributions. The Saltykov method is particularly suitable for estimating the volumes of particular grain fractions, while the two-step method is suited to quantifying the full GSD (mean and standard deviation in log grain size). The two-step method is implemented in a free, open-source and easy-to-handle script (see http://marcoalopez.github.io/GrainSizeTools/).
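The second step of the two-step method, fitting a lognormal to the Saltykov class midpoints by bounded non-linear least squares, can be sketched as follows. The class midpoints and frequencies below are invented placeholders standing in for a Saltykov unfolding (which is not reproduced here), and the code does not reproduce the GrainSizeTools script itself; with bounds supplied, SciPy's curve_fit uses a trust-region-reflective solver, in the spirit of the trust-region fit described above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented placeholders standing in for a Saltykov unfolding: class midpoints (grain
# diameter, microns) and the corresponding 3D frequency density of each class.
midpoints = np.array([5., 15., 25., 35., 45., 55., 65., 75., 85., 95.])
freq = np.array([0.002, 0.012, 0.028, 0.030, 0.022, 0.013, 0.007, 0.003, 0.001, 0.0005])

def lognorm_pdf(x, mu, sigma):
    """Lognormal probability density parametrized by the log-space mean and std."""
    return np.exp(-(np.log(x) - mu) ** 2 / (2.0 * sigma ** 2)) / (x * sigma * np.sqrt(2.0 * np.pi))

# Bounded non-linear least squares; with bounds, curve_fit uses the trust-region-reflective solver.
(mu, sigma), _ = curve_fit(lognorm_pdf, midpoints, freq,
                           p0=(np.log(30.0), 0.5), bounds=([0.0, 0.01], [10.0, 5.0]))
print(f"geometric mean ~ {np.exp(mu):.1f} um, log-space sigma ~ {sigma:.2f}")
```

The fitted log-space mean and standard deviation are exactly the two numbers the abstract says characterize the full GSD.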
Method and apparatus for sizing and separating warp yarns using acoustical energy
Sheen, S.H.; Chien, H.T.; Raptis, A.C.; Kupperman, D.S.
1998-05-19
A slashing process is disclosed for preparing warp yarns for weaving operations including the steps of sizing and/or desizing the yarns in an acoustic resonance box and separating the yarns with a leasing apparatus comprised of a set of acoustically agitated lease rods. The sizing step includes immersing the yarns in a size solution contained in an acoustic resonance box. Acoustic transducers are positioned against the exterior of the box for generating an acoustic pressure field within the size solution. Ultrasonic waves that result from the acoustic pressure field continuously agitate the size solution to effect greater mixing and more uniform application and penetration of the size onto the yarns. The sized yarns are then separated by passing the warp yarns over and under lease rods. Electroacoustic transducers generate acoustic waves along the longitudinal axis of the lease rods, creating a shearing motion on the surface of the rods for splitting the yarns. 2 figs.
Dallum, Gregory E.; Pratt, Garth C.; Haugen, Peter C.; Romero, Carlos E.
2013-01-15
An ultra-wideband (UWB) dual impulse transmitter is made up of a trigger edge selection circuit actuated by a single trigger input pulse; a first step recovery diode (SRD) based pulser connected to the trigger edge selection circuit to generate a first impulse output; and a second step recovery diode (SRD) based pulser connected to the trigger edge selection circuit in parallel to the first pulser to generate a second impulse output having a selected delay from the first impulse output.
New insights into time series analysis. II - Non-correlated observations
NASA Astrophysics Data System (ADS)
Ferreira Lopes, C. E.; Cross, N. J. G.
2017-08-01
Context. Statistical parameters are used to draw conclusions in a vast number of fields such as finance, weather, industry, and science. These parameters are also used to identify variability patterns in photometric data in order to select non-stochastic variations that are indicative of astrophysical effects. New, more efficient selection methods are needed to analyse the huge amount of astronomical data. Aims: We seek to improve the current methods used to select non-stochastic variations in non-correlated data. Methods: We used standard and new data-mining parameters to analyze non-correlated data and find the best way to discriminate between stochastic and non-stochastic variations. A new approach that includes a modified Strateva function was used to select non-stochastic variations. Monte Carlo simulations and public time-domain data were used to estimate its accuracy and performance. Results: We introduce 16 modified statistical parameters covering different features of a statistical distribution, such as average, dispersion, and shape parameters. Many of the dispersion and shape parameters are unbound parameters, i.e. equations that do not require calculation of the average. Unbound parameters are computed in a single loop, hence decreasing running time. Moreover, the majority of these parameters have lower errors than previous parameters, which is mainly observed for distributions with few measurements. A set of non-correlated variability indices, sample size corrections, and a new noise model, along with tests of different apertures and cut-offs on the data (BAS approach), are introduced. The number of mis-selections is reduced by about 520% using a single waveband and 1200% combining all wavebands. On the other hand, the even-mean also improves the correlated indices introduced in Paper I. The mis-selection rate is reduced by about 18% if the even-mean is used instead of the mean to compute the correlated indices in the WFCAM database. Even-statistics allows us to improve the effectiveness of both correlated and non-correlated indices. Conclusions: The selection of non-stochastic variations is improved by non-correlated indices. The even-averages provide a better estimation of mean and median for almost all statistical distributions analyzed. The correlated variability indices, which were proposed in the first paper of this series, are also improved if the even-mean is used. The even-parameters will also be useful for classifying light curves in the last step of this project. We consider that the first step of this project, in which we set out new techniques and methods that greatly improve the efficiency of selection of variable stars, is now complete. Many of these techniques may be useful for a large number of fields. Next, we will commence a new step of this project regarding the analysis of period search methods.
Ahfir, Nasre-Dine; Hammadi, Ahmed; Alem, Abdellah; Wang, HuaQing; Le Bras, Gilbert; Ouahbi, Tariq
2017-03-01
The effects of porous media grain size distribution on the transport and deposition of polydisperse suspended particles under different flow velocities were investigated. Selected kaolinite particles (2-30 μm) and fluorescein (a dissolved tracer) were injected into the porous media by the step-input injection technique. Three sand-filled columns were used: fine sand, coarse sand, and a third sand (mixture) obtained by mixing the two other sands in equal weight proportions. The performance of the porous media in particle removal was evaluated by analysing particle breakthrough curves, hydro-dispersive parameters determined using the analytical solution of the convection-dispersion equation with first-order deposition kinetics, particle deposition profiles, and the particle-size distribution of the recovered and deposited particles. The deposition kinetics and the longitudinal hydrodynamic dispersion coefficients are controlled by the porous media grain size distribution. The mixture sand is more dispersive than the fine and coarse sands. The larger the uniformity coefficient of the porous medium, the higher the filtration efficiency. At low velocities, the porous media capture all sizes of the injected suspended particles, with the larger ones mainly captured at the entrance. A high flow velocity carries the particles deeper into the porous media, producing more gradual changes in the deposition profile. The median diameter of the deposited particles at different depths increases with flow velocity. A broad grain size distribution leads to narrower pores, enhancing the deposition of particles by straining. Copyright © 2016. Published by Elsevier B.V.
Vasiljevic, Milica; Cartwright, Emma; Pechey, Rachel; Hollands, Gareth J; Couturier, Dominique-Laurent; Jebb, Susan A; Marteau, Theresa M
2017-01-01
An estimated one third of daily energy intake is consumed in the workplace. The workplace is therefore an important context in which to reduce energy consumption to tackle the high rates of overweight and obesity in the general population. Altering environmental cues for food selection and consumption (physical micro-environment or 'choice architecture' interventions) has the potential to reduce energy intake. The first aim of this pilot trial is to estimate the potential impact on energy purchased of three such environmental cues (size of portions, packages and tableware; availability of healthier vs. less healthy options; and energy labelling) in workplace cafeterias. A second aim of this pilot trial is to examine the feasibility of recruiting eligible worksites, and to identify barriers to the feasibility and acceptability of implementing the interventions in preparation for a larger trial. Eighteen worksite cafeterias in England will be assigned to one of three intervention groups to assess the impact on energy purchased of altering (a) portion, package and tableware size (n = 6); (b) availability of healthier options (n = 6); and (c) energy (calorie) labelling (n = 6). Using a stepped wedge design, sites will implement the allocated interventions at different time periods, as randomised. This pilot trial will examine the feasibility of recruiting eligible worksites, and the feasibility and acceptability of implementing the interventions in preparation for a larger trial. In addition, a series of linear mixed models will be used to estimate the impact of each intervention on total energy (calories) purchased per time frame of analysis (daily or weekly), controlling for total sales/transactions, adjusted for calendar time, and with random effects for worksite. These analyses will allow an estimate of the effect size of each of the three proposed interventions, which will form the basis of the sample size calculations necessary for a larger trial. ISRCTN52923504.
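A minimal sketch of the kind of mixed model described above is given below, assuming a tidy table with one row per worksite per week; the file name, column names and formula are illustrative placeholders, not the trial's analysis plan.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical weekly cafeteria data: one row per worksite per week, with
# columns worksite, week, intervention (0/1), energy_kcal, transactions.
df = pd.read_csv("cafeteria_sales.csv")

# Total energy purchased per week, controlling for total transactions and
# calendar time, with a random intercept for each worksite.
model = smf.mixedlm("energy_kcal ~ intervention + transactions + week",
                    data=df, groups="worksite")
result = model.fit()
print(result.summary())
```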
NASA Astrophysics Data System (ADS)
Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir
2018-03-01
Incremental sheet forming is a versatile sheet metal forming process in which a sheet is formed into its final shape by a series of localized deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability, and forming speed. This project focuses on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate, and step size. In addition, the effects of wall angle, feed rate, and step size on the surface roughness and thickness uniformity of the aluminium sheet were investigated. From the results, it was observed that surface roughness and thickness uniformity varied inversely due to the formation of surface waviness. Increasing the feed rate and decreasing the step size produced a lower surface roughness, while a more uniform thickness reduction was obtained by reducing the wall angle and step size. Using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of the aluminium sheet were determined. The findings of this project help to reduce the time needed to optimise surface roughness and thickness uniformity in incremental sheet forming.
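Taguchi analysis of this kind typically ranks parameter levels by a signal-to-noise ratio; for a smaller-the-better response such as surface roughness, S/N = -10 log10(mean(y^2)). The sketch below uses hypothetical roughness replicates, not the study's measurements.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical surface-roughness replicates (Ra, um) for three step sizes at a
# fixed wall angle and feed rate; a higher S/N means lower, more consistent roughness.
trials = {0.2: [1.8, 2.0], 0.5: [2.6, 2.4], 1.0: [3.5, 3.9]}
for step, ra in trials.items():
    print(f"step size {step} mm: S/N = {sn_smaller_is_better(ra):.2f} dB")
```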
Effect of reaction-step-size noise on the switching dynamics of stochastic populations
NASA Astrophysics Data System (ADS)
Be'er, Shay; Heller-Algazi, Metar; Assaf, Michael
2016-05-01
In genetic circuits, when the messenger RNA lifetime is short compared to the cell cycle, proteins are produced in geometrically distributed bursts, which greatly affects the cellular switching dynamics between different metastable phenotypic states. Motivated by this scenario, we study a general problem of switching or escape in stochastic populations, where influx of particles occurs in groups or bursts, sampled from an arbitrary distribution. The fact that the step size of the influx reaction is a priori unknown and, in general, may fluctuate in time with a given correlation time and statistics, introduces an additional nondemographic reaction-step-size noise into the system. Employing the probability-generating function technique in conjunction with Hamiltonian formulation, we are able to map the problem in the leading order onto solving a stationary Hamilton-Jacobi equation. We show that compared to the "usual case" of single-step influx, bursty influx exponentially decreases the population's mean escape time from its long-lived metastable state. In particular, close to bifurcation we find a simple analytical expression for the mean escape time which solely depends on the mean and variance of the burst-size distribution. Our results are demonstrated on several realistic distributions and compare well with numerical Monte Carlo simulations.
Project W-320, 241-C-106 sluicing HVAC calculations, Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, J.W.
1998-08-07
This supporting document has been prepared to make the FDNW calculations for Project W-320 readily retrievable. The report contains the following calculations: Exhaust airflow sizing for Tank 241-C-106; Equipment sizing and selection recirculation fan; Sizing high efficiency mist eliminator; Sizing electric heating coil; Equipment sizing and selection of recirculation condenser; Chiller skid system sizing and selection; High efficiency metal filter shielding input and flushing frequency; and Exhaust skid stack sizing and fan sizing.
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.
NASA Technical Reports Server (NTRS)
Chan, Daniel C.; Darian, Armen; Sindir, Munir
1992-01-01
We applied two commonly used numerical methods for the solution of the Navier-Stokes equations and compared their efficiency and accuracy. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can take a large temporal integration step at the expense of higher memory requirements and a larger operation count per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time-marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them to compute a two-dimensional driven cavity flow at Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change in the dependent variables between two consecutive time steps had fallen below 10^-5.
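For reference, the pseudo-compressible system that the artificial compressibility method solves can be written as below (a standard Chorin-type formulation; the abstract does not give the authors' exact equations). Here β is the artificial compressibility parameter and τ the pseudo-time; at pseudo-steady state the pressure term vanishes and the incompressibility constraint ∇·u = 0 is recovered.

```latex
\frac{\partial p}{\partial \tau} + \beta \, \nabla \cdot \mathbf{u} = 0, \qquad
\frac{\partial \mathbf{u}}{\partial \tau} + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\nabla p + \frac{1}{Re}\,\nabla^{2}\mathbf{u}
```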
Intermediate surface structure between step bunching and step flow in SrRuO3 thin film growth
NASA Astrophysics Data System (ADS)
Bertino, Giulia; Gura, Anna; Dawber, Matthew
We performed a systematic study of SrRuO3 thin films grown on TiO2-terminated SrTiO3 substrates using off-axis magnetron sputtering. We investigated step bunching formation and the evolution of the SRO film morphology by varying the step size of the substrate, the growth temperature, and the film thickness. The thin films were characterized using atomic force microscopy and X-ray diffraction. We identified single and multiple step bunching and step flow growth regimes as a function of the growth parameters. We also clearly observe a stronger influence of the substrate step size on the evolution of the SRO film surface than of the other growth parameters. Remarkably, we observe the formation of a smooth, regular and uniform "fish skin" structure at the transition between one regime and another. We believe that the fish skin structure results from the merging of 2D flat islands predicted by previous models. The direct observation of this transition structure allows us to better understand how and when step bunching develops in the growth of SrRuO3 thin films.
NASA Astrophysics Data System (ADS)
Jacobs, Luc; Barroo, Cédric; Gilis, Natalia; Lambeets, Sten V.; Genty, Eric; Visart de Bocarmé, Thierry
2018-03-01
Making atomic oxygen available at the surface of a catalyst is the key step for oxidation reactions on Au-based catalysts. In this context, Au-Ag alloy catalysts exhibit promising properties for selective oxidation reactions of alcohols: low-temperature activity and high selectivity. The presence of O(ads) and its effect on catalytic reactivity is studied via N2O dissociative adsorption and subsequent hydrogenation. Field emission techniques are particularly suited to studying this reaction: Field Ion Microscopy (FIM) and Field Emission Microscopy (FEM) make it possible to image the extremity of sharp metallic tips, the size and morphology of which are close to those of a single catalytic particle. The reaction dynamics is studied in the 300-320 K temperature range and at a pressure of 3.5 × 10^-3 Pa. The main result is a strong structure-reactivity relationship during the N2O + H2 reaction over Au-8.8 at.%Ag model catalysts. Comparison of high-resolution FIM images of the clean sample with FEM images taken during reaction shows that the reaction is sensitive to the local structure of the facets, independently of the partial pressures of N2O and H2 used. This suggests a localised dissociative adsorption step for N2O and H2, with the formation of a reactive interface around the {210} facets.
Orłowska, Marta; Kowalska, Teresa; Sajewicz, Mieczysław; Jesionek, Wioleta; Choma, Irena M; Majer-Dziedzic, Barbara; Szymczak, Grażyna; Waksmundzka-Hajnos, Monika
2015-01-01
Bioautography carried out with the aid of thin-layer chromatographic adsorbents can be used to assess antibacterial activity in samples of different origin. It can be used either as a simple and cost-effective detection method applied to a developed chromatogram, or in a dot blot test performed on a chromatographic plate, where the total antibacterial activity of a sample is scrutinized. The aim of this study was to compare the antibacterial activity of 18 thyme (Thymus) specimens and species (originating from the same gardening plot and harvested in the same period) by means of a dot blot test with direct bioautography. A two-step extraction of the herbal material was applied, and in step two the polar fraction of secondary metabolites was obtained under previously optimized extraction conditions [methanol-water (27+73, v/v), 130°C]. This fraction was then tested for antibacterial activity against Bacillus subtilis bacteria. It was established that all investigated extracts exhibited antibacterial activity, yet distinct differences were observed in the size of the bacterial growth inhibition zones among the compared thyme species. Based on the results obtained, T. citriodorus "golden dwarf" (sample No. 5) and T. marschallianus (sample No. 6) were selected as promising targets for further investigations and possible inclusion in a herbal pharmacopeia, which is an essential scientific novelty of this study.
Kim, Jaerok; Choi, Yoonseok
2014-01-01
BACKGROUND/OBJECTIVES Educational interventions targeted food selection perception, knowledge, attitude, and behavior. Education regarding irradiated food was intended to change food selection behavior specific to it. SUBJECTS AND METHODS There were 43 elementary school students (35.0%), 45 middle school students (36.6%), and 35 high school students (28.5%). The first step was research design. Educational targets were selected and informed consent was obtained in step two. An initial survey was conducted as step three. Step four was a 45-minute theoretical educational intervention. Step five concluded with a survey and an experiment on food selection behavior. RESULTS After the 45-minute education on the principles, actual state of usage, and pros and cons of irradiated food for elementary, middle, and high school students in Korea, perception, knowledge, attitude, and behavior regarding irradiated food were significantly higher than before the education (P < 0.000). CONCLUSIONS The behavior of irradiated food selection shows high correlation with all variables of perception, knowledge, and attitude, and it is necessary to provide information at each level of change in perception, knowledge, and attitude in order to bring about the proper behavior change, which is the ultimate goal of the education. PMID:25324942
Syh, J; Patel, B; Syh, J; Wu, H; Rosen, L; Durci, M; Katz, S; Sibata, C
2012-06-01
To evaluate the characteristics of commercial-grade flatbed scanners and medical-grade scanners for radiochromic EBT film dosimetry. Performance aspects of a Vidar Dosimetry Pro Advantage (Red), Epson 750 Pro, Microtek ArtixScan 1800f, and Microtek ScanMaker 8700 scanner for EBT2 Gafchromic film were evaluated in the categories of repeatability, maximum distinguishable optical density (OD) differentiation, OD variance, and dose curve characteristics. OD step film by Stouffer Industries containing 31 steps ranging from 0.05 to 3.62 OD was used. EBT films were irradiated with doses ranging from 20 to 600 cGy in 6 × 6 cm^2 field sizes and analyzed 24 hours later using RIT113 and Tomotherapy Film Analyzer software. Scans were performed in transmissive mode, landscape orientation, 16-bit image. The mean and standard deviation of the Analog-to-Digital (A/D) scanner value were measured by selecting a 3 × 3 mm^2 uniform area in the central region of each OD step from a total of 20 scans performed over several weeks. Repeatability was determined from the variance of OD step 0.38. Maximum distinguishable OD was defined as the last OD step whose range of A/D values does not overlap with that of its neighboring step. Repeatability uncertainty ranged from 0.1% for the Vidar to 4% for the Epson. Average standard deviation of OD steps ranged from 0.21% for the Vidar to 6.4% for the ArtixScan 1800f. Maximum distinguishable optical density ranged from 3.38 for the Vidar to 1.32 for the ScanMaker 8700. The A/D range of each OD step corresponds to a dose range. Dose ranges of OD steps varied from 1% for the Vidar to 20% for the ScanMaker 8700. The Vidar exhibited a dose curve that utilized a broader range of OD values than the other scanners. The Vidar exhibited a higher maximum distinguishable OD, smaller variance in repeatability, smaller A/D value deviation per OD step, and a shallower dose curve with respect to OD. © 2012 American Association of Physicists in Medicine.
Optimal Padding for the Two-Dimensional Fast Fourier Transform
NASA Technical Reports Server (NTRS)
Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.
2011-01-01
One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is divisible by a power of two. Because of this, padding grids (that are not already sized to a power of two) so that their size is the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids. For a two-dimensional grid, there are certain pad sizes that work better than others. Therefore, the need exists to generalize a strategy for determining optimal pad sizes. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second step is to transpose the resulting matrix. The third step is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that struck a balance between optimizing the grid pad size with prime factors that are small (which are optimal for one-dimensional operations), and with prime factors that are large (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times, and is not fine-tuned for any specific application. It increases the amount of times that processor-requested data is found in the set-associative processor cache. Cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying sized grids. This is because various computer architectures process commands differently. The test grid was 512 x 512. Using a 540 x 540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256 x 256 grid worked best. A Core2Duo computer preferred either a 1040 x 1040 (15 percent faster) or a 1008 x 1008 (30 percent faster) grid. There are many industries that can benefit from this algorithm, including optics, image-processing, signal-processing, and engineering applications.
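One way to pick such a pad size, in the spirit of the trade-off described above but not the authors' tuned algorithm, is to score candidate sizes by their prime factorisations and take the best one in a small window; the sketch below uses the sum of prime factors as a crude cost proxy.

```python
def prime_factors(n):
    """Prime factorisation of n as a list of primes (with repetition)."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def pad_score(n):
    # Crude cost proxy: smooth sizes (many small prime factors) score low,
    # sizes containing a large prime factor score high.
    return sum(prime_factors(n))

def choose_pad(n, search=64):
    """Pick the lowest-scoring pad size in [n, n + search)."""
    return min(range(n, n + search), key=pad_score)

print(choose_pad(512))  # 512 (already a power of two)
print(choose_pad(513))  # 540 = 2^2 * 3^3 * 5, a nearby smooth size
```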
Opinion formation on adaptive networks with intensive average degree
NASA Astrophysics Data System (ADS)
Schmittmann, B.; Mukhopadhyay, Abhishek
2010-12-01
We study the evolution of binary opinions on a simple adaptive network of N nodes. At each time step, a randomly selected node updates its state (“opinion”) according to the majority opinion of the nodes that it is linked to; subsequently, all links are reassigned with probability p̃ (q̃) if they connect nodes with equal (opposite) opinions. In contrast to earlier work, we ensure that the average connectivity (“degree”) of each node is independent of the system size (“intensive”), by choosing p̃ and q̃ to be of O(1/N). Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. We find two absorbing states, characterized by perfect consensus, and one metastable state, characterized by a population split evenly between the two opinions. The relaxation time of this state grows exponentially with the number of nodes, N. A second metastable state, found in the earlier studies, is no longer observed.
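A minimal simulation sketch of the update rule as described is given below; the edge-set representation, the parameter values, and the tie-breaking choice (a node keeps its opinion on a tie) are illustrative assumptions rather than details given in the abstract.

```python
import random

def step(opinions, edges, p, q):
    """One update: a random node adopts the majority opinion of its neighbours,
    then each link is reassigned to a random pair with probability p (if it
    joins equal opinions) or q (if it joins opposite opinions)."""
    N = len(opinions)
    i = random.randrange(N)
    nbrs = [b if a == i else a for (a, b) in edges if i in (a, b)]
    if nbrs:
        votes = sum(opinions[j] for j in nbrs)
        if votes:
            opinions[i] = 1 if votes > 0 else -1
    new_edges = set()
    for (a, b) in edges:
        prob = p if opinions[a] == opinions[b] else q
        if random.random() < prob:
            a, b = random.sample(range(N), 2)  # reassign to a random pair
        new_edges.add((min(a, b), max(a, b)))
    return new_edges

# Illustrative run: random opinions and a random initial graph, with p and q
# scaled as O(1/N) so the mean degree stays intensive.
N, mean_deg = 200, 4
opinions = [random.choice([-1, 1]) for _ in range(N)]
edges = {tuple(sorted(random.sample(range(N), 2))) for _ in range(N * mean_deg // 2)}
p, q = 1.0 / N, 2.0 / N
for _ in range(10000):
    edges = step(opinions, edges, p, q)
```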
Smoothing spline ANOVA frailty model for recurrent event data.
Du, Pang; Jiang, Yihua; Wang, Yuedong
2011-12-01
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of the parameter update and/or increasing the MCMC sample size along iterations. A model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate their use through the analysis of bladder tumor data. © 2011, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Wolfram, Markus; König, Stephan; Bandelow, Steffi; Fischer, Paul; Jankowski, Alexander; Marx, Gerrit; Schweikhard, Lutz
2018-02-01
Lead clusters Pb_n^{+/-} in the size range between about n = 15 and 40 have recently been shown to exhibit complex dissociation spectra due to sequential and competing decays. In order to disentangle the pathways, the exemplary Pb_31^+ clusters have been stored and size selected in a Penning trap and irradiated with nanosecond laser pulses. We present time-resolved measurements on time scales from several tens of microseconds to several hundreds of milliseconds. The study provides strong evidence that Pb_31^+ decays not only by neutral monomer evaporation but also by break-off of neutral heptamers. In addition, the decays are followed further to smaller products. The corresponding decay and growth times show that Pb_30^+ also dissociates by either monomer evaporation or heptamer break-off. Furthermore, the product Pb_17^+ may well be the result of heptamer break-off from Pb_24^+ as the second step of a sequential heptamer decay.
Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area
NASA Astrophysics Data System (ADS)
Min, Li; Xin, Yang; Liyang, Xiong
2016-06-01
The shoulder line is a key terrain line in the hilly areas of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, shoulder line extraction is an imperative first step. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most of the noisy points; (ii) based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method; (iii) the common boundary between the two slope classes is extracted as the shoulder line candidate; (iv) the filter grid size is adjusted and steps i-iii are repeated until the shoulder line candidate matches its real location; (v) the shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County of Shaanxi Province, China. A total of 600 million points were acquired in the 0.23 km² test area using a Riegl VZ400 3D laser scanner in August 2014. Owing to limited computing capacity, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for filter grid size optimization. The experimental results show that the optimal filter grid size varies across sample areas, and that a power function relates filter grid size to point density. The optimal grid size was determined from this relation, and the shoulder lines of the 60 blocks were then extracted. Compared with manual interpretation results, the overall accuracy reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
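A compact sketch of steps (i)-(iii) is given below, assuming the point cloud is an N x 3 NumPy array of x, y, z coordinates. Otsu's threshold is used here only as a stand-in for the Natural Breaks classifier, and the grid filter and boundary extraction are simplified illustrations rather than the authors' implementation.

```python
import numpy as np
from scipy.interpolate import griddata
from skimage.filters import threshold_otsu

def shoulder_line_candidate(points, grid_size):
    """Steps (i)-(iii): grid-filter ground points, build a DEM, map slope,
    split it into two classes, and return a boolean class-boundary mask."""
    x, y, z = points.T
    # (i) keep the lowest return per grid cell as a crude ground filter
    ix = np.floor((x - x.min()) / grid_size).astype(int)
    iy = np.floor((y - y.min()) / grid_size).astype(int)
    lowest = {}
    for k, cell in enumerate(zip(ix, iy)):
        if cell not in lowest or z[k] < z[lowest[cell]]:
            lowest[cell] = k
    ground = points[list(lowest.values())]
    # (ii) interpolate a regular DEM and derive slope (degrees)
    xi = np.arange(x.min(), x.max(), grid_size)
    yi = np.arange(y.min(), y.max(), grid_size)
    XI, YI = np.meshgrid(xi, yi)
    dem = griddata(ground[:, :2], ground[:, 2], (XI, YI), method='linear')
    gy, gx = np.gradient(dem, grid_size)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    # classify into P-N terrain (Otsu stands in for Natural Breaks) and
    # (iii) mark cells where the class changes, i.e. the candidate line
    steep = slope > threshold_otsu(slope[np.isfinite(slope)])
    boundary = (steep[:, 1:] != steep[:, :-1])[:-1, :] | \
               (steep[1:, :] != steep[:-1, :])[:, :-1]
    return boundary  # step (iv): adjust grid_size and repeat until it fits
```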
Cengiz, Ibrahim Fatih; Oliveira, Joaquim Miguel; Reis, Rui L
2017-08-01
Quantitative assessment of the micro-structure of materials is of key importance in many fields, including tissue engineering, biology, and dentistry. Micro-computed tomography (µ-CT) is an intensively used non-destructive technique. However, acquisition parameters such as pixel size and rotation step may have significant effects on the obtained results. In this study, a set of tissue engineering scaffolds, including examples of natural and synthetic polymers and ceramics, was analyzed. We comprehensively compared the quantitative results of µ-CT characterization across 15 acquisition scenarios that differ in the combination of pixel size and rotation step. The results showed that the acquisition parameters can have statistically significant effects on the quantified mean porosity, mean pore size, and mean wall thickness of the scaffolds. The effects are also practically important, since the differences can be as high as 24% in mean porosity on average, and as large as 19.5 h and 166 GB in characterization time and data storage for a sample with a relatively small volume. This study showed in a quantitative manner the effects of such a wide range of acquisition scenarios on the final data, as well as on the characterization time and data storage per sample. A clear picture of the effects of pixel size and rotation step on the results is provided, which can be useful for refining the practice of µ-CT characterization of scaffolds and economizing the related resources.
One-step selection of Vaccinia virus-binding DNA aptamers by MonoLEX
Nitsche, Andreas; Kurth, Andreas; Dunkhorst, Anna; Pänke, Oliver; Sielaff, Hendrik; Junge, Wolfgang; Muth, Doreen; Scheller, Frieder; Stöcklein, Walter; Dahmen, Claudia; Pauli, Georg; Kage, Andreas
2007-01-01
Background As a new class of therapeutic and diagnostic reagents, more than fifteen years ago RNA and DNA aptamers were identified as binding molecules to numerous small compounds, proteins and rarely even to complete pathogen particles. Most aptamers were isolated from complex libraries of synthetic nucleic acids by a process termed SELEX based on several selection and amplification steps. Here we report the application of a new one-step selection method (MonoLEX) to acquire high-affinity DNA aptamers binding Vaccinia virus used as a model organism for complex target structures. Results The selection against complete Vaccinia virus particles resulted in a 64-base DNA aptamer specifically binding to orthopoxviruses as validated by dot blot analysis, Surface Plasmon Resonance, Fluorescence Correlation Spectroscopy and real-time PCR, following an aptamer blotting assay. The same oligonucleotide showed the ability to inhibit in vitro infection of Vaccinia virus and other orthopoxviruses in a concentration-dependent manner. Conclusion The MonoLEX method is a straightforward procedure as demonstrated here for the identification of a high-affinity DNA aptamer binding Vaccinia virus. MonoLEX comprises a single affinity chromatography step, followed by subsequent physical segmentation of the affinity resin and a single final PCR amplification step of bound aptamers. Therefore, this procedure improves the selection of high affinity aptamers by reducing the competition between aptamers of different affinities during the PCR step, indicating an advantage for the single-round MonoLEX method. PMID:17697378
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
Proposed variations of the stepped-wedge design can be used to accommodate multiple interventions
Lyons, Vivian H; Li, Lingyu; Hughes, James P; Rowhani-Rahbar, Ali
2018-01-01
Objective Stepped wedge design (SWD) cluster randomized trials have traditionally been used for evaluating a single intervention. We aimed to explore design variants suitable for evaluating multiple interventions in a SWD trial. Study Design and Setting We identified four specific variants of the traditional SWD that would allow two interventions to be conducted within a single cluster randomized trial: Concurrent, Replacement, Supplementation and Factorial SWDs. These variants were chosen to flexibly accommodate study characteristics that limit a one-size-fits-all approach for multiple interventions. Results In the Concurrent SWD, each cluster receives only one intervention, unlike the other variants. The Replacement SWD supports two interventions that will not or cannot be employed at the same time. The Supplementation SWD is appropriate when the second intervention requires the presence of the first intervention, and the Factorial SWD supports the evaluation of intervention interactions. The precision for estimating intervention effects varies across the four variants. Conclusion Selection of the appropriate design variant should be driven by the research question while considering the trade-off between the number of steps, number of clusters, restrictions for concurrent implementation of the interventions, lingering effects of each intervention, and precision of the intervention effect estimates. PMID:28412466
NASA Astrophysics Data System (ADS)
Yu, Hung Wei; Anandan, Deepak; Hsu, Ching Yi; Hung, Yu Chih; Su, Chun Jung; Wu, Chien Ting; Kakkerla, Ramesh Kumar; Ha, Minh Thien Huu; Huynh, Sa Hoang; Tu, Yung Yi; Chang, Edward Yi
2018-02-01
High-density (~80 µm^-2) vertical InAs nanowires (NWs) with small diameters (~28 nm) were grown on bare Si (111) substrates by means of two-step metal organic chemical vapor deposition. There are two critical factors in the growth process: (1) a critical nucleation temperature for a specific In molar fraction (approximately 1.69 × 10^-5 atm) is the key factor to reduce the size of the nuclei and hence the diameter of the InAs NWs, and (2) a critical V/III ratio during the 2nd step growth will greatly increase the density of the InAs NWs (from 45 µm^-2 to 80 µm^-2) and at the same time keep the diameter small. The high-resolution transmission electron microscopy and selected area diffraction patterns of InAs NWs grown on Si exhibit a Wurtzite structure and no stacking faults. The observed longitudinal optic peaks in the Raman spectra were explained in terms of the small surface charge region width due to the small NW diameter and the increase of the free electron concentration, which was consistent with the TCAD program simulation of small diameter (< 40 nm) InAs NWs.
Dual salt precipitation for the recovery of a recombinant protein from Escherichia coli.
Balasundaram, Bangaru; Sachdeva, Soam; Bracewell, Daniel G
2011-01-01
Given the worldwide demand for biopharmaceuticals, it is necessary to consider alternative process strategies to improve the economics of manufacturing such molecules. To address this issue, the current study investigates precipitation to selectively isolate the product or remove contaminants and thus assist the initial purification of an intracellular protein. The hypothesis tested was that the combination of two or more precipitating agents would alter the solubility profile of the product through synergistic or antagonistic effects. This principle was investigated through several combinations of ammonium sulfate and sodium citrate at different ratios. A synergistic effect mediated by a known electrostatic interaction of citrate ions with Fab', in addition to the typical salting-out effects, was observed. On the basis of the results of the solubility studies, a two-step primary recovery route was investigated. In the first step, termed conditioning, carried out post-homogenization and before clarification, addition of 0.8 M ammonium sulfate extracted 30% additional product. Clarification performance, measured using a scale-down disc stack centrifugation mimic, indicated a four-fold reduction in centrifuge size requirements. Dual salt precipitation in the second step resulted in >98% recovery of Fab' while simultaneously removing 36% of the contaminant proteins. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
In situ formation deposited ZnO nanoparticles on silk fabrics under ultrasound irradiation.
Khanjani, Somayeh; Morsali, Ali; Joo, Sang W
2013-03-01
Deposition of zinc(II) oxide (ZnO) nanoparticles on the surface of silk fabrics was achieved by sequential dipping steps in alternating baths of potassium hydroxide and zinc nitrate under ultrasound irradiation. This coating involves in situ generation and deposition of ZnO in one step. The effects of ultrasound irradiation, concentration, and the number of sequential dipping steps on the growth of the ZnO nanoparticles were studied. The results show a decrease in particle size with increasing ultrasound power. Also, increasing the concentration and the number of sequential dipping steps increases particle size. The physicochemical properties of the nanoparticles were determined by powder X-ray diffraction (XRD), scanning electron microscopy (SEM), and wavelength dispersive X-ray (WDX) analysis. Copyright © 2012 Elsevier B.V. All rights reserved.
A Heckman selection model for the safety analysis of signalized intersections
Wong, S. C.; Zhu, Feng; Pei, Xin; Huang, Helai; Liu, Youjun
2017-01-01
Purpose The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. Methods This study explores a Heckman selection model of crash rate and severity at different severity levels, using a two-step procedure. The first step uses a probit regression model to determine the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight injury and for killed or serious injury (KSI) crashes, respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on traffic flow, geometric road design, road environment, traffic control, and any crashes that occurred during two years. Results The results of the proposed two-step Heckman selection model illustrate the necessity of estimating different crash rates for different crash severity levels. Conclusions A comparison with existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative method for evaluating the safety performance of signalized intersections. PMID:28732050
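For orientation, the classic Heckman two-step estimator behind this approach can be sketched as follows: a probit selection equation, an inverse Mills ratio computed from its linear predictor, and an outcome regression on the selected sample augmented with that ratio. The file, column names, and covariates below are hypothetical placeholders, not the paper's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("intersections.csv")  # hypothetical: one row per site-period

# Step 1: probit for whether any crash of the given severity was observed
Z = sm.add_constant(df[["traffic_flow", "num_lanes", "speed_limit"]])
probit = sm.Probit(df["any_crash"], Z).fit()

# Inverse Mills ratio from the selection equation's linear predictor
xb = np.dot(Z, probit.params)
imr = pd.Series(norm.pdf(xb) / norm.cdf(xb), index=df.index)

# Step 2: OLS on the selected sample, augmented with the inverse Mills ratio
sel = df["any_crash"] == 1
X = sm.add_constant(df.loc[sel, ["traffic_flow", "num_lanes", "speed_limit"]])
X["imr"] = imr[sel]
ols = sm.OLS(df.loc[sel, "log_crash_rate"], X).fit()
print(ols.summary())
```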
Deep Learning for Population Genetic Inference.
Sheehan, Sara; Song, Yun S
2016-03-01
Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme.
The evolutionary legacy of size-selective harvesting extends from genes to populations
Uusi-Heikkilä, Silva; Whiteley, Andrew R; Kuparinen, Anna; Matsumura, Shuichi; Venturelli, Paul A; Wolter, Christian; Slate, Jon; Primmer, Craig R; Meinelt, Thomas; Killen, Shaun S; Bierbach, David; Polverino, Giovanni; Ludwig, Arne; Arlinghaus, Robert
2015-01-01
Size-selective harvesting is assumed to alter life histories of exploited fish populations, thereby negatively affecting population productivity, recovery, and yield. However, demonstrating that fisheries-induced phenotypic changes in the wild are at least partly genetically determined has proved notoriously difficult. Moreover, the population-level consequences of fisheries-induced evolution are still being controversially discussed. Using an experimental approach, we found that five generations of size-selective harvesting altered the life histories and behavior, but not the metabolic rate, of wild-origin zebrafish (Danio rerio). Fish adapted to high positively size selective fishing pressure invested more in reproduction, reached a smaller adult body size, and were less explorative and bold. Phenotypic changes seemed subtle but were accompanied by genetic changes in functional loci. Thus, our results provided unambiguous evidence for rapid, harvest-induced phenotypic and evolutionary change when harvesting is intensive and size selective. According to a life-history model, the observed life-history changes elevated population growth rate in harvested conditions, but slowed population recovery under a simulated moratorium. Hence, the evolutionary legacy of size-selective harvesting includes populations that are productive under exploited conditions, but selectively disadvantaged to cope with natural selection pressures that often favor large body size. PMID:26136825
Steps in the open space planning process
Stephanie B. Kelly; Melissa M. Ryan
1995-01-01
This paper presents the steps involved in developing an open space plan. The steps are generic in that the methods may be applied to communities of various sizes. The intent is to provide a framework for developing an open space plan that meets Massachusetts requirements for funding of open space acquisition.
A simple, compact, and rigid piezoelectric step motor with large step size.
Wang, Qi; Lu, Qingyou
2009-08-01
We present a novel piezoelectric stepper motor featuring high compactness, rigidity, simplicity, and the ability to operate in any direction. Although tested only at room temperature, it is expected to work at low temperatures as well, owing to its undemanding operating conditions and large step size. The motor is implemented with a piezoelectric scanner tube that is axially cut into two nearly separate halves and clamps a hollow shaft inside at both ends via the spring parts of the shaft. Two driving voltages deform the two halves of the piezotube one at a time in one direction and let them recover simultaneously, which moves the shaft in the opposite direction, and vice versa.
An improved affine projection algorithm for active noise cancellation
NASA Astrophysics Data System (ADS)
Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo
2017-08-01
The affine projection algorithm is a signal-reuse algorithm with a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect its performance: the step size and the projection length. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it achieves a smaller steady-state error and faster convergence. Simulation results show that its performance is superior to that of the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm obtains very good results.
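A minimal sketch of an affine projection filter with a variable step size is shown below; the specific step-size rule here (scaling the step with the instantaneous error energy) is only an illustrative stand-in, since the abstract does not spell out the VSS-APA update rule.

```python
import numpy as np

def vss_apa(x, d, L=32, P=4, mu_max=1.0, delta=1e-3):
    """Affine projection adaptive filter with an illustrative variable step size.
    x: input signal, d: desired signal, L: filter length, P: projection order."""
    w = np.zeros(L)
    errors = []
    for n in range(L + P, len(x)):
        # P most recent input regressors (rows) and desired samples
        X = np.array([x[n - p - L + 1:n - p + 1][::-1] for p in range(P)])
        dn = np.array([d[n - p] for p in range(P)])
        e = dn - X @ w
        # Illustrative VSS rule: larger error energy -> larger step, bounded by mu_max
        mu = mu_max * np.dot(e, e) / (np.dot(e, e) + 1.0)
        # Regularised affine projection update
        w = w + mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(P), e)
        errors.append(e[0])
    return w, np.array(errors)

# Toy system-identification run with a random FIR plant and white-noise input
rng = np.random.default_rng(0)
h = rng.normal(size=32)
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
w, err = vss_apa(x, d)
```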
Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M
2012-03-01
Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
NASA Astrophysics Data System (ADS)
Schmidt, Sarah; Tavernaro, Isabella; Cavelius, Christian; Weber, Eva; Kümper, Alexander; Schmitz, Carmen; Fleddermann, Jana; Kraegeloh, Annette
2017-09-01
In this study, a novel approach for preparation of green fluorescent protein (GFP)-doped silica nanoparticles with a narrow size distribution is presented. GFP was chosen as a model protein due to its autofluorescence. Protein-doped nanoparticles have a high application potential in the field of intracellular protein delivery. In addition, fluorescently labelled particles can be used for bioimaging. The size of these protein-doped nanoparticles was adjusted from 15 to 35 nm using a multistep synthesis process, comprising the particle core synthesis followed by shell regrowth steps. GFP was selectively incorporated into the silica matrix of either the core or the shell or both by a one-pot reaction. The obtained nanoparticles were characterised by determination of particle size, hydrodynamic diameter, ζ-potential, fluorescence and quantum yield. The measurements showed that the fluorescence of GFP was maintained during particle synthesis. Cellular uptake experiments demonstrated that the GFP-doped nanoparticles can be used as stable and effective fluorescent probes. The study reveals the potential of the chosen approach for incorporation of functional biological macromolecules into silica nanoparticles, which opens novel application fields like intracellular protein delivery.
Sacks, G; Swinburn, B; Kraak, V; Downs, S; Walker, C; Barquera, S; Friel, S; Hawkes, C; Kelly, B; Kumanyika, S; L'Abbé, M; Lee, A; Lobstein, T; Ma, J; Macmullan, J; Mohan, S; Monteiro, C; Neal, B; Rayner, M; Sanders, D; Snowdon, W; Vandevijvere, S
2013-10-01
Private-sector organizations play a critical role in shaping the food environments of individuals and populations. However, there is currently very limited independent monitoring of private-sector actions related to food environments. This paper reviews previous efforts to monitor the private sector in this area, and outlines a proposed approach to monitor private-sector policies and practices related to food environments, and their influence on obesity and non-communicable disease (NCD) prevention. A step-wise approach to data collection is recommended, in which the first ('minimal') step is the collation of publicly available food and nutrition-related policies of selected private-sector organizations. The second ('expanded') step assesses the nutritional composition of each organization's products, their promotions to children, their labelling practices, and the accessibility, availability and affordability of their products. The third ('optimal') step includes data on other commercial activities that may influence food environments, such as political lobbying and corporate philanthropy. The proposed approach will be further developed and piloted in countries of varying size and income levels. There is potential for this approach to enable national and international benchmarking of private-sector policies and practices, and to inform efforts to hold the private sector to account for their role in obesity and NCD prevention. © 2013 The Authors. Obesity Reviews published by John Wiley & Sons Ltd on behalf of the International Association for the Study of Obesity.
Hollow Microtube Resonators via Silicon Self-Assembly toward Subattogram Mass Sensing Applications.
Kim, Joohyun; Song, Jungki; Kim, Kwangseok; Kim, Seokbeom; Song, Jihwan; Kim, Namsu; Khan, M Faheem; Zhang, Linan; Sader, John E; Park, Keunhan; Kim, Dongchoul; Thundat, Thomas; Lee, Jungchul
2016-03-09
Fluidic resonators with integrated microchannels (hollow resonators) are attractive for mass, density, and volume measurements of single micro/nanoparticles and cells, yet their widespread use is limited by the complexity of their fabrication. Here we report a simple and cost-effective approach for fabricating hollow microtube resonators. A prestructured silicon wafer is annealed at high temperature under a controlled atmosphere to form self-assembled buried cavities. The interiors of these cavities are oxidized to produce thin oxide tubes, following which the surrounding silicon material is selectively etched away to suspend the oxide tubes. This simple three-step process easily produces hollow microtube resonators. We report another innovation in the capping glass wafer, where we integrate fluidic access channels and getter materials along with residual gas suction channels. In total, only five photolithographic steps and one bonding step are required to fabricate vacuum-packaged hollow microtube resonators that exhibit quality factors as high as ∼ 13,000. We go one step further to explore additional attractive features, including the ability to tune the device responsivity, changing the resonator material, and scaling down the resonator size. A resonator wall thickness of ∼ 120 nm and a channel hydraulic diameter of ∼ 60 nm are demonstrated solely by conventional microfabrication approaches. The unique characteristics of this new fabrication process facilitate the widespread use of hollow microtube resonators, their translation between diverse research fields, and the production of commercially viable devices.
Fluctuating survival selection explains variation in avian group size
Brown, Charles R.; Brown, Mary Bomberger; Roche, Erin A.; O’Brien, Valerie A.; Page, Catherine E.
2016-01-01
Most animal groups vary extensively in size. Because individuals in certain sizes of groups often have higher apparent fitness than those in other groups, why wide group size variation persists in most populations remains unexplained. We used a 30-y mark–recapture study of colonially breeding cliff swallows (Petrochelidon pyrrhonota) to show that the survival advantages of different colony sizes fluctuated among years. Colony size was under both stabilizing and directional selection in different years, and reversals in the sign of directional selection regularly occurred. Directional selection was predicted in part by drought conditions: birds in larger colonies tended to be favored in cooler and wetter years, and birds in smaller colonies in hotter and drier years. Oscillating selection on colony size likely reflected annual differences in food availability and the consequent importance of information transfer, and/or the level of ectoparasitism, with the net benefit of sociality varying under these different conditions. Averaged across years, there was no net directional change in selection on colony size. The wide range in cliff swallow group size is probably maintained by fluctuating survival selection and represents the first case, to our knowledge, in which fitness advantages of different group sizes regularly oscillate over time in a natural vertebrate population. PMID:27091998
NASA Astrophysics Data System (ADS)
Mérida, Fernando; Chiu-Lam, Andreina; Bohórquez, Ana C.; Maldonado-Camargo, Lorena; Pérez, María-Eglée; Pericchi, Luis; Torres-Lugo, Madeline; Rinaldi, Carlos
2015-11-01
Magnetic Fluid Hyperthermia (MFH) uses heat generated by magnetic nanoparticles exposed to alternating magnetic fields to cause a temperature increase in tumors to the hyperthermia range (43-47 °C), inducing apoptotic cancer cell death. As with all cancer nanomedicines, one of the most significant challenges with MFH is achieving high nanoparticle accumulation at the tumor site. This motivates development of synthesis strategies that maximize the rate of energy dissipation of iron oxide magnetic nanoparticles, which are preferred for their intrinsic biocompatibility. This has led to development of synthesis strategies that, although attractive from the point of view of chemical elegance, may not be suitable for scale-up to quantities necessary for clinical use. On the other hand, to date the aqueous co-precipitation synthesis, which readily yields gram quantities of nanoparticles, has only been reported to yield sufficiently high specific absorption rates after laborious size-selective fractionation. This work focuses on improvements to the aqueous co-precipitation of iron oxide nanoparticles to increase the specific absorption rate (SAR), by optimizing synthesis conditions and the subsequent peptization step. Heating efficiencies up to 1048 W/gFe (36.5 kA/m, 341 kHz; ILP=2.3 nH m2 kg-1) were obtained, representing one of the highest values reported for iron oxide particles synthesized by co-precipitation without size-selective fractionation. Furthermore, particles reached SAR values of up to 719 W/gFe (36.5 kA/m, 341 kHz; ILP=1.6 nH m2 kg-1) when in a solid matrix, demonstrating they were capable of significant rates of energy dissipation even when restricted from physical rotation. Reduction in energy dissipation rate due to immobilization has been identified as an obstacle to clinical translation of MFH. Hence, particles obtained with the conditions reported here have great potential for application in nanoscale thermal cancer therapy.
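The ILP figures quoted above follow from the common field normalization ILP = SAR/(H^2 f); the short check below reproduces the reported ~2.3 and ~1.6 nH m2 kg-1 from the stated field amplitude and frequency.

```python
# Recompute the intrinsic loss power (ILP) from the reported SAR values.
# ILP = SAR / (H^2 * f), a field-normalized heating efficiency.

def ilp_nH_m2_per_kg(sar_w_per_g, h_kA_per_m, f_kHz):
    sar = sar_w_per_g * 1e3          # W/g  -> W/kg
    h = h_kA_per_m * 1e3             # kA/m -> A/m
    f = f_kHz * 1e3                  # kHz  -> Hz
    return sar / (h**2 * f) * 1e9    # H m^2/kg -> nH m^2/kg

print(ilp_nH_m2_per_kg(1048, 36.5, 341))  # ~2.3 (suspension)
print(ilp_nH_m2_per_kg(719, 36.5, 341))   # ~1.6 (immobilized in a solid matrix)
```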
Grevillot, L; Stock, M; Vatnitsky, S
2015-10-21
This study aims at selecting and evaluating a ripple filter design compatible with non-isocentric proton and carbon ion scanning beam treatment delivery for a compact nozzle. The use of non-isocentric treatments when the patient is shifted as close as possible towards the nozzle exit allows for a reduction in the air gap and thus an improvement in the quality of scanning proton beam treatment delivery. Reducing the air gap is less important for scanning carbon ions, but ripple filters are still necessary for scanning carbon ion beams to reduce the number of energy steps required to deliver homogeneous SOBP. The proper selection of ripple filters also allows a reduction in the possible transverse and depth-dose inhomogeneities that could appear in non-isocentric conditions in particular. A thorough review of existing ripple filter designs over the past 16 years is performed and a design for non-isocentric treatment delivery is presented. A unique ripple filter quality index (QIRiFi) independent of the particle type and energy and representative of the ratio between energy modulation and induced scattering is proposed. The Bragg peak width evaluated at the 80% dose level (BPW80) is proposed to relate the energy modulation of the delivered Bragg peaks and the energy layer step size allowing the production of homogeneous SOBP. Gate/Geant4 Monte Carlo simulations have been validated for carbon ion and ripple filter simulations based on measurements performed at CNAO and subsequently used for a detailed analysis of the proposed ripple filter design. A combination of two ripple filters in a series has been validated for non-isocentric delivery and did not show significant transverse and depth-dose inhomogeneities. Non-isocentric conditions allow a significant reduction in the spot size at the patient entrance (up to 350% and 200% for protons and carbon ions with range shifter, respectively), and therefore in the lateral penumbra in the patients.
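As an illustration of the BPW80 metric defined above (Bragg peak width at the 80% dose level), a minimal sketch for a sampled depth-dose curve follows; the curve is synthetic and the crossing points are located by linear interpolation.

```python
import numpy as np

def bpw80(depth_mm, dose):
    """Bragg peak width at the 80% dose level for a sampled depth-dose curve.

    Assumes a single well-defined peak; the proximal and distal 80% crossings
    are located by linear interpolation.
    """
    dose = np.asarray(dose, dtype=float)
    level = 0.8 * dose.max()
    i = np.flatnonzero(dose >= level)
    # proximal crossing (dose rising between i[0]-1 and i[0])
    p = np.interp(level, [dose[i[0] - 1], dose[i[0]]], [depth_mm[i[0] - 1], depth_mm[i[0]]])
    # distal crossing (dose falling between i[-1] and i[-1]+1)
    d = np.interp(level, [dose[i[-1] + 1], dose[i[-1]]], [depth_mm[i[-1] + 1], depth_mm[i[-1]]])
    return d - p

# toy pristine-peak-like curve (illustrative only)
z = np.linspace(0, 160, 801)
dose = 0.3 + np.exp(-0.5 * ((z - 150) / 2.5) ** 2)
print(f"BPW80 ~ {bpw80(z, dose):.2f} mm")
```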
NASA Astrophysics Data System (ADS)
Yan, Maoling; Liu, Pingzeng; Zhang, Chao; Zheng, Yong; Wang, Xizhi; Zhang, Yan; Chen, Weijie; Zhao, Rui
2018-01-01
Agroclimatological resources provide material and energy for agricultural production. This study quantitatively analyzes the impact of changes in selected climate factors on wheat yield over different growth periods by comparing two time divisions of the wheat growth cycle: monthly empirical-statistical multiple regression models (October to June of the following year) and growth-stage empirical-statistical multiple regression models (sowing, seedling, tillering, overwintering, regreening, jointing, heading, and maturity stages), relating agrometeorological data and growth-stage records to winter wheat production in Yanzhou, Shandong Province, China. Correlation analysis (CA) was performed over 35 years (1981 to 2015) between crop yield and the corresponding weather parameters, namely daily mean temperature, sunshine duration, and average daily precipitation, selected from 18 meteorological factors. The results show that precipitation during the overwintering period has the greatest impact on winter wheat yield in this area: each 1 mm increase in daily mean rainfall was associated with a 201.64 kg/hm2 reduction in output. Moreover, temperature and sunshine duration during the heading and maturity stages also exert a significant influence on output: every 1 °C increase in daily mean temperature was associated with a 199.85 kg/hm2 increase in output, and every 1 h increase in mean sunshine duration was associated with a 130.68 kg/hm2 reduction in output. The model using growth stages as time steps agreed better with fluctuations in meteorological yield than the model using months as time steps and offered a better explanation of the growth mechanism of wheat. Overall, the results indicate that the three factors affect yield to different extents during different growth periods and provide more specific guidance for agricultural production management in this area.
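A minimal sketch of the growth-stage empirical-statistical multiple regression described above; the column names, effect sizes, and data are synthetic placeholders, not the Yanzhou records.

```python
# Sketch of a growth-stage empirical-statistical multiple regression:
# regress meteorological yield on stage-wise climate factors.
# Data and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_years = 35
df = pd.DataFrame({
    "rain_overwinter_mm": rng.gamma(2.0, 1.0, n_years),    # daily mean rainfall, overwintering
    "tmean_heading_C": rng.normal(15.0, 1.5, n_years),      # daily mean temperature, heading
    "sunshine_maturity_h": rng.normal(7.0, 1.0, n_years),   # mean sunshine duration, maturity
})
# synthetic meteorological yield (kg/hm2) with assumed effect sizes
df["met_yield"] = (-200 * df["rain_overwinter_mm"]
                   + 200 * df["tmean_heading_C"]
                   - 130 * df["sunshine_maturity_h"]
                   + rng.normal(0, 150, n_years))

X = sm.add_constant(df[["rain_overwinter_mm", "tmean_heading_C", "sunshine_maturity_h"]])
model = sm.OLS(df["met_yield"], X).fit()
print(model.params)   # coefficient = yield change per unit change of each factor
```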
NASA Technical Reports Server (NTRS)
Majda, George
1986-01-01
One-leg and multistep discretizations of variable-coefficient linear systems of ODEs having both slow and fast time scales are investigated analytically. The stability properties of these discretizations are obtained independent of ODE stiffness and compared. The results of numerical computations are presented in tables, and it is shown that for large step sizes the stability of one-leg methods is better than that of the corresponding linear multistep methods.
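A minimal numerical sketch (not the paper's analysis) of the two discretization families on a stiff, variable-coefficient scalar test problem: the trapezoidal linear multistep method versus its one-leg twin, the implicit midpoint rule. Both implicit steps are solved in closed form because the test equation is linear in y.

```python
# Compare the trapezoidal linear multistep method with its one-leg twin
# (the implicit midpoint rule) on a stiff variable-coefficient test problem
#   y' = lam(t) * (y - g(t)) + g'(t),  exact solution y(t) = g(t).
import numpy as np

lam = lambda t: -1000.0 * (2.0 + np.sin(t))   # stiff, time-varying coefficient
g   = lambda t: np.cos(t)                     # slow exact solution
gp  = lambda t: -np.sin(t)

def trapezoidal_lmm(h, T):
    t, y = 0.0, g(0.0)
    while t < T - 1e-12:
        rhs = (y + 0.5 * h * (lam(t) * (y - g(t)) + gp(t))
                 + 0.5 * h * (-lam(t + h) * g(t + h) + gp(t + h)))
        y = rhs / (1.0 - 0.5 * h * lam(t + h))
        t += h
    return y

def one_leg_twin(h, T):                       # implicit midpoint rule
    t, y = 0.0, g(0.0)
    while t < T - 1e-12:
        tm = t + 0.5 * h
        a = lam(tm)
        y = (y * (1.0 + 0.5 * h * a) + h * (-a * g(tm) + gp(tm))) / (1.0 - 0.5 * h * a)
        t += h
    return y

for h in (0.5, 0.1, 0.01):                    # deliberately large steps for a stiff problem
    T = 10.0
    print(h, abs(trapezoidal_lmm(h, T) - g(T)), abs(one_leg_twin(h, T) - g(T)))
```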
ERIC Educational Resources Information Center
Rosen, Ellen F.; Stolurow, Lawrence M.
In order to find a good predictor of empirical difficulty, an operational definition of step size, ten programmer-judges rated change in complexity in two versions of a mathematics program, and these ratings were then compared with measures of empirical difficulty obtained from student response data. The two versions, a 54-frame booklet and a 35…
Predict the fatigue life of crack based on extended finite element method and SVR
NASA Astrophysics Data System (ADS)
Song, Weizhen; Jiang, Zhansi; Jiang, Hui
2018-05-01
The extended finite element method (XFEM) and support vector regression (SVR) are used to predict the fatigue life of a cracked plate. First, the XFEM is employed to calculate the stress intensity factors (SIFs) for given crack sizes. A prediction model is then built from the functional relationship between the SIFs and the fatigue life or crack length. Finally, the prediction model is used to estimate the SIFs at other crack sizes or load cycles. Because the forward Euler method is accurate only for small step sizes, a new prediction method is presented to resolve this issue. Numerical examples demonstrate that the proposed method allows a larger step size while maintaining high accuracy.
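In this kind of workflow the fatigue-life estimate typically comes from integrating a Paris-type crack-growth law cycle by cycle, which is where the forward Euler step size matters. The sketch below uses the analytical center-crack SIF range as a stand-in for the XFEM/SVR surrogate and assumed material constants; it shows how the predicted life shifts with the Euler block size.

```python
# Paris-law crack growth, da/dN = C * (dK)^m, integrated with forward Euler
# in blocks of dN cycles.  The SIF model here is the analytical center-crack
# formula dK = dS * sqrt(pi * a); in the paper's workflow an XFEM-trained SVR
# surrogate would supply dK(a).  Constants are illustrative assumptions.
import numpy as np

C, m = 1e-11, 3.0          # assumed Paris constants (MPa*sqrt(m) units)
dS = 100.0                 # assumed stress range, MPa
a0, ac = 0.002, 0.02       # initial and critical half crack lengths, m

def delta_K(a):            # stand-in for the XFEM/SVR-predicted SIF range
    return dS * np.sqrt(np.pi * a)

def life_forward_euler(dN):
    a, N = a0, 0.0
    while a < ac:
        a += C * delta_K(a) ** m * dN   # Euler step over a block of dN cycles
        N += dN
    return N

for dN in (10_000, 1_000, 100):
    print(f"dN = {dN:6d} cycles -> predicted life ~ {life_forward_euler(dN):,.0f} cycles")
```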
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
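A scalar sketch of the adaptive step-size idea: an LMS-style equalizer whose step size is read from a small lookup table, large during acquisition and small near convergence. This is an illustration under assumed channel and signal parameters, not the experiment's 6x6 MIMO MMSE equalizer.

```python
# Scalar LMS equalizer with a step size scheduled from a small lookup table
# (large early for fast convergence, small later for low steady-state error).
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
h = np.array([0.9, 0.4, -0.2])               # assumed channel impulse response
x = rng.choice([-1.0, 1.0], size=n)          # BPSK training symbols
y = np.convolve(x, h)[:n] + 0.02 * rng.standard_normal(n)

taps = 5
w = np.zeros(taps)
lookup = [(0, 0.08), (2_000, 0.02), (8_000, 0.005)]   # (start sample, step size)

err = np.empty(n - taps)
for k in range(taps, n):
    mu = [m for s, m in lookup if k >= s][-1]         # current step size
    u = y[k - taps:k][::-1]                           # most recent samples first
    e = x[k - 2] - w @ u                              # train on a delayed symbol
    w += mu * e * u                                   # LMS update
    err[k - taps] = e * e

print("early MSE :", err[:2_000].mean())
print("final MSE :", err[-2_000:].mean())
```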
Finite Memory Walk and Its Application to Small-World Network
NASA Astrophysics Data System (ADS)
Oshima, Hiraku; Odagaki, Takashi
2012-07-01
In order to investigate the effects of cycles on dynamical processes on both regular lattices and complex networks, we introduce a finite memory walk (FMW) as an extension of the simple random walk (SRW), in which a walker is prohibited from moving to sites visited during the m steps just before the current position. This walk interpolates between the SRW, which has no memory (m = 0), and the self-avoiding walk (SAW), which has an infinite memory (m = ∞). We investigate the FMW on regular lattices and clarify the fundamental characteristics of the walk. We find that (1) the mean-square displacement (MSD) of the FMW shows a crossover from the SAW at a short time step to the SRW at a long time step, and the crossover time is approximately equivalent to the number of steps remembered, and that the MSD can be rescaled in terms of the time step and the size of memory; (2) the mean first-return time (MFRT) of the FMW changes significantly at the number of remembered steps that corresponds to the size of the smallest cycle in the regular lattice, where "smallest" indicates that the size of the cycle is the smallest in the network; (3) the relaxation time of the first-return time distribution (FRTD) decreases as the number of cycles increases. We also investigate the FMW on the Watts-Strogatz networks that can generate small-world networks, and show that the clustering coefficient of the Watts-Strogatz network is strongly related to the MFRT of the FMW that can remember two steps.
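A minimal simulation of the finite memory walk on a square lattice, assuming the walker simply cannot move to any site visited during the last m steps (with a stay-put fallback in the rare event that all four neighbours are forbidden); m = 0 recovers the SRW, and the mean-square displacement grows faster at early times for larger m.

```python
# Finite memory walk (FMW) on a 2D square lattice: the walker never moves to
# a site visited during the last m steps.  m = 0 recovers the simple random walk.
import random
from collections import deque

def fmw_msd(m, n_steps, n_walkers=2_000, seed=0):
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    msd = [0.0] * (n_steps + 1)
    for _ in range(n_walkers):
        pos = (0, 0)
        memory = deque([pos], maxlen=m + 1)      # current site plus the last m sites
        for t in range(1, n_steps + 1):
            allowed = [(pos[0] + dx, pos[1] + dy) for dx, dy in moves
                       if (pos[0] + dx, pos[1] + dy) not in memory]
            if not allowed:                      # trapped: stay put this step
                allowed = [pos]
            pos = rng.choice(allowed)
            memory.append(pos)
            msd[t] += pos[0] ** 2 + pos[1] ** 2
    return [v / n_walkers for v in msd]

for m in (0, 2, 8):
    print(m, [round(v, 2) for v in fmw_msd(m, 20)[::5]])
```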
Evidence of size-selective evolution in the fighting conch from prehistoric subsistence harvesting.
O'Dea, Aaron; Shaffer, Marian Lynne; Doughty, Douglas R; Wake, Thomas A; Rodriguez, Felix A
2014-05-07
Intensive size-selective harvesting can drive evolution of sexual maturity at smaller body size. Conversely, prehistoric, low-intensity subsistence harvesting is not considered an effective agent of size-selective evolution. Uniting archaeological, palaeontological and contemporary material, we show that size at sexual maturity in the edible conch Strombus pugilis declined significantly from pre-human (approx. 7 ka) to prehistoric times (approx. 1 ka) and again to the present day. Size at maturity also fell from early- to late-prehistoric periods, synchronous with an increase in harvesting intensity as other resources became depleted. A consequence of declining size at maturity is that early prehistoric harvesters would have received two-thirds more meat per conch than contemporary harvesters. After exploring the potential effects of selection biases, demographic shifts, environmental change and habitat alteration, these observations collectively implicate prehistoric subsistence harvesting as an agent of size-selective evolution with long-term detrimental consequences. We observe that contemporary populations that are protected from harvesting are slightly larger at maturity, suggesting that halting or even reversing thousands of years of size-selective evolution may be possible.
Feasibility of geophysical methods as a tool to detect urban subsurface cavity
NASA Astrophysics Data System (ADS)
Bang, E.; Kim, C.; Rim, H.; Ryu, D.; Lee, H.; Jeong, S. W.; Jung, B.; Yum, B. W.
2016-12-01
Urban road collapse has become a social issue in Korea in recent years. Because an underground cavity cannot heal by itself, existing cavities must be detected before a road collapses. In selecting a detection method we must consider cost, reliability, availability, the skill required for field work, and the interpretation procedure, because the areas and road lengths to be covered are very large. We constructed a real-scale ground model for this purpose. Its size is about 15 m * 8 m * 3 m (L*W*D), with sewer pipes buried at a depth of 1.2 m. We modeled the upward migration or enlargement of an underground cavity by excavating the ground through a hole in the buried sewer pipe, in two or three steps with different cavity sizes and depths. We applied five methods to the ground model to monitor ground collapse and detect the underground cavity at each step. The first is the GPR method, which is very popular for this kind of project; GPR provided very good images showing the underground cavity at each step. A DC resistivity survey was also selected because it is a common tool for locating underground anomalies; it imaged the underground cavity, but its field setup is not well suited to this kind of project. The third method is the micro-gravity method, which can differentiate a cavity zone from the gravity distribution. Micro-gravity gave smaller g values around the cavity than under normal conditions, but the survey takes a very long time to perform. The fourth method is thermal imaging: the ground surface temperature above a cavity differs from that of the surrounding area. We used a multi-copter for rapid thermal imaging and could identify the area of the underground cavity from the aerial thermal image of the ground surface. The last method is an RFID/magnetic survey. When the ground collapses around a buried RFID/magnetic tag, the tag moves downward, so ground collapse can be recognized by checking whether the tag is still detected; the area of ground collapse was easily identified. Comparing the methods from a variety of viewpoints, the GPR method, aerial thermal imaging, and the RFID/magnetic survey showed the best performance as tools to detect subsurface cavities.
Strategy Guideline. Proper Water Heater Selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoeschele, M.; Springer, D.; German, A.
2015-04-09
This Strategy Guideline on proper water heater selection was developed by the Building America team Alliance for Residential Building Innovation to provide step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads.
Selective catalytic two-step process for ethylene glycol from carbon monoxide
Dong, Kaiwu; Elangovan, Saravanakumar; Sang, Rui; Spannenberg, Anke; Jackstell, Ralf; Junge, Kathrin; Li, Yuehui; Beller, Matthias
2016-01-01
Upgrading C1 chemicals (for example, CO, CO/H2, MeOH and CO2) with C–C bond formation is essential for the synthesis of bulk chemicals. In general, these industrially important processes (for example, Fischer-Tropsch) proceed under harsh reaction conditions (>250 °C; high pressure) and suffer from low selectivity, which makes high capital investment necessary and requires additional purifications. Here, a different strategy for the preparation of ethylene glycol (EG) via initial oxidative coupling and subsequent reduction is presented. Separating the coupling and reduction steps allows for a completely selective formation of EG (99%) from CO. This two-step catalytic procedure makes use of a Pd-catalysed oxycarbonylation of amines to oxamides at room temperature (RT) and subsequent Ru- or Fe-catalysed hydrogenation to EG. Notably, in the first step the required amines can be efficiently reused. The presented stepwise oxamide-mediated coupling provides the basis for a new strategy for selective upgrading of C1 chemicals. PMID:27377550
Morisse Pradier, H; Sénéchal, A; Philit, F; Tronc, F; Maury, J-M; Grima, R; Flamens, C; Paulus, S; Neidecker, J; Mornex, J-F
2016-02-01
Lung transplantation (LT) is now considered as an excellent treatment option for selected patients with end-stage pulmonary diseases, such as COPD, cystic fibrosis, idiopathic pulmonary fibrosis, and pulmonary arterial hypertension. The 2 goals of LT are to provide a survival benefit and to improve quality of life. The 3-step decision process leading to LT is discussed in this review. The first step is the selection of candidates, which requires a careful examination in order to check absolute and relative contraindications. The second step is the timing of listing for LT; it requires the knowledge of disease-specific prognostic factors available in international guidelines, and discussed in this paper. The third step is the choice of procedure: indications of heart-lung, single-lung, and bilateral-lung transplantation are described. In conclusion, this document provides guidelines to help pulmonologists in the referral and selection processes of candidates for transplantation in order to optimize the outcome of LT. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Edwards, Alyn C; Mocilac, Pavle; Geist, Andreas; Harwood, Laurence M; Sharrad, Clint A; Burton, Neil A; Whitehead, Roger C; Denecke, Melissa A
2017-05-02
The first hydrophilic, 1,10-phenanthroline derived ligands consisting of only C, H, O and N atoms for the selective extraction of Am(iii) from spent nuclear fuel are reported herein. One of these 2,9-bis-triazolyl-1,10-phenanthroline (BTrzPhen) ligands combined with a non-selective extracting agent, was found to exhibit process-suitable selectivity for Am(iii) over Eu(iii) and Cm(iii), providing a clear step forward.
The potential role of real-time geodetic observations in tsunami early warning
NASA Astrophysics Data System (ADS)
Tinti, Stefano; Armigliato, Alberto
2016-04-01
Tsunami warning systems (TWS) have the final goal of issuing a reliable alert of an incoming dangerous tsunami to coastal populations early enough to allow people to flee from the shore and coastal areas according to evacuation plans. In the last decade, especially after the catastrophic 2004 Boxing Day tsunami in the Indian Ocean, much attention has been given to filling gaps in the existing TWSs (only covering the Pacific Ocean at that time) and to establishing new TWSs in ocean regions that were uncovered. Typically, TWSs operating today work only on earthquake-induced tsunamis. TWSs quickly estimate earthquake location and size by processing seismic signals in real time; on the basis of some pre-defined "static" procedures (either based on decision matrices or on pre-archived tsunami simulations), they assess the tsunami alert level on a large regional scale and issue specific bulletins to a pre-selected audience of recipients. Not infrequently these procedures result in generic alert messages with little value. What operational TWSs usually do not do is compute the earthquake focal mechanism, calculate the co-seismic sea-floor displacement, assess the initial tsunami conditions, input these data into tsunami simulation models, and compute tsunami propagation up to the threatened coastal districts. This series of steps is nowadays considered too time-consuming to provide the required timely alert. An equivalent series of steps could start from the same premises (earthquake focal parameters) and reach the same result (tsunami height at target coastal areas) by replacing the intermediate steps of real-time tsunami simulations with proper selection from a large archive of pre-computed tsunami scenarios. The advantage of real-time simulations and of archived scenario selection is that estimates are tailored to the specific tsunami that is occurring, and the alert can be more detailed (less generic) and appropriate for local needs. Both these procedures are still at an experimental or testing stage and have not yet been implemented in any standard TWS operations. Nonetheless, this is widely seen as the natural next step in the evolution of TWSs. In this context, improvement of the real-time estimates of the tsunamigenic earthquake focal mechanism is of fundamental importance to trigger the appropriate computational chain. Quick discrimination between strike-slip and thrust-fault earthquakes, and, equally relevant, quick assessment of the co-seismic on-fault slip distribution, are exemplary cases to which a real-time geodetic monitoring system can contribute significantly. Robust inversion of geodetic data can help to reconstruct the sea floor deformation pattern, especially if two conditions are met: the source is not too far from network stations and is well covered azimuthally. These two conditions are sometimes hard to satisfy fully, but in certain regions, such as the Mediterranean and Caribbean seas, this is quite possible due to the limited size of the ocean basins. Close cooperation between the Global Geodetic Observing System (GGOS) community, seismologists, tsunami scientists and TWS operators is highly recommended to make significant progress in the quick determination of the earthquake source, which can trigger a timely estimation of the ensuing tsunami and a more reliable and detailed assessment of the tsunami size at the coast.
Migrate small, sound big: functional constraints on body size promote tracheal elongation in cranes.
Jones, M R; Witt, C C
2014-06-01
Organismal traits often represent the outcome of opposing selection pressures. Although social or sexual selection can cause the evolution of traits that constrain function or survival (e.g. ornamental feathers), it is unclear how the strength and direction of selection respond to ecological shifts that increase the severity of the constraint. For example, reduced body size might evolve by natural selection to enhance flight performance in migratory birds, but social or sexual selection favouring large body size may provide a countervailing force. Tracheal elongation is a potential outcome of these opposing pressures because it allows birds to convey an auditory signal of exaggerated body size. We predicted that the evolution of migration in cranes has coincided with a reduction in body size and a concomitant intensification of social or sexual selection for apparent large body size via tracheal elongation. We used a phylogenetic comparative approach to examine the relationships among migration distance, body mass and trachea length in cranes. As predicted, we found that migration distance correlated negatively with body size and positively with proportional trachea length. This result was consistent with our hypothesis that evolutionary reductions in body size led to intensified selection for trachea length. The most likely ultimate causes of intensified positive selection on trachea length are the direct benefits of conveying a large body size in intraspecific contests for mates and territories. We conclude that the strength of social or sexual selection on crane body size is linked to the degree of functional constraint. © 2014 The Authors. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.
Legerstee, Jeroen S; Tulen, Joke H M; Dierckx, Bram; Treffers, Philip D A; Verhulst, Frank C; Utens, Elisabeth M W J
2010-02-01
This study examined whether treatment response to stepped-care cognitive-behavioural treatment (CBT) is associated with changes in threat-related selective attention and its specific components in a large clinical sample of anxiety-disordered children. Ninety-one children with an anxiety disorder were included in the present study. Children received a standardized stepped-care CBT. Three treatment response groups were distinguished: initial responders (anxiety disorder free after phase one: child-focused CBT), secondary responders (anxiety disorder free after phase two: child-parent-focused CBT), and treatment non-responders. Treatment response was determined using a semi-structured clinical interview. Children performed a pictorial dot-probe task before and after stepped-care CBT (i.e., before phase one and after phase two CBT). Changes in selective attention to severely threatening pictures, but not to mildly threatening pictures, were significantly associated with treatment success. At pre-treatment assessment, initial responders selectively attended away from severely threatening pictures, whereas secondary responders selectively attended toward severely threatening pictures. After stepped-care CBT, initial and secondary responders did not show any selectivity in the attentional processing of severely threatening pictures. Treatment non-responders did not show any changes in selective attention due to CBT. Initial and secondary treatment responders showed a reduction of their predisposition to selectively attend away or toward severely threatening pictures, respectively. Treatment non-responders did not show any changes in selective attention. The pictorial dot-probe task can be considered a potentially valuable tool in assigning children to appropriate treatment formats as well as for monitoring changes in selective attention during the course of CBT.
Evolution of body size in Galapagos marine iguanas.
Wikelski, Martin
2005-10-07
Body size is one of the most important traits of organisms and allows predictions of an individual's morphology, physiology, behaviour and life history. However, explaining the evolution of complex traits such as body size is difficult because a plethora of other traits influence body size. Here I review what we know about the evolution of body size in a group of island reptiles and try to generalize about the mechanisms that shape body size. Galapagos marine iguanas occupy all 13 larger islands in this Pacific archipelago and have maximum island body weights between 900 and 12 000g. The distribution of body sizes does not match mitochondrial clades, indicating that body size evolves independently of genetic relatedness. Marine iguanas lack intra- and inter-specific food competition and predators are not size-specific, discounting these factors as selective agents influencing body size. Instead I hypothesize that body size reflects the trade-offs between sexual and natural selection. We found that sexual selection continuously favours larger body sizes. Large males establish display territories and some gain over-proportional reproductive success in the iguanas' mating aggregations. Females select males based on size and activity and are thus responsible for the observed mating skew. However, large individuals are strongly selected against during El Niño-related famines when dietary algae disappear from the intertidal foraging areas. We showed that differences in algae sward ('pasture') heights and thermal constraints on large size are causally responsible for differences in maximum body size among populations. I hypothesize that body size in many animal species reflects a trade-off between foraging constraints and sexual selection and suggest that future research could focus on physiological and genetic mechanisms determining body size in wild animals. Furthermore, evolutionary stable body size distributions within populations should be analysed to better understand selection pressures on individual body size.
Body size, swimming speed, or thermal sensitivity? Predator-imposed selection on amphibian larvae.
Gvoždík, Lumír; Smolinský, Radovan
2015-11-02
Many animals rely on their escape performance during predator encounters. Because of its dependence on body size and temperature, escape velocity is fully characterized by three measures, absolute value, size-corrected value, and its response to temperature (thermal sensitivity). The primary target of the selection imposed by predators is poorly understood. We examined predator (dragonfly larva)-imposed selection on prey (newt larvae) body size and characteristics of escape velocity using replicated and controlled predation experiments under seminatural conditions. Specifically, because these species experience a wide range of temperatures throughout their larval phases, we predict that larvae achieving high swimming velocities across temperatures will have a selective advantage over more thermally sensitive individuals. Nonzero selection differentials indicated that predators selected for prey body size and both absolute and size-corrected maximum swimming velocity. Comparison of selection differentials with control confirmed selection only on body size, i.e., dragonfly larvae preferably preyed on small newt larvae. Maximum swimming velocity and its thermal sensitivity showed low group repeatability, which contributed to non-detectable selection on both characteristics of escape performance. In the newt-dragonfly larvae interaction, body size plays a more important role than maximum values and thermal sensitivity of swimming velocity during predator escape. This corroborates the general importance of body size in predator-prey interactions. The absence of an appropriate control in predation experiments may lead to potentially misleading conclusions about the primary target of predator-imposed selection. Insights from predation experiments contribute to our understanding of the link between performance and fitness, and further improve mechanistic models of predator-prey interactions and food web dynamics.
Population viability and connectivity of the Louisiana black bear (Ursus americanus luteolus)
Laufenberg, Jared S.; Clark, Joseph D.
2014-01-01
From April 2010 to April 2012, global positioning system (GPS) radio collars were placed on 8 female and 23 male bears ranging from 1 to 11 years of age to develop a step-selection function model to predict routes and rates of interchange. For both males and females, the probability of a step being selected increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. Of 4,000 correlated random walks, the least potential interchange was between TRB and TRC and between UARB and LARB, but the relative potential for natural interchange between UARB and TRC was high. The step-selection model predicted that dispersals between the LARB and UARB populations were infrequent but possible for males and nearly nonexistent for females. No evidence of natural female dispersal between subpopulations has been documented thus far, which is also consistent with model predictions.
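Step-selection functions of this kind are usually fit as a conditional logit that contrasts each observed step with a set of random available steps from the same animal and time; the self-contained sketch below maximizes that conditional likelihood on synthetic data, with three covariates standing in for distance to natural cover, distance to agriculture, and distance to roads.

```python
# Conditional-logit sketch of a step-selection function: within each stratum
# (one observed step plus random available steps), the chosen step has
# probability softmax(X beta).  Data are synthetic placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_strata, n_alt = 400, 11                       # 1 used + 10 available candidate steps
beta_true = np.array([-0.8, -0.5, 0.6])         # nearer cover/agriculture, farther from roads

X = rng.exponential(1.0, size=(n_strata, n_alt, 3))   # covariates at each candidate step end
util = X @ beta_true
p = np.exp(util - util.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
chosen = np.array([rng.choice(n_alt, p=row) for row in p])   # index of the step actually taken

def neg_loglik(beta):
    u = X @ beta
    u = u - u.max(axis=1, keepdims=True)                     # numerical stability
    logp = u - np.log(np.exp(u).sum(axis=1, keepdims=True))
    return -logp[np.arange(n_strata), chosen].sum()

fit = minimize(neg_loglik, np.zeros(3), method="BFGS")
print("estimated selection coefficients:", np.round(fit.x, 2))  # should be near beta_true
```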
NASA Astrophysics Data System (ADS)
Suzuki, Tomoya; Ohkura, Yuushi
2016-01-01
In order to examine the predictability and profitability of financial markets, we introduce three ideas to improve the traditional technical analysis to detect investment timings more quickly. Firstly, a nonlinear prediction model is considered as an effective way to enhance this detection power by learning complex behavioral patterns hidden in financial markets. Secondly, the bagging algorithm can be applied to quantify the confidence in predictions and compose new technical indicators. Thirdly, we also introduce how to select more profitable stocks to improve investment performance by the two-step selection: the first step selects more predictable stocks during the learning period, and then the second step adaptively and dynamically selects the most confident stock showing the most significant technical signal in each investment. Finally, some investment simulations based on real financial data show that these ideas are successful in overcoming complex financial markets.
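A schematic of the two-step selection with bagged confidence, using synthetic return series and a deliberately simple lag-1 sign predictor in place of the nonlinear model; all parameters are illustrative.

```python
# Toy version of the two-step selection.  Step 1 keeps the stocks whose
# returns were most predictable in a learning window; step 2 trades, each day,
# only the surviving stock whose bagged prediction is most confident.
import numpy as np

rng = np.random.default_rng(3)
n_stocks, n_days, n_bags = 20, 500, 25
returns = rng.normal(0, 0.01, size=(n_stocks, n_days))
returns[:5] += 0.3 * np.roll(returns[:5], 1, axis=1)   # give a few stocks weak momentum

def bagged_prediction(past, n_bags, rng):
    """Bootstrap the lag-1 relation and vote on the sign of the next return."""
    votes = []
    for _ in range(n_bags):
        idx = rng.integers(0, len(past) - 1, size=len(past) - 1)
        corr = np.corrcoef(past[idx], past[idx + 1])[0, 1]
        votes.append(np.sign(corr) * np.sign(past[-1]))
    votes = np.asarray(votes)
    return np.sign(votes.sum()), abs(votes.mean())      # (direction, confidence)

# Step 1: keep the five most predictable stocks over the learning period.
train = returns[:, :250]
predictability = [abs(np.corrcoef(r[:-1], r[1:])[0, 1]) for r in train]
selected = np.argsort(predictability)[-5:]

# Step 2: each day, trade only the most confident signal among the survivors.
pnl = 0.0
for t in range(250, n_days - 1):
    preds = {s: bagged_prediction(returns[s, :t], n_bags, rng) for s in selected}
    best = max(preds, key=lambda s: preds[s][1])
    pnl += preds[best][0] * returns[best, t + 1]
print("cumulative return of the toy strategy:", round(pnl, 4))
```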
Method for localizing and isolating an errant process step
Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.
2003-01-01
A method for localizing and isolating an errant process includes the steps of retrieving from a defect image database a selection of images each image having image content similar to image content extracted from a query image depicting a defect, each image in the selection having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data. A process step as a highest probable source of the defect according to the derived conditional probability distribution is then identified. A method for process step defect identification includes the steps of characterizing anomalies in a product, the anomalies detected by an imaging system. A query image of a product defect is then acquired. A particular characterized anomaly is then correlated with the query image. An errant process step is then associated with the correlated image.
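The core of the method can be sketched as content-based retrieval followed by a vote over the process-step labels of the retrieved defects; the feature vectors, labels, and step names below are hypothetical.

```python
# Sketch of errant-process-step localization: retrieve defect images whose
# feature vectors are most similar to the query, derive a conditional
# probability distribution over process steps from their labels, and report
# the most probable step.  All data and step names are hypothetical.
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
steps = ["litho", "etch", "implant", "cmp"]
db_features = rng.normal(size=(1_000, 16))            # characterization data per defect image
db_steps = rng.choice(steps, size=1_000)

def likely_errant_step(query_feature, k=25):
    d = np.linalg.norm(db_features - query_feature, axis=1)
    neighbours = np.argsort(d)[:k]                     # images with similar content
    counts = Counter(db_steps[i] for i in neighbours)
    total = sum(counts.values())
    dist = {s: counts.get(s, 0) / total for s in steps}   # P(step | query defect)
    return max(dist, key=dist.get), dist

query = db_features[0] + 0.1 * rng.normal(size=16)    # a new defect resembling a known one
step, dist = likely_errant_step(query)
print("most probable errant step:", step)
print("conditional distribution :", dist)
```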
Evolution of egg target size: an analysis of selection on correlated characters.
Podolsky, R D
2001-12-01
In broadcast-spawning marine organisms, chronic sperm limitation should select for traits that improve chances of sperm-egg contact. One mechanism may involve increasing the size of the physical or chemical target for sperm. However, models of fertilization kinetics predict that increasing egg size can reduce net zygote production due to an associated decline in fecundity. An alternate method for increasing physical target size is through addition of energetically inexpensive external structures, such as the jelly coats typical of eggs in species from several phyla. In selection experiments on eggs of the echinoid Dendraster excentricus, in which sperm was used as the agent of selection, eggs with larger overall targets were favored in fertilization. Actual shifts in target size following selection matched quantitative predictions of a model that assumed fertilization was proportional to target size. Jelly volume and ovum volume, two characters that contribute to target size, were correlated both within and among females. A cross-sectional analysis of selection partitioned the independent effects of these characters on fertilization success and showed that they experience similar direct selection pressures. Coupled with data on relative organic costs of the two materials, these results suggest that, under conditions where fertilization is limited by egg target size, selection should favor investment in low-cost accessory structures and may have a relatively weak effect on the evolution of ovum size.
Sæther, Bernt-Erik; Visser, Marcel E; Grøtan, Vidar; Engen, Steinar
2016-04-27
Understanding the variation in selection pressure on key life-history traits is crucial in our rapidly changing world. Density is rarely considered as a selective agent. To study its importance, we partition phenotypic selection in fluctuating environments into components representing the population growth rate at low densities and the strength of density dependence, using a new stochastic modelling framework. We analysed the number of eggs laid per season in a small song-bird, the great tit, and found balancing selection favouring large clutch sizes at small population densities and smaller clutches in years with large populations. A significant interaction between clutch size and population size in the regression for the Malthusian fitness reveals that the females producing large clutches at small population sizes are also those that show the strongest reduction in fitness when population size is increased. This provides empirical support for ongoing r- and K-selection in this population, favouring phenotypes with large growth rates r at small population sizes and phenotypes with high competitive skills when populations are close to the carrying capacity K. This selection causes long-term fluctuations around a stable mean clutch size, driven by variation in population size, implying that r- and K-selection is an important mechanism influencing phenotypic evolution in fluctuating environments. This provides a general link between ecological dynamics and evolutionary processes, operating through a joint influence of density dependence and environmental stochasticity on fluctuations in population size. © 2016 The Author(s).
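The key statistical signal described above is the clutch size by population size interaction in the regression for Malthusian fitness; a synthetic-data sketch of that test follows, with hypothetical effect sizes.

```python
# Sketch of the test for r- and K-selection: regress Malthusian fitness on
# clutch size, population size, and their interaction.  A negative interaction
# means large-clutch phenotypes do best at low density and worst at high
# density.  Data and effect sizes are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 2_000
clutch = rng.integers(4, 13, size=n).astype(float)
pop = rng.uniform(0.3, 1.6, size=n)        # population size relative to carrying capacity
fitness = 0.30 * clutch - 0.25 * clutch * pop - 0.40 * pop + rng.normal(0, 0.5, n)

df = pd.DataFrame(dict(fitness=fitness, clutch=clutch, pop=pop))
res = smf.ols("fitness ~ clutch * pop", data=df).fit()
print(res.params[["clutch", "pop", "clutch:pop"]])   # interaction should be ~ -0.25
```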
Selection on an extreme weapon in the frog-legged leaf beetle (Sagra femorata).
O'Brien, Devin M; Katsuki, Masako; Emlen, Douglas J
2017-11-01
Biologists have been fascinated with the extreme products of sexual selection for decades. However, relatively few studies have characterized patterns of selection acting on ornaments and weapons in the wild. Here, we measure selection on a wild population of weapon-bearing beetles (frog-legged leaf beetles: Sagra femorata) for two consecutive breeding seasons. We consider variation in both weapon size (hind leg length) and in relative weapon size (deviations from the population average scaling relationship between hind leg length and body size), and provide evidence for directional selection on weapon size per se and stabilizing selection on a particular scaling relationship in this population. We suggest that whenever growth in body size is sensitive to external circumstance such as nutrition, then considering deviations from population-level scaling relationships will better reflect patterns of selection relevant to evolution of the ornament or weapon than will variation in trait size per se. This is because trait-size versus body-size scaling relationships approximate underlying developmental reaction norms relating trait growth with body condition in these species. Heightened condition-sensitive expression is a hallmark of the exaggerated ornaments and weapons favored by sexual selection, yet this plasticity is rarely reflected in the way we think about, and measure, selection acting on these structures in the wild. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Antibiotic Combinations That Enable One-Step, Targeted Mutagenesis of Chromosomal Genes.
Lee, Wonsik; Do, Truc; Zhang, Ge; Kahne, Daniel; Meredith, Timothy C; Walker, Suzanne
2018-06-08
Targeted modification of bacterial chromosomes is necessary to understand new drug targets, investigate virulence factors, elucidate cell physiology, and validate results of -omics-based approaches. For some bacteria, reverse genetics remains a major bottleneck to progress in research. Here, we describe a compound-centric strategy that combines new negative selection markers with known positive selection markers to achieve simple, efficient one-step genome engineering of bacterial chromosomes. The method was inspired by the observation that certain nonessential metabolic pathways contain essential late steps, suggesting that antibiotics targeting a late step can be used to select for the absence of genes that control flux into the pathway. Guided by this hypothesis, we have identified antibiotic/counterselectable markers to accelerate reverse engineering of two increasingly antibiotic-resistant pathogens, Staphylococcus aureus and Acinetobacter baumannii. For S. aureus, we used wall teichoic acid biosynthesis inhibitors to select for the absence of tarO and for A. baumannii, we used colistin to select for the absence of lpxC. We have obtained desired gene deletions, gene fusions, and promoter swaps in a single plating step with perfect efficiency. Our method can also be adapted to generate markerless deletions of genes using FLP recombinase. The tools described here will accelerate research on two important pathogens, and the concept we outline can be readily adapted to any organism for which a suitable target pathway can be identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, C; Schultheiss, T
Purpose: In this study, we aim to evaluate the effect of dose grid size on the accuracy of calculated dose for small lesions in intracranial stereotactic radiosurgery (SRS), and to verify dose calculation accuracy with radiochromic film dosimetry. Methods: 15 intracranial lesions from previous SRS patients were retrospectively selected for this study. The planning target volume (PTV) ranged from 0.17 to 2.3 cm3. A commercial treatment planning system was used to generate SRS plans using the volumetric modulated arc therapy (VMAT) technique using two arc fields. Two convolution-superposition-based dose calculation algorithms (Anisotropic Analytical Algorithm and Acuros XB algorithm) were used to calculate volume dose distribution with dose grid size ranging from 1 mm to 3 mm with 0.5 mm step size. First, while the plan monitor units (MU) were kept constant, PTV dose variations were analyzed. Second, with 95% of the PTV covered by the prescription dose, variations of the plan MUs as a function of dose grid size were analyzed. Radiochromic films were used to compare the delivered dose and profile with the calculated dose distribution with different dose grid sizes. Results: The dose to the PTV, in terms of the mean dose, maximum, and minimum dose, showed steady decrease with increasing dose grid size using both algorithms. With 95% of the PTV covered by the prescription dose, the total MU increased with increasing dose grid size in most of the plans. Radiochromic film measurements showed better agreement with dose distributions calculated with 1-mm dose grid size. Conclusion: Dose grid size has significant impact on calculated dose distribution in intracranial SRS treatment planning with small target volumes. Using the default dose grid size could lead to under-estimation of delivered dose. A small dose grid size should be used to ensure calculation accuracy and agreement with QA measurements.
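A simple way to see why coarse dose grids under-report dose to small targets is to sample a sharply peaked dose distribution on grids of different spacing, where each voxel reports the average dose over its extent; the sketch below uses a synthetic one-dimensional dose peak, not a treatment-planning calculation.

```python
# Illustration of dose-grid-size effects for a small target: voxel-averaged
# sampling of a sharply peaked 1D dose profile lowers the reported maximum as
# the grid coarsens.  Synthetic profile, not a TPS result.
import numpy as np

x = np.linspace(-20, 20, 8001)                      # fine reference axis, mm
dose = 20.0 * np.exp(-0.5 * (x / 1.5) ** 2)         # sharp peak for a small lesion (Gy)

def reported_max(grid_mm):
    """Maximum of voxel-mean doses, with one voxel centred on the peak."""
    half = grid_mm / 2.0
    centres = np.arange(0.0, 20.0, grid_mm)
    centres = np.concatenate([-centres[:0:-1], centres])     # symmetric voxel centres
    means = [dose[(x >= c - half) & (x < c + half)].mean() for c in centres]
    return max(means)

for g in (1.0, 1.5, 2.0, 2.5, 3.0):
    print(f"grid {g:.1f} mm -> reported max {reported_max(g):5.2f} Gy (true peak 20.00 Gy)")
```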
Federsel, Hans-Jürgen; Hedberg, Martin; Qvarnström, Fredrik R; Sjögren, Magnus P T; Tian, Wei
2007-12-01
This Account describes the design and development of a scalable synthesis for the drug molecule AR-A2 (1) starting from the discovery route originating in medicinal chemistry. Special emphasis is placed on the introduction of the correct (R) stereochemistry on C2, which was ultimately achieved in a diastereoselective imine-reducing step applying NaBH4. After optimization, this transformation was operated on a large pilot-plant scale (2000 L), offering the desired product (11) in 55% yield and 96% diastereomeric excess at a 100 kg batch size. From a synthesis strategy point of view, the choice of (S)-1-phenylethylamine (9) was crucial not only for its role as a provider of the NH2 functionality and the stereo-directing abilities but also as an excellent protecting group in the subsequent N-arylation reaction, according to the Buchwald-Hartwig protocol. As one of the very first examples in its kind, the latter step was scaled up to pilot manufacturing (125 kg in 2500 L vessel size), delivering an outstanding isolated yield of 95%. This consecutive series of chemical transformations was completed with an environmentally friendly removal of the phenethyl appendage. In addition, an elegant method to synthesize the tetralone substrate 6, as well as a novel and robust procedure to use imidazole as a buffer for the selective formation of the mono-HBr salt of AR-A2, will be briefly described.
Step back! Niche dynamics in cave-dwelling predators
NASA Astrophysics Data System (ADS)
Mammola, Stefano; Piano, Elena; Isaia, Marco
2016-08-01
The geometry of Hutchinson's hypervolume derives from multiple selective pressures defined, on one hand, by the physiological tolerance of the species, and on the other, by intra- and interspecific competition. The quantification of these evolutionary forces is essential for understanding the coexistence of predators in light of competitive exclusion dynamics. We address this topic by investigating the ecological niche of two medium-sized troglophile spiders (Meta menardi and Pimoa graphitica). Over one year, we surveyed several populations in four subterranean sites in the Western Italian Alps, monitoring monthly their spatial and temporal dynamics and the associated physical and ecological variables. We assessed competition between the two species by means of multiple regression techniques and by evaluating the intersection between their multidimensional hypervolumes. We detected a remarkable overlap between the microclimatic and trophic niches of M. menardi and P. graphitica; however, the former, being larger in size, proved to be the better competitor near the cave entrance, causing the latter to readjust its spatial niche towards the inner part, where prey availability is lower ("step back effect"). In parallel with the slight variations in subterranean microclimatic conditions, the niches of the two species were also found to be season dependent, varying over the year. With this work, we aim to provide new insights into the relationships among predators, demonstrating that energy-poor environments such as caves maintain the potential for diversification of predators via niche differentiation and serve as useful models for theoretical ecological studies.
Enzyme-linked DNA dendrimer nanosensors for acetylcholine
Walsh, Ryan; Morales, Jennifer M.; Skipwith, Christopher G.; Ruckh, Timothy T.; Clark, Heather A.
2015-01-01
It is currently difficult to measure small dynamics of molecules in the brain with high spatial and temporal resolution while connecting them to the bigger picture of brain function. A step towards understanding the underlying neural networks of the brain is the ability to sense discrete changes of acetylcholine within a synapse. Here we show an efficient method for generating acetylcholine-detecting nanosensors based on DNA dendrimer scaffolds that incorporate butyrylcholinesterase and fluorescein in a nanoscale arrangement. These nanosensors are selective for acetylcholine and reversibly respond to levels of acetylcholine in the neurophysiological range. This DNA dendrimer architecture has the potential to overcome current obstacles to sensing in the synaptic environment, including the nanoscale size constraints of the synapse and the ability to quantify the spatio-temporal fluctuations of neurotransmitter release. By combining the control of nanosensor architecture with the strategic placement of fluorescent reporters and enzymes, this novel nanosensor platform can facilitate the development of new selective imaging tools for neuroscience. PMID:26442999
Sai, Xiaowei; Li, Yan; Yang, Chen; Li, Wei; Qiu, Jifang; Hong, Xiaobin; Zuo, Yong; Guo, Hongxiang; Tong, Weijun; Wu, Jian
2017-11-01
Elliptical-core few-mode fiber (EC-FMF) is used in mode division multiplexing (MDM) transmission systems to dispense with multiple-input-multiple-output (MIMO) digital signal processing, which reduces the cost and complexity of the receiver. However, EC-FMF is not well matched to conventional multiplexers/de-multiplexers (MUXs/DeMUXs) such as photonic lanterns, leading to extra mode coupling loss and crosstalk. We design elliptical-core mode-selective photonic lanterns (EC-MSPLs) with six modes, which match well with EC-FMF in MIMO-free MDM systems. The EC-MSPL was simulated with the beam propagation method, employing combinations of either step-index or graded-index fibers with six different core sizes and taper transition lengths of 8 cm or 4 cm. Through numerical simulations and optimization, both types of photonic lanterns can realize low-loss transmission and crosstalk below -20.0 dB for all modes.
The evolution of syntactic communication
NASA Astrophysics Data System (ADS)
Nowak, Martin A.; Plotkin, Joshua B.; Jansen, Vincent A. A.
2000-03-01
Animal communication is typically non-syntactic, which means that signals refer to whole situations. Human language is syntactic, and signals consist of discrete components that have their own meaning. Syntax is a prerequisite for taking advantage of combinatorics, that is, "making infinite use of finite means". The vast expressive power of human language would be impossible without syntax, and the transition from non-syntactic to syntactic communication was an essential step in the evolution of human language. We aim to understand the evolutionary dynamics of this transition and to analyse how natural selection can guide it. Here we present a model for the population dynamics of language evolution, define the basic reproductive ratio of words and calculate the maximum size of a lexicon. Syntax allows larger repertoires and the possibility to formulate messages that have not been learned beforehand. Nevertheless, according to our model natural selection can only favour the emergence of syntax if the number of required signals exceeds a threshold value. This result might explain why only humans evolved syntactic communication and hence complex language.
De-agglomeration and homogenisation of nanoparticles in coal tar pitch-based carbon materials
NASA Astrophysics Data System (ADS)
Gubernat, Maciej; Tomala, Janusz; Frohs, Wilhelm; Fraczek-Szczypta, Aneta; Blazewicz, Stanislaw
2016-03-01
The aim of the work was to characterise coal tar pitch (CTP) modified with selected nanoparticles as a binder precursor for the manufacture of synthetic carbon materials. Different factors influencing the preliminary preparative steps in the preparation of homogenous nanoparticle/CTP composition were studied. Graphene flakes, carbon black and nano-sized silicon carbide were used to modify CTP. Prior to introducing them into liquid CTP, nanoparticles were subjected to sonication. Various dispersants were used to prepare the suspensions, i.e. water, ethanol, dimethylformamide (DMF) and N-methylpyrrolidone (NMP). The results showed that proper dispersant selection is one of the most important factors influencing the de-agglomeration process of nanoparticles. DMF and NMP were found to be effective dispersants for the preparation of homogenous nanoparticle-containing suspensions. The presence of SiC and carbon black nanoparticles in the liquid pitch during heat treatment up to 2000 °C leads to the inhibition of crystallite growth in carbon residue.
Ou, Yang; Lv, Chang-Jiang; Yu, Wei; Mao, Zheng-Wei; Wan, Ling-Shu; Xu, Zhi-Kang
2014-12-24
Thin perforated membranes with ordered pores are ideal barriers for high-resolution and high-efficiency selective transport and separation of biological species. However, for self-assembled thin membranes with a thickness less than several micrometers, an additional step of transferring the membranes onto porous supports is generally required. In this article, we present a facile transfer-free strategy for fabrication of robust perforated composite membranes via the breath figure process, and for the first time, demonstrate the application of the membranes in high-resolution cell separation of yeasts and lactobacilli without external pressure, achieving almost 100% rejection of yeasts and more than 70% recovery of lactobacilli with excellent viability. The avoidance of the transfer step simplifies the fabrication procedure of composite membranes and greatly improves the membrane homogeneity. Moreover, the introduction of an elastic triblock copolymer increases the interfacial strength between the membrane and the support, and allows the preservation of composite membranes in a dry state. Such perforated ordered membranes can also be applied in other size-based separation systems, enabling new opportunities in bioseparation and biosensors.
Enhanced production of lovastatin by Omphalotus olearius (DC.) Singer in solid state fermentation.
Atlı, Burcu; Yamaç, Mustafa; Yıldız, Zeki; Isikhuemnen, Omoanghe S
2015-01-01
Although lovastatin production has been reported for different microorganism species, there is limited information about lovastatin production by basidiomycetes. The optimization of culture parameters that enhance lovastatin production by Omphalotus olearius OBCC 2002 was investigated, using statistically based experimental designs under solid state fermentation. The Plackett-Burman design was used in the first step to test the relative importance of the variables affecting production of lovastatin. The amount and particle size of barley were identified as the most influential variables. In the second step, the interactive effects of these selected variables were studied with a full factorial design. A maximum lovastatin yield of 139.47 mg/g substrate was achieved by fermentation of 5 g of barley with a particle diameter of 1-2 mm at 28°C. This study showed that O. olearius OBCC 2002 has a high capacity for lovastatin production, which could be enhanced by using solid state fermentation with novel and cost-effective substrates such as barley. Copyright © 2013 Revista Iberoamericana de Micología. Published by Elsevier Espana. All rights reserved.
Physical pretreatment – woody biomass size reduction – for forest biorefinery
J.Y. Zhu
2011-01-01
Physical pretreatment of woody biomass or wood size reduction is a prerequisite step for further chemical or biochemical processing in forest biorefinery. However, wood size reduction is very energy intensive which differentiates woody biomass from herbaceous biomass for biorefinery. This chapter discusses several critical issues related to wood size reduction: (1)...
Study on characteristics of printed circuit board liberation and its crushed products.
Quan, Cui; Li, Aimin; Gao, Ningbo
2012-11-01
Recycling printed circuit board waste (PCBW) is an important issue in environmental protection and resource recovery. Mechanical and thermo-chemical methods are the two traditional recycling processes for PCBW. In the present research, a two-step crushing process combining a coarse-crushing step and a fine-pulverizing step was adopted, and the crushed products were then classified into seven size fractions with a standard sieve. The degree of liberation and the particle shape in the different size fractions were observed. Properties of the different size fractions were determined, including heating value and thermogravimetric, proximate, ultimate, and chemical analyses. The Rosin-Rammler model was applied to analyze the particle size distribution of the crushed material. The results indicated that complete liberation of metals from the PCBW was achieved at sizes below 0.59 mm, but the nonmetal particles in the smaller-than-0.15 mm fraction were liable to aggregate. Copper was the most prominent metal in PCBW and was mainly enriched in the 0.25-0.42 mm size fraction. The Rosin-Rammler equation adequately fit the particle size distribution data of crushed PCBW with a correlation coefficient of 0.9810. The results of heating value and proximate analysis revealed that the PCBW had a low heating value and high ash content. The combustion and pyrolysis processes of PCBW differed, with an obvious Cu oxidation peak in the combustion runs.
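A hedged sketch of the Rosin-Rammler fitting step described above, using hypothetical sieve sizes and cumulative passing fractions rather than the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def rosin_rammler(d, d_char, n):
    """Cumulative mass fraction passing size d (Rosin-Rammler form)."""
    return 1.0 - np.exp(-(d / d_char) ** n)

# Hypothetical sieve data (mm, cumulative mass fraction passing)
sieve_mm = np.array([0.15, 0.25, 0.42, 0.59, 0.83, 1.00, 2.00])
passing  = np.array([0.18, 0.31, 0.48, 0.62, 0.76, 0.83, 0.97])

popt, _ = curve_fit(rosin_rammler, sieve_mm, passing, p0=[0.5, 1.0])
pred = rosin_rammler(sieve_mm, *popt)
r = np.corrcoef(passing, pred)[0, 1]
print(f"characteristic size d' = {popt[0]:.3f} mm, spread n = {popt[1]:.2f}, correlation = {r:.4f}")
```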
Workshop II On Unsteady Separated Flow Proceedings
1988-07-28
NASA Technical Reports Server (NTRS)
Justus, C. G.
1987-01-01
The Global Reference Atmosphere Model (GRAM) is under continuous development and improvement. GRAM data were compared with Middle Atmosphere Program (MAP) predictions and with shuttle data. An important note: users should employ only step sizes in altitude that give vertical density gradients consistent with shuttle-derived density data. Using too small a vertical step size (finer than 1 km) will result in what appears to be unreasonably high values of density shears but is in reality noise in the model.
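The note about vertical step size can be illustrated with a toy finite-difference calculation (not part of GRAM): small relative noise in a density profile is amplified into large apparent density shears when the differencing step is much finer than 1 km.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 7000.0                                   # density scale height, m (assumed)
z = np.arange(60e3, 90e3, 200.0)             # altitude grid, 200 m spacing
rho_true = 1.225 * np.exp(-z / H)            # smooth profile
rho = rho_true * (1.0 + 0.005 * rng.standard_normal(z.size))  # 0.5% model noise

for step in (200.0, 1000.0):
    k = int(step / 200.0)
    g_noisy = (rho[k:] - rho[:-k]) / (z[k:] - z[:-k])          # finite-difference shear
    g_true  = (rho_true[k:] - rho_true[:-k]) / (z[k:] - z[:-k])
    rel_err = np.std((g_noisy - g_true) / g_true)
    print(f"step {step:6.0f} m: relative shear noise ~ {rel_err:.1%}")
```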
Gape-limitation, foraging tactics and prey size selectivity of two microcarnivorous species of fish.
Schmitt, Russell J; Holbrook, Sally J
1984-07-01
Patterns of prey size selectivity were quantified in the field for two species of marine microcarnivorous fish, Embiotoca jacksoni and Embiotoca lateralis (Embiotocidae), to test Scott and Murdoch's (1983) size spectrum hypothesis. Two mechanisms accounted for the observed selectivity: the size of a fish relative to its prey, and the type of foraging behavior used. Juvenile E. jacksoni were gape limited, and newborn individuals achieved highest selectivity for the smallest prey size by using a visual picking foraging strategy. As young E. jacksoni grew, highest preference shifted to the next larger prey sizes. When E. jacksoni reached adulthood, the principal mode of foraging changed from visual picking to relatively indiscriminate winnowing behavior. The shift in foraging behavior by adults was accompanied by a decline in overall preference for prey size; sizes were taken nearly in proportion to their relative abundance. Adult E. lateralis retained a visual picking strategy and achieved highest selectivity for the largest class of prey. These differences in selectivity patterns by adult fish were not explained by gape limitation, since adults of both species could ingest the largest prey items available to them. These results support Scott and Murdoch's (1983) hypothesis that the qualitative pattern of size selectivity depends largely on the range of available prey sizes relative to that which a predator can effectively harvest.
Exploring the Genetic Signature of Body Size in Yucatan Miniature Pig
Kim, Hyeongmin; Song, Ki Duk; Kim, Hyeon Jeong; Park, WonCheoul; Kim, Jaemin; Lee, Taeheon; Shin, Dong-Hyun; Kwak, Woori; Kwon, Young-jun; Sung, Samsun; Moon, Sunjin; Lee, Kyung-Tai; Kim, Namshin; Hong, Joon Ki; Eo, Kyung Yeon; Seo, Kang Seok; Kim, Girak; Park, Sungmoo; Yun, Cheol-Heui; Kim, Hyunil; Choi, Kimyung; Kim, Jiho; Lee, Woon Kyu; Kim, Duk-Kyung; Oh, Jae-Don; Kim, Eui-Soo; Cho, Seoae; Lee, Hak-Kyo; Kim, Tae-Hun; Kim, Heebal
2015-01-01
Since being domesticated about 10,000–12,000 years ago, domestic pigs (Sus scrofa domesticus) have been selected for traits of economic importance, in particular large body size. However, Yucatan miniature pigs have been selected for small body size to withstand high-temperature environments and for laboratory use. This renders the Yucatan miniature pig a valuable model for understanding the evolution of body size. We investigated the genetic signature of selection for body size in the Yucatan miniature pig. The phylogenetic distance of the Yucatan miniature pig was compared to other, large swine breeds (Yorkshire, Landrace, Duroc and wild boar). By estimating the XP-EHH statistic using re-sequencing data derived from 70 pigs, we were able to unravel the signatures of selection for body size. We found that selection has occurred both at the level of the organism and at the cellular level. Selection at the higher level includes feed intake, regulation of body weight and increase in mass, while selection at the molecular level includes cell cycle and cell proliferation. Positively selected genes probed by XP-EHH may provide insight into the docile character and innate immunity as well as the body size of the Yucatan miniature pig. PMID:25885114
Pomeroy, Jeremy; Brage, Søren; Curtis, Jeffrey M; Swan, Pamela D; Knowler, William C; Franks, Paul W
2011-04-27
The quantification of the relationships between walking and health requires that walking is measured accurately. We correlated different measures of step accumulation with body size, overall physical activity level, and glucose regulation. Participants were 25 male and 25 female American Indians without diabetes (age 20-34 years) in Phoenix, Arizona, USA. We assessed steps/day during 7 days of free living, simultaneously with three different monitors (Accusplit-AX120, MTI-ActiGraph, and Dynastream-AMP). We assessed total physical activity during free living with doubly labeled water combined with resting metabolic rate measured by expired-gas indirect calorimetry. Glucose tolerance was determined during an oral glucose tolerance test. Based on observed counts in the laboratory, the AMP was the most accurate device, followed by the MTI and the AX120, respectively. The estimated energy cost of 1000 steps per day was lower for the AX120 than for the MTI or AMP. The correlation between AX120-assessed steps/day and waist circumference was significantly higher than the correlation between AMP steps and waist circumference. The differences in steps per day between the AX120 and both the AMP and the MTI were significantly related to waist circumference. Between-monitor differences in step counts influence the observed relationship between walking and obesity-related traits.
Short-term Time Step Convergence in a Climate Model
Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...
2015-02-11
A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4 in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
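The convergence assessment described above amounts to fitting the slope of log(RMS difference) versus log(time step). A minimal sketch, with made-up RMSD values chosen only to illustrate a sub-first-order slope:

```python
import numpy as np

dt   = np.array([1800.0, 900.0, 450.0, 225.0, 112.5])   # time step sizes, s
rmsd = np.array([0.52, 0.40, 0.31, 0.23, 0.18])          # K, hypothetical values

# Least-squares slope of log(rmsd) vs log(dt) estimates the convergence order.
order, _ = np.polyfit(np.log(dt), np.log(rmsd), 1)
print(f"estimated convergence order ~ {order:.2f}")
```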
Dynamics of upper mantle rocks decompression melting above hot spots under continental plates
NASA Astrophysics Data System (ADS)
Perepechko, Yury; Sorokin, Konstantin; Sharapov, Victor
2014-05-01
A numerical 2D simulation of decompression melting above hot spots (HS) was carried out under the following conditions: an initial temperature distribution within the crust-mantle section was postulated; the thickness of the metasomatized lithospheric mantle was determined by the mantle rheology and the position of the upper asthenosphere boundary; the upper and lower boundaries were taken to be impermeable, with adhesion (no-slip) conditions and a prescribed temperature distribution (1400-2050°C); and the lateral boundaries imitated an infinite layer. The sizes, distribution, and symmetry of the hot spots and their maximum temperature were varied, from the thermodynamic conditions for the perovskite-majorite transition up to temperatures exceeding the transition temperature. The problem was solved numerically with a cell-vertex finite volume method for thermo-hydrodynamic problems. To improve convergence of the iterative process, under-relaxation was applied with a different relaxation parameter for each equation. A through-calculation method was used to speed up the computation for the two-layered upper mantle-lithosphere system. The computational domain was 700 x (2100-4900) km, and the time step for the study of asthenosphere dynamics was 0.15-0.65 Ma. The following factors controlling the size and melting degree of the convective upper mantle are shown: a) the initial temperature distribution along the upper mantle section, b) the size and symmetry of the HS, and c) the temperature excess within the HS above the temperature at the upper-lower mantle boundary, TB = 1500-2000°C with 5-15% deviation but not exceeding 2350°C. It is found that, in the presence of an HS, decompression melting initiates primitive mantle melting at TB > 1600°C. For a constant HS size, the influence of initial upper mantle heating on the asthenolens dimensions is controlled mainly by the degree of decompression melting. Thus, for an HS with a lateral size of 400 km, decompression melting appears at TB > 1600°C and HS temperature (THS) > 1900°C, with an asthenolens size of ~700 km. When THS = 2000°C, the maximum melting degree of the primitive mantle is near 40%. For TB > 1900°C, the maximum degree of melting can reach 100% with the same size of the decompression melting zone (700 km). We also examined decompression melting above hot spots with LHS = 100-780 km at TB = 1850-2100°C and a lithosphere thickness of 100 km. The asthenolens size (Lln) does not change substantially: Lln = 700 km at LHS = 100 km and Lln = 800 km at LHS = 780 km. For an asymmetric large HS, a region of advection develops above the HS maximum, forming an asymmetrical cell. The influence of lithospheric plate thickness on the appearance and evolution of the asthenolens above the HS was investigated for a model stepped profile at TB ≤ 1750°C, LHS = 100 km, and a maximum THS of 2350°C. As TB increases, the difference in Lln beneath the lithospheric steps evens out, while a certain difference remains in the melting degrees and in the time at which melting appears above the top of the HS. RFBR grant 12-05-00625.
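The under-relaxation device mentioned for the iterative solver is a generic numerical technique; a minimal sketch on a small linear system (not the authors' finite-volume solver) shows the idea of blending each new iterate with the previous one using a relaxation factor below one:

```python
import numpy as np

def jacobi_under_relaxed(A, b, omega=0.7, tol=1e-10, max_iter=10_000):
    """Jacobi iteration for A x = b with under-relaxation factor omega < 1."""
    D = np.diag(A)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - (A @ x - D * x)) / D              # plain Jacobi step
        x_relaxed = (1 - omega) * x + omega * x_new    # damped (under-relaxed) update
        if np.linalg.norm(x_relaxed - x, np.inf) < tol:
            return x_relaxed
        x = x_relaxed
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(jacobi_under_relaxed(A, b))
```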
Sariyar, Murat; Hoffmann, Isabell; Binder, Harald
2014-02-26
Molecular data, e.g. arising from microarray technology, are often used for predicting survival probabilities of patients. For multivariate risk prediction models on such high-dimensional data, there are established techniques that combine parameter estimation and variable selection. One big challenge is to incorporate interactions into such prediction models. In this feasibility study, we present building blocks for evaluating and incorporating interaction terms in high-dimensional time-to-event settings, especially for settings in which it is computationally too expensive to check all possible interactions. We use a boosting technique for estimation of effects and the following building blocks for pre-selecting interactions: (1) resampling, (2) random forests and (3) orthogonalization as a data pre-processing step. In a simulation study, the strategy that uses all building blocks is able to detect true main effects and interactions with high sensitivity in different kinds of scenarios. The main challenge is interactions composed of variables that do not represent main effects, but our findings are also promising in this regard. Results on real-world data illustrate that effect sizes of interactions frequently may not be large enough to improve prediction performance, even though the interactions are potentially of biological relevance. Screening interactions through random forests is feasible and useful when one is interested in finding relevant two-way interactions. The other building blocks also contribute considerably to an enhanced pre-selection of interactions. We determined the limits of interaction detection in terms of necessary effect sizes. Our study emphasizes the importance of making full use of existing methods in addition to establishing new ones.
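A hedged illustration of building block (2), screening candidate two-way interactions with a random forest. For simplicity the sketch uses a continuous outcome and scikit-learn's RandomForestRegressor instead of a time-to-event model; all variables, effect sizes, and data are synthetic.

```python
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 300, 20
X = rng.standard_normal((n, p))
# Synthetic truth: two main effects plus one interaction (features 3 x 7)
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 2.0 * X[:, 3] * X[:, 7] + rng.standard_normal(n)

# Build explicit product features for every pair and rank them by importance.
pairs = list(combinations(range(p), 2))
X_int = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(np.column_stack([X, X_int]), y)

imp = rf.feature_importances_[p:]          # importances of the product terms only
top = sorted(zip(imp, pairs), reverse=True)[:5]
for score, (i, j) in top:
    print(f"x{i} * x{j}: importance {score:.3f}")
```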
Importance of casein micelle size and milk composition for milk gelation.
Glantz, M; Devold, T G; Vegarud, G E; Lindmark Månsson, H; Stålhammar, H; Paulsson, M
2010-04-01
The economic output of the dairy industry is to a great extent dependent on the processing of milk into other milk-based products such as cheese. The yield and quality of cheese are dependent on both the composition and technological properties of milk. The objective of this study was to evaluate the importance and effects of casein (CN) micelle size and milk composition on milk gelation characteristics in order to evaluate the possibilities for enhancing gelation properties through breeding. Milk was collected on 4 sampling occasions at the farm level in winter and summer from dairy cows with high genetic merit, classified as elite dairy cows, of the Swedish Red and Swedish Holstein breeds. Comparisons were made with milk from a Swedish Red herd, a Swedish Holstein herd, and a Swedish dairy processor. Properties of CN micelles, such as their native and rennet-induced CN micelle size and their zeta-potential, were analyzed by photon correlation spectroscopy, and rennet-induced gelation characteristics, including gel strength, gelation time, and frequency sweeps, were determined. Milk parameters of the protein, lipid, and carbohydrate profiles as well as minerals were used to obtain correlations with native CN micelle size and gelation characteristics. Milk pH and protein, CN, and lactose contents were found to affect milk gelation. Smaller native CN micelles were shown to form stronger gels when poorly coagulating milk was excluded from the correlation analysis. In addition, milk pH correlated positively, whereas Mg and K correlated negatively with native CN micellar size. The milk from the elite dairy cows was shown to have good gelation characteristics. Furthermore, genetic progress in relation to CN micelle size was found for these cows as a correlated response to selection for the Swedish breeding objective if optimizing for milk gelation characteristics. The results indicate that selection for smaller native CN micelles and lower milk pH through breeding would enhance gelation properties and may thus improve the initial step in the processing of cheese. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Testing Students for Chapter 1 Eligibility: ECIA Chapter 1.
ERIC Educational Resources Information Center
Davis, Walter E.
This document summarizes the criteria for Chapter 1 eligibility, discusses a step-by-step selection procedure, used in the Austin Independent School District, explains the laws and regulations concerning how students are to be selected, emphasizes that special testing should be administered to students whose scores are clearly discrepant from…
Time-Delayed Two-Step Selective Laser Photodamage of Dye-Biomolecule Complexes
NASA Astrophysics Data System (ADS)
Andreoni, A.; Cubeddu, R.; de Silvestri, S.; Laporta, P.; Svelto, O.
1980-08-01
A scheme is proposed for laser-selective photodamage of biological molecules, based on time-delayed two-step photoionization of a dye molecule bound to the biomolecule. The validity of the scheme is experimentally demonstrated in the case of the dye Proflavine, bound to synthetic polynucleotides.
Tan, Swee Jin; Phan, Huan; Gerry, Benjamin Michael; Kuhn, Alexandre; Hong, Lewis Zuocheng; Min Ong, Yao; Poon, Polly Suk Yean; Unger, Marc Alexander; Jones, Robert C.; Quake, Stephen R.; Burkholder, William F.
2013-01-01
Library preparation for next-generation DNA sequencing (NGS) remains a key bottleneck in the sequencing process which can be relieved through improved automation and miniaturization. We describe a microfluidic device for automating laboratory protocols that require one or more column chromatography steps and demonstrate its utility for preparing Next Generation sequencing libraries for the Illumina and Ion Torrent platforms. Sixteen different libraries can be generated simultaneously with significantly reduced reagent cost and hands-on time compared to manual library preparation. Using an appropriate column matrix and buffers, size selection can be performed on-chip following end-repair, dA tailing, and linker ligation, so that the libraries eluted from the chip are ready for sequencing. The core architecture of the device ensures uniform, reproducible column packing without user supervision and accommodates multiple routine protocol steps in any sequence, such as reagent mixing and incubation; column packing, loading, washing, elution, and regeneration; capture of eluted material for use as a substrate in a later step of the protocol; and removal of one column matrix so that two or more column matrices with different functional properties can be used in the same protocol. The microfluidic device is mounted on a plastic carrier so that reagents and products can be aliquoted and recovered using standard pipettors and liquid handling robots. The carrier-mounted device is operated using a benchtop controller that seals and operates the device with programmable temperature control, eliminating any requirement for the user to manually attach tubing or connectors. In addition to NGS library preparation, the device and controller are suitable for automating other time-consuming and error-prone laboratory protocols requiring column chromatography steps, such as chromatin immunoprecipitation. PMID:23894273
Effect of Pd surface structure on the activation of methyl acetate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Lijun; Xu, Ye
2011-01-01
The activation of methyl acetate (CH3COOCH3; MA) has been studied using periodic density functional theory calculations to probe the effect of Pd surface structure on the selectivity in MA activation. The adsorption of MA, dehydrogenated derivatives, enolate (CH2COOCH3; ENL) and methylene acetate (CH3COOCH2; MeA), and several dissociation products (including acetate, acetyl, ketene, methoxy, formaldehyde, CO, C, O, and H); and C-H and C-O (mainly in the RCO-OR position) bond dissociation in MA, ENL, and MeA, are calculated on Pd(111) terrace, step, and kink; and on Pd(100) terrace and step. The adsorption of most species is not strongly affected going from (111)- to (100)-type surfaces, but is clearly enhanced by steps/kinks compared to the corresponding terrace. Going from terrace to step edge and from (111)- to (100)-type surfaces both stabilize the transition states of C-O bond dissociation steps. Going from terrace to step edge also stabilizes the transition states of C-H bond dissociation steps, but going from (111)- to (100)-type surfaces does not clearly do so. We propose that compared to the Pd(111) terrace, the Pd(100) terrace is more selective for C-O bond dissociation that is desirable for alcohol formation, whereas the Pd step edges are more selective for C-H bond dissociation.
Micromega IR, an infrared hyperspectral microscope for space exploration
NASA Astrophysics Data System (ADS)
Pilorget, C.; Bibring, J.-P.; Berthe, M.; Hamm, V.
2017-11-01
The coupling between imaging and spectrometry has proved to be one of the most promising ways to study planetary objects remotely [1][2]. The next step is to use this concept for in situ analyses. MicrOmega IR has been developed within this scope. It is an ultra-miniaturized near-infrared hyperspectral microscope dedicated to in situ analyses, selected to be part of the ESA/ExoMars rover and RKA/Phobos Grunt lander payloads. The goal of this instrument is to characterize the composition of samples at almost their grain size scale, in a nondestructive way. Coupled to the mapping information, it provides unique clues to trace back the history of the parent body (planet, satellite or small body) [3][4].
Laser-based nanoengineering of surface topographies for biomedical applications
NASA Astrophysics Data System (ADS)
Schlie, Sabrina; Fadeeva, Elena; Koroleva, Anastasia; Ovsianikov, Aleksandr; Koch, Jürgen; Ngezahayo, Anaclet; Chichkov, Boris. N.
2011-04-01
In this study, femtosecond laser systems were used for nanoengineering of special surface topographies in silicon and titanium. Besides the control of feature sizes, we demonstrated that laser structuring caused changes in material wettability due to a reduced surface contact area. These laser-engineered topographies were tested for their capability to control the cellular behavior of human fibroblasts, SH-SY5Y neuroblastoma cells, and MG-63 osteoblasts. We found that fibroblasts reduced cell growth on the structures, while the other cell types proliferated at the same rate. These findings make laser surface structuring very attractive for biomedical applications. Finally, to explain the results, the correlation between topography and the biophysics of cellular adhesion, which is the key step in selective cell control, is discussed.
RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing
NASA Astrophysics Data System (ADS)
Gui, Guan; Xu, Li; Adachi, Fumiyuki
2014-12-01
Nonlinear sparse sensing (NSS) techniques have been adopted for realizing compressive sensing in many applications such as radar imaging. Unlike the NSS, in this paper we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters, i.e., the reweighting factor, the regularization parameter, and the initial step size. First, based on the independence assumption, the Cramer-Rao lower bound (CRLB) is derived for performance comparison. In addition, a reweighting factor selection method is proposed for achieving robust estimation performance. Finally, to verify the algorithm, Monte Carlo-based computer simulations are given to show that the ASS achieves much better mean square error (MSE) performance than the NSS.
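A sketch of an RZA-NLMF-style update for a sparse system identification problem. The normalization and parameter values below are plausible choices for a stabilized least-mean-fourth update with a reweighted zero attractor, not necessarily the exact form used in the paper; the test channel is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 2000, 16
h_true = np.zeros(L)
h_true[[2, 9]] = [0.9, -0.5]                         # sparse channel to identify

x = rng.standard_normal(N)
mu, rho, eps_w, delta = 0.5, 5e-4, 10.0, 1e-8        # step size, attractor strength, reweighting, regularizer
w = np.zeros(L)

for n in range(L, N):
    xn = x[n - L + 1:n + 1][::-1]                    # regressor vector
    d = h_true @ xn + 0.01 * rng.standard_normal()   # noisy desired output
    e = d - w @ xn
    norm = xn @ xn
    # fourth-order (LMF) error term with a normalization for numerical stability
    w += mu * (e ** 3) * xn / (norm * (e ** 2 + norm) + delta)
    # reweighted zero attractor promotes sparsity of the estimate
    w -= rho * np.sign(w) / (1.0 + eps_w * np.abs(w))

print("mean-square deviation:", np.mean((w - h_true) ** 2))
```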
Detailed Primitive-Based 3d Modeling of Architectural Elements
NASA Astrophysics Data System (ADS)
Remondino, F.; Lo Buglio, D.; Nony, N.; De Luca, L.
2012-07-01
The article describes a pipeline, based on image data, for the 3D reconstruction of building façades or architectural elements and their subsequent modeling using geometric primitives. The approach overcomes some existing problems in modeling architectural elements and delivers compact, reality-based, textured 3D models useful for metric applications. For the 3D reconstruction, an open-source pipeline developed within the TAPENADE project is employed. In the subsequent modeling steps, the user manually selects an area containing an architectural element (capital, column, bas-relief, window tympanum, etc.), and the procedure then fits geometric primitives and computes disparity and displacement maps in order to tie visual and geometric information together in a light but detailed 3D model. Examples are reported and commented on.
Framework for Creating a Smart Growth Economic Development Strategy
This step-by-step guide can help small and mid-sized cities, particularly those that have limited population growth, areas of disinvestment, and/or a struggling economy, build a place-based economic development strategy.
Fedy, Bradley C.; O'Donnell, Michael; Bowen, Zachary H.
2015-01-01
Human impacts on wildlife populations are widespread and prolific, and understanding wildlife responses to human impacts is a fundamental component of wildlife management. The first step to understanding wildlife responses is the documentation of changes in wildlife population parameters, such as population size. Meaningful assessment of population changes in potentially impacted sites requires the establishment of monitoring at similar, nonimpacted, control sites. However, it is often difficult to identify appropriate control sites in wildlife populations. We demonstrated the use of Geographic Information System (GIS) data across large spatial scales to select biologically relevant control sites for population monitoring. Greater sage-grouse (Centrocercus urophasianus; hereafter, sage-grouse) are negatively affected by energy development, and monitoring of sage-grouse populations within energy development areas is necessary to detect population-level responses. We used population data (1995–2012) from an energy development area in Wyoming, USA, the Atlantic Rim Project Area (ARPA), and GIS data to identify control sites that were not impacted by energy development for population monitoring. Control sites were surrounded by similar habitat and were within similar climate areas to the ARPA. We developed nonlinear trend models for both the ARPA and control sites and compared long-term trends from the two areas. We found little difference between the ARPA and control site trends over time. This research demonstrated an approach for control site selection across large landscapes and can be used as a template for similar impact-monitoring studies. It is important to note that identification of changes in population parameters between control and treatment sites is only the first step in understanding the mechanisms that underlie those changes. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine
2011-03-01
International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
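For reference, the core nonparametric calculation such a tool performs can be sketched in a few lines: the 2.5th and 97.5th percentiles of the reference sample, with bootstrap 90% confidence intervals for each limit. The data here are simulated, not from a real reference population.

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.lognormal(mean=1.0, sigma=0.25, size=120)   # simulated reference sample

# Nonparametric reference interval (central 95%)
lower, upper = np.percentile(values, [2.5, 97.5])

# 90% bootstrap confidence intervals of each reference limit
boot = np.array([
    np.percentile(rng.choice(values, size=values.size, replace=True), [2.5, 97.5])
    for _ in range(2000)
])
ci_lower = np.percentile(boot[:, 0], [5, 95])
ci_upper = np.percentile(boot[:, 1], [5, 95])

print(f"reference interval: {lower:.2f} - {upper:.2f}")
print(f"90% CI of lower limit: {ci_lower.round(2)}, of upper limit: {ci_upper.round(2)}")
```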
NASA Astrophysics Data System (ADS)
Hou, Yanqing; Verhagen, Sandra; Wu, Jie
2016-12-01
Ambiguity Resolution (AR) is a key technique in GNSS precise positioning. In the case of weak models (i.e., low precision of data), however, the success rate of AR may be low, which may consequently introduce large errors into the baseline solution in cases of wrong fixing. Partial Ambiguity Resolution (PAR) has therefore been proposed, such that the baseline precision can be improved by fixing only a subset of ambiguities with a high success rate. This contribution proposes a new PAR strategy that selects the subset such that the expected precision gain is maximized among a set of pre-selected subsets, while at the same time the failure rate is controlled. These pre-selected subsets are expected to have the highest success rate among those of the same subset size. The strategy is called the Two-step Success Rate Criterion (TSRC), as it first tries to fix a relatively large subset with the fixed failure rate ratio test (FFRT) to decide on acceptance or rejection. In case of rejection, a smaller subset is fixed and validated by the ratio test so as to fulfill the overall failure rate criterion. It is shown how the method can be used in practice without introducing a large additional computational effort and, more importantly, how it can improve (or at least not degrade) the availability in terms of baseline precision compared to the classical Success Rate Criterion (SRC) PAR strategy, based on a simulation validation. In the simulation validation, significant improvements are obtained for single-GNSS on short baselines with dual-frequency observations. For dual-constellation GNSS, the improvement for single-frequency observations on short baselines is very significant, on average 68%. For medium to long baselines with dual-constellation GNSS, the average improvement is around 20-30%.
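The subset-selection idea behind PAR can be sketched with the standard bootstrapped success-rate bound, P_s = prod_i (2*Phi(1/(2*sigma_i)) - 1), computed from the conditional standard deviations of the decorrelated ambiguities; the largest subset meeting a required success rate is fixed and the rest stay float. The sigma values and threshold below are hypothetical, and this is only the success-rate ingredient, not the full TSRC procedure.

```python
import numpy as np
from scipy.stats import norm

def bootstrap_success_rate(sigmas):
    """Bootstrapped success rate for a set of conditional std devs (cycles)."""
    return np.prod(2.0 * norm.cdf(1.0 / (2.0 * np.asarray(sigmas))) - 1.0)

# Conditional std devs sorted from most to least precise (hypothetical values)
sigma_cond = np.array([0.04, 0.05, 0.07, 0.10, 0.16, 0.28, 0.45])
P0 = 0.995                                    # required subset success rate

subset_size = 0
for k in range(1, sigma_cond.size + 1):
    if bootstrap_success_rate(sigma_cond[:k]) >= P0:
        subset_size = k

print(f"fix the {subset_size} most precise ambiguities, "
      f"P_s = {bootstrap_success_rate(sigma_cond[:subset_size]):.4f}")
```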
Foellmer, Matthias W; Fairbairn, Daphne J
2005-02-01
Mate search plays a central role in hypotheses for the adaptive significance of extreme female-biased sexual size dimorphism (SSD) in animals. Spiders (Araneae) are the only free-living terrestrial taxon where extreme SSD is common. The "gravity hypothesis" states that small body size in males is favoured during mate search in species where males have to climb to reach females, because body length is inversely proportional to achievable speed on vertical structures. However, locomotive performance of males may also depend on relative leg length. Here we examine selection on male body size and leg length during mate search in the highly dimorphic orb-weaving spider Argiope aurantia, using a multivariate approach to distinguish selection targeted at different components of size. Further, we investigate the scaling relationships between male size and energy reserves, and the differential loss of reserves. Adult males do not feed while roving, and a size-dependent differential energy storage capacity may thus affect male performance during mate search. Contrary to predictions, large body size was favoured in one of two populations, and this was due to selection for longer legs. Male size was not under selection in the second population, but we detected direct selection for longer third legs. Males lost energy reserves during mate search, but this was independent of male size and storage capacity scaled isometrically with size. Thus, mate search is unlikely to lead to selection for small male size, but the hypothesis that relatively longer legs in male spiders reflect a search-adapted morphology is supported.
Skier triggering of backcountry avalanches with skilled route selection
NASA Astrophysics Data System (ADS)
Sinickas, Alexandra; Haegeli, Pascal; Jamieson, Bruce
2015-04-01
Jamieson (2009) provided numerical estimates for the baseline probabilities of triggering an avalanche by a backcountry skier making fresh tracks without skilled route selection as a function of the North American avalanche danger scale (i.e., hazard levels Low, Moderate, Considerable, High and Extreme). Using the results of an expert survey, he showed that triggering probabilities while skiing directly up, down or across a trigger zone without skilled route selection increase roughly by a factor of 10 with each step of the North American avalanche danger scale (i.e. hazard level). The objective of the present study is to examine the effect of skilled route selection on the relationship between triggering probability and hazard level. To assess this effect, we analysed avalanche hazard assessments as well as reports of skiing activity and triggering of avalanches from 11 Canadian helicopter and snowcat operations during two winters (2012-13 and 2013-14). These reports were submitted to the daily information exchange among Canadian avalanche safety operations, and reflect professional decision-making and route selection practices of guides leading groups of skiers. We selected all skier-controlled or accidentally triggered avalanches with a destructive size greater than size 1 according to the Canadian avalanche size classification, triggered by any member of a guided group (guide or guest). These operations forecast the avalanche hazard daily for each of three elevation bands: alpine, treeline and below treeline. In contrast to the 2009 study, an exposure was defined as a group skiing within any one of the three elevation bands, and consequently within a hazard rating, for the day (~4,300 ratings over two winters). For example, a group that skied below treeline (rated Moderate) and at treeline (rated Considerable) in one day would receive one count of exposure to Moderate hazard and one count of exposure to Considerable hazard. While the absolute values of triggering probability cannot be compared to the 2009 study because of different definitions of exposure, our preliminary results suggest that with skilled route selection the triggering probability is similar at all hazard levels, except for Extreme, for which there are few exposures. This means that the guiding teams of backcountry skiing operations effectively control the hazard from triggering avalanches through skilled route selection. Groups were exposed relatively evenly to Low hazard (1275 times or 29% of total exposure), Moderate hazard (1450 times or 33%) and Considerable hazard (1215 times or 28%). At higher levels, the exposure was reduced to roughly 380 times (9% of total exposure) to High hazard, and only 13 times (0.3%) to Extreme hazard. We assess the sensitivity of the results to some of our key assumptions.
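The per-hazard-level estimate reduces to triggered avalanches divided by exposures, with a binomial confidence interval. In the sketch below the exposure counts are taken from the abstract, while the avalanche counts per level are hypothetical placeholders, not the study's results.

```python
import numpy as np
from scipy.stats import beta

exposures = {"Low": 1275, "Moderate": 1450, "Considerable": 1215,
             "High": 380, "Extreme": 13}
triggered = {"Low": 3, "Moderate": 5, "Considerable": 4,
             "High": 1, "Extreme": 0}          # hypothetical counts

for level, n in exposures.items():
    k = triggered[level]
    p = k / n
    # Clopper-Pearson (exact) 95% interval for a binomial proportion
    lo = beta.ppf(0.025, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(0.975, k + 1, n - k)
    print(f"{level:12s}: p = {p:.4f} (95% CI {lo:.4f}-{hi:.4f})")
```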
Automatic rocks detection and classification on high resolution images of planetary surfaces
NASA Astrophysics Data System (ADS)
Aboudan, A.; Pacifici, A.; Murana, A.; Cannarsa, F.; Ori, G. G.; Dell'Arciprete, I.; Allemand, P.; Grandjean, P.; Portigliotti, S.; Marcer, A.; Lorenzoni, L.
2013-12-01
High-resolution images can be used to obtain rock locations and sizes on planetary surfaces. In particular, the rock size-frequency distribution is a key parameter to evaluate surface roughness, to investigate the geologic processes that formed the surface, and to assess the hazards related to spacecraft landing. The manual search for rocks on high-resolution images (even for small areas) can be very labor-intensive. An automatic or semi-automatic algorithm to identify rocks is mandatory to enable further processing, such as determining rock presence, size, height (by means of shadows) and spatial distribution over an area of interest. Accurate localization of rock and shadow contours is the key step in rock detection. An approach to contour detection based on morphological operators and statistical thresholding is presented in this work. The identified contours are then fitted using a proper geometric model of the rocks or shadows and used to estimate salient rock parameters (position, size, area, height). The performance of this approach has been evaluated both on images of a Martian analogue area in the Moroccan desert and on HiRISE images. Results have been compared with ground truth obtained by means of manual rock mapping and proved the effectiveness of the algorithm. The rock abundance and rock size-frequency distributions derived from selected HiRISE images have been compared with the results of similar analyses performed for the landing site certification of Mars landers (Viking, Pathfinder, MER, MSL) and with the available thermal data from IRTM and TES.
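A toy version of the detection idea (statistical thresholding plus morphological cleanup and connected-component labelling), not the authors' algorithm; the image, threshold, and pixel scale are synthetic assumptions.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.03, size=(512, 512))           # bland terrain background
for _ in range(40):                                     # sprinkle dark "shadow" patches
    r, c, s = rng.integers(20, 492), rng.integers(20, 492), rng.integers(2, 8)
    img[r - s:r + s, c - s:c + s] -= 0.25

mask = img < np.percentile(img, 2)                      # statistical threshold on darkness
mask = ndimage.binary_opening(mask, iterations=1)       # morphological cleanup of stray pixels
labels, n = ndimage.label(mask)                         # connected components = candidate blobs
areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))

pixel_scale = 0.25                                      # m/pixel, assumed
diameters = 2.0 * np.sqrt(areas / np.pi) * pixel_scale  # equivalent circular diameters
print(f"{n} candidate rocks; median apparent diameter {np.median(diameters):.2f} m")
```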
Jiang, S C; Zhang, X X
2005-12-01
A two-dimensional model was developed to simulate laser-induced interstitial thermotherapy (LITT) treatment procedures with temperature monitoring, accounting for the effects of dynamic changes in physical properties on tissue temperature and damage. A modified Monte Carlo method was used to simulate photon transport in tissue with a non-uniform optical property field, with the finite volume method used to solve the Pennes bioheat equation for the temperature distribution and the Arrhenius equation used to predict the extent of thermal damage. The laser light transport, the heat transfer, and the damage accumulation were calculated iteratively at each time step. The influences of different laser sources, different applicator sizes, and different irradiation modes on the final damage volume were analyzed to optimize the LITT treatment. The numerical results showed that the damage volume was smallest for the 1,064-nm laser, with much larger, similar damage volumes for the 980- and 850-nm lasers at normal blood perfusion rates. The damage volume was largest for the 1,064-nm laser, with significantly smaller, similar damage volumes for the 980- and 850-nm lasers with temporally interrupted blood perfusion. The numerical results also showed that variations in applicator size, laser power, heating duration and temperature monitoring range significantly affected the shapes and sizes of the thermal damage zones. The shapes and sizes of the thermal damage zones can be optimized by selecting different applicator sizes, laser powers, heating durations, temperature monitoring ranges, etc.
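The Arrhenius damage calculation referred to above can be sketched as a cumulative damage integral, Omega(t) = A * integral of exp(-Ea/(R T(t))) dt, with tissue treated as coagulated where Omega >= 1. The kinetic parameters and temperature history below are generic placeholder values, not the paper's.

```python
import numpy as np

A  = 3.1e98        # 1/s, frequency factor (generic literature-style value, assumed)
Ea = 6.28e5        # J/mol, activation energy (assumed)
R  = 8.314         # J/(mol K)

dt = 0.1                                           # time step, s
t  = np.arange(0.0, 600.0, dt)                     # 10 min exposure
T  = 310.0 + 20.0 * (1.0 - np.exp(-t / 60.0))      # K, synthetic heating curve

# Cumulative Arrhenius damage integral at each time step
omega = np.cumsum(A * np.exp(-Ea / (R * T)) * dt)
idx = np.argmax(omega >= 1.0)
print("damage threshold reached at t =",
      f"{t[idx]:.1f} s" if omega[-1] >= 1.0 else "not reached")
```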
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Zhanying; Key Laboratory for Anisotropy and Texture of Materials, Northeastern University, Shenyang 110819, China,; Zhao, Gang
2015-04-15
The effect of two-step homogenization treatments on the precipitation behavior of Al3Zr dispersoids was investigated by transmission electron microscopy (TEM) in 7150 alloys. Two-step treatments with the first step in the temperature range of 300–400 °C followed by the second step at 470 °C were applied during homogenization. Compared with the conventional one-step homogenization, both a finer particle size and a higher number density of Al3Zr dispersoids were obtained with two-step homogenization treatments. The most effective dispersoid distribution was attained using the first step held at 300 °C. In addition, the two-step homogenization minimized the precipitate free zones and greatly increased the number density of dispersoids near dendrite grain boundaries. The effect of two-step homogenization on the recrystallization resistance of 7150 alloys with different Zr contents was quantitatively analyzed using the electron backscattered diffraction (EBSD) technique. It was found that the improved dispersoid distribution through the two-step treatment can effectively inhibit the recrystallization process during post-deformation annealing for 7150 alloys containing 0.04–0.09 wt.% Zr, resulting in a remarkable reduction of the volume fraction and grain size of recrystallized grains. - Highlights: • Effect of two-step homogenization on Al3Zr dispersoids was investigated by TEM. • Finer and higher number of dispersoids obtained with two-step homogenization. • Minimized the precipitate free zones and improved the dispersoid distribution. • Recrystallization resistance with varying Zr content was quantified by EBSD. • Effectively inhibit the recrystallization through two-step treatments in 7150 alloy.
Design of a high definition imaging (HDI) analysis technique adapted to challenging environments
NASA Astrophysics Data System (ADS)
Laurent, Sophie Nathalie
2005-11-01
This dissertation describes a new comprehensive, flexible, highly automated and computationally robust approach for high definition imaging (HDI), a data acquisition technique for video-rate imaging through a turbulent atmosphere with telescopes not equipped with adaptive optics (AO). The HDI process, when applied to astronomical objects, involves recording a large number of images (10^3-10^5) from the Earth and, in post-processing mode, selecting the very best ones to create a "perfect-seeing" diffraction-limited image via a three-step process. First, image registration is performed to find the exact position of the object in each field, using a template similar in size and shape to the target. The next task is to select only higher-quality fields using a criterion based on a measure of the blur in a region of interest around that object. The images are then shifted and added together to create an effective time exposure under ideal observing conditions. The last step's objective is to remove residual distortions in the image caused by the atmosphere and the optical equipment, using a point spread function (PSF) and a technique called "l1 regularization" that has been adapted to this type of environment. In order to study the tenuous sodium atmospheres around solar system bodies, the three-step HDI procedure is applied first in the white-light domain (695-950 nm), where the Signal-to-Noise Ratio (SNR) of the images is high, resulting in an image with a sharp limb. Then the known selection and registration results are mapped to the simultaneously recorded spectral data (sodium lines: 589 and 589.6 nm), where the lower-SNR images cannot support independent registration and selection. Science results can then be derived from this spectral study to understand the structure of the atmospheres of moons and planets. This dissertation's contribution to space physics deals with locating the source of sodium escaping from Jupiter's moon Io. The results show, for the first time, that the source region is not homogeneously distributed around the small moon, but concentrated on the side of its orbital motion. This identifies for modelers the physical mechanisms taking place around the most volcanic moon in the solar system.
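A compact sketch of the select/register/shift-and-add core of such a pipeline (not the dissertation's actual implementation): frames are ranked by a simple sharpness proxy, the best fraction is registered to a template by FFT cross-correlation, then shifted and summed. The frame cube, the peak-intensity metric, and the keep fraction are assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage

def shift_and_add(frames, keep_fraction=0.05):
    # Sharpness proxy: brightest pixel of each frame
    sharpness = frames.max(axis=(1, 2))
    n_keep = max(1, int(keep_fraction * len(frames)))
    best = frames[np.argsort(sharpness)[-n_keep:]]

    template = best[-1]                                # sharpest frame as template
    stacked = np.zeros_like(template, dtype=float)
    for frame in best:
        # Integer-pixel registration via the cross-correlation peak
        corr = np.fft.ifft2(np.fft.fft2(template) * np.conj(np.fft.fft2(frame))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
        dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
        stacked += ndimage.shift(frame, (dy, dx), order=1, mode="nearest")
    return stacked / n_keep

# Synthetic cube of noisy frames containing a wandering point source
rng = np.random.default_rng(0)
frames = rng.normal(0.0, 0.05, size=(200, 64, 64))
yy, xx = np.mgrid[:64, :64]
for f in frames:
    cy, cx = rng.normal(32, 1.5, size=2)
    f += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))
result = shift_and_add(frames)
```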
Genomic analysis of morphometric traits in bighorn sheep using the Ovine Infinium® HD SNP BeadChip.
Miller, Joshua M; Festa-Bianchet, Marco; Coltman, David W
2018-01-01
Elucidating the genetic basis of fitness-related traits is a major goal of molecular ecology. Traits subject to sexual selection are particularly interesting, as non-random mate choice should deplete genetic variation and thereby their evolutionary benefits. We examined the genetic basis of three sexually selected morphometric traits in bighorn sheep (Ovis canadensis): horn length, horn base circumference, and body mass. These traits are of specific concern in bighorn sheep as artificial selection through trophy hunting opposes sexual selection. Specifically, horn size determines trophy status and, in most North American jurisdictions, whether an individual can be legally harvested. Using 7,994-9,552 phenotypic measures from the long-term individual-based study at Ram Mountain (Alberta, Canada), we first showed that all three traits are heritable (h2 = 0.15-0.23). We then conducted a genome-wide association study (GWAS) utilizing a set of 3,777 SNPs typed in 76 individuals using the Ovine Infinium® HD SNP BeadChip. We found a suggestive association for body mass at a single locus (OAR9_91647990). The absence of strong associations with SNPs suggests that the traits are likely polygenic. These results represent a step forward in characterizing the genetic architecture of fitness-related traits in sexually dimorphic ungulates.
Li, Pu; Weng, Linlu; Niu, Haibo; Robinson, Brian; King, Thomas; Conmy, Robyn; Lee, Kenneth; Liu, Lei
2016-12-15
This study was aimed at testing the applicability of modified Weber number scaling with Alaska North Slope (ANS) crude oil, and developing a Reynolds number scaling approach for oil droplet size prediction for high viscosity oils. Dispersant to oil ratio and empirical coefficients were also quantified. Finally, a two-step Rosin-Rammler scheme was introduced for the determination of droplet size distribution. This new approach appeared more advantageous in avoiding the inconsistency in interfacial tension measurements, and consequently delivered concise droplet size prediction. Calculated and observed data correlated well based on Reynolds number scaling. The relation indicated that chemical dispersant played an important role in reducing the droplet size of ANS under different seasonal conditions. The proposed Reynolds number scaling and two-step Rosin-Rammler approaches provide a concise, reliable way to predict droplet size distribution, supporting decision making in chemical dispersant application during an offshore oil spill. Copyright © 2016 Elsevier Ltd. All rights reserved.
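As a minimal illustration of the distribution form underlying the droplet-size scheme above (the paper's two-step Rosin-Rammler procedure itself is not reproduced here), the sketch below evaluates a Rosin-Rammler cumulative volume fraction; the characteristic size d63 and the spread exponent n are assumed values.

```python
# Rosin-Rammler droplet-size distribution sketch (illustrative parameters).
import numpy as np

def rosin_rammler_cdf(d, d63, n):
    """Cumulative volume fraction of droplets smaller than diameter d."""
    return 1.0 - np.exp(-(d / d63) ** n)

d = np.linspace(1.0, 500.0, 50)             # droplet diameters in microns (assumed)
cdf = rosin_rammler_cdf(d, d63=150.0, n=1.8)  # d63 and n would come from the scaling fit
```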
Control of Alginate Core Size in Alginate-Poly (Lactic-Co-Glycolic) Acid Microparticles
NASA Astrophysics Data System (ADS)
Lio, Daniel; Yeo, David; Xu, Chenjie
2016-01-01
Core-shell alginate-poly (lactic-co-glycolic) acid (PLGA) microparticles are potential candidates to improve hydrophilic drug loading while facilitating controlled release. This report studies the influence of the alginate core size on the overall size and drug release profile of alginate-PLGA microparticles. Microparticles are synthesized through double-emulsion fabrication via concurrent ionotropic gelation and solvent extraction. The alginate core size is approximately 10, 50, or 100 μm when the first-step emulsification method is homogenization, vortexing, or magnetic stirring, respectively. The second-step emulsification for all three conditions is performed with magnetic stirring. Interestingly, although the alginate core has different sizes, the alginate-PLGA microparticle diameter does not change. However, the drug release profiles are dramatically different for microparticles comprising different-sized alginate cores. Specifically, taking calcein as a model drug, microparticles containing the smallest alginate core (10 μm) show the slowest release over a period of 26 days, with burst release of less than 1%.
Mathew, Hanna; Kunde, Wilfried; Herbort, Oliver
2017-05-01
When someone grasps an object, the grasp depends on the intended object manipulation and usually facilitates it. If several object manipulation steps are planned, the first step has been reported to primarily determine the grasp selection. We address whether the grasp can be aligned to the second step, if the second step's requirements exceed those of the first step. Participants grasped and rotated a dial first by a small extent and then by various extents in the opposite direction, without releasing the dial. On average, when the requirements of the first and the second step were similar, participants mostly aligned the grasp to the first step. When the requirements of the second step were considerably higher, participants aligned the grasp to the second step, even though the first step still had a considerable impact. Participants employed two different strategies. One subgroup initially aligned the grasp to the first step and then ceased adjusting the grasp to either step. Another group also initially aligned the grasp to the first step and then switched to aligning it primarily to the second step. The data suggest that participants are more likely to switch to the latter strategy when they experienced more awkward arm postures. In summary, grasp selections for multi-step object manipulations can be aligned to the second object manipulation step, if the requirements of this step clearly exceed those of the first step and if participants have some experience with the task.
Liu, Dong; Wu, Lili; Li, Chunxiu; Ren, Shengqiang; Zhang, Jingquan; Li, Wei; Feng, Lianghuan
2015-08-05
Methylammonium lead halide perovskite solar cells have become very attractive because they can be prepared with low-cost, solution-processable technology, and their power conversion efficiency has increased from 3.9% to 20% in recent years. However, the high performance of perovskite photovoltaic devices depends on a complicated process for preparing compact perovskite films with large grain size. Herein, a new method is developed to achieve excellent CH3NH3PbI3-xClx films with fine morphology and crystallization, based on a one-step deposition and a two-step annealing process. This method includes spin-coating deposition of the perovskite films from a precursor solution of PbI2, PbCl2, and CH3NH3I at a 1:1:4 molar ratio in dimethylformamide (DMF), followed by post-deposition two-step annealing (TSA). The first annealing is a solvent-induced process in DMF that promotes migration and interdiffusion of the solvent-assisted precursor ions and molecules and realizes large grain growth. The second annealing is a thermally induced process that further improves the morphology and crystallization of the films. Compact perovskite films are successfully prepared with grain sizes up to 1.1 μm according to SEM observation. The PL decay lifetime and the optical energy gap for the film with two-step annealing are 460 ns and 1.575 eV, respectively, while they are 307 and 327 ns and 1.577 and 1.582 eV for the films annealed by one-step thermal and one-step solvent processes. On the basis of the TSA process, the photovoltaic devices exhibit a best efficiency of 14% under AM 1.5G irradiation (100 mW·cm⁻²).
Monte Carlo modeling of single-molecule cytoplasmic dynein.
Singh, Manoranjan P; Mallik, Roop; Gross, Steven P; Yu, Clare C
2005-08-23
Molecular motors are responsible for active transport and organization in the cell, underlying an enormous number of crucial biological processes. Dynein is more complicated in its structure and function than other motors. Recent experiments have found that, unlike other motors, dynein can take different size steps along microtubules depending on load and ATP concentration. We use Monte Carlo simulations to model the molecular motor function of cytoplasmic dynein at the single-molecule level. The theory relates dynein's enzymatic properties to its mechanical force production. Our simulations reproduce the main features of recent single-molecule experiments that found a discrete distribution of dynein step sizes, depending on load and ATP concentration. The model reproduces the large steps found experimentally under high ATP and no load by assuming that the ATP binding affinities at the secondary sites decrease as the number of ATP bound to these sites increases. Additionally, to capture the essential features of the step-size distribution at very low ATP concentration and no load, the ATP hydrolysis of the primary site must be dramatically reduced when none of the secondary sites have ATP bound to them. We make testable predictions that should guide future experiments related to dynein function.
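A toy Monte Carlo in the spirit of this abstract can make the load and ATP dependence of step-size selection concrete. The sketch below is not the authors' kinetic model: the step-size set, the weighting function, and all parameters are assumptions chosen only so that high load and low ATP favor small steps.

```python
# Toy step-size selection Monte Carlo (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
STEP_SIZES = np.array([8.0, 16.0, 24.0, 32.0])   # nm, discrete step sizes

def step_probabilities(load_pN, atp_mM, f_scale=1.0, k_atp=0.05):
    atp_factor = atp_mM / (atp_mM + k_atp)            # saturating ATP dependence (assumed)
    load_bias = np.exp(-np.arange(4) * load_pN * f_scale)  # load penalizes larger steps
    w = load_bias * (atp_factor ** np.arange(4))      # larger steps "need" more bound ATP
    return w / w.sum()

def simulate_run(n_steps, load_pN, atp_mM):
    return rng.choice(STEP_SIZES, size=n_steps, p=step_probabilities(load_pN, atp_mM))

print(simulate_run(1000, load_pN=0.0, atp_mM=1.0).mean())    # broad mix of step sizes
print(simulate_run(1000, load_pN=1.0, atp_mM=0.005).mean())  # predominantly 8 nm steps
```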
Outward Bound to the Galaxies--One Step at a Time
ERIC Educational Resources Information Center
Ward, R. Bruce; Miller-Friedmann, Jaimie; Sienkiewicz, Frank; Antonucci, Paul
2012-01-01
Less than a century ago, astronomers began to unlock the cosmic distances within and beyond the Milky Way. Understanding the size and scale of the universe is a continuing, step-by-step process that began with the remarkably accurate measurement of the distance to the Moon made by early Greeks. In part, the authors have ITEAMS (Innovative…
Impaired Response Selection During Stepping Predicts Falls in Older People-A Cohort Study.
Schoene, Daniel; Delbaere, Kim; Lord, Stephen R
2017-08-01
Response inhibition, an important executive function, has been identified as a risk factor for falls in older people. This study investigated whether step tests that include different levels of response inhibition differ in their ability to predict falls and whether such associations are mediated by measures of attention, speed, and/or balance. A cohort study with a 12-month follow-up was conducted in community-dwelling older people without major cognitive and mobility impairments. Participants underwent 3 step tests: (1) choice stepping reaction time (CSRT) requiring rapid decision making and step initiation; (2) inhibitory choice stepping reaction time (iCSRT) requiring additional response inhibition and response-selection (go/no-go); and (3) a Stroop Stepping Test (SST) under congruent and incongruent conditions requiring conflict resolution. Participants also completed tests of processing speed, balance, and attention as potential mediators. Ninety-three of the 212 participants (44%) fell in the follow-up period. Of the step tests, only components of the iCSRT task predicted falls in this time with the relative risk per standard deviation for the reaction time (iCSRT-RT) = 1.23 (95%CI = 1.10-1.37). Multiple mediation analysis indicated that the iCSRT-RT was independently associated with falls and not mediated through slow processing speed, poor balance, or inattention. Combined stepping and response inhibition as measured in a go/no-go test stepping paradigm predicted falls in older people. This suggests that integrity of the response-selection component of a voluntary stepping response is crucial for minimizing fall risk. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Local neutral networks help maintain inaccurately replicating ribozymes.
Szilágyi, András; Kun, Ádám; Szathmáry, Eörs
2014-01-01
The error threshold of replication limits the selectively maintainable genome size against recurrent deleterious mutations for most fitness landscapes. In the context of RNA replication a distinction between the genotypic and the phenotypic error threshold has been made; where the latter concerns the maintenance of secondary structure rather than sequence. RNA secondary structure is treated as a proxy for function. The phenotypic error threshold allows higher per digit mutation rates than its genotypic counterpart, and is known to increase with the frequency of neutral mutations in sequence space. Here we show that the degree of neutrality, i.e. the frequency of nearest-neighbour (one-step) neutral mutants is a remarkably accurate proxy for the overall frequency of such mutants in an experimentally verifiable formula for the phenotypic error threshold; this we achieve by the full numerical solution for the concentration of all sequences in mutation-selection balance up to length 16. We reinforce our previous result that currently known ribozymes could be selectively maintained by the accuracy known from the best available polymerase ribozymes. Furthermore, we show that in silico stabilizing selection can increase the mutational robustness of ribozymes due to the fact that they were produced by artificial directional selection in the first place. Our finding offers a better understanding of the error threshold and provides further insight into the plausibility of an ancient RNA world.
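The basic arithmetic behind the genotypic and phenotypic thresholds can be illustrated as follows. This is the textbook approximation, not the paper's full numerical solution: assuming a fraction `lam` of single-digit mutations are phenotypically neutral, the phenotype survives replication with probability (1 - mu*(1-lam))^L, and the condition Q*sigma > 1 gives the maximum per-digit error rate.

```python
# Back-of-the-envelope error-threshold estimate (standard approximation, assumed values).
def max_error_rate(L, sigma, lam=0.0):
    """Approximate per-digit error threshold for sequence length L,
    superiority sigma, and degree of neutrality lam."""
    return (1.0 - sigma ** (-1.0 / L)) / (1.0 - lam)

print(max_error_rate(L=50, sigma=5))            # genotypic threshold (lam = 0)
print(max_error_rate(L=50, sigma=5, lam=0.5))   # neutrality roughly doubles the tolerance
```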
Phase and crystallite size analysis of (Ti1-xMox)C-(Ni,Cr) cermet obtained by mechanical alloying
NASA Astrophysics Data System (ADS)
Suryana, Anis, Muhammad; Manaf, Azwar
2018-04-01
In this paper, we report the phase and crystallite size analysis of (Ti1-xMox)C-(Ni,Cr) cermet with x = 0-0.5 obtained by mechanical alloying of Ti, Mo, Ni, Cr and C elemental powders using a high-energy shaker ball mill under wet conditions for 10 hours. The process used toluene as the process control agent and the ball-to-mass ratio was 10:1. The mechanically milled powder was then consolidated and subsequently heated at a temperature of 850°C for 2 hours under an argon flow to prevent oxidation. The product was characterized by X-ray diffraction (XRD) and a scanning electron microscope equipped with an energy-dispersive analyzer. The results show that, by selecting appropriate conditions during the mechanical alloying process, metastable Ti-Ni-Cr-C powders could be obtained. The powder then allowed the in situ synthesis of TiC-(Ni,Cr) cermet, which took place during the high-temperature exposure applied in the reactive sintering step. The addition of molybdenum shifted the TiC XRD peaks to slightly higher angles, indicating that molybdenum dissolved in the TiC phase. The crystallite size distribution of TiC is discussed in the report, showing that the mean size decreased with the addition of molybdenum.
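For the crystallite-size side of such an XRD analysis, a quick sanity check is the Scherrer estimate from peak broadening. The sketch below is a standard back-of-the-envelope calculation, not the size-distribution method used in the paper; the peak position, FWHM, and Cu Kα wavelength are assumed example values, and instrumental broadening is ignored.

```python
# Scherrer crystallite-size estimate (illustrative inputs, instrumental broadening ignored).
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size in nm from a peak at 2θ with full width at half maximum FWHM (degrees)."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

print(scherrer_size(two_theta_deg=41.7, fwhm_deg=0.6))  # hypothetical TiC-like peak
```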
Luchini, Alessandra; Geho, David H.; Bishop, Barney; Tran, Duy; Xia, Cassandra; Dufour, Robert; Jones, Clint; Espina, Virginia; Patanarut, Alexis; Zhu, Weidong; Ross, Mark; Tessitore, Alessandra; Petricoin, Emanuel; Liotta, Lance A.
2010-01-01
Disease-associated blood biomarkers exist in exceedingly low concentrations within complex mixtures of high-abundance proteins such as albumin. We have introduced an affinity bait molecule into N-isopropylacrylamide to produce a particle that performs three independent functions within minutes, in one step, in solution: a) molecular size sieving, b) affinity capture of all solution-phase target molecules, and c) complete protection of harvested proteins from enzymatic degradation. The captured analytes can be readily electroeluted for analysis. PMID:18076201
Height of a faceted macrostep for sticky steps in a step-faceting zone
NASA Astrophysics Data System (ADS)
Akutsu, Noriko
2018-02-01
The driving-force dependence of the surface velocity, the average height of faceted merged steps, the terrace-surface slope, and the elementary step velocity are studied using the Monte Carlo method in the nonequilibrium steady state. The Monte Carlo study is based on a lattice model, the restricted solid-on-solid model with point-contact-type step-step attraction (p-RSOS model). The main focus of this paper is the change of the "kink density" on the vicinal surface. The temperature is selected to be in the step-faceting zone [N. Akutsu, AIP Adv. 6, 035301 (2016), 10.1063/1.4943400], where the vicinal surface is surrounded by the (001) terrace and the (111) faceted step at equilibrium. Long-time simulations are performed at this temperature to obtain steady states for the different driving forces that influence the growth/recession of the surface. A Wulff figure of the p-RSOS model is produced through the anomalous surface tension calculated using the density-matrix renormalization group method. The characteristics of the faceted macrostep profile at equilibrium are classified with respect to the connectivity of the surface tension. This surface-tension connectivity also leads to a faceting diagram, where the separated areas are, respectively, classified as a Gruber-Mullins-Pokrovsky-Talapov zone, a step droplet zone, and a step-faceting zone. Although the p-RSOS model is a simplified model, it shows a wide variety of dynamics in the step-faceting zone. There are four characteristic driving forces: Δμ_y, Δμ_f, Δμ_co, and Δμ_R. When the absolute value of the driving force |Δμ| is smaller than Max[Δμ_y, Δμ_f], step attachment-detachment is inhibited, and the vicinal surface consists of (001) terraces and the (111) side surfaces of the faceted macrosteps. For Max[Δμ_y, Δμ_f] < |Δμ| < Δμ_co, the surface grows/recedes intermittently through two-dimensional (2D) heterogeneous nucleation at the facet edge of the macrostep. For Δμ_co < |Δμ| < Δμ_R, the surface grows/recedes with the successive attachment-detachment of steps to/from a macrostep. When |Δμ| exceeds Δμ_R, the macrostep vanishes and the surface roughens kinetically. Classical 2D heterogeneous multinucleation was determined to be valid with slight modifications, based on the Monte Carlo results for the step velocity and the change in the surface slope of the "terrace". The finite-size effects were also found to be distinctive near equilibrium.
Jewett, Ethan M.; Steinrücken, Matthias; Song, Yun S.
2016-01-01
Many approaches have been developed for inferring selection coefficients from time series data while accounting for genetic drift. These approaches have been motivated by the intuition that properly accounting for the population size history can significantly improve estimates of selective strengths. However, the improvement in inference accuracy that can be attained by modeling drift has not been characterized. Here, by comparing maximum likelihood estimates of selection coefficients that account for the true population size history with estimates that ignore drift by assuming allele frequencies evolve deterministically in a population of infinite size, we address the following questions: how much can modeling the population size history improve estimates of selection coefficients? How much can mis-inferred population sizes hurt inferences of selection coefficients? We conduct our analysis under the discrete Wright–Fisher model by deriving the exact probability of an allele frequency trajectory in a population of time-varying size and we replicate our results under the diffusion model. For both models, we find that ignoring drift leads to estimates of selection coefficients that are nearly as accurate as estimates that account for the true population history, even when population sizes are small and drift is high. This result is of interest because inference methods that ignore drift are widely used in evolutionary studies and can be many orders of magnitude faster than methods that account for population sizes. PMID:27550904
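The "ignore drift" estimator discussed above can be sketched very simply: evolve the allele frequency deterministically under selection and choose the selection coefficient that maximizes the binomial likelihood of the sampled counts. The haploid recursion, sampling scheme, and data below are illustrative assumptions, not the paper's Wright-Fisher or diffusion treatment.

```python
# Deterministic ("no drift") selection-coefficient estimate (assumed model and data).
import numpy as np
from scipy import stats

def deterministic_traj(p0, s, n_gen):
    """Haploid selection recursion: p' = p(1+s) / (1 + s p)."""
    p = np.empty(n_gen + 1); p[0] = p0
    for t in range(n_gen):
        p[t + 1] = p[t] * (1 + s) / (1 + s * p[t])
    return p

def loglik(s, p0, counts, sample_sizes):
    p = deterministic_traj(p0, s, len(counts) - 1)
    return stats.binom.logpmf(counts, sample_sizes, np.clip(p, 1e-9, 1 - 1e-9)).sum()

# Hypothetical time series: derived-allele counts in samples of 50 chromosomes,
# taken once per generation.
counts = np.array([10, 14, 19, 26, 31]); sizes = np.full(5, 50)
grid = np.linspace(-0.2, 0.5, 141)
s_hat = grid[np.argmax([loglik(s, counts[0] / sizes[0], counts, sizes) for s in grid])]
print(s_hat)
```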
Establishing intensively cultured hybrid poplar plantations for fuel and fiber.
Edward Hansen; Lincoln Moore; Daniel Netzer; Michael Ostry; Howard Phipps; Jaroslav Zavitkovski
1983-01-01
This paper describes a step-by-step procedure for establishing commercial-size, intensively cultured plantations of hybrid poplar and summarizes the state of knowledge developed during 10 years of field research at Rhinelander, Wisconsin.
van Geest, Geert; Voorrips, Roeland E; Esselink, Danny; Post, Aike; Visser, Richard Gf; Arens, Paul
2017-08-07
Cultivated chrysanthemum is an outcrossing hexaploid (2n = 6× = 54) with a disputed mode of inheritance. In this paper, we present a single nucleotide polymorphism (SNP) selection pipeline that was used to design an Affymetrix Axiom array with 183 k SNPs from RNA sequencing data (1). With this array, we genotyped four bi-parental populations (with sizes of 405, 53, 76 and 37 offspring plants respectively), and a cultivar panel of 63 genotypes. Further, we present a method for dosage scoring in hexaploids from signal intensities of the array based on mixture models (2) and validation of selection steps in the SNP selection pipeline (3). The resulting genotypic data is used to draw conclusions on the mode of inheritance in chrysanthemum (4), and to make an inference on allelic expression bias (5). With use of the mixture model approach, we successfully called the dosage of 73,936 out of 183,130 SNPs (40.4%) that segregated in any of the bi-parental populations. To investigate the mode of inheritance, we analysed markers that segregated in the large bi-parental population (n = 405). Analysis of segregation of duplex x nulliplex SNPs resulted in evidence for genome-wide hexasomic inheritance. This evidence was substantiated by the absence of strong linkage between markers in repulsion, which indicated absence of full disomic inheritance. We present the success rate of SNP discovery out of RNA sequencing data as affected by different selection steps, among which SNP coverage over genotypes and use of different types of sequence read mapping software. Genomic dosage highly correlated with relative allele coverage from the RNA sequencing data, indicating that most alleles are expressed according to their genomic dosage. The large population, genotyped with a very large number of markers, is a unique framework for extensive genetic analyses in hexaploid chrysanthemum. As starting point, we show conclusive evidence for genome-wide hexasomic inheritance.
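A minimal sketch of mixture-model dosage calling for one hexaploid SNP follows. It is an illustration only, not the pipeline's actual implementation: the B-allele signal fraction is clustered into seven components whose centres correspond to dosages 0-6, using scikit-learn's GaussianMixture with assumed settings.

```python
# Illustrative hexaploid dosage calling from array signal fractions (assumed settings).
import numpy as np
from sklearn.mixture import GaussianMixture

def call_dosage(theta):
    """theta: per-sample B-allele signal fraction in [0, 1] for one SNP."""
    means_init = (np.arange(7) / 6.0).reshape(-1, 1)        # expected cluster centres
    gm = GaussianMixture(n_components=7, means_init=means_init,
                         covariance_type="spherical", random_state=0)
    gm.fit(theta.reshape(-1, 1))
    comp = gm.predict(theta.reshape(-1, 1))
    # map each fitted component to the nearest expected dosage
    dosage_of_comp = np.argmin(np.abs(gm.means_ - means_init.T), axis=1)
    return dosage_of_comp[comp]
```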
Huang, Chaonan; Li, Yun; Yang, Jiajia; Peng, Junyu; Jin, Jing; Dhanjai; Wang, Jincheng; Chen, Jiping
2017-10-27
The present work presents a simple and effective preparation of a novel mixed-mode anion-exchange (MAX) sorbent based on porous poly[2-(diethylamino)ethyl methacrylate-divinylbenzene] (poly(DEAEMA-DVB)) spherical particles synthesized by one-step Pickering emulsion polymerization. The poly(DEAEMA-DVB) particles were quaternized with 1,4-butanediol diglycidyl ether (BDDE) followed by triethylamine (TEA) via an epoxy-amine reaction to confer strong anion-exchange properties. The synthesized MAX sorbent was characterized by scanning electron microscopy, Fourier-transform infrared spectroscopy, nitrogen adsorption-desorption measurements and elemental analysis. The MAX sorbent possessed a regular spherical shape, a narrow diameter distribution (15-35 μm), and a high ion-exchange capacity (IEC) of 0.54 meq/g, with carbon and nitrogen contents of 80.3% and 1.62%, respectively. Compared to poly(DEAEMA-DVB), the MAX sorbent exhibited decreased S_BET (390.5 vs. 515.3 m² g⁻¹), pore volume (0.74 vs. 0.85 cm³ g⁻¹) and pore size (16.8 vs. 17.3 nm). Moreover, the change in N content on producing the MAX sorbent reveals a successful two-step quaternization, which is consistent with the high IEC. Finally, the MAX sorbent was successfully evaluated for selective isolation and purification of selected acidic pharmaceuticals (ketoprofen, KEP; naproxen, NAP; and ibuprofen, IBP) from neutral (hydrocortisone, HYC) and basic (carbamazepine, CAZ; amitriptyline, AMT) pharmaceuticals and other interferences in water samples using solid-phase extraction (SPE). An efficient analytical method based on the MAX-based mixed-mode SPE coupled with HPLC-UV was developed for highly selective extraction and cleanup of acidic KEP, NAP and IBP in spiked wastewater samples. The developed method exhibited good sensitivity (0.009-0.085 μg L⁻¹ limits of detection), satisfactory recoveries (82.1%-105.5%) and repeatability (relative standard deviation < 7.9%, n = 3). Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kashfuddoja, Mohammad; Prasath, R. G. R.; Ramji, M.
2014-11-01
In this work, the experimental characterization of a polymer matrix and a polymer-based carbon fiber reinforced composite laminate using a whole-field, non-contact digital image correlation (DIC) technique is presented. The properties are evaluated based on full-field data obtained from DIC measurements by performing a series of tests as per ASTM standards. The evaluated properties are compared with the results obtained from conventional testing and analytical models, and they are found to match closely. Further, the sensitivity of the material properties to the DIC parameters is investigated and their optimum values are identified. It is found that the subset size has more influence on the material properties than the step size, and the predicted optimum values for the matrix and composite materials are consistent with each other. The aspect ratio of the region of interest (ROI) chosen for correlation should match the aspect ratio of the camera resolution for better correlation. Also, an open cutout panel made of the same composite laminate is considered to demonstrate the sensitivity of the DIC parameters when predicting the complex strain field surrounding the hole. It is observed that the strain field surrounding the hole is much more sensitive to the step size than to the subset size. A lower step size produces a highly pixelated strain field that captures local strain sensitivity at the expense of computational time, along with a randomly scattered noisy pattern, whereas a higher step size mitigates the noisy pattern at the expense of losing detail in the data and can even alter the natural trend of the strain field, leading to erroneous maximum strain locations. Varying the subset size mainly has a smoothing effect, eliminating noise from the strain field while maintaining the details in the data without altering their natural trend. However, increasing the subset size significantly reduces the strain data at the hole edge due to discontinuity in the correlation. Also, the DIC results are compared with FEA predictions to ascertain suitable values of the DIC parameters for better accuracy.
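The subset-size/step-size trade-off has a simple quantitative side: the step size sets how densely correlation points sample the region of interest, while the subset size sets the spatial smoothing window. The sketch below only counts correlation points for assumed image and parameter values; it is not the DIC software's logic.

```python
# Counting DIC correlation points for illustrative ROI, subset, and step values.
def dic_grid_points(roi_width_px, roi_height_px, subset_px, step_px):
    """Number of correlation points that fit inside the region of interest."""
    nx = (roi_width_px - subset_px) // step_px + 1
    ny = (roi_height_px - subset_px) // step_px + 1
    return nx * ny

for step in (2, 5, 10):
    print(step, dic_grid_points(1000, 800, subset_px=31, step_px=step))
# smaller step -> many, noisier points; larger step -> fewer, smoother points
```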
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new distributed architectures for manufacturing systems have been proposed, aiming at realizing more flexible control structures. Much research has been carried out on distributed architectures for planning and control of manufacturing systems. However, human operators have not yet been considered as autonomous components of the distributed manufacturing systems. A real-time scheduling method is proposed in this research to select suitable combinations of human operators, resources, and jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select their preferred manufacturing processes, which they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out using the utility-value-based method and the dispatching-rule-based method proposed in previous research. Some case studies have been carried out to verify the effectiveness of the proposed method.
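The second step can be pictured as scoring every machine-job pairing and picking the best one. The abstract defers the utility-value method to earlier work, so the scoring function, weights, and data below are stand-in assumptions purely for illustration.

```python
# Illustrative utility-value pairing of machine tools and jobs (assumed scoring and data).
import itertools

machines = {"M1": {"speed": 1.2, "load": 0.3}, "M2": {"speed": 0.9, "load": 0.1}}
jobs = {"J1": {"due_in": 5.0, "work": 3.0}, "J2": {"due_in": 2.0, "work": 1.5}}

def utility(machine, job):
    # shorter processing time and more urgent jobs score higher (illustrative weights)
    proc_time = job["work"] / machine["speed"]
    return 1.0 / proc_time + 2.0 / job["due_in"] - machine["load"]

best = max(itertools.product(machines, jobs),
           key=lambda mj: utility(machines[mj[0]], jobs[mj[1]]))
print(best)  # (machine, job) combination selected for the next machining process
```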
Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Wada, Takao
2014-07-01
The motion of a particle subject to thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem with thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step interval becomes very small when simulating the motion of a large particle. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with a particle in a collision event. A large time step interval can then be adopted by using the collision weight factor; it is about a million times longer than the conventional DSMC time step when the particle size is 1 μm, so the computation time is reduced by about a factor of one million. We simulate the graphite particle motion under thermophoretic force by DSMC-Neutrals (Particle-PLUS neutral module) with the above collision weight factor, where DSMC-Neutrals is commercial software adopting the DSMC method. The size and shape of the particle are 1 μm and a sphere, respectively. Particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result for the continuum limit is the same as Waldmann's result.
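The order of magnitude of the collision weight factor can be estimated from the kinetic-theory wall flux onto the particle surface. The sketch below is a rough estimate under assumed gas conditions and time step, not the simulation's internal bookkeeping.

```python
# Rough estimate of molecules striking a 1 μm sphere per time step (illustrative values).
import numpy as np

kB = 1.380649e-23  # J/K

def collisions_per_step(p_Pa, T_K, mass_kg, radius_m, dt_s):
    n = p_Pa / (kB * T_K)                               # gas number density
    v_mean = np.sqrt(8 * kB * T_K / (np.pi * mass_kg))  # mean molecular speed
    flux = 0.25 * n * v_mean                            # molecules per m^2 per s
    return flux * 4 * np.pi * radius_m**2 * dt_s

# Ar at 1 mTorr (~0.133 Pa), 300 K, 1 μm diameter particle, 1 μs time step
print(collisions_per_step(0.133, 300.0, 6.63e-26, 0.5e-6, 1e-6))
```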
Process, including PSA and membrane separation, for separating hydrogen from hydrocarbons
Baker, Richard W.; Lokhandwala, Kaaeid A.; He, Zhenjie; Pinnau, Ingo
2001-01-01
An improved process for separating hydrogen from hydrocarbons. The process includes a pressure swing adsorption step, a compression/cooling step and a membrane separation step. The membrane step relies on achieving a methane/hydrogen selectivity of at least about 2.5 under the conditions of the process.
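For the membrane step, a crude feel for what a methane/hydrogen selectivity of 2.5 implies can be obtained by assuming a negligible permeate-side pressure, in which case permeate mole fractions are roughly proportional to selectivity times feed mole fraction. The feed composition below is hypothetical and the calculation is not the patent's actual design basis.

```python
# Back-of-the-envelope permeate composition for a CH4-selective membrane (assumed feed).
def permeate_composition(feed_fracs, selectivities):
    raw = {k: feed_fracs[k] * selectivities[k] for k in feed_fracs}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

# hypothetical feed: 30% hydrogen, 70% methane; methane/hydrogen selectivity of 2.5
print(permeate_composition({"H2": 0.30, "CH4": 0.70}, {"H2": 1.0, "CH4": 2.5}))
```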
A method for tailoring the information content of a software process model
NASA Technical Reports Server (NTRS)
Perkins, Sharon; Arend, Mark B.
1990-01-01
A framework is defined for a general method for selecting a necessary and sufficient subset of a general software life cycle's information products to support a new software development process. Procedures for characterizing problem domains in general and mapping them to a tailored set of life cycle processes and products are presented. An overview of the method is given in the following steps: (1) During the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) Generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) Translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) Map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) Select the information products which match or support the accepted processes and products of step 4; and (6) Select the design methodology which produces the information products selected in step 5.
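Steps 3-5 amount to chaining two lookup tables: needs map to criteria, criteria map to accepted processes and products, and the union of those gives the information products to keep. The table contents below are hypothetical examples, not the method's actual catalogues.

```python
# Toy illustration of the needs -> criteria -> information-products mapping (hypothetical tables).
quality_needs = ["high reliability", "ease of change"]

criteria_for_need = {
    "high reliability": ["fault tolerance", "testability"],
    "ease of change": ["modularity"],
}
products_for_criterion = {
    "fault tolerance": ["failure-mode analysis report"],
    "testability": ["unit test plan"],
    "modularity": ["module interface specification"],
}

criteria = {c for need in quality_needs for c in criteria_for_need[need]}
information_products = {p for c in criteria for p in products_for_criterion[c]}
print(sorted(information_products))
```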