Sample records for variable step sizes

  1. An improved maximum power point tracking method for a photovoltaic system

    NASA Astrophysics Data System (ADS)

    Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes

    2016-06-01

    In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for photovoltaic (PV) systems is proposed. To achieve a fast dynamic response and stable steady-state power simultaneously, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. A second algorithm was then proposed to address wrong decisions that may be made at an abrupt change in irradiation. The proposed auto-scaling variable step-size approach was compared with other approaches from the literature, including classical fixed step-size, variable step-size, and a recent auto-scaling variable step-size MPPT approach. Simulation results obtained in MATLAB/SIMULINK are given and discussed for validation.
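
    The abstract does not reproduce the paper's exact scaling function, but the general idea of an auto-scaling variable step-size MPPT update can be sketched: the duty-cycle step is scaled by the measured power slope, so it is large far from the maximum power point and shrinks near it. All names and the scaling constant `n` below are illustrative, not the authors' design:

    ```python
    def mppt_step(p_now, p_prev, d_prev, d_now, n=0.05, d_min=0.0, d_max=1.0):
        """One variable step-size perturb-style update of the converter duty cycle.
        The step scales with the estimated slope |dP/dD|, so it is large far
        from the MPP and small near it (a generic auto-scaling scheme)."""
        dp = p_now - p_prev
        dd = d_now - d_prev
        if dd == 0:
            dd = 1e-6                      # avoid division by zero on the first call
        slope = dp / dd                    # estimate of dP/dD
        step = n * abs(slope)              # auto-scaled step size
        direction = 1.0 if slope > 0 else -1.0
        d_next = d_now + direction * step  # climb the P-D curve
        return min(max(d_next, d_min), d_max)
    ```

    In a real controller, `p_now` and `p_prev` would come from sensed PV voltage and current each switching period.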

  2. A Conformational Transition in the Myosin VI Converter Contributes to the Variable Step Size

    PubMed Central

    Ovchinnikov, V.; Cecchini, M.; Vanden-Eijnden, E.; Karplus, M.

    2011-01-01

    Myosin VI (MVI) is a dimeric molecular motor that translocates backwards on actin filaments with a surprisingly large and variable step size, given its short lever arm. A recent x-ray structure of MVI indicates that the large step size can be explained in part by a novel conformation of the converter subdomain in the prepowerstroke state, in which a 53-residue insert, unique to MVI, reorients the lever arm nearly parallel to the actin filament. To determine whether the existence of the novel converter conformation could contribute to the step-size variability, we used a path-based free-energy simulation tool, the string method, to show that there is a small free-energy difference between the novel converter conformation and the conventional conformation found in other myosins. This result suggests that MVI can bind to actin with the converter in either conformation. Models of MVI/MV chimeric dimers show that the variability in the tilting angle of the lever arm that results from the two converter conformations can lead to step-size variations of ∼12 nm. These variations, in combination with other proposed mechanisms, could explain the experimentally determined step-size variability of ∼25 nm for wild-type MVI. Mutations to test the findings by experiment are suggested. PMID:22098742

  3. Simulation and experimental design of a new advanced variable step size Incremental Conductance MPPT algorithm for PV systems.

    PubMed

    Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir

    2016-05-01

    Improving the efficiency of photovoltaic systems through new maximum power point tracking (MPPT) algorithms is a promising solution because of its low cost and easy implementation without equipment updates. Many MPPT methods with fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms degrades. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional IC method and the proposed method under different operating conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a flyback converter with a control circuit using a dsPIC30F4011. Both the simulation and the experimental design are described in detail. A comparative study between the proposed variable step size and the fixed step size IC MPPT methods under similar operating conditions is presented. The results demonstrate the efficiency of the proposed MPPT algorithm in terms of MPP tracking speed and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
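
    The incremental-conductance test on which such algorithms build compares dI/dV against -I/V (equal at the MPP), and a variable step size is commonly scaled by |dP/dV|. A minimal sketch under those textbook rules follows; the paper's exact tuning, the converter sign convention, and the constant `n` are all assumptions here:

    ```python
    def inc_cond_step(v, i, v_prev, i_prev, op, n=0.05):
        """One variable step-size Incremental Conductance (IC) update of the
        operating point `op` (duty cycle or reference voltage; the sign
        convention assumes increasing `op` raises the PV voltage)."""
        dv, di = v - v_prev, i - i_prev
        if dv == 0:                       # voltage unchanged: decide on current only
            return op if di == 0 else op + (n if di > 0 else -n)
        dp_dv = i + v * (di / dv)         # slope of P = V*I with respect to V
        step = n * abs(dp_dv)             # variable step: large far from the MPP
        if dp_dv == 0:                    # dI/dV == -I/V, i.e. at the MPP
            return op
        return op + (step if dp_dv > 0 else -step)
    ```

    The operating point converges with shrinking steps as |dP/dV| falls toward zero at the MPP.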

  4. A Variable Step-Size Proportionate Affine Projection Algorithm for Identification of Sparse Impulse Response

    NASA Astrophysics Data System (ADS)

    Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong

    2009-12-01

    Proportionate adaptive algorithms have recently been proposed to accelerate convergence in the identification of sparse impulse responses. When the excitation signal is colored, especially for speech, proportionate NLMS algorithms converge slowly. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more information from the input signals. However, its steady-state performance is limited by the constant step-size parameter. In this article we propose a variable step-size PAPA based on canceling the a posteriori estimation error. This yields fast convergence with a large step size when the identification error is large, and then considerably reduces the steady-state misalignment with a small step size after the adaptive filter has converged. Simulation results show that the proposed approach greatly improves the steady-state misalignment without sacrificing the fast convergence of PAPA.

  5. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media has great importance for understanding and interpreting the influences of rock properties on characteristics of seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with global spatial grid-size and time-step; it consumes large amounts of computational cost when small-scaled oil/gas-bearing structures or large velocity-contrast exist underground. To overcome this handicap, combined with variable grid-size and time-step, this paper developed a staggered-grid finite-difference scheme for elastic wave modeling in porous media. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm were shown by numerical examples. The proposed method is advanced with low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.

  6. An improved VSS NLMS algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan

    2017-08-01

    In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared with other traditional adaptive filtering algorithms, but a trade-off between convergence speed and steady-state error limits its performance. We propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily set parameters, and effectively resolves the trade-off in NLMS. Simulation results show that the proposed algorithm simultaneously achieves good tracking ability, a fast convergence rate and low steady-state error.
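
    The trade-off described here is usually resolved by making the step size an increasing function of the instantaneous error. A minimal sketch using one common error-driven rule follows; the exponential form and the constants `mu_min`, `mu_max`, `alpha` are illustrative, not the paper's exact rule:

    ```python
    import numpy as np

    def vss_nlms(x, d, m, mu_min=0.05, mu_max=1.0, alpha=10.0, eps=1e-8):
        """NLMS with an error-driven variable step size: mu_k grows toward
        mu_max while the error is large (fast convergence) and falls toward
        mu_min near steady state (low misadjustment)."""
        w = np.zeros(m)
        for k in range(m, len(x)):
            u = x[k - m:k][::-1]                      # regressor, newest sample first
            e = d[k] - w @ u                          # a priori error
            mu = mu_min + (mu_max - mu_min) * (1.0 - np.exp(-alpha * e * e))
            w += mu * e * u / (u @ u + eps)           # normalized update
        return w
    ```

    Run against a known FIR channel, the estimated weight vector converges to the channel taps.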

  7. Steepest descent method implementation on unconstrained optimization problem using C++ program

    NASA Astrophysics Data System (ADS)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest descent is known as the simplest gradient method. Recently, much research has been devoted to choosing an appropriate step size so that the objective function value decreases progressively. In this paper, the properties of the steepest descent method are reviewed from the literature, together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method with respect to its step size procedure is discussed. To test the performance of each step size, we implemented a steepest descent procedure in a C++ program and applied it to an unconstrained optimization test problem in two variables, then compared the numerical results of each step size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure for each problem case.
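
    One of the classic step-size procedures such a review covers is the Armijo backtracking rule: start from a trial step and halve it until a sufficient-decrease condition holds. A sketch in Python rather than C++; the quadratic test function used below is illustrative, not one of the paper's problems:

    ```python
    import numpy as np

    def steepest_descent(grad_f, f, x0, tol=1e-6, max_iter=10000):
        """Steepest descent with a backtracking (Armijo) line search
        as the step-size procedure."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad_f(x)
            if np.linalg.norm(g) < tol:           # stop when the gradient vanishes
                break
            t = 1.0
            # Armijo condition: f(x - t g) <= f(x) - 1e-4 * t * ||g||^2
            while f(x - t * g) > f(x) - 1e-4 * t * (g @ g):
                t *= 0.5                          # backtrack: halve the step
            x = x - t * g
        return x
    ```

    On a well-conditioned quadratic this converges in a handful of iterations; ill-conditioned problems expose the slow zig-zagging that motivates better step-size rules.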

  8. A chaos wolf optimization algorithm with self-adaptive variable step-size

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To address parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of a wolf pack, fully simulating the predation behavior and prey distribution of wolves. It comprises three intelligent behaviors: migration, summons and siege. The algorithm is further characterized by a "winner-take-all" competition rule and a "survival of the fittest" update mechanism, and it combines self-adaptive variable step-size search with chaos optimization. The CWOA was applied to the parameter optimization of twelve typical complex nonlinear functions, and the results were compared with several existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The results indicate that the CWOA has superior optimization ability, with advantages in optimization accuracy and convergence rate, as well as high robustness and global search ability.

  9. A new theory for multistep discretizations of stiff ordinary differential equations: Stability with large step sizes

    NASA Technical Reports Server (NTRS)

    Majda, G.

    1985-01-01

    A large set of variable-coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound independent of the stiffness of the system of o.d.e.s, when the step size resolves the slow time scale but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable-coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.

  10. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to a family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.

  11. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review

    PubMed Central

    Morris, Tom; Gray, Laura

    2017-01-01

    Objectives: To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting: Any, not limited to healthcare settings. Participants: Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures: The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results: Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions: Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
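
    The coefficient of variation used as the primary outcome here is simply the standard deviation of the cluster sizes divided by their mean. The sample standard deviation is assumed below; the review does not state which estimator was used:

    ```python
    import statistics

    def cluster_size_cv(sizes):
        """Coefficient of variation (CV) of cluster sizes: sample SD / mean."""
        return statistics.stdev(sizes) / statistics.mean(sizes)
    ```

    A median CV of 0.41, as reported, indicates substantial imbalance between clusters.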

  12. Influence of BMI and dietary restraint on self-selected portions of prepared meals in US women.

    PubMed

    Labbe, David; Rytz, Andréas; Brunstrom, Jeffrey M; Forde, Ciarán G; Martin, Nathalie

    2017-04-01

    The rise of obesity prevalence has been attributed in part to an increase in the food and beverage portion sizes selected and consumed by overweight and obese consumers. Nevertheless, evidence from observations of adults is mixed, and contradictory findings might reflect the use of small or unrepresentative samples. The objective of this study was (i) to determine the extent to which BMI and dietary restraint predict self-selected portion sizes for a range of commercially available prepared savoury meals and (ii) to consider the importance of these variables relative to two previously established predictors of portion selection, expected satiation and expected liking. A representative sample of female consumers (N = 300, range 18-55 years) evaluated 15 frozen savoury prepared meals. For each meal, participants rated their expected satiation and expected liking, and selected their ideal portion using a previously validated computer-based task. Dietary restraint was quantified using the Dutch Eating Behaviour Questionnaire (DEBQ-R). Hierarchical multiple regression was performed on self-selected portions with age, hunger level, and meal familiarity entered as control variables in the first step of the model, expected satiation and expected liking as predictor variables in the second step, and DEBQ-R and BMI as exploratory predictor variables in the third step. The second and third steps significantly explained variance in portion size selection (18% and 4%, respectively). Larger portion selections were significantly associated with lower dietary restraint and with lower expected satiation. There was a positive relationship between BMI and portion size selection (p = 0.06) and between expected liking and portion size selection (p = 0.06). Our discussion considers future research directions, the limited variance explained by our model, and the potential for portion size underreporting by overweight participants. Copyright © 2016 Nestec S.A. Published by Elsevier Ltd. All rights reserved.

  13. Student failures on first-year medical basic science courses and the USMLE step 1: a retrospective study over a 20-year period.

    PubMed

    Burns, E Robert; Garrett, Judy

    2015-01-01

    Correlates of achievement in the basic science years in medical school and on Step 1 of the United States Medical Licensing Examination® (USMLE®) in relation to preadmission variables have been the subject of considerable study. Preadmission variables such as the undergraduate grade point average (uGPA) and Medical College Admission Test® (MCAT®) scores, solely or in combination, have previously been found to be predictors of achievement in the basic science years and/or on Step 1. The purposes of this retrospective study were to: (1) determine whether our statistical analysis confirmed previously published relationships between preadmission variables (MCAT, uGPA, and applicant pool size), and (2) study correlates of the number of failures in five M1 courses with those preadmission variables and with failures on Step 1. Statistical analysis confirmed previously published relationships between all preadmission variables. Only one course, Microscopic Anatomy, demonstrated significant correlations with all variables studied, including Step 1 failures. Physiology correlated with three of the four variables studied, but not with Step 1 failures. Analyses such as these provide a tool by which administrators can identify which courses are or are not responding appropriately to changes in the preadmission variables that signal student performance on Step 1. © 2014 American Association of Anatomists.

  14. Variable-mesh method of solving differential equations

    NASA Technical Reports Server (NTRS)

    Van Wyk, R.

    1969-01-01

    Multistep predictor-corrector method for numerical solution of ordinary differential equations retains high local accuracy and convergence properties. In addition, the method was developed in a form conducive to the generation of effective criteria for the selection of subsequent step sizes in step-by-step solution of differential equations.
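
    The kind of step-size selection criterion referred to can be illustrated with the standard controller that grows or shrinks the step from a local error estimate. The embedded Euler/Heun pair below is a generic stand-in for the report's predictor-corrector method, and all tolerances and safety factors are illustrative:

    ```python
    def adapt_step(h, err, tol, order, safety=0.9, fac_min=0.2, fac_max=5.0):
        """Standard step-size selection from a local error estimate:
        h_new = h * safety * (tol/err)**(1/(order+1)), clamped to avoid
        wild jumps between consecutive steps."""
        if err == 0:
            return h * fac_max
        fac = safety * (tol / err) ** (1.0 / (order + 1))
        return h * min(fac_max, max(fac_min, fac))

    def integrate(f, y0, t0, t1, tol=1e-6, h=0.1):
        """Adaptive Euler/Heun pair (orders 1 and 2) for a scalar ODE y' = f(t, y),
        using adapt_step to accept/reject steps and pick the next step size."""
        t, y = t0, y0
        while t < t1:
            h = min(h, t1 - t)               # do not step past the endpoint
            k1 = f(t, y)
            k2 = f(t + h, y + h * k1)
            y_euler = y + h * k1             # first-order predictor
            y_heun = y + 0.5 * h * (k1 + k2) # second-order corrector
            err = abs(y_heun - y_euler)      # local error estimate for Euler
            if err <= tol:
                t, y = t + h, y_heun         # accept the step
            h = adapt_step(h, err, tol, order=1)  # adjust h either way
        return y
    ```

    Rejected steps shrink `h`; accepted steps near the tolerance keep it roughly constant, which is exactly the "criteria for the selection of subsequent step sizes" a step-by-step solver needs.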

  15. Counter-propagation network with variable degree variable step size LMS for single switch typing recognition.

    PubMed

    Yang, Cheng-Huei; Luo, Ching-Hsing; Yang, Cheng-Hong; Chuang, Li-Yeh

    2004-01-01

    Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, including mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as an adaptive communication device for disabled persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool, and this restriction is a major hindrance. Therefore, an adaptive automatic recognition method for switch typing with a high recognition rate is needed. The proposed system combines counter-propagation networks with a variable degree variable step size LMS algorithm. It is divided into five stages: space recognition, tone recognition, learning process, adaptive processing, and character recognition. Statistical analyses demonstrated that the proposed method achieved a better recognition rate than alternative methods in the literature.

  16. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review.

    PubMed

    Kristunas, Caroline; Morris, Tom; Gray, Laura

    2017-11-15

    Objectives: To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting: Any, not limited to healthcare settings. Participants: Any taking part in an SW-CRT published up to March 2016. Outcome measures: The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results: Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22-0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions: Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  17. Stability with large step sizes for multistep discretizations of stiff ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Majda, George

    1986-01-01

    One-leg and multistep discretizations of variable-coefficient linear systems of ODEs having both slow and fast time scales are investigated analytically. The stability properties of these discretizations are obtained independent of ODE stiffness and compared. The results of numerical computations are presented in tables, and it is shown that for large step sizes the stability of one-leg methods is better than that of the corresponding linear multistep methods.

  18. Variable Step-Size Selection Methods for Implicit Integration Schemes

    DTIC Science & Technology

    2005-10-01

    In this report, the variable step-size selection method is explored for two problems: the Lotka-Volterra model of a simple predator-prey system and the Kepler problem. A variation of the Lotka-Volterra problem is also considered: (u̇, v̇) = (u²v(v − 2), v²u(1 − u)) = f(u, v), t ∈ [0, 50].

  19. Multi-step rhodopsin inactivation schemes can account for the size variability of single photon responses in Limulus ventral photoreceptors

    PubMed Central

    1994-01-01

    Limulus ventral photoreceptors generate highly variable responses to the absorption of single photons. We have obtained data on the size distribution of these responses, derived the distribution predicted from simple transduction cascade models and compared the theory and data. In the simplest of models, the active state of the visual pigment (defined by its ability to activate G protein) is turned off in a single reaction. The output of such a cascade is predicted to be highly variable, largely because of stochastic variation in the number of G proteins activated. The exact distribution predicted is exponential, but we find that an exponential does not adequately account for the data. The data agree much better with the predictions of a cascade model in which the active state of the visual pigment is turned off by a multi-step process. PMID:8057085

  20. An improved affine projection algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm is a signal-reuse algorithm with a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect its performance: the step-size factor and the projection length. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA) that dynamically changes the step size according to certain rules, achieving a smaller steady-state error and faster convergence. Simulation results show that its performance is superior to the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm obtains very good results.
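
    As a sketch of how an error-driven variable step size plugs into the affine projection update: the filter reuses the `p` most recent regressors, and the step size grows with the energy of the a priori error vector. The projection length `p`, the exponential step rule and all constants below are illustrative assumptions, not the rules proposed in the paper:

    ```python
    import numpy as np

    def vss_apa(x, d, m, p=4, mu_min=0.05, mu_max=1.0, alpha=10.0, eps=1e-4):
        """Affine projection adaptive filter of length m with a variable
        step size driven by the a priori error energy."""
        w = np.zeros(m)
        for k in range(m + p - 1, len(x)):
            # Rows of U are the p most recent length-m regressors, newest first.
            U = np.array([x[k - j - m:k - j][::-1] for j in range(p)])
            e = d[k - p + 1:k + 1][::-1] - U @ w            # a priori error vector
            mu = mu_min + (mu_max - mu_min) * (1.0 - np.exp(-alpha * (e @ e)))
            # Regularized projection onto the span of the p regressors.
            w += mu * U.T @ np.linalg.solve(U @ U.T + eps * np.eye(p), e)
        return w
    ```

    Reusing `p` past regressors is what gives APA its convergence advantage over (N)LMS on correlated inputs, at the cost of a p×p solve per sample.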

  1. Understanding the significance variables for fabrication of fish gelatin nanoparticles by Plackett-Burman design

    NASA Astrophysics Data System (ADS)

    Subara, Deni; Jaswir, Irwandi; Alkhatib, Maan Fahmi Rashid; Noorbatcha, Ibrahim Ali

    2018-01-01

    The aim of this experiment was to screen and understand the process variables in the fabrication of fish gelatin nanoparticles using a quality-by-design approach. The most influential process variables were screened using a Plackett-Burman design. The mean particle size, size distribution, and zeta potential were found to be 240±9.76 nm, 0.3, and -9 mV, respectively. Statistical analysis showed that the acetone concentration, the pH of the solution during the precipitation step and the cross-linker volume had the most significant effects on the particle size of the fish gelatin nanoparticles. Time and chemical consumption were also lower than in previous research. This study reveals the potential of quality-by-design for understanding the effects of process variables on fish gelatin nanoparticle production.

  2. Learning Rate Updating Methods Applied to Adaptive Fuzzy Equalizers for Broadband Power Line Communications

    NASA Astrophysics Data System (ADS)

    Ribeiro, Moisés V.

    2004-12-01

    This paper introduces adaptive fuzzy equalizers with variable step size for broadband power line (PL) communications. Based on delta-bar-delta and local Lipschitz estimation updating rules, in both feedforward and decision-feedback configurations, we propose singleton and nonsingleton fuzzy equalizers with variable step size to cope with the intersymbol interference (ISI) effects of PL channels and the severity of the impulse noise generated by appliances and nonlinear loads connected to low-voltage power grids. The computed results show that the convergence rates of the proposed equalizers are higher than those attained by the traditional adaptive fuzzy equalizers introduced by J. M. Mendel and his students. Additionally, the BER curves reveal that the proposed techniques are efficient at mitigating the above-mentioned impairments.

  3. High dependency units in the UK: variable size, variable character, few in number.

    PubMed Central

    Thompson, F. J.; Singer, M.

    1995-01-01

    An exploratory descriptive survey was conducted to determine the size and character of high dependency units (HDUs) in the UK. A telephone survey and subsequent postal questionnaire was sent to the 39 general HDUs in the UK determined by a recent survey from the Royal College of Anaesthetists; replies were received from 28. Most HDUs (82%, n = 23) were geographically distinct from the intensive care unit and varied in size from three to 13 beds, although only 64% (n = 18) reported that all beds were currently open. Nurse: patient ratios were at least 1:3. Fifty per cent of units had one or more designated consultants in charge, although only 11% (n = 3) had specifically designated consultant sessions. Junior medical cover was provided mainly by the on-call speciality term. Twenty units acted as a step-down facility for discharged intensive care unit patients and 21 offered a step-up facility for patients from general wards. Provision of facilities and levels of monitoring varied between these units. Few HDUs exist in the UK and they are variable in size and in the facilities and monitoring procedures which they provide. Future studies are urgently required to determine cost-effectiveness and outcome benefit of this intermediate care facility. PMID:7784281

  4. High dependency units in the UK: variable size, variable character, few in number.

    PubMed

    Thompson, F J; Singer, M

    1995-04-01

    An exploratory descriptive survey was conducted to determine the size and character of high dependency units (HDUs) in the UK. A telephone survey and subsequent postal questionnaire was sent to the 39 general HDUs in the UK determined by a recent survey from the Royal College of Anaesthetists; replies were received from 28. Most HDUs (82%, n = 23) were geographically distinct from the intensive care unit and varied in size from three to 13 beds, although only 64% (n = 18) reported that all beds were currently open. Nurse: patient ratios were at least 1:3. Fifty per cent of units had one or more designated consultants in charge, although only 11% (n = 3) had specifically designated consultant sessions. Junior medical cover was provided mainly by the on-call speciality term. Twenty units acted as a step-down facility for discharged intensive care unit patients and 21 offered a step-up facility for patients from general wards. Provision of facilities and levels of monitoring varied between these units. Few HDUs exist in the UK and they are variable in size and in the facilities and monitoring procedures which they provide. Future studies are urgently required to determine cost-effectiveness and outcome benefit of this intermediate care facility.

  5. Variable aperture-based ptychographical iterative engine method

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under parallel-beam illumination to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the technique can therefore potentially be applied in a variety of scientific research.

  6. Orbit and uncertainty propagation: a comparison of Gauss-Legendre-, Dormand-Prince-, and Chebyshev-Picard-based approaches

    NASA Astrophysics Data System (ADS)

    Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.

    2014-01-01

    We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.

  7. Variable is better than invariable: sparse VSS-NLMS algorithms with application to adaptive MIMO channel estimation.

    PubMed

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using orthogonal frequency-division multiplexing (OFDM). To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter controlling three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an invariable step size cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS and sparse VSS-NLMS algorithms is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER).
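
    As a rough illustration of the idea (not the paper's exact algorithms), the sketch below combines a Kwong-style variable step-size recursion with a zero-attracting l1 penalty inside NLMS to identify a sparse channel; the channel taps, step-size constants, and penalty weight are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented sparse "channel": 16 taps, only 3 nonzero.
h = np.zeros(16)
h[[2, 7, 12]] = [1.0, -0.5, 0.3]

w = np.zeros(16)              # adaptive estimate of the channel
mu = 0.5                      # step size, adapted online
alpha, gamma = 0.97, 0.01     # Kwong-style step-size recursion constants
rho = 1e-5                    # zero-attracting (l1) penalty weight

x_hist = np.zeros(16)
for n in range(5000):
    x_hist = np.roll(x_hist, 1)
    x_hist[0] = rng.standard_normal()
    d = h @ x_hist + 1e-3*rng.standard_normal()           # noisy desired output
    e = d - w @ x_hist                                    # a priori error
    mu = float(np.clip(alpha*mu + gamma*e*e, 0.05, 1.0))  # variable step size
    # NLMS update plus a sparsity-promoting zero attractor.
    w += mu*e*x_hist/(x_hist @ x_hist + 1e-8) - rho*np.sign(w)
```

    The variable step size stays large while the error is large (fast convergence) and shrinks toward its floor at steady state (low misadjustment), which is the balance an invariable step size cannot strike.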

  9. Variable aperture-based ptychographical iterative engine method.

    PubMed

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since far fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied to a wide range of scientific research. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  10. VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal

    NASA Astrophysics Data System (ADS)

    Satheeskumaran, S.; Sabrigiriraj, M.

    2016-06-01

    Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) because they require few computations. However, they exhibit a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. With field-programmable gate arrays, pipelined architectures can be used to enhance system performance. The pipelined architecture improves the operating efficiency of the adaptive filter and reduces power consumption. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.

  11. Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.

    2004-01-01

    Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability, and control-related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with the choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to the difficulty of choosing step sizes. Another approach gaining popularity is the use of automatic differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement, because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached full maturity for production codes, especially in parallel computing environments.
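
    The complex-variable approach the title refers to rests on the complex-step derivative formula, which, unlike finite differences, involves no subtraction of nearly equal numbers, so the step size can be made arbitrarily small without cancellation error. A minimal sketch:

```python
import cmath
import math

def complex_step(f, x, h=1e-30):
    """f'(x) ~ Im f(x + ih)/h: no subtraction of nearly equal numbers,
    so h can be made tiny with essentially no rounding penalty."""
    return f(x + 1j*h).imag / h

def forward_diff(f, x, h=1e-8):
    """Ordinary one-sided finite difference, sensitive to the choice of h."""
    return (f(x + h) - f(x)) / h

d_cs = complex_step(cmath.sin, 0.5)   # accurate to machine precision
d_fd = forward_diff(math.sin, 0.5)    # accurate to roughly sqrt(eps)
```

    The catch, of course, is that the function must accept complex arguments, which for a CFD code means converting the source to complex arithmetic.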

  12. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one has the luxury of taking a large temporal integration step at the expense of higher memory requirements and larger operation counts per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time-marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them to compute a two-dimensional driven cavity flow at Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change in the dependent variables between two consecutive time steps had fallen below 10^-5.

  13. Influence of Age, Maturity, and Body Size on the Spatiotemporal Determinants of Maximal Sprint Speed in Boys.

    PubMed

    Meyers, Robert W; Oliver, Jon L; Hughes, Michael G; Lloyd, Rhodri S; Cronin, John B

    2017-04-01

    Meyers, RW, Oliver, JL, Hughes, MG, Lloyd, RS, and Cronin, JB. Influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. J Strength Cond Res 31(4): 1009-1016, 2017-The aim of this study was to investigate the influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. Three hundred seventy-five boys (age: 13.0 ± 1.3 years) completed a 30-m sprint test, during which maximal speed, step length, step frequency, contact time, and flight time were recorded using an optical measurement system. Body mass, height, leg length, and a maturity offset represented somatic variables. Step frequency accounted for the highest proportion of variance in speed (∼58%) in the pre-peak height velocity (pre-PHV) group, whereas step length explained the majority of the variance in speed (∼54%) in the post-PHV group. In the pre-PHV group, mass was negatively related to speed, step length, step frequency, and contact time; however, measures of stature had a positive influence on speed and step length yet a negative influence on step frequency. Speed and step length were also negatively influenced by mass in the post-PHV group, whereas leg length continued to positively influence step length. The results highlighted that pre-PHV boys may be deemed step-frequency reliant, whereas post-PHV boys may be marginally step-length reliant. Furthermore, the negative influence of body mass, both pre-PHV and post-PHV, suggests that training to optimize sprint performance in youth should include methods such as plyometric and strength training, where a high neuromuscular focus and the development of force production relative to body weight are key foci.

  14. Evaluation of Second-Level Inference in fMRI Analysis

    PubMed Central

    Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs

    2016-01-01

    We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and on (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of 3 phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not take into account this variability. Second, one proceeds via inference based on parametrical assumptions or via permutation-based inference. Third, we evaluate 3 commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with minimal cluster size yields the most stable results, followed by the familywise error rate correction. FDR correction yields the most variable results, for both permutation-based and parametrical inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference. PMID:26819578

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumuluru, Jaya Shankar; McCulloch, Richard Chet James

    In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest-ascent hill-climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent), computational resources are conserved and the solution converges rapidly compared to either algorithm alone. In genetic algorithms, natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest-ascent algorithm, each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite the previous direction of most benefit, then the step size is reduced by a factor of 2; thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature), pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
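
    A minimal sketch of the adaptive steepest-ascent component described above (perturb each variable, apply the best move, halve the step when the best direction reverses); the objective function, starting point, and step constants are illustrative, not from the paper.

```python
def adaptive_ascent(f, x0, step=0.5, min_step=1e-4):
    """Maximize f by perturbing each variable by +/-step, applying the move
    with the most improvement, and halving the step whenever the best move
    reverses the previous direction (or no move improves at all)."""
    x = list(x0)
    prev = None                       # (variable index, direction) of last move
    while step > min_step:
        fx = f(x)
        best_gain, best = 0.0, None
        for i in range(len(x)):
            for d in (1, -1):
                trial = list(x)
                trial[i] += d*step
                gain = f(trial) - fx
                if gain > best_gain:
                    best_gain, best = gain, (i, d)
        if best is None:
            step /= 2                 # stuck at this resolution: refine
            prev = None
            continue
        i, d = best
        x[i] += d*step
        if prev == (i, -d):
            step /= 2                 # direction reversed: adapt to the terrain
        prev = (i, d)
    return x

# Illustrative concave objective with its maximum at (1, -2).
opt = adaptive_ascent(lambda p: -(p[0] - 1.0)**2 - (p[1] + 2.0)**2, [0.0, 0.0])
```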

  16. Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Martini, Michael C.

    2011-01-01

    A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th-order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic term for central-body gravitation. The algorithm includes a step-size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (an order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time.
The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and sets up initial conditions and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
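
    The idea of generating Taylor coefficients by recurrence can be shown on the simplest possible case, y' = y, where the recurrence a_{k+1} = a_k/(k+1) produces successive series coefficients at each step; this toy sketch is unrelated to the SNAP implementation details and the order and step size here are illustrative.

```python
import math

def taylor_step(y, h, order=20):
    """One Taylor-series step for y' = y. The recurrence a_{k+1} = a_k/(k+1)
    generates successive coefficients cheaply, so high orders are affordable
    and comparatively large steps remain accurate."""
    a, total = y, y
    for k in range(order):
        a = a / (k + 1)          # next Taylor coefficient via the recurrence
        total += a * h**(k + 1)
    return total

y, h = 1.0, 0.5
for _ in range(2):               # integrate from t=0 to t=1; exact answer is e
    y = taylor_step(y, h)
```

    With order 20 and h = 0.5 the truncated remainder is far below machine precision, which is why Taylor methods can take steps much larger than a Runge-Kutta scheme of moderate order.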

  17. A variable-step-size robust delta modulator.

    NASA Technical Reports Server (NTRS)

    Song, C. L.; Garodnick, J.; Schilling, D. L.

    1971-01-01

    Description of an analytically obtained optimum adaptive delta modulator-demodulator configuration. The device utilizes two past samples to obtain a step size which minimizes the mean square error for a Markov-Gaussian source. The optimum system is compared, using computer simulations, with a linear delta modulator and an enhanced Abate delta modulator. In addition, the performance is compared to the rate distortion bound for a Markov source. It is shown that the optimum delta modulator is neither quantization nor slope-overload limited. The highly nonlinear equations obtained for the optimum transmitter and receiver are approximated by piecewise-linear equations in order to obtain system equations which can be transformed into hardware. The derivation of the experimental system is presented.
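
    For contrast with the optimum two-sample design described above, a minimal Abate-style adaptive delta modulator (step doubling on consecutive equal bits, halving on alternation) can be sketched as follows; the step bounds and test signal are illustrative.

```python
import math

def adm_step_update(step, b, prev_b, smin, smax):
    """Double the step on consecutive equal bits (fights slope overload),
    halve it on alternation (fights granular quantization noise)."""
    return min(step*2, smax) if b == prev_b else max(step/2, smin)

def adm_encode(signal, step0=0.05, smin=0.01, smax=0.4):
    est, step, prev_b = 0.0, step0, 1
    bits, track = [], []
    for x in signal:
        b = 1 if x >= est else -1         # one transmitted bit per sample
        step = adm_step_update(step, b, prev_b, smin, smax)
        est += b*step
        bits.append(b)
        track.append(est)
        prev_b = b
    return bits, track

def adm_decode(bits, step0=0.05, smin=0.01, smax=0.4):
    """The decoder mirrors the encoder's step adaptation exactly, bit for bit."""
    est, step, prev_b = 0.0, step0, 1
    out = []
    for b in bits:
        step = adm_step_update(step, b, prev_b, smin, smax)
        est += b*step
        out.append(est)
        prev_b = b
    return out

sig = [math.sin(2*math.pi*n/100) for n in range(300)]
bits, track = adm_encode(sig)
recon = adm_decode(bits)
```

    Because both ends run the identical adaptation rule, the decoder reconstructs the encoder's internal estimate exactly from the bit stream alone.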

  18. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
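
    The two updates compared in this study can be sketched for a single gating variable with constant m_inf and tau, a regime in which exponential Euler is exact (the paper's finding that Euler can outperform EE concerns large steps with voltage-dependent rates, which this constant-coefficient toy case does not capture). Parameter values are illustrative.

```python
import math

# Gating-variable relaxation dm/dt = (m_inf - m)/tau with m_inf and tau
# held constant over each step (e.g. under voltage clamp).
m_inf, tau, dt, m0 = 0.8, 5.0, 1.0, 0.0

def euler(m):
    """Forward Euler: first-order accurate in dt."""
    return m + dt*(m_inf - m)/tau

def exp_euler(m):
    """Exponential Euler: exact when m_inf and tau are constant over the step."""
    return m_inf + (m - m_inf)*math.exp(-dt/tau)

m_e, m_ee = m0, m0
for _ in range(10):
    m_e = euler(m_e)
    m_ee = exp_euler(m_ee)

# Analytic solution after 10 steps of length dt.
m_exact = m_inf + (m0 - m_inf)*math.exp(-10*dt/tau)
```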

  19. What controls channel form in steep mountain streams?

    NASA Astrophysics Data System (ADS)

    Palucis, M. C.; Lamb, M. P.

    2017-07-01

    Steep mountain streams have channel morphologies that transition from alternate bar to step-pool to cascade with increasing bed slope, which affect stream habitat, flow resistance, and sediment transport. Experimental and theoretical studies suggest that alternate bars form under large channel width-to-depth ratios, step-pools form in near supercritical flow or when channel width is narrow compared to bed grain size, and cascade morphology is related to debris flows. However, the connection between these process variables and bed slope—the apparent dominant variable for natural stream types—is unclear. Combining field data and theory, we find that certain bed slopes have unique channel morphologies because the process variables covary systematically with bed slope. Multiple stable states are predicted for other ranges in bed slope, suggesting that a competition of underlying processes leads to the emergence of the most stable channel form.

  20. An algorithm for fast elastic wave simulation using a vectorized finite difference operator

    NASA Astrophysics Data System (ADS)

    Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna

    2018-07-01

    Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive for several reasons, such as the large number of time steps and nodes, the size of the derivative stencil, and the overall model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid in a manner similar to the collocated-grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (the Marmousi model) by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and nearly 100 for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step, which depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
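
    The slicing idea behind such a vectorized operator can be shown in one dimension (the paper's operator targets 2-D/3-D elastic staggered grids): the forward difference for every node is computed in a single array expression rather than a loop, and on a staggered grid the result lives at the interleaved half points.

```python
import numpy as np

def d_dx_staggered(f, dx):
    """Forward difference for all nodes at once via array slicing;
    the result is defined at the staggered (half) points."""
    return (f[1:] - f[:-1]) / dx

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
f = x**2
x_half = 0.5*(x[1:] + x[:-1])    # locations of the staggered points
df = d_dx_staggered(f, dx)       # derivative values at x_half
```

    For f = x^2 the staggered difference (f[i+1]-f[i])/dx equals exactly 2x at the half points, which makes a convenient correctness check for the vectorized operator.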

  1. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time-delay estimation method has been developed to improve the accuracy of TOF determination. An improved variable step-size adaptive algorithm with a comprehensive step-size control function is proposed. Meanwhile, a cubic spline fitting approach is employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. The simulation results demonstrate the advantage of the proposed method over existing TOF determination methods. Compared with the conventional fixed step-size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates of various thicknesses. They indicate that the proposed TOF determination method is more robust even under low SNR conditions, and that the ultrasonic thickness measurement accuracy can be significantly improved.
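
    As a simple baseline for TOF estimation (much simpler than the paper's adaptive-filter approach, and using parabolic rather than cubic-spline sub-sample interpolation), cross-correlation with peak refinement can be sketched as follows; the pulse shape, sampling rate, and delay are invented for the example.

```python
import numpy as np

def tof_xcorr(ref, sig, fs):
    """Delay of sig relative to ref via cross-correlation, refined to
    sub-sample resolution by parabolic interpolation around the peak."""
    c = np.correlate(sig, ref, mode="full")
    k = int(np.argmax(c))
    if 0 < k < len(c) - 1:                  # parabolic vertex of 3 points
        y0, y1, y2 = c[k-1], c[k], c[k+1]
        denom = y0 - 2*y1 + y2
        frac = 0.5*(y0 - y2)/denom if denom != 0 else 0.0
    else:
        frac = 0.0
    lag = k - (len(ref) - 1) + frac         # zero lag sits at index len(ref)-1
    return lag / fs

fs = 1000.0
t = np.arange(0, 1.0, 1/fs)
pulse = lambda tt: np.exp(-((tt - 0.3)/0.02)**2) * np.sin(2*np.pi*50*tt)
ref = pulse(t)
echo = pulse(t - 0.1003)                    # invented true delay: 100.3 ms
tof = tof_xcorr(ref, echo, fs)
```

    The sub-sample refinement step is what the finite sampling interval would otherwise limit; the paper's cubic spline fitting plays the same role with a higher-order interpolant.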

  2. Improvements of the particle-in-cell code EUTERPE for petascaling machines

    NASA Astrophysics Data System (ADS)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Kleiber, Ralf; Castejón, Francisco; Cela, José M.

    2011-09-01

    In the present work we report some performance measures and computational improvements recently carried out using the gyrokinetic code EUTERPE (Jost, 2000 [1] and Jost et al., 1999 [2]), which is based on the general particle-in-cell (PIC) method. The scalability of the code has been studied for up to sixty thousand processing elements and some steps towards a complete hybridization of the code were made. As a numerical example, non-linear simulations of Ion Temperature Gradient (ITG) instabilities have been carried out in screw-pinch geometry and the results are compared with earlier works. A parametric study of the influence of variables (step size of the time integrator, number of markers, grid size) on the quality of the simulation is presented.

  3. Solution of elliptic PDEs by fast Poisson solvers using a local relaxation factor

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    1986-01-01

    A large class of two- and three-dimensional, nonseparable elliptic partial differential equations (PDEs) is solved by means of novel one-step (D'Yakanov-Gunn) and two-step (accelerated one-step) iterative procedures, using a local, discrete Fourier analysis. In addition to being easily implemented and applicable to a variety of boundary conditions, these procedures are found to be computationally efficient on the basis of numerical comparisons with other established methods, which lack the present method's (1) insensitivity to grid cell size and aspect ratio and (2) ease of convergence-rate estimation from the coefficients of the PDE being solved. The two-step procedure is numerically demonstrated to outperform the one-step procedure for PDEs with variable coefficients.

  4. School Climate: The Controllable and the Uncontrollable

    ERIC Educational Resources Information Center

    Sulak, Tracey N.

    2018-01-01

    A positive school climate impacts students by promoting positive relations among students, staff and faculty of the school. The current study used latent class analysis and multinomial regression with R3STEP to analyse patterns of negative behaviours in schools and test the association of these patterns with structural variables like school size,…

  5. Hysteresis modeling of magnetic shape memory alloy actuator based on Krasnosel'skii-Pokrovskii model.

    PubMed

    Zhou, Miaolei; Wang, Shoubin; Gao, Wei

    2013-01-01

    As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator manufacturing applications. Compared with traditional actuators, the MSMA actuator has advantages such as fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. To demonstrate the validity of the proposed modeling approach, simulation experiments are performed with the improved gradient correction algorithm and the variable step-size recursive least squares estimation algorithm, respectively. The simulation results of both identification algorithms demonstrate that the proposed approach establishes an effective and accurate hysteresis model for the MSMA actuator, providing a foundation for improving its control precision.
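
    The recursive least-squares building block mentioned above can be sketched in its standard exponentially weighted form (a simplified stand-in for the paper's variable step-size variant); the parameter vector, forgetting factor, and noise level are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Identify the weights of y = phi @ theta + noise with exponentially
# weighted recursive least squares.
theta_true = np.array([0.7, -0.3])
lam = 0.99                        # forgetting factor
theta = np.zeros(2)               # parameter estimate
P = np.eye(2) * 1000.0            # inverse-correlation estimate (large = vague prior)

for _ in range(500):
    phi = rng.standard_normal(2)                  # regressor vector
    y = phi @ theta_true + 0.01*rng.standard_normal()
    k = P @ phi / (lam + phi @ P @ phi)           # gain vector
    theta = theta + k * (y - phi @ theta)         # innovation update
    P = (P - np.outer(k, phi) @ P) / lam          # covariance recursion
```

    In an identification setting like the KP weight estimation above, phi would collect the operator outputs and theta the weighting parameters.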

  7. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
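
    The first step above relies on Dijkstra's algorithm. A minimal sketch, with the multisource extension handled by seeding the heap with every source at distance zero (equivalent to adding a virtual super-source with zero-cost links), run here on an invented five-node looped network:

```python
import heapq

def shortest_distance_tree(adj, sources):
    """Dijkstra's algorithm returning distances and tree parents.
    Seeding the heap with all sources at distance zero handles the
    multisource case without modifying the graph."""
    dist = {u: float("inf") for u in adj}
    parent = {u: None for u in adj}
    heap = [(0.0, s) for s in sources]
    for s in sources:
        dist[s] = 0.0
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, parent

# Invented five-node looped network with symmetric link "lengths".
net = {"a": [("b", 2), ("c", 5)],
       "b": [("a", 2), ("c", 1), ("d", 4)],
       "c": [("a", 5), ("b", 1), ("e", 3)],
       "d": [("b", 4), ("e", 1)],
       "e": [("c", 3), ("d", 1)]}
dist, parent = shortest_distance_tree(net, ["a"])
```

    The parent pointers define the shortest-distance tree; in the paper's second step, edges in this tree receive NLP-optimized diameters while the remaining (chord) edges start at the minimum allowable size.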

  8. Kinematic, muscular, and metabolic responses during exoskeletal-, elliptical-, or therapist-assisted stepping in people with incomplete spinal cord injury.

    PubMed

    Hornby, T George; Kinnaird, Catherine R; Holleran, Carey L; Rafferty, Miriam R; Rodriguez, Kelly S; Cain, Julie B

    2012-10-01

    Robotic-assisted locomotor training has demonstrated some efficacy in individuals with neurological injury and is slowly gaining clinical acceptance. Both exoskeletal devices, which control individual joint movements, and elliptical devices, which control endpoint trajectories, have been utilized with specific patient populations and are available commercially. No studies have directly compared training efficacy or patient performance during stepping between devices. The purpose of this study was to evaluate kinematic, electromyographic (EMG), and metabolic responses during elliptical- and exoskeletal-assisted stepping in individuals with incomplete spinal cord injury (SCI) compared with therapist-assisted stepping. Design: A prospective, cross-sectional, repeated-measures design was used. Participants with incomplete SCI (n=11) performed 3 separate bouts of exoskeletal-, elliptical-, or therapist-assisted stepping. Unilateral hip and knee sagittal-plane kinematics, lower-limb EMG recordings, and oxygen consumption were compared across stepping conditions and with control participants (n=10) during treadmill stepping. Exoskeletal stepping kinematics closely approximated normal gait patterns, whereas significantly greater hip and knee flexion postures were observed during elliptical-assisted stepping. Measures of kinematic variability indicated consistent patterns in control participants and during exoskeletal-assisted stepping, whereas therapist- and elliptical-assisted stepping kinematics were more variable. Despite specific differences, EMG patterns generally were similar across stepping conditions in the participants with SCI. In contrast, oxygen consumption was consistently greater during therapist-assisted stepping. Limitations: These included a small sample size, lack of ability to evaluate kinetics during stepping, unilateral EMG recordings, and sagittal-plane kinematics. 
Despite specific differences in kinematics and EMG activity, metabolic activity was similar during stepping in each robotic device. Understanding potential differences and similarities in stepping performance with robotic assistance may be important in delivery of repeated locomotor training using robotic or therapist assistance and for consumers of robotic devices.

  9. Normalised subband adaptive filtering with extended adaptiveness on degree of subband filters

    NASA Astrophysics Data System (ADS)

    Samuyelu, Bommu; Rajesh Kumar, Pullakura

    2017-12-01

This paper proposes an adaptive normalised subband adaptive filter (NSAF) to improve on the performance of the conventional NSAF. The proposed scheme extends the adaptiveness of existing NSAF variants in two ways: first, the step size is made adaptive; second, the selection of subbands is made adaptive. The proposed filter is therefore termed the variable step-size NSAF with selected subbands (VS-SNSAF). Experimental investigations demonstrate the convergence performance of the VS-SNSAF against the conventional NSAF and its state-of-the-art adaptive variants, with the results showing superior performance for VS-SNSAF over the traditional NSAF and its variants. Its stability and robustness against noise are also established, along with an analysis of its computational complexity.
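The core idea of an error-driven variable step size can be sketched in a few lines. The following is a minimal illustration using a fullband normalized LMS filter, not the authors' VS-SNSAF (the subband decomposition and subband-selection logic are omitted); all signals, filter lengths, and parameter values are hypothetical:

```python
import numpy as np

def vss_nlms(x, d, n_taps=8, mu_max=1.0, mu_min=0.05, alpha=0.97, eps=1e-8):
    """Variable step-size NLMS: the step size tracks a smoothed error power,
    so adaptation is fast while the error is large and gentle near convergence."""
    w = np.zeros(n_taps)
    err_pow = 1.0
    errors = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # regressor: [x[n], x[n-1], ...]
        e = d[n] - w @ u                       # a priori error
        err_pow = alpha * err_pow + (1 - alpha) * e * e
        mu = np.clip(err_pow, mu_min, mu_max)  # larger error power -> larger step
        w += mu * e * u / (u @ u + eps)        # normalized LMS update
        errors.append(e)
    return w, np.array(errors)

# identify an unknown 8-tap FIR channel from noisy observations
rng = np.random.default_rng(0)
h = np.array([0.6, -0.3, 0.1, 0.05, 0.0, -0.05, 0.02, 0.0])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = vss_nlms(x, d)
```

Shrinking the step size as the smoothed error power falls is what buys the combination of fast initial convergence and low steady-state misadjustment that variable step-size schemes target.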

  10. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
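The sampling procedure at the heart of the construction is easy to simulate. Below is a minimal sketch (taking β = 0, i.e. no size-biasing) showing how the tail index α controls whether one normalized variable can capture a macroscopic share of the total, which is what separates the Ξ/Λ-coalescent regimes from the Kingman regime; the sample size and seed are arbitrary:

```python
import numpy as np

def pareto_weights(n, alpha, rng):
    """Draw n i.i.d. Pareto(alpha) variables and normalize by their sum
    (the sampling step described in the abstract, with no size-biasing)."""
    x = (1.0 - rng.random(n)) ** (-1.0 / alpha)   # inverse-CDF sample, x >= 1
    return x / x.sum()

rng = np.random.default_rng(1)
max_share = {a: float(pareto_weights(100_000, a, rng).max())
             for a in (0.5, 1.5, 3.0)}
# the heavier the tail (smaller alpha), the larger the biggest weight's share
print(max_share)
```

For α ≥ 2 the largest weight is negligible as N grows (Kingman regime), while for α < 1 a single weight typically holds a macroscopic fraction of the total.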

  11. Simulation of drift of pesticides: development and validation of a model.

    PubMed

    Brusselman, E; Spanoghe, P; Van der Meeren, P; Gabriels, D; Steurbaut, W

    2003-01-01

Over the last decade, drift of pesticides has been recognized as a major problem for the environment. High fractions of pesticides can be transported through the air and deposited in neighbouring ecosystems during and after application. A new two-step computer drift model has been developed: FYDRIMO, or F(ph)Ysical DRift MOdel. In the first step, the droplet size spectrum of a nozzle is analysed, giving the volume percentage of droplets of each size. In the second step, the model predicts the deposition of each droplet of a given size. This second part of the model runs in MATLAB and is grounded in two physical factors: gravity and friction forces. At this stage of development, corrections are included for evaporation and for wind force following a measured wind profile. For validation, wind tunnel experiments were performed. Salt solutions were sprayed at two wind velocities and variable distances above the floor. Small gutters in the floor filled with filter paper were used to collect the sprayed droplets. Comparison of the wind tunnel results with the model predictions indicates that FYDRIMO has good predictive capacity.
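The gravity-friction balance that drives deposition can be illustrated with the simplest special case: Stokes drag on a small spherical droplet, which yields a closed-form terminal settling velocity. This is only a sketch of one ingredient such a model needs (FYDRIMO's actual formulation, with evaporation and wind-profile corrections, is richer); the fluid properties below are standard values for water droplets in air:

```python
import math

def terminal_velocity(d, rho_p=1000.0, rho_air=1.2, mu=1.8e-5, g=9.81):
    """Stokes-law terminal velocity (m/s) of a spherical droplet of diameter
    d (m). Valid only for small droplets (Re << 1); larger droplets need a
    different drag law."""
    return (rho_p - rho_air) * g * d ** 2 / (18.0 * mu)

for d_um in (10, 50, 100):
    v = terminal_velocity(d_um * 1e-6)   # ~0.3 m/s for a 100 um droplet
    print(f"{d_um:>3} um -> {v:.4f} m/s")
```

The quadratic dependence on diameter is why the fine fraction of a nozzle's droplet spectrum stays airborne and drifts, while coarse droplets deposit quickly near the sprayer.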

  12. Correlation of USMLE Step 1 scores with performance on dermatology in-training examinations.

    PubMed

    Fening, Katherine; Vander Horst, Anthony; Zirwas, Matthew

    2011-01-01

Although the United States Medical Licensing Examination (USMLE) Step 1 was not designed to predict resident performance, scores are used to compare residency applicants. Multiple studies have demonstrated a significant correlation among Step 1 scores, in-training examination (ITE) scores, and board passage, although no such studies have been performed in dermatology. The purpose of this study is to determine whether this correlation exists in dermatology, and how much of the variability in ITE scores is a result of differences in Step 1 scores. This study also seeks to determine if it is appropriate to individualize expectations for resident ITE performance. This project received institutional review board exemption. From 5 dermatology residency programs (86 residents), we collected Step 1 and ITE scores for each of the 3 years of dermatology residency, and recorded passage/failure on boards. Bivariate Pearson correlation analysis was used to assess correlation between USMLE and ITE scores. Ordinary least squares regression was computed to determine how much USMLE scores contribute to ITE variability. USMLE and ITE score correlations were highly significant (P < .001). Correlation coefficients with USMLE were 0.467, 0.541, and 0.527 for the ITE in years 1, 2, and 3, respectively. The variability in ITE scores attributable to differences in USMLE scores was 21.8% for the first-year ITE, 29.3% for the second-year ITE, and 27.8% for the third-year ITE. This study had a relatively small sample size, with data from only 5 programs. There is a moderate correlation between USMLE and ITE scores, with USMLE scores explaining ~26% of the variability in ITE scores. Copyright © 2009 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.
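The "variability explained" figures are simply the squared correlation coefficients expressed as percentages; a quick check of the reported numbers:

```python
# The "variability explained" figures are the squared correlations (r^2),
# expressed as percentages.
r_values = {"ITE year 1": 0.467, "ITE year 2": 0.541, "ITE year 3": 0.527}
r2_percent = {year: round(100 * r * r, 1) for year, r in r_values.items()}
print(r2_percent)  # -> {'ITE year 1': 21.8, 'ITE year 2': 29.3, 'ITE year 3': 27.8}

mean_r2 = sum(r2_percent.values()) / len(r2_percent)
print(round(mean_r2, 1))  # -> 26.3, i.e. the "~26%" quoted in the abstract
```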

  13. On the primary variable switching technique for simulating unsaturated-saturated flows

    NASA Astrophysics Data System (ADS)

    Diersch, H.-J. G.; Perrochet, P.

Primary variable switching appears to be a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass-balance accuracy, the mixed form, with its improved conservative properties, can have convergence difficulties for dry initial conditions. Variable switching, on the other hand, can overcome most of these numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. The two schemes behave differently with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches, the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier) for which comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure: TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.

  14. Correlation of Metabolic Variables with the Number of ORFs in Human Pathogenic and Phylogenetically Related Non- or Less-Pathogenic Bacteria.

    PubMed

    Brambila-Tapia, Aniel Jessica Leticia; Poot-Hernández, Augusto Cesar; Garcia-Guevara, Jose Fernando; Rodríguez-Vázquez, Katya

    2016-06-01

To date, only a few studies have correlated metabolic variables in bacteria, and specific correlations with these variables have not been reported. In this work, we included 36 human pathogenic bacteria and 18 non- or less-pathogenic related bacteria and obtained all metabolic variables, including enzymes, metabolic pathways, enzymatic steps and specific metabolic pathways, and enzymatic steps of particular metabolic processes, from a reliable metabolic database (KEGG). We then correlated the number of open reading frames (ORFs) with these variables and with their proportions, and observed a negative correlation with the proportion of enzymes (r = -0.506, p < 0.0001), metabolic pathways (r = -0.871, p < 0.0001), and enzymatic reactions (r = -0.749, p < 0.0001), and with the proportions of central metabolism variables, as well as a positive correlation with the proportions of multistep reactions (r = 0.650, p < 0.0001) and secondary metabolism variables. The proportion of multifunctional reactions (r = -0.114, p = 0.41) and the proportion of enzymatic steps (r = -0.205, p = 0.14) did not present a significant correlation. These correlations indicate that as the size of a genome (measured in the number of ORFs) increases, the proportion of genes that encode enzymes significantly diminishes (especially those related to central metabolism), suggesting that once the essential metabolic pathways are complete, an increase in the number of ORFs does not require a proportional increase in metabolic pathways and enzymes; only a slight increase is sufficient to cope with a large genome.

  15. Kepler

    NASA Technical Reports Server (NTRS)

    Howell, Steve B.

    2011-01-01

The NASA Kepler mission recently announced over 1200 exoplanet candidates. While some are common Hot Jupiters, a large number are Neptune size and smaller; transit depths suggest sizes down to the radius of Earth. The Kepler project has fairly high confidence that most of these candidates are real exoplanets. Many analysis steps and lessons learned from Kepler light curves are used during the vetting process. This talk will cover some new results in the areas of stellar variability, solar systems with multiple planets, and how transit-like signatures are vetted for false positives, especially those indicative of small planets.
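The connection between transit depth and planet size rests on a single relation: for a dark planet crossing a uniform stellar disk, the fractional dimming is roughly (Rp/R*)². A minimal sketch (stellar and planetary radii are standard reference values; limb darkening and grazing geometries are ignored):

```python
import math

R_SUN_KM = 696_000.0    # nominal solar radius
R_EARTH_KM = 6_371.0    # mean Earth radius

def planet_radius_earths(depth_ppm, r_star_suns=1.0):
    """Planet radius in Earth radii from transit depth in ppm,
    using depth ~ (Rp / R*)^2 for a central, unblended transit."""
    rp_km = math.sqrt(depth_ppm * 1e-6) * r_star_suns * R_SUN_KM
    return rp_km / R_EARTH_KM

# an Earth-size planet crossing a Sun-like star dims it by only ~84 ppm,
# which is why Earth-size candidates demand Kepler's photometric precision
depth_ppm = (R_EARTH_KM / R_SUN_KM) ** 2 * 1e6
print(round(depth_ppm, 1))                        # -> 83.8
print(round(planet_radius_earths(depth_ppm), 2))  # -> 1.0
```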

  16. Demodulation algorithm for optical fiber F-P sensor.

    PubMed

    Yang, Huadong; Tong, Xinglin; Cui, Zhang; Deng, Chengwei; Guo, Qian; Hu, Pan

    2017-09-10

The demodulation algorithm is critical to improving the measurement accuracy of a sensing system. In this paper, the variable step-size hill-climbing search method is applied for the first time to the optical fiber Fabry-Perot (F-P) sensing demodulation algorithm. Compared with the traditional discrete gap transformation demodulation algorithm, the computation is greatly reduced by changing the step size of each climb, which achieves nano-scale resolution, high measurement accuracy, high demodulation rates, and a large dynamic demodulation range. An optical fiber F-P pressure sensor based on a micro-electro-mechanical system (MEMS) was fabricated to carry out the experiment. The results show that the resolution of the algorithm reaches the nano-scale level and that the sensor's sensitivity is about 2.5 nm/kPa, close to the theoretical value, with good reproducibility.
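The principle of a variable step-size hill climb is simple: march toward the optimum with a coarse step, and each time the objective stops improving, reverse direction and halve the step. The sketch below searches a toy single-peak objective standing in for the cavity-length estimate; it illustrates the general technique, not the authors' F-P demodulation code, and the peak location and step sizes are invented:

```python
def hill_climb(f, x0, step, step_min=1e-9):
    """Variable step-size hill climbing: walk uphill with the current step;
    on overshoot, reverse direction and halve the step until step_min."""
    x, direction = x0, 1.0
    while step > step_min:
        x_next = x + direction * step
        if f(x_next) > f(x):
            x = x_next                 # keep climbing in this direction
        else:
            direction = -direction     # overshot the peak: turn around...
            step *= 0.5                # ...and refine with a smaller step
    return x

# toy objective: a single peak at cavity length 12.345 um (values in metres)
peak = 12.345e-6
x_hat = hill_climb(lambda x: -(x - peak) ** 2, x0=10e-6, step=1e-6)
print(f"residual error: {abs(x_hat - peak):.2e} m")
```

Because the step only shrinks near the peak, far fewer objective evaluations are needed than with a fixed fine step over the whole search range, which is the computational saving the abstract describes.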

  17. A film-rupture model of hydrogen-induced, slow crack growth in alpha-beta titanium

    NASA Technical Reports Server (NTRS)

    Nelson, H. G.

    1975-01-01

The appearance of the terrace-like fracture morphology of gaseous-hydrogen-induced crack growth in acicular alpha-beta titanium alloys is discussed as a function of specimen configuration, magnitude of applied stress intensity, test temperature, and hydrogen pressure. Although the overall appearance of the terrace structure remained essentially unchanged, a distinguishable variation is found in the size of the individual terrace steps, and step size is found to be inversely dependent upon the rate of hydrogen-induced slow crack growth. Additionally, this inverse relationship is independent of all the variables investigated. These observations are quantitatively discussed in terms of the formation and growth of a thin hydride film along the alpha-beta boundaries, and a qualitative model for hydrogen-induced slow crack growth is presented, based on the film-rupture model of stress corrosion cracking.

  18. SIVA/DIVA- INITIAL VALUE ORDINARY DIFFERENTIAL EQUATION SOLUTION VIA A VARIABLE ORDER ADAMS METHOD

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.

    1994-01-01

    The SIVA/DIVA package is a collection of subroutines for the solution of ordinary differential equations. There are versions for single precision and double precision arithmetic. These solutions are applicable to stiff or nonstiff differential equations of first or second order. SIVA/DIVA requires fewer evaluations of derivatives than other variable order Adams predictor-corrector methods. There is an option for the direct integration of second order equations which can make integration of trajectory problems significantly more efficient. Other capabilities of SIVA/DIVA include: monitoring a user supplied function which can be separate from the derivative; dynamically controlling the step size; displaying or not displaying output at initial, final, and step size change points; saving the estimated local error; and reverse communication where subroutines return to the user for output or computation of derivatives instead of automatically performing calculations. The user must supply SIVA/DIVA with: 1) the number of equations; 2) initial values for the dependent and independent variables, integration stepsize, error tolerance, etc.; and 3) the driver program and operational parameters necessary for subroutine execution. SIVA/DIVA contains an extensive diagnostic message library should errors occur during execution. SIVA/DIVA is written in FORTRAN 77 for batch execution and is machine independent. It has a central memory requirement of approximately 120K of 8 bit bytes. This program was developed in 1983 and last updated in 1987.
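Dynamic step-size control of the kind SIVA/DIVA performs can be illustrated with a much simpler integrator. The sketch below uses step doubling with classical Runge-Kutta rather than a variable-order Adams predictor-corrector, so it shows only the accept/reject-and-resize logic, not the package's actual method; the tolerance and growth factors are arbitrary choices:

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step (stand-in for an Adams corrector)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t_end, h=0.1, tol=1e-8):
    """Dynamic step-size control: estimate the local error by comparing one
    full step with two half steps, then accept/grow or reject/shrink h."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)              # do not step past the endpoint
        y_full = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(y_half - y_full)         # local error estimate
        if err < tol:
            t, y = t + h, y_half           # accept the two-half-step result
            if err < tol / 10:
                h *= 1.5                   # error comfortably small: grow h
        else:
            h *= 0.5                       # reject the step and retry
    return y

# y' = y, y(0) = 1  =>  y(1) = e
y1 = integrate(lambda t, y: y, 0.0, 1.0, 1.0)
print(abs(y1 - math.e))
```

A production Adams code like SIVA/DIVA additionally varies the method order and reuses stored derivative history, which is what lets it get by with fewer derivative evaluations per step than this one-step sketch.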

  19. Targeting high value metals in lithium-ion battery recycling via shredding and size-based separation.

    PubMed

    Wang, Xue; Gaustad, Gabrielle; Babbitt, Callie W

    2016-05-01

Development of lithium-ion battery recycling systems is a current focus of much research; however, significant research remains to optimize the process. One key area not studied is the utilization of mechanical pre-recycling steps to improve overall yield. This work proposes a pre-recycling process, including mechanical shredding and size-based sorting steps, with the goal of potential future scale-up to the industrial level. This pre-recycling process aims to achieve material segregation with a focus on the metallic portion and provide clear targets for subsequent recycling processes. The results show that contained metallic materials can be segregated into different size fractions at different levels. For example, for lithium cobalt oxide batteries, cobalt content has been improved from 35% by weight in the metallic portion before this pre-recycling process to 82% in the ultrafine (<0.5 mm) fraction and to 68% in the fine (0.5-1 mm) fraction, and been excluded in the larger pieces (>6 mm). However, size fractions across multiple battery chemistries showed significant variability in material concentration. This finding indicates that sorting by cathode before pre-treatment could reduce the uncertainty of input materials and therefore improve the purity of output streams. Thus, battery labeling systems may be an important step towards implementation of any pre-recycling process. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Space-Time Joint Interference Cancellation Using Fuzzy-Inference-Based Adaptive Filtering Techniques in Frequency-Selective Multipath Channels

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng

    2006-12-01

An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform a nonlinear mapping of the squared error and the squared error variation into a forgetting factor. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size. This receiver provides both fast convergence/tracking capability and small steady-state misadjustment compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB, respectively, in bit-error rate (BER) for multipath fading channels.

  1. Kinematic, Muscular, and Metabolic Responses During Exoskeletal-, Elliptical-, or Therapist-Assisted Stepping in People With Incomplete Spinal Cord Injury

    PubMed Central

    Kinnaird, Catherine R.; Holleran, Carey L.; Rafferty, Miriam R.; Rodriguez, Kelly S.; Cain, Julie B.

    2012-01-01

    Background Robotic-assisted locomotor training has demonstrated some efficacy in individuals with neurological injury and is slowly gaining clinical acceptance. Both exoskeletal devices, which control individual joint movements, and elliptical devices, which control endpoint trajectories, have been utilized with specific patient populations and are available commercially. No studies have directly compared training efficacy or patient performance during stepping between devices. Objective The purpose of this study was to evaluate kinematic, electromyographic (EMG), and metabolic responses during elliptical- and exoskeletal-assisted stepping in individuals with incomplete spinal cord injury (SCI) compared with therapist-assisted stepping. Design A prospective, cross-sectional, repeated-measures design was used. Methods Participants with incomplete SCI (n=11) performed 3 separate bouts of exoskeletal-, elliptical-, or therapist-assisted stepping. Unilateral hip and knee sagittal-plane kinematics, lower-limb EMG recordings, and oxygen consumption were compared across stepping conditions and with control participants (n=10) during treadmill stepping. Results Exoskeletal stepping kinematics closely approximated normal gait patterns, whereas significantly greater hip and knee flexion postures were observed during elliptical-assisted stepping. Measures of kinematic variability indicated consistent patterns in control participants and during exoskeletal-assisted stepping, whereas therapist- and elliptical-assisted stepping kinematics were more variable. Despite specific differences, EMG patterns generally were similar across stepping conditions in the participants with SCI. In contrast, oxygen consumption was consistently greater during therapist-assisted stepping. Limitations Limitations included a small sample size, lack of ability to evaluate kinetics during stepping, unilateral EMG recordings, and sagittal-plane kinematics. 
Conclusions Despite specific differences in kinematics and EMG activity, metabolic activity was similar during stepping in each robotic device. Understanding potential differences and similarities in stepping performance with robotic assistance may be important in delivery of repeated locomotor training using robotic or therapist assistance and for consumers of robotic devices. PMID:22700537

  2. Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6

    NASA Astrophysics Data System (ADS)

    Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.

    2017-01-01

This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ~100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ~10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales.
Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.

  3. Preparation of metallic nanoparticles by irradiation in starch aqueous solution

    NASA Astrophysics Data System (ADS)

Nemţanu, Monica R.; Braşoveanu, Mirela; Iacob, Nicuşor

    2014-11-01

Colloidal silver nanoparticles (AgNPs) were synthesized in a single step by electron beam irradiation reduction of silver ions in an aqueous solution containing starch. The nanoparticles were characterized by spectrophotocolorimetry and compared with those obtained by the chemical (thermal) reduction method. The results showed that smaller AgNPs were prepared with higher yields as the irradiation dose increased. The particle size distribution broadened with increasing irradiation dose and dose rate. Chromatic parameters such as b* (yellow-blue coordinate), C* (chroma) and ΔEab (total color difference) could characterize the nanoparticles with respect to their concentration. The hue angle h° was correlated with the particle size distribution. Experimental data for the irradiated samples were also subjected to factor analysis using principal component extraction and varimax rotation in order to reveal the relations between dependent and independent variables and to reduce their number. The radiation-based method provided silver nanoparticles with higher concentration and narrower size distribution than those produced by the chemical reduction method. Therefore, electron beam irradiation is effective for the preparation of silver nanoparticles using starch aqueous solution as the dispersion medium.

  4. Variability in syringe components and its impact on functionality of delivery systems.

    PubMed

    Rathore, Nitin; Pranay, Pratik; Eu, Bruce; Ji, Wenchang; Walls, Ed

    2011-01-01

    Prefilled syringes and autoinjectors are becoming increasingly common for parenteral drug administration primarily due to the convenience they offer to the patients. Successful commercialization of such delivery systems requires thorough characterization of individual components. Complete understanding of various sources of variability and their ranking is essential for robust device design. In this work, we studied the impact of variability in various primary container and device components on the delivery forces associated with syringe injection. More specifically, the effects of barrel size, needle size, autoinjector spring force, and frictional forces have been evaluated. An analytical model based on underlying physics is developed that can be used to fully characterize the design space for a product delivery system. Use of prefilled syringes (syringes prefilled with active drug) is becoming increasingly common for injectable drugs. Compared to vials, prefilled syringes offer higher dose accuracy and ease of use due to fewer steps required for dosage. Convenience to end users can be further enhanced through the use of prefilled syringes in combination with delivery devices such as autoinjectors. These devices allow patients to self-administer the drug by following simple steps such as pressing a button. These autoinjectors are often spring-loaded and are designed to keep the needle tip shielded prior to injection. Because the needle is not visible to the user, such autoinjectors are perceived to be less invasive than syringes and help the patient overcome the hesitation associated with self-administration. In order to successfully develop and market such delivery devices, we need to perform an in-depth analysis of the components that come into play during the activation of the device and dose delivery. 
Typically, an autoinjector is activated by the press of a button that releases a compressed spring; the spring relaxes and provides the driving force to push the drug out of the syringe and into the site of administration. Complete understanding of the spring force, syringe barrel dimensions, needle size, and drug product properties is essential for robust device design. It is equally important to estimate the extent of variability that exists in these components and the resulting impact it could have on the performance of the device. In this work, we studied the impact of variability in syringe and device components on the delivery forces associated with syringe injection. More specifically, the effect of barrel size, needle size, autoinjector spring force, and frictional forces has been evaluated. An analytical model based on underlying physics is developed that can be used to predict the functionality of the autoinjector.
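An analytical model of the kind described reduces, in its simplest form, to a force balance: the spring must overcome the viscous pressure drop through the needle (Hagen-Poiseuille, for laminar flow of a Newtonian fluid) acting over the barrel cross-section, plus plunger friction. The sketch below is that minimal special case, not the authors' model; the dimensions, viscosity, flow rate, and constant friction term are hypothetical illustration values:

```python
import math

def plunger_force(flow_ml_per_s, viscosity_pa_s, needle_len_mm, needle_id_mm,
                  barrel_id_mm, friction_n=2.0):
    """Steady plunger force (N): Hagen-Poiseuille pressure drop through the
    needle times the barrel cross-section, plus a constant glide friction."""
    q = flow_ml_per_s * 1e-6                 # volumetric flow, m^3/s
    r = needle_id_mm / 2 * 1e-3              # needle bore radius, m
    length = needle_len_mm * 1e-3            # needle length, m
    dp = 8 * viscosity_pa_s * length * q / (math.pi * r ** 4)
    a_barrel = math.pi * (barrel_id_mm / 2 * 1e-3) ** 2
    return dp * a_barrel + friction_n

# thinner needles raise the force steeply (dp ~ 1/r^4): two illustrative bores
f_thin = plunger_force(0.1, 0.01, 12.7, 0.21, 6.35)   # roughly 10 N
f_wide = plunger_force(0.1, 0.01, 12.7, 0.26, 6.35)   # roughly 6 N
print(round(f_thin, 2), round(f_wide, 2))
```

The 1/r⁴ dependence is why needle inner diameter dominates the variability analysis: small manufacturing differences in bore translate into large differences in delivery force, and hence in the spring margin the device design must carry.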

  5. Branching random walk with step size coming from a power law

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ayan; Subhra Hazra, Rajat; Roy, Parthanil

    2015-09-01

In their seminal work, Brunet and Derrida made predictions on the random point configurations associated with branching random walks. We shall discuss the limiting behavior of such point configurations when the displacement random variables come from a power law. In particular, we establish that two of their predictions remain valid in this setup, and we investigate various other issues mentioned in their paper.

  6. Improved silicon carbide for advanced heat engines. I - Process development for injection molding

    NASA Technical Reports Server (NTRS)

    Whalen, Thomas J.; Trela, Walter

    1989-01-01

Alternate processing methods have been investigated as a means of improving the mechanical properties of injection-molded SiC. Various mixing processes (dry, high-shear, and fluid) were evaluated along with the morphology and particle size of the starting beta-SiC powder. Statistically designed experiments were used to determine significant effects and interactions of variables in the mixing, injection molding, and binder removal process steps. Improvements in mechanical strength can be correlated with the reduction in flaw size observed in the injection-molded green bodies obtained with improved processing methods.

  7. Formulation and optimization by experimental design of eco-friendly emulsions based on d-limonene.

    PubMed

    Pérez-Mosqueda, Luis M; Trujillo-Cayado, Luis A; Carrillo, Francisco; Ramírez, Pablo; Muñoz, José

    2015-04-01

d-Limonene is a naturally occurring solvent that can replace more polluting chemicals in agrochemical formulations. In the present work, a comprehensive study of the influence of the dispersed-phase mass fraction, ϕ, and of the surfactant/oil ratio, R, on the emulsion stability and droplet size distribution of d-limonene-in-water emulsions stabilized by a non-ionic triblock copolymer surfactant has been carried out. A full factorial 3^2 experimental design was conducted in order to optimize the emulsion formulation. The independent variables ϕ and R were studied in the ranges 10-50 wt% and 0.02-0.1, respectively. The emulsions studied were mainly destabilized by both creaming and Ostwald ripening. Therefore, initial droplet size and an overall destabilization parameter, the so-called turbiscan stability index, were used as dependent variables. The optimal formulation, combining minimum droplet size and maximum stability, was achieved at ϕ = 50 wt% and R = 0.062. Furthermore, the surface response methodology allowed us to obtain a formulation yielding sub-micron emulsions by using a single-step rotor/stator homogenizer process instead of the more commonly used two-step emulsification methods. In addition, the optimal formulation was further improved against Ostwald ripening by adding silicone oil to the dispersed phase. The combination of these experimental findings allowed us to gain a deeper insight into the stability of these emulsions, which can be applied to the rational development of new formulations with potential application in agrochemicals. Copyright © 2015 Elsevier B.V. All rights reserved.
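A 3² full factorial design simply runs every combination of the two factors at three levels each. A minimal sketch (the three levels per factor are illustrative endpoints and midpoints of the stated ranges; the paper's actual level values are not given in the abstract):

```python
from itertools import product

# two factors, three levels each -> 3**2 = 9 runs
phi_levels = (10, 30, 50)        # dispersed-phase mass fraction, wt%
r_levels = (0.02, 0.06, 0.10)    # surfactant/oil ratio R

design = list(product(phi_levels, r_levels))
print(len(design))               # -> 9
for phi, r in design:
    print(f"run: phi = {phi:>2} wt%, R = {r:.2f}")
```

Each of the nine runs is then scored on the two responses (initial droplet size and the turbiscan stability index), and a response surface fitted over the design locates the optimum reported in the abstract.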

  8. Enhanced production of lovastatin by Omphalotus olearius (DC.) Singer in solid state fermentation.

    PubMed

    Atlı, Burcu; Yamaç, Mustafa; Yıldız, Zeki; Isikhuemnen, Omoanghe S

    2015-01-01

Although lovastatin production has been reported for different microorganism species, there is limited information about lovastatin production by basidiomycetes. The optimization of culture parameters enhancing lovastatin production by Omphalotus olearius OBCC 2002 was investigated using statistically based experimental designs under solid state fermentation. A Plackett-Burman design was used in the first step to test the relative importance of the variables affecting lovastatin production; the amount and particle size of barley were identified as the significant variables. In the second step, the interactive effects of the selected variables were studied with a full factorial design. A maximum lovastatin yield of 139.47 mg/g substrate was achieved by the fermentation of 5 g of barley, 1-2 mm particle diameter, at 28°C. This study showed that O. olearius OBCC 2002 has a high capacity for lovastatin production, which could be enhanced by using solid state fermentation with novel and cost-effective substrates such as barley. Copyright © 2013 Revista Iberoamericana de Micología. Published by Elsevier Espana. All rights reserved.

  9. Discriminant function sexing of fragmentary and complete femora: standards for contemporary Croatia.

    PubMed

    Slaus, Mario; Strinović, Davor; Skavić, Josip; Petrovecki, Vedrana

    2003-05-01

Determining sex is one of the first and most important steps in identifying decomposed corpses or skeletal remains. Previous studies have demonstrated that populations differ from each other in size and proportion and that these differences can affect metric assessment of sex. This paper establishes standards for determining sex from fragmentary and complete femora in a modern Croatian population. The sample is composed of 195 femora (104 male and 91 female) from positively identified victims of the 1991 War in Croatia. Six discriminant functions were generated: one using seven variables, three using two variables, and two employing one variable. Results show that complete femora can be sexed with 94.4% accuracy. The same overall accuracy, with slight differences in male/female accuracy, was achieved using a combination of two variables defining the epiphyses, and with the variable maximum diameter of the femoral head.
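Discriminant function sexing of this kind amounts to projecting measurements onto a weight vector and comparing the result against a sectioning point. The sketch below fits Fisher's linear discriminant to two synthetic measurement variables; the group means, spreads, and resulting accuracy are invented for illustration and are not the published Croatian standards:

```python
import numpy as np

def fisher_lda(group_a, group_b):
    """Fisher's linear discriminant for two groups: weight vector
    w = Sw^-1 (mean_a - mean_b), cutoff midway between projected group means."""
    mu_a, mu_b = group_a.mean(axis=0), group_b.mean(axis=0)
    sw = np.cov(group_a, rowvar=False) + np.cov(group_b, rowvar=False)
    w = np.linalg.solve(sw, mu_a - mu_b)
    cut = 0.5 * (group_a @ w).mean() + 0.5 * (group_b @ w).mean()
    return w, cut

# synthetic "femoral head diameter" and "epicondylar breadth" in mm
rng = np.random.default_rng(0)
males = rng.normal([48.0, 83.0], [2.2, 3.5], size=(100, 2))
females = rng.normal([42.0, 74.0], [2.0, 3.2], size=(90, 2))
w, cut = fisher_lda(males, females)
acc = ((males @ w > cut).mean() + (females @ w < cut).mean()) / 2
print(round(acc, 3))   # high sexing accuracy on the synthetic data
```

Population-specific standards matter because both the weight vector and the sectioning point shift with the group means and variances, which is exactly the size-and-proportion effect the abstract describes.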

  10. Continuous-variable measurement-device-independent quantum key distribution: Composable security against coherent attacks

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano

    2018-05-01

    We present a rigorous security analysis of continuous-variable measurement-device-independent quantum key distribution (CV MDI QKD) in a finite-size scenario. The security proof is obtained in two steps: by first assessing the security against collective Gaussian attacks, and then extending to the most general class of coherent attacks via the Gaussian de Finetti reduction. Our result combines recent state-of-the-art security proofs for CV QKD with findings about min-entropy calculus and parameter estimation. In doing so, we improve the finite-size estimate of the secret key rate. Our conclusions confirm that CV MDI protocols allow for high rates on the metropolitan scale, and may achieve a nonzero secret key rate against the most general class of coherent attacks after 10^7-10^9 quantum signal transmissions, depending on loss and noise, and on the required level of security.

  11. Variable-Size Bead Layer as Standard Reference for Endothelial Microscopes.

    PubMed

    Tufo, Simona; Prazzoli, Erica; Ferraro, Lorenzo; Cozza, Federica; Borghesi, Alessandro; Tavazzi, Silvia

    2017-02-01

    For morphometric analysis of the cell mosaic of corneal endothelium, checking accuracy and precision of instrumentation is a key step. In this study, a standard reference sample is proposed, developed to reproduce the cornea with its shape and the endothelium with its intrinsic variability in the cell size. A polystyrene bead layer (representing the endothelium) was deposited on a lens (representing the cornea). Bead diameters were 20, 25, and 30 μm (fractions in number 55%, 30%, and 15%, respectively). Bead density and hexagonality were simulated to obtain the expected true values and measured using a slit-lamp endothelial microscope applied to 1) a Takagi 700GL slit lamp at 40× magnification (recommended standard setup) and 2) a Takagi 2ZL slit lamp at 25× magnification. The simulation provided the expected bead density 2001 mm^-2 and hexagonality 47%. At 40×, density and hexagonality were measured to be 2009 mm^-2 (SD 93 mm^-2) and 45% (SD 3%). At 25× on a different slit lamp, the comparison between measured and expected densities provided the factor 1.526 to resize the image and to use the current algorithms of the slit-lamp endothelial microscope for cell recognition. A variable-size polystyrene bead layer on a lens is proposed as a standard sample mimicking the real shape of the cornea and the variability of cell size and cell arrangement of corneal endothelium. The sample is suggested to evaluate accuracy and precision of cell density and hexagonality obtained by different endothelial microscopes, including a slit-lamp endothelial microscope applied to different slit lamps, also at different magnifications.

  12. Preparation of metallic nanoparticles by irradiation in starch aqueous solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nemţanu, Monica R., E-mail: monica.nemtanu@inflpr.ro; Braşoveanu, Mirela; Iacob, Nicuşor

    Colloidal silver nanoparticles (AgNPs) were synthesized in a single step by electron beam irradiation reduction of silver ions in aqueous solution containing starch. The nanoparticles were characterized by spectrophotocolorimetry and compared with those obtained by the chemical (thermal) reduction method. The results showed that smaller AgNPs were prepared with higher yields as the irradiation dose increased. A broadening of the particle size distribution occurred with increasing irradiation dose and dose rate. Chromatic parameters such as b* (yellow-blue coordinate), C* (chroma) and ΔE*ab (total color difference) could characterize the nanoparticles with respect to their concentration. The hue angle h° was correlated to the particle size distribution. Experimental data of the irradiated samples were also subjected to factor analysis using principal component extraction and varimax rotation in order to reveal the relation between dependent and independent variables and to reduce their number. The radiation-based method provided silver nanoparticles with higher concentration and narrower size distribution than the chemical reduction method. Therefore, electron beam irradiation is effective for the preparation of silver nanoparticles using starch aqueous solution as the dispersion medium.

  13. Influence of an irregular surface and low light on the step variability of patients with peripheral neuropathy during level gait.

    PubMed

    Thies, Sibylle B; Richardson, James K; Demott, Trina; Ashton-Miller, James A

    2005-08-01

    Patients with peripheral neuropathy (PN) report greater difficulty walking on irregular surfaces with low light (IL) than on flat surfaces with regular lighting (FR). We tested the primary hypothesis that older PN patients would demonstrate greater step width and step width variability under IL conditions than under FR conditions. Forty-two subjects (22 male, 20 female: mean ± S.D.: 64.7 ± 9.8 years) with PN underwent history, physical examination, and electrodiagnostic testing. Subjects were asked to walk 10 m at a comfortable speed while kinematic and force data were measured at 100 Hz using optoelectronic markers and foot switches. Ten trials were conducted under both IL and FR conditions. Step width, time, length, and speed were calculated with a MATLAB algorithm, with the standard deviation serving as the measure of variability. The results showed that under IL, as compared to FR, conditions subjects demonstrated greater step width (197.1 ± 40.8 mm versus 180.5 ± 32.4 mm; P < 0.001) and step width variability (40.4 ± 9.0 mm versus 34.5 ± 8.4 mm; P < 0.001), step time and its variability (P < 0.001 and P = 0.003, respectively), and step length variability (P < 0.001). Average step length and gait speed decreased under IL conditions (P < 0.001 for both). Step width variability and step time variability correlated best under IL conditions with a clinical measure of PN severity and fall history, respectively. We conclude that IL conditions cause PN patients to increase the variability of their step width and other gait parameters.
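The variability measure used here is simply the standard deviation of a per-step series. A minimal Python sketch (the step-width values below are invented, not the study's data):

```python
import statistics

def gait_variability(values):
    """Mean and SD (the variability measure used in the study) of a
    per-step series such as step width, step time, or step length."""
    return statistics.mean(values), statistics.stdev(values)

# hypothetical step widths (mm) for one trial on each surface condition
widths_flat = [178, 185, 174, 190, 181, 176, 188, 180]
widths_irregular = [165, 210, 188, 232, 171, 205, 158, 224]

mean_f, sd_f = gait_variability(widths_flat)
mean_i, sd_i = gait_variability(widths_irregular)
# the irregular-surface trial shows both a larger mean and a larger SD
```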

  14. The role of particle jamming on the formation and stability of step-pool morphology: insight from a reduced-complexity model

    NASA Astrophysics Data System (ADS)

    Saletti, M.; Molnar, P.; Hassan, M. A.

    2017-12-01

    Granular processes have been recognized as key drivers in earth surface dynamics, especially in steep landscapes because of the large size of sediment found in channels. In this work we focus on step-pool morphologies, studying the effect of particle jamming on step formation. Starting from the jammed-state hypothesis, we assume that grains generate steps because of particle jamming and that those steps are inherently more stable because of additional force chains in the transversal direction. We test this hypothesis with a particle-based reduced-complexity model, CAST2, where sediment is organized in patches and entrainment, transport and deposition of grains depend on flow stage and local topography through simplified phenomenological rules. The model operates with two grain sizes: fine grains, which can be mobilized by both large and moderate flows, and coarse grains, mobile only during large floods. First, we identify the minimum set of processes necessary to generate and maintain steps in a numerical channel: (a) occurrence of floods, (b) particle jamming, (c) low sediment supply, and (d) presence of sediment with different entrainment probabilities. Numerical results are compared with field observations collected in different step-pool channels in terms of step density, a variable that captures the proportion of the channel occupied by steps. Not only do the longitudinal profiles of numerical channels display step sequences similar to those observed in real step-pool streams, but the values of step density are also very similar when all the processes mentioned above are considered. Moreover, with CAST2 it is possible to run long simulations with repeated flood events, to test the effect of flood frequency on step formation. Numerical results indicate that larger step densities belong to systems more frequently perturbed by floods, compared to systems with a lower flood frequency. 
Our results highlight the important interactions between external hydrological forcing and internal geomorphic adjustment (e.g. jamming) in the response of step-pool streams, showing the potential of reduced-complexity models in fluvial geomorphology.

  15. The role of environmental variables in structuring landscape-scale species distributions in seafloor habitats.

    PubMed

    Kraan, Casper; Aarts, Geert; Van der Meer, Jaap; Piersma, Theunis

    2010-06-01

    Ongoing statistical sophistication allows a shift from describing species' spatial distributions toward statistically disentangling the possible roles of environmental variables in shaping species distributions. Based on a landscape-scale benthic survey in the Dutch Wadden Sea, we show the merits of spatially explicit generalized estimating equations (GEE). The intertidal macrozoobenthic species, Macoma balthica, Cerastoderma edule, Marenzelleria viridis, Scoloplos armiger, Corophium volutator, and Urothoe poseidonis served as test cases, with median grain-size and inundation time as typical environmental explanatory variables. GEEs outperformed spatially naive generalized linear models (GLMs), and removed much residual spatial structure, indicating the importance of median grain-size and inundation time in shaping landscape-scale species distributions in the intertidal. GEE regression coefficients were smaller than those attained with GLM, and GEE standard errors were larger. The best fitting GEE for each species was used to predict species' density in relation to median grain-size and inundation time. Although no drastic changes were noted compared to previous work that described habitat suitability for benthic fauna in the Wadden Sea, our predictions provided more detailed and unbiased estimates of the determinants of species-environment relationships. We conclude that spatial GEEs offer the methodological advances necessary to take further steps toward linking pattern to process.

  16. Temporal variability and memory in sediment transport in an experimental step-pool channel

    NASA Astrophysics Data System (ADS)

    Saletti, Matteo; Molnar, Peter; Zimmermann, André; Hassan, Marwan A.; Church, Michael

    2015-11-01

    We study the temporal dynamics of sediment transport in steep channels using two experiments performed in a steep flume (8% slope) with natural sediment composed of 12 grain sizes. High-resolution (1 s) time series of sediment transport were measured for individual grain-size classes at the outlet of the flume for different combinations of sediment input rates and flow discharges. Our aim in this paper is to quantify (a) the relation of discharge and sediment transport and (b) the nature and strength of memory in grain-size-dependent transport. None of the simple statistical descriptors of sediment transport (mean, extreme values, and quantiles) displays a clear relation with water discharge; instead, a large variability between discharge and sediment transport is observed. Instantaneous transport rates have probability density functions with heavy tails. Bed load bursts have a coarser grain-size distribution than that of the entire experiment. We quantify the strength and nature of memory in sediment transport rates by estimating the Hurst exponent and the autocorrelation coefficient of the time series for different grain sizes. Our results show the presence of the Hurst phenomenon in transport rates, indicating long-term memory which is grain-size dependent. The short-term memory in coarse grain transport increases with temporal aggregation and this reveals the importance of the sampling duration of bed load transport rates in natural streams, especially for large fractions.
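The Hurst exponent mentioned above is commonly estimated by rescaled-range (R/S) analysis. The following Python sketch is a simplified illustration (not the authors' analysis code): it averages R/S over non-overlapping windows of doubling size and fits the log-log slope. Uncorrelated noise should give an exponent near 0.5; persistent (long-memory) series give values above 0.5.

```python
import math
import random
random.seed(0)

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis:
    slope of log(R/S) against log(window size). Simplified sketch."""
    n = len(series)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            m = sum(chunk) / size
            dev = [x - m for x in chunk]
            cum, c = [], 0.0          # cumulative deviate series
            for d in dev:
                c += d
                cum.append(c)
            r = max(cum) - min(cum)   # range of cumulative deviations
            s = math.sqrt(sum(d * d for d in dev) / size)
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(sum(rs) / len(rs))
        size *= 2
    xs = [math.log(sz) for sz in sizes]
    ys = [math.log(v) for v in rs_vals]
    xm, ym = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) / \
           sum((x - xm) ** 2 for x in xs)

white = [random.gauss(0, 1) for _ in range(4096)]
h = hurst_rs(white)   # near 0.5 (small-sample bias pushes it somewhat higher)
```

Production analyses would correct the known small-sample bias of R/S (e.g. the Anis-Lloyd expectation) before interpreting the exponent.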

  17. A two step method to treat variable winds in fallout smearing codes. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, A.T.

    1982-03-01

    A method was developed to treat non-constant winds in fallout smearing codes. The method consists of two steps: (1) location of the curved hotline, and (2) determination of the off-hotline activity. To locate the curved hotline, the method begins with an initial cloud of 20 discretely sized pancake clouds, located at altitudes determined by weapon yield. Next, the particles are tracked through a 300-layer atmosphere, translating with different winds in each layer. The connection of the 20 particles' impact points is the fallout hotline. The hotline location was found to be independent of the assumed particle size distribution in the stabilized cloud. The off-hotline activity distribution is represented as a two-dimensional Gaussian function, centered on the curved hotline. Hotline locator model results were compared to numerical calculations for a hypothetical 100 kt burst and to the actual hotline produced by the Castle Bravo 15 Mt nuclear test.
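The hotline-locator step can be illustrated with a toy advection calculation. The Python sketch below is a hypothetical illustration, not the thesis code: each particle settles at a constant fall speed from its stabilization altitude and drifts with the wind of whichever layer it is in; connecting the impact points of particles with different sizes (different fall speeds and start altitudes, all numbers invented) traces a curved hotline.

```python
def impact_point(fall_speed, start_alt, layers):
    """Advect one settling particle through stacked wind layers.
    layers: (base_m, top_m, wind_u, wind_v). The particle falls at a
    constant fall_speed (m/s) from start_alt to the ground."""
    x = y = 0.0
    for base, top, u, v in layers:
        overlap = max(0.0, min(top, start_alt) - base)  # metres of fall in layer
        t = overlap / fall_speed
        x += u * t
        y += v * t
    return x, y

# hypothetical two-layer atmosphere with veering wind
layers = [(0, 5000, 0.0, 10.0), (5000, 10000, 10.0, 0.0)]
# bigger particles fall faster and stabilize lower (invented values)
particles = [(10.0, 6000.0), (5.0, 8000.0), (2.0, 10000.0)]
hotline = [impact_point(v, alt, layers) for v, alt in particles]
```

Because the particles spend different fractions of their fall in each wind layer, the impact points are not collinear, which is exactly why a straight-hotline assumption fails under variable winds.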

  18. One size fits all electronics for insole-based activity monitoring.

    PubMed

    Hegde, Nagaraj; Bries, Matthew; Melanson, Edward; Sazonov, Edward

    2017-07-01

    Footwear-based wearable sensors are becoming prominent in many areas of health and wellness monitoring, such as gait and activity monitoring. In our previous research we introduced an insole-based wearable system, SmartStep, which is completely integrated in a socially acceptable package. From a manufacturing perspective, SmartStep's electronics had to be custom made for each shoe size, greatly complicating the manufacturing process. In this work we explore the possibility of making a universal electronics platform for SmartStep - SmartStep 3.0 - which can be used in the most common insole sizes without modifications. A pilot human subject experiment was run to compare the accuracy of the one-size-fits-all SmartStep 3.0 against the custom-sized SmartStep 2.0. A total of ~10 hours of data was collected in the pilot study, involving three participants performing different activities of daily living while wearing SmartStep 2.0 and SmartStep 3.0. Leave-one-out cross-validation resulted in a 98.5% average accuracy for SmartStep 2.0 and 98.3% for SmartStep 3.0, suggesting that SmartStep 3.0 can be as accurate as SmartStep 2.0 while fitting the most common shoe sizes.
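Leave-one-out cross-validation, the evaluation scheme used above, is easy to express generically. The Python sketch below is illustrative only; the toy nearest-mean classifier and the feature values stand in for the real activity classifier, which the abstract does not specify.

```python
def loocv_accuracy(X, y, fit, predict):
    """Leave-one-out cross-validation: train on all samples but one,
    test on the held-out sample, repeat for every sample."""
    correct = 0
    for i in range(len(X)):
        Xtr = X[:i] + X[i + 1:]
        ytr = y[:i] + y[i + 1:]
        model = fit(Xtr, ytr)
        correct += predict(model, X[i]) == y[i]
    return correct / len(X)

# toy 1-D nearest-mean classifier standing in for the activity classifier
def fit(X, y):
    means = {}
    for label in set(y):
        vals = [x for x, lab in zip(X, y) if lab == label]
        means[label] = sum(vals) / len(vals)
    return means

def predict(means, x):
    return min(means, key=lambda lab: abs(x - means[lab]))

X = [1.0, 1.2, 0.9, 1.1, 3.0, 3.2, 2.9, 3.1]   # e.g. a pressure feature
y = ['sit', 'sit', 'sit', 'sit', 'walk', 'walk', 'walk', 'walk']
acc = loocv_accuracy(X, y, fit, predict)        # 1.0 on this separable toy set
```

With only a few participants, LOOCV (or leave-one-subject-out) makes the most of the data, at the cost of training one model per sample.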

  19. Improving the API dissolution rate during pharmaceutical hot-melt extrusion I: Effect of the API particle size, and the co-rotating, twin-screw extruder screw configuration on the API dissolution rate.

    PubMed

    Li, Meng; Gogos, Costas G; Ioannidis, Nicolas

    2015-01-15

    The dissolution of the active pharmaceutical ingredient (API) is the most critical elementary step during pharmaceutical hot-melt extrusion of amorphous solid solutions - total dissolution has to be achieved within the short residence time in the extruder. Dissolution and dissolution rates are affected by process, material and equipment variables. In this work, we examine the effect of one material variable and one equipment variable, namely the API particle size and the extruder screw configuration, on the API dissolution rate in a co-rotating, twin-screw extruder. By rapidly removing the extruder screws from the barrel after achieving a steady state, we collected samples along the length of the extruder screws that were characterized by polarized optical microscopy (POM) and differential scanning calorimetry (DSC) to determine the amount of undissolved API. Analyses of the samples indicate that reduction of the API particle size and appropriate selection of screw design can markedly improve the dissolution rate of the API during extrusion. In addition, angle-of-repose measurements and light microscopy images show that the reduction of API particle size can improve the flowability of the physical mixture feed and the adhesiveness between its components, respectively, through dry coating of the polymer particles by the API particles. Copyright © 2014. Published by Elsevier B.V.

  20. Quantifying in-stream nitrate reaction rates using continuously-collected water quality data

    Treesearch

    Matthew Miller; Anthony Tesoriero; Paul Capel

    2016-01-01

    High frequency in situ nitrate data from three streams of varying hydrologic condition, land use, and watershed size were used to quantify the mass loading of nitrate to streams from two sources – groundwater discharge and event flow – at a daily time step for one year. These estimated loadings were used to quantify temporally-variable in-stream nitrate processing ...

  1. Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems

    DTIC Science & Technology

    2010-02-21

    RKF45] and Adams variable step-size predictor-corrector methods). While such algorithms are usually used to numerically solve differential... verified by yet another function call. Due to their nature, such methods are referred to as predictor-corrector methods. While computationally expensive... Contract number: N00014-09-C-0394. Authors: Dr. Dimitri N. Mavris, Dr. Yongchang Li.
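The variable step-size idea behind methods like RKF45 can be illustrated without the full embedded-pair machinery. The Python sketch below (an assumption-laden stand-in, not the IRIS implementation) uses classical RK4 with step-doubling error control: one full step is compared against two half steps, and the step size is shrunk or grown to hold the local error estimate near a tolerance.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h*k1/2)
    k3 = f(t + h/2, y + h*k2/2)
    k4 = f(t + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def integrate_adaptive(f, t, y, t_end, h=0.1, tol=1e-8):
    """Variable step size by step doubling: compare one step of size h
    against two steps of h/2; accept when they agree within tol, then
    rescale h for the next attempt."""
    while t < t_end - 1e-12:
        h = min(h, t_end - t)
        big = rk4_step(f, t, y, h)
        half = rk4_step(f, t + h/2, rk4_step(f, t, y, h/2), h/2)
        err = abs(half - big)
        if err <= tol:        # accept the more accurate two-half-step value
            t += h
            y = half
        # fifth-order step-size update, clamped to avoid wild jumps
        h *= min(4.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
    return y

# test problem dy/dt = -y, y(0) = 1, integrated to t = 5
y_end = integrate_adaptive(lambda t, y: -y, 0.0, 1.0, 5.0)
```

True RKF45 gets the error estimate almost for free from an embedded fourth/fifth-order pair, and Adams predictor-corrector methods replace the extra evaluations with a cheap explicit prediction that an implicit formula then corrects; the acceptance-and-rescale loop is the common pattern.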

  2. Clustering of longitudinal data by using an extended baseline: A new method for treatment efficacy clustering in longitudinal data.

    PubMed

    Schramm, Catherine; Vial, Céline; Bachoud-Lévi, Anne-Catherine; Katsahian, Sandrine

    2018-01-01

    Heterogeneity in treatment efficacy is a major concern in clinical trials. Clustering may help to identify the treatment responders and the non-responders. In the context of longitudinal cluster analyses, sample size and variability of the times of measurements are the main issues with the current methods. Here, we propose a new two-step method for the Clustering of Longitudinal data by using an Extended Baseline. The first step relies on a piecewise linear mixed model for repeated measurements with a treatment-time interaction. The second step clusters the random predictions and considers several parametric (model-based) and non-parametric (partitioning, ascendant hierarchical clustering) algorithms. A simulation study compares all options of the clustering of longitudinal data by using an extended baseline method with the latent-class mixed model. The clustering of longitudinal data by using an extended baseline method with the two model-based algorithms was the most robust. The clustering of longitudinal data by using an extended baseline method with all the non-parametric algorithms failed when there were unequal variances of treatment effect between clusters or when the subgroups had unbalanced sample sizes. The latent-class mixed model failed when the between-patient slope variability was high. Two real data sets on neurodegenerative disease and on obesity illustrate the clustering of longitudinal data by using an extended baseline method and show how clustering may help to identify the marker(s) of the treatment response. The application of the clustering of longitudinal data by using an extended baseline method in exploratory analysis, as a first stage before setting up stratified designs, can provide a better estimation of treatment effect in future clinical trials.

  3. Some practical aspects of lossless and nearly-lossless compression of AVHRR imagery

    NASA Technical Reports Server (NTRS)

    Hogan, David B.; Miller, Chris X.; Christensen, Than Lee; Moorti, Raj

    1994-01-01

    Compression of Advanced Very High Resolution Radiometer (AVHRR) imagery, operating in a lossless or nearly-lossless mode, is evaluated. Several practical issues are analyzed, including: variability of compression over time and among channels, rate-smoothing buffer size, multi-spectral preprocessing of data, day/night handling, and impact on key operational data applications. This analysis is based on a DPCM algorithm employing the Universal Noiseless Coder, which is a candidate for inclusion in many future remote sensing systems. It is shown that compression rates of about 2:1 (daytime) can be achieved with modest buffer sizes (less than or equal to 2.5 Mbytes) and a relatively simple multi-spectral preprocessing step.
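The core DPCM idea, coding prediction residuals instead of raw samples, can be demonstrated in a few lines. This Python sketch is illustrative (a synthetic scan line, not AVHRR data, and Shannon entropy as a proxy for the coder's achievable rate):

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Shannon entropy in bits/symbol, a lower bound for a lossless coder."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# smooth hypothetical scan line standing in for one AVHRR channel row
line = [int(100 + 40 * math.sin(i / 15)) for i in range(512)]

# DPCM with a previous-pixel predictor: code the first sample raw,
# then only the prediction residuals
residuals = [line[0]] + [line[i] - line[i - 1] for i in range(1, len(line))]

raw_h = entropy_bits(line)
dpcm_h = entropy_bits(residuals)   # much smaller: residuals cluster near 0
```

Because neighboring pixels are highly correlated, the residual alphabet is tiny and strongly peaked at zero, which is what lets an entropy coder such as the Universal Noiseless Coder approach 2:1 on smooth daytime scenes.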

  4. N-terminus of Cardiac Myosin Essential Light Chain Modulates Myosin Step-Size

    PubMed Central

    Wang, Yihua; Ajtai, Katalin; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta; Burghardt, Thomas P.

    2016-01-01

    Muscle myosin cyclically hydrolyzes ATP to translate actin. Ventricular cardiac myosin (βmys) moves actin with three distinct unitary step-sizes resulting from its lever-arm rotation and with step-frequencies that are modulated in a myosin regulation mechanism. The lever-arm associated essential light chain (vELC) binds actin by its 43-residue N-terminal extension. Unitary steps were proposed to involve the vELC N-terminal extension, with the 8 nm step engaging the vELC/actin bond to facilitate an extra ~19 degrees of lever-arm rotation, while the predominant 5 nm step forgoes vELC/actin binding. A minor 3 nm step is the unlikely conversion of the completed 5 nm step to the 8 nm step. This hypothesis was tested using a 17-residue N-terminally truncated vELC in porcine βmys (Δ17βmys) and a 43-residue N-terminally truncated human vELC expressed in transgenic mouse heart (Δ43αmys). Step-size and step-frequency were measured using the Qdot motility assay. Both Δ17βmys and Δ43αmys had significantly increased 5 nm step-frequency and a coincident loss in 8 nm step-frequency compared to the native proteins, suggesting the vELC/actin interaction drives step-size preference. Step-size and step-frequency probability densities depend on the relative fraction of truncated vELC and relate linearly to pure myosin species concentrations in a mixture containing the native vELC homodimer, two truncated vELCs in the modified homodimer, and one native and one truncated vELC in the heterodimer. Step-size and step-frequency, measured for the native homodimer and at two or more known relative fractions of truncated vELC, are surmised for each pure species by using a new analytical method. PMID:26671638

  5. Living environment and mobility of older adults.

    PubMed

    Cress, M Elaine; Orini, Stefania; Kinsler, Laura

    2011-01-01

    Older adults often elect to move into smaller living environments. Smaller living space and the addition of services provided by a retirement community (RC) may make living easier for the individual, but it may also reduce the amount of daily physical activity and ultimately reduce functional ability. With home size as an independent variable, the primary purpose of this study was to evaluate daily physical activity and physical function of community dwellers (CD; n = 31) as compared to residents of an RC (n = 30). In this cross-sectional study design, assessments included: the Continuous Scale Physical Functional Performance - 10 test, with a possible range of 0-100, higher scores reflecting better function; the Step Activity Monitor (StepWatch 3.1); a physical activity questionnaire; and the area of the home (in square meters). Groups were compared by one-way ANOVA. A general linear regression model was used to predict the number of steps per day at home. The level of significance was p < 0.05. Among the 61 volunteers (mean age: 79 ± 6.3 years; range: 65-94 years), the RC living space (68 ± 37.7 m²) was 62% smaller than the CD living space (182.8 ± 77.9 m²; p = 0.001). After correcting for age, the RC residents took fewer total steps per day excluding exercise (p = 0.03) and had lower function (p = 0.005) than the CD. On average, RC residents take 3,000 fewer steps per day and have approximately 60% of the living space of a CD. Home size and physical function were primary predictors of the number of steps taken at home, as found using a general linear regression analysis. Copyright © 2010 S. Karger AG, Basel.

  6. [A focused sound field measurement system by LabVIEW].

    PubMed

    Jiang, Zhan; Bai, Jingfeng; Yu, Ying

    2014-05-01

    In this paper, to meet the requirements of focused sound field measurement, a focused sound field measurement system was established based on the LabVIEW virtual instrument platform. The system can automatically search for the focus position of the sound field and adjust the scanning path according to the size of the focal region. Three-dimensional sound field scanning time was reduced from 888 hours with a uniform step to 9.25 hours with a variable step, improving the efficiency of focused sound field measurement. There is a certain deviation between measurement results and theoretical calculations: the focal-plane -6 dB width difference rate was 3.691%, and the beam-axis -6 dB length difference rate was 12.937%.
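The uniform-to-variable-step speedup comes from concentrating fine sampling where it matters. The Python sketch below is a hypothetical 1-D analogue of such a scan (the measurement function and all step sizes are invented): a coarse pass locates the peak, then a fine pass covers only a window around it.

```python
def scan(measure, lo, hi, coarse, fine, window):
    """Two-stage scan: a coarse pass locates the peak, then a fine pass
    with a smaller step covers only a window around it."""
    xs = [lo + i * coarse for i in range(int((hi - lo) / coarse) + 1)]
    peak = max(xs, key=measure)
    a, b = max(lo, peak - window), min(hi, peak + window)
    fine_xs = [a + i * fine for i in range(int((b - a) / fine) + 1)]
    best = max(fine_xs, key=measure)
    return best, len(xs) + len(fine_xs)

# hypothetical focal pressure profile peaked at x = 3.2
f = lambda x: 1.0 / (1 + (x - 3.2) ** 2)
best, n_points = scan(f, 0.0, 10.0, coarse=0.5, fine=0.01, window=0.5)
```

A uniform scan at the fine step would need about 1000 measurement points over the same range; the two-stage scan reaches the same resolution near the focus with roughly an eighth of the points, and the saving compounds in three dimensions.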

  7. Beamline 10.3.2 at ALS: a hard X-ray microprobe for environmental and materials sciences.

    PubMed

    Marcus, Matthew A; MacDowell, Alastair A; Celestre, Richard; Manceau, Alain; Miller, Tom; Padmore, Howard A; Sublett, Robert E

    2004-05-01

    Beamline 10.3.2 at the ALS is a bend-magnet line designed mostly for work on environmental problems involving heavy-metal speciation and location. It offers a unique combination of X-ray fluorescence mapping, X-ray microspectroscopy and micro-X-ray diffraction. The optics allow the user to trade spot size for flux in a size range of 5-17 μm in an energy range of 3-17 keV. The focusing uses a Kirkpatrick-Baez mirror pair to image a variable-size virtual source onto the sample. Thus, the user can reduce the effective size of the source, thereby reducing the spot size on the sample, at the cost of flux. This decoupling from the actual source also allows for some independence from source motion. The X-ray fluorescence mapping is performed with a continuously scanning stage which avoids the time overhead incurred by step-and-repeat mapping schemes. The special features of this beamline are described, and some scientific results shown.

  8. Variability in the Length and Frequency of Steps of Sighted and Visually Impaired Walkers

    ERIC Educational Resources Information Center

    Mason, Sarah J.; Legge, Gordon E.; Kallie, Christopher S.

    2005-01-01

    The variability of the length and frequency of steps was measured in sighted and visually impaired walkers at three different paces. The variability was low, especially at the preferred pace, and similar for both groups. A model incorporating step counts and step frequency provides good estimates of the distance traveled. Applications to…

  9. Maternal-fetal unit interactions and eutherian neocortical development and evolution

    PubMed Central

    Montiel, Juan F.; Kaune, Heidy; Maliqueo, Manuel

    2013-01-01

    The conserved brain design that primates inherited from early mammals differs from the variable adult brain size and species-specific brain dominances observed across mammals. This variability relies on the emergence of specialized cerebral cortical regions and sub-compartments, triggering an increase in brain size, areal interconnectivity and histological complexity that ultimately rests on the activation of developmental programs. Structural placental features are not well correlated with brain enlargement; however, several endocrine pathways could be tuned with the activation of neuronal progenitors in the proliferative neocortical compartments. In this article, we review some mechanisms of eutherian maternal-fetal unit interactions associated with brain development and evolution. We propose a hypothesis of brain evolution in which proliferative compartments in primates become activated by "non-classical" endocrine placental signals participating in different steps of corticogenesis. Changes in the inner placental structure, along with placental endocrine stimuli over the cortical proliferative activity, would allow mammalian brain enlargement with a concomitant shorter gestation span, as an evolutionary strategy to escape from parent-offspring conflict. PMID:23882189

  10. Evaluation of sampling plans to detect Cry9C protein in corn flour and meal.

    PubMed

    Whitaker, Thomas B; Trucksess, Mary W; Giesbrecht, Francis G; Slate, Andrew B; Thomas, Francis S

    2004-01-01

    StarLink is a genetically modified corn that produces an insecticidal protein, Cry9C. Studies were conducted to determine the variability and Cry9C distribution among sample test results when Cry9C protein was estimated in a bulk lot of corn flour and meal. Emphasis was placed on measuring sampling and analytical variances associated with each step of the test procedure used to measure Cry9C in corn flour and meal. Two commercially available enzyme-linked immunosorbent assay kits were used: one for the determination of Cry9C protein concentration and the other for % StarLink seed. The sampling and analytical variances associated with each step of the Cry9C test procedures were determined for flour and meal. Variances were found to be functions of Cry9C concentration, and regression equations were developed to describe the relationships. Because of the larger particle size, sampling variability associated with cornmeal was about double that for corn flour. For cornmeal, the sampling variance accounted for 92.6% of the total testing variability. The observed sampling and analytical distributions were compared with the Normal distribution. In almost all comparisons, the null hypothesis that the Cry9C protein values were sampled from a Normal distribution could not be rejected at 95% confidence limits. The Normal distribution and the variance estimates were used to evaluate the performance of several Cry9C protein sampling plans for corn flour and meal. Operating characteristic curves were developed and used to demonstrate the effect of increasing sample size on reducing false positives (seller's risk) and false negatives (buyer's risk).
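The operating characteristic curves mentioned above can be sketched under the abstract's normality finding. The Python fragment below is an illustrative calculation with invented numbers (acceptance limit, SD, and lot levels are assumptions, not the study's values): the probability that a lot is accepted is the probability that the mean of n normally distributed test results falls below the limit.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def accept_probability(true_level, limit, sd_total, n_samples):
    """Probability the lot passes: the mean of n test results falls
    below the acceptance limit, assuming normally distributed results
    with total (sampling + analytical) SD sd_total."""
    se = sd_total / math.sqrt(n_samples)
    return phi((limit - true_level) / se)

# hypothetical numbers: acceptance limit 1.0 unit, total testing SD 0.6
for n in (1, 2, 4):
    buyer_risk = accept_probability(1.5, 1.0, 0.6, n)       # bad lot accepted
    seller_risk = 1 - accept_probability(0.5, 1.0, 0.6, n)  # good lot rejected
```

Plotting accept_probability against the true level for each n gives the OC curve; increasing n steepens it, shrinking both the buyer's and the seller's risk, which is exactly the trade-off the sampling-plan evaluation quantifies.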

  11. Step-to-step spatiotemporal variables and ground reaction forces of intra-individual fastest sprinting in a single session.

    PubMed

    Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo

    2018-06-01

    We aimed to investigate the step-to-step spatiotemporal variables and ground reaction forces during the acceleration phase for characterising intra-individual fastest sprinting within a single session. Step-to-step spatiotemporal variables and ground reaction forces produced by 15 male athletes were measured over a 50-m distance during repeated (three to five) 60-m sprints using a long force platform system. Differences in measured variables between the fastest and slowest trials were examined at each step until the 22nd step using a magnitude-based inferences approach. There were possibly-most likely higher running speed and step frequency (2nd to 22nd steps) and shorter support time (all steps) in the fastest trial than in the slowest trial. Moreover, for the fastest trial there were likely-very likely greater mean propulsive force during the initial four steps and possibly-very likely larger mean net anterior-posterior force until the 17th step. The current results demonstrate that better sprinting performance within a single session is probably achieved by 1) a high step frequency (except the initial step) with short support time at all steps, 2) exerting a greater mean propulsive force during initial acceleration, and 3) producing a greater mean net anterior-posterior force during initial and middle acceleration.

  12. You Cannot Step Into the Same River Twice: When Power Analyses Are Optimistic.

    PubMed

    McShane, Blakeley B; Böckenholt, Ulf

    2014-11-01

    Statistical power depends on the size of the effect of interest. However, effect sizes are rarely fixed in psychological research: Study design choices, such as the operationalization of the dependent variable or the treatment manipulation, the social context, the subject pool, or the time of day, typically cause systematic variation in the effect size. Ignoring this between-study variation, as standard power formulae do, results in assessments of power that are too optimistic. Consequently, when researchers attempting replication set sample sizes using these formulae, their studies will be underpowered and will thus fail at a greater than expected rate. We illustrate this with both hypothetical examples and data on several well-studied phenomena in psychology. We provide formulae that account for between-study variation and suggest that researchers set sample sizes with respect to our generally more conservative formulae. Our formulae generalize to settings in which there are multiple effects of interest. We also introduce an easy-to-use website that implements our approach to setting sample sizes. Finally, we conclude with recommendations for quantifying between-study variation. © The Author(s) 2014.
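    The adjustment the authors describe can be illustrated with a simple one-sample z-test; the paper's own formulae differ, so treat `heterogeneous_power` as a hedged sketch of the core idea: between-study variation (tau) inflates the variance of the replication's test statistic and therefore deflates power relative to the standard calculation.

```python
import math

def phi(x):
    """Standard Normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

Z_CRIT = 1.96          # two-sided alpha = 0.05 (upper tail only, for simplicity)

def standard_power(delta, n):
    """Power of a one-sample z-test assuming the effect size delta is fixed."""
    return phi(delta * math.sqrt(n) - Z_CRIT)

def heterogeneous_power(delta, n, tau):
    """Power when the replication's true effect is drawn from N(delta, tau^2):
    the test statistic's variance inflates from 1 to 1 + n * tau^2."""
    return phi((delta * math.sqrt(n) - Z_CRIT) / math.sqrt(1.0 + n * tau ** 2))

p_std = standard_power(0.3, 100)
p_het = heterogeneous_power(0.3, 100, tau=0.15)
# Ignoring between-study variation overstates power: p_het < p_std
```

Setting the sample size from `standard_power` when the effect actually varies between studies yields an underpowered replication, which is the paper's central point.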

  13. On cat's eyes and multiple disjoint cells natural convection flow in tall tilted cavities

    NASA Astrophysics Data System (ADS)

    Báez, Elsa; Nicolás, Alfredo

    2014-10-01

    Natural convection fluid flow in air-filled tall tilted cavities is studied numerically with a direct projection method applied to the unsteady Boussinesq approximation in primitive variables. The study focuses on the so-called cat's eyes and multiple disjoint cells that appear as the aspect ratio A and the angle of inclination ϕ of the cavity vary. Results have previously been reported with both primitive and stream function-vorticity variables; the former are validated against the latter, which in turn were validated through mesh-size and time-step independence studies. The new results, combined with the previous ones, identify invariant properties of the fluid motion and heat transfer in this thermal phenomenon, which is the novelty of this work.

  14. Novel Anthropometry Based on 3D-Bodyscans Applied to a Large Population Based Cohort.

    PubMed

    Löffler-Wirth, Henry; Willscher, Edith; Ahnert, Peter; Wirkner, Kerstin; Engel, Christoph; Loeffler, Markus; Binder, Hans

    2016-01-01

    Three-dimensional (3D) whole body scanners are increasingly used as precise measuring tools for the rapid quantification of anthropometric measures in epidemiological studies. We analyzed 3D whole body scanning data of nearly 10,000 participants of a cohort collected from the adult population of Leipzig, one of the largest cities in Eastern Germany. We present a novel approach for the systematic analysis of these data, which aims at identifying distinguishable clusters of body shapes called body types. In the first step, our method aggregates body measures provided by the scanner into meta-measures, each representing one relevant dimension of body shape. Next, we stratified the cohort into body types and assessed their stability and their dependence on the size of the underlying cohort. Using self-organizing maps (SOM), we identified thirteen robust meta-measures and fifteen body types, each comprising between 1 and 18 percent of the total cohort. Thirteen of the body types are virtually gender specific (six for women and seven for men) and thus reflect the most abundant body shapes of women and men. Two body types include both women and men, describing androgynous body shapes that lack typical gender-specific features. The body types disentangle a large variability of body shapes, enabling distinctions that go beyond traditional indices such as the body mass index, the waist-to-height ratio, the waist-to-hip ratio and the mortality-hazard ABSI index. As a next step, we will link the identified body types with disease predispositions to study how the size and shape of the human body impact health and disease.

  15. Multidisciplinary optimization of controlled space structures with global sensitivity equations

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.

    1991-01-01

    A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.

  16. In Vitro and In Vivo Single Myosin Step-Sizes in Striated Muscle

    PubMed Central

    Burghardt, Thomas P.; Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin

    2016-01-01

    Myosin in muscle transduces ATP free energy into the mechanical work of moving actin. It has a motor domain transducer containing ATP and actin binding sites, and mechanical elements that couple the motor impulse to the myosin filament backbone. The mechanical coupler is a lever-arm stabilized by bound essential and regulatory light chains. The lever-arm rotates cyclically to impel bound filamentous actin. The linear actin displacement due to lever-arm rotation is the myosin step-size. A high-throughput quantum dot labeled actin in vitro motility assay (Qdot assay) measures motor step-size in the context of an ensemble of actomyosin interactions. The ensemble context imposes a constant-velocity constraint on myosins interacting with one actin filament. In a cardiac myosin producing multiple step-sizes, a "second characterization" is step-frequency, which pairs the longer step-size with a lower frequency, maintaining a linear actin velocity identical to that of a shorter step-size, higher-frequency actomyosin cycle. The step-frequency characteristic integrates myosin enzyme kinetics, mechanical strain, and other ensemble-affected characteristics. The high-throughput Qdot assay suits a new paradigm calling for wide surveillance of the vast number of disease- or aging-relevant myosin isoforms, in contrast to the alternative model of exhaustive research on a tiny subset of myosin forms. The zebrafish embryo assay (Z assay) performs single myosin step-size and step-frequency assaying in vivo, combining single myosin mechanical and whole muscle physiological characterizations in one model organism. The Qdot and Z assays cover "bottom-up" and "top-down" assaying of myosin characteristics. PMID:26728749

  17. Investigation of the milling capabilities of the F10 Fine Grind mill using Box-Behnken designs.

    PubMed

    Tan, Bernice Mei Jin; Tay, Justin Yong Soon; Wong, Poh Mun; Chan, Lai Wah; Heng, Paul Wan Sia

    2015-01-01

    Size reduction or milling of the active is often the first processing step in the design of a dosage form. The ability of a mill to convert coarse crystals into the target size and size distribution efficiently is highly desirable as the quality of the final pharmaceutical product after processing is often still dependent on the dimensional attributes of its component constituents. The F10 Fine Grind mill is a mechanical impact mill designed to produce unimodal mid-size particles by utilizing a single-pass two-stage size reduction process for fine grinding of raw materials needed in secondary processing. Box-Behnken designs were used to investigate the effects of various mill variables (impeller, blower and feeder speeds and screen aperture size) on the milling of coarse crystals. Response variables included the particle size parameters (D10, D50 and D90), span and milling rate. Milled particles in the size range of 5-200 μm, with D50 ranging from 15 to 60 μm, were produced. The impeller and feeder speeds were the most critical factors influencing the particle size and milling rate, respectively. Size distributions of milled particles were better described by their goodness-of-fit to a log-normal distribution (i.e. unimodality) rather than span. Milled particles with symmetrical unimodal distributions were obtained when the screen aperture size was close to the median diameter of coarse particles employed. The capacity for high throughput milling of particles to a mid-size range, which is intermediate between conventional mechanical impact mills and air jet mills, was demonstrated in the F10 mill. Prediction models from the Box-Behnken designs will aid in providing a better guide to the milling process and milled product characteristics. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Role of step size and max dwell time in anatomy based inverse optimization for prostate implants

    PubMed Central

    Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha

    2013-01-01

    In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323

  19. An implicit scheme with memory reduction technique for steady state solutions of DVBE in all flow regimes

    NASA Astrophysics Data System (ADS)

    Yang, L. M.; Shu, C.; Yang, W. M.; Wu, J.

    2018-04-01

    High memory consumption and computational cost are the major barriers preventing the widespread use of the discrete velocity method (DVM) for the simulation of flows in all flow regimes. To overcome this drawback, an implicit DVM with a memory reduction technique for solving a steady discrete velocity Boltzmann equation (DVBE) is presented in this work. In the method, the distribution functions in the whole discrete velocity space do not need to be stored; they are calculated from the macroscopic flow variables. As a result, its memory requirement is of the same order as that of a conventional Euler/Navier-Stokes solver. At the same time, it is more efficient than the explicit DVM for the simulation of various flows. To make the method efficient for flow problems in all flow regimes, a prediction step is introduced to estimate the local equilibrium state of the DVBE; in this step, the distribution function at the cell interface is calculated from the local solution of the DVBE. When the cell size is smaller than the mean free path, the prediction step has almost no effect on the solution; when the cell size is much larger than the mean free path, the prediction step dominates the solution and provides reasonable results in that flow regime. In addition, to further improve the computational efficiency of the developed scheme in the continuum flow regime, the implicit technique is also introduced into the prediction step. Numerical results showed that the proposed implicit scheme provides reasonable results in all flow regimes and significantly increases the computational efficiency in the continuum flow regime as compared with existing DVM solvers.

  20. Patients with Chronic Obstructive Pulmonary Disease Walk with Altered Step Time and Step Width Variability as Compared with Healthy Control Subjects.

    PubMed

    Yentes, Jennifer M; Rennard, Stephen I; Schmid, Kendra K; Blanke, Daniel; Stergiou, Nicholas

    2017-06-01

    Compared with control subjects, patients with chronic obstructive pulmonary disease (COPD) have an increased incidence of falls and demonstrate balance deficits and alterations in mediolateral trunk acceleration while walking. Measures of gait variability have been implicated as indicators of fall risk, fear of falling, and future falls. To investigate whether alterations in gait variability are found in patients with COPD as compared with healthy control subjects. Twenty patients with COPD (16 males; mean age, 63.6 ± 9.7 yr; FEV1/FVC, 0.52 ± 0.12) and 20 control subjects (9 males; mean age, 62.5 ± 8.2 yr) walked for 3 minutes on a treadmill while their gait was recorded. The amount (SD and coefficient of variation) and structure of variability (sample entropy, a measure of regularity) were quantified for step length, time, and width at three walking speeds (self-selected and ±20% of self-selected speed). Generalized linear mixed models were used to compare dependent variables. Patients with COPD demonstrated increased mean and SD step time across all speed conditions as compared with control subjects. They also walked with a narrower step width that increased with increasing speed, whereas the healthy control subjects walked with a wider step width that decreased as speed increased. Further, patients with COPD demonstrated less variability in step width, with decreased SD, compared with control subjects at all three speed conditions. No differences in regularity of gait patterns were found between groups. Patients with COPD walk with increased duration of time between steps, and this timing is more variable than that of control subjects. They also walk with a narrower step width in which the variability of the step widths from step to step is decreased. Changes in these parameters have been related to increased risk of falling in aging research. This provides a mechanism that could explain the increased prevalence of falls in patients with COPD.
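    Sample entropy, the regularity measure used in this study, can be computed with a minimal implementation like the one below. The parameters m = 2 and r = 0.2 SD are common defaults, and the two test series are synthetic; this is a sketch of the measure, not the study's analysis code.

```python
import math
import random

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts pairs of templates of length m
    and A pairs of length m+1 that match within tolerance r (in units of the
    series' SD). Lower values indicate a more regular signal. Minimal
    illustrative version."""
    n = len(series)
    mu = sum(series) / n
    tol = r * math.sqrt(sum((x - mu) ** 2 for x in series) / n)

    def match_count(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol:
                    hits += 1
        return hits

    b, a = match_count(m), match_count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

random.seed(3)
periodic = [0.0, 1.0] * 50                            # perfectly regular series
noise = [random.gauss(0.0, 0.5) for _ in range(100)]  # irregular series
```

Applied to step times, a lower sample entropy indicates a more regular (more predictable) gait pattern.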

  1. Finite Adaptation and Multistep Moves in the Metropolis-Hastings Algorithm for Variable Selection in Genome-Wide Association Analysis

    PubMed Central

    Peltola, Tomi; Marttinen, Pekka; Vehtari, Aki

    2012-01-01

    High-dimensional datasets with large amounts of redundant information are nowadays available for hypothesis-free exploration of scientific questions. A particular case is genome-wide association analysis, where variations in the genome are searched for effects on disease or other traits. Bayesian variable selection has been demonstrated as a possible analysis approach, which can account for the multifactorial nature of the genetic effects in a linear regression model. Yet, the computation presents a challenge and application to large-scale data is not routine. Here, we study aspects of the computation using the Metropolis-Hastings algorithm for the variable selection: finite adaptation of the proposal distributions, multistep moves for changing the inclusion state of multiple variables in a single proposal and multistep move size adaptation. We also experiment with a delayed rejection step for the multistep moves. Results on simulated and real data show increase in the sampling efficiency. We also demonstrate that with application specific proposals, the approach can overcome a specific mixing problem in real data with 3822 individuals and 1,051,811 single nucleotide polymorphisms and uncover a variant pair with synergistic effect on the studied trait. Moreover, we illustrate multimodality in the real dataset related to a restrictive prior distribution on the genetic effect sizes and advocate a more flexible alternative. PMID:23166669
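    A toy Metropolis-Hastings sampler over inclusion indicators conveys the mechanics the authors study. The data, per-variable penalty, and proposal below are illustrative stand-ins for the paper's genome-scale model (with its adaptive proposals and delayed rejection); `flip_size > 1` plays the role of a multistep move that changes several inclusion states in one proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny simulated regression: y depends on columns 0 and 2 only
# (illustrative data, not the genome-wide setting of the paper).
n, p = 80, 6
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def log_post(gamma, penalty=4.0):
    """Unnormalised log posterior: Gaussian log-likelihood of the least-squares
    fit on the included columns (noise variance 0.25 assumed known), minus a
    fixed penalty per included variable as a crude stand-in for the model prior."""
    cols = np.flatnonzero(gamma)
    if cols.size == 0:
        resid = y
    else:
        coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        resid = y - X[:, cols] @ coef
    return -0.5 * resid @ resid / 0.25 - penalty * cols.size

def mh_sample(iters=2000, flip_size=1):
    """Metropolis-Hastings over inclusion indicators; the proposal flips
    flip_size randomly chosen indicators (flip_size > 1 is a multistep move)."""
    gamma = np.zeros(p, dtype=bool)
    lp = log_post(gamma)
    counts = np.zeros(p)
    for _ in range(iters):
        prop = gamma.copy()
        for j in rng.choice(p, size=flip_size, replace=False):
            prop[j] = ~prop[j]
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # symmetric proposal
            gamma, lp = prop, lp_prop
        counts += gamma
    return counts / iters

inclusion = mh_sample()
# Truly relevant predictors (columns 0 and 2) get high posterior inclusion
```

The adaptive and delayed-rejection refinements studied in the paper modify how the proposal is chosen, not this accept/reject core.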

  2. A Sensory Material Approach for Reducing Variability in Additively Manufactured Metal Parts.

    PubMed

    Franco, B E; Ma, J; Loveall, B; Tapia, G A; Karayagiz, K; Liu, J; Elwany, A; Arroyave, R; Karaman, I

    2017-06-15

    Despite the recent growth in interest for metal additive manufacturing (AM) in the biomedical and aerospace industries, variability in the performance, composition, and microstructure of AM parts remains a major impediment to its widespread adoption. The underlying physical mechanisms, which cause variability, as well as the scale and nature of variability are not well understood, and current methods are ineffective at capturing these details. Here, a Nickel-Titanium alloy is used as a sensory material in order to quantitatively, and rather rapidly, observe compositional and/or microstructural variability in selective laser melting manufactured parts; thereby providing a means to evaluate the role of process parameters on the variability. We perform detailed microstructural investigations using transmission electron microscopy at various locations to reveal the origins of microstructural variability in this sensory material. This approach helped reveal how reducing the distance between adjacent laser scans below a critical value greatly reduces both the in-sample and sample-to-sample variability. Microstructural investigations revealed that when the laser scan distance is wide, there is an inhomogeneity in subgrain size, precipitate distribution, and dislocation density in the microstructure, responsible for the observed variability. These results provide an important first step towards understanding the nature of variability in additively manufactured parts.

  3. Application of a multi-level grid method to transonic flow calculations

    NASA Technical Reports Server (NTRS)

    South, J. C., Jr.; Brandt, A.

    1976-01-01

    A multi-level grid method was studied as a possible means of accelerating convergence in relaxation calculations for transonic flows. The method employs a hierarchy of grids, ranging from very coarse to fine. The coarser grids are used to diminish the magnitude of the smooth part of the residuals. The method was applied to the solution of the transonic small disturbance equation for the velocity potential in conservation form. Nonlifting transonic flow past a parabolic arc airfoil is studied with meshes of both constant and variable step size.
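    The idea of using a coarser grid to remove the smooth part of the residual can be illustrated on a 1-D model problem. The transonic small disturbance equation is nonlinear, so this two-grid sketch for the linear problem u'' = f is only an analogy, with assumed choices (Gauss-Seidel smoothing, full-weighting restriction, linear interpolation, exact coarse solve).

```python
import numpy as np

def smooth(u, f, h, sweeps=3):
    """Gauss-Seidel relaxation for u'' = f with u = 0 at the ends. A few
    sweeps damp the oscillatory error but leave the smooth part."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    return r

def two_grid_cycle(u, f, h):
    """Pre-smooth, solve for the smooth error on a grid with twice the step
    size, interpolate the correction back, post-smooth. Assumes len(u) odd."""
    smooth(u, f, h)
    r = residual(u, f, h)
    nc = len(u) // 2 + 1
    r2 = np.zeros(nc)
    r2[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]  # full weighting
    h2 = 2.0 * h
    m = nc - 2
    A2 = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
          + np.diag(np.ones(m - 1), -1)) / (h2 * h2)
    e2 = np.zeros(nc)
    e2[1:-1] = np.linalg.solve(A2, r2[1:-1])       # exact coarse-grid solve
    u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), e2)  # prolongate
    smooth(u, f, h)

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = -np.pi ** 2 * np.sin(np.pi * x)   # exact solution u(x) = sin(pi x)
u = np.zeros(n)
res0 = np.max(np.abs(residual(u, f, h)))
for _ in range(5):
    two_grid_cycle(u, f, h)
res5 = np.max(np.abs(residual(u, f, h)))
```

A full multi-level hierarchy replaces the direct coarse solve with a recursive call to the same cycle on progressively coarser grids.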

  4. A maximum power point tracking algorithm for buoy-rope-drum wave energy converters

    NASA Astrophysics Data System (ADS)

    Wang, J. Q.; Zhang, X. C.; Zhou, Y.; Cui, Z. C.; Zhu, L. S.

    2016-08-01

    The maximum power point tracking control is the key link in improving the energy conversion efficiency of wave energy converters (WEC). This paper presents a novel variable step size Perturb and Observe maximum power point tracking algorithm with a power classification standard for control of a buoy-rope-drum WEC. The algorithm and the simulation model of the buoy-rope-drum WEC are presented in detail, together with simulation experiment results. The results show that the algorithm tracks the maximum power point of the WEC quickly and accurately.
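    A variable step-size Perturb and Observe loop of the kind described here can be sketched on a synthetic power curve. The curve, gain `k`, and step bounds below are illustrative assumptions, not the paper's WEC model or its power classification standard.

```python
def power_curve(duty):
    """Synthetic single-peak power curve with its maximum at duty = 0.6;
    a stand-in for the converter's real power-vs-control characteristic."""
    return max(0.0, 100.0 - 400.0 * (duty - 0.6) ** 2)

def perturb_and_observe(steps=80, k=0.02, d=0.2):
    """Variable step-size P&O: perturb the control, observe the power change,
    reverse direction on a power drop, and scale the next step with |dP| so
    steps are large far from the peak and small near it."""
    direction, step = 1.0, 0.02
    p_prev = power_curve(d)
    trace = []
    for _ in range(steps):
        d = min(1.0, max(0.0, d + direction * step))
        p = power_curve(d)
        dp = p - p_prev
        if dp < 0:
            direction = -direction                  # overshot the peak: reverse
        step = min(0.05, max(1e-3, k * abs(dp)))    # auto-scaled step size
        p_prev = p
        trace.append(d)
    return d, trace

d_final, trace = perturb_and_observe()
# The control settles near the maximum power point at duty = 0.6
```

A fixed small step would track slowly; a fixed large step would oscillate around the peak; scaling the step with the observed power change gives both fast tracking and small steady-state ripple.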

  5. Variable flexure-based fluid filter

    DOEpatents

    Brown, Steve B.; Colston, Jr., Billy W.; Marshall, Graham; Wolcott, Duane

    2007-03-13

    An apparatus and method for filtering particles from a fluid comprises a fluid inlet, a fluid outlet, a variable size passage between the fluid inlet and the fluid outlet, and means for adjusting the size of the passage to filter the particles from the fluid. An inlet fluid flow stream is introduced to a fixture containing the variable size passage, and the passage size is set so that the fluid passes through while the particles do not.

  6. A successful backward step correlates with hip flexion moment of supporting limb in elderly people.

    PubMed

    Takeuchi, Yahiko

    2018-01-01

    The objective of this study was to determine the positional relationship between the center of mass (COM) and the center of pressure (COP) at the time of step landing, and to examine their relationship with the joint moments exerted by the supporting limb, as factors in a successful backward step response. The study population comprised 8 community-dwelling elderly people who were observed to take multiple successive steps after landing a backward step. Using a motion capture system and force plate, we measured the COM, COP and COM-COP deviation distance on landing during backward stepping, as well as the supporting limb joint moments. The multi-step data were compared with data from instances when only one step was taken (single-step). Variables that differed significantly between the single- and multi-step data were used as objective variables, and the joint moments of the supporting limb were used as explanatory variables, in single regression analyses. The anteroposterior COM-COP deviation was significantly larger in the single-step instances. A regression analysis with the COM-COP deviation as the objective variable yielded a significant regression equation for the hip flexion moment (R² = 0.74). The hip flexion moment of the supporting limb was a significant explanatory variable in both the PS and SS phases for the relationship with the COM-COP distance. This study found that to create an appropriate backward step response after an external disturbance (i.e. the ability to stop after 1 step), posterior braking of the COM by a hip flexion moment is important during the single-limbed standing phase.

  7. Two Independent Contributions to Step Variability during Over-Ground Human Walking

    PubMed Central

    Collins, Steven H.; Kuo, Arthur D.

    2013-01-01

    Human walking exhibits small variations in both step length and step width, some of which may be related to active balance control. Lateral balance is thought to require integrative sensorimotor control through adjustment of step width rather than length, contributing to greater variability in step width. Here we propose that step length variations are largely explained by the typical human preference for step length to increase with walking speed, which itself normally exhibits some slow and spontaneous fluctuation. In contrast, step width variations should have little relation to speed if they are produced more for lateral balance. As a test, we examined hundreds of overground walking steps by healthy young adults (N = 14, age < 40 yrs.). We found that slow fluctuations in self-selected walking speed (2.3% coefficient of variation) could explain most of the variance in step length (59%, P < 0.01). The residual variability not explained by speed was small (1.5% coefficient of variation), suggesting that step length is actually quite precise if not for the slow speed fluctuations. Step width varied over faster time scales and was independent of speed fluctuations, with variance 4.3 times greater than that for step length (P < 0.01) after accounting for the speed effect. That difference was further magnified by walking with eyes closed, which appears detrimental to control of lateral balance. Humans appear to modulate fore-aft foot placement in precise accordance with slow fluctuations in walking speed, whereas the variability of lateral foot placement appears more closely related to balance. Step variability is separable in both direction and time scale into balance- and speed-related components. The separation of factors not related to balance may reveal which aspects of walking are most critical for the nervous system to control. PMID:24015308
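    The variance decomposition reported here can be mimicked on synthetic data: regress step length on walking speed and compare the explained fraction with that for step width. The numbers below are fabricated for illustration and only qualitatively echo the paper's findings.

```python
import math
import random

random.seed(1)

# Synthetic steps: step length tracks slow speed fluctuations; step width does not.
n = 500
speed = [1.25 + 0.03 * math.sin(2 * math.pi * i / 120) + random.gauss(0, 0.005)
         for i in range(n)]                           # slow drift in speed (m/s)
length = [0.55 * v + random.gauss(0, 0.008) for v in speed]   # speed-coupled (m)
width = [0.10 + random.gauss(0, 0.017) for _ in range(n)]     # speed-independent (m)

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def r_squared(x, y):
    """Fraction of variance in y explained by least-squares regression on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    resid = [b - my - beta * (a - mx) for a, b in zip(x, y)]
    return 1.0 - var(resid) / var(y)

r2_length = r_squared(speed, length)   # most step-length variance is speed-related
r2_width = r_squared(speed, width)     # step width is nearly independent of speed
```

In the study's terms, the residual step-length variability left after removing the speed effect is small, while step-width variability is almost entirely unrelated to speed.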

  8. Effects of the lateral amplitude and regularity of upper body fluctuation on step time variability evaluated using return map analysis

    PubMed Central

    2017-01-01

    The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebrae (L3) and the seventh cervical vertebrae (C7). The normalized average magnitude of acceleration, the coefficient of determination (R²) of the return map, and the step time variabilities, were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls. PMID:28700633

  9. Effects of the lateral amplitude and regularity of upper body fluctuation on step time variability evaluated using return map analysis.

    PubMed

    Chidori, Kazuhiro; Yamamoto, Yuji

    2017-01-01

    The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebrae (L3) and the seventh cervical vertebrae (C7). The normalized average magnitude of acceleration, the coefficient of determination (R²) of the return map, and the step time variabilities, were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.
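    Return map analysis of step times can be sketched as follows: plot each step time against the next and take the coefficient of determination (R²) of the linear fit as the regularity measure. The step-time series here are synthetic illustrations, not the study's data.

```python
import random

def return_map_r2(series):
    """Coefficient of determination of the linear fit to the return map
    (x[n+1] versus x[n]); higher R² means successive steps are more predictable."""
    x, y = series[:-1], series[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

random.seed(2)
# Alternating gait-like step times (regular) vs the same pattern plus noise.
regular = [0.50, 0.54] * 60
noisy = [t + random.gauss(0, 0.02) for t in regular]

r2_regular = return_map_r2(regular)
r2_noisy = return_map_r2(noisy)
```

A fall-prone gait, with larger and less regular fluctuations, shows up as a more scattered return map and therefore a lower R².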

  10. Auxotonic to isometric contraction transitioning in a beating heart causes myosin step-size to down shift

    PubMed Central

    Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin

    2017-01-01

    Myosin motors in cardiac ventriculum convert ATP free energy to the work of moving blood volume under pressure. The actin-bound motor cyclically rotates its lever-arm/light-chain complex, linking motor-generated torque to the myosin filament backbone and translating actin against resisting force. Previous research showed that the unloaded in vitro motor is described with high precision by single molecule mechanical characteristics, including unitary step-sizes of approximately 3, 5, and 8 nm with relative step-frequencies of approximately 13, 50, and 37%. The 3 and 8 nm unitary step-sizes depend on myosin essential light chain (ELC) N-terminus actin binding. Step-size and step-frequency quantitation specifies in vitro motor function, including duty-ratio, power, and strain-sensitivity metrics. In vivo, motors integrated into the muscle sarcomere form the more complex and hierarchically functioning muscle machine. The goal of the research reported here is to measure single myosin step-size and step-frequency in vivo to assess how tissue integration impacts motor function. A photoactivatable GFP tags the ventriculum myosin lever-arm/light-chain complex in the beating heart of a live zebrafish embryo. Detected single GFP emission reports time-resolved myosin lever-arm orientation, interpreted as step-size and step-frequency, providing single myosin mechanical characteristics over the active cycle. Following the step-frequency of cardiac ventriculum myosin as it transitions from low to high force through the relaxed, auxotonic, and isometric contraction phases indicates that the imposition of resisting force during contraction causes the motor to down-shift to the 3 nm step-size, which accounts for >80% of all steps in the near-isometric phase. At peak force, ATP-initiated actomyosin dissociation is the predominant strain-inhibited transition in the native myosin contraction cycle. The proposed model for motor down-shifting and strain sensing involves ELC N-terminus actin binding. Overall, the approach is a unique bottom-up single molecule mechanical characterization of a hierarchically functional native muscle myosin. PMID:28423017

  11. Sources of variability in collection and preparation of paint and lead-coating samples.

    PubMed

    Harper, S L; Gutknecht, W F

    2001-06-01

    Chronic exposure of children to lead (Pb) can result in permanent physiological impairment. Since surfaces coated with lead-containing paints and varnishes are potential sources of exposure, it is extremely important that reliable methods for sampling and analysis be available. The sources of variability in the collection and preparation of samples were investigated to improve the performance and comparability of methods and to ensure that the data generated will be adequate for their intended use. Paint samples of varying sizes (areas and masses) were collected at different locations across a variety of surfaces including metal, plaster, concrete, and wood, and a variety of grinding techniques were compared. Manual mortar-and-pestle grinding for at least 1.5 min and mechanized grinding techniques were found to generate the similarly homogeneous particle size distributions required for aliquots as small as 0.10 g. When 342 samples were evaluated for sample weight loss during mortar-and-pestle grinding, 4% had a loss of 20% or greater, with a high of 41%. The homogenization and sub-sampling steps were found to be the principal sources of variability related to the size of the sample collected. Analyses of samples from different locations on apparently identical surfaces were found to vary by more than a factor of two, both in Pb concentration (mg/cm² or %) and in areal coating density (g/cm²). Analyses of substrates were performed to determine the Pb remaining after coating removal; levels as high as 1% Pb were found in some substrate samples, corresponding to more than 35 mg/cm² Pb. In conclusion, these sources of variability must be considered in the development and/or application of any sampling and analysis methodology.

  12. Secondary mediation and regression analyses of the PTClinResNet database: determining causal relationships among the International Classification of Functioning, Disability and Health levels for four physical therapy intervention trials.

    PubMed

    Mulroy, Sara J; Winstein, Carolee J; Kulig, Kornelia; Beneck, George J; Fowler, Eileen G; DeMuth, Sharon K; Sullivan, Katherine J; Brown, David A; Lane, Christianne J

    2011-12-01

    Each of the 4 randomized clinical trials (RCTs) hosted by the Physical Therapy Clinical Research Network (PTClinResNet) targeted a different disability group (low back disorder in the Muscle-Specific Strength Training Effectiveness After Lumbar Microdiskectomy [MUSSEL] trial, chronic spinal cord injury in the Strengthening and Optimal Movements for Painful Shoulders in Chronic Spinal Cord Injury [STOMPS] trial, adult stroke in the Strength Training Effectiveness Post-Stroke [STEPS] trial, and pediatric cerebral palsy in the Pediatric Endurance and Limb Strengthening [PEDALS] trial for children with spastic diplegic cerebral palsy) and tested the effectiveness of a muscle-specific or functional activity-based intervention on primary outcomes that captured pain (STOMPS, MUSSEL) or locomotor function (STEPS, PEDALS). The focus of these secondary analyses was to determine causal relationships among outcomes across levels of the International Classification of Functioning, Disability and Health (ICF) framework for the 4 RCTs. With the database from PTClinResNet, we used 2 separate secondary statistical approaches, mediation analysis for the MUSSEL and STOMPS trials and regression analysis for the STEPS and PEDALS trials, to test relationships among muscle performance, primary outcomes (pain related and locomotor related), activity and participation measures, and overall quality of life. Predictive models were stronger for the 2 studies with pain-related primary outcomes. Change in muscle performance mediated or predicted reductions in pain for the MUSSEL and STOMPS trials and, to some extent, walking speed for the STEPS trial. Changes in primary outcome variables were significantly related to changes in activity and participation variables for all 4 trials. Improvement in activity and participation outcomes mediated or predicted increases in overall quality of life for the 3 trials with adult populations.
Variables included in the statistical models were limited to those measured in the 4 RCTs. It is possible that other variables also mediated or predicted the changes in outcomes. The relatively small sample size in the PEDALS trial limited statistical power for those analyses. Evaluating the mediators or predictors of change between each ICF level and for 2 fundamentally different outcome variables (pain versus walking) provided insights into the complexities inherent across 4 prevalent disability groups.

  13. Development of a frequency-separated knob with variable change rates by rotation speed.

    PubMed

    Kim, Huhn; Ham, Dong-Han

    2014-11-01

    The principle of frequency separation is a design method to display different information or feedback in accordance with the frequency of interaction between users and systems. This principle can be usefully applied to the design of knobs. In particular, their rotation speed can be a meaningful criterion for applying the principle. Hence a knob can be developed whose change rates vary depending on its rotation speed. Such a knob would be more efficient than conventional knobs with a constant change rate. We developed a prototype of frequency-separated knobs with different combinations of the number of rotation-speed steps and the size of the variation of the change rate. With this prototype, we conducted an experiment to examine whether a speed frequency-separated knob enhances users' task performance. The results showed that the newly designed knob was effective in enhancing task performance, and that task efficiency was best when its change rate increased exponentially and its rotation speed had three steps. We conducted another experiment to investigate how a more rapid exponential increase of the change rate and a greater number of rotation-speed steps influence users' task performance. The results showed that merely increasing both the size of the variation of change rates and the number of speed steps did not result in better task performance. Although the two experimental results cannot easily be generalized to other contexts, they still offer practical information useful for designing a speed frequency-separated knob in various consumer electronics and control panels of industrial systems. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
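As an illustration of the principle only (not the authors' prototype), a speed frequency-separated knob buckets rotation speed into a small number of steps and lets the per-detent change rate grow exponentially with the step index. All thresholds and rates below are hypothetical values chosen for the sketch:

```python
def step_increment(rotation_speed, thresholds=(2.0, 6.0), base=1, factor=4):
    """Per-detent value increment for a hypothetical speed
    frequency-separated knob: the speed is bucketed into one of
    three steps (0, 1, 2) and the increment grows as base*factor**step."""
    step = sum(rotation_speed >= t for t in thresholds)  # 0, 1, or 2
    return base * factor ** step

# Slow turns move one unit per detent; fast turns move sixteen.
print(step_increment(1.0), step_increment(3.0), step_increment(10.0))  # 1 4 16
```

The exponential growth across the three speed steps mirrors the configuration the study found most efficient, but the specific thresholds and growth factor would have to be tuned per device.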

  14. Analysis of intraspecific seed diversity in Astragalus aquilanus (Fabaceae), an endemic species of Central Apennine.

    PubMed

    Di Cecco, V; Di Musciano, M; D'Archivio, A A; Frattaroli, A R; Di Martino, L

    2018-05-20

    This work aims to study seeds of the endemic species Astragalus aquilanus from four different populations of central Italy. We investigated seed morpho-colorimetric features (shape and size) and chemical differences (through infrared spectroscopy) among populations and between dark and light seeds. Seed morpho-colorimetric quantitative variables, describing shape, size and colour traits, were measured using image analysis techniques. Fourier transform infrared (FT-IR) spectroscopy was used to attempt seed chemical characterisation. The measured data were analysed by step-wise linear discriminant analysis (LDA). Moreover, we analysed the correlation between the four most important traits and six climatic variables extracted from WorldClim 2.0. The LDA on seed traits shows clear differentiation of the four populations, which can be attributed to differing chemical composition, as confirmed by Wilks' lambda test (P < 0.001). Strong correlations were observed between morphometric traits and temperature (annual mean temperature, mean temperature of the warmest and coldest quarters) and between colorimetric traits and precipitation (annual precipitation, precipitation of the wettest and driest quarters). The characterisation of A. aquilanus seeds shows large intraspecific plasticity in both morpho-colorimetric traits and chemical composition. These results confirm the strong relationship between the type of seed produced and the climatic variables. © 2018 German Society for Plant Sciences and The Royal Botanical Society of the Netherlands.

  15. The Importance of Gestational Sac Size of Ectopic Pregnancy in Response to Single-Dose Methotrexate

    PubMed Central

    Kimiaei, Parichehr; Khani, Zahra; Marefian, Azadeh; Gholampour Ghavamabadi, Maryam; Salimnejad, Maryam

    2013-01-01

    This retrospective cohort study was conducted in a selected group of 185 patients diagnosed with and treated for ectopic pregnancy. Intramuscular administration of a single dose of methotrexate (50 mg/m2) was performed, and predictors of failure or resistance to treatment necessitating surgical intervention were examined. During treatment with a single dose of MTX, 20 patients (10.8%) failed to respond; 6 of these 20 (30%) had side effects from MTX and rupture of the ectopic pregnancy. The remaining cases (n = 14) showed resistance to the drug: the level of β-hCG did not fall by at least 15% during the 7 days after treatment, necessitating laparotomy. In backward-step analysis by multiple logistic regression of various predictor factors, size of gestational sac (coefficient = 1.91, OR = 6.78, 95% confidence interval = 3.18–8.22) and baseline β-hCG level (coefficient = 1.60, OR = 5.0, 95% confidence interval = 4.26–6.72) were significantly associated with failure to respond to MTX. This study suggests that further investigation of relative contraindications to MTX treatment in women with ectopic pregnancy should focus on gestational sac size, because other variables are in the causal pathway of this variable. PMID:23762575
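As a reading aid (not the study's code): in logistic regression the reported odds ratio is roughly the exponentiated coefficient, which matches the numbers quoted above to within rounding:

```python
from math import exp

def odds_ratio(coef):
    """Odds ratio implied by a logistic-regression coefficient."""
    return exp(coef)

print(round(odds_ratio(1.91), 2))  # 6.75 (study reports OR = 6.78)
print(round(odds_ratio(1.60), 2))  # 4.95 (study reports OR = 5.0)
```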

  16. Impact of Preadmission Variables on USMLE Step 1 and Step 2 Performance

    ERIC Educational Resources Information Center

    Kleshinski, James; Khuder, Sadik A.; Shapiro, Joseph I.; Gold, Jeffrey P.

    2009-01-01

    Purpose: To examine the predictive ability of preadmission variables on United States Medical Licensing Examinations (USMLE) step 1 and step 2 performance, incorporating the use of a neural network model. Method: Preadmission data were collected on matriculants from 1998 to 2004. Linear regression analysis was first used to identify predictors of…

  17. Speaker and Accent Variation Are Handled Differently: Evidence in Native and Non-Native Listeners

    PubMed Central

    Kriengwatana, Buddhamas; Terry, Josephine; Chládková, Kateřina; Escudero, Paola

    2016-01-01

    Listeners are able to cope with between-speaker variability in speech that stems from anatomical sources (i.e. individual and sex differences in vocal tract size) and sociolinguistic sources (i.e. accents). We hypothesized that listeners adapt to these two types of variation differently because prior work indicates that adapting to speaker/sex variability may occur pre-lexically while adapting to accent variability may require learning from attention to explicit cues (i.e. feedback). In Experiment 1, we tested our hypothesis by training native Dutch listeners and Australian-English (AusE) listeners without any experience with Dutch or Flemish to discriminate between the Dutch vowels /I/ and /ε/ from a single speaker. We then tested their ability to classify /I/ and /ε/ vowels of a novel Dutch speaker (i.e. speaker or sex change only), or vowels of a novel Flemish speaker (i.e. speaker or sex change plus accent change). We found that both Dutch and AusE listeners could successfully categorize vowels if the change involved a speaker/sex change, but not if the change involved an accent change. When AusE listeners were given feedback on their categorization responses to the novel speaker in Experiment 2, they were able to successfully categorize vowels involving an accent change. These results suggest that adapting to accents may be a two-step process, whereby the first step involves adapting to speaker differences at a pre-lexical level, and the second step involves adapting to accent differences at a contextual level, where listeners have access to word meaning or are given feedback that allows them to appropriately adjust their perceptual category boundaries. PMID:27309889

  18. Novel Anthropometry Based on 3D-Bodyscans Applied to a Large Population Based Cohort

    PubMed Central

    Löffler-Wirth, Henry; Willscher, Edith; Ahnert, Peter; Wirkner, Kerstin; Engel, Christoph; Loeffler, Markus; Binder, Hans

    2016-01-01

    Three-dimensional (3D) whole body scanners are increasingly used as precise measuring tools for the rapid quantification of anthropometric measures in epidemiological studies. We analyzed 3D whole body scanning data of nearly 10,000 participants of a cohort collected from the adult population of Leipzig, one of the largest cities in Eastern Germany. We present a novel approach for the systematic analysis of this data which aims at identifying distinguishable clusters of body shapes called body types. In the first step, our method aggregates body measures provided by the scanner into meta-measures, each representing one relevant dimension of the body shape. In a second step, we stratified the cohort into body types and assessed their stability and dependence on the size of the underlying cohort. Using self-organizing maps (SOM) we identified thirteen robust meta-measures and fifteen body types comprising between 1 and 18 percent of the total cohort size. Thirteen of them are virtually gender specific (six for women and seven for men) and thus reflect the most abundant body shapes of women and men. Two body types include both women and men, and describe androgynous body shapes that lack typical gender specific features. The body types disentangle a large variability of body shapes, enabling distinctions which go beyond traditional indices such as the body mass index, the waist-to-height ratio, the waist-to-hip ratio and the mortality-hazard ABSI-index. As a next step, we will link the identified body types with disease predispositions to study how size and shape of the human body impact health and disease. PMID:27467550

  19. Real-time inverse planning for Gamma Knife radiosurgery.

    PubMed

    Wu, Q Jackie; Chankong, Vira; Jitprapaikulsarn, Suradet; Wessels, Barry W; Einstein, Douglas B; Mathayomchan, Boonyanit; Kinsella, Timothy J

    2003-11-01

    The challenges of real-time Gamma Knife inverse planning are the large number of variables involved and the unknown search space a priori. With limited collimator sizes, shots have to be heavily overlapped to form a smooth prescription isodose line that conforms to the irregular target shape. Such overlaps greatly influence the total number of shots per plan, making pre-determination of the total number of shots impractical. However, this total number of shots usually defines the search space, a pre-requisite for most of the optimization methods. Since each shot only covers part of the target, a collection of shots in different locations and various collimator sizes selected makes up the global dose distribution that conforms to the target. Hence, planning or placing these shots is a combinatorial optimization process that is computationally expensive by nature. We have previously developed a theory of shot placement and optimization based on skeletonization. The real-time inverse planning process, reported in this paper, is an expansion and the clinical implementation of this theory. The complete planning process consists of two steps. The first step is to determine an optimal number of shots including locations and sizes and to assign initial collimator size to each of the shots. The second step is to fine-tune the weights using a linear-programming technique. The objective function is to minimize the total dose to the target boundary (i.e., maximize the dose conformity). Results of an ellipsoid test target and ten clinical cases are presented. The clinical cases are also compared with physician's manual plans. The target coverage is more than 99% for manual plans and 97% for all the inverse plans. The RTOG PITV conformity indices for the manual plans are between 1.16 and 3.46, compared to 1.36 to 2.4 for the inverse plans. All the inverse plans are generated in less than 2 min, making real-time inverse planning a reality.

  20. Construction of Core Collections Suitable for Association Mapping to Optimize Use of Mediterranean Olive (Olea europaea L.) Genetic Resources

    PubMed Central

    El Bakkali, Ahmed; Haouane, Hicham; Moukhli, Abdelmajid; Costes, Evelyne; Van Damme, Patrick; Khadari, Bouchaib

    2013-01-01

    Phenotypic characterisation of germplasm collections is a decisive step towards association mapping analyses, but it is particularly expensive and tedious for woody perennial plant species. Characterisation could be more efficient if focused on a reasonably sized subset of accessions, or so-called core collection (CC), reflecting the geographic origin and variability of the germplasm. The questions that arise concern the sample size to use and the genetic parameters that should be optimized in a core collection to make it suitable for association mapping. Here we investigated these questions in olive (Olea europaea L.), a perennial fruit species. By testing different sampling methods and sizes in a worldwide olive germplasm bank (OWGB Marrakech, Morocco) containing 502 unique genotypes characterized by nuclear and plastid loci, a two-step sampling method was proposed. The Shannon-Weaver diversity index was found to be the best criterion to be maximized in the first step using the Core Hunter program. A primary core collection of 50 entries (CC50) was defined that captured more than 80% of the diversity. The latter was subsequently used as a kernel with the Mstrat program to capture the remaining diversity. Two hundred core collections of 94 entries (CC94) were thus built for flexibility in the choice of varieties to be studied. Most entries of both core collections (CC50 and CC94) were revealed to be unrelated due to the low kinship coefficient, whereas a genetic structure spanning the eastern and western/central Mediterranean regions was noted. Linkage disequilibrium was observed in CC94, which was mainly explained by a genetic structure effect as noted for OWGB Marrakech. Since they reflect the geographic origin and diversity of olive germplasm and are of reasonable size, both core collections will be of major interest to develop long-term association studies and thus enhance genomic selection in olive species. PMID:23667437
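The Shannon-Weaver diversity index maximized in the first sampling step can be sketched directly from class (e.g. allele) frequencies; the sample data below are illustrative, not the OWGB genotypes:

```python
from collections import Counter
from math import log

def shannon_weaver(observations):
    """Shannon-Weaver diversity index H' = -sum(p_i * ln p_i)
    over the class frequencies of a sample."""
    counts = Counter(observations)
    n = sum(counts.values())
    return -sum((c / n) * log(c / n) for c in counts.values())

# Four equally frequent classes give the maximum H' = ln(4).
alleles = ["A", "B", "C", "D"] * 25
print(round(shannon_weaver(alleles), 4))  # 1.3863
```

A core-collection sampler like Core Hunter searches for the subset of accessions that keeps this index (computed over marker alleles) as high as possible.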

  1. End-Point Variability Is Not Noise in Saccade Adaptation

    PubMed Central

    Herman, James P.; Cloud, C. Phillip; Wallman, Josh

    2013-01-01

    When each of many saccades is made to overshoot its target, amplitude gradually decreases in a form of motor learning called saccade adaptation. Overshoot is induced experimentally by a secondary, backwards intrasaccadic target step (ISS) triggered by the primary saccade. Surprisingly, however, no study has compared the effectiveness of different sizes of ISS in driving adaptation by systematically varying ISS amplitude across different sessions. Additionally, very few studies have examined the feasibility of adaptation with relatively small ISSs. In order to best understand saccade adaptation at a fundamental level, we addressed these two points in an experiment using a range of small, fixed ISS values (from 0° to 1° after a 10° primary target step). We found that significant adaptation occurred across subjects with an ISS as small as 0.25°. Interestingly, though only adaptation in response to 0.25° ISSs appeared to be complete (the magnitude of change in saccade amplitude was comparable to size of the ISS), further analysis revealed that a comparable proportion of the ISS was compensated for across conditions. Finally, we found that ISS size alone was sufficient to explain the magnitude of adaptation we observed; additional factors did not significantly improve explanatory power. Overall, our findings suggest that current assumptions regarding the computation of saccadic error may need to be revisited. PMID:23555763

  2. Mendelian Randomization.

    PubMed

    Grover, Sandeep; Del Greco M, Fabiola; Stein, Catherine M; Ziegler, Andreas

    2017-01-01

    Confounding and reverse causality have prevented us from drawing meaningful clinical interpretations even in well-powered observational studies. Confounding may be attributed to our inability to randomize the exposure variable in observational studies. Mendelian randomization (MR) is one approach to overcome confounding. It utilizes one or more genetic polymorphisms as a proxy for the exposure variable of interest. Polymorphisms are randomly distributed in a population and static throughout an individual's lifetime, and may thus help in inferring directionality in exposure-outcome associations. Genome-wide association studies (GWAS) or meta-analyses of GWAS are characterized by large sample sizes and the availability of many single nucleotide polymorphisms (SNPs), making GWAS-based MR an attractive approach. GWAS-based MR comes with specific challenges, including multiple causality. Despite these shortcomings, it remains one of the most powerful techniques for inferring causality. With MR still an evolving concept with complex statistical challenges, the literature is relatively scarce in terms of providing working examples incorporating real datasets. In this chapter, we provide a step-by-step guide for causal inference based on the principles of MR with a real dataset using both individual and summary data from unrelated individuals. We suggest best possible practices and give recommendations based on the current literature.
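The chapter's worked examples are not reproduced here, but the core summary-data MR estimators they build on can be sketched: the per-SNP Wald ratio and its inverse-variance-weighted (IVW) combination. Function names and the first-order weights below are illustrative, not the chapter's code:

```python
def wald_ratio(beta_outcome, beta_exposure):
    """Causal effect estimate from a single instrument: the ratio of
    the SNP-outcome to the SNP-exposure association."""
    return beta_outcome / beta_exposure

def ivw_estimate(betas_exp, betas_out, se_out):
    """Inverse-variance-weighted combination of per-SNP Wald ratios,
    using first-order weights (beta_exposure / se_outcome)**2."""
    ratios = [wald_ratio(bo, be) for bo, be in zip(betas_out, betas_exp)]
    weights = [(be / se) ** 2 for be, se in zip(betas_exp, se_out)]
    return sum(w * r for w, r in zip(weights, ratios)) / sum(weights)

# Two SNPs whose outcome effects are half their exposure effects
# imply a causal effect of 0.5 per unit of exposure.
print(ivw_estimate([0.1, 0.2], [0.05, 0.10], [0.01, 0.01]))  # 0.5
```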

  3. Methodological aspects of an adaptive multidirectional pattern search to optimize speech perception using three hearing-aid algorithms

    NASA Astrophysics Data System (ADS)

    Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes

    2004-12-01

    In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies, comprising noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure were studied under test conditions combining different tests, initial settings, background noise types, and step size configurations. Seven normal-hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial in creating clear perceptual differences in the comparisons. The reliability also depends on the starting point, stop criterion, step size constraints, background noise, algorithms used, and the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.

  4. Effects of homogenization treatment on recrystallization behavior of 7150 aluminum sheet during post-rolling annealing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Zhanying (Department of Applied Science, University of Québec at Chicoutimi, Saguenay, QC G7H 2B1); Zhao, Gang

    2016-04-15

    The effects of two homogenization treatments applied to the direct chill (DC) cast billet on the recrystallization behavior in 7150 aluminum alloy during post-rolling annealing have been investigated using the electron backscatter diffraction (EBSD) technique. Following hot and cold rolling to the sheet, measured orientation maps, the recrystallization fraction and grain size, the misorientation angle and the subgrain size were used to characterize the recovery and recrystallization processes at different annealing temperatures. The results were compared between the conventional one-step homogenization and the new two-step homogenization, with the first step being pretreated at 250 °C. Al₃Zr dispersoids with higher densities and smaller sizes were obtained after the two-step homogenization, which strongly retarded subgrain/grain boundary mobility and inhibited recrystallization. Compared with the conventional one-step homogenized samples, a significantly lower recrystallized fraction and a smaller recrystallized grain size were obtained under all annealing conditions after cold rolling in the two-step homogenized samples. Highlights: • Effects of two homogenization treatments on recrystallization in 7150 Al sheets • Quantitative study on the recrystallization evolution during post-rolling annealing • Al₃Zr dispersoids with higher densities and smaller sizes after two-step treatment • Higher recrystallization resistance of 7150 sheets with two-step homogenization.

  5. Variability of Anticipatory Postural Adjustments During Gait Initiation in Individuals With Parkinson Disease.

    PubMed

    Lin, Cheng-Chieh; Creath, Robert A; Rogers, Mark W

    2016-01-01

    In people with Parkinson disease (PD), difficulties with initiating stepping may be related to impairments of anticipatory postural adjustments (APAs). Increased variability in step length and step time has been observed in gait initiation in individuals with PD. In this study, we investigated whether the ability to generate consistent APAs during gait initiation is compromised in these individuals. Fifteen subjects with PD and 8 healthy control subjects were instructed to take rapid forward steps after a verbal cue. The changes in vertical force and ankle marker position were recorded via force platforms and a 3-dimensional motion capture system, respectively. Means, standard deviations, and coefficients of variation of both the timing and magnitude of vertical force, as well as stepping variables, were calculated. During the postural phase of gait initiation, the interval was longer and the force modulation was smaller in subjects with PD. Both the variability of timing and of force modulation were larger in subjects with PD. Individuals with PD also had a longer time to complete the first step, but no significant differences were found for the variability of step time, length, and speed between groups. The increased variability of APAs during gait initiation in subjects with PD could affect posture-locomotion coupling and lead to start hesitation and even falls. Future studies are needed to investigate the effect of rehabilitation interventions on the variability of APAs during gait initiation in individuals with PD. Video abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A119).
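The coefficient of variation used above to compare APA variability across groups is simply the standard deviation scaled by the mean, which makes timing and magnitude measures comparable despite their different units. A minimal sketch with made-up numbers:

```python
from statistics import mean, stdev

def coefficient_of_variation(samples):
    """CV = SD / mean: a scale-free variability measure, here applied
    to repeated measurements of an APA parameter."""
    return stdev(samples) / mean(samples)

# Hypothetical APA onset latencies (s) across trials for one subject.
latencies = [9, 10, 11]
print(coefficient_of_variation(latencies))  # 0.1
```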

  6. Accuracy of the Yamax CW-701 Pedometer for measuring steps in controlled and free-living conditions

    PubMed Central

    Coffman, Maren J; Reeve, Charlie L; Butler, Shannon; Keeling, Maiya; Talbot, Laura A

    2016-01-01

    Objective The Yamax Digi-Walker CW-701 (Yamax CW-701) is a low-cost pedometer that includes a 7-day memory, a 2-week cumulative memory, and automatically resets to zero at midnight. To date, the accuracy of the Yamax CW-701 has not been determined. The purpose of this study was to assess the accuracy of steps recorded by the Yamax CW-701 pedometer compared with actual steps and two other devices. Methods The study was conducted in a campus-based lab and in free-living settings with 22 students, faculty, and staff at a mid-sized university in the Southeastern US. While wearing a Yamax CW-701, Yamax Digi-Walker SW-200, and an ActiGraph GTX3 accelerometer, participants engaged in activities at variable speeds and conditions. To assess accuracy of each device, steps recorded were compared with actual step counts. Statistical tests included paired sample t-tests, percent accuracy, intraclass correlation coefficient, and Bland–Altman plots. Results The Yamax CW-701 demonstrated reliability and concurrent validity during walking at a fast pace and walking on a track, and in free-living conditions. Decreased accuracy was noted walking at a slow pace. Conclusions These findings are consistent with prior research. With most pedometers and accelerometers, adequate force and intensity must be present for a step to register. The Yamax CW-701 is accurate in recording steps taken while walking at a fast pace and in free-living settings. PMID:29942555
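The Bland–Altman analysis reported above compares a device against actual counts via the mean bias and 95% limits of agreement (bias ± 1.96 SD of the paired differences). A minimal sketch with illustrative step counts, not the study's data:

```python
from statistics import mean, stdev

def bland_altman(device, reference):
    """Mean bias and 95% limits of agreement between paired
    device and reference measurements."""
    diffs = [d - r for d, r in zip(device, reference)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

device_steps = [98, 102, 100, 97, 105]
actual_steps = [100, 100, 100, 100, 100]
bias, (lower, upper) = bland_altman(device_steps, actual_steps)
print(bias)  # 0.4
```

Agreement is judged by whether the limits are narrow enough for the intended use, not by the bias alone.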

  8. Genetic variability and effective population size when local extinction and recolonization of subpopulations are frequent

    PubMed Central

    Maruyama, Takeo; Kimura, Motoo

    1980-01-01

    If a population (species) consists of n haploid lines (subpopulations) which reproduce asexually and each of which is subject to random extinction and subsequent replacement, it is shown that, at equilibrium in which mutational production of new alleles and their random extinction balance each other, the genetic diversity (1 minus the sum of squares of allelic frequencies) is given by 2Nev/(1 + 2Nev), where [Formula: see text] in which Ñ is the harmonic mean of the population size per line, n is the number of lines (assumed to be large), λ is the rate of line extinction, and v is the mutation rate (assuming the infinite neutral allele model). In a diploid population (species) consisting of n colonies, if migration takes place between colonies at the rate m (the island model) in addition to extinction and recolonization of colonies, it is shown that effective population size is [Formula: see text] If the rate of colony extinction (λ) is much larger than the migration rate of individuals, the effective population size is greatly reduced compared with the case in which no colony extinctions occur (in which case Ne = nÑ). The stepping-stone type of recolonization scheme is also considered. Bearing of these results on the interpretation of the level of genetic variability at the enzyme level observed in natural populations is discussed from the standpoint of the neutral mutation-random drift hypothesis. PMID:16592920
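The equilibrium diversity formula quoted above, 2Nev/(1 + 2Nev), can be evaluated directly; the parameter values below are illustrative (the bracketed expressions for Ne itself are not reproduced here):

```python
def genetic_diversity(ne, v):
    """Equilibrium genetic diversity under the infinite neutral
    allele model: H = 2*Ne*v / (1 + 2*Ne*v)."""
    theta = 2 * ne * v
    return theta / (1 + theta)

# Diversity approaches 1 as Ne*v grows; frequent line extinction,
# by shrinking Ne, pushes it toward 0.
print(round(genetic_diversity(1e4, 1e-5), 4))  # 0.1667
```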

  9. The associations between physical fitness and cardiometabolic risk and body-size phenotypes in perimenopausal women.

    PubMed

    Gregorio-Arenas, E; Ruiz-Cabello, P; Camiletti-Moirón, D; Moratalla-Cecilia, N; Aranda, P; López-Jurado, M; Llopis, J; Aparicio, V A

    2016-10-01

    To study the association between physical fitness and body-size phenotypes, and to test which aspects of physical fitness show the greatest independent association with cardiometabolic risk in perimenopausal women. This cross-sectional study involved 228 women aged 53 ± 5 years from southern Spain. Physical fitness was assessed by means of the Senior Fitness Test Battery (additionally including handgrip strength and timed up-and-go tests). Anthropometry, resting heart rate, blood pressure and plasma markers of lipid, glycaemic and inflammatory status were measured by standard procedures. The harmonized definition of the 'metabolically healthy but obese' (MHO) phenotype was employed to classify individuals. The overall prevalence of the MHO phenotype was 13%, but it was 43% among the obese women. Apart from traditional markers, metabolically healthy non-obese women had lower levels of C-reactive protein than women with the other phenotypes (p<0.001), and levels of glycosylated haemoglobin were lower in MHO women than in metabolically abnormal non-obese women (overall p=0.004). Most of the components of physical fitness differed with body-size phenotypes. The 6-min walk and the back-scratch tests presented the most robust differences (both p<0.001). Moreover, the women's performance on the back-scratch (β=0.32; p<0.001) and the 6-min walk (β=0.22; p=0.003) tests was independently associated with the clustered cardiometabolic risk. The back-scratch test explained 10% of the variability (step 1, p<0.001), and the final model, which also included the 6-min walk test (step 2, p=0.003), explained 14% of the variability. Low upper-body flexibility was the most important fitness indicator of cardiometabolic risk in perimenopausal women, but cardiorespiratory fitness also played an important role. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Effects of aging on the relationship between cognitive demand and step variability during dual-task walking.

    PubMed

    Decker, Leslie M; Cignetti, Fabien; Hunt, Nathaniel; Potter, Jane F; Stergiou, Nicholas; Studenski, Stephanie A

    2016-08-01

    A U-shaped relationship between cognitive demand and gait control may exist in dual-task situations, reflecting opposing effects of external focus of attention and attentional resource competition. The purpose of the study was twofold: to examine whether gait control, as evaluated from step-to-step variability, is related to cognitive task difficulty in a U-shaped manner and to determine whether age modifies this relationship. Young and older adults walked on a treadmill without attentional requirement and while performing a dichotic listening task under three attention conditions: non-forced (NF), forced-right (FR), and forced-left (FL). The conditions increased in their attentional demand and requirement for inhibitory control. Gait control was evaluated by the variability of step parameters related to balance control (step width) and rhythmic stepping pattern (step length and step time). A U-shaped relationship was found for step width variability in both young and older adults and for step time variability in older adults only. Cognitive performance during dual tasking was maintained in both young and older adults. The U-shaped relationship, which presumably results from a trade-off between an external focus of attention and competition for attentional resources, implies that higher-level cognitive processes are involved in walking in young and older adults. Specifically, while these processes are initially involved only in the control of (lateral) balance during gait, they become necessary for the control of (fore-aft) rhythmic stepping pattern in older adults, suggesting that, with aging, attentional resources come to be needed in all facets of walking. Finally, despite the cognitive resources required by walking, both young and older adults spontaneously adopted a "posture second" strategy, prioritizing the cognitive task over the gait task.

  11. Anticipatory Postural Adjustment During Self-Initiated, Cued, and Compensatory Stepping in Healthy Older Adults and Patients With Parkinson Disease.

    PubMed

    Schlenstedt, Christian; Mancini, Martina; Horak, Fay; Peterson, Daniel

    2017-07-01

    To characterize anticipatory postural adjustments (APAs) across a variety of step initiation tasks in people with Parkinson disease (PD) and healthy subjects. Cross-sectional study. Step initiation was analyzed during self-initiated gait, perceptual cued gait, and compensatory forward stepping after platform perturbation. People with PD were assessed on and off levodopa. University research laboratory. People (N=31) with PD (n=19) and healthy age-matched subjects (n=12). Not applicable. Mediolateral (ML) size of APAs (calculated from center of pressure recordings), step kinematics, and body alignment. With respect to self-initiated gait, the ML size of APAs was significantly larger during the cued condition and significantly smaller during the compensatory condition (P<.001). Healthy subjects and patients with PD did not differ in body alignment during the stance phase prior to stepping. No significant group effect was found for ML size of APAs between healthy subjects and patients with PD. However, the reduction in APA size from cued to compensatory stepping was significantly less pronounced in PD off medication compared with healthy subjects, as indicated by a significant group by condition interaction effect (P<.01). No significant differences were found comparing patients with PD on and off medications. Specific stepping conditions had a significant effect on the preparation and execution of step initiation. Therefore, APA size should be interpreted with respect to the specific stepping condition. Across-task changes in people with PD were less pronounced compared with healthy subjects. Antiparkinsonian medication did not significantly improve step initiation in this mildly affected PD cohort. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  12. Critical Motor Number for Fractional Steps of Cytoskeletal Filaments in Gliding Assays

    PubMed Central

    Li, Xin; Lipowsky, Reinhard; Kierfeld, Jan

    2012-01-01

    In gliding assays, filaments are pulled by molecular motors that are immobilized on a solid surface. By varying the motor density on the surface, one can control the number N of motors that pull simultaneously on a single filament. Here, such gliding assays are studied theoretically using Brownian (or Langevin) dynamics simulations, taking into account the local force balance between motors and filaments as well as the force-dependent velocity of the motors. We focus on the filament stepping dynamics and investigate how single-motor properties such as stalk elasticity and step size determine the presence or absence of fractional steps of the filaments. We show that each gliding assay can be characterized by a critical motor number, Nc. Because of thermal fluctuations, fractional filament steps are only detectable as long as N < Nc. The corresponding fractional filament step size is ℓ/N, where ℓ is the step size of a single motor. We first apply our computational approach to microtubules pulled by kinesin-1 motors. For elastic motor stalks that behave as linear springs with a zero rest length, we determine the critical motor number, and the corresponding distributions of the filament step sizes are in good agreement with the available experimental data. In general, the critical motor number depends on the elastic stalk properties and is reduced for linear springs with a nonzero rest length. Furthermore, Nc is shown to depend quadratically on the motor step size ℓ. Therefore, gliding assays consisting of actin filaments and myosin-V motors are predicted to exhibit fractional filament steps up to a correspondingly larger critical motor number. Finally, we show that fractional filament steps are also detectable for a fixed average motor number as determined by the surface density (or coverage) of the motors on the substrate surface. PMID:22927953

  13. Step-by-step variability of swing phase trajectory area during steady state walking at a range of speeds

    PubMed Central

    Hurt, Christopher P.; Brown, David A.

    2018-01-01

    Background Step kinematic variability has been characterized during gait using spatial and temporal kinematic characteristics. However, people can adopt different trajectory paths both between individuals and even within individuals at different speeds. Single point measures such as minimum toe clearance (MTC) and step length (SL) do not necessarily account for the multiple paths that the foot may take during the swing phase to reach the same foot fall endpoint. The purpose of this study was to test a step-by-step foot trajectory area (SBS-FTA) variability measure that is able to characterize sagittal plane foot trajectories of varying areas, and compare this measure against MTC and SL variability at different speeds. We hypothesize that the SBS-FTA variability would demonstrate increased variability with speed. Second, we hypothesize that SBS-FTA would have a stronger curvilinear fit compared with the CV and SD of SL and MTC. Third, we hypothesize SBS-FTA would be more responsive to change in the foot trajectory at a given speed compared to SL and MTC. Fourth, SBS-FTA variability would not strongly co-vary with SL and MTC variability measures since it represents a different construct related to foot trajectory area variability. Methods We studied 15 nonimpaired individuals during walking at progressively faster speeds. We calculated SL, MTC, and SBS-FTA area. Results SBS-FTA variability increased with speed, had a stronger curvilinear fit compared with the CV and SD of SL and MTC, was more responsive at a given speed, and did not strongly co-vary with SL and MTC variability measures. Conclusion SBS foot trajectory area variability was sensitive to change with faster speeds, captured a relationship that the majority of the other measures did not demonstrate, and did not co-vary strongly with other measures that are also components of the trajectory. PMID:29370202
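    A trajectory-area measure in the spirit of SBS-FTA can be illustrated with the shoelace formula applied to idealized sagittal-plane foot paths. The elliptical path shape, the dimensions, and the number of swing phases below are assumptions for illustration only, not the study's data or exact measure.

```python
import numpy as np

def polygon_area(x, y):
    """Shoelace area enclosed by a closed 2-D trajectory."""
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 100)
areas = []
for _ in range(30):                      # 30 hypothetical swing phases
    a = 0.30 + rng.normal(scale=0.02)    # step-length half-axis varies slightly (m)
    b = 0.05 + rng.normal(scale=0.005)   # toe-clearance half-axis varies (m)
    x, y = a * np.cos(t), b * np.sin(t)  # idealized elliptical foot path
    areas.append(polygon_area(x, y))

areas = np.array(areas)
cv = areas.std() / areas.mean()          # step-by-step area variability
print(areas.mean(), cv)
```

    Unlike single-point measures such as MTC or SL, the enclosed area changes whenever any part of the swing path changes, which is the property the SBS-FTA measure exploits.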

  14. Elbow joint variability for different hand positions of the round off in gymnastics.

    PubMed

    Farana, Roman; Irwin, Gareth; Jandacka, Daniel; Uchytil, Jaroslav; Mullineaux, David R

    2015-02-01

    The aim of the present study was to conduct within-gymnast analyses of biological movement variability in impact forces, elbow joint kinematics and kinetics of expert gymnasts in the execution of the round-off with different hand positions. Six international level female gymnasts performed 10 trials of the round-off from a hurdle step to a back-handspring using two hand positions: parallel and T-shape. Two force plates were used to determine ground reaction forces. Eight infrared cameras were employed to collect the kinematic data automatically. Within-gymnast variability was calculated using the biological coefficient of variation (BCV) discretely for ground reaction force, kinematic and kinetic measures. Variability of the continuous data was quantified using the coefficient of multiple correlations (CMC). Group BCV and CMC were calculated, and t-tests with effect size statistics were used to determine differences between the variability of the two techniques examined in this study. The major observation was a higher level of biological variability in the elbow joint abduction angle and adduction moment of force in the T-shaped hand position. This finding may lead to reduced repetitive abduction stress and thus protect the elbow joint from overload. Knowledge of the differences in biological variability can inform clinicians and practitioners with effective skill selection. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Bearing fault diagnosis under unknown variable speed via gear noise cancellation and rotational order sideband identification

    NASA Astrophysics Data System (ADS)

    Wang, Tianyang; Liang, Ming; Li, Jianyong; Cheng, Weidong; Li, Chuan

    2015-10-01

    The interfering vibration signals of a gearbox often represent a challenging issue in rolling bearing fault detection and diagnosis, particularly under unknown variable rotational speed conditions. Though some methods have been proposed to remove the gearbox interfering signals based on their discrete frequency nature, such methods may not work well under unknown variable speed conditions. As such, we propose a new approach to address this issue. The new approach consists of three main steps: (a) adaptive gear interference removal, (b) fault characteristic order (FCO) based fault detection, and (c) rotational-order-sideband (ROS) based fault type identification. For gear interference removal, an enhanced adaptive noise cancellation (ANC) algorithm has been developed in this study. The new ANC algorithm does not require an additional accelerometer to provide reference input. Instead, the reference signal is adaptively constructed from signal maxima and the instantaneous dominant meshing multiple (IDMM) trend. Key ANC parameters such as filter length and step size have also been tailored to suit the variable speed conditions. The main advantage of using ROS for fault type diagnosis is that it is insusceptible to confusion caused by the co-existence of bearing and gear rotational frequency peaks in the identification of the bearing fault characteristic frequency in the FCO sub-order region. The effectiveness of the proposed method has been demonstrated using both simulation and experimental data. Our experimental study also indicates that the proposed method is applicable regardless of whether the bearing and gear rotational speeds are proportional to each other.
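    The gear-interference removal step builds on adaptive noise cancellation. A generic LMS canceller can be sketched as follows; this is the textbook algorithm, not the paper's enhanced ANC (whose reference input is constructed from signal maxima and the IDMM trend), and the signals and parameters here are toy assumptions.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.01):
    """Generic LMS adaptive noise canceller: subtract from `primary`
    the component that is linearly predictable from `reference`."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # tapped delay line of past reference samples
        e = primary[n] - w @ x              # error = primary minus interference estimate
        w += 2 * mu * e * x                 # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(2)
t = np.arange(5000) / 1000.0
gear = np.sin(2 * np.pi * 60 * t)            # deterministic gear-mesh interference (toy)
bearing = 0.3 * rng.normal(size=len(t))      # broadband bearing signature (toy)
mixed = bearing + gear
clean = lms_cancel(mixed, gear)

tail = slice(2000, None)                     # after the filter has converged
print(np.std(mixed[tail]), np.std(clean[tail] - bearing[tail]))
```

    After convergence the deterministic tone is largely removed while the broadband component of interest passes through, which is the role the ANC stage plays before order analysis.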

  16. Automatic differentiation evaluated as a tool for rotorcraft design and optimization

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.

    1995-01-01

    This paper investigates the use of automatic differentiation (AD) as a means for generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. The original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables; the new FORTRAN program calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation; this method produces derivatives to machine accuracy at a cost that is comparable with that of finite-differencing methods. For this study, an analysis code that consists of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact within machine accuracy and, unlike the derivatives obtained with finite-differencing techniques, do not depend on the selection of step size.
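    The contrast drawn in this record between AD and finite differencing can be illustrated with a minimal forward-mode AD class built on dual numbers. This is a sketch of the general technique (not of ADIFOR, which is a source-to-source FORTRAN transformer); the test function g is hypothetical.

```python
import math

class Dual:
    """Minimal forward-mode AD value: f is the value, d the derivative."""
    def __init__(self, f, d=0.0):
        self.f, self.d = f, d
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.f + other.f, self.d + other.d)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.f * other.f, self.f * other.d + self.d * other.f)  # product rule
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.f), math.cos(self.f) * self.d)  # chain rule

def g(x):
    # works on plain floats and on Dual numbers alike
    return x * x * x + 3 * x.sin() if isinstance(x, Dual) else x**3 + 3 * math.sin(x)

x0 = 1.2
ad = g(Dual(x0, 1.0)).d                    # derivative, exact to machine accuracy
h = 1e-6
fd = (g(x0 + h) - g(x0 - h)) / (2 * h)     # central difference, step-size dependent
exact = 3 * x0**2 + 3 * math.cos(x0)
print(ad - exact, fd - exact)
```

    The AD result matches the analytic derivative to machine precision for any input, whereas the finite-difference error depends on the choice of h, which is exactly the step-size sensitivity the paper highlights.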

  17. Preparation and characterization of ibuprofen-cetyl alcohol beads by melt solidification technique: effect of variables.

    PubMed

    Maheshwari, Manish; Ketkar, Anant R; Chauhan, Bhaskar; Patil, Vinay B; Paradkar, Anant R

    2003-08-11

    Ibuprofen (IBU) exhibits a short half-life, poor compressibility and flowability, and a caking tendency. IBU melt has sufficiently low viscosity and exhibits interfacial tension sufficient to form droplets even at low temperature. A single-step novel melt solidification technique (MST) was developed to produce IBU beads with lower amounts of excipient. The effect of variables was studied using a 3² factorial approach with speed of agitation and amount of cetyl alcohol (CA) as variables. The beads were evaluated using DSC, FT-IR and scanning electron microscopy (SEM). Yield, micromeritic properties, crushing strength and release kinetics were also studied. Spherical beads with a method yield above 90% were obtained. The data were analyzed by response surface methodology. The variables showed a curvilinear relationship with yield in the desired particle size range, crushing strength, and bulk and tap density. The drug release followed non-Fickian case II transport, and the release rate decreased linearly with respect to the amount of CA in the initial stages, followed by curvilinearity at later stages of elution. The effect of changing porosity and tortuosity was well correlated.

  18. Characterizing the Joint Effect of Diverse Test-Statistic Correlation Structures and Effect Size on False Discovery Rates in a Multiple-Comparison Study of Many Outcome Measures

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James

    2011-01-01

    In their 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report the results of a simulation assessing the robustness of their adaptive step-down procedure (GBS) for controlling the false discovery rate (FDR) when normally distributed test statistics are serially correlated. In this study we extend the investigation to the case of multiple comparisons involving correlated non-central t-statistics, in particular when several treatments or time periods are being compared to a control in a repeated-measures design with many dependent outcome measures. In addition, we consider several dependence structures other than serial correlation and illustrate how the FDR depends on the interaction between effect size and the type of correlation structure as indexed by Foerstner's distance metric from an identity matrix. The relationship between the correlation matrix R of the original dependent variables and R*, the correlation matrix of the associated t-statistics, is also studied. In general, R* depends not only on R, but also on sample size and the signed effect sizes for the multiple comparisons.
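    As background for the FDR control discussed in this record, the classic Benjamini-Hochberg step-up procedure can be sketched as follows. Note this is the standard baseline, not the adaptive GBS step-down procedure itself, and the simulated tests below are independent and hypothetical.

```python
import math
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Classic Benjamini-Hochberg step-up FDR procedure (shown as a
    baseline; the GBS method is an adaptive step-DOWN variant with
    different critical values)."""
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= q * np.arange(1, m + 1) / m
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(3)
m, m_true = 200, 20
z = rng.normal(size=m)
z[:m_true] += 4.0                                  # 20 genuinely shifted tests
p = np.array([0.5 * math.erfc(zi / math.sqrt(2)) for zi in z])  # one-sided p-values

reject = benjamini_hochberg(p, q=0.05)
print(reject.sum(), reject[m_true:].sum())         # discoveries, false discoveries
```

    The simulation questions in the record then concern how such procedures behave when the test statistics are non-central t and correlated rather than independent normals.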

  19. Automated margin analysis of contemporary adhesive systems in vitro: evaluation of discriminatory variables.

    PubMed

    Heintze, Siegward D; Forjanic, Monika; Roulet, François-Jean

    2007-08-01

    Using an optical sensor, to automatically evaluate the marginal seal of restorations placed with 21 adhesive systems of all four adhesive categories in cylindrical cavities of bovine dentin, applying different outcome variables, and to evaluate their discriminatory power. Twenty-one adhesive systems were evaluated: three 3-step etch-and-rinse systems, three 2-step etch-and-rinse systems, five 2-step self-etching systems, and ten 1-step self-etching systems. All adhesives were applied in cylindrical cavities in bovine dentin together with Tetric Ceram (n=8). In the control group, no adhesive system was used. After 24 h of storage in water at 37 degrees C, the surface was polished with 4000-grit SiC paper, and epoxy resin replicas were produced. An optical sensor (FRT MicroProf) created 100 profiles of the restoration margin, and an algorithm detected gaps and calculated their depths and widths. The following evaluation criteria were used: percentage of specimens without gaps, the percentage of gap-free profiles in relation to all profiles per specimen, mean gap width, mean gap depth, largest gap, and the modified marginal integrity index MI. The statistical analysis was carried out on log-transformed data for all variables with ANOVA and post-hoc Tukey's test for multiple comparisons. The correlation between the variables was tested with regression analysis, and the pooled data according to the four adhesive categories were compared by applying the Mann-Whitney nonparametric test (p < 0.05). For all the variables that characterized the marginal adaptation, there was great variation from material to material. In general, the etch-and-rinse adhesive systems demonstrated the best marginal adaptation, followed by the 2-step self-etching and the 1-step self-etching adhesives; the latter showed the highest variability in test results between materials and within the same material. The only exception to this rule was Xeno IV, which showed a marginal adaptation that was comparable to that of the best 3-step etch-and-rinse systems. Except for the variables "largest gap" and "mean gap depth", all the other variables had a similar ability to discriminate between materials. Pooled data according to the four adhesive categories revealed statistically significant differences between the one-step self-etching systems and the other three systems, as well as between two-step self-etching and three-step etch-and-rinse systems. With one exception, the one-step self-etching systems yielded the poorest marginal adaptation results and the highest variability between materials and within the same material. Except for the variable "largest gap", the percentage of continuous margin, mean gap width, mean gap depth, and the marginal integrity index MI were closely related to one another and, with the exception of "mean gap depth", showed similar discriminatory power.

  20. A Time Integration Algorithm Based on the State Transition Matrix for Structures with Time Varying and Nonlinear Properties

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2003-01-01

    A variable order method of integrating the structural dynamics equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
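    A minimal sketch of state-transition-matrix time integration for a linear time-invariant test problem follows. The undamped oscillator and all numbers are illustrative assumptions; the paper's method additionally handles time-varying and nonlinear terms, which this sketch omits.

```python
import numpy as np

def expm_taylor(M, terms=25):
    """Matrix exponential via truncated Taylor series (adequate for small ||M||)."""
    out = np.eye(len(M))
    term = np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# undamped oscillator x'' + w^2 x = 0, written as a first-order LTI system
w = 2.0
A = np.array([[0.0, 1.0], [-w**2, 0.0]])
dt = 0.05
Phi = expm_taylor(A * dt)        # state transition matrix for one time step

state = np.array([1.0, 0.0])     # initial displacement 1, velocity 0
for _ in range(2000):            # 100 s of simulated response
    state = Phi @ state          # propagation is exact for the LTI system

energy = 0.5 * state[1]**2 + 0.5 * w**2 * state[0]**2
print(energy)
```

    Because the transition matrix is the exact flow of the linear system, the solution quality is essentially independent of the step size, mirroring the "nearly exact solutions for a wide range of time step sizes" reported in the abstract.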

  1. The Screening Tool of Feeding Problems Applied to Children (STEP-CHILD): Psychometric Characteristics and Associations with Child and Parent Variables

    ERIC Educational Resources Information Center

    Seiverling, Laura; Hendy, Helen M.; Williams, Keith

    2011-01-01

    The present study evaluated the 23-item Screening Tool for Feeding Problems (STEP; Matson & Kuhn, 2001) with a sample of children referred to a hospital-based feeding clinic to examine the scale's psychometric characteristics and then demonstrate how a children's revision of the STEP, the STEP-CHILD, is associated with child and parent variables.…

  2. Audiovisual integration increases the intentional step synchronization of side-by-side walkers.

    PubMed

    Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A

    2017-12-01

    When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner and kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge for the CNS is to derive the best estimate from this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed the highest synchronization performance with auditory and audiovisual cues. This was quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2, human-sized virtual mannequins were implemented. Also, audiovisual stimuli were rendered in real-time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants, results pointed toward optimal integration, consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Stepwise and stagewise approaches for spatial cluster detection

    PubMed Central

    Xu, Jiale

    2016-01-01

    Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performance of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and detection power. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. PMID:27246273

  4. Stepwise and stagewise approaches for spatial cluster detection.

    PubMed

    Xu, Jiale; Gangnon, Ronald E

    2016-05-01

    Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performances of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. Copyright © 2016 Elsevier Ltd. All rights reserved.
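    The forward stepwise idea can be sketched on a 1-D grid of counts, using a Kulldorff-style Poisson likelihood-ratio scan and a crude illustrative stopping rule. The scan statistic, the adjustment step, the threshold, and all data below are simplifying assumptions, not the authors' exact procedure.

```python
import numpy as np

def best_interval(obs, exp):
    """Scan all contiguous intervals; return the interval with the largest
    Poisson log-likelihood ratio for an elevated rate inside it."""
    best_llr, best_span = 0.0, None
    n = len(obs)
    for i in range(n):
        for j in range(i + 1, n + 1):
            o, e = obs[i:j].sum(), exp[i:j].sum()
            if o > e > 0:
                llr = o * np.log(o / e) - (o - e)   # unconditional Poisson LLR
                if llr > best_llr:
                    best_llr, best_span = llr, (i, j)
    return best_llr, best_span

rng = np.random.default_rng(4)
exp = np.full(50, 5.0)                    # expected counts on a 1-D "map"
obs = rng.poisson(exp).astype(float)
obs[10:15] += rng.poisson(8.0, 5)         # planted cluster of elevated counts

clusters = []
for _ in range(3):                        # forward stepwise: add, then adjust
    llr, span = best_interval(obs, exp)
    if span is None or llr < 5.0:         # crude illustrative stopping rule
        break
    clusters.append(span)
    i, j = span
    exp[i:j] = obs[i:j]                   # adjust for the identified cluster
print(clusters)
```

    Each iteration adds the currently most likely cluster and then adjusts the expectation so that subsequent scans look for clusters elsewhere, which is the essence of the forward stepwise search described in the abstract.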

  5. A seminested PCR assay for detection and typing of human papillomavirus based on E1 gene sequences.

    PubMed

    Cavalcante, Gustavo Henrique O; de Araújo, Josélio M G; Fernandes, José Veríssimo; Lanza, Daniel C F

    2018-05-01

    HPV infection is considered one of the leading causes of cervical cancer in the world. To date, more than 180 types of HPV have been described, and viral typing is critical for defining the prognosis of cancer. In this work, a seminested PCR assay that allows fast and inexpensive detection and typing of HPV is presented. The system is based on the amplification of a variable-length region within the viral gene E1, using three primers that potentially anneal in all HPV genomes. The amplicons produced in the first step can be identified by high-resolution electrophoresis or direct sequencing. The seminested step includes nine specific primers which can be used in multiplex or individual reactions to discriminate the main types of HPV by amplicon size differentiation using agarose electrophoresis, reducing the time spent and cost per analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Modeling myosin VI stepping dynamics

    NASA Astrophysics Data System (ADS)

    Tehver, Riina

    Myosin VI is a molecular motor that transports intracellular cargo as well as acts as an anchor. The motor has been measured to have unusually large step size variation and it has been reported to make both long forward and short inchworm-like forward steps, as well as step backwards. We have been developing a model that incorporates this diverse stepping behavior in a consistent framework. Our model allows us to predict the dynamics of the motor under different conditions and investigate the evolutionary advantages of the large step size variation.

  7. Evaluation of TOPLATS on three Mediterranean catchments

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel

    2016-08-01

    Physically based hydrological models are complex tools that provide a complete description of the different processes occurring on a catchment. The TOPMODEL-based Land-Atmosphere Transfer Scheme (TOPLATS) simulates water and energy balances at different time steps, in both lumped and distributed modes. In order to gain insight into the behavior of TOPLATS and its applicability in different conditions, a detailed evaluation needs to be carried out. This study aimed to develop a complete evaluation of TOPLATS including: (1) a detailed review of previous research works using this model; (2) a sensitivity analysis (SA) of the model with two contrasting methods (Morris and Sobol) of different complexity; (3) a 4-step calibration strategy based on a multi-start Powell optimization algorithm; and (4) an analysis of the influence of simulation time step (hourly vs. daily). The model was applied on three catchments of varying size (La Tejeria, Cidacos and Arga), located in Navarre (Northern Spain), and characterized by different levels of Mediterranean climate influence. Both the Morris and Sobol methods showed very similar results, identifying the Brooks-Corey pore size distribution index (B), bubbling pressure (ψc) and hydraulic conductivity decay (f) as the three most influential parameters in TOPLATS overall. After calibration and validation, adequate streamflow simulations were obtained in the two wettest catchments, but the driest (Cidacos) gave poor results in validation, due to the large climatic variability between the calibration and validation periods. To overcome this issue, an alternative random and discontinuous method of calibration/validation period selection was implemented, improving model results.

  8. Effect of experimental and sample factors on dehydration kinetics of mildronate dihydrate: mechanism of dehydration and determination of kinetic parameters.

    PubMed

    Bērziņš, Agris; Actiņš, Andris

    2014-06-01

    The dehydration kinetics of mildronate dihydrate [3-(1,1,1-trimethylhydrazin-1-ium-2-yl)propionate dihydrate] was analyzed in isothermal and nonisothermal modes. The particle size, sample preparation and storage, sample weight, nitrogen flow rate, relative humidity, and sample history were varied in order to evaluate the effect of these factors and to interpret the data obtained from such analysis more accurately. It was determined that comparable kinetic parameters can be obtained in both isothermal and nonisothermal modes. However, dehydration activation energy values obtained in nonisothermal mode varied with the degree of conversion, because the energy of the rate-limiting step differs at higher temperatures. Moreover, carrying out experiments in this mode required consideration of additional experimental complications. Our study of the effects of the different sample and experimental factors revealed information about changes in the energy of the dehydration rate-limiting step and the variable contributions of different rate-limiting steps, and clarified the dehydration mechanism. Procedures for convenient and fast determination of dehydration kinetic parameters are proposed. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
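    Determining an activation energy from isothermal rate constants, as discussed in this record, typically amounts to an Arrhenius fit of ln k against 1/T. A minimal sketch with hypothetical rate constants (the temperatures, Ea, and pre-exponential factor below are invented for illustration):

```python
import numpy as np

# hypothetical isothermal dehydration rate constants k(T); an Arrhenius
# fit of ln k vs 1/T yields the activation energy Ea from the slope
R = 8.314                       # gas constant, J/(mol K)
Ea_true, lnA = 90e3, 25.0       # assumed "true" parameters for the demo
T = np.array([330.0, 340.0, 350.0, 360.0])   # temperatures, K
lnk = lnA - Ea_true / (R * T)   # ln k from the Arrhenius equation

slope, intercept = np.polyfit(1.0 / T, lnk, 1)
Ea_fit = -slope * R             # recover Ea from the fitted slope
print(Ea_fit / 1000, "kJ/mol")
```

    In nonisothermal mode the analogous extraction uses model-fitting or isoconversional methods, which is where the conversion-degree dependence noted in the abstract becomes visible.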

  9. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    PubMed

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
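    A simplified 1-D Fourier beam-propagation sketch is shown below, with a width-based step-size heuristic standing in for the shape-based adaptive rule described in the record. The heuristic, the grid, and all parameters are assumptions for illustration, not the authors' algorithm (and the absorbing boundary layer is omitted).

```python
import numpy as np

# toy 1-D paraxial Fourier beam propagation of a Gaussian beam in free space
N, L, wavelength = 1024, 2e-3, 1e-6
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
kx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
k0 = 2 * np.pi / wavelength

w0 = 50e-6
field = np.exp(-(x / w0) ** 2).astype(complex)   # Gaussian beam at its waist
power0 = (np.abs(field) ** 2).sum()

def rms_width(f):
    I = np.abs(f) ** 2
    return np.sqrt((I * x**2).sum() / I.sum())

z = 0.0
while z < 0.02:                                   # propagate 2 cm
    w = rms_width(field)
    dz = min(1e-3, 0.1 * k0 * w**2)               # tighter beam -> smaller step (heuristic)
    phase = np.exp(-1j * kx**2 / (2 * k0) * dz)   # paraxial free-space transfer function
    field = np.fft.ifft(np.fft.fft(field) * phase)
    z += dz

w_final = rms_width(field)
print(z, w_final)
```

    Scaling the step with the beam's Rayleigh-like length keeps the per-step diffraction phase bounded, which is the same motivation as the shape-based step control in the abstract: rapidly diffracting (tight) beams get finer steps, smooth wide beams get coarser ones.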

  10. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Learn, R.; Feigenbaum, E.

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  11. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE PAGES

    Learn, R.; Feigenbaum, E.

    2016-05-27

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  12. Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples

    NASA Astrophysics Data System (ADS)

    Petit, Johan; Lallemant, Lucile

    2017-05-01

    In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, its concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to optimization of the drying step, large-size spinel samples were obtained.

  13. Arrays of size and distance controlled platinum nanoparticles fabricated by a colloidal method

    NASA Astrophysics Data System (ADS)

    Manzke, Achim; Vogel, Nicolas; Weiss, Clemens K.; Ziener, Ulrich; Plettl, Alfred; Landfester, Katharina; Ziemann, Paul

    2011-06-01

    Based on emulsion polymerization in the presence of a Pt complex, polystyrene (PS) particles were prepared exhibiting a well defined average diameter with narrow size-distribution. Furthermore, the colloids contain a controlled concentration of the Pt precursor complex. Optimized coating of Si substrates with such colloids leads to extended areas of hexagonally ordered close-packed PS particles. Subsequent application of plasma etching and annealing steps allows complete removal of the PS carriers and in parallel nucleation and growth of Pt nanoparticles (NPs) which are located at the original center of the PS colloids. In this way, hexagonally arranged spherical Pt NPs are obtained with controlled size and interparticle distances demonstrating variability and precision with so far unknown parameter scalability. This control is demonstrated by the fabrication of Pt NP arrays at a fixed particle distance of 185 nm while systematically varying the diameters between 8 and 15 nm. Further progress could be achieved by seeded emulsion polymerization. Here, Pt loaded PS colloids of 130 nm were used as seeds for a subsequent additional emulsion polymerization, systematically enlarging the diameter of the PS particles. Applying the plasma and annealing steps as above, in this way hexagonally ordered arrays of 9 nm Pt NPs could be obtained at distances up to 260 nm. To demonstrate their stability, such Pt particles were used as etching masks during reactive ion etching thereby transferring their hexagonal pattern into the Si substrate resulting in corresponding arrays of nanopillars.

  14. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  15. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  16. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  17. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  18. Realistic dust and water cycles in the MarsWRF GCM using coupled two-moment microphysics

    NASA Astrophysics Data System (ADS)

    Lee, Christopher; Richardson, Mark Ian; Mischna, Michael A.; Newman, Claire E.

    2017-10-01

    Dust and water ice aerosols significantly complicate the Martian climate system because the evolution of the two aerosol fields is coupled through microphysics and because both aerosols strongly interact with visible and thermal radiation. The combination of strong forcing feedback and coupling has led to various problems in understanding and modeling of the Martian climate: in reconciling cloud abundances at different locations in the atmosphere, in generating a stable dust cycle, and in preventing numerical instability within models. Using a new microphysics model inside the MarsWRF GCM, we show that fully coupled simulations produce a more realistic simulation of the Martian climate system than dry, dust-only simulations. In the coupled simulations, interannual and intra-annual variability are increased, strong 'solstitial pause' features are produced in both winter high-latitude regions, and dust storm seasons are more varied, with early southern summer (Ls 180) dust storms and/or more than one storm occurring in some seasons. A new microphysics scheme was developed as part of this work and has been included in the MarsWRF model. The scheme uses split spectral/spatial size-distribution numerics with adaptive bin sizes to track particle size evolution. Significantly, this scheme is highly accurate, numerically stable, and capable of running with time steps commensurate with those of the parent atmospheric model.

  19. Selecting predictors for discriminant analysis of species performance: an example from an amphibious softwater plant.

    PubMed

    Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M

    2012-03-01

    Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset: the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
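    As a rough illustration of the univariate screening step, the sketch below computes a Pearson chi-square statistic for each binarized predictor against species presence/absence and keeps only those exceeding the 5% critical value; the data layout and all names are invented for the example and are not from the study:

```python
def chi_square_2x2(presence, factor):
    """Pearson chi-square statistic for a 2x2 table relating species
    presence/absence to a binarized environmental predictor."""
    n = len(presence)
    a = sum(1 for p, f in zip(presence, factor) if p and f)
    b = sum(1 for p, f in zip(presence, factor) if p and not f)
    c = sum(1 for p, f in zip(presence, factor) if not p and f)
    d = n - a - b - c
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    if min(row1, row2, col1, col2) == 0:   # degenerate table: no information
        return 0.0
    return n * (a * d - b * c) ** 2 / (row1 * row2 * col1 * col2)

CRIT_P05_DF1 = 3.841  # chi-square critical value for df=1, alpha=0.05

def screen(predictors, presence):
    """Keep predictors whose univariate statistic exceeds the 5% threshold;
    a step-wise sub-selection would then operate on this subset only."""
    return [name for name, f in predictors.items()
            if chi_square_2x2(presence, f) > CRIT_P05_DF1]
```

    A predictor that tracks presence passes the screen, while a predictor uncorrelated with presence is dropped before any step-wise selection runs.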

  20. Quantifying Grain-Size Variability of Metal Pollutants in Road-Deposited Sediments Using the Coefficient of Variation

    PubMed Central

    Wang, Xiaoxue; Li, Xuyong

    2017-01-01

    Particle grain size is an important indicator of variability in the physical characteristics and pollutant composition of road-deposited sediments (RDS). Quantitative assessment of the grain-size variability in RDS amount, metal concentration, metal load, and GSFLoad is essential to eliminating the uncertainty it causes in estimating RDS emission loads and formulating control strategies. In this study, grain-size variability was explored and quantified using the coefficient of variation (Cv) of the particle size compositions, metal concentrations, metal loads, and GSFLoad values in RDS. Several trends in the grain-size variability of RDS were identified: (i) the variability of the medium class (105–450 µm) in terms of particle size composition, metal loads, and GSFLoad values was smaller than that of the fine (<105 µm) and coarse (450–2000 µm) classes; (ii) the grain-size variability in terms of metal concentrations increased as the particle size increased, while the metal concentrations decreased; (iii) compared with the Lorenz coefficient (Lc), the Cv was similarly effective at describing grain-size variability, while being simpler to calculate because it does not require the data to be pre-processed. The results of this study will facilitate identification of the uncertainty in modelling RDS caused by grain-size class variability. PMID:28788078
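    The Cv itself is straightforward to compute, which is part of the argument for it over the Lorenz coefficient. A minimal sketch, with invented metal-load numbers standing in for real RDS measurements:

```python
from statistics import mean, stdev

def coefficient_of_variation(values):
    """Cv = sample standard deviation / mean, for one grain-size class
    measured across several road-deposited sediment samples."""
    return stdev(values) / mean(values)

# Hypothetical metal loads (mg/kg) for three grain-size classes,
# each measured at five sampling sites (values are made up).
loads = {
    "<105 um":     [310, 280, 350, 295, 330],
    "105-450 um":  [210, 205, 215, 208, 212],
    "450-2000 um": [ 90,  60, 120,  75, 110],
}
cv = {cls: coefficient_of_variation(v) for cls, v in loads.items()}
```

    With these made-up values the medium class yields the smallest Cv, mirroring the reported trend for the 105–450 µm fraction.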

  1. Effects of an aft facing step on the surface of a laminar flow glider wing

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Saiki, Neal

    1993-01-01

    A motor glider was used to perform a flight test study on the effects of aft facing steps in a laminar boundary layer. This study focuses on two dimensional aft facing steps oriented spanwise to the flow. The size and location of the aft facing steps were varied in order to determine the critical size that will force premature transition. Transition over a step was found to be primarily a function of Reynolds number based on step height. Both of the step height Reynolds numbers for premature and full transition were determined. A hot film anemometry system was used to detect transition.

  2. Kinesin Steps Do Not Alternate in Size

    PubMed Central

    Fehr, Adrian N.; Asbury, Charles L.; Block, Steven M.

    2008-01-01

    Abstract Kinesin is a two-headed motor protein that transports cargo inside cells by moving stepwise on microtubules. Its exact trajectory along the microtubule is unknown: alternative pathway models predict either uniform 8-nm steps or alternating 7- and 9-nm steps. By analyzing single-molecule stepping traces from “limping” kinesin molecules, we were able to distinguish alternate fast- and slow-phase steps and thereby to calculate the step sizes associated with the motions of each of the two heads. We also compiled step distances from nonlimping kinesin molecules and compared these distributions against models predicting uniform or alternating step sizes. In both cases, we find that kinesin takes uniform 8-nm steps, a result that strongly constrains the allowed models. PMID:18083906

  3. Growth of group II-VI semiconductor quantum dots with strong quantum confinement and low size dispersion

    NASA Astrophysics Data System (ADS)

    Pandey, Praveen K.; Sharma, Kriti; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2003-11-01

    CdTe quantum dots embedded in a glass matrix are grown using a two-step annealing method. The results of the optical transmission characterization are analysed and compared with the results obtained from CdTe quantum dots grown using the conventional single-step annealing method. A theoretical model for the absorption spectra is used to quantitatively estimate the size dispersion in the two cases. In the present work, it is established that the quantum dots grown using the two-step annealing method have stronger quantum confinement, reduced size dispersion, and a higher volume ratio compared with the single-step annealed samples.

  4. The Technical Efficiency of Specialised Milk Farms: A Regional View

    PubMed Central

    Špička, Jindřich; Smutka, Luboš

    2014-01-01

    The aim of the article is to evaluate the production efficiency, and its determinants, of specialised dairy farming among the EU regions. In most European regions, small specialised farms, including dairy farms, are of relatively high significance. The DEAVRS method (data envelopment analysis with variable returns to scale) reveals efficient and inefficient regions, including the scale efficiency. In the next step, a two-sample t-test determines differences in economic and structural indicators between efficient and inefficient regions. The research reveals that substitution of labour by capital/contract work explains more than 30% of the variability of the farm net value added per AWU (annual work unit) income indicator. The significant economic determinants of production efficiency in specialised dairy farming are farm size, herd size, crop output per hectare, and productivity of energy and capital (at α = 0.01). Specialised dairy farms in efficient regions have significantly higher farm net value added per AWU than those in inefficient regions. Agricultural enterprises in inefficient regions have a more extensive structure and produce more noncommodity output (public goods). Specialised dairy farms in efficient regions have a slightly higher milk yield and specific livestock costs of feed, bedding, and veterinary services per livestock unit. PMID:25050408

  5. A shape-based segmentation method for mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen

    2013-07-01

    Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and acceptable computational cost, and that it segments pole-like objects particularly well.

  6. Modeling ultrasound propagation through material of increasing geometrical complexity.

    PubMed

    Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen

    2018-06-01

    Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
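    Wiener deconvolution as used for this kind of response estimation can be sketched in one line of spectral algebra; the regularization constant `eps` and the synthetic two-echo test signal below are assumptions for illustration, not the study's transducer data:

```python
import numpy as np

def wiener_deconvolve(y, x, eps=1e-3):
    """Estimate a system impulse response h from input x and output y via
    Wiener deconvolution: H = Y * conj(X) / (|X|^2 + eps), where eps is a
    noise-regularization constant (assumed value, not from the paper)."""
    X = np.fft.fft(x)
    Y = np.fft.fft(y)
    H = Y * np.conj(X) / (np.abs(X)**2 + eps)
    return np.real(np.fft.ifft(H))

# Synthetic check: a known two-echo response convolved with a short pulse.
n = 256
h_true = np.zeros(n); h_true[10] = 1.0; h_true[40] = 0.5   # two reflections
x = np.exp(-np.arange(n) / 3.0) * np.cos(np.arange(n))      # transmit pulse
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_true)))  # measured echo
h_est = wiener_deconvolve(y, x)
```

    In the noiseless synthetic case the two reflections are recovered at the correct delays; with real measurements, `eps` trades off noise amplification against bias.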

  7. Analysis of Binary Multivariate Longitudinal Data via 2-Dimensional Orbits: An Application to the Agincourt Health and Socio-Demographic Surveillance System in South Africa

    PubMed Central

    Visaya, Maria Vivien; Sherwell, David; Sartorius, Benn; Cromieres, Fabien

    2015-01-01

    We analyse demographic longitudinal survey data of South African (SA) and Mozambican (MOZ) rural households from the Agincourt Health and Socio-Demographic Surveillance System in South Africa. In particular, we determine whether absolute poverty status (APS) is associated with selected household variables pertaining to socio-economic determination, namely household head age, household size, cumulative death, adults to minor ratio, and influx. For comparative purposes, households are classified according to household head nationality (SA or MOZ) and APS (rich or poor). The longitudinal data of each of the four subpopulations (SA rich, SA poor, MOZ rich, and MOZ poor) is a five-dimensional space defined by binary variables (questions), subjects, and time. We use the orbit method to represent binary multivariate longitudinal data (BMLD) of each household as a two-dimensional orbit and to visualise dynamics and behaviour of the population. At each time step, a point (x, y) from the orbit of a household corresponds to the observation of the household, where x is a binary sequence of responses and y is an ordering of variables. The ordering of variables is dynamically rearranged such that clusters and holes associated to least and frequently changing variables in the state space respectively, are exposed. Analysis of orbits reveals information of change at both individual- and population-level, change patterns in the data, capacity of states in the state space, and density of state transitions in the orbits. Analysis of household orbits of the four subpopulations show association between (i) households headed by older adults and rich households, (ii) large household size and poor households, and (iii) households with more minors than adults and poor households. Our results are compared to other methods of BMLD analysis. PMID:25919116

  8. Transdermal film-loaded finasteride microplates to enhance drug skin permeation: Two-step optimization study.

    PubMed

    Ahmed, Tarek A; El-Say, Khalid M

    2016-06-10

    The goal was to develop an optimized transdermal finasteride (FNS) film loaded with drug microplates (MIC), utilizing two-step optimization, to decrease the dosing schedule and the inconsistency in gastrointestinal absorption. First, a 3-level factorial design was implemented to prepare optimized FNS-MIC of minimum particle size. Second, a Box-Behnken design matrix was used to develop an optimized transdermal FNS-MIC film. Interaction among MIC components was studied using physicochemical characterization tools. Film components, namely hydroxypropyl methyl cellulose (X1), dimethyl sulfoxide (X2), and propylene glycol (X3), were optimized for their effects on the film thickness (Y1) and elongation percent (Y2), and on the FNS steady-state flux (Y3), permeability coefficient (Y4), and diffusion coefficient (Y5) following ex-vivo permeation through rat skin. The morphology of the optimized MIC and transdermal film was also investigated. Results revealed that stabilizer concentration and anti-solvent percent significantly affected the MIC formulation. Optimized FNS-MIC of particle size 0.93 μm were successfully prepared, with no interaction observed among their components. An enhancement of more than 23% in the aqueous solubility of FNS-MIC was achieved. All the studied variables, most of their interactions, and the quadratic effects significantly affected the studied responses (Y1-Y5). Morphological observation showed non-spherical, short rods and flake-like small plates homogeneously distributed in the optimized transdermal film. The ex-vivo study showed enhanced FNS permeation from the MIC-loaded film compared with a film containing the pure drug. MIC are thus a successful technique to enhance the aqueous solubility and skin permeation of a poorly water-soluble drug, especially when loaded into transdermal films. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Reducing the computational footprint for real-time BCPNN learning

    PubMed Central

    Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian

    2015-01-01

    The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally expensive, plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved first by rewriting the model, which reduces the number of basic arithmetic operations per update by one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. PMID:25657618
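    The core speedup, replacing many fixed-step Euler updates of an exponentially decaying trace with one evaluation of its analytic solution per spike event, can be sketched as follows (the time constant and names are illustrative, not the BCPNN implementation itself):

```python
import math

TAU = 20.0  # trace time constant in ms; value assumed for illustration

def euler_decay(z, dt, n_steps):
    """Fixed-step explicit Euler integration of dz/dt = -z / TAU."""
    for _ in range(n_steps):
        z += dt * (-z / TAU)
    return z

def event_driven_decay(z, t_elapsed):
    """Analytic solution z(t) = z0 * exp(-t / TAU), evaluated once per
    event; one call replaces t_elapsed/dt Euler updates."""
    return z * math.exp(-t_elapsed / TAU)

z0, dt, T = 1.0, 1.0, 100.0
z_euler = euler_decay(z0, dt, int(T / dt))   # 100 updates
z_exact = event_driven_decay(z0, T)          # 1 update
```

    The analytic update is exact for pure decay, whereas explicit Euler at dt = 1 ms slightly underestimates the trace (each factor 1 - dt/TAU is below exp(-dt/TAU)), accumulating a visible error over 100 ms.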

  10. A Unified Probabilistic Framework for Dose-Response Assessment of Human Health Effects.

    PubMed

    Chiu, Weihsueh A; Slob, Wout

    2015-12-01

    When chemical health hazards have been identified, probabilistic dose-response assessment ("hazard characterization") quantifies uncertainty and/or variability in toxicity as a function of human exposure. Existing probabilistic approaches differ for different types of endpoints or modes-of-action, lacking a unifying framework. We therefore developed a unified framework for probabilistic dose-response assessment, based on four principles: a) individual and population dose responses are distinct; b) dose-response relationships for all (including quantal) endpoints can be recast as relating to an underlying continuous measure of response at the individual level; c) for effects relevant to humans, "effect metrics" can be specified to define "toxicologically equivalent" sizes for this underlying individual response; and d) dose-response assessment requires making adjustments and accounting for uncertainty and variability. We then derived a step-by-step probabilistic approach for dose-response assessment of animal toxicology data, similar to how nonprobabilistic reference doses are derived, illustrating the approach with example non-cancer and cancer datasets. Probabilistically derived exposure limits are based on estimating a "target human dose" (HDMI), which requires risk-management-informed choices for the magnitude (M) of individual effect being protected against, the remaining incidence (I) of individuals with effects ≥ M in the population, and the percent confidence. In the example datasets, probabilistically derived 90% confidence intervals for HDMI values span a 40- to 60-fold range, where I = 1% of the population experiences effect sizes ≥ M = 1%-10%. Although some implementation challenges remain, this unified probabilistic framework can provide a substantially more complete and transparent characterization of chemical hazards and support better-informed risk management decisions.

  11. Reducing the computational footprint for real-time BCPNN learning.

    PubMed

    Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian

    2015-01-01

    The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally expensive, plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved first by rewriting the model, which reduces the number of basic arithmetic operations per update by one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.

  12. Variable pixel size ionospheric tomography

    NASA Astrophysics Data System (ADS)

    Zheng, Dunyong; Zheng, Hongwei; Wang, Yanjun; Nie, Wenfeng; Li, Chaokui; Ao, Minsi; Hu, Wusheng; Zhou, Wei

    2017-06-01

    A novel ionospheric tomography technique based on variable pixel size was developed for the tomographic reconstruction of the ionospheric electron density (IED) distribution. In variable pixel size computerized ionospheric tomography (VPSCIT) model, the IED distribution is parameterized by a decomposition of the lower and upper ionosphere with different pixel sizes. Thus, the lower and upper IED distribution may be very differently determined by the available data. The variable pixel size ionospheric tomography and constant pixel size tomography are similar in most other aspects. There are some differences between two kinds of models with constant and variable pixel size respectively, one is that the segments of GPS signal pay should be assigned to the different kinds of pixel in inversion; the other is smoothness constraint factor need to make the appropriate modified where the pixel change in size. For a real dataset, the variable pixel size method distinguishes different electron density distribution zones better than the constant pixel size method. Furthermore, it can be non-chided that when the effort is spent to identify the regions in a model with best data coverage. The variable pixel size method can not only greatly improve the efficiency of inversion, but also produce IED images with high fidelity which are the same as a used uniform pixel size method. In addition, variable pixel size tomography can reduce the underdetermined problem in an ill-posed inverse problem when the data coverage is irregular or less by adjusting quantitative proportion of pixels with different sizes. In comparison with constant pixel size tomography models, the variable pixel size ionospheric tomography technique achieved relatively good results in a numerical simulation. A careful validation of the reliability and superiority of variable pixel size ionospheric tomography was performed. 
Finally, according to the results of the statistical analysis and quantitative comparison, the proposed method offers an improvement of 8% compared with conventional constant pixel size tomography models in the forward modeling.
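The abstract notes that segments of the GPS signal path must be assigned to pixels of different sizes during the inversion. A minimal sketch of that bookkeeping for one ray, assuming straight-line propagation, plane geometry, and hypothetical layer thicknesses (the resulting lengths form one row of the design matrix relating pixel electron densities to slant TEC):

```python
import math

def path_lengths(layers, elevation_deg):
    """Length of a straight slant ray inside each vertical layer.

    layers: layer thicknesses (km) from bottom to top -- fine pixels for
    the lower ionosphere, coarse pixels above, mirroring the
    variable-pixel-size parameterization. Plane geometry, no ray bending.
    """
    s = math.sin(math.radians(elevation_deg))
    return [h / s for h in layers]

# hypothetical grid: five fine 20 km pixels below, two coarse 100 km pixels above
grid = [20.0] * 5 + [100.0] * 2
row = path_lengths(grid, 30.0)  # one row of the design matrix in A x = slant TEC
```

With real geometry the ray would also cross pixels horizontally and bend with the ionosphere; this only illustrates how one observation contributes different path lengths to pixels of different sizes.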

  13. Controlling dental enamel-cavity ablation depth with optimized stepping parameters along the focal plane normal using a three axis, numerically controlled picosecond laser.

    PubMed

    Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Sun, Yuchun; Wang, Yong

    2015-02-01

The purpose of this study was to establish a depth-control method for enamel-cavity ablation by optimizing the timing of the focal-plane-normal stepping and the single-step size of a three-axis, numerically controlled picosecond laser. Although it has been proposed that picosecond lasers may be used to ablate dental hard tissue, the viability of such a depth-control method in enamel-cavity ablation remains uncertain. Forty-two enamel slices with approximately level surfaces were prepared and subjected to two-dimensional ablation by a picosecond laser. The additive-pulse layer, n, was set to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, and 70. A three-dimensional microscope was then used to measure the ablation depth, d, to obtain a quantitative function relating n and d. Six enamel slices were then subjected to three-dimensional ablation to produce 10 cavities each, with the additive-pulse layer and single-step size set to corresponding values. The difference between the theoretical and measured values was calculated for both the cavity depth and the ablation depth of a single step. These were used to determine minimum-difference values for both the additive-pulse layer (n) and the single-step size (d). When the additive-pulse layer and the single-step size were set to 5 and 45, respectively, the depth error had a minimum of 2.25 μm, and 450 μm deep enamel cavities were produced. When performing three-dimensional ablation of enamel with a picosecond laser, adjusting the timing of the focal-plane-normal stepping and the single-step size allows the ablation-depth error to be controlled to the order of micrometers.
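The calibration step described, obtaining a quantitative function relating the additive-pulse layer n to the ablation depth d, can be sketched as an ordinary least-squares line fit; the data points below are illustrative, not the study's measurements:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (pure Python)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# illustrative calibration pairs: additive-pulse layers n vs ablation depth d (um)
n_layers = [5, 10, 15, 20, 25, 30]
depth_um = [48, 95, 143, 190, 239, 285]
slope, intercept = fit_line(n_layers, depth_um)  # depth gained per added layer
```

The fitted slope is then what lets the controller pick n and the single-step size for a target cavity depth.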

  14. Critical motor number for fractional steps of cytoskeletal filaments in gliding assays.

    PubMed

    Li, Xin; Lipowsky, Reinhard; Kierfeld, Jan

    2012-01-01

In gliding assays, filaments are pulled by molecular motors that are immobilized on a solid surface. By varying the motor density on the surface, one can control the number N of motors that pull simultaneously on a single filament. Here, such gliding assays are studied theoretically using Brownian (or Langevin) dynamics simulations and taking the local force balance between motors and filaments as well as the force-dependent velocity of the motors into account. We focus on the filament stepping dynamics and investigate how single motor properties such as stalk elasticity and step size determine the presence or absence of fractional steps of the filaments. We show that each gliding assay can be characterized by a critical motor number, N(c). Because of thermal fluctuations, fractional filament steps are only detectable as long as N < N(c). The corresponding fractional filament step size is l/N, where l is the step size of a single motor. We first apply our computational approach to microtubules pulled by kinesin-1 motors. For elastic motor stalks that behave as linear springs with a zero rest length, the critical motor number is found to be N(c) = 4, and the corresponding distributions of the filament step sizes are in good agreement with the available experimental data. In general, the critical motor number N(c) depends on the elastic stalk properties and is reduced to N(c) = 3 for linear springs with a nonzero rest length. Furthermore, N(c) is shown to depend quadratically on the motor step size l. Therefore, gliding assays consisting of actin filaments and myosin-V are predicted to exhibit fractional filament steps up to motor number N = 31. Finally, we show that fractional filament steps are also detectable for a fixed average motor number as determined by the surface density (or coverage) of the motors on the substrate surface.
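The l/N fractional step follows directly from the force balance for identical zero-rest-length linear stalks: at equilibrium the filament sits at the mean of the motor attachment points, so a single motor step of size l displaces the filament by l/N. A minimal illustration (thermal noise and force-velocity effects omitted):

```python
def filament_position(heads):
    """Equilibrium filament position when N identical zero-rest-length
    linear stalks pull on it: the force balance puts it at the mean."""
    return sum(heads) / len(heads)

l = 8.0                       # single-motor step size (nm), kinesin-1-like
heads = [0.0, 0.0, 0.0, 0.0]  # N = 4 motors engaged
x0 = filament_position(heads)
heads[0] += l                 # one motor steps forward
fractional_step = filament_position(heads) - x0   # = l / N
```

In the full simulations of the paper, thermal fluctuations blur these l/N increments, which is what makes them undetectable above the critical motor number.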

  15. Effect of study design on the reported effect of cardiac resynchronization therapy (CRT) on quantitative physiological measures: stratified meta-analysis in narrow-QRS heart failure and implications for planning future studies.

    PubMed

    Jabbour, Richard J; Shun-Shin, Matthew J; Finegold, Judith A; Afzal Sohaib, S M; Cook, Christopher; Nijjer, Sukhjinder S; Whinnett, Zachary I; Manisty, Charlotte H; Brugada, Josep; Francis, Darrel P

    2015-01-06

    Biventricular pacing (CRT) shows clear benefits in heart failure with wide QRS, but results in narrow QRS have appeared conflicting. We tested the hypothesis that study design might have influenced findings. We identified all reports of CRT-P/D therapy in subjects with narrow QRS reporting effects on continuous physiological variables. Twelve studies (2074 patients) met these criteria. Studies were stratified by presence of bias-resistance steps: the presence of a randomized control arm over a single arm, and blinded outcome measurement. Change in each endpoint was quantified using a standardized effect size (Cohen's d). We conducted separate meta-analyses for each variable in turn, stratified by trial quality. In non-randomized, non-blinded studies, the majority of variables (10 of 12, 83%) showed significant improvement, ranging from a standardized mean effect size of +1.57 (95%CI +0.43 to +2.7) for ejection fraction to +2.87 (+1.78 to +3.95) for NYHA class. In the randomized, non-blinded study, only 3 out of 6 variables (50%) showed improvement. For the randomized blinded studies, 0 out of 9 variables (0%) showed benefit, ranging from -0.04 (-0.31 to +0.22) for ejection fraction to -0.1 (-0.73 to +0.53) for 6-minute walk test. Differences in degrees of resistance to bias, rather than choice of endpoint, explain the variation between studies of CRT in narrow-QRS heart failure addressing physiological variables. When bias-resistance features are implemented, it becomes clear that these patients do not improve in any tested physiological variable. Guidance from studies without careful planning to resist bias may be far less useful than commonly perceived. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
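The standardized mean effect size used to pool the endpoints, Cohen's d, is the difference in means divided by the pooled standard deviation. A minimal sketch with illustrative (not trial) data:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# illustrative ejection-fraction values (%), not data from the meta-analysis
treated = [35, 38, 40, 37, 36]
control = [33, 36, 38, 35, 34]
d = cohens_d(treated, control)
```

Expressing each endpoint on this common scale is what allows the meta-analysis to compare effect sizes across variables as different as NYHA class and 6-minute walk distance.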

  16. A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models

    NASA Astrophysics Data System (ADS)

    Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.

    2010-09-01

For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed, and for most applications the resolution of atmospheric model output is too coarse. To avoid biases due to non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called `deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation adequately reconstructs the reference fields. Applying downscaling steps 1 and 2 decreases root mean square errors, and step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and in a fully coupled model system.
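Step 3, the noise generation, can be sketched as a first-order autoregressive (AR(1)) process whose lag-1 autocorrelation is set by the coefficient phi; the target value used here is hypothetical, not taken from the scheme:

```python
import random

def ar1_noise(n, phi, sigma, seed=0):
    """Autoregressive noise e[t] = phi * e[t-1] + w[t], with Gaussian
    white noise w of standard deviation sigma (step 3 of the scheme,
    in sketch form)."""
    rng = random.Random(seed)
    e = [0.0]
    for _ in range(n - 1):
        e.append(phi * e[-1] + rng.gauss(0.0, sigma))
    return e

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    m = sum(x) / len(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

series = ar1_noise(20000, phi=0.8, sigma=1.0, seed=42)
rho = lag1_autocorr(series)   # should land close to phi
```

Adding such noise on top of steps 1 and 2 is what restores the subgrid variability and temporal autocorrelation that deterministic downscaling alone cannot reproduce.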

  17. A cross-sectional study of the relationship between parents' and children's physical activity.

    PubMed

    Stearns, Jodie A; Rhodes, Ryan; Ball, Geoff D C; Boule, Normand; Veugelers, Paul J; Cutumisu, Nicoleta; Spence, John C

    2016-10-28

Though parents' physical activity (PA) is thought to be a predictor of children's PA, findings have been mixed. The purpose of this study was to examine the relationship between pedometer-measured steps/day of parents and their children, and potential moderators of this relationship. We also assessed the parent-child PA relationship as measured by questionnaires. Six hundred and twelve 7-8-year-olds and one of their parents wore Steps Count (SC)-T2 pedometers for four consecutive days. Parents reported their PA from the last seven days and their child's usual PA. Hierarchical linear regressions were used to assess the parent-child PA relationships, controlling for covariates. Gender (parent, child), gender homogeneity, weight status (parent, child), weight status homogeneity, and socioeconomic status (SES) variables (parent education, household income, area-level SES) were tested as potential moderators of this relationship. Partial r's were used as an estimate of effect size. Parents' steps were significantly related to children's steps (r partial = .24). For every 1,000-step increase in parents' steps, the children took 260 additional steps. None of the tested interactions were found to moderate this relationship. Using questionnaires, a relatively smaller parent-child PA relationship was found (r partial = .14). Physically active parents tend to have physically active children. Interventions designed to get children moving more throughout the day could benefit from including a parent component. Future research should explore the mechanisms by which parents influence their children, and other parent attributes and styles as potential moderators.

  18. Microstructure of room temperature ionic liquids at stepped graphite electrodes

    DOE PAGES

    Feng, Guang; Li, Song; Zhao, Wei; ...

    2015-07-14

Molecular dynamics simulations of the room temperature ionic liquid (RTIL) [emim][TFSI] at stepped graphite electrodes were performed to investigate the influence of the thickness of the electrode surface step on the microstructure of interfacial RTILs. A strong correlation was observed between the interfacial RTIL structure and both the step thickness in the electrode surface and the ion size. Specifically, when the step thickness is commensurate with the ion size, the interfacial layering of cation/anion is more evident, whereas the layering tends to be less defined when the step thickness is close to half the ion size. Furthermore, the two-dimensional microstructure of ion layers exhibits different patterns and alignments of the counter-ion/co-ion lattice at neutral and charged electrodes. As the cation/anion layering could impose considerable effects on ion diffusion, the detailed information on interfacial RTILs at stepped graphite presented here should help in understanding the molecular mechanism of RTIL-electrode interfaces in supercapacitors.

  19. The spinal control of locomotion and step-to-step variability in left-right symmetry from slow to moderate speeds

    PubMed Central

    Dambreville, Charline; Labarre, Audrey; Thibaudier, Yann; Hurteau, Marie-France

    2015-01-01

    When speed changes during locomotion, both temporal and spatial parameters of the pattern must adjust. Moreover, at slow speeds the step-to-step pattern becomes increasingly variable. The objectives of the present study were to assess if the spinal locomotor network adjusts both temporal and spatial parameters from slow to moderate stepping speeds and to determine if it contributes to step-to-step variability in left-right symmetry observed at slow speeds. To determine the role of the spinal locomotor network, the spinal cord of 6 adult cats was transected (spinalized) at low thoracic levels and the cats were trained to recover hindlimb locomotion. Cats were implanted with electrodes to chronically record electromyography (EMG) in several hindlimb muscles. Experiments began once a stable hindlimb locomotor pattern emerged. During experiments, EMG and bilateral video recordings were made during treadmill locomotion from 0.1 to 0.4 m/s in 0.05 m/s increments. Cycle and stance durations significantly decreased with increasing speed, whereas swing duration remained unaffected. Extensor burst duration significantly decreased with increasing speed, whereas sartorius burst duration remained unchanged. Stride length, step length, and the relative distance of the paw at stance offset significantly increased with increasing speed, whereas the relative distance at stance onset and both the temporal and spatial phasing between hindlimbs were unaffected. Both temporal and spatial step-to-step left-right asymmetry decreased with increasing speed. Therefore, the spinal cord is capable of adjusting both temporal and spatial parameters during treadmill locomotion, and it is responsible, at least in part, for the step-to-step variability in left-right symmetry observed at slow speeds. PMID:26084910

  20. Aerobic Steps As Measured by Pedometry and Their Relation to Central Obesity

    PubMed Central

    DUCHEČKOVÁ, Petra; FOREJT, Martin

    2014-01-01

Abstract Background The purpose of this study was to examine the relation between daily steps and aerobic steps, and anthropometric variables, using the waist-to-hip ratio (WHR) and waist-to-height ratio (WHtR). Methods The participants in this cross-sectional study had their measurements taken by a trained anthropologist and were then instructed to wear an Omron pedometer for seven consecutive days. A series of statistical tests (Mann-Whitney U test, Kruskal-Wallis ANOVA, multiple comparisons of z' values and contingency tables) was performed in order to assess the relation between daily steps and aerobic steps, and anthropometric variables. Results A total of 507 individuals (380 females and 127 males) participated in the study. The average daily number of steps and aerobic steps was significantly lower in the individuals with risky WHR and WHtR as compared to the individuals with normal WHR (P=0.005) and WHtR (P<0.001). A comparison of age and anthropometric variables across aerobic-step activity categories was statistically significant for all the studied parameters. According to the contingency tables for normal steps, individuals in the low-activity category have a 5.75x higher risk of WHtR>0.50 than those in the high-activity category. Conclusions Both normal and aerobic steps are significantly associated with central obesity and other body composition variables. This result is important for older people, who are more likely to perform low-intensity activities rather than moderate- or high-intensity activities. Our results also indicate that the risk of having WHtR>0.50 can be reduced almost 6-fold by increasing daily steps beyond 8985 steps per day. PMID:25927036
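The reported ~5.75x figure is a risk ratio from a 2x2 contingency table. A minimal sketch, with hypothetical counts chosen only to reproduce a ratio of that size:

```python
def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Risk ratio from a 2x2 contingency table: risk in the exposed
    group divided by risk in the unexposed group."""
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# hypothetical counts: WHtR > 0.50 events in low-activity vs high-activity groups
rr = relative_risk(46, 100, 8, 100)
```

Here 46% of the hypothetical low-activity group versus 8% of the high-activity group exceed the WHtR threshold, giving a risk ratio of 5.75.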

  1. A Taxonomy of Instructional Strategies in Early Childhood Education; Toward a Developmental Theory of Instructional Design.

    ERIC Educational Resources Information Center

    Vance, Barbara

This paper suggests two steps in instructional design for early childhood that can be derived from a recent major paper on instructional strategy taxonomy. These steps, together with the instructional design variables involved in each step, are reviewed relative to current research in child development and early education. The variables reviewed…

  2. Development of nanostructured lipid carriers containing salicylic acid for dermal use based on the Quality by Design method.

    PubMed

    Kovács, A; Berkó, Sz; Csányi, E; Csóka, I

    2017-03-01

The aim of our present work was to evaluate the applicability of the Quality by Design (QbD) methodology in the development and optimization of nanostructured lipid carriers containing salicylic acid (NLC SA). Within the QbD methodology, special emphasis is laid on the adaptation of the initial risk assessment step in order to properly identify the critical material attributes and critical process parameters in formulation development. NLC SA products were formulated by the ultrasonication method using Compritol 888 ATO as solid lipid, Miglyol 812 as liquid lipid and Cremophor RH 60® as surfactant. LeanQbD software and StatSoft Inc. Statistica for Windows 11 were employed to identify the risks. Three highly critical quality attributes (CQAs) for NLC SA were identified, namely particle size, particle size distribution and aggregation. Five attributes of medium influence were identified, including dissolution rate, dissolution efficiency, pH, lipid solubility of the active pharmaceutical ingredient (API) and entrapment efficiency. Three critical material attributes (CMAs) and critical process parameters (CPPs) were identified: surfactant concentration, solid lipid/liquid lipid ratio and ultrasonication time. The CMAs and CPPs are considered as independent variables and the CQAs are defined as dependent variables. A 2³ factorial design was used to evaluate the role of the independent and dependent variables. Based on our experiments, an optimal formulation can be obtained when the surfactant concentration is set to 5%, the solid lipid/liquid lipid ratio is 7:3 and the ultrasonication time is 20 min. The optimal NLC SA showed a narrow size distribution (0.857±0.014) with a mean particle size of 114±2.64 nm. The NLC SA product showed a significantly higher in vitro drug release compared to the micro-particle reference preparation containing salicylic acid (MP SA). Copyright © 2016 Elsevier B.V. All rights reserved.

  3. The Reliability and Validity of Measures of Gait Variability in Community-Dwelling Older Adults

    PubMed Central

    Brach, Jennifer S.; Perera, Subashan; Studenski, Stephanie; Newman, Anne B.

    2009-01-01

Objective To examine the test-retest reliability and concurrent validity of variability of gait characteristics. Design Cross-sectional study. Setting Research laboratory. Participants Older adults (N=558) from the Cardiovascular Health Study. Interventions Not applicable. Main Outcome Measures Gait characteristics were measured using a 4-m computerized walkway. SDs determined from the recorded steps were used as the measures of variability. Intraclass correlation coefficients (ICC) were calculated to examine the test-retest reliability of a single 4-m walk and of two 4-m walks. To establish concurrent validity, the measures of gait variability were compared across levels of health, functional status, and physical activity using independent t tests and analyses of variance. Results Gait variability measures from the two 4-m walks demonstrated greater test-retest reliability than those from the single 4-m walk (ICC=.40–.63 vs ICC=.22–.48). Greater step length and stance time variability were associated with poorer health, functional status and physical activity (P<.05). Conclusions Gait variability calculated from a limited number of steps has fair to good test-retest reliability and concurrent validity. Reliability of gait variability calculated from a greater number of steps should be assessed to determine whether the consistency can be improved. PMID:19061741
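Test-retest reliability of this kind is commonly quantified with a one-way random-effects intraclass correlation, ICC(1,1) in the Shrout-Fleiss notation; the abstract does not state which ICC form was used, so the sketch below is illustrative:

```python
def icc_1_1(subjects):
    """One-way random-effects ICC(1,1) from repeated measurements.

    subjects: one list of k repeated measurements per subject.
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), from a one-way ANOVA.
    """
    n, k = len(subjects), len(subjects[0])
    grand = sum(sum(row) for row in subjects) / (n * k)
    means = [sum(row) / k for row in subjects]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(subjects, means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# perfectly repeatable measurements give ICC = 1
perfect = icc_1_1([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
```

Values near 1 indicate that between-subject differences dominate measurement noise, which is the property the study's two-walk protocol improves over the single walk.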

  4. Molecular imprint of enzyme active site by camel nanobodies: rapid and efficient approach to produce abzymes with alliinase activity.

    PubMed

    Li, Jiang-Wei; Xia, Lijie; Su, Youhong; Liu, Hongchun; Xia, Xueqing; Lu, Qinxia; Yang, Chunjin; Reheman, Kalbinur

    2012-04-20

Screening of inhibitory Ab1 antibodies is a critical step for producing catalytic antibodies in the anti-idiotypic approach. However, the incompatibility between the surface of the active site of the enzyme and the antigen-binding site of heterotetrameric conventional antibodies becomes the limiting step. Because camelid-derived nanobodies possess the potential to preferentially bind to the active site of enzymes due to their small size and long CDR3, we have developed a novel approach to produce antibodies with alliinase activities by exploiting the molecular mimicry of camel nanobodies. By screening a camelid-derived variable domain of the heavy chain (VHH) cDNA phage display library with alliinase, we obtained an inhibitory nanobody VHHA4 that recognizes the active site. Further screening of the same VHH library with VHHA4 led to a higher incidence of anti-idiotypic Ab2 abzymes with alliinase activities. One of the abzymes, VHHC10, showed the highest activity, which can be inhibited by Ab1 VHHA4 and by the alliinase competitive inhibitor penicillamine, and significantly suppressed B16 tumor cell growth in the presence of alliin in vitro. The results highlight the feasibility of producing abzymes via the anti-idiotypic nanobody approach.

  5. A systematic methodology for the robust quantification of energy efficiency at wastewater treatment plants featuring Data Envelopment Analysis.

    PubMed

    Longo, S; Hospido, A; Lema, J M; Mauricio-Iglesias, M

    2018-05-10

This article examines the potential benefits of using Data Envelopment Analysis (DEA) for conducting energy-efficiency assessments of wastewater treatment plants (WWTPs). WWTPs are characteristically heterogeneous (in size, technology, climate, function …), which limits the correct application of DEA. This paper proposes and describes, in its various stages, the Robust Energy Efficiency DEA (REED), a systematic state-of-the-art methodology aimed at including exogenous variables in nonparametric frontier models and especially designed for WWTP operation. In particular, the methodology systematizes the modelling process by presenting an integrated framework for selecting the correct variables and appropriate models, possibly tackling the effect of exogenous factors. As a result, the application of REED improves the quality of the efficiency estimates and hence the significance of benchmarking. For the reader's convenience, the article is presented as a step-by-step guideline that takes the user through the determination of WWTP energy efficiency from beginning to end. The application and benefits of the developed methodology are demonstrated by a case study comparing the energy efficiency of a set of 399 WWTPs operating in different countries and under heterogeneous environmental conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
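In the simplest single-input, single-output case, DEA (CCR) efficiency reduces to each unit's output/input ratio scaled by the best observed ratio; the general multi-input, multi-output case requires solving a linear program per unit, which is omitted here. A sketch with hypothetical plant data:

```python
def dea_efficiency(inputs, outputs):
    """CCR efficiency, single-input single-output special case:
    each unit's output/input ratio divided by the best observed ratio."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# hypothetical WWTPs: energy consumed (input) vs pollutant load treated (output)
energy = [100.0, 80.0, 120.0]
load = [50.0, 48.0, 50.0]
eff = dea_efficiency(energy, load)   # 1.0 marks the efficient frontier
```

REED's contribution sits on top of this core: choosing the variables and correcting for exogenous factors (climate, plant function) before such frontier scores are compared across plants.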

  6. Semiautomatic Segmentation of Glioma on Mobile Devices.

    PubMed

    Wu, Ya-Ping; Lin, Yu-Song; Wu, Wei-Guo; Yang, Cong; Gu, Jian-Qin; Bai, Yan; Wang, Mei-Yun

    2017-01-01

Brain tumor segmentation is the first and most critical step in clinical applications of radiomics. However, segmentation of brain images by radiologists is labor-intensive and prone to inter- and intraobserver variability. Stable and reproducible brain image segmentation algorithms are thus important for successful tumor detection in radiomics. In this paper, we propose a supervised brain image segmentation method, especially for magnetic resonance (MR) brain images with glioma. The method uses hard edge multiplicative intrinsic component optimization to preprocess the glioma medical image on the server side; doctors can then supervise the segmentation process on mobile devices at a convenient time. Since the preprocessed images have the same brightness for the same tissue voxels, they have a small data size (typically 1/10 of the original image size) and a simple structure of 4 intensity values. This allows the follow-up steps to be processed on mobile devices with low bandwidth and limited computing performance. Experiments conducted on 1935 brain slices from 129 patients show that more than 30% of the samples reach 90% similarity, over 60% reach 85% similarity, and more than 80% reach 75% similarity. Comparisons with other segmentation methods also demonstrate both the efficiency and the stability of the proposed approach.

  7. Relative dosimetrical verification in high dose rate brachytherapy using two-dimensional detector array IMatriXX

    PubMed Central

    Manikandan, A.; Biplab, Sarkar; David, Perianayagam A.; Holla, R.; Vivek, T. R.; Sujatha, N.

    2011-01-01

For high dose rate (HDR) brachytherapy, independent treatment verification is needed to ensure that the treatment is performed as prescribed. This study demonstrates dosimetric quality assurance of HDR brachytherapy using a commercially available two-dimensional ion chamber array called IMatriXX, which has a detector separation of 0.7619 cm. The reference isodose length, step size, and source dwell positional accuracy were verified. A total of 24 dwell positions were verified for positional accuracy, giving a total error (systematic and random) of –0.45 mm with a standard deviation of 1.01 mm and a maximum error of 1.8 mm. Using a step size of 5 mm, the reference isodose length (the length of the 100% isodose line) was verified for single and multiple catheters of the same and different source loadings. An error ≤1 mm was measured in 57% of the tests analyzed. Step size verification for 2, 3, 4, and 5 cm was performed, and 70% of the step size errors were below 1 mm, with a maximum of 1.2 mm. Step sizes ≤1 cm could not be verified by the IMatriXX, as it could not resolve the peaks in the dose profile. PMID:21897562

  8. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. 
A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.

  9. Improving Efficiency in Multi-Strange Baryon Reconstruction in d-Au at STAR

    NASA Astrophysics Data System (ADS)

    Leight, William

    2003-10-01

We report preliminary multi-strange baryon measurements for d-Au collisions recorded at RHIC by the STAR experiment. After using classical topological analysis, in which cuts for each discriminating variable are adjusted by hand, we investigate improvements in signal-to-noise optimization using Linear Discriminant Analysis (LDA). LDA is an algorithm for finding, in the n-dimensional space of the n discriminating variables, the axis on which the signal and noise distributions are most separated. LDA is the first step in moving towards more sophisticated techniques for signal-to-noise optimization, such as artificial neural nets. Because of their relatively low background and sufficiently high yields, d-Au collisions form an ideal system in which to study these possibilities for improving reconstruction methods. Such improvements will be extremely important for forthcoming Au-Au runs, in which the size of the combinatoric background is a major problem in reconstruction efforts.
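The LDA step described, finding the axis on which signal and noise are most separated, computes w ∝ Sw⁻¹(μ_signal − μ_noise), with Sw the pooled within-class scatter matrix. A minimal two-variable sketch on toy data (not STAR observables):

```python
def fisher_direction(signal, noise):
    """Fisher/LDA axis w ~ Sw^-1 (mu_signal - mu_noise) for 2-D points."""
    def mean(rows):
        n = float(len(rows))
        return (sum(r[0] for r in rows) / n, sum(r[1] for r in rows) / n)

    def add_scatter(rows, mu, s):
        for x, y in rows:
            dx, dy = x - mu[0], y - mu[1]
            s[0] += dx * dx
            s[1] += dx * dy
            s[2] += dy * dy

    mu_s, mu_n = mean(signal), mean(noise)
    s = [0.0, 0.0, 0.0]              # pooled within-class scatter [sxx, sxy, syy]
    add_scatter(signal, mu_s, s)
    add_scatter(noise, mu_n, s)
    sxx, sxy, syy = s
    det = sxx * syy - sxy * sxy
    dx, dy = mu_s[0] - mu_n[0], mu_s[1] - mu_n[1]
    # 2x2 matrix inverse applied to the mean difference
    return ((syy * dx - sxy * dy) / det, (sxx * dy - sxy * dx) / det)

signal = [(2.0, 2.0), (2.5, 2.0), (2.0, 2.5), (1.5, 1.5)]
noise = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (-0.5, -0.5)]
w = fisher_direction(signal, noise)   # axis separating the two clouds
```

Projecting candidates onto w replaces n hand-tuned cuts with a single cut on one well-separated variable.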

  10. Fuzzy support vector machines for adaptive Morse code recognition.

    PubMed

    Yang, Cheng-Hong; Jin, Li-Cheng; Chuang, Li-Yeh

    2006-11-01

    Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, facilitating mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as a communication adaptive device for persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool. Therefore, an adaptive automatic recognition method with a high recognition rate is needed. The proposed system uses both fuzzy support vector machines and the variable-degree variable-step-size least-mean-square algorithm to achieve these objectives. We apply fuzzy memberships to each point, and provide different contributions to the decision learning function for support vector machines. Statistical analyses demonstrated that the proposed method elicited a higher recognition rate than other algorithms in the literature.
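The variable-step-size least-mean-square idea referenced here adapts the LMS step size from the error power, so adaptation is fast when the error is large and precise once it is small. A sketch of one common update rule (a Kwong-Johnston-style recursion; the paper's variable-degree variant is not reproduced), applied to identifying a known FIR filter:

```python
import random

def vss_lms(x, d, taps=4, mu0=0.05, alpha=0.97, gamma=0.01,
            mu_min=0.002, mu_max=0.1):
    """LMS with a variable step size: mu grows with the squared error
    and decays geometrically, clipped to [mu_min, mu_max]."""
    w = [0.0] * taps
    mu = mu0
    errs = []
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]               # newest sample first
        y = sum(wi * ui for wi, ui in zip(w, u))      # filter output
        e = d[n] - y                                  # a-priori error
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
        mu = min(mu_max, max(mu_min, alpha * mu + gamma * e * e))
        errs.append(e)
    return w, errs

# identify a known FIR channel from input/output pairs (noiseless demo)
rng = random.Random(1)
h = [0.5, -0.3, 0.2, 0.1]
x = [rng.uniform(-1.0, 1.0) for _ in range(3000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w, errs = vss_lms(x, d)   # w converges toward h
```

For the Morse-code application the same idea tracks a user's drifting dot/dash durations: large timing errors temporarily raise the adaptation rate.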

  11. TRUMP. Transient & S-State Temperature Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.C.; Turner, W.D.

    1992-03-03

TRUMP solves a general nonlinear parabolic partial differential equation describing flow in various kinds of potential fields, such as fields of temperature, pressure, or electricity and magnetism; simultaneously, it will solve two additional equations representing, in thermal problems, heat production by decomposition of two reactants having rate constants with a general Arrhenius temperature dependence. Steady-state and transient flow in one, two, or three dimensions are considered in geometrical configurations having simple or complex shapes and structures. Problem parameters may vary with spatial position, time, or primary dependent variables, temperature, pressure, or field strength. Initial conditions may vary with spatial position, and among the criteria that may be specified for ending a problem are upper and lower limits on the size of the primary dependent variable, upper limits on the problem time or on the number of time-steps or on the computer time, and attainment of steady state.
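The termination criteria listed, bounds on the dependent variable, a step limit, and attainment of steady state, can be sketched with a minimal explicit time-stepper for 1-D transient conduction (not TRUMP's actual discretization):

```python
def heat_solve(u, alpha=0.25, max_steps=100000, tol=1e-9,
               u_min=None, u_max=None):
    """Explicit time-stepping for 1-D diffusion with TRUMP-style stopping
    criteria: bounds on the dependent variable, a step limit, and
    attainment of steady state. alpha = D*dt/dx^2 (stable for <= 0.5);
    the two boundary values are held fixed."""
    u = list(u)
    for step in range(max_steps):
        new = [u[0]] + [u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
                        for i in range(1, len(u) - 1)] + [u[-1]]
        change = max(abs(a - b) for a, b in zip(new, u))
        u = new
        if u_max is not None and max(u) > u_max:
            return u, step + 1, 'upper limit reached'
        if u_min is not None and min(u) < u_min:
            return u, step + 1, 'lower limit reached'
        if change < tol:
            return u, step + 1, 'steady state'
    return u, max_steps, 'step limit'

# rod with ends held at 0 and 100: the steady state is a straight line
u, steps, reason = heat_solve([0.0] * 10 + [100.0])
```

The run ends with whichever criterion fires first, mirroring how such codes let the user bound either the solution or the compute budget.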

  12. A general optimality criteria algorithm for a class of engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Belegundu, Ashok D.

    2015-05-01

    An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems that occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with a large number of variables.
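    The fixed-point resizing idea can be shown on a toy separable problem satisfying the monotonicity assumptions: minimize a linear cost subject to one monotone constraint. This is a generic textbook-style OC update, not the paper's algorithm; the problem data are invented:

```python
def oc_resize(c, a, G, iters=50, eta=0.5):
    """Optimality-criteria fixed point for
        min sum(c_i * x_i)   s.t.   sum(a_i / x_i) <= G,  x_i > 0.
    At the optimum c_i = lam * a_i / x_i**2, so the resizing ratio
    D_i = lam * a_i / (c_i * x_i**2) is driven to 1 for every i."""
    x = [1.0] * len(c)
    for _ in range(iters):
        # scale the design so the constraint is active
        s = sum(ai / xi for ai, xi in zip(a, x)) / G
        x = [xi * s for xi in x]
        # crude multiplier estimate: average of the stationarity ratios
        lam = sum(ci * xi ** 2 / ai
                  for ci, xi, ai in zip(c, x, a)) / len(c)
        # damped multiplicative resizing (exponent eta)
        x = [xi * (lam * ai / (ci * xi ** 2)) ** eta
             for xi, ci, ai in zip(x, c, a)]
    s = sum(ai / xi for ai, xi in zip(a, x)) / G
    return [xi * s for xi in x]

x = oc_resize([1.0, 2.0], [4.0, 1.0], 1.0)
```

    Each sweep costs a fixed number of function evaluations regardless of the number of variables, which is the scalability property the abstract highlights.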

  14. Location specific solidification microstructure control in electron beam melting of Ti-6Al-4V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narra, Sneha P.; Cunningham, Ross; Beuth, Jack

    Relationships between prior beta grain size in solidified Ti-6Al-4V and melting process parameters in the Electron Beam Melting (EBM) process are investigated. Samples are built by varying a machine-dependent proprietary speed function to cover the process space. Optical microscopy is used to measure prior beta grain widths and assess the number of prior beta grains present in a melt pool in the raster region of the build. Despite the complicated evolution of beta grain sizes, the beta grain width scales with melt pool width. The resulting understanding of the relationship between primary machine variables and prior beta grain widths is a key step toward enabling the location specific control of as-built microstructure in the EBM process. Control of grain width in separate specimens and within a single specimen is demonstrated.

  15. Linear micromechanical stepping drive for pinhole array positioning

    NASA Astrophysics Data System (ADS)

    Endrödy, Csaba; Mehner, Hannes; Grewe, Adrian; Hoffmann, Martin

    2015-05-01

    A compact linear micromechanical stepping drive for positioning a 7 × 5.5 mm2 optical pinhole array is presented. The system features a step size of 13.2 µm and a full displacement range of 200 µm. The electrostatic inch-worm stepping mechanism shows a compact design capable of positioning a payload 50% of its own weight. The stepping drive movement, step sizes and position accuracy are characterized. The actuated pinhole array is integrated in a confocal chromatic hyperspectral imaging system, where coverage of the object plane, and therefore the useful picture data, can be multiplied by 14 in contrast to a non-actuated array.

  16. The 1981 NASA ASEE Summer Faculty Fellowship Program, volume 2

    NASA Technical Reports Server (NTRS)

    Robertson, N. G.; Huang, C. J.

    1981-01-01

    A collection of papers on miscellaneous subjects in aerospace research is presented. Topics discussed are: (1) Langmuir probe theory and the problem of anisotropic collection; (2) anthropometric program analysis of reach and body movement; (3) analysis of IV characteristics of negatively biased panels in a magnetoplasma; (4) analytic solution to the classical two body drag problem; (5) fast variable step size integration algorithm for computer simulations of physiological systems; (6) spectroscopic experimental computer assisted empirical model for the production and energetics of excited oxygen molecules formed by atom recombination on shuttle tile surfaces; and (7) capillary priming characteristics of a dual passage heat pipe in zero-g.

  17. Automatic measurement of images on astrometric plates

    NASA Astrophysics Data System (ADS)

    Ortiz Gil, A.; Lopez Garcia, A.; Martinez Gonzalez, J. M.; Yershov, V.

    1994-04-01

    We present some results on the process of automatic detection and measurement of objects in overlapped fields of astrometric plates. The main steps of our algorithm are the following: determination of the scale and tilt between the charge-coupled device (CCD) and microscope coordinate systems, and estimation of the signal-to-noise ratio in each field; image identification and improvement of its position and size; image final centering; and image selection and storage. Several parameters allow the use of variable criteria for image identification, characterization and selection. Problems related to faint images and crowded fields will be approached by special techniques (morphological filters, histogram properties and fitting models).

  18. A Numerical Scheme for Ordinary Differential Equations Having Time Varying and Nonlinear Coefficients Based on the State Transition Matrix

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2002-01-01

    A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
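    For a linear time-invariant system the state transition matrix is available in closed form, and a single step is then exact at any step size, which is the property the method builds on. A minimal sketch for the harmonic oscillator (illustrative only, not the paper's variable-order scheme):

```python
import math

def stm_step(x, v, omega, h):
    """Advance x'' = -omega**2 * x by a step h using the closed-form
    state transition matrix exp(A*h); exact for any h, large or small."""
    c, s = math.cos(omega * h), math.sin(omega * h)
    return (c * x + (s / omega) * v,
            -omega * s * x + c * v)

# one large step agrees with 1000 small ones to rounding error
omega, h = 2.0, 1.7
x1, v1 = stm_step(1.0, 0.0, omega, h)
x2, v2 = 1.0, 0.0
for _ in range(1000):
    x2, v2 = stm_step(x2, v2, omega, h / 1000)
```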

  19. Neural correlates of gait variability in people with multiple sclerosis with fall history.

    PubMed

    Kalron, Alon; Allali, Gilles; Achiron, Anat

    2018-05-28

    Investigate the association between step time variability and related brain structures in accordance with fall status in people with multiple sclerosis (PwMS). The study included 225 PwMS. A whole-brain MRI was performed by a high-resolution 3.0-Tesla MR scanner in addition to volumetric analysis based on 3D T1-weighted images using the FreeSurfer image analysis suite. Step time variability was measured by an electronic walkway. Participants were defined as "fallers" (at least two falls during the previous year) and "non-fallers". One hundred and five PwMS were defined as fallers and had a greater step time variability compared to non-fallers (5.6% (S.D.=3.4) vs. 3.4% (S.D.=1.5); p=0.001). MS fallers exhibited a reduced volume in the left caudate and both cerebellum hemispheres compared to non-fallers. Linear regression analysis found no association between gait variability and related brain structures in the total cohort and the non-faller group. However, the analysis found an association between the left hippocampus and left putamen volumes with step time variability in the faller group (p=0.031 and 0.048, respectively), controlling for total cranial volume, walking speed, disability, age and gender. Nevertheless, according to the hierarchical regression model, the contribution of these brain measures to predicting gait variability was relatively small compared to walking speed. An association between low left hippocampal and putamen volumes and step time variability was found in PwMS with a history of falls, suggesting brain structural characteristics may be related to falls and increased gait variability in PwMS. This article is protected by copyright. All rights reserved.

  20. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
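    The level-ground checks described can be mimicked in a few lines: simulate uncorrelated and correlated random walks with a uniform step size and compare their mean square displacements (an illustrative sketch; all parameter values are assumptions):

```python
import math, random

def walk_msd(n_steps=200, n_walks=400, step=1.0, turn_sigma=None, seed=1):
    """Mean square displacement of 2-D fixed-step walks.  turn_sigma=None
    draws a fresh heading each step (plain RW); a finite turn_sigma adds
    a small random turn to the previous heading (correlated RW)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x = y = 0.0
        theta = rng.uniform(0, 2 * math.pi)
        for _ in range(n_steps):
            if turn_sigma is None:
                theta = rng.uniform(0, 2 * math.pi)
            else:
                theta += rng.gauss(0, turn_sigma)
            x += step * math.cos(theta)
            y += step * math.sin(theta)
        total += x * x + y * y
    return total / n_walks

msd_rw = walk_msd()                  # ~ n_steps * step**2 for a plain RW
msd_crw = walk_msd(turn_sigma=0.3)   # directional persistence inflates it
```

    For the plain RW the MSD grows linearly with the number of steps, while the persistence of the CRW multiplies it by roughly (1 + c)/(1 - c), where c is the mean cosine of the turning angle.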

  1. Experimental study on the stability and failure of individual step-pool

    NASA Astrophysics Data System (ADS)

    Zhang, Chendi; Xu, Mengzhen; Hassan, Marwan A.; Chartrand, Shawn M.; Wang, Zhaoyin

    2018-06-01

    Step-pools are one of the most common bedforms in mountain streams, the stability and failure of which play a significant role for riverbed stability and fluvial processes. Given this importance, flume experiments were performed with a manually constructed step-pool model. The experiments were carried out with a constant flow rate to study features of step-pool stability as well as failure mechanisms. The results demonstrate that motion of the keystone grain (KS) caused 90% of the total failure events. The pool reached its maximum depth and either exhibited relative stability for a period before step failure, which was called the stable phase, or the pool collapsed before its full development. The critical scour depth for the pool increased linearly with discharge until the trend was interrupted by step failure. Variability of the stable phase duration ranged by one order of magnitude, whereas variability of pool scour depth was constrained within 50%. Step adjustment was detected in almost all of the runs with step-pool failure and was one or two orders smaller than the diameter of the step stones. Two discharge regimes for step-pool failure were revealed: one regime captures threshold conditions and frames possible step-pool failure, whereas the second regime captures step-pool failure conditions and is the discharge of an exceptional event. In the transitional stage between the two discharge regimes, pool and step adjustment magnitude displayed relatively large variabilities, which resulted in feedbacks that extended the duration of step-pool stability. Step adjustment, which was a type of structural deformation, increased significantly before step failure. As a result, we consider step deformation as the direct explanation to step-pool failure rather than pool scour, which displayed relative stability during step deformations in our experiments.

  2. Effect of water hardness on cardiovascular mortality: an ecological time series approach.

    PubMed

    Lake, I R; Swift, L; Catling, L A; Abubakar, I; Sabel, C E; Hunter, P R

    2010-12-01

    Numerous studies have suggested an inverse relationship between drinking water hardness and cardiovascular disease. However, the weight of evidence is insufficient for the WHO to implement a health-based guideline for water hardness. This study followed WHO recommendations to assess the feasibility of using ecological time series data from areas exposed to step changes in water hardness to investigate this issue. Monthly time series of cardiovascular mortality data, subdivided by age and sex, were systematically collected from areas reported to have undergone step changes in water hardness, calcium and magnesium in England and Wales between 1981 and 2005. Time series methods were used to investigate the effect of water hardness changes on mortality. No evidence was found of an association between step changes in drinking water hardness or drinking water calcium and cardiovascular mortality. The lack of areas with large populations and a reasonable change in magnesium levels precludes a definitive conclusion about the impact of this cation. We use our results on the variability of the series to consider the data requirements (size of population, time of water hardness change) for such a study to have sufficient power. Only data from areas with large populations (>500,000) are likely to be able to detect a change of the size suggested by previous studies (rate ratio of 1.06). Ecological time series studies of populations exposed to changes in drinking water hardness may not be able to provide conclusive evidence on the links between water hardness and cardiovascular mortality unless very large populations are studied. Investigations of individuals may be more informative.

  3. Dependence of Hurricane intensity and structures on vertical resolution and time-step size

    NASA Astrophysics Data System (ADS)

    Zhang, Da-Lin; Wang, Xiaoxue

    2003-09-01

    In view of the growing interests in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step sizes on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with the finest grid size of 6 km. It is shown that changing vertical resolution and time-step size has significant effects on hurricane intensity and inner-core cloud/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure and stronger three-dimensional winds, and more precipitation. Similar effects, but to a lesser extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on the hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable to model more realistically the intensity and inner-core structures and evolution of tropical storms as well as the other convectively driven weather systems.

  4. Defining process design space for a hydrophobic interaction chromatography (HIC) purification step: application of quality by design (QbD) principles.

    PubMed

    Jiang, Canping; Flansburg, Lisa; Ghose, Sanchayita; Jorjorian, Paul; Shukla, Abhinav A

    2010-12-15

    The concept of design space has been taking root under the quality by design paradigm as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. This paper outlines the development of a design space for a hydrophobic interaction chromatography (HIC) process step. The design space included the impact of raw material lot-to-lot variability and variations in the feed stream from cell culture. A failure modes and effects analysis was employed as the basis for the process characterization exercise. During mapping of the process design space, the multi-dimensional combination of operational variables were studied to quantify the impact on process performance in terms of yield and product quality. Variability in resin hydrophobicity was found to have a significant influence on step yield and high-molecular weight aggregate clearance through the HIC step. A robust operating window was identified for this process step that enabled a higher step yield while ensuring acceptable product quality. © 2010 Wiley Periodicals, Inc.

  5. Statistical Analyses of Femur Parameters for Designing Anatomical Plates.

    PubMed

    Wang, Lin; He, Kunjin; Chen, Zhengming

    2016-01-01

    Femur parameters are key prerequisites for scientifically designing anatomical plates. Meanwhile, individual differences in femurs present a challenge to designing well-fitting anatomical plates. Therefore, to design anatomical plates more scientifically, analyses of femur parameters with statistical methods were performed in this study. The specific steps were as follows. First, taking eight anatomical femur parameters as variables, 100 femur samples were classified into three classes with factor analysis and Q-type cluster analysis. Second, based on the mean parameter values of the three classes of femurs, three sizes of average anatomical plates corresponding to the three classes of femurs were designed. Finally, based on Bayes discriminant analysis, a new femur could be assigned to the proper class. Thereafter, the average anatomical plate suitable for that new femur was selected from the three available sizes of plates. Experimental results showed that the classification of femurs was quite reasonable from the anatomical point of view. For instance, three sizes of condylar buttress plates were designed. Meanwhile, the classes of 20 new femurs were determined, and suitable condylar buttress plates were then selected for them.

  6. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of calculation cells should be small and the calculation should be transient with small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that the simulation in practice is difficult. This study investigates the requirements of cell size and the time step size for accurate simulations, and the filtering effects caused by coarser mesh and longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.

  7. Student Failures on First-Year Medical Basic Science Courses and the USMLE Step 1: A Retrospective Study over a 20-Year Period

    ERIC Educational Resources Information Center

    Burns, E. Robert; Garrett, Judy

    2015-01-01

    Correlates of achievement in the basic science years in medical school and on the Step 1 of the United States Medical Licensing Examination® (USMLE®), (Step 1) in relation to preadmission variables have been the subject of considerable study. Preadmissions variables such as the undergraduate grade point average (uGPA) and Medical College Admission…

  8. WAKES: Wavelet Adaptive Kinetic Evolution Solvers

    NASA Astrophysics Data System (ADS)

    Mardirian, Marine; Afeyan, Bedros; Larson, David

    2016-10-01

    We are developing a general capability to adaptively solve phase space evolution equations, mixing particle and continuum techniques in an adaptive manner. The multi-scale approach is achieved using wavelet decompositions, which allow phase space density estimation to occur with scale-dependent increased accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step-size control in explicit vs. implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair plasma generalization of the former and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have no fluid limit nor a linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a Grant from the AFOSR.

  9. Protein complex purification from Thermoplasma acidophilum using a phage display library.

    PubMed

    Hubert, Agnes; Mitani, Yasuo; Tamura, Tomohiro; Boicu, Marius; Nagy, István

    2014-03-01

    We developed a novel protein complex isolation method using a single-chain variable fragment (scFv) based phage display library in a two-step purification procedure. We adapted the antibody-based phage display technology which has been developed for single target proteins to a protein mixture containing about 300 proteins, mostly subunits of Thermoplasma acidophilum complexes. T. acidophilum protein specific phages were selected and corresponding scFvs were expressed in Escherichia coli. E. coli cell lysate containing the expressed His-tagged scFv specific against one antigen protein and T. acidophilum crude cell lysate containing intact target protein complexes were mixed, incubated and subjected to protein purification using affinity and size exclusion chromatography steps. This method was confirmed to isolate intact particles of thermosome and proteasome suitable for electron microscopy analysis and provides a novel protein complex isolation strategy applicable to organisms where no genetic tools are available. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Data-Driven Simulation-Enhanced Optimization of People-Based Print Production Service

    NASA Astrophysics Data System (ADS)

    Rai, Sudhendu

    This paper describes a systematic six-step data-driven simulation-based methodology for optimizing people-based service systems on a large distributed scale that exhibit high variety and variability. The methodology is exemplified through its application within the printing services industry, where it has been successfully deployed by Xerox Corporation across small, mid-sized and large print shops, generating over 250 million in profits across the customer value chain. Each step of the methodology is described in detail: co-development and testing of innovative concepts in partnership with customers; development of software and hardware tools to implement the innovative concepts; establishment of work processes and practices for customer engagement and service implementation; creation of training and infrastructure for large-scale deployment; integration of the innovative offering within the framework of existing corporate offerings; and lastly the monitoring and deployment of the financial and operational metrics for estimating the return on investment and the continual renewal of the offering.

  11. Step training with body weight support: effect of treadmill speed and practice paradigms on poststroke locomotor recovery.

    PubMed

    Sullivan, Katherine J; Knowlton, Barbara J; Dobkin, Bruce H

    2002-05-01

    To investigate the effect of practice paradigms that varied treadmill speed during step training with body weight support in subjects with chronic hemiparesis after stroke. Randomized, repeated-measures pilot study with 1- and 3-month follow-ups. Outpatient locomotor laboratory. Twenty-four individuals with hemiparetic gait deficits whose walking speeds were at least 50% below normal. Participants were stratified by locomotor severity based on initial walking velocity and randomly assigned to treadmill training at slow (0.5mph), fast (2.0mph), or variable (0.5, 1.0, 1.5, 2.0mph) speeds. Participants received 20 minutes of training per session for 12 sessions over 4 weeks. Self-selected overground walking velocity (SSV) was assessed at the onset, middle, and end of training, and 1 and 3 months later. SSV improved in all groups compared with baseline (P<.001). All groups increased SSV in the 1-month follow-up (P<.01) and maintained these gains at the 3-month follow-up (P=.77). The greatest improvement in SSV across training occurred with fast training speeds compared with the slow and variable groups combined (P=.04). Effect size (ES) was large between fast compared with slow (ES=.75) and variable groups (ES=.73). Training at speeds comparable with normal walking velocity was more effective in improving SSV than training at speeds at or below the patient's typical overground walking velocity. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation

  12. Development and Validation of a New Fallout Transport Method Using Variable Spectral Winds

    NASA Astrophysics Data System (ADS)

    Hopkins, Arthur Thomas

    A new method has been developed to incorporate variable winds into fallout transport calculations. The method uses spectral coefficients derived by the National Meteorological Center. Wind vector components are computed with the coefficients along the trajectories of falling particles. Spectral winds are used in the two-step method to compute dose rate on the ground, downwind of a nuclear cloud. First, the hotline is located by computing trajectories of particles from an initial, stabilized cloud, through spectral winds, to the ground. The connection of particle landing points is the hotline. Second, dose rate on and around the hotline is computed by analytically smearing the falling cloud's activity along the ground. The feasibility of using spectral winds for fallout particle transport was validated by computing Mount St. Helens ashfall locations and comparing calculations to fallout data. In addition, an ashfall equation was derived for computing volcanic ash mass/area on the ground. Ashfall data and the ashfall equation were used to back-calculate an aggregated particle size distribution for the Mount St. Helens eruption cloud. Further validation was performed by comparing computed and actual trajectories of a high explosive dust cloud (DIRECT COURSE). Using an error propagation formula, it was determined that uncertainties in spectral wind components produce less than four percent of the total dose rate variance. In summary, this research demonstrated the feasibility of using spectral coefficients for fallout transport calculations, developed a two-step smearing model to treat variable winds, and showed that uncertainties in spectral winds do not contribute significantly to the error in computed dose rate.

  13. Characterizing the roles of changing population size and selection on the evolution of flux control in metabolic pathways.

    PubMed

    Orlenko, Alena; Chi, Peter B; Liberles, David A

    2017-05-25

    Understanding the genotype-phenotype map is fundamental to our understanding of genomes. Genes do not function independently, but rather as part of networks or pathways. In the case of metabolic pathways, flux through the pathway is an important next layer of biological organization up from the individual gene or protein. Flux control in metabolic pathways, reflecting the importance of mutation to individual enzyme genes, may be evolutionarily variable due to the role of mutation-selection-drift balance. The evolutionary stability of rate limiting steps and the patterns of inter-molecular co-evolution were evaluated in a simulated pathway with a system out of equilibrium due to fluctuating selection, population size, or positive directional selection, to contrast with those under stabilizing selection. Depending upon the underlying population genetic regime, fluctuating population size was found to increase the evolutionary stability of rate limiting steps in some scenarios. This result was linked to patterns of local adaptation of the population. Further, during positive directional selection, as with more complex mutational scenarios, an increase in the observation of inter-molecular co-evolution was observed. Differences in patterns of evolution when systems are in and out of equilibrium, including during positive directional selection may lead to predictable differences in observed patterns for divergent evolutionary scenarios. In particular, this result might be harnessed to detect differences between compensatory processes and directional processes at the pathway level based upon evolutionary observations in individual proteins. Detecting functional shifts in pathways reflects an important milestone in predicting when changes in genotypes result in changes in phenotypes.

  14. Analysis of stability for stochastic delay integro-differential equations.

    PubMed

    Zhang, Yu; Li, Longsuo

    2018-01-01

    In this paper, we are concerned with the stability of numerical methods applied to stochastic delay integro-differential equations. For linear stochastic delay integro-differential equations, it is shown that mean-square stability is obtained by the split-step backward Euler method without any restriction on the step size, while the Euler-Maruyama method can reproduce mean-square stability only under a step-size constraint. We also confirm the mean-square stability of the split-step backward Euler method for nonlinear stochastic delay integro-differential equations. The numerical experiments further verify the theoretical results.
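    For the scalar linear test equation dX = a*X dt + b*X dW (delay and integral terms dropped for brevity; all values are illustrative), the split-step backward Euler method is an implicit drift stage followed by the diffusion increment:

```python
import math, random

def ssbe_path(a=-2.0, b=0.5, x0=1.0, h=0.1, n=100, rng=None):
    """Split-step backward Euler: first solve the implicit drift stage
    x* = x + a*x*h, then apply the noise to the intermediate value."""
    rng = rng or random.Random(42)
    x = x0
    for _ in range(n):
        x_star = x / (1.0 - a * h)            # backward Euler drift step
        x = x_star + b * x_star * rng.gauss(0.0, math.sqrt(h))
    return x

# mean-square stable test problem (2a + b**2 < 0): E|X|**2 decays
rng = random.Random(7)
ms = sum(ssbe_path(rng=rng) ** 2 for _ in range(500)) / 500
```

    For this test equation the per-step mean-square factor is (1 + b**2*h) / (1 - a*h)**2, which stays below one for any h > 0 whenever 2a + b**2 < 0; that freedom from a step-size restriction is the contrast with Euler-Maruyama described above.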

  15. Single step optimization of manipulator maneuvers with variable structure control

    NASA Technical Reports Server (NTRS)

    Chen, N.; Dwyer, T. A. W., III

    1987-01-01

    One step ahead optimization has been recently proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete time control algorithm implementable as a sequence of state-dependent, quadratic programming problems for acceleration optimization. Its sensitivity to model accuracy, for the required inversion of the system dynamics, is shown in this paper to be alleviated by a fast variable structure control correction, acting between the sampling intervals of the slow one step ahead discrete time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one step ahead command generation so that the ability to overshoot the sliding surface is guaranteed.

  16. Performance of an attention-demanding task during treadmill walking shifts the noise qualities of step-to-step variation in step width.

    PubMed

    Grabiner, Mark D; Marone, Jane R; Wyatt, Marilynn; Sessoms, Pinata; Kaufman, Kenton R

    2018-06-01

    The fractal scaling evident in the step-to-step fluctuations of stepping-related time series reflects, to some degree, neuromotor noise. The primary purpose of this study was to determine the extent to which step width fractal scaling, step width, and step width variability are affected by performance of an attention-demanding task. We hypothesized that the attention-demanding task would shift the structure of the step width time series toward white, uncorrelated noise. Subjects performed two 10-min treadmill walking trials: a control trial of undisturbed walking, and a trial during which they performed a mental arithmetic/texting task. Motion capture data were converted to step width time series, the fractal scaling of which was determined from their power spectra. Fractal scaling decreased by 22% during the texting condition (p < 0.001), supporting the hypothesized shift toward white, uncorrelated noise. Step width and step width variability increased by 19% and 5%, respectively (p < 0.001). However, a stepwise discriminant analysis to which all three variables were input revealed that the control and dual-task conditions were discriminated only by step width fractal scaling. The change in the fractal scaling of step width is consistent with increased cognitive demand and suggests a transition in the characteristics of the signal noise. This may reflect an important advance toward understanding the manner in which neuromotor noise contributes to some types of falls. However, further investigation of the repeatability of the results, the sensitivity of the results to progressive increases in cognitive load imposed by attention-demanding tasks, and the extent to which the results can be generalized to the gait of older adults seems warranted. Copyright © 2018 Elsevier B.V. All rights reserved.
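    Determining the scaling of a time series from its power spectrum, as described above, amounts to regressing log-power on log-frequency. A naive-DFT sketch (not the authors' pipeline): white, uncorrelated noise gives an exponent near zero, while integrated "brown" noise gives an exponent near two:

```python
import cmath, math, random

def spectral_exponent(x):
    """Estimate beta in P(f) ~ 1/f**beta as minus the least-squares
    slope of log-power against log-frequency (naive O(n^2) DFT)."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    pts = []
    for k in range(1, n // 2):
        X = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        pts.append((math.log(k), math.log(abs(X) ** 2)))
    mf = sum(a for a, _ in pts) / len(pts)
    mp = sum(b for _, b in pts) / len(pts)
    slope = (sum((a - mf) * (b - mp) for a, b in pts)
             / sum((a - mf) ** 2 for a, _ in pts))
    return -slope

rng = random.Random(3)
white = [rng.gauss(0, 1) for _ in range(256)]
brown, s = [], 0.0
for v in white:
    s += v
    brown.append(s)
beta_white = spectral_exponent(white)   # near 0: uncorrelated noise
beta_brown = spectral_exponent(brown)   # well above 1: persistent noise
```

    A drop in the exponent toward zero, as reported for the texting condition, corresponds to the step width series losing its long-range correlation structure.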

  17. Optical defocus: differential effects on size and contrast letter recognition thresholds.

    PubMed

    Rabin, J

    1994-02-01

    To determine if optical defocus produces a greater reduction in visual acuity or small-letter contrast sensitivity. Letter charts were used to measure visual acuity and small-letter contrast sensitivity (20/25 Snellen equivalent) as a function of optical defocus. Letter size (acuity) and contrast (contrast sensitivity) were varied in equal logarithmic steps to make the task the same for the two types of measurement. Both visual acuity and contrast sensitivity declined with optical defocus, but the effect was far greater in the contrast domain. However, measurement variability also was greater for contrast sensitivity. After correction for this variability, measurement in the contrast domain still proved to be a more sensitive (1.75x) index of optical defocus. Small-letter contrast sensitivity is a powerful technique for detecting subtle amounts of optical defocus. This adjunctive approach may be useful when there are small changes in resolution that are not detected by standard measures of visual acuity. Potential applications include evaluating the course of vision in refractive surgery, classification of cataracts, detection of corneal or macular edema, and detection of visual loss in the aging eye. Evaluation of candidates for occupations requiring unique visual abilities also may be enhanced by measuring resolution in the contrast domain.

  18. Sampling hazelnuts for aflatoxin: uncertainty associated with sampling, sample preparation, and analysis.

    PubMed

    Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis

    2006-01-01

    The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots that were suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as aflatoxin concentration (either B1 or total) increased. With the use of regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. The expressions for these relationships were used to estimate the variance for any sample size, subsample size, and number of analyses for a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g and using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
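
    The percentage shares quoted at the end of the abstract follow directly from the reported variance components. A quick check using the abstract's own numbers:

```python
# Variance components reported in the abstract for a hazelnut lot at
# 10 ng/g total aflatoxin (10 kg sample, 50 g subsample, HPLC analysis).
components = {"sampling": 174.40, "sample preparation": 0.74, "analysis": 0.27}

total = sum(components.values())
shares = {step: 100.0 * v / total for step, v in components.items()}

for step, pct in shares.items():
    print(f"{step}: {pct:.1f}% of total variance")
# Reproduces the abstract's 99.4%, 0.4%, and 0.2% shares.
```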

  19. Pointing to double-step visual stimuli from a standing position: motor corrections when the speed-accuracy trade-off is unexpectedly modified in-flight. A breakdown of the perception-action coupling.

    PubMed

    Fautrelle, L; Barbieri, G; Ballay, Y; Bonnetblanc, F

    2011-10-27

    The time required to complete a fast and accurate movement is a function of its amplitude and the target size. This phenomenon refers to the well-known speed-accuracy trade-off. Some interpretations have suggested that the speed-accuracy trade-off is already integrated into the movement planning phase. More specifically, pointing movements may be planned to minimize the variance of the final hand position. However, goal-directed movements can be altered at any time, if, for instance, the target location is changed during execution. Thus, one possible limitation of these interpretations may be that they underestimate feedback processes. To further investigate this hypothesis we designed an experiment in which the speed-accuracy trade-off was unexpectedly varied at hand movement onset by modifying the target distance or size separately, or by modifying both of them simultaneously. These pointing movements were executed from an upright standing position. Our main results showed that the movement time increased when there was a change to the size or location of the target, while the terminal variability of finger position did not change. In other words, movement velocity is modulated according to the target size and distance during motor programming or during the final approach, independently of the final variability of the hand position. This suggests that when the speed-accuracy trade-off is unexpectedly modified, terminal feedback based on intermediate representations of the endpoint velocity is used to monitor and control the hand displacement. There is clearly no obvious perception-action coupling in this case; rather, intermediate processing may be involved. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
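
    The amplitude/target-size relation the abstract opens with is commonly formalized as Fitts's law, MT = a + b·log2(2A/W). A sketch with illustrative coefficients (a and b here are assumptions for demonstration, not values fitted in this study):

```python
import math

def fitts_mt(amplitude, width, a=0.10, b=0.15):
    """Predicted movement time (s) under Fitts's law.
    a, b are illustrative regression coefficients; the index of
    difficulty is ID = log2(2A/W), where A is movement amplitude
    and W is target width (same units)."""
    index_of_difficulty = math.log2(2.0 * amplitude / width)
    return a + b * index_of_difficulty

# Shrinking the target (or moving it farther away) raises predicted MT,
# which is the trade-off the double-step perturbation manipulates.
print(fitts_mt(amplitude=0.30, width=0.04))   # baseline target
print(fitts_mt(amplitude=0.30, width=0.02))   # smaller target, longer MT
```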

  20. Multiple dual mode counter-current chromatography with variable duration of alternating phase elution steps.

    PubMed

    Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N

    2014-06-20

    The multiple dual mode (MDM) counter-current chromatography separation processes consist of a succession of two isocratic counter-current steps and are characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, an improved MDM method based on variable duration of alternating phase elution steps has been developed and validated. The MDM separation processes with variable duration of phase elution steps are analyzed. Based on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of the MDM separation are analyzed: (1) one-step solute elution, in which the separation is conducted so that the sample is transferred forward and back with upper and lower phases inside the column until the desired separation of the components is reached, and then each individual component elutes entirely within one step; and (2) multi-step solute elution, in which the fractions of individual components are collected over several steps. It is demonstrated that proper selection of the duration of individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with hexane/ethyl acetate/methanol/water solvent systems. The experimental results are compared to the predictions of the theory, and good agreement between theory and experiment has been demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Process for preparation of large-particle-size monodisperse latexes

    NASA Technical Reports Server (NTRS)

    Vanderhoff, J. W.; Micale, F. J.; El-Aasser, M. S.; Kornfeld, D. M. (Inventor)

    1981-01-01

    Monodisperse latexes having a particle size in the range of 2 to 40 microns are prepared by seeded emulsion polymerization in microgravity. A reaction mixture containing smaller monodisperse latex seed particles, predetermined amounts of monomer, emulsifier, initiator, inhibitor and water is placed in a microgravity environment, and polymerization is initiated by heating. The reaction is allowed to continue until the seed particles grow to a predetermined size, and the resulting enlarged particles are then recovered. A plurality of particle-growing steps can be used to reach larger sizes within the stated range, with enlarged particles from the previous steps being used as seed particles for the succeeding steps. Microgravity enables preparation of particles in the stated size range by avoiding the gravity-related problems of creaming, settling, and flocculation induced by mechanical shear that have precluded their preparation in a normal-gravity environment.
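
    Under the idealized assumption that each monodisperse seed grows only by uniform addition of polymer volume, the final diameter scales as the cube root of the volume growth factor, which shows why several growth steps are needed to span the 2-40 micron range. The growth factors below are illustrative, not values from the patent:

```python
def grown_diameter(seed_diameter_um, volume_growth_factor):
    """Final particle diameter after one seeded-growth step, assuming
    each monodisperse seed swells uniformly: d_final = d_seed * G**(1/3),
    where G is the ratio of final to initial particle volume."""
    return seed_diameter_um * volume_growth_factor ** (1.0 / 3.0)

# Successive growth steps multiply: each 8x volume step doubles the
# diameter, so three steps take a 2 um seed to 16 um.
d = 2.0
for _ in range(3):
    d = grown_diameter(d, 8.0)
print(round(d, 6))
```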

  2. Operative air temperature data for different measures applied on a building envelope in warm climate.

    PubMed

    Baglivo, Cristina; Congedo, Paolo Maria

    2018-04-01

    Several technical combinations have been evaluated in order to design high-energy-performance buildings for a warm climate. The analysis has been developed in several steps, avoiding the use of HVAC systems. The methodological approach of this study is based on a sequential search technique and is described in the paper entitled "Envelope Design Optimization by Thermal Modeling of a Building in a Warm Climate" [1]. The Operative Air Temperature (TOP) trends for each combination have been plotted through a dynamic simulation performed using the software TRNSYS 17 (a transient system simulation program, University of Wisconsin, Solar Energy Laboratory, USA, 2010). Starting from the simplest building configuration, consisting of 9 rooms (equal-sized modules of 5 × 5 m²), the different building components are sequentially evaluated until the envelope design is optimized. The aim of this study is to perform a step-by-step simulation, simplifying the model as much as possible without introducing additional variables that could modify its performance. Walls, slab-on-ground floor, roof, shading and windows are among the simulated building components. The results are shown for each combination and evaluated for Brindisi, a city in southern Italy with 1083 degree days, belonging to national climatic zone C. The data show the TOP trends for each measure applied in the case study, for a total of 17 combinations divided into eight steps.

  3. High efficient perovskite solar cell material CH3NH3PbI3: Synthesis of films and their characterization

    NASA Astrophysics Data System (ADS)

    Bera, Amrita Mandal; Wargulski, Dan Ralf; Unold, Thomas

    2018-04-01

    Hybrid organometal perovskites have emerged as promising solar cell materials, exhibiting solar cell efficiencies of more than 20%. Thin films of methylammonium lead iodide (CH3NH3PbI3) perovskite have been synthesized by two different methods (one-step and two-step), and their morphological properties have been studied by scanning electron microscopy and optical microscope imaging. The morphology of the perovskite layer is one of the most important parameters affecting solar cell efficiency. The film morphology revealed that the two-step method provides better surface coverage than the one-step method; however, the grain sizes were smaller in the case of the two-step method. Films prepared by the two-step method on different substrates revealed that the grain size also depends on the substrate: an increase of the grain size was found from the glass substrate, to FTO with a TiO2 blocking layer, to bare FTO, without any change in the surface coverage area. The present study reveals that improved film quality can be obtained with the two-step method by optimizing the synthesis processes.

  4. Assessment of power step performances of variable speed pump-turbine unit by means of hydro-electrical system simulation

    NASA Astrophysics Data System (ADS)

    Béguin, A.; Nicolet, C.; Hell, J.; Moreira, C.

    2017-04-01

    The paper explores the improvement in ancillary services that variable speed technologies can provide for an existing 2x210 MVA pumped storage power plant whose conversion from fixed speed to variable speed is investigated, with a focus on the power step performance of the units. First, two motor-generator variable speed technologies are introduced, namely the Doubly Fed Induction Machine (DFIM) and the Full Scale Frequency Converter (FSFC). Then a detailed numerical simulation model of the investigated power plant, used to simulate power step response and comprising the waterways, the pump-turbine unit, the motor-generator, the grid connection and the control systems, is presented. Hydroelectric system time-domain simulations are performed to determine the shortest achievable response time, taking into account the constraints imposed by the maximum penstock pressure and by the rotational speed limits. It is shown that the maximum instantaneous power step response, up and down, depends on the hydro-mechanical characteristics of the pump-turbine unit and on the motor-generator speed limits. As a result, for the investigated test case, the FSFC solution offers the best power step response performance.

  5. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    NASA Astrophysics Data System (ADS)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration by selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Bulirsch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point tn a new step size hn+1 = h(tn; δ) so that h(t; δ) is a continuous function of t. In this paper, a study of the tolerance proportionality property is carried out under a discontinuous step-size policy that does not allow the size of the step to change when the step-size ratio between two consecutive steps is close to unity.
    This theory is applied to obtain global error estimations in a few problems that have been solved with the code Gauss2 [S. Gonzalez-Pinto, R. Rojas-Bello, Gauss2, a Fortran 90 code for second order initial value problems], based on an adaptive two-stage Runge-Kutta-Gauss method with this discontinuous step-size policy.
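
    The discontinuous policy studied here, leaving the step size unchanged whenever the proposed step-size ratio is close to unity, can be sketched with a toy embedded pair. This is an illustrative controller, not the Gauss2 code; the Heun/Euler pair, the deadband limits, and the safety factors are all assumptions:

```python
import math

def integrate(f, y0, t0, t1, tol, deadband=(0.9, 1.1)):
    """Adaptive Heun/Euler integration with a discontinuous step-size
    policy: the proposed step-size ratio is snapped to 1 whenever it
    falls inside the deadband, so the step is left unchanged when the
    ratio between consecutive steps would be close to unity."""
    t, y, h = t0, y0, (t1 - t0) / 100.0
    while t < t1:
        h = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + h * (k1 + k2) / 2.0            # 2nd order (Heun)
        err = abs(y_high - (y + h * k1))            # vs 1st order (Euler)
        ratio = 0.9 * (tol / max(err, 1e-16)) ** 0.5  # classical controller
        if deadband[0] < ratio < deadband[1]:
            ratio = 1.0                             # discontinuous policy
        if err <= tol:                              # accept the step
            t, y = t + h, y_high
        h *= min(max(ratio, 0.2), 5.0)              # update h either way
    return y

y = integrate(lambda t, y: -y, 1.0, 0.0, 1.0, 1e-6)
print(abs(y - math.exp(-1)))   # global error below the tolerance
```

    Note that a rejected step (err > tol) always yields a ratio at or below the lower deadband edge, so the deadband only ever freezes the step size on accepted steps.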

  6. Identifying elderly people at risk for cognitive decline by using the 2-step test.

    PubMed

    Maruya, Kohei; Fujita, Hiroaki; Arai, Tomoyuki; Hosoi, Toshiki; Ogiwara, Kennichi; Moriyama, Shunnichiro; Ishibashi, Hideaki

    2018-01-01

    [Purpose] The purpose was to verify the effectiveness of the 2-step test in predicting cognitive decline in elderly individuals. [Subjects and Methods] One hundred eighty-two participants aged over 65 years underwent the 2-step test, cognitive function tests, and higher-level competence testing. Participants were classified as Robust, <1.3, or <1.1 using the locomotive syndrome risk stage criteria for the 2-step test, and variables were compared between groups. In addition, ordered logistic analysis was used to analyze cognitive functions as independent variables in the three groups, using the 2-step test result as the dependent variable, with age, gender, etc. as adjustment factors. [Results] In the crude data, the <1.3 and <1.1 groups were older and displayed lower motor and cognitive function than the Robust group. Furthermore, the <1.3 group exhibited significantly lower memory retention than the Robust group. The 2-step test was related to the Stroop test (β: 0.06, 95% confidence interval: 0.01-0.12). [Conclusion] The finding is that the risk stage of the 2-step test is related to cognitive functions, even at an initial risk stage. The 2-step test may help with earlier detection and implementation of prevention measures for locomotive syndrome and mild cognitive impairment.

  7. Prematriculation variables associated with suboptimal outcomes for the 1994 – 1999 cohort of U.S. medical-school matriculants

    PubMed Central

    Andriole, Dorothy A.; Jeffe, Donna B.

    2010-01-01

    Context The relationship between increasing numbers and diversity of medical-school enrollees and the size and composition of the US physician workforce has not been described. Objective To identify demographic and pre-matriculation factors associated with medical-school matriculants' outcomes. Design, Setting, and Participants De-identified data for the 1994–1999 national cohort of 97445 matriculants, followed through March 2, 2009 to graduation or withdrawal/dismissal, were analyzed using multivariable logistic regression to identify factors associated with suboptimal outcomes. Main Outcome Measures Academic withdrawal/dismissal, non-academic withdrawal/dismissal, and graduation without first-attempt passing scores on United States Medical Licensing Examination (USMLE) Step 1 and/or Step 2CK, each compared with graduation with first-attempt passing scores on both examinations. Results Final outcomes were available for 84018 matriculants (86.2%); of these, 74494 (88.7%) graduated with first-attempt passing scores on Step 1 and Step 2CK, 6743 (8.0%) graduated without first-attempt passing scores on Step 1 and/or Step 2CK, 1049 (1.2%) withdrew/were dismissed for academic reasons, and 1732 (2.1%) withdrew/were dismissed for non-academic reasons.
Variables associated with greater likelihood of graduation without first-attempt passing scores on Step 1 and/or Step 2CK and of academic withdrawal/dismissal included Medical College Admission Test (MCAT) scores (MCAT 18–20 [2.9% of sample]: adjusted odds ratio [OR]=13.06, 95% confidence interval [95% CI]=11.56–14.76 and OR=11.08, 95% CI=8.50–14.45, respectively; MCAT 21–23 [5.6%]: OR=7.52, 95% CI=6.79–8.33 and OR=5.97, 95% CI=4.68–7.62; MCAT 24–26 [13.9%]: OR=4.27, 95% CI=3.92–4.65 and OR=3.56, CI=2.88–4.40) each compared with MCAT>29; Asian/Pacific Islander ([18.2%]: OR=2.15, 95% CI=2.00–2.32 and OR=1.69, 95% CI = 1.37–2.09) or underrepresented minority ([14.9%]: OR=2.30, 95% CI=2.13–2.48 and OR=2.96, 95% CI=2.48–3.54) compared with white race/ethnicity, and premedical debt > $50,000 ([1.0%]:OR=1.68, 95% CI=1.35–2.08 and OR=2.33, 95% CI=1.57–3.46) compared with no debt. Conclusions Lower MCAT scores, non-white race/ethnicity, and premedical debt > $50,000 were independently associated with greater likelihood of academic withdrawal/dismissal and graduating without first-attempt passing scores on USMLE Step 1 and/or Step 2CK. PMID:20841535

  8. [Influence on microstructure of dental zirconia ceramics prepared by two-step sintering].

    PubMed

    Jian, Chao; Li, Ning; Wu, Zhikai; Teng, Jing; Yan, Jiazhen

    2013-10-01

    To investigate the microstructure of dental zirconia ceramics prepared by two-step sintering. Nanostructured zirconia powder was dry compacted, cold isostatic pressed, and pre-sintered. The pre-sintered discs were cut and processed into samples. Conventional sintering, single-step sintering, and two-step sintering were carried out, and the density and grain size of the samples were measured. The ranges of T1 and T2 for two-step sintering were then determined. The effects on microstructure of the different routes, comprising two-step sintering and conventional sintering, were discussed, and the influence of T1 and T2 on density and grain size was analyzed as well. The range of T1 was between 1450 degrees C and 1550 degrees C, and the range of T2 was between 1250 degrees C and 1350 degrees C. Compared with conventional sintering, a finer microstructure of higher density and smaller grain size could be obtained by two-step sintering. Grain growth was dependent on T1, whereas density was not much related to T1. However, density was dependent on T2, whereas grain size was minimally influenced by it. Two-step sintering can produce a sintered body with high density and small grain size, which is good for optimizing the microstructure of dental zirconia ceramics.

  9. Surfactant-controlled polymerization of semiconductor clusters to quantum dots through competing step-growth and living chain-growth mechanisms.

    PubMed

    Evans, Christopher M; Love, Alyssa M; Weiss, Emily A

    2012-10-17

    This article reports control of the competition between step-growth and living chain-growth polymerization mechanisms in the formation of cadmium chalcogenide colloidal quantum dots (QDs) from CdSe(S) clusters by varying the concentration of anionic surfactant in the synthetic reaction mixture. The growth of the particles proceeds by step-addition from initially nucleated clusters in the absence of excess phosphinic or carboxylic acids, which adsorb as their anionic conjugate bases, and proceeds indirectly by dissolution of clusters, and subsequent chain-addition of monomers to stable clusters (Ostwald ripening) in the presence of excess phosphinic or carboxylic acid. Fusion of clusters by step-growth polymerization is an explanation for the consistent observation of so-called "magic-sized" clusters in QD growth reactions. Living chain-addition (chain addition with no explicit termination step) produces QDs over a larger range of sizes with better size dispersity than step-addition. Tuning the molar ratio of surfactant to Se(2-)(S(2-)), the limiting ionic reagent, within the living chain-addition polymerization allows for stoichiometric control of QD radius without relying on reaction time.

  10. The Sounds of Desaturation: A Survey of Commercial Pulse Oximeter Sonifications.

    PubMed

    Loeb, Robert G; Brecknell, Birgit; Sanderson, Penelope M

    2016-05-01

    The pulse oximeter has been a standard-of-care medical monitor for >25 years. Most manufacturers include a variable-pitch pulse tone in their pulse oximeters. Research has shown that the acoustic properties of variable-pitch tones are not standardized. In this study, we surveyed the properties of pulse tones from 21 pulse oximeters, consisting of 1 to 4 instruments of 11 different models and 8 brands. Our goals were to fully document the sounds over saturation values 0% to 100%, test whether tones become quieter at low saturation values, and create a public repository of pulse oximeter recordings for future use. A convenience sample of commercial pulse oximeters in use at one hospital was studied. Audiovisual recordings of each pulse oximeter's display and sounds were taken while it monitored a simulator starting at a saturation of 100% and slowly decreasing in 1% steps until the saturation reached 0%. Recorded pulse tones were analyzed for spectral frequency and total power. Audio files for each pulse oximeter containing 100 pulse tones, one at every saturation value, were created for inclusion in the repository. Recordings containing 509 to 1053 pulse tones were made from the 21 pulse oximeters. Fundamental frequencies at 100% saturation ranged from 479 to 921 Hz, and fundamental frequencies at 1% saturation ranged from 38 to 404 Hz. The pulse tones from all but one pulse oximeter model contained harmonics. Pulse tone step sizes were linear in 6 models and logarithmic in 6 models. Only 6 pulse oximeter models decreased the pulse tone pitch at every decrease in saturation; all others decreased the pitch only at select saturation thresholds. Five pulse oximeter models stopped decreasing pitch altogether once the saturation reached a certain lower threshold.
Pulse tone power (perceived as loudness) changed with saturation level for all pulse oximeters, increasing above baseline as saturation decreased from 100% and decreasing to levels below baseline at low saturation values. Current pulse oximeters use different techniques to address the competing goals of (1) using pitch steps that are large enough to be readily perceived, and (2) conveying saturation values from 0 to 100 within a limited range of sound frequencies. From a clinical perspective, 2 techniques for increasing perceivability (increasing the frequency range and using ratio step sizes) have no drawback, but 2 techniques (not changing pitch at every saturation change and using a lower saturation cutoff) do have potential clinical drawbacks. On the basis of our findings, we have made suggestions for clinicians and manufacturers.
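
    The two step-size schemes the survey identifies can be sketched as saturation-to-frequency mappings: linear (equal Hz per 1% drop) versus logarithmic/ratio (equal frequency ratio per 1% drop). The 440 Hz and 100 Hz endpoints below are illustrative assumptions, not values from any surveyed device:

```python
def pitch_linear(spo2, f_top=440.0, f_bottom=100.0):
    """Linear mapping: equal Hz decrement for each 1% saturation drop
    from 100% down to 1%."""
    step = (f_top - f_bottom) / 99.0
    return f_bottom + step * (spo2 - 1)

def pitch_ratio(spo2, f_top=440.0, f_bottom=100.0):
    """Logarithmic (ratio) mapping: equal frequency *ratio* per 1% drop,
    so every step spans the same musical interval and stays equally
    perceivable across the whole scale."""
    r = (f_top / f_bottom) ** (1.0 / 99.0)
    return f_bottom * r ** (spo2 - 1)

# Linear steps shrink as a fraction of frequency toward the low end of
# the scale; ratio steps do not.
for s in (100, 90, 80):
    print(s, round(pitch_linear(s), 1), round(pitch_ratio(s), 1))
```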

  11. CAN STABILITY REALLY PREDICT AN IMPENDING SLIP-RELATED FALL AMONG OLDER ADULTS?

    PubMed Central

    Yang, Feng; Pai, Yi-Chung

    2015-01-01

    The primary purpose of this study was to systematically evaluate and compare the predictive power for falls of a battery of stability indices obtained during normal walking among community-dwelling older adults. One hundred eighty-seven community-dwelling older adults participated in the study. After walking regularly for 20 strides on a walkway, participants were subjected to an unannounced slip during gait under the protection of a safety harness. Full-body kinematics and kinetics were monitored during walking using a motion capture system synchronized with force plates. Stability variables, including the feasible-stability-region measurement, margin of stability, the maximum Floquet multiplier, the Lyapunov exponents (short- and long-term), and the variability of gait parameters (including step length, step width, and step time), were calculated for each subject. Accuracy in predicting the slip outcome (fall vs. recovery) was examined for each stability variable using logistic regression. Results showed that the feasible-stability-region measurement predicted fall incidence among these subjects with the highest accuracy (68.4%). Except for the step width (with an accuracy of 60.2%), no other stability variables could differentiate fallers from those who did not fall for the sample studied. The findings from the present study could provide guidance in identifying individuals at increased risk of falling using the feasible-stability-region measurement or the variability of the step width. PMID:25458148

  12. Method for the generation of variable density metal vapors which bypasses the liquidus phase

    DOEpatents

    Kunnmann, Walter; Larese, John Z.

    2001-01-01

    The present invention provides a method for producing a metal vapor that includes the steps of combining a metal and graphite in a vessel to form a mixture; heating the mixture to a first temperature in an argon gas atmosphere to form a metal carbide; maintaining the first temperature for a period of time; heating the metal carbide to a second temperature to form a metal vapor; withdrawing the metal vapor and the argon gas from the vessel; and separating the metal vapor from the argon gas. Metal vapors made using this method can be used to produce uniform powders of the metal oxide that have narrow size distribution and high purity.

  13. Disease Messaging in Churches: Implications for Health in African-American Communities

    PubMed Central

    Harmon, Brook E.; Chock, Marci; Brantley, Elizabeth; Wirth, Michael D.; Hébert, James R.

    2016-01-01

    Using the right messaging strategies, churches can help promote behavior change. Frequencies of disease-specific messages in 21 African-American churches were compared to overall and cancer-specific mortality and morbidity rates as well as church-level variables. Disease messages were found in 1025 of 2166 items. Frequently referenced topics included cancer (n=316), mental health conditions (n=253), heart disease (n=246), and infectious diseases (n=220). Messages for lung and colorectal cancers appeared at low frequency despite high mortality rates in African-American communities. Season, church size, and denomination showed significant associations with health messages. Next steps include testing messaging strategies aimed at improving the health of churchgoing communities. PMID:26296703

  14. Predictive Variables of Half-Marathon Performance for Male Runners

    PubMed Central

    Gómez-Molina, Josué; Ogueta-Alday, Ana; Camara, Jesus; Stickley, Christoper; Rodríguez-Marroyo, José A.; García-López, Juan

    2017-01-01

    The aims of this study were to establish and validate various predictive equations of half-marathon performance. Seventy-eight half-marathon male runners participated in two different phases. Phase 1 (n = 48) was used to establish the equations for estimating half-marathon performance, and Phase 2 (n = 30) to validate these equations. Apart from half-marathon performance, training-related and anthropometric variables were recorded, and an incremental test on a treadmill was performed, in which physiological (VO2max, speed at the anaerobic threshold, peak speed) and biomechanical variables (contact and flight times, step length and step rate) were registered. In Phase 1, half-marathon performance could be predicted to 90.3% by variables related to training and anthropometry (Equation 1), 94.9% by physiological variables (Equation 2), 93.7% by biomechanical parameters (Equation 3) and 96.2% by a general equation (Equation 4). Using these equations, in Phase 2 the predicted time was significantly correlated with performance (r = 0.78, 0.92, 0.90 and 0.95, respectively). The proposed equations and their validation showed a high prediction of half-marathon performance in long-distance male runners, considered from different approaches. Furthermore, they improved on the prediction performance of previous studies, which makes them highly practical in the field of training and performance. Key points The present study obtained four equations involving anthropometric, training, physiological and biomechanical variables to estimate half-marathon performance. These equations were validated in a different population, demonstrating narrower prediction ranges than previous studies and also their consistency. As a novelty, some biomechanical variables (i.e. step length and step rate at RCT, and maximal step length) have been related to half-marathon performance. PMID:28630571

  15. Designing nacre-like materials for simultaneous stiffness, strength and toughness: Optimum materials, composition, microstructure and size

    NASA Astrophysics Data System (ADS)

    Barthelat, Francois

    2014-12-01

    Nacre, bone and spider silk are staggered composites in which inclusions of high aspect ratio reinforce a softer matrix. Such staggered composites have emerged through natural selection as the best configuration to produce stiffness, strength and toughness simultaneously. As a result, these remarkable materials increasingly serve as models for synthetic composites with unusual and attractive performance. While several models have been developed to predict basic properties of biological and bio-inspired staggered composites, the designer is still left to struggle with finding optimum parameters. Unresolved issues include choosing optimum properties for inclusions and matrix, and resolving the contradictory effects of certain design variables. Here we overcome these difficulties with a multi-objective optimization for simultaneous high stiffness, strength and energy absorption in staggered composites. Our optimization scheme includes the material properties of inclusions and matrix as design variables. This process reveals new guidelines; for example, the staggered microstructure is only advantageous if the tablets are at least five times stronger than the interfaces, and only if high volume concentrations of tablets are used. We finally compile the results into a step-by-step optimization procedure which can be applied to the design of any type of high-performance staggered composite at any length scale. The procedure produces optimum designs which are consistent with the materials and microstructure of natural nacre, confirming that this natural material is indeed optimized for mechanical performance.

  16. Effects of Socket Size on Metrics of Socket Fit in Trans-Tibial Prosthesis Users

    PubMed Central

    Sanders, Joan E; Youngblood, Robert T; Hafner, Brian J; Cagle, John C; McLean, Jake B; Redd, Christian B; Dietrich, Colin R; Ciol, Marcia A; Allyn, Katheryn J

    2017-01-01

    The purpose of this research was to conduct a preliminary effort to identify quantitative metrics to distinguish a good socket from an oversized socket in people with trans-tibial amputation. Results could be used to inform clinical practices related to socket replacement. A cross-over study was conducted on community ambulators (K-level 3 or 4) with good residual limb sensation. Participants were each provided with two sockets, a duplicate of their as-prescribed socket and a modified socket that was enlarged or reduced by 1.8 mm (~6% of the socket volume) based on the fit quality of the as-prescribed socket. The two sockets were termed a larger socket and a smaller socket. Activity was monitored while participants wore each socket for 4 weeks. Participants’ gait; self-reported satisfaction, quality of fit, and performance; socket comfort; and morning-to-afternoon limb fluid volume changes were assessed. Visual analysis of plots and estimated effect sizes (measured as mean difference divided by standard deviation) showed the largest effects for step time asymmetry, step width asymmetry, anterior and anterior-distal morning-to-afternoon fluid volume change, socket comfort scores, and self-reported measures of utility, satisfaction, and residual limb health. These variables may be viable metrics for early detection of deterioration in socket fit, and should be tested in a larger clinical study. PMID:28373013

  17. Effects of socket size on metrics of socket fit in trans-tibial prosthesis users.

    PubMed

    Sanders, Joan E; Youngblood, Robert T; Hafner, Brian J; Cagle, John C; McLean, Jake B; Redd, Christian B; Dietrich, Colin R; Ciol, Marcia A; Allyn, Katheryn J

    2017-06-01

    The purpose of this research was to conduct a preliminary effort to identify quantitative metrics to distinguish a good socket from an oversized socket in people with trans-tibial amputation. Results could be used to inform clinical practices related to socket replacement. A cross-over study was conducted on community ambulators (K-level 3 or 4) with good residual limb sensation. Participants were each provided with two sockets, a duplicate of their as-prescribed socket and a modified socket that was enlarged or reduced by 1.8mm (∼6% of the socket volume) based on the fit quality of the as-prescribed socket. The two sockets were termed a larger socket and a smaller socket. Activity was monitored while participants wore each socket for 4 weeks. Participants' gait; self-reported satisfaction, quality of fit, and performance; socket comfort; and morning-to-afternoon limb fluid volume changes were assessed. Visual analysis of plots and estimated effect sizes (measured as mean difference divided by standard deviation) showed largest effects for step time asymmetry, step width asymmetry, anterior and anterior-distal morning-to-afternoon fluid volume change, socket comfort score, and self-reported utility. These variables may be viable metrics for early detection of deterioration in socket fit, and should be tested in a larger clinical study. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  18. The precision of locomotor odometry in humans.

    PubMed

    Durgin, Frank H; Akagi, Mikio; Gallistel, Charles R; Haiken, Woody

    2009-03-01

    Two experiments measured the human ability to reproduce locomotor distances of 4.6-100 m without visual feedback and compared distance production with time production. Subjects were not permitted to count steps. It was found that the precision of human odometry follows Weber's law: variability is proportional to distance. The coefficients of variation for distance production were much lower than those measured for time production for similar durations. Gait parameters recorded during the task (average step length and step frequency) were found to be even less variable, suggesting that step integration could be the basis for non-visual human odometry.
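
    One simple way to see what Weber's law implies here: if each trial's produced distance is scaled by a noisy per-trial gain (one possible generative model, not necessarily the authors'), the standard deviation grows in proportion to the target distance, so the coefficient of variation stays constant. A minimal sketch, with an invented 8% gain noise:

```python
import numpy as np

rng = np.random.default_rng(6)

# Weber's-law sketch: multiplicative per-trial gain noise gives SD
# proportional to distance, hence a constant coefficient of variation.
for target in (4.6, 25.0, 100.0):
    produced = target * rng.normal(1.0, 0.08, 10_000)
    cv = produced.std() / produced.mean()
    print(f"{target:6.1f} m  CV={cv:.3f}")
```

Note that pure step integration with independent per-step noise would instead give SD growing with the square root of distance; the constant-CV pattern requires a shared (e.g. gain-like) noise source across steps.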

  19. Optimal setups for forced-choice staircases with fixed step sizes.

    PubMed

    García-Pérez, M A

    2000-01-01

    Forced-choice staircases with fixed step sizes are used in a variety of formats whose relative merits have never been studied. This paper presents a comparative study aimed at determining their optimal format. Factors included in the study were the up/down rule, the length (number of reversals), and the size of the steps. The study also addressed the issue of whether a protocol involving three staircases running for N reversals each (with a subsequent average of the estimates provided by each individual staircase) has better statistical properties than an alternative protocol involving a single staircase running for 3N reversals. In all cases the size of a step up was different from that of a step down, in the appropriate ratio determined by García-Pérez (Vision Research, 1998, 38, 1861-1881). The results of a simulation study indicate that a) there are no conditions in which the 1-down/1-up rule is advisable; b) different combinations of up/down rule and number of reversals appear equivalent in terms of precision and cost; c) using a single long staircase with 3N reversals is more efficient than running three staircases with N reversals each; d) to avoid bias and attain sufficient accuracy, threshold estimates should be based on at least 30 reversals; and e) to avoid excessive cost and imprecision, the size of the step up should be between 2/3 and 3/3 the (known or presumed) spread of the psychometric function. An empirical study with human subjects confirmed the major characteristics revealed by the simulations.
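
    A fixed-step transformed up/down staircase of the kind compared here can be simulated in a few lines. The logistic psychometric function, step size, and down/up ratio below are illustrative choices, not the specific values from García-Pérez (1998); the threshold is estimated as the mean of the reversal levels, as is conventional.

```python
import numpy as np

rng = np.random.default_rng(1)

def psychometric(x, threshold=0.0, spread=1.0, gamma=0.5):
    # Logistic psychometric function with 0.5 guessing rate (2AFC).
    return gamma + (1 - gamma) / (1 + np.exp(-(x - threshold) / spread))

def staircase(n_reversals=30, step_up=0.6, down_up_ratio=0.74):
    # 2-down/1-up rule with asymmetric fixed steps; the down/up step
    # ratio here is illustrative, not the published optimal ratio.
    step_down = step_up * down_up_ratio
    x, streak, last_dir = 2.0, 0, 0
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        if rng.random() < psychometric(x):
            streak += 1
            if streak == 2:                  # two correct -> step down
                streak = 0
                if last_dir == +1:
                    reversal_levels.append(x)
                x, last_dir = x - step_down, -1
        else:                                # one wrong -> step up
            streak = 0
            if last_dir == -1:
                reversal_levels.append(x)
            x, last_dir = x + step_up, +1
    return np.mean(reversal_levels)

est = staircase()
print(round(est, 2))
```

The 2-down/1-up rule converges toward the 70.7%-correct point of the psychometric function, so the reversal mean should land near that level rather than at the 50% threshold itself.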

  20. Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.

    1998-01-01

    BLISS is a method for optimization of engineering systems by decomposition. It separates the system-level optimization, having a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best-guess initial design, the method improves that design in iterative cycles, each cycle comprising two steps. In step one, the system-level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system-level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual-level supersonic business jet design, and a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. Modularity of the method is intended to fit the human organization and map well onto the computing technology of concurrent processing.

  1. Acute effects of anesthetic lumbar spine injections on temporal spatial parameters of gait in individuals with chronic low back pain: A pilot study.

    PubMed

    Herndon, Carl L; Horodyski, MaryBeth; Vincent, Heather K

    2017-10-01

    This study examined whether epidural injection-induced anesthesia acutely and positively affected temporal spatial parameters of gait in patients with chronic low back pain (LBP) due to lumbar spinal stenosis. Twenty-five patients (61.7±13.6 years) who were obtaining lumbar epidural injections for stenosis-related LBP participated. Oswestry Disability Index (ODI) scores, Medical Outcomes Short Form (SF-36) scores, 11-point numerical pain rating (NRSpain) scores, and temporal spatial parameters of walking gait were obtained prior to the injection; NRSpain scores and temporal spatial parameters of walking gait were obtained again after the injection. Gait parameters were measured using an instrumented gait mat. Patients received transforaminal epidural injections in the L1-S1 vertebral range (1% lidocaine, corticosteroid) under fluoroscopic guidance. Patients with post-injection NRSpain ratings of "0" or values greater than "0" were stratified into two groups: 1) full pain relief, or 2) partial pain relief, respectively. Post-injection, 48% (N=12) of patients reported full pain relief. ODI scores were higher in patients with full pain relief (55.3±21.4 versus 33.7±12.8; p=0.008). Post-injection, stride length and step length variability were significantly improved in the patients with full pain relief compared to those with partial pain relief. Effect sizes between full and partial pain relief for walking velocity, step length, swing time, stride and step length variability were medium to large (Cohen's d>0.50). Patients with LBP can gain immediate gait improvements from complete pain relief from transforaminal epidural anesthetic injections for LBP, which could translate to better stability and lower fall risk. Copyright © 2017 Elsevier B.V. All rights reserved.
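
    The effect-size measure used here, Cohen's d (mean difference divided by a pooled standard deviation), is straightforward to compute. The gait values below are hypothetical, invented only to exercise the formula; the abstract reports d > 0.50 but not the raw data.

```python
import numpy as np

# Hypothetical walking-velocity samples (m/s) for the two relief groups.
full_relief    = np.array([1.10, 1.15, 1.22, 1.08, 1.18, 1.25])
partial_relief = np.array([0.95, 1.02, 1.00, 0.98, 1.05, 0.92])

def cohens_d(a, b):
    # Pooled-standard-deviation form of Cohen's d.
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

d = cohens_d(full_relief, partial_relief)
print(round(d, 2))
```
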

  2. Controlling the type I error rate in two-stage sequential adaptive designs when testing for average bioequivalence.

    PubMed

    Maurer, Willi; Jones, Byron; Chen, Ying

    2018-05-10

    In a 2×2 crossover trial for establishing average bioequivalence (ABE) of a generic agent and a currently marketed drug, the recommended approach to hypothesis testing is the two one-sided test (TOST) procedure, which depends, among other things, on the estimated within-subject variability. The power of this procedure, and therefore the sample size required to achieve a minimum power, depends on having a good estimate of this variability. When there is uncertainty, it is advisable to plan the design in two stages, with an interim sample size reestimation after the first stage, using an interim estimate of the within-subject variability. One method and 3 variations of doing this were proposed by Potvin et al. Using simulation, the operating characteristics, including the empirical type I error rate, of the 4 variations (called Methods A, B, C, and D) were assessed by Potvin et al and Methods B and C were recommended. However, none of these 4 variations formally controls the type I error rate of falsely claiming ABE, even though the amount of inflation produced by Method C was considered acceptable. A major disadvantage of assessing type I error rate inflation using simulation is that unless all possible scenarios for the intended design and analysis are investigated, it is impossible to be sure that the type I error rate is controlled. Here, we propose an alternative, principled method of sample size reestimation that is guaranteed to control the type I error rate at any given significance level. This method uses a new version of the inverse-normal combination of p-values test, in conjunction with standard group sequential techniques, that is more robust to large deviations in initial assumptions regarding the variability of the pharmacokinetic endpoints. The sample size reestimation step is based on significance levels and power requirements that are conditional on the first-stage results. 
This necessitates a discussion and exploitation of the peculiar properties of the power curve of the TOST testing procedure. We illustrate our approach with an example based on a real ABE study and compare the operating characteristics of our proposed method with those of Method B of Potvin et al. Copyright © 2018 John Wiley & Sons, Ltd.
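
    For context, the classical (non-sequential) TOST decision on the log scale can be sketched as follows: ABE is declared when the 90% confidence interval for the log geometric-mean ratio lies entirely inside [ln 0.8, ln 1.25], the conventional bioequivalence limits. A normal quantile stands in for the t-quantile here for brevity, so this is an approximation, not the exact small-sample procedure.

```python
import math
from statistics import NormalDist

def tost_abe(log_ratio_mean, se, alpha=0.05):
    # TOST via the equivalent 100(1-2*alpha)% CI-inclusion rule:
    # declare ABE if the CI for the log ratio is inside [ln 0.8, ln 1.25].
    z = NormalDist().inv_cdf(1 - alpha)     # z-approximation to t-quantile
    lo, hi = log_ratio_mean - z * se, log_ratio_mean + z * se
    return math.log(0.8) < lo and hi < math.log(1.25)

print(tost_abe(0.02, 0.05))   # CI ~ (-0.062, 0.102), inside the limits
print(tost_abe(0.15, 0.05))   # upper CI limit exceeds ln 1.25 ~ 0.223
```

The two-stage methods discussed in the abstract wrap a rule like this inside a group-sequential framework with an interim sample-size reestimation, which is where type I error control becomes delicate.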

  3. Double emulsion formation through hierarchical flow-focusing microchannel

    NASA Astrophysics Data System (ADS)

    Azarmanesh, Milad; Farhadi, Mousa; Azizian, Pooya

    2016-03-01

    A microfluidic device is presented for creating double emulsions, controlling their sizes and also manipulating encapsulation processes. As a result of the interaction of three immiscible liquids using dripping instability, double emulsions can be produced elegantly. Effects of dimensionless numbers are investigated, namely the Weber number of the inner phase (We_in), the Capillary number of the inner droplet (Ca_in), and the Capillary number of the outer droplet (Ca_out). They affect the formation process, inner and outer droplet size, and separation frequency. Direct numerical simulation of the governing equations was done using the volume-of-fluid method and an adaptive mesh refinement technique. Two kinds of double emulsion formation, the two-step and the one-step, were simulated, in which the thickness of the sheath of double emulsions can be adjusted. Altering each dimensionless number will change the detachment location, outer droplet size and droplet formation period. Moreover, the decussate regime of the double-emulsion/empty-droplet is observed at low We_in. This phenomenon can be obtained by adjusting the We_in at which the maximum size of the sheath is discovered. Also, the results show that Ca_in has a significant influence on the outer droplet size in the two-step process, while Ca_out affects the sheath in the one-step formation considerably.

  4. A Two-Step Approach to Analyze Satisfaction Data

    ERIC Educational Resources Information Center

    Ferrari, Pier Alda; Pagani, Laura; Fiorio, Carlo V.

    2011-01-01

    In this paper a two-step procedure based on Nonlinear Principal Component Analysis (NLPCA) and Multilevel models (MLM) for the analysis of satisfaction data is proposed. The basic hypothesis is that observed ordinal variables describe different aspects of a latent continuous variable, which depends on covariates connected with individual and…

  5. Evaluating the effects of variable water chemistry on bacterial transport during infiltration.

    PubMed

    Zhang, Haibo; Nordin, Nahjan Amer; Olson, Mira S

    2013-07-01

    Bacterial infiltration through the subsurface has been studied experimentally under different conditions of interest and is dependent on a variety of physical, chemical and biological factors. However, most bacterial transport studies fail to adequately represent the complex processes occurring in natural systems. Bacteria are frequently detected in stormwater runoff, and may present a risk of microbial contamination during stormwater recharge into groundwater. Mixing of stormwater runoff with groundwater during infiltration results in changes in local solution chemistry, which may lead to changes in both bacterial and collector surface properties and subsequent bacterial attachment rates. This study focuses on quantifying changes in bacterial transport behavior under variable solution chemistry, and on comparing the influences of chemical variability and physical variability on bacterial attachment rates. Bacterial attachment rate at the soil-water interface was predicted analytically using a combined rate equation, which varies temporally and spatially with respect to changes in solution chemistry. Two-phase Monte Carlo analysis was conducted and an overall input-output correlation coefficient was calculated to quantitatively describe the importance of physicochemical variation on the estimates of attachment rate. Among physical variables, soil particle size has the highest correlation coefficient, followed by porosity of the soil media, bacterial size and flow velocity. Among chemical variables, ionic strength has the highest correlation coefficient. A semi-reactive microbial transport model was developed within HP1 (HYDRUS1D-PHREEQC) and applied to column transport experiments with constant and variable solution chemistries.
Bacterial attachment rates varied from 9.10×10⁻³ min⁻¹ to 3.71×10⁻³ min⁻¹ due to mixing of synthetic stormwater (SSW) with artificial groundwater (AGW), while bacterial attachment remained constant at 9.10×10⁻³ min⁻¹ in a constant solution chemistry (AGW only). The model matched observed bacterial breakthrough curves well. Although limitations exist in the application of a semi-reactive microbial transport model, this method represents one step towards a more realistic model of bacterial transport in complex microbial-water-soil systems. Copyright © 2013 Elsevier B.V. All rights reserved.
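
    The input-output correlation ranking used in the Monte Carlo analysis can be illustrated with a toy attachment-rate model. The functional form and parameter ranges below are invented for illustration only (they are not the combined rate equation of the study); the point is the technique: sample the inputs, evaluate the output, and rank inputs by the magnitude of their correlation with it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Hypothetical inputs with arbitrary but plausible ranges.
particle_size  = rng.uniform(0.1, 2.0, n)   # mm
porosity       = rng.uniform(0.3, 0.5, n)
ionic_strength = rng.uniform(1, 100, n)     # mM

# Invented attachment-rate model: decreases with particle size,
# increases with ionic strength, plus a little observation noise.
rate = 0.01 / particle_size * (1 + 0.02 * ionic_strength) \
       * (1 - porosity) + rng.normal(0, 1e-4, n)

# Rank inputs by |input-output correlation|.
for name, x in [("particle_size", particle_size),
                ("porosity", porosity),
                ("ionic_strength", ionic_strength)]:
    print(name, round(abs(np.corrcoef(x, rate)[0, 1]), 2))
```
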

  6. Clustering of Variables for Mixed Data

    NASA Astrophysics Data System (ADS)

    Saracco, J.; Chavent, M.

    2016-05-01

    This chapter presents clustering of variables, the aim of which is to group together strongly related variables. The proposed approach works on a mixed data set, i.e. on a data set which contains numerical variables and categorical variables. Two algorithms for clustering of variables are described: a hierarchical clustering and a k-means type clustering. A brief description of the PCAmix method (a principal component analysis for mixed data) is provided, since the calculation of the synthetic variables summarizing the obtained clusters of variables is based on this multivariate method. Finally, the R packages ClustOfVar and PCAmixdata are illustrated on real mixed data. The PCAmix and ClustOfVar approaches are first used for dimension reduction (step 1) before applying in step 2 a standard clustering method to obtain groups of individuals.
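
    A hierarchical variable-clustering step of this general kind can be sketched with a 1 − |correlation| dissimilarity and single-linkage agglomeration. Note this is only an analogue: ClustOfVar itself uses a different, PCAmix-based homogeneity criterion and handles categorical variables, which this numerical-only sketch does not.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200

# Synthetic data: variables 0-2 follow one latent factor, 3-4 another.
f1, f2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([f1 + 0.3 * rng.normal(size=n) for _ in range(3)] +
                    [f2 + 0.3 * rng.normal(size=n) for _ in range(2)])

# Dissimilarity between variables: 1 - |Pearson correlation|.
d = 1 - np.abs(np.corrcoef(X, rowvar=False))

# Single-linkage agglomeration down to two clusters of variables.
clusters = [{i} for i in range(X.shape[1])]
while len(clusters) > 2:
    best, pair = np.inf, None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            link = min(d[a, b] for a in clusters[i] for b in clusters[j])
            if link < best:
                best, pair = link, (i, j)
    i, j = pair
    clusters[i] |= clusters.pop(j)

print(sorted(sorted(c) for c in clusters))
```
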

  7. Immediate Effects of Clock-Turn Strategy on the Pattern and Performance of Narrow Turning in Persons With Parkinson Disease.

    PubMed

    Yang, Wen-Chieh; Hsu, Wei-Li; Wu, Ruey-Meei; Lin, Kwan-Hwa

    2016-10-01

    Turning difficulty is common in people with Parkinson disease (PD). The clock-turn strategy is a cognitive movement strategy to improve turning performance in people with PD, although its effects are unverified. Therefore, this study aimed to investigate the effects of the clock-turn strategy on the pattern of turning steps, turning performance, and freezing of gait during narrow turning, and how these effects were influenced by concurrent performance of a cognitive task (dual task). Twenty-five people with PD were randomly assigned to the clock-turn or usual-turn group. Participants performed the Timed Up and Go test with and without a concurrent cognitive task during the medication OFF period. The clock-turn group performed the Timed Up and Go test using the clock-turn strategy, whereas participants in the usual-turn group performed in their usual manner. Measurements were taken during the 180° turn of the Timed Up and Go test. The pattern of turning steps was evaluated by step time variability and step time asymmetry. Turning performance was evaluated by turning time and number of turning steps. The number and duration of freezing of gait episodes were calculated by video review. The clock-turn group had lower step time variability and step time asymmetry than the usual-turn group. Furthermore, the clock-turn group turned faster with fewer freezing of gait episodes than the usual-turn group. Dual task increased the step time variability and step time asymmetry in both groups but did not affect turning performance and freezing severity. The clock-turn strategy reduces turning time and freezing of gait during turning, probably by lowering step time variability and asymmetry. Dual task compromises the effects of the clock-turn strategy, suggesting a competition for attentional resources. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A141).

  8. On the Existence of Step-To-Step Breakpoint Transitions in Accelerated Sprinting

    PubMed Central

    McGhie, David; Danielsen, Jørgen; Sandbakk, Øyvind; Haugen, Thomas

    2016-01-01

    Accelerated running is characterised by a continuous change of kinematics from one step to the next. It has been argued that breakpoints in the step-to-step transitions may occur, and that these breakpoints are an essential characteristic of the dynamics of accelerated running. We examined this notion by comparing a continuous exponential curve fit (indicating continuity, i.e., smooth transitions) with linear piecewise fitting (indicating a breakpoint). We recorded the kinematics of 24 well trained sprinters during a 25 m sprint run with start from competition starting blocks. Kinematic data were collected for 24 anatomical landmarks in 3D, and the location of the centre of mass (CoM) was calculated from this data set. The step-to-step development of seven variables (four related to CoM position, and ground contact time, aerial time and step length) was analysed by curve fitting. In most individual sprints (in total, 41 sprints were successfully recorded) no breakpoints were identified for the variables investigated. However, for the mean results (i.e., the mean curve for all athletes) breakpoints were identified for the development of vertical CoM position, angle of acceleration and distance between support surface and CoM. It must be noted that for these variables the exponential fit showed high correlations (r² > 0.99). No relationship was found between the occurrences of breakpoints for different variables as investigated using odds ratios (Mantel-Haenszel Chi-square statistic). It is concluded that although breakpoints regularly appear during accelerated running, they are not the rule and are therefore unlikely to be a fundamental characteristic, but more likely an expression of imperfect performance. PMID:27467387
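
    The continuous-exponential side of the comparison can be checked by fitting y = c + a·exp(−k·step) to step-to-step data and inspecting r². The synthetic contact times below are illustrative (smoothly decaying by construction); the rate constant is found by a coarse grid search with the linear coefficients solved by least squares at each candidate k.

```python
import numpy as np

rng = np.random.default_rng(4)
steps = np.arange(1, 16)

# Hypothetical contact times decaying smoothly toward a steady-state value.
y = 0.10 + 0.12 * np.exp(-steps / 4) + rng.normal(0, 0.001, steps.size)

# Fit y = c + a*exp(-k*step): grid over k, OLS for (c, a) at each k.
best = (None, -np.inf)
for k in np.linspace(0.05, 1.0, 96):
    A = np.column_stack([np.ones_like(steps, dtype=float),
                         np.exp(-k * steps)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1 - resid.var() / y.var()
    if r2 > best[1]:
        best = (k, r2)

print(round(best[1], 3))   # an r^2 near 1 argues against a breakpoint
```
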

  9. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other. A procedure is developed to fulfil the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables are identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. Uncertainty of model performances due to calibration data selection is investigated with a random selection method. An approach using the cluster method is applied in order to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content in calibration data is important in addition to the size of calibration data.
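
    The cross-validated input-selection idea can be sketched as forward selection: add the candidate variable that most reduces k-fold cross-validated error, and stop when no candidate improves it, which guards against overfitting. The data and stopping rule below are illustrative, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 120, 6

# Only the first two of six candidate explanatory variables carry signal.
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.5, n)

def cv_mse(cols, k=5):
    # k-fold cross-validated MSE of an OLS model on the given columns.
    idx = np.arange(n)
    err = []
    for f in np.array_split(idx, k):
        tr = np.setdiff1d(idx, f)
        A = np.column_stack([np.ones(len(tr)), X[np.ix_(tr, cols)]])
        coef, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
        Af = np.column_stack([np.ones(len(f)), X[np.ix_(f, cols)]])
        err.append(np.mean((y[f] - Af @ coef) ** 2))
    return np.mean(err)

# Forward selection with a no-improvement stopping rule.
selected, remaining, score = [], list(range(p)), np.inf
while remaining:
    trial = min(remaining, key=lambda j: cv_mse(selected + [j]))
    new = cv_mse(selected + [trial])
    if new >= score:
        break
    selected.append(trial)
    remaining.remove(trial)
    score = new

print(sorted(selected))
```
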

  10. Droplet size prediction in ultrasonic nebulization for non-oxide ceramic powder synthesis.

    PubMed

    Muñoz, Mariana; Goutier, Simon; Foucaud, Sylvie; Mariaux, Gilles; Poirier, Thierry

    2018-03-01

    The spray pyrolysis process has been used for the synthesis of non-oxide ceramic powders from liquid precursors in the Si/C/N system. Particles with a high thermal stability and with variable composition and size distribution have been obtained. In this process, the mechanisms involved in precursor decomposition and gas-phase recombination of species are still unknown. The final aim of this work is to improve comprehension of the whole process through an experimental/modelling approach that helps connect the characteristics of the synthesized particles to the precursor properties and process operating parameters. It includes the following steps: aerosol formation by a piezoelectric nebulizer, its transport, and the chemical-physical phenomena involved in the reaction processes. This paper focuses on the aerosol characterization to understand the relationship between the liquid precursor properties and the liquid droplet diameter distribution. Liquids with properties close to the precursor of interest (hexamethyldisilazane) have been used. Experiments have been performed using a shadowgraphy technique to determine the drop size distribution of the aerosol. For all operating parameters of the nebulizer device and liquids used, bimodal droplet size distributions have been obtained. Correlations proposed in the literature for droplet size prediction by ultrasonic nebulization were used and adapted to the specific nebulizer device used in this study, showing rather good agreement with experimental values. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Inferring the demographic history from DNA sequences: An importance sampling approach based on non-homogeneous processes.

    PubMed

    Ait Kaci Azzou, S; Larribe, F; Froda, S

    2016-10-01

    In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non-homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.

  12. Vine vigor components and its variability - relationship to wine composition

    NASA Astrophysics Data System (ADS)

    Lafontaine, Magali; Tittmann, Susanne; Stoll, Manfred

    2015-04-01

    It was pointed out that a high spatial variability for canopy size and yield would exist within a vineyard but a high temporal stability over the years was observed. Furthermore, a greater variability in grape phenolics than in sugars and pH was detected within a vineyard. But the link between remote sensing indices and quality parameters of grapes is still unclear. Indeed, though in red grape varieties anthocyanin content was spatially negatively correlated to vigor parameters, it seemed that yield, Normalized Difference Vegetation Index (NDVI) and Plant Cell Density (PCD) indices were poorly correlated. Moreover, the link to quality parameters of wines remains uncertain. It was shown that more vigorous vines would lead to wines with less tannins while anthocyanins in wines would be highest when the vines were balanced, but the question is whether vine size or architecture, yield or nitrogen assimilation would make the major contribution to those differences. The general scope of our project was to provide further knowledge on the relationship between vigor parameters and wine composition and relate these to the information gained by remote sensing. Variability in a 0.15 ha vineyard of Pinot noir planted in 2003 and grafted on SO4 rootstock at Geisenheim (Germany) was followed. Vine vigor was assessed manually for each of the 400 vines (cane number, pruning weight, trunk diameter) together with yield parameters (number of bunches per vine, crop yield). Leaf composition was assessed with a hand-held optical sensor (Multiplex3® [Mx3], Force-A, Orsay, France) based on chlorophyll fluorescence screening, providing information on leaf chlorophyll (SFR_G) and nitrogen (NBI_G) content. A micro-scale winemaking of single vines with a 3-factorial design on yield (L low, M middle, H high), SFR_G (L, M, H) and canopy size (pruning weight, trunk diameter) (L, M, H) was performed for 2013 and 2014 to completely reflect variability.
Wine tannin concentration showed the highest variability, with an 11-fold concentration range (50-550 mg CE L⁻¹), while variability of anthocyanins was lower, with a 3-fold concentration range (90-250 mg M3OG L⁻¹). The results showed that differences in leaf chlorophyll (SFR_G) would represent the most important factor influencing wine phenolic composition. Measurements of soil resistivity based on the ARP technique (Geocarta, Paris, France), leaf composition with a mounted Multiplex providing information on porosity (NFI), biomass (BIOMASS) and chlorophyll (BISFR), together with NDVI assessed by geo-X8000 (geo-konzept-Gesellschaft für Umweltplanungssysteme mbH, Adelschlag, Germany), were performed. Grape and berry composition was also assessed with Mx3, providing information on anthocyanin (ANTH, FERARI) and sugar (SFR_R) variability. In a second step, vines similar in size (trunk diameter and cane number) and similar in yield (number of bunches per vine) were divided into 3 groups differing in leaf SFR_G. A larger-scale winemaking (150 kg) showed that with increasing SFR_G, Pinot noir wine typicity decreased together with anthocyanin concentration while tannin concentration increased. A better understanding of vineyard variability for targeted management or harvest would make it possible to produce and select fruit for a favored wine style.

  13. Analysis Techniques for Microwave Dosimetric Data.

    DTIC Science & Technology

    1985-10-01

    [The indexed excerpt is an OCR fragment of a Fortran source listing; the recoverable comments describe the starting frequency, the step size, and the number of steps in the frequency list, followed by a call to FILE2().]

  14. Application of Rosenbrock search technique to reduce the drilling cost of a well in Bai-Hassan oil field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aswad, Z.A.R.; Al-Hadad, S.M.S.

    1983-03-01

    The powerful Rosenbrock search technique, which optimizes both the search directions using the Gram-Schmidt procedure and the step size using the Fibonacci line search method, has been used to optimize the drilling program of an oil well drilled in the Bai-Hassan oil field in Kirkuk, Iraq, using the two-dimensional drilling model of Galle and Woods. This model shows the effect of the two major controllable variables, weight on bit and rotary speed, on the drilling rate, while considering other controllable variables such as the mud properties, hydrostatic pressure, hydraulic design, and bit selection. The effect of tooth dullness on the drilling rate is also considered. Increasing the weight on the drill bit with a small increase or decrease in rotary speed resulted in a significant decrease in the drilling cost for most bit runs. It was found that a 48% reduction in this cost and a 97-hour savings in the total drilling time were possible under certain conditions.
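
    The one-dimensional line-search component can be illustrated with golden-section search, a close relative of the Fibonacci line search named above (Fibonacci search uses ratios of Fibonacci numbers instead of the fixed golden ratio). The quadratic cost curve and bracket below are hypothetical, not the Galle-Woods model.

```python
import math

def golden_section(f, lo, hi, tol=1e-5):
    # Shrink a bracket around the minimizer of a unimodal function,
    # reusing one interior evaluation per iteration.
    invphi = (math.sqrt(5) - 1) / 2
    x1 = hi - invphi * (hi - lo)
    x2 = lo + invphi * (hi - lo)
    f1, f2 = f(x1), f(x2)
    while hi - lo > tol:
        if f1 < f2:                      # minimum lies in [lo, x2]
            hi, x2, f2 = x2, x1, f1
            x1 = hi - invphi * (hi - lo)
            f1 = f(x1)
        else:                            # minimum lies in [x1, hi]
            lo, x1, f1 = x1, x2, f2
            x2 = lo + invphi * (hi - lo)
            f2 = f(x2)
    return (lo + hi) / 2

# Hypothetical cost-per-foot curve with a minimum at 35 klbf weight on bit.
cost = lambda w: (w - 35.0) ** 2 / 50 + 10
w_opt = golden_section(cost, 10.0, 60.0)
print(round(w_opt, 1))   # -> 35.0
```
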

  15. Parameter Estimation with Almost No Public Communication for Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano

    2018-06-01

    One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel, such as the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint, as the whole raw key can be used for both parameter estimation and secret key generation without compromising security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in an MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.

  16. Efficient Simulation of Wing Modal Response: Application of 2nd Order Shape Sensitivities and Neural Networks

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Liu, Youhua

    2000-01-01

    At the preliminary design stage of a wing structure, an efficient simulation, one needing little computation but yielding adequately accurate results for various response quantities, is essential in the search for an optimal design in a vast design space. In the present paper, methods using sensitivities up to second order, as well as the direct application of neural networks, are explored. The example problem is determining the natural frequencies of a wing from the shape variables of the structure. It is shown that when sensitivities cannot be obtained analytically, the finite difference approach is usually more reliable than a semi-analytical approach, provided an appropriate step size is used. Using second-order sensitivities is shown to yield much better results than using only first-order sensitivities. When neural networks are trained to relate the wing natural frequencies to the shape variables, a negligible computational effort is needed to accurately determine the natural frequencies of a new design.
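
The finite-difference sensitivity idea above, and why "an appropriate step size" matters, can be sketched with a generic central-difference derivative (an illustrative helper, not the authors' wing model): truncation error shrinks as the step h decreases, but floating-point round-off grows as 1/h, so h must sit in between.

```python
def central_difference(f, x, h):
    """Second-order central finite-difference estimate of f'(x).

    Truncation error scales as h**2, while round-off error in the
    subtraction grows like machine_eps / h, so neither a very large
    nor a very small step gives an accurate sensitivity.
    """
    return (f(x + h) - f(x - h)) / (2.0 * h)
```

For double precision, a step on the order of 1e-5 times the scale of x is often a reasonable compromise for first derivatives.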

  17. Testing electroexplosive devices by programmed pulsing techniques

    NASA Technical Reports Server (NTRS)

    Rosenthal, L. A.; Menichelli, V. J.

    1976-01-01

    A novel method for testing electroexplosive devices is proposed wherein capacitor discharge pulses, with increasing energy in a step-wise fashion, are delivered to the device under test. The size of the energy increment can be programmed so that firing takes place after many, or after only a few, steps. The testing cycle is automatically terminated upon firing. An energy-firing contour relating the energy required to the programmed step size describes the single-pulse firing energy and the possible sensitization or desensitization of the explosive device.

  18. Finite element modelling to assess the effect of surface mounted piezoelectric patch size on vibration response of a hybrid beam

    NASA Astrophysics Data System (ADS)

    Rahman, N.; Alam, M. N.

    2018-02-01

    Vibration response analysis of a hybrid beam with a surface mounted piezoelectric patch layer is presented in this work. A one-dimensional finite element (1D-FE) model based on the efficient layerwise (zigzag) theory is used for the analysis. The beam element has eight mechanical and a variable number of electrical degrees of freedom. The beams are also modelled in 2D-FE (ABAQUS) using a plane stress piezoelectric quadrilateral element for the piezo layers and a plane stress quadrilateral element for the elastic layers of the hybrid beams. Results are presented to assess the effect of the size of the piezoelectric patch layer on the free and forced vibration responses of thin and moderately thick beams under clamped-free and clamped-clamped configurations. The beams are subjected to unit step loading and harmonic loading to obtain the forced vibration responses. Vibration control using in-phase actuation potentials on the piezoelectric patches is also studied. The 1D-FE results are compared with the 2D-FE results.

  19. Stepping motor controller

    DOEpatents

    Bourret, S.C.; Swansen, J.E.

    1982-07-02

    A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.

  20. Is There Evidence that Runners can Benefit from Wearing Compression Clothing?

    PubMed

    Engel, Florian Azad; Holmberg, Hans-Christer; Sperlich, Billy

    2016-12-01

    Runners at various levels of performance, specializing in different events (from 800 m to marathons), wear compression socks, sleeves, shorts, and/or tights in an attempt to improve their performance and facilitate recovery. Recently, a number of publications reporting contradictory results with regard to the influence of compression garments in this context have appeared. To assess original research on the effects of compression clothing (socks, calf sleeves, shorts, and tights) on running performance and recovery, a computerized search of the electronic databases PubMed, MEDLINE, SPORTDiscus, and Web of Science was performed in September of 2015, and the relevant articles published in peer-reviewed journals were identified and rated using the Physiotherapy Evidence Database (PEDro) scale. Studies examining effects on physiological, psychological, and/or biomechanical parameters during or after running were included, and the means and measures of variability for each outcome were employed to calculate Hedges' g effect sizes and associated 95 % confidence intervals for comparison of experimental (compression) and control (non-compression) trials. Compression garments exerted no statistically significant mean effects on running performance (times for a (half) marathon, 15-km trail running, 5- and 10-km runs, and 400-m sprint), maximal and submaximal oxygen uptake, blood lactate concentrations, blood gas kinetics, cardiac parameters (including heart rate, cardiac output, cardiac index, and stroke volume), body and perceived temperature, or the performance of strength-related tasks after running. Small positive effect sizes were calculated for the time to exhaustion (in incremental or step tests), running economy (including biomechanical variables), clearance of blood lactate, perceived exertion, maximal voluntary isometric contraction and peak leg muscle power immediately after running, and markers of muscle damage and inflammation. 
The body core temperature was moderately affected by compression, while the effect size values for post-exercise leg soreness and the delay in onset of muscle fatigue indicated large positive effects. Our present findings suggest that by wearing compression clothing, runners may improve variables related to endurance performance (i.e., time to exhaustion) slightly, due to improvements in running economy, biomechanical variables, perception, and muscle temperature. They should also benefit from reduced muscle pain, damage, and inflammation.
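
The Hedges' g effect size with an approximate 95 % confidence interval used in such comparisons can be computed from group summary statistics roughly as follows. This is a generic sketch of the standard formulas (function names are illustrative), not the review's actual analysis script:

```python
import math

def hedges_g(mean_exp, mean_ctrl, sd_exp, sd_ctrl, n_exp, n_ctrl):
    """Hedges' g: small-sample bias-corrected standardized mean difference."""
    # pooled standard deviation of the two groups
    sp = math.sqrt(((n_exp - 1) * sd_exp ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
                   / (n_exp + n_ctrl - 2))
    d = (mean_exp - mean_ctrl) / sp          # Cohen's d
    j = 1 - 3 / (4 * (n_exp + n_ctrl) - 9)   # bias-correction factor
    return j * d

def g_confidence_interval(g, n_exp, n_ctrl, z=1.96):
    """Approximate 95% CI for g from its large-sample variance."""
    var = (n_exp + n_ctrl) / (n_exp * n_ctrl) + g ** 2 / (2 * (n_exp + n_ctrl))
    se = math.sqrt(var)
    return g - z * se, g + z * se
```

A CI that straddles zero corresponds to the "no statistically significant mean effect" findings reported above.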

  1. Method of controlling a variable geometry type turbocharger

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirabayashi, Y.

    1988-08-23

    This patent describes a method of controlling the supercharging pressure of a variable geometry type turbocharger having a bypass, comprising the following steps which are carried out successively: receiving signals from an engine speed sensor and from an engine knocking sensor; receiving a signal from a throttle valve sensor; judging whether or not an engine is being accelerated, and proceeding to step below if the engine is being accelerated and to step below if the engine is not being accelerated, i.e., if the engine is in a constant speed operation; determining a first correction value and proceeding to step below; judging whether or not the engine is knocking, and proceeding to step (d) if knocking is occurring and to step (f) below if no knocking is occurring; determining a second correction value and proceeding to step; receiving signals from the engine speed sensor and from an airflow meter which measures the quantity of airflow to be supplied to the engine; calculating an airflow rate per engine revolution; determining a duty value according to the calculated airflow rate; transmitting the corrected duty value to control means for controlling the geometry of the variable geometry type turbocharger and the opening of the bypass of the turbocharger, thereby controlling the supercharging pressure of the turbocharger.

  2. Bipolar radiofrequency ablation with 2 × 2 electrodes as a building block for matrix radiofrequency ablation: Ex vivo liver experiments and finite element method modelling.

    PubMed

    Mulier, Stefaan; Jiang, Yansheng; Jamart, Jacques; Wang, Chong; Feng, Yuanbo; Marchal, Guy; Michel, Luc; Ni, Yicheng

    2015-01-01

    The size and geometry of the ablation zone obtained with currently available radiofrequency (RF) electrodes are highly variable. Reliability might be improved by matrix radiofrequency ablation (MRFA), in which the whole tumour volume is contained within a cage of x × y parallel electrodes. The aim of this study was to optimise the smallest building block for matrix radiofrequency ablation: a recently developed bipolar 2 × 2 electrode system. In ex vivo bovine liver, the parameters of the experimental set-up were changed one by one. In a second step, finite element method (FEM) modelling of the experiment was performed to better understand the experimental findings. The optimal power to obtain complete ablation in the shortest time was 50-60 W. Performing an ablation until impedance rise was superior to ablation for a fixed duration. Increasing electrode diameter improved completeness of ablation due to lower temperature along the electrodes. A chessboard pattern of electrode polarity was inferior to a row pattern due to an electric field void in between the electrodes. Variability of ablation size was limited. The FEM correctly simulated and explained the findings in ex vivo liver. These experiments and the FEM modelling provided better insight into the factors influencing the ablation zone in a bipolar 2 × 2 electrode RF system. With optimal parameters, complete ablation was obtained quickly and with limited variability. This knowledge will be useful for building a larger system with x × y electrodes for MRFA.

  3. Kinematic Constraints Associated with the Acquisition of Overarm Throwing Part I: Step and Trunk Actions

    ERIC Educational Resources Information Center

    Stodden, David F.; Langendorfer, Stephen J.; Fleisig, Glenn S.; Andrews, James R.

    2006-01-01

    The purposes of this study were to: (a) examine differences within specific kinematic variables and ball velocity associated with developmental component levels of step and trunk action (Roberton & Halverson, 1984), and (b) if the differences in kinematic variables were significantly associated with the differences in component levels, determine…

  4. Sustainability of a Targeted Intervention Package: First Step to Success in Oregon

    ERIC Educational Resources Information Center

    Loman, Sheldon L.; Rodriguez, Billie Jo; Horner, Robert H.

    2010-01-01

    Variables affecting the sustained implementation of evidence-based practices are receiving increased attention. A descriptive analysis of the variables associated with sustained implementation of First Step to Success (FSS), a targeted intervention for young students at risk for behavior disorders, is provided. Measures based on a conceptual model…

  5. Morphology, biometry, and taxonomy of freshwater and marine interstitial cyphoderia (cercozoa: euglyphida).

    PubMed

    Todorov, Milcho; Golemansky, Vassil; Mitchell, Edward A D; Heger, Thierry J

    2009-01-01

    Good taxonomy is essential for ecological, biogeographical, and evolutionary studies of any group of organisms. Therefore, we performed detailed light- and scanning electron microscopy investigations on the shell ultrastructure and biometric analyses of the morphometric variability of five freshwater and marine interstitial testate amoebae of the genus Cyphoderia (C. trochus var. amphoralis, C. ampulla, C. margaritacea var. major, C. compressa, and C. littoralis), isolated from different populations in Bulgaria and Switzerland. Our aims were (1) to clarify the morphological characteristics of these taxa, and (2) to compare the morphology of a given taxon (C. ampulla) among different locations in Bulgaria and Switzerland as a first step towards an assessment of the geographical variation within a supposedly cosmopolitan taxon. Four of the studied taxa are characterized by a well-expressed main-size class and by a small size range of all the characters and can be defined as size-monomorphic species. Based on these results, the following systematic changes are proposed: C. major (Penard, 1891) n. comb. (Syn.: C. margaritacea var. major (Penard, 1891)) and C. amphoralis (Wailes & Penard, 1911) n. comb. (Syn.: C. trochus var. amphoralis (Wailes & Penard, 1911)). However, we also show significant morphological variability between the Swiss and Bulgarian populations of C. ampulla, suggesting the possible existence of more than one taxon within this species. Further studies are required to assess (1) if these two morphologically different taxa represent individual species, (2) if so, if more species exist, and if this diversity is due to limited distribution ranges (endemism) or if several closely related taxa occur together in different geographical areas.

  6. Scanning tunneling microscope with a rotary piezoelectric stepping motor

    NASA Astrophysics Data System (ADS)

    Yakimov, V. N.

    1996-02-01

    A compact scanning tunneling microscope (STM) with a novel rotary piezoelectric stepping motor for coarse positioning has been developed. An inertial method for rotating the rotor by a pair of piezoplates is used in the piezomotor. The minimum angular step size was a few arcseconds, with a spindle working torque of up to 1 N×cm. The design of the STM was noticeably simplified by using a piezomotor with such a small step size. A shaft eccentrically attached to the piezomotor spindle made it possible to push and pull back the cylindrical bush carrying the tubular piezoscanner. The linear step of coarse positioning was about 50 nm. STM resolution in the vertical direction was better than 0.1 nm without external vibration isolation.

  7. Neuronal differentiation of human mesenchymal stem cells in response to the domain size of graphene substrates.

    PubMed

    Lee, Yoo-Jung; Seo, Tae Hoon; Lee, Seula; Jang, Wonhee; Kim, Myung Jong; Sung, Jung-Suk

    2018-01-01

    Graphene is a noncytotoxic monolayer platform with unique physical, chemical, and biological properties. It has been demonstrated that graphene substrates may provide a promising biocompatible scaffold for stem cell therapy. Because chemical vapor deposited graphene has a two-dimensional polycrystalline structure, it is important to control the individual domain size to obtain desirable properties for the nanomaterial. However, the biological effects mediated by differences in the domain size of graphene have not yet been reported. On the basis of the control of graphene domains achieved by a one-step growth (1step-G, small domain) and a two-step growth (2step-G, large domain) process, we found that the neuronal differentiation of bone marrow-derived human mesenchymal stem cells (hMSCs) highly depended on the graphene domain size. The density of defects at the domain boundaries in 1step-G graphene was higher (8.5×), and 1step-G graphene had a lower (by 13%) water droplet contact angle than 2step-G graphene, leading to enhanced cell-substrate adhesion and upregulated neuronal differentiation of hMSCs. We confirmed that the strong interactions between cells and defects at the domain boundaries in 1step-G graphene can be attributed to their relatively high surface energy, these interactions being stronger than those between cells and the graphene surface. Our results may provide valuable information for the development of graphene-based scaffolds by clarifying which properties of graphene domains influence cell adhesion efficacy and stem cell differentiation. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 106A: 43-51, 2018.

  8. Predictors of posttraumatic stress symptoms following childbirth

    PubMed Central

    2014-01-01

    Background Posttraumatic stress disorder (PTSD) following childbirth has gained growing attention in recent years. Although a number of predictors for PTSD following childbirth have been identified (e.g., history of sexual trauma, emergency caesarean section, low social support), only very few studies have tested predictors derived from current theoretical models of the disorder. This study first aimed to replicate the association of PTSD symptoms after childbirth with predictors identified in earlier research. Second, cognitive predictors derived from Ehlers and Clark’s (2000) model of PTSD were examined. Methods N = 224 women who had recently given birth completed an online survey. In addition to computing single correlations between PTSD symptom severities and variables of interest, in a hierarchical multiple regression analysis posttraumatic stress symptoms were predicted by (1) prenatal variables, (2) birth-related variables, (3) postnatal social support, and (4) cognitive variables. Results Wellbeing during pregnancy and age were the only prenatal variables contributing significantly to the explanation of PTSD symptoms in the first step of the regression analysis. In the second step, the birth-related variables peritraumatic emotions and wellbeing during childbed significantly increased the explanation of variance. Despite showing significant bivariate correlations, social support entered in the third step did not predict PTSD symptom severities over and above the variables included in the first two steps. However, with the exception of peritraumatic dissociation all cognitive variables emerged as powerful predictors and increased the amount of variance explained from 43% to a total amount of 68%. Conclusions The findings suggest that the prediction of PTSD following childbirth can be improved by focusing on variables derived from a current theoretical model of the disorder. PMID:25026966

  9. Beyond eruptive scenarios: assessing tephra fallout hazard from Neapolitan volcanoes.

    PubMed

    Sandri, Laura; Costa, Antonio; Selva, Jacopo; Tonini, Roberto; Macedonio, Giovanni; Folch, Arnau; Sulpizio, Roberto

    2016-04-12

    Assessment of volcanic hazards is necessary for risk mitigation. Typically, hazard assessment is based on one or a few, subjectively chosen representative eruptive scenarios, which use a specific combination of eruptive sizes and intensities to represent a particular size class of eruption. While such eruptive scenarios use a range of representative members to capture a range of eruptive sizes and intensities in order to reflect a wider size class, a scenario approach neglects to account for the intrinsic variability of volcanic eruptions, and implicitly assumes that inter-class size variability (i.e. size difference between different eruptive size classes) dominates over intra-class size variability (i.e. size difference within an eruptive size class), the latter of which is treated as negligible. So far, no quantitative study has been undertaken to verify such an assumption. Here, we adopt a novel Probabilistic Volcanic Hazard Analysis (PVHA) strategy, which accounts for intrinsic eruptive variabilities, to quantify the tephra fallout hazard in the Campania area. We compare the results of the new probabilistic approach with the classical scenario approach. The results allow for determining whether a simplified scenario approach can be considered valid, and for quantifying the bias which arises when full variability is not accounted for.

  10. A Unified Probabilistic Framework for Dose–Response Assessment of Human Health Effects

    PubMed Central

    Slob, Wout

    2015-01-01

    Background When chemical health hazards have been identified, probabilistic dose–response assessment (“hazard characterization”) quantifies uncertainty and/or variability in toxicity as a function of human exposure. Existing probabilistic approaches differ for different types of endpoints or modes-of-action, lacking a unifying framework. Objectives We developed a unified framework for probabilistic dose–response assessment. Methods We established a framework based on four principles: a) individual and population dose responses are distinct; b) dose–response relationships for all (including quantal) endpoints can be recast as relating to an underlying continuous measure of response at the individual level; c) for effects relevant to humans, “effect metrics” can be specified to define “toxicologically equivalent” sizes for this underlying individual response; and d) dose–response assessment requires making adjustments and accounting for uncertainty and variability. We then derived a step-by-step probabilistic approach for dose–response assessment of animal toxicology data similar to how nonprobabilistic reference doses are derived, illustrating the approach with example non-cancer and cancer datasets. Results Probabilistically derived exposure limits are based on estimating a “target human dose” (HD_MI), which requires risk management–informed choices for the magnitude (M) of individual effect being protected against, the remaining incidence (I) of individuals with effects ≥ M in the population, and the percent confidence. In the example datasets, probabilistically derived 90% confidence intervals for HD_MI values span a 40- to 60-fold range, where I = 1% of the population experiences ≥ M = 1%–10% effect sizes. Conclusions Although some implementation challenges remain, this unified probabilistic framework can provide substantially more complete and transparent characterization of chemical hazards and support better-informed risk management decisions. 
Citation Chiu WA, Slob W. 2015. A unified probabilistic framework for dose–response assessment of human health effects. Environ Health Perspect 123:1241–1254; http://dx.doi.org/10.1289/ehp.1409385 PMID:26006063

  11. Physiological and behavioral indices of emotion dysregulation as predictors of outcome from cognitive behavioral therapy and acceptance and commitment therapy for anxiety.

    PubMed

    Davies, Carolyn D; Niles, Andrea N; Pittig, Andre; Arch, Joanna J; Craske, Michelle G

    2015-03-01

    Identifying for whom and under what conditions a treatment is most effective is an essential step toward personalized medicine. The current study examined pre-treatment physiological and behavioral variables as predictors and moderators of outcome in a randomized clinical trial comparing cognitive behavioral therapy (CBT) and acceptance and commitment therapy (ACT) for anxiety disorders. Sixty individuals with a DSM-IV defined principal anxiety disorder completed 12 sessions of either CBT or ACT. Baseline physiological and behavioral variables were measured prior to entering treatment. Self-reported anxiety symptoms were assessed at pre-treatment, post-treatment, and 6- and 12-month follow-up from baseline. Higher pre-treatment heart rate variability was associated with worse outcome across ACT and CBT. ACT outperformed CBT for individuals with high behavioral avoidance. Subjective anxiety levels during laboratory tasks did not predict or moderate treatment outcome. Due to small sample sizes of each disorder, disorder-specific predictors were not tested. Future research should examine these predictors in larger samples and across other outcome variables. Lower heart rate variability was identified as a prognostic indicator of overall outcome, whereas high behavioral avoidance was identified as a prescriptive indicator of superior outcome from ACT versus CBT. Investigation of pre-treatment physiological and behavioral variables as predictors and moderators of outcome may help guide future treatment-matching efforts. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Effects of Turbulence Model and Numerical Time Steps on Von Karman Flow Behavior and Drag Accuracy of Circular Cylinder

    NASA Astrophysics Data System (ADS)

    Amalia, E.; Moelyadi, M. A.; Ihsan, M.

    2018-04-01

    The flow of air passing around a circular cylinder at a Reynolds number of 250,000 exhibits the von Karman vortex street phenomenon. Capturing this phenomenon well requires an appropriate turbulence model. In this study, several turbulence models available in ANSYS Fluent 16.0 were tested for simulating the von Karman vortex street, namely k-epsilon, SST k-omega, Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, the effect of time step size on the accuracy of the CFD simulation was examined. The simulations are carried out using two-dimensional and three-dimensional models and then compared with experimental data. For the two-dimensional model, the von Karman vortex street phenomenon was captured successfully by using the SST k-omega turbulence model. As for the three-dimensional model, the von Karman vortex street phenomenon was captured by using the Reynolds Stress turbulence model. The time step size affects the smoothness of the drag coefficient curves over time, as well as the running time of the simulation: the smaller the time step size, the smoother the resulting drag coefficient curves, at the cost of a longer computation time.

  13. A descriptive study of step alignment and foot positioning relative to the tee by professional rugby union goal-kickers.

    PubMed

    Cockcroft, John; Van Den Heever, Dawie

    2016-01-01

    This study describes foot positioning during the final two steps of the approach to the ball amongst professional rugby goal-kickers. A 3D optical motion capture system was used to test 15 goal-kickers performing 10 goal-kicks. The distance and direction of each step, as well as individual foot contact positions relative to the tee, were measured. The intra- and inter-subject variability was calculated as well as the correlation (Pearson) between the measurements and participant anthropometrics. Inter-subject variability for the final foot position was lowest (placed 0.03 ± 0.07 m behind and 0.33 ± 0.03 m lateral to the tee) and highest for the penultimate step distance (0.666 ± 0.149 m), performed at an angle of 36.1 ± 8.5° external to the final step. The final step length was 1.523 ± 0.124 m, executed at an external angle of 35.5 ± 7.4° to the target line. The intra-subject variability was very low; distances and angles for the 10 kicks varied per participant by 1.6-3.1 cm and 0.7-1.6°, respectively. The results show that even though the participants had variability in their run-up to the tee, final foot position next to the tee was very similar and consistent. Furthermore, the inter- and intra-subject variability could not be attributed to differences in anthropometry. These findings may be useful as normative reference data for coaching, although further work is required to understand the role of other factors such as approach speed and body alignment.

  14. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show optimal resolution is achieved when the axial step size is half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
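
The Nyquist-derived reference values mentioned above can be sketched generically: step axially at half the resolvable distance, and choose an angular step whose arc length at the edge of the field of view equals that same distance. The function and numbers below are illustrative assumptions, not the authors' system parameters:

```python
import math

def helical_step_sizes(resolution_mm, fov_radius_mm):
    """Nyquist-based reference step sizes for a helical scan.

    Axially, sample at half the estimated spatial resolution; for the
    rotation, pick the angle whose arc length at the transverse FOV
    edge equals that same half-resolution distance.
    """
    nyquist_mm = resolution_mm / 2.0
    axial_step_mm = nyquist_mm
    angular_step_deg = math.degrees(nyquist_mm / fov_radius_mm)
    return axial_step_mm, angular_step_deg
```

Per the abstract's empirical finding, the best-performing protocol used roughly half this axial step and about twice this angular step, so these values are a starting point for tuning rather than the optimum.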

  15. Simulation methods with extended stability for stiff biochemical kinetics.

    PubMed

    Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin

    2010-08-11

    With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
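
The basic Poisson tau-leap step described above can be sketched for the simplest possible system, a single decay channel A -> 0 with propensity a(x) = c*x. This is a minimal fixed-step illustration; the Runge-Kutta extension proposed in the paper is not reproduced here:

```python
import math
import random

def poisson_tau_leap(x0, c, tau, t_end, seed=0):
    """Fixed-step Poisson tau-leap for the decay reaction A -> 0.

    Each leap fires k ~ Poisson(a(x) * tau) reactions at once,
    instead of simulating every individual event as the exact SSA
    would, trading variance accuracy for larger step sizes.
    """
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end and x > 0:
        lam = c * x * tau              # expected number of firings in the leap
        # Poisson draw via Knuth's inverse-transform method
        k, p, limit = 0, rng.random(), math.exp(-lam)
        while p > limit:
            k += 1
            p *= rng.random()
        x = max(x - k, 0)              # fire k reactions, clamp at zero
        t += tau
    return x
```

With c = 0.1 and t_end = 10, the population decays to roughly exp(-1) of its initial value, as the exact SSA would predict in expectation.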

  16. Craters of the Pluto-Charon system

    NASA Astrophysics Data System (ADS)

    Robbins, Stuart J.; Singer, Kelsi N.; Bray, Veronica J.; Schenk, Paul; Lauer, Tod R.; Weaver, Harold A.; Runyon, Kirby; McKinnon, William B.; Beyer, Ross A.; Porter, Simon; White, Oliver L.; Hofgartner, Jason D.; Zangari, Amanda M.; Moore, Jeffrey M.; Young, Leslie A.; Spencer, John R.; Binzel, Richard P.; Buie, Marc W.; Buratti, Bonnie J.; Cheng, Andrew F.; Grundy, William M.; Linscott, Ivan R.; Reitsema, Harold J.; Reuter, Dennis C.; Showalter, Mark R.; Tyler, G. Len; Olkin, Catherine B.; Ennico, Kimberly S.; Stern, S. Alan; New Horizons Lorri, Mvic Instrument Teams

    2017-05-01

    NASA's New Horizons flyby mission of the Pluto-Charon binary system and its four moons provided humanity with its first spacecraft-based look at a large Kuiper Belt Object beyond Triton. Excluding this system, multiple Kuiper Belt Objects (KBOs) have been observed for only 20 years from Earth, and the KBO size distribution is unconstrained except among the largest objects. Because small KBOs will remain beyond the capabilities of ground-based observatories for the foreseeable future, one of the best ways to constrain the small KBO population is to examine the craters they have made on the Pluto-Charon system. The first step to understanding the crater population is to map it. In this work, we describe the steps undertaken to produce a robust crater database of impact features on Pluto, Charon, and their two largest moons, Nix and Hydra. These include an examination of different types of images and image processing, and we present an analysis of variability among the crater mapping team, where crater diameters were found to average ± 10% uncertainty across all sizes measured (∼0.5-300 km). We also present a few basic analyses of the crater databases, finding that Pluto's craters' differential size-frequency distribution across the encounter hemisphere has a power-law slope of approximately -3.1 ± 0.1 over diameters D ≈ 15-200 km, and Charon's has a slope of -3.0 ± 0.2 over diameters D ≈ 10-120 km; it is significantly shallower on both bodies at smaller diameters. We also better quantify evidence of resurfacing evidenced by Pluto's craters in contrast with Charon's. With this work, we are also releasing our database of potential and probable impact craters: 5287 on Pluto, 2287 on Charon, 35 on Nix, and 6 on Hydra.
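    The power-law size-frequency fit quoted above (differential slope ≈ -3.1) can be illustrated on synthetic diameters; the sampler and binned least-squares fit below are a hedged sketch, not the authors' pipeline:

```python
import math
import random

def differential_sfd_slope(diams, d_min, d_max, n_bins=12):
    """Least-squares slope of log10(differential number density) vs log10(D),
    using logarithmically spaced diameter bins."""
    edges = [d_min * (d_max / d_min) ** (i / n_bins) for i in range(n_bins + 1)]
    xs, ys = [], []
    for lo, hi in zip(edges, edges[1:]):
        n = sum(1 for d in diams if lo <= d < hi)
        if n > 0:
            xs.append(math.log10(math.sqrt(lo * hi)))  # geometric bin centre
            ys.append(math.log10(n / (hi - lo)))       # counts per unit diameter
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# Synthetic diameters with true differential slope -3.1 (cumulative -2.1),
# drawn by inverse-transform sampling above a 15 km cutoff.
rng = random.Random(1)
q = -3.1
diams = [15.0 * rng.random() ** (1.0 / (q + 1.0)) for _ in range(20000)]
slope = differential_sfd_slope([d for d in diams if d < 200.0], 15.0, 200.0)
```

    Equal-width log bins keep the finite-bin bias uniform across bins, so it cancels in the fitted slope.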

  17. Craters of the Pluto-Charon System

    NASA Technical Reports Server (NTRS)

    Robbins, Stuart J.; Singer, Kelsi N.; Bray, Veronica J.; Schenk, Paul; Lauer, Todd R.; Weaver, Harold A.; Runyon, Kirby; Mckinnon, William B.; Beyer, Ross A.; Porter, Simon; et al.

    2016-01-01

    NASA's New Horizons flyby mission of the Pluto-Charon binary system and its four moons provided humanity with its first spacecraft-based look at a large Kuiper Belt Object beyond Triton. Excluding this system, multiple Kuiper Belt Objects (KBOs) have been observed for only 20 years from Earth, and the KBO size distribution is unconstrained except among the largest objects. Because small KBOs will remain beyond the capabilities of ground-based observatories for the foreseeable future, one of the best ways to constrain the small KBO population is to examine the craters they have made on the Pluto-Charon system. The first step to understanding the crater population is to map it. In this work, we describe the steps undertaken to produce a robust crater database of impact features on Pluto, Charon, and their two largest moons, Nix and Hydra. These include an examination of different types of images and image processing, and we present an analysis of variability among the crater mapping team, where crater diameters were found to average ±10% uncertainty across all sizes measured (∼0.5-300 km). We also present a few basic analyses of the crater databases, finding that Pluto's craters' differential size-frequency distribution across the encounter hemisphere has a power-law slope of approximately -3.1 ± 0.1 over diameters D ≈ 15-200 km, and Charon's has a slope of -3.0 ± 0.2 over diameters D ≈ 10-120 km; it is significantly shallower on both bodies at smaller diameters. We also better quantify evidence of resurfacing evidenced by Pluto's craters in contrast with Charon's. With this work, we are also releasing our database of potential and probable impact craters: 5287 on Pluto, 2287 on Charon, 35 on Nix, and 6 on Hydra.

  18. Multi-objective design optimization of antenna structures using sequential domain patching with automated patch size determination

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2018-02-01

    In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.

  19. Building an open-source robotic stereotaxic instrument.

    PubMed

    Coffey, Kevin R; Barker, David J; Ma, Sisi; West, Mark O

    2013-10-29

    This protocol includes the designs and software necessary to upgrade an existing stereotaxic instrument to a robotic (CNC) stereotaxic instrument for around $1,000 (excluding a drill), using industry standard stepper motors and CNC controlling software. Each axis has variable speed control and may be operated simultaneously or independently. The robot's flexibility and open coding system (g-code) make it capable of performing custom tasks that are not supported by commercial systems. Its applications include, but are not limited to, drilling holes, sharp edge craniotomies, skull thinning, and lowering electrodes or cannulae. In order to expedite the writing of g-code for simple surgeries, we have developed custom scripts that allow individuals to design a surgery with no knowledge of programming. However, for users to get the most out of the motorized stereotax, it would be beneficial to be knowledgeable in mathematical programming and g-code (simple programming for CNC machining). The recommended drill speed is greater than 40,000 rpm. The stepper motor resolution is 1.8°/step, geared to 0.346°/step. A standard stereotax has a resolution of 2.88 μm/step. The maximum recommended cutting speed is 500 μm/sec. The maximum recommended jogging speed is 3,500 μm/sec. The maximum recommended drill bit size is HP 2.
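    The quoted resolution figures can be reproduced with simple arithmetic, assuming (this is not stated in the record) a lead screw advancing roughly 3 mm per revolution; the g-code helper is likewise a hypothetical illustration:

```python
# Step-size arithmetic behind the quoted 2.88 um/step figure.
MOTOR_DEG_PER_STEP = 1.8     # ungeared stepper resolution
GEARED_DEG_PER_STEP = 0.346  # resolution after gearing
SCREW_UM_PER_REV = 3000.0    # ASSUMED lead-screw travel per revolution

steps_per_rev = 360.0 / GEARED_DEG_PER_STEP     # ~1040 steps per revolution
um_per_step = SCREW_UM_PER_REV / steps_per_rev  # ~2.88 um of travel per step

def steps_for_move(distance_um):
    """Whole motor steps needed to travel distance_um along one axis."""
    return round(distance_um / um_per_step)

def gcode_lower(depth_um, feed_um_per_s=500.0):
    """One hypothetical g-code line lowering the tool, with the feed rate
    capped at the recommended 500 um/sec cutting speed."""
    feed = min(feed_um_per_s, 500.0)
    # G1 expects mm and mm/min in this sketch's machine configuration.
    return f"G1 Z-{depth_um / 1000.0:.3f} F{feed * 60.0 / 1000.0:.1f}"

move_2mm = gcode_lower(2000.0)
```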

  20. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
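    For orientation, a standard (resonance-prone) multiple time-step integrator of the r-RESPA type can be sketched on a toy oscillator; note this illustrates only the inner/outer force splitting, not the isokinetic resonance-free schemes the paper builds on:

```python
def respa_step(x, v, dt_outer, n_inner, f_fast, f_slow, mass=1.0):
    """One reversible RESPA step: slow-force half-kick, n_inner velocity-Verlet
    substeps under the fast force, then the closing slow-force half-kick."""
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / mass
    for _ in range(n_inner):
        v += 0.5 * dt_inner * f_fast(x) / mass
        x += dt_inner * v
        v += 0.5 * dt_inner * f_fast(x) / mass
    v += 0.5 * dt_outer * f_slow(x) / mass
    return x, v

# Stiff spring (fast force) plus a weak external spring (slow force),
# both acting on the same 1-D coordinate; constants are illustrative.
k_fast, k_slow = 100.0, 1.0
f_fast = lambda x: -k_fast * x
f_slow = lambda x: -k_slow * x
energy = lambda x, v: 0.5 * v * v + 0.5 * (k_fast + k_slow) * x * x

x, v = 1.0, 0.0
e0 = energy(x, v)
for _ in range(2000):
    x, v = respa_step(x, v, dt_outer=0.05, n_inner=10,
                      f_fast=f_fast, f_slow=f_slow)
drift = abs(energy(x, v) - e0) / e0
```

    The outer step here is safely below the fast period; pushing it toward half the fast period triggers the resonance instability that the cited isokinetic methods remove.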

  1. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    NASA Astrophysics Data System (ADS)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  2. Variable selection in near-infrared spectroscopy: benchmarking of feature selection methods on biodiesel data.

    PubMed

    Balabin, Roman M; Smirnov, Sergey V

    2011-04-29

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm(-1)) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIR). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented.
The results of applying other spectroscopic techniques, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopy, can also be greatly improved by an appropriate choice of feature selection method. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Leaf Morphology, Taxonomy and Geometric Morphometrics: A Simplified Protocol for Beginners

    PubMed Central

    Viscosi, Vincenzo; Cardini, Andrea

    2011-01-01

    Taxonomy relies greatly on morphology to discriminate groups. Computerized geometric morphometric methods for quantitative shape analysis measure, test and visualize differences in form in a highly effective, reproducible, accurate and statistically powerful way. Plant leaves are commonly used in taxonomic analyses and are particularly suitable to landmark based geometric morphometrics. However, botanists do not yet seem to have taken advantage of this set of methods in their studies as much as zoologists have done. Using free software and an example dataset from two geographical populations of sessile oak leaves, we describe in detailed but simple terms how to: a) compute size and shape variables using Procrustes methods; b) test measurement error and the main levels of variation (population and trees) using a hierachical design; c) estimate the accuracy of group discrimination; d) repeat this estimate after controlling for the effect of size differences on shape (i.e., allometry). Measurement error was completely negligible; individual variation in leaf morphology was large and differences between trees were generally bigger than within trees; differences between the two geographic populations were small in both size and shape; despite a weak allometric trend, controlling for the effect of size on shape slighly increased discrimination accuracy. Procrustes based methods for the analysis of landmarks were highly efficient in measuring the hierarchical structure of differences in leaves and in revealing very small-scale variation. In taxonomy and many other fields of botany and biology, the application of geometric morphometrics contributes to increase scientific rigour in the description of important aspects of the phenotypic dimension of biodiversity. 
Easy to follow but detailed step by step example studies can promote a more extensive use of these numerical methods, as they provide an introduction to the discipline which, for many biologists, is less intimidating than the often inaccessible specialistic literature. PMID:21991324
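    Step (a) above, computing size and shape variables by Procrustes superimposition, can be sketched for 2-D landmarks; the square "leaf outline" below is a made-up example, not data from the protocol:

```python
import math

def procrustes_align(ref, shape):
    """Superimpose `shape` onto `ref` (lists of (x, y) landmarks): translate
    centroids to the origin, scale to unit centroid size, then rotate by the
    least-squares optimal angle. Returns the aligned shape and the residual
    Procrustes distance."""
    def centred(pts):
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        return [(x - cx, y - cy) for x, y in pts]

    def unit_size(pts):
        size = math.sqrt(sum(x * x + y * y for x, y in pts))  # centroid size
        return [(x / size, y / size) for x, y in pts]

    a = unit_size(centred(ref))
    b = unit_size(centred(shape))
    # Optimal rotation angle for 2-D ordinary Procrustes superimposition.
    num = sum(ya * xb - xa * yb for (xa, ya), (xb, yb) in zip(a, b))
    den = sum(xa * xb + ya * yb for (xa, ya), (xb, yb) in zip(a, b))
    t = math.atan2(num, den)
    rotated = [(x * math.cos(t) - y * math.sin(t),
                x * math.sin(t) + y * math.cos(t)) for x, y in b]
    dist = math.sqrt(sum((xa - xb) ** 2 + (ya - yb) ** 2
                         for (xa, ya), (xb, yb) in zip(a, rotated)))
    return rotated, dist

# A hypothetical 4-landmark outline, then the same outline rotated by 0.5 rad,
# scaled by 2 and translated: the Procrustes distance should vanish.
ref = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
c, s = math.cos(0.5), math.sin(0.5)
moved = [(3.0 + 2.0 * (x * c - y * s), -1.0 + 2.0 * (x * s + y * c))
         for x, y in ref]
aligned, dist = procrustes_align(ref, moved)
```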

  4. Performance of Low-Density Parity-Check Coded Modulation

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2011-02-01

    This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are codebit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^-6. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
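    The article's general LLR formula is not reproduced in this record; the standard AWGN form below, with its BPSK simplification, conveys the idea (constellation values and noise variance are illustrative):

```python
import math

def bpsk_llr(r, noise_var):
    """Code-bit log-likelihood ratio log P(b=0|r)/P(b=1|r) for BPSK over AWGN,
    with bit 0 mapped to +1 and bit 1 to -1; the well-known result 2r/sigma^2."""
    return 2.0 * r / noise_var

def generic_llr(r, noise_var, symbols_bit0, symbols_bit1):
    """General form for a real-valued constellation: sum Gaussian likelihoods
    over the points whose label carries a 0 (resp. 1) in the bit position of
    interest. Reduces to bpsk_llr for the two-point BPSK constellation."""
    def lik(s):
        return math.exp(-(r - s) ** 2 / (2.0 * noise_var))
    return math.log(sum(lik(s) for s in symbols_bit0) /
                    sum(lik(s) for s in symbols_bit1))
```

    Hard-decision demodulation corresponds to keeping only the sign of these quantities, which is why it costs decoding performance.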

  5. Accelerated Training for Large Feedforward Neural Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is also combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive, direct line searches. In addition, it can be utilized in the new, 'predictive' updating technique of the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary, but nevertheless important enhancements to the basic training scheme, such as improved setting of a scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We summarize by comparing the SVM method with four first- and second-order optimization algorithms including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains of the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.
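    RBackprop computes exact Hessian-vector products; a finite-difference approximation (not the authors' method, but a common stand-in) conveys why no explicit Hessian ever needs to be formed:

```python
def hessian_vector_product(grad, x, v, eps=1e-6):
    """Approximate H(x) @ v from two gradient evaluations:
    Hv ~ (grad(x + eps*v) - grad(x)) / eps.
    `grad` maps a list of coordinates to the gradient as a list; the cost is
    two gradient calls regardless of dimension, which is the same economy the
    exact R-operator (RBackprop) achieves without the truncation error."""
    g1 = grad([xi + eps * vi for xi, vi in zip(x, v)])
    g0 = grad(x)
    return [(a - b) / eps for a, b in zip(g1, g0)]

# Illustrative quadratic f(x) = x0^2 + 3*x1^2, so H = diag(2, 6).
grad = lambda x: [2.0 * x[0], 6.0 * x[1]]
hv = hessian_vector_product(grad, [0.3, -0.2], [1.0, 1.0])
```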

  6. Increased walking variability in elderly persons with congestive heart failure

    NASA Technical Reports Server (NTRS)

    Hausdorff, J. M.; Forman, D. E.; Ladin, Z.; Goldberger, A. L.; Rigney, D. R.; Wei, J. Y.

    1994-01-01

    OBJECTIVES: To determine the effects of congestive heart failure on a person's ability to walk at a steady pace while ambulating at a self-determined rate. SETTING: Beth Israel Hospital, Boston, a primary and tertiary teaching hospital, and a social activity center for elderly adults living in the community. PARTICIPANTS: Eleven elderly subjects (aged 70-93 years) with well compensated congestive heart failure (NY Heart Association class I or II), seven elderly subjects (aged 70-79 years) without congestive heart failure, and 10 healthy young adult subjects (aged 20-30 years). MEASUREMENTS: Subjects walked for 8 minutes on level ground at their own selected walking rate. Footswitches were used to measure the time between steps. Step rate (steps/minute) and step rate variability were calculated for the entire walking period, for 30 seconds during the first minute of the walk, for 30 seconds during the last minute of the walk, and for the 30-second period when each subject's step rate variability was minimal. Group means and 5% and 95% confidence intervals were computed. MAIN RESULTS: All measures of walking variability were significantly increased in the elderly subjects with congestive heart failure, intermediate in the elderly controls, and lowest in the young subjects. There was no overlap between the three groups using the minimal 30-second variability (elderly CHF vs elderly controls: P < 0.001, elderly controls vs young: P < 0.001), and no overlap between elderly subjects with and without congestive heart failure when using the overall variability. For all four measures, there was no overlap in any of the confidence intervals, and all group means were significantly different (P < 0.05).
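    The step-rate and variability measures used above can be sketched from footswitch timestamps; the two synthetic walks below are invented for illustration:

```python
import math

def step_rate_stats(step_times):
    """Mean step rate (steps/min) and coefficient of variation (%) of the
    intervals between consecutive footswitch events (times in seconds)."""
    intervals = [b - a for a, b in zip(step_times, step_times[1:])]
    mean = sum(intervals) / len(intervals)
    var = sum((i - mean) ** 2 for i in intervals) / (len(intervals) - 1)
    return 60.0 / mean, 100.0 * math.sqrt(var) / mean

# A perfectly steady walk: one step every 0.5 s (120 steps/min, zero CV).
steady = [0.5 * i for i in range(21)]
rate_steady, cv_steady = step_rate_stats(steady)

# A variable walk: intervals alternating between 0.45 s and 0.55 s.
variable = [0.0]
for i in range(20):
    variable.append(variable[-1] + (0.45 if i % 2 == 0 else 0.55))
rate_var, cv_var = step_rate_stats(variable)
```

    Both walkers average the same step rate; only the coefficient of variation separates them, which is the kind of measure that distinguished the groups above.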

  7. Portfolio of automated trading systems: complexity and learning set size issues.

    PubMed

    Raudys, Sarunas

    2013-03-01

    In this paper, we consider using profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influences of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that degradation in portfolio performance due to inexact estimation of N means and N(N - 1)/2 correlations is proportional to N/L; however, estimation of N variances does not worsen the result. To reduce unhelpful sample size/dimensionality effects, we perform a clustering of N time series and split them into a small number of blocks. Each block is composed of mutually correlated ATSs. It generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. In the output of the portfolio management system, the regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with the real financial data (2003-2012) confirm the effectiveness of the suggested approach.
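    The nontrainable 1/N rule and the N + N(N-1)/2 quantities a sample-based optimizer must estimate can be made concrete in a few lines (toy profit/loss histories, not financial data):

```python
def one_over_n_returns(agent_histories):
    """Equal-weight (1/N) portfolio profit/loss: at each time step, average
    the profit/loss across the N automated trading systems."""
    n = len(agent_histories)
    return [sum(step) / n for step in zip(*agent_histories)]

def n_estimated_params(n):
    """Quantities a sample-based mean-variance optimizer must estimate:
    N means plus N(N - 1)/2 correlations (the error source discussed above;
    the abstract notes that the N variances do not worsen the result)."""
    return n + n * (n - 1) // 2

# Two perfectly anti-correlated toy agents hedge each other out exactly.
portfolio = one_over_n_returns([[1.0, -1.0, 1.0], [-1.0, 1.0, -1.0]])
```

    The 1/N rule needs no estimation at all, which is why the N/L degradation from inexact means and correlations does not affect it.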

  8. Determination of the structures of small gold clusters on stepped magnesia by density functional calculations.

    PubMed

    Damianos, Konstantina; Ferrando, Riccardo

    2012-02-21

    The structural modifications of small supported gold clusters caused by realistic surface defects (steps) in the MgO(001) support are investigated by computational methods. The most stable gold cluster structures on a stepped MgO(001) surface are searched for in the size range up to 24 Au atoms, and locally optimized by density-functional calculations. Several structural motifs are found within energy differences of 1 eV: inclined leaflets, arched leaflets, pyramidal hollow cages and compact structures. We show that the interaction with the step clearly modifies the structures with respect to adsorption on the flat defect-free surface. We find that leaflet structures clearly dominate for smaller sizes. These leaflets are either inclined and quasi-horizontal, or arched, at variance with the case of the flat surface in which vertical leaflets prevail. With increasing cluster size, pyramidal hollow cages begin to compete against leaflet structures. Cage structures become more and more favourable as size increases. The only exception is size 20, at which the tetrahedron is found as the most stable isomer. This tetrahedron is however quite distorted. The comparison of two different exchange-correlation functionals (Perdew-Burke-Ernzerhof and local density approximation) shows the same qualitative trends. This journal is © The Royal Society of Chemistry 2012

  9. The effect of external forces on discrete motion within holographic optical tweezers.

    PubMed

    Eriksson, E; Keen, S; Leach, J; Goksör, M; Padgett, M J

    2007-12-24

    Holographic optical tweezers is a widely used technique to manipulate the individual positions of optically trapped micron-sized particles in a sample. The trap positions are changed by updating the holographic image displayed on a spatial light modulator. The updating process takes a finite time, resulting in a temporary decrease of the intensity, and thus the stiffness, of the optical trap. We have investigated this change in trap stiffness during the updating process by studying the motion of an optically trapped particle in a fluid flow. We found a highly nonlinear behavior of the change in trap stiffness vs. changes in step size. For step sizes up to approximately 300 nm the trap stiffness is decreasing. Above 300 nm the change in trap stiffness remains constant for all step sizes up to one particle radius. This information is crucial for optical force measurements using holographic optical tweezers.

  10. Rock sampling. [method for controlling particle size distribution

    NASA Technical Reports Server (NTRS)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  11. Measure Guideline. Replacing Single-Speed Pool Pumps with Variable Speed Pumps for Energy Savings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, A.; Easley, S.

    2012-05-01

    This measure guideline evaluates potential energy savings by replacing traditional single-speed pool pumps with variable speed pool pumps, and provides a basic cost comparison between continued use of traditional pumps versus new pumps. A simple step-by-step process for inspecting the pool area and installing a new pool pump follows.
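    The guideline does not state the underlying physics, but the usual rationale for variable speed pumps is the pump affinity laws, sketched here as an assumption rather than as the guideline's own analysis:

```python
def pump_power_ratio(speed_ratio):
    """Pump affinity laws (assumed, not stated in the guideline): flow scales
    linearly with impeller speed, while shaft power scales with speed cubed."""
    return speed_ratio ** 3

# Running at half speed for twice as long moves the same water volume
# for roughly a quarter of the energy.
energy_ratio = pump_power_ratio(0.5) * 2.0
```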

  12. Measure Guideline: Replacing Single-Speed Pool Pumps with Variable Speed Pumps for Energy Savings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, A.; Easley, S.

    2012-05-01

    The report evaluates potential energy savings by replacing traditional single-speed pool pumps with variable speed pool pumps, and provides a basic cost comparison between continued use of traditional pumps versus new pumps. A simple step-by-step process for inspecting the pool area and installing a new pool pump follows.

  13. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    DOE PAGES

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...

    2016-08-09

    Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.
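    The CFL restriction that the method removes can be stated in one line; the grid spacing and wave speed below are illustrative values, not parameters from the paper:

```python
def cfl_time_step(dx, wave_speed, cfl_number=1.0):
    """Largest stable explicit time step under a CFL restriction
    dt <= C * dx / c; the unconditionally stable field solver described
    above is precisely what removes this bound."""
    return cfl_number * dx / wave_speed

# Example: light-speed waves on a 1 mm grid confine an explicit scheme
# to picosecond-scale steps.
dt = cfl_time_step(dx=1.0e-3, wave_speed=3.0e8)
```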

  14. Quantification of soil water retention parameters using multi-section TDR-waveform analysis

    NASA Astrophysics Data System (ADS)

    Baviskar, S. M.; Heimovaara, T. J.

    2017-06-01

    Soil water retention parameters are important for describing flow in variably saturated soils. TDR is one of the standard methods used for determining water content in soil samples. In this study, we present an approach to estimate water retention parameters of a sample which is initially saturated and subjected to an incremental decrease in boundary head, causing it to drain in a multi-step fashion. TDR waveforms are measured along the height of the sample at different assumed hydrostatic conditions at daily intervals. The cumulative discharge outflow drained from the sample is also recorded. The saturated water content is obtained using volumetric analysis after the final step of the multi-step drainage. The equation obtained by coupling the unsaturated parametric function and the apparent dielectric permittivity is fitted to a TDR wave propagation forward model. The unsaturated parametric function is used to spatially interpolate the water contents along the TDR probe. The cumulative discharge outflow data are fitted with the cumulative discharge estimated using the unsaturated parametric function. The weight of water inside the sample estimated at the first and final boundary heads in the multi-step drainage is fitted with the corresponding weights calculated using the unsaturated parametric function. A Bayesian optimization scheme is used to obtain optimized water retention parameters for these different objective functions. This approach can be used for long samples and is especially suitable for characterizing sands with a uniform particle size distribution at low capillary heads.
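    The record does not name the "unsaturated parametric function"; the van Genuchten retention model is one common choice, sketched here with illustrative sand-like parameters:

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content at capillary pressure head h (positive, in metres) under
    the van Genuchten model, one common 'unsaturated parametric function'
    (an assumption here, as the record does not name its model); m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    if h <= 0.0:
        return theta_s  # saturated at and below the water table
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Illustrative sand-like parameters: theta_r=0.045, theta_s=0.43,
# alpha=14.5 1/m, n=2.68 (not values fitted in the study above).
theta_sat = van_genuchten_theta(0.0, 0.045, 0.43, 14.5, 2.68)
theta_dry = van_genuchten_theta(10.0, 0.045, 0.43, 14.5, 2.68)
```

    A curve of this form, evaluated at the hydrostatic head along the probe, is what supplies the spatially interpolated water contents in the fitting procedure.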

  15. Beyond eruptive scenarios: assessing tephra fallout hazard from Neapolitan volcanoes

    PubMed Central

    Sandri, Laura; Costa, Antonio; Selva, Jacopo; Tonini, Roberto; Macedonio, Giovanni; Folch, Arnau; Sulpizio, Roberto

    2016-01-01

    Assessment of volcanic hazards is necessary for risk mitigation. Typically, hazard assessment is based on one or a few, subjectively chosen representative eruptive scenarios, which use a specific combination of eruptive sizes and intensities to represent a particular size class of eruption. While such eruptive scenarios use a range of representative members to capture a range of eruptive sizes and intensities in order to reflect a wider size class, a scenario approach neglects to account for the intrinsic variability of volcanic eruptions, and implicitly assumes that inter-class size variability (i.e. size difference between different eruptive size classes) dominates over intra-class size variability (i.e. size difference within an eruptive size class), the latter of which is treated as negligible. So far, no quantitative study has been undertaken to verify such an assumption. Here, we adopt a novel Probabilistic Volcanic Hazard Analysis (PVHA) strategy, which accounts for intrinsic eruptive variabilities, to quantify the tephra fallout hazard in the Campania area. We compare the results of the new probabilistic approach with the classical scenario approach. The results allow for determining whether a simplified scenario approach can be considered valid, and for quantifying the bias which arises when full variability is not accounted for. PMID:27067389

  16. Thermal barriers constrain microbial elevational range size via climate variability.

    PubMed

    Wang, Jianjun; Soininen, Janne

    2017-08-01

    Range size is invariably limited and understanding range size variation is an important objective in ecology. However, microbial range size across geographical gradients remains understudied, especially on mountainsides. Here, the patterns of range size of stream microbes (i.e., bacteria and diatoms) and macroorganisms (i.e., macroinvertebrates) along elevational gradients in Asia and Europe were examined. In bacteria, elevational range size showed non-significant phylogenetic signals. In all taxa, there was a positive relationship between niche breadth and species elevational range size, driven by local environmental and climatic variables. No taxa followed the elevational Rapoport's rule. Climate variability explained the most variation in microbial mean elevational range size, whereas local environmental variables were more important for macroinvertebrates. Seasonal and annual climate variation showed negative effects, while daily climate variation had positive effects on community mean elevational range size for all taxa. The negative correlation between range size and species richness suggests that understanding the drivers of range is key for revealing the processes underlying diversity. The results advance the understanding of microbial species thermal barriers by revealing the importance of seasonal and diurnal climate variation, and highlight that aquatic and terrestrial biota may differ in their response to short- and long-term climate variability. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd.

  17. Some critical issues in the characterization of nanoscale thermal conductivity by molecular dynamics analysis

    NASA Astrophysics Data System (ADS)

    Ehsan Khaled, Mohammad; Zhang, Liangchi; Liu, Weidong

    2018-07-01

    The nanoscale thermal conductivity of a material can be significantly different from its value at the macroscale. Although a number of studies using the equilibrium molecular dynamics (EMD) with Green–Kubo (GK) formula have been conducted for nano-conductivity predictions, there are many problems in the analysis that have made the EMD results unreliable or misleading. This paper aims to clarify such critical issues through a thorough investigation on the effect and determination of the vital physical variables in the EMD-GK analysis, using the prediction of the nanoscale thermal conductivity of Si as an example. The study concluded that to have a reliable prediction, quantum correction, time step, simulation time, correlation time and system size are all crucial.

  18. Baby steps. Lessons learned by leaders of one of the nation's leading children's hospitals on the complexities of IT rollouts in the pediatric setting.

    PubMed

    Gamble, Kate Huvane

    2010-05-01

    As information technology has become a larger factor in the healthcare industry, one area of care that has been somewhat neglected is pediatrics. The most significant factor differentiating pediatric care from adult care in terms of IT implementation is the variability in treatment based on a patient's age and size. But while there are challenges--particularly in the areas of medication administration, growth chart analysis and clinical documentation--forward-thinking leaders have found success by collaborating with vendors to customize products to meet their needs. It's important to develop relationships with other organizations and create a network of trusted leaders who can provide advice when rolling out applications like EMRs.

  19. Bomb or Boon: Linking Population, People and Power in Fragile Regions: Comment on "The Pill Is Mightier Than the Sword".

    PubMed

    Gilpin, Raymond

    2015-10-02

    The relationship between population structure and violent conflict is complex and heavily dependent on the behavior of other variables like governance, economic prospects, and urbanization. While addressing rapid population growth might be a necessary condition for peace, it is by no means sufficient. Concomitant steps must also be taken to foster inclusivity, guarantee broader rights for all, particularly women, rebuild social contracts and ensure that all citizens have equal access to economic opportunity. Measures to control family size could reduce dependency and create greater socio-economic opportunities for women and youth. By so doing, the "youth bulge" phenomenon could be a boon for rapidly growing developing countries. © 2016 by Kerman University of Medical Sciences.

  20. Surface treated carbon catalysts produced from waste tires for fatty acids to biofuel conversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hood, Zachary D.; Adhikari, Shiba P.; Wright, Marcus W.

    A method of making solid acid catalysts includes the step of sulfonating waste tire pieces in a first sulfonation step. The sulfonated waste tire pieces are pyrolyzed to produce carbon composite pieces having a pore size less than 10 nm. The carbon composite pieces are then ground to produce carbon composite powders having a size less than 50 µm. The carbon composite powders are sulfonated in a second sulfonation step to produce sulfonated solid acid catalysts. Methods of making biofuels and solid acid catalysts are also disclosed.

  1. Method and apparatus for sizing and separating warp yarns using acoustical energy

    DOEpatents

    Sheen, Shuh-Haw; Chien, Hual-Te; Raptis, Apostolos C.; Kupperman, David S.

    1998-01-01

    A slashing process for preparing warp yarns for weaving operations including the steps of sizing and/or desizing the yarns in an acoustic resonance box and separating the yarns with a leasing apparatus comprised of a set of acoustically agitated lease rods. The sizing step includes immersing the yarns in a size solution contained in an acoustic resonance box. Acoustic transducers are positioned against the exterior of the box for generating an acoustic pressure field within the size solution. Ultrasonic waves that result from the acoustic pressure field continuously agitate the size solution to effect greater mixing and more uniform application and penetration of the size onto the yarns. The sized yarns are then separated by passing the warp yarns over and under lease rods. Electroacoustic transducers generate acoustic waves along the longitudinal axis of the lease rods, creating a shearing motion on the surface of the rods for splitting the yarns.

  2. The Markov process admits a consistent steady-state thermodynamic formalism

    NASA Astrophysics Data System (ADS)

    Peng, Liangrong; Zhu, Yi; Hong, Liu

    2018-01-01

    The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism was established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, corresponding formalisms of steady-state thermodynamics for the master equation and Fokker-Planck equation can be rigorously derived from it. To be concrete, we proved that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence can be established rigorously between the master equation and Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restricted to one-step jumps, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also converges to that for the master equation as the discretization step becomes smaller and smaller. Our analysis indicated that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of the underlying detailed models.
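The one-step-jump setting of point (3) can be illustrated numerically. Below is a minimal sketch, not taken from the paper: it builds the generator of a birth-death Markov jump chain with illustrative up/down rates and solves for the steady-state distribution, the object to which a steady-state thermodynamic formalism is attached.

```python
import numpy as np

def birth_death_generator(n, lam=1.0, mu=1.5):
    """Generator matrix Q for a one-step (birth-death) chain on {0,...,n-1}."""
    Q = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            Q[i, i + 1] = lam          # one-step jump up
        if i - 1 >= 0:
            Q[i, i - 1] = mu           # one-step jump down
        Q[i, i] = -Q[i].sum()          # rows of a generator sum to zero
    return Q

def stationary(Q):
    """Solve pi Q = 0 with sum(pi) = 1 via least squares."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # append normalization constraint
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

Q = birth_death_generator(50)
pi = stationary(Q)
```

For these rates the steady state satisfies detailed balance, so successive probabilities fall off geometrically with ratio lam/mu.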

  3. Improving the Dynamic Characteristics of Body-in-White Structure Using Structural Optimization

    PubMed Central

    Yahaya Rashid, Aizzat S.; Mohamed Haris, Sallehuddin; Alias, Anuar

    2014-01-01

    The dynamic behavior of a body-in-white (BIW) structure has significant influence on the noise, vibration, and harshness (NVH) and crashworthiness of a car. Therefore, by improving the dynamic characteristics of the BIW, problems and failures associated with resonance and fatigue can be prevented. The design objectives attempt to improve the existing torsion and bending modes by using structural optimization subjected to dynamic load, without compromising other factors such as the mass and stiffness of the structure. The natural frequency of the design was modified by identifying and reinforcing the structure at critical locations. These crucial points are first identified by topology optimization using mass and natural frequencies as the design variables. The individual components obtained from the analysis then go through a size optimization step to find the target thickness of each component. The thickness of the affected regions of the components is modified according to the analysis. The results of both optimization steps suggest several design modifications to achieve the target vibration specifications without compromising the stiffness of the structure. A method of combining both optimization approaches is proposed to improve the design modification process. PMID:25101312

  4. Integrating K-means Clustering with Kernel Density Estimation for the Development of a Conditional Weather Generation Downscaling Model

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Ho, C.; Chang, L.

    2011-12-01

    In recent decades, climate change driven by global warming has increased the occurrence frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely used tools, showing possible weather conditions under the CO2 emission scenarios defined by the IPCC. Because the study area of GCMs is the entire earth, their grid sizes are much larger than the basin scale. To bridge this gap, statistical downscaling techniques can transform regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques fall into three categories: transfer functions, weather generators and weather typing. The first two categories describe the relationships between weather factors and precipitation based, respectively, on deterministic algorithms, such as linear or nonlinear regression and ANNs, and on stochastic approaches, such as Markov chain theory and statistical distributions. Weather typing clusters the weather factors, which are high-dimensional continuous variables, into a limited number of discrete weather types. In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, with a weather generator, using kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps, a calibration step and a synthesis step. The calibration step has three sub-steps. First, weather factors, such as pressure, humidity and wind speed, obtained from NCEP, and the precipitation observed at rainfall stations were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types. Third, the Markov chain transition matrices and the conditional probability density function (PDF) of precipitation, approximated by kernel density estimation, were calculated for each weather type. In the synthesis step, 100 patterns of synthetic data are generated. First, the weather type of the n-th day is determined from the K-means clustering results; the associated transition matrix and PDF of that weather type are then used in the following sub-steps. Second, the precipitation condition, dry or wet, is synthesized from the transition matrix. If the synthesized condition is dry, the precipitation is zero; otherwise, the quantity is determined in the third sub-step. Third, the quantity of synthesized precipitation is drawn as a random variable from the PDF defined above. Synthesis efficiency is evaluated by comparing the monthly mean and monthly standard deviation curves of the historical precipitation data with those of the 100 patterns of synthetic data.
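The calibration and synthesis steps described above can be sketched as follows. This is a minimal illustration on synthetic data, assuming a simple k-means implementation, a two-state (dry/wet) transition matrix per weather type, and Gaussian-kernel resampling of wet-day amounts; none of the data or parameter values come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Plain k-means: assign to nearest center, recompute centers."""
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def calibrate(factors, precip, k=4, wet_thresh=0.1):
    """Cluster weather factors into types; per type, estimate a dry/wet
    transition matrix and collect wet-day amounts for the KDE."""
    labels, _ = kmeans(factors, k)
    wet = precip > wet_thresh
    trans, wet_amounts = [], []
    for j in range(k):
        T = np.ones((2, 2))                      # Laplace-smoothed counts
        for i in np.where(labels[:-1] == j)[0]:
            T[int(wet[i]), int(wet[i + 1])] += 1
        trans.append(T / T.sum(1, keepdims=True))
        wet_amounts.append(precip[(labels == j) & wet])
    return labels, trans, wet_amounts

def synthesize(labels, trans, wet_amounts, bw=0.5):
    """Walk the daily weather-type sequence, draw dry/wet from the type's
    transition row, then sample the amount from the type's Gaussian KDE."""
    out, state = [], 0
    for j in labels:
        state = int(rng.random() < trans[j][state, 1])
        if state and len(wet_amounts[j]):
            # Gaussian-KDE sample: resample a wet day, add kernel noise
            x = rng.choice(wet_amounts[j]) + rng.normal(0, bw)
            out.append(max(x, 0.0))
        else:
            out.append(0.0)
    return np.array(out)

# toy record: 2 weather factors and precipitation over 1000 days
factors = rng.normal(size=(1000, 2))
precip = np.where(rng.random(1000) < 0.4, rng.gamma(2.0, 3.0, 1000), 0.0)
labels, trans, amounts = calibrate(factors, precip)
series = synthesize(labels, trans, amounts)
```

Repeating the `synthesize` call 100 times would give the 100 synthetic patterns whose monthly statistics are compared against the historical record.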

  5. Control Software for Piezo Stepping Actuators

    NASA Technical Reports Server (NTRS)

    Shields, Joel F.

    2013-01-01

    A control system has been developed for the Space Interferometer Mission (SIM) piezo stepping actuator. Piezo stepping actuators are novel because they offer extreme dynamic range (centimeter stroke with nanometer resolution) with power, thermal, mass, and volume advantages over existing motorized actuation technology. These advantages come with the added benefit of greatly reduced complexity in the support electronics. The piezo stepping actuator consists of three fully redundant sets of piezoelectric transducers (PZTs), two sets of brake PZTs, and one set of extension PZTs. These PZTs are used to grasp and move a runner attached to the optic to be moved. By proper cycling of the two brake and extension PZTs, both forward and backward moves of the runner can be achieved. Each brake can be configured for either a power-on or power-off state. For SIM, the brakes and gate of the mechanism are configured in such a manner that, at the end of the step, the actuator is in a parked or power-off state. The control software uses asynchronous sampling of an optical encoder to monitor the position of the runner. These samples are timed to coincide with the end of the previous move, which may consist of a variable number of steps. This sampling technique linearizes the device by avoiding input saturation of the actuator and makes latencies of the plant vanish. The software also estimates, in real time, the scale factor of the device and a disturbance caused by cycling of the brakes. These estimates are used to actively cancel the brake disturbance. The control system also includes feedback and feedforward elements that regulate the position of the runner to a given reference position. Convergence times for small- and medium-sized reference positions (less than 200 microns) to within 10 nanometers can be achieved in under 10 seconds. Convergence times for large moves (greater than 1 millimeter) are limited by the step rate.
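The brake/extension cycling described above can be sketched as a simple open-loop state machine. The step size, function names, and phase ordering below are illustrative assumptions, not the SIM flight software:

```python
STEP_UM = 0.5  # assumed extension stroke per cycle, micrometers

def step_cycle(position, forward=True):
    """One inchworm-style cycle of the brake and extension PZTs.
    The phase ordering determines the direction of travel:
      1. engage brake A, release brake B
      2. extend (forward) or retract (backward) the extension PZT
      3. engage brake B, release brake A
      4. relax the extension PZT; the runner has moved one step
    """
    return position + (STEP_UM if forward else -STEP_UM)

def move_to(position, target):
    """Open-loop move: repeat cycles until within one step of the target."""
    steps = 0
    while abs(target - position) >= STEP_UM:
        position = step_cycle(position, forward=target > position)
        steps += 1
    return position, steps

pos, n = move_to(0.0, 10.0)   # 20 cycles of 0.5 um each
```

The real controller closes this loop with encoder samples and the estimated scale factor and brake disturbance; this sketch only shows the stepping kinematics.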

  6. Association Between Short-Term Systolic Blood Pressure Variability and Carotid Intima-Media Thickness in ELSA-Brasil Baseline.

    PubMed

    Ribeiro, Adèle H; Lotufo, Paulo A; Fujita, André; Goulart, Alessandra C; Chor, Dora; Mill, José G; Bensenor, Isabela M; Santos, Itamar S

    2017-10-01

    Blood pressure (BP) is associated with carotid intima-media thickness (CIMT), but few studies have explored the association between BP variability and CIMT. We aimed to investigate this association in the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil) baseline. We analyzed data from 7,215 participants (56.0% women) without overt cardiovascular disease (CVD) or antihypertensive use. We included 10 BP readings in varying positions during a 6-hour visit. We defined BP variability as the SD of these readings. We performed a 2-step analysis. We first linearly regressed the CIMT values on main and all-order interaction effects of the variables age, sex, body mass index, race, diabetes diagnosis, dyslipidemia diagnosis, family history of premature CVD, smoking status, and ELSA-Brasil site, and calculated the residuals (residual CIMT). We used partial least square path analysis to investigate whether residual CIMT was associated with BP central tendency and BP variability. Systolic BP (SBP) variability was significantly associated with residual CIMT in models including the entire sample (path coefficient [PC]: 0.046; P < 0.001), and in women (PC: 0.046; P = 0.007) but not in men (PC: 0.037; P = 0.09). This loss of significance was probably due to the smaller subsample size, as PCs were not significantly different according to sex. We found a small but significant association between SBP variability and CIMT values. This was additive to the association between SBP central tendency and CIMT values, supporting a role for high short-term SBP variability in atherosclerosis. © American Journal of Hypertension, Ltd 2017. All rights reserved. For Permissions, please email: journals.permissions@oup.com
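The two-step analysis can be sketched on synthetic data. This simplification uses main effects only and a plain correlation in place of the study's all-order interactions and partial least square path analysis; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 500
covars = rng.normal(size=(n, 4))                 # stand-ins for age, BMI, etc.
sbp_readings = 120 + rng.normal(0, 8, (n, 10))   # 10 SBP readings per subject
sbp_sd = sbp_readings.std(axis=1)                # short-term SBP variability
# synthetic CIMT with a small built-in variability effect
cimt = covars @ [0.3, 0.2, 0.1, 0.05] + 0.04 * sbp_sd + rng.normal(0, 0.1, n)

# Step 1: regress CIMT on the covariates and keep the residuals
X = np.column_stack([np.ones(n), covars])        # intercept + main effects
beta, *_ = np.linalg.lstsq(X, cimt, rcond=None)
residual_cimt = cimt - X @ beta

# Step 2: associate residual CIMT with SBP variability
r = np.corrcoef(sbp_sd, residual_cimt)[0, 1]
```

A positive `r` here plays the role of the positive path coefficient reported for SBP variability in the study.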

  7. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Applicability of corrosion control treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper...

  8. Effect of time step size and turbulence model on the open water hydrodynamic performance prediction of contra-rotating propellers

    NASA Astrophysics Data System (ADS)

    Wang, Zhan-zhi; Xiong, Ying

    2013-04-01

    A growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise and low hull vibration. Compared with the single-screw system, open water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers using RANS with a sliding mesh method, considering the effects of computational time step size and turbulence model. A validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Comparison with the experimental data shows that RANS with the sliding mesh method and the SST k-ω turbulence model gives good precision in the open water performance prediction of contra-rotating propellers, and that a small time step size improves accuracy for CRPs with the same number of blades on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.

  9. Real-time laser cladding control with variable spot size

    NASA Astrophysics Data System (ADS)

    Arias, J. L.; Montealegre, M. A.; Vidal, F.; Rodríguez, J.; Mann, S.; Abels, P.; Motmans, F.

    2014-03-01

    Laser cladding has been used in different industries to improve surface properties or to reconstruct damaged pieces. To cover areas considerably larger than the diameter of the laser beam, successive partially overlapping tracks are deposited. With no control over the process variables, this leads to a temperature increase, which can degrade the mechanical properties of the clad material. Commonly, the process is monitored and controlled by a PC using cameras, but this control suffers from a lack of speed caused by the image processing step. The aim of this work is to design and develop an FPGA-based laser cladding control system. This system is intended to modify the laser beam power according to the melt pool width, which is measured using a CMOS camera. All the control and monitoring tasks are carried out by an FPGA, taking advantage of its abundance of resources and speed of operation. The robustness of the image processing algorithm is assessed, as well as the control system performance. Laser power is decreased as substrate temperature increases, thus maintaining a constant clad width. This FPGA-based control system is integrated in an adaptive laser cladding system, which also includes an adaptive optical system that controls the laser focus distance on the fly. The whole system constitutes an efficient instrument for repairing parts with complex geometries and for coating selective surfaces. This is a significant step toward full industrial implementation of an automated laser cladding process.

  10. Step back! Niche dynamics in cave-dwelling predators

    NASA Astrophysics Data System (ADS)

    Mammola, Stefano; Piano, Elena; Isaia, Marco

    2016-08-01

    The geometry of Hutchinson's hypervolume derives from multiple selective pressures defined, on one hand, by the physiological tolerance of the species and, on the other, by intra- and interspecific competition. The quantification of these evolutionary forces is essential for understanding the coexistence of predators in light of competitive exclusion dynamics. We address this topic by investigating the ecological niche of two medium-sized troglophile spiders (Meta menardi and Pimoa graphitica). Over one year, we surveyed several populations in four subterranean sites in the Western Italian Alps, monitoring monthly their spatial and temporal dynamics and the associated physical and ecological variables. We assessed competition between the two species by means of multiple regression techniques and by evaluating the intersection between their multidimensional hypervolumes. We detected a remarkable overlap between the microclimatic and trophic niches of M. menardi and P. graphitica; however, the former, being larger in size, proved the better competitor in proximity to the cave entrance, causing the latter to readjust its spatial niche toward the inner part, where prey availability is scarcer (the "step back effect"). In parallel with the slight variations in subterranean microclimatic conditions, the niches of the two species were also found to be seasonally dependent, varying over the year. With this work, we aim to provide new insights into the relationships among predators, demonstrating that energy-poor environments such as caves maintain the potential for diversification of predators via niche differentiation and serve as useful models for theoretical ecological studies.

  11. Supercritical Fluid Technologies to Fabricate Proliposomes.

    PubMed

    Falconer, James R; Svirskis, Darren; Adil, Ali A; Wu, Zimei

    2015-01-01

    Proliposomes are stable drug carrier systems designed to form liposomes upon addition of an aqueous phase. In this review, current trends in the use of supercritical fluid (SCF) technologies to prepare proliposomes are discussed. SCF methods are used in pharmaceutical research and industry to address limitations associated with conventional methods of pro/liposome fabrication. The SCF solvent methods of proliposome preparation are eco-friendly (known as green technology) and, along with the SCF anti-solvent methods, could be advantageous over conventional methods, enabling better design of particle morphology (size and shape). The major hurdles of SCF methods include poor scalability to industrial manufacturing, which may result in variable particle characteristics. In the case of SCF anti-solvent methods, another hurdle is the reliance on organic solvents; however, the amount of solvent required is typically less than that used by the conventional methods. A further hurdle is that most SCF methods have complicated manufacturing processes, although once the setup has been completed, SCF technologies offer a single-step process in the preparation of proliposomes compared to the multiple steps required by many other methods. Furthermore, there is limited research into how proliposomes will be converted into liposomes for the end-user, and how such a product can be prepared reproducibly in terms of vesicle size and drug loading. These hurdles must be overcome and, with more research, SCF methods, especially where the SCF acts as a solvent, have the potential to offer a strong alternative to the conventional methods of preparing proliposomes.

  12. Should we consider steps with variable height for a safer stair negotiation in older adults?

    PubMed

    Kunzler, Marcos R; da Rocha, Emmanuel S; Dos Santos, Christielen S; Ceccon, Fernando G; Priario, Liver A; Carpes, Felipe P

    2018-01-01

    The effects of exercise on foot clearances are important: in older adults, variations in foot clearances during walking may lead to a fall, but there is a lack of information concerning stair negotiation in older adults. Whether foot clearances between the steps of a staircase change after exercise in older adults remains unknown. The aim was to determine differences in clearances when older adults negotiate different steps of a staircase before and after a session of aerobic exercise. Kinematic data from 30 older adults were acquired and the toe and heel clearances were determined for each step. Clearances were compared between the steps. Smaller clearances were found at the highest step during both ascending and descending, which was not changed by exercise. Smaller clearances suggest a higher risk of tripping at the top of the staircase, regardless of exercise. A smaller step at the top of a short flight of stairs could reduce the chances of tripping in older adults. This suggests that steps with variable height could make stair negotiation safer in older adults, a hypothesis that should be tested in further studies.

  13. Species distribution model transferability and model grain size - finer may not always be better.

    PubMed

    Manzoor, Syed Amir; Griffiths, Geoffrey; Lukac, Martin

    2018-05-08

    Species distribution models have been used to predict the distribution of invasive species for conservation planning. Understanding the spatial transferability of niche predictions is critical to promote species-habitat conservation and to forecast areas vulnerable to invasion. The grain size of predictor variables is an important factor affecting the accuracy and transferability of species distribution models. The choice of grain size often depends on the type of predictor variables used, and the selection of predictors sometimes relies on data availability. This study employed the MAXENT species distribution model to investigate the effect of grain size on model transferability for an invasive plant species. We modelled the distribution of Rhododendron ponticum in Wales, U.K. and tested model performance and transferability at varying grain sizes (50 m, 300 m, and 1 km). MAXENT-based models are sensitive to grain size and the selection of variables. We found that over-reliance on the commonly used bioclimatic variables may lead to less accurate models, as it often compromises the finer grain size of biophysical variables, which may be more important determinants of species distribution at small spatial scales. Model accuracy is likely to increase with decreasing grain size. However, successful model transferability may require optimization of the model grain size.

  14. The inter-rater reliability of estimating the size of burns from various burn area chart drawings.

    PubMed

    Wachtel, T L; Berry, C C; Wachtel, E E; Frank, H A

    2000-03-01

    The accuracy and variability of burn size calculations using four Lund and Browder charts currently in clinical use and two Rule of Nine's diagrams were evaluated. The study showed that variability in estimation increased with burn size initially, plateaued in large burns and then decreased slightly in extensive burns. The Rule of Nine's technique often overestimates the burn size and is more variable, but can be performed somewhat faster than the Lund and Browder method. More burn experience leads to less variability in burn area chart drawing estimates. Irregularly shaped burns and burns on the trunk and thighs had greater variability than less irregularly shaped burns or burns on more defined anatomical parts of the body.

  15. One-step generation of continuous-variable quadripartite cluster states in a circuit QED system

    NASA Astrophysics Data System (ADS)

    Yang, Zhi-peng; Li, Zhen; Ma, Sheng-li; Li, Fu-li

    2017-07-01

    We propose a dissipative scheme for one-step generation of continuous-variable quadripartite cluster states in a circuit QED setup consisting of four superconducting coplanar waveguide resonators and a gap-tunable superconducting flux qubit. With external driving fields to adjust the desired qubit-resonator and resonator-resonator interactions, we show that continuous-variable quadripartite cluster states of the four resonators can be generated with the assistance of energy relaxation of the qubit. By comparison with the previous proposals, the distinct advantage of our scheme is that only one step of quantum operation is needed to realize the quantum state engineering. This makes our scheme simpler and more feasible in experiment. Our result may have useful application for implementing quantum computation in solid-state circuit QED systems.

  16. DNA bipedal motor walking dynamics: an experimental and theoretical study of the dependency on step size

    PubMed Central

    Khara, Dinesh C; Berger, Yaron; Ouldridge, Thomas E

    2018-01-01

    Abstract We present a detailed coarse-grained computer simulation and single molecule fluorescence study of the walking dynamics and mechanism of a DNA bipedal motor striding on a DNA origami. In particular, we study the dependency of the walking efficiency and stepping kinetics on step size. The simulations accurately capture and explain three different experimental observations. These include a description of the maximum possible step size, a decrease in the walking efficiency over short distances and a dependency of the efficiency on the walking direction with respect to the origami track. The former two observations were not expected and are non-trivial. Based on this study, we suggest three design modifications to improve future DNA walkers. Our study demonstrates the ability of the oxDNA model to resolve the dynamics of complex DNA machines, and its usefulness as an engineering tool for the design of DNA machines that operate in the three spatial dimensions. PMID:29294083

  17. Two-step size reduction and post-washing of steam exploded corn stover improving simultaneous saccharification and fermentation for ethanol production.

    PubMed

    Liu, Zhi-Hua; Chen, Hong-Zhang

    2017-01-01

    The simultaneous saccharification and fermentation (SSF) of corn stover biomass for ethanol production was performed by integrating steam explosion (SE) pretreatment, hydrolysis and fermentation. Higher SE pretreatment severity and two-step size reduction increased the specific surface area, swollen volume and water holding capacity of steam exploded corn stover (SECS), and hence facilitated the efficiency of hydrolysis and fermentation. Ethanol production and yield in SSF increased as particle size decreased and with post-washing of the SECS prior to fermentation to remove inhibitors. Under SE conditions of 1.5 MPa and 9 min using a 2.0 cm particle size, glucan recovery and enzymatic conversion to glucose were 86.2% and 87.2%, respectively. The ethanol concentration and yield were 45.0 g/L and 85.6%, respectively. With this two-step size reduction and post-washing strategy, the water utilization efficiency, sugar recovery and conversion, and ethanol concentration and yield of the SSF process were improved. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Exploring the full natural variability of eruption sizes within probabilistic hazard assessment of tephra dispersal

    NASA Astrophysics Data System (ADS)

    Selva, Jacopo; Sandri, Laura; Costa, Antonio; Tonini, Roberto; Folch, Arnau; Macedonio, Giovanni

    2014-05-01

    The intrinsic uncertainty and variability associated to the size of next eruption strongly affects short to long-term tephra hazard assessment. Often, emergency plans are established accounting for the effects of one or a few representative scenarios (meant as a specific combination of eruptive size and vent position), selected with subjective criteria. On the other hand, probabilistic hazard assessments (PHA) consistently explore the natural variability of such scenarios. PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping possible eruption sizes and vent positions in classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA results from combining simulations considering different volcanological and meteorological conditions through a weight given by their specific probability of occurrence. However, volcanological parameters, such as erupted mass, eruption column height and duration, bulk granulometry, fraction of aggregates, typically encompass a wide range of values. Because of such a variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. Here we propose a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific Probability Density Functions, and meteorological and volcanological inputs are chosen by using a stratified sampling method. This procedure allows avoiding the bias introduced by selecting single representative scenarios and thus neglecting most of the intrinsic eruptive variability. When considering within-size-class variability, attention must be paid to appropriately weight events falling within the same size class. 
While a uniform weight for all events belonging to a size class is the most straightforward choice, it implies a strong dependence on the thresholds dividing classes: under this choice, the largest event of a size class has a much larger weight than the smallest event of the subsequent size class. To overcome this problem, we propose an innovative solution that smoothly links the weight variability within each size class to the variability among the size classes through a common power law, while simultaneously respecting the probability of the different size classes conditional on the occurrence of an eruption. Embedding this procedure into the Bayesian Event Tree scheme enables tephra fall PHA, quantified through hazard curves and maps that are readily applicable in planning risk mitigation actions, together with the quantification of its epistemic uncertainties. As examples, we analyze long-term tephra fall PHA at Vesuvius and Campi Flegrei. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used to explore different meteorological conditions. The results clearly show that PHA accounting for the whole natural variability differs significantly from that based on representative scenarios, as is common practice in volcanic hazard assessment.
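The power-law intra-class weighting can be sketched in a few lines of Python. The class edges (erupted mass), class probabilities, exponent, and sample counts below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical size-class edges (erupted mass, kg) and class probabilities
# conditional on an eruption occurring (illustrative values only).
class_edges = np.array([1e9, 1e10, 1e11, 1e12])
class_probs = np.array([0.6, 0.3, 0.1])
b = 1.0           # assumed power-law exponent for the intra-class weights
n_per_class = 5   # stratified sampling: fixed number of events per class

masses, weights = [], []
for (lo, hi), p_class in zip(zip(class_edges[:-1], class_edges[1:]), class_probs):
    # Sample event sizes log-uniformly within the class (stratified sampling).
    m = 10 ** rng.uniform(np.log10(lo), np.log10(hi), n_per_class)
    # A common power law makes the weight vary smoothly with event size, so
    # the largest event of one class and the smallest of the next receive
    # comparable weights instead of a jump at the class threshold.
    w = m ** -b
    # Renormalizing per class preserves the class-conditional probability.
    w *= p_class / w.sum()
    masses.append(m)
    weights.append(w)

masses, weights = np.concatenate(masses), np.concatenate(weights)
print(weights.sum())  # total weight is 1 by construction
```

The renormalization step is what lets the scheme "respect the probability of different size classes" while the shared exponent removes the discontinuity at class boundaries.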

  19. The problem of predicting the size distribution of sediment supplied by hillslopes to rivers

    NASA Astrophysics Data System (ADS)

    Sklar, Leonard S.; Riebe, Clifford S.; Marshall, Jill A.; Genetti, Jennifer; Leclere, Shirin; Lukens, Claire L.; Merces, Viviane

    2017-01-01

    Sediments link hillslopes to river channels. The size of sediments entering channels is a key control on river morphodynamics across a range of scales, from channel response to human land use to landscape response to changes in tectonic and climatic forcing. However, very little is known about what controls the size distribution of particles eroded from bedrock on hillslopes, and how particle sizes evolve before sediments are delivered to channels. Here we take the first steps toward building a geomorphic transport law to predict the size distribution of particles produced on hillslopes and supplied to channels. We begin by identifying independent variables that can be used to quantify the influence of five key boundary conditions: lithology, climate, life, erosion rate, and topography, which together determine the suite of geomorphic processes that produce and transport sediments on hillslopes. We then consider the physical and chemical mechanisms that determine the initial size distribution of rock fragments supplied to the hillslope weathering system, and the duration and intensity of weathering experienced by particles on their journey from bedrock to the channel. We propose a simple modeling framework with two components. First, the initial rock fragment sizes are set by the distribution of spacing between fractures in unweathered rock, which is influenced by stresses encountered by rock during exhumation and by rock resistance to fracture propagation. That initial size distribution is then transformed by a weathering function that captures the influence of climate and mineralogy on chemical weathering potential, and the influence of erosion rate and soil depth on residence time and the extent of particle size reduction. 
Model applications illustrate how spatial variation in weathering regime can lead to bimodal size distributions and downstream fining of channel sediment by down-valley fining of hillslope sediment supply, two examples of hillslope control on river sediment size. Overall, this work highlights the rich opportunities for future research into the controls on the size of sediments produced on hillslopes and delivered to channels.
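As a rough illustration of the two-component framework, the following sketch draws initial fragment sizes from an assumed exponential fracture-spacing distribution and applies a hypothetical exponential weathering function; all names and parameter values are assumptions for illustration, not the authors':

```python
import numpy as np

rng = np.random.default_rng(1)

# Initial fragment sizes set by fracture spacing; an exponential spacing
# distribution is an assumed idealization, not the paper's choice.
mean_spacing_m = 0.5
d0 = rng.exponential(mean_spacing_m, 10_000)

def weathered_size(d0, residence_time_yr, rate_per_yr):
    """Hypothetical weathering function: exponential size reduction with
    residence time, standing in for the climate/mineralogy-dependent term."""
    return d0 * np.exp(-rate_per_yr * residence_time_yr)

# Slower erosion -> longer residence in the weathering zone -> finer supply.
d_fast_erosion = weathered_size(d0, residence_time_yr=1e3, rate_per_yr=1e-4)
d_slow_erosion = weathered_size(d0, residence_time_yr=1e4, rate_per_yr=1e-4)
print(d0.mean(), d_fast_erosion.mean(), d_slow_erosion.mean())
```

Spatially varying the residence time or weathering rate in such a model is one way down-valley fining of hillslope supply, as described above, could emerge.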

  20. Choice of Variables and Preconditioning for Time Dependent Problems

    NASA Technical Reports Server (NTRS)

Turkel, Eli; Vatsa, Veer N.

    2003-01-01

    We consider the use of low speed preconditioning for time dependent problems. These are solved using a dual time step approach. We consider the effect of this dual time step on the parameter of the low speed preconditioning. In addition, we compare the use of two sets of variables, conservation and primitive variables, to solve the system. We show the effect of these choices on both the convergence to a steady state and the accuracy of the numerical solutions for low Mach number steady state and time dependent flows.

  1. Optimized spray drying process for preparation of one-step calcium-alginate gel microspheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popeski-Dimovski, Riste

Calcium-alginate microparticles have been used extensively in drug delivery systems. Here we establish a one-step method for the preparation of internally gelated microparticles with spherical shape and narrow size distribution. We use four types of alginate with different G/M ratios and molar weights. The size of the particles is measured using light diffraction and scanning electron microscopy. Measurements showed that with this method, microparticles with a size distribution around 4 micrometers can be prepared, and SEM imaging showed that the particles are spherical in shape.

  2. Shear Melting of a Colloidal Glass

    NASA Astrophysics Data System (ADS)

    Eisenmann, Christoph; Kim, Chanjoong; Mattsson, Johan; Weitz, David A.

    2010-01-01

We use confocal microscopy to explore shear melting of colloidal glasses, which occurs at strains of ~0.08, coinciding with a strongly non-Gaussian step size distribution. For larger strains, the particle mean square displacement increases linearly with strain and the step size distribution becomes Gaussian. The effective diffusion coefficient varies approximately linearly with shear rate, consistent with a modified Stokes-Einstein relationship in which thermal energy is replaced by shear energy and the length scale is set by the size of cooperatively moving regions consisting of ~3 particles.
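The modified Stokes-Einstein relationship can be written down directly; the sketch below uses assumed, order-of-magnitude parameter values and an illustrative form of the shear energy, only to show that the resulting effective diffusion coefficient is linear in shear rate:

```python
import numpy as np

eta = 1e-3             # solvent viscosity in Pa*s (assumed value)
a = 1.0e-6             # colloid radius in m (assumed value)
xi = 3 ** (1 / 3) * a  # length scale of a cooperative region of ~3 particles

def d_eff(shear_rate):
    """Modified Stokes-Einstein estimate: thermal energy k_B*T is replaced
    by a shear energy ~ eta * shear_rate * xi**3 (illustrative form)."""
    shear_energy = eta * shear_rate * xi ** 3
    return shear_energy / (6 * np.pi * eta * xi)

# The viscosity cancels: D_eff = shear_rate * xi**2 / (6*pi), i.e. linear
# in shear rate, consistent with the observation above.
print(d_eff(1.0), d_eff(2.0))
```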

  3. A reduced successive quadratic programming strategy for errors-in-variables estimation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.

Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors-in-variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and the degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.
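The elimination idea at the heart of EVM can be illustrated on a toy orthogonal-regression problem; this is a hedged sketch of the general principle, not the paper's reduced SQP algorithm. For a fixed slope, the fitted coordinates of each data point are eliminated analytically, leaving a low-dimensional search over the model parameter alone:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy EVM data: both x and y measurements carry error (hypothetical model).
a_true = 2.0
x_true = np.linspace(1.0, 10.0, 50)
xm = x_true + rng.normal(0.0, 0.1, 50)            # measured x
ym = a_true * x_true + rng.normal(0.0, 0.1, 50)   # measured y

def evm_objective(a):
    """For fixed slope a, the fitted (x, y) of each data point is its
    orthogonal projection onto the line y = a*x; eliminating it analytically
    leaves the squared distance to the line. All variables but the model
    parameter are thus removed, data point by data point."""
    return np.sum((ym - a * xm) ** 2 / (1.0 + a ** 2))

# A 1-D search over the single remaining parameter plays the role of the
# coordination step in this toy setting.
grid = np.linspace(1.5, 2.5, 2001)
a_hat = grid[np.argmin([evm_objective(a) for a in grid])]
print(a_hat)
```

The per-point elimination here is what keeps the work linear in the number of data points, mirroring the per-data-set decomposition described in the abstract.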

  4. Increased gait variability may not imply impaired stride-to-stride control of walking in healthy older adults: Winner: 2013 Gait and Clinical Movement Analysis Society Best Paper Award.

    PubMed

    Dingwell, Jonathan B; Salinas, Mandy M; Cusumano, Joseph P

    2017-06-01

Older adults exhibit increased gait variability that is associated with fall history and predicts future falls. It is not known to what extent this increased variability results from increased physiological noise versus a decreased ability to regulate walking movements. To "walk", a person must move a finite distance in finite time, making stride length (Ln) and stride time (Tn) the fundamental stride variables to define forward walking. Multiple age-related physiological changes increase neuromotor noise, increasing gait variability. If older adults also alter how they regulate their stride variables, this could further exacerbate that variability. We previously developed a Goal Equivalent Manifold (GEM) computational framework specifically to separate these causes of variability. Here, we apply this framework to identify how both young and high-functioning healthy older adults regulate stepping from each stride to the next. Healthy older adults exhibited increased gait variability, independent of walking speed. However, despite this, these healthy older adults also concurrently exhibited no differences (all p>0.50) from young adults either in how their stride variability was distributed relative to the GEM or in how they regulated, from stride to stride, either their basic stepping variables or deviations relative to the GEM. Using a validated computational model, we found these experimental findings were consistent with increased gait variability arising solely from increased neuromotor noise, and not from changes in stride-to-stride control. Thus, age-related increased gait variability likely precedes impaired stepping control. This suggests these changes may in turn precede increased fall risk. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Step-down versus outpatient psychotherapeutic treatment for personality disorders: 6-year follow-up of the Ullevål personality project

    PubMed Central

    2014-01-01

    Background Although psychotherapy is considered the treatment of choice for patients with personality disorders (PDs), there is no consensus about the optimal level of care for this group of patients. This study reports the results from the 6-year follow-up of the Ullevål Personality Project (UPP), a randomized clinical trial comparing outpatient individual psychotherapy with a long-term step-down treatment program that included a short-term day hospital treatment followed by combined group and individual psychotherapy. Methods The UPP included 113 patients with PDs. Outcome was evaluated after 8 months, 18 months, 3 years and 6 years and was based on a wide range of clinical measures, such as psychosocial functioning, interpersonal problems, symptom severity, and axis I and II diagnoses. Results At the 6-year follow-up, there were no statistically significant differences in outcome between the treatment groups. Effect sizes ranged from medium to large for all outcome variables in both treatment arms. However, patients in the outpatient group had a marked decline in psychosocial functioning during the period between the 3- and 6-year follow-ups; while psychosocial functioning continued to improve in the step-down group during the same period. This difference between groups was statistically significant. Conclusions The findings suggest that both hospital-based long-term step-down treatment and long-term outpatient individual psychotherapy may improve symptoms and psychosocial functioning in poorly functioning PD patients. Social and interpersonal functioning continued to improve in the step-down group during the post-treatment phase, indicating that longer-term changes were stimulated during treatment. Trial registration NCT00378248. PMID:24758722

  6. TRUST84. Sat-Unsat Flow in Deformable Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narasimhan, T.N.

    1984-11-01

TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control the maximum change to potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.
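The automatic time-step control described above can be caricatured by a simple controller that grows or shrinks the step based on the largest potential change; the factors and bounds below are arbitrary illustrative choices, not TRUST84's actual logic:

```python
def next_time_step(dt, max_change, target_change, dt_min=1e-6, dt_max=1e3):
    """Illustrative automatic time-step controller in the spirit described
    above; factors and bounds are assumptions, not TRUST84's actual rules."""
    if max_change > target_change:
        dt *= 0.5     # potential changed too much: shrink the step
    elif max_change < 0.5 * target_change:
        dt *= 1.5     # changes comfortably small: grow the step
    return min(max(dt, dt_min), dt_max)

dt = 1.0
dt = next_time_step(dt, max_change=0.2, target_change=0.1)   # shrinks to 0.5
dt = next_time_step(dt, max_change=0.01, target_change=0.1)  # grows to 0.75
print(dt)
```

A real code would also couple this to the iteration count and output schedule, as the abstract notes.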

  7. The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice

    The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…

  8. Impurity effects in crystal growth from solutions: Steady states, transients and step bunch motion

    NASA Astrophysics Data System (ADS)

    Ranganathan, Madhav; Weeks, John D.

    2014-05-01

    We analyze a recently formulated model in which adsorbed impurities impede the motion of steps in crystals grown from solutions, while moving steps can remove or deactivate adjacent impurities. In this model, the chemical potential change of an atom on incorporation/desorption to/from a step is calculated for different step configurations and used in the dynamical simulation of step motion. The crucial difference between solution growth and vapor growth is related to the dependence of the driving force for growth of the main component on the size of the terrace in front of the step. This model has features resembling experiments in solution growth, which yields a dead zone with essentially no growth at low supersaturation and the motion of large coherent step bunches at larger supersaturation. The transient behavior shows a regime wherein steps bunch together and move coherently as the bunch size increases. The behavior at large line tension is reminiscent of the kink-poisoning mechanism of impurities observed in calcite growth. Our model unifies different impurity models and gives a picture of nonequilibrium dynamics that includes both steady states and time dependent behavior and shows similarities with models of disordered systems and the pinning/depinning transition.

  9. A continuum state variable theory to model the size-dependent surface energy of nanostructures.

    PubMed

    Jamshidian, Mostafa; Thamburaja, Prakash; Rabczuk, Timon

    2015-10-14

We propose a continuum-based state variable theory to quantify the excess surface free energy density throughout a nanostructure. The size-dependent effect exhibited by nanoplates and spherical nanoparticles, i.e. the reduction of surface energy with decreasing nanostructure size, is well captured by our continuum state variable theory. Our constitutive theory is also able to predict the decreasing energetic difference between the surface and interior (bulk) portions of a nanostructure as the nanostructure size decreases.

  10. Dynamic Modeling of the Main Blow in Basic Oxygen Steelmaking Using Measured Step Responses

    NASA Astrophysics Data System (ADS)

    Kattenbelt, Carolien; Roffel, B.

    2008-10-01

    In the control and optimization of basic oxygen steelmaking, it is important to have an understanding of the influence of control variables on the process. However, important process variables such as the composition of the steel and slag cannot be measured continuously. The decarburization rate and the accumulation rate of oxygen, which can be derived from the generally measured waste gas flow and composition, are an indication of changes in steel and slag composition. The influence of the control variables on the decarburization rate and the accumulation rate of oxygen can best be determined in the main blow period. In this article, the measured step responses of the decarburization rate and the accumulation rate of oxygen to step changes in the oxygen blowing rate, lance height, and the addition rate of iron ore during the main blow are presented. These measured step responses are subsequently used to develop a dynamic model for the main blow. The model consists of an iron oxide and a carbon balance and an additional equation describing the influence of the lance height and the oxygen blowing rate on the decarburization rate. With this simple dynamic model, the measured step responses can be explained satisfactorily.

  11. Charge-regularized swelling kinetics of polyelectrolyte gels: Elasticity and diffusion

    NASA Astrophysics Data System (ADS)

    Sen, Swati; Kundagrami, Arindam

    2017-11-01

We apply a recently developed method [S. Sen and A. Kundagrami, J. Chem. Phys. 143, 224904 (2015)], using a phenomenological expression of osmotic stress as a function of polymer and charge densities, hydrophobicity, and network elasticity, to the swelling of spherical polyelectrolyte (PE) gels with fixed and variable charges in a salt-free solvent. This expression of stress is used in the equation of motion of the swelling kinetics of spherical PE gels to numerically calculate the spatial profiles of the polymer and free-ion densities at different time steps, and the time evolution of the size of the gel. We compare the profiles of the same variables obtained from the classical linear theory of elasticity and quantitatively estimate the bulk modulus of the PE gel. Further, we obtain an analytical expression for the elastic modulus from the linearized expression of stress (in the small-deformation limit). We find that the estimated bulk modulus of the PE gel decreases as its effective charge increases, for a fixed degree of deformation during swelling. Finally, we match the gel-front locations with experimental data from measurements of charged reversible addition-fragmentation chain transfer gels, showing an increase in gel size with charge, and do the same for PNIPAM (uncharged) and imidazolium-based (charged) minigels, which specifically confirms that the gel modulus decreases as the charge increases. The agreement between experimental and theoretical results confirms general diffusive behaviour for the swelling of PE gels, with a bulk modulus that decreases with increasing degree of ionization (charge). The new formalism also captures large deformations with significant variation of the charge content of the gel. PE gels with large deformation but the same initial size are found to swell faster with higher charge.

  12. Complete sequence of Tvv1, a family of Ty 1 copia-like retrotransposons of Vitis vinifera L., reconstituted by chromosome walking.

    PubMed

    Pelsy, F.; Merdinoglu, D.

    2002-09-01

A chromosome-walking strategy was used to sequence and characterize retrotransposons in the grapevine genome. The reconstitution of a family of retroelements, named Tvv1, was achieved in six successive steps. These elements share a single, highly conserved open reading frame 4,153 nucleotides long, putatively encoding the gag, pro, int, rt and rh proteins. Comparison of the coding potential of the Tvv1 open reading frame with those of Drosophila copia and tobacco Tnt1 revealed that Tvv1 is closely related to Ty 1 copia-like retrotransposons. A highly variable untranslated leader region upstream of the open reading frame allowed us to differentiate Tvv1 variants, which represent a family of at least 28 copies of varying sizes. This internal region is flanked by two long terminal repeats in direct orientation, sized between 149 and 157 bp. Among elements theoretically sized from 4,970 to 5,550 bp, we describe the full-length sequence of a reference element, Tvv1-1, 5,343 nucleotides long. The full-length sequence of Tvv1-1 shows 53.3% identity to pea PDR1. In addition, both elements contain long terminal repeats of nearly the same size in which the U5 region could be entirely absent. We therefore assume that Tvv1 and PDR1 could constitute a particular class of short-LTR retroelements.

  13. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    ERIC Educational Resources Information Center

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  14. Childhood malnutrition in Egypt using geoadditive Gaussian and latent variable models.

    PubMed

    Khatab, Khaled

    2010-04-01

Major progress has been made over the last 30 years in reducing the prevalence of malnutrition among children less than 5 years of age in developing countries. However, approximately 27% of children under the age of 5 in these countries are still malnourished. This work focuses on childhood malnutrition in one of the largest developing countries, Egypt. The study examined the association between bio-demographic and socioeconomic determinants and malnutrition in children less than 5 years of age, using the 2003 Demographic and Health Survey data for Egypt. In a first step, we use separate geoadditive Gaussian models with the continuous response variables stunting (height-for-age), underweight (weight-for-age), and wasting (weight-for-height) as indicators of nutritional status in our case study. In a second step, based on the results of the first, we apply a geoadditive Gaussian latent variable model for continuous indicators, in which the three measurements of the malnutrition status of children are taken as indicators for the latent variable "nutritional status".

  15. Study of CdTe quantum dots grown using a two-step annealing method

    NASA Astrophysics Data System (ADS)

    Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2006-02-01

High size dispersion, large average quantum dot radius, and low volume ratio have been major hurdles in the development of quantum dot based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and a theoretical model of the absorption spectra show that quantum dots grown using two-step annealing have a lower average radius, less size dispersion, a higher volume ratio, and a larger decrease in bulk free energy compared to quantum dots grown conventionally.

  16. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
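The generalized procedure can be sketched for a two-component 1-D normal mixture: with step-size omega = 1 the update below reduces to the familiar EM iteration, and values of omega between 1 and 2 over-relax it. This is a hedged illustration; the data and parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])

def em_update(w, mu, sigma):
    """One standard EM step for a two-component 1-D normal mixture."""
    p = np.stack([wk * np.exp(-0.5 * ((x - m) / s) ** 2) / s
                  for wk, m, s in zip(w, mu, sigma)])
    r = p / p.sum(axis=0)                 # responsibilities (E step)
    n = r.sum(axis=1)                     # effective counts (M step follows)
    mu_new = (r @ x) / n
    sigma_new = np.sqrt((r * (x - mu_new[:, None]) ** 2).sum(axis=1) / n)
    return n / len(x), mu_new, sigma_new

def step(params, omega):
    """Generalized update theta <- theta + omega * (EM(theta) - theta);
    omega = 1 is plain EM, and local convergence holds for 0 < omega < 2."""
    w, mu, sigma = params
    w1, mu1, s1 = em_update(w, mu, sigma)
    blend = lambda old, new: old + omega * (new - old)
    w2 = blend(w, w1)
    return w2 / w2.sum(), blend(mu, mu1), blend(sigma, s1)

params = (np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0]))
for _ in range(200):
    params = step(params, omega=1.2)      # over-relaxed EM
print(sorted(params[1]))                  # component means near -2 and 3
```

Consistent with the abstract, the well-separated components here tolerate a step-size above 1, which accelerates the plain EM iteration.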

  17. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  18. Visually Lossless JPEG 2000 for Remote Image Browsing

    PubMed Central

    Oh, Han; Bilgin, Ali; Marcellin, Michael

    2017-01-01

    Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112

  19. A novel and simple test of gait adaptability predicts gold standard measures of functional mobility in stroke survivors.

    PubMed

    Hollands, K L; Pelton, T A; van der Veen, S; Alharbi, S; Hollands, M A

    2016-01-01

Although there is evidence that stroke survivors have reduced gait adaptability, the underlying mechanisms and the relationship to functional recovery are largely unknown. We explored the relationships between walking adaptability and clinical measures of balance, motor recovery and functional ability in stroke survivors. Stroke survivors (n=42) stepped to targets, on a 6 m walkway, placed to elicit step lengthening, shortening and narrowing on the paretic and non-paretic sides. The number of targets missed during six walks and the target stepping speed were recorded. Fugl-Meyer (FM), Berg Balance Scale (BBS), self-selected walking speed (SSWS) and single support (SS) and step length (SL) symmetry (using GaitRite when not walking to targets) were also assessed. Stepwise multiple linear regression was used to model the relationships between each clinical measure and: total targets missed, number missed with the paretic and non-paretic legs, and target stepping speed. Regression revealed a significant model for each outcome variable that included only one independent variable. Targets missed by the paretic limb was a significant predictor of FM (F(1,40)=6.54, p=0.014). Speed of target stepping was a significant predictor of both BBS (F(1,40)=26.36, p<0.0001) and SSWS (F(1,40)=37.00, p<0.0001). No variables were significant predictors of SL or SS asymmetry. Speed of target stepping was significantly predictive of BBS and SSWS, and paretic targets missed predicted FM, suggesting that fast target stepping requires good balance and that accurate stepping demands good paretic leg function. The relationships between these parameters indicate gait adaptability is a clinically meaningful target for measurement and treatment of functionally adaptive walking ability in stroke survivors. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Multi-passes warm rolling of AZ31 magnesium alloy, effect on evaluation of texture, microstructure, grain size and hardness

    NASA Astrophysics Data System (ADS)

    Kamran, J.; Hasan, B. A.; Tariq, N. H.; Izhar, S.; Sarwar, M.

    2014-06-01

    In this study the effect of multi-passes warm rolling of AZ31 magnesium alloy on texture, microstructure, grain size variation and hardness of as cast sample (A) and two rolled samples (B & C) taken from different locations of the as-cast ingot was investigated. The purpose was to enhance the formability of AZ31 alloy in order to help manufacturability. It was observed that multi-passes warm rolling (250°C to 350°C) of samples B & C with initial thickness 7.76mm and 7.73 mm was successfully achieved up to 85% reduction without any edge or surface cracks in ten steps with a total of 26 passes. The step numbers 1 to 4 consist of 5, 2, 11 and 3 passes respectively, the remaining steps 5 to 10 were single pass rolls. In each discrete step a fixed roll gap is used in a way that true strain per step increases very slowly from 0.0067 in the first step to 0.7118 in the 26th step. Both samples B & C showed very similar behavior after 26th pass and were successfully rolled up to 85% thickness reduction. However, during 10th step (27th pass) with a true strain value of 0.772 the sample B experienced very severe surface as well as edge cracks. Sample C was therefore not rolled for the 10th step and retained after 26 passes. Both samples were studied in terms of their basal texture, microstructure, grain size and hardness. Sample C showed an equiaxed grain structure after 85% total reduction. The equiaxed grain structure of sample C may be due to the effective involvement of dynamic recrystallization (DRX) which led to formation of these grains with relatively low misorientations with respect to the parent as cast grains. The sample B on the other hand showed a microstructure in which all the grains were elongated along the rolling direction (RD) after 90 % total reduction and DRX could not effectively play its role due to heavy strain and lack of plastic deformation systems. 
    The microstructure of the as-cast sample showed a near-random texture (mrd 4.3), an average grain size of 44 μm and a micro-hardness of 52 HV. The grain sizes of samples B and C were 14 μm and 27 μm respectively, and the mrd intensities of the basal texture were 5.34 and 5.46 respectively. The hardness of samples B and C came out to be 91 and 66 HV respectively, owing to the reduction in grain size, and followed the well-known Hall-Petch relationship.
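
    The closing Hall-Petch remark can be made concrete: with H = H0 + k·d^(−1/2), the two rolled samples fix the constants, and the fitted line can then be checked against the as-cast sample. A minimal sketch, with constants derived only from the two reported points, purely as an illustration:

```python
import math

def fit_hall_petch(point1, point2):
    """Solve H = H0 + k / sqrt(d) through two (grain size in um, hardness in HV)
    points; a two-point fit, used here only as an illustration."""
    (d1, h1), (d2, h2) = point1, point2
    k = (h1 - h2) / (d1 ** -0.5 - d2 ** -0.5)
    h0 = h1 - k * d1 ** -0.5
    return h0, k

# Reported values for the rolled samples: B (14 um, 91 HV) and C (27 um, 66 HV).
h0, k = fit_hall_petch((14.0, 91.0), (27.0, 66.0))
# Extrapolate to the as-cast grain size of 44 um.
predicted_as_cast = h0 + k / math.sqrt(44.0)
```

    For these numbers the fit predicts roughly 52 HV at 44 μm, consistent with the reported as-cast hardness, though a two-point fit is of course only indicative.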

  1. Multiple stage miniature stepping motor

    DOEpatents

    Niven, William A.; Shikany, S. David; Shira, Michael L.

    1981-01-01

    A stepping motor comprising a plurality of stages which may be selectively activated to effect stepping movement of the motor, and which are mounted along a common rotor shaft to achieve considerable reduction in motor size and minimum diameter, whereby sequential activation of the stages results in successive rotor steps with direction being determined by the particular activating sequence followed.

  2. Preparation of ritonavir nanosuspensions by microfluidization using polymeric stabilizers: I. A Design of Experiment approach.

    PubMed

    Karakucuk, Alptug; Celebi, Nevin; Teksin, Zeynep Safak

    2016-12-01

    The objective of this study was to prepare nanosuspensions of ritonavir (RTV), an anti-HIV protease inhibitor, to address its poor water solubility. The microfluidization method with a pre-treatment step was used to obtain the nanosuspensions. A Design of Experiment (DoE) approach was performed to understand the effects of the critical formulation parameters, which were selected as polymer type (HPMC or PVP), RTV-to-polymer ratio, and number of passes. Interactions between the formulation variables were evaluated by univariate ANOVA. Particle size, particle size distribution and zeta potential were selected as dependent variables. Scanning electron microscopy, X-ray powder diffraction, and differential scanning calorimetry were performed for in vitro characterization after lyophilization of the optimum nanosuspension formulation. The saturation solubility was examined for the coarse powder, physical mixture and nanosuspension. In vitro dissolution studies were conducted using polyoxyethylene 10 lauryl ether (POE10LE) and biorelevant media (FaSSIF and FeSSIF). The results showed that the nanosuspensions were partially amorphous and spherically shaped, with particle sizes ranging from 400 to 600 nm. Moreover, particle size distribution values of 0.1-0.4 and zeta potentials of about −20 mV were obtained. The nanosuspension showed a significantly increased solubility compared with the coarse powder (3.5-fold). The coarse powder, physical mixture, nanosuspension and commercial product dissolved completely in POE10LE; however, cumulative dissolved values reached ~20% in FaSSIF for the commercial product and nanosuspension. The nanosuspension showed more than 90% drug dissolved in FeSSIF, compared with ~50% for the commercial product in the same medium. It was determined that RTV dissolution was increased by the nanosuspension formulation.
We concluded that DoE approach is useful to develop nanosuspension formulation to improve solubility and dissolution rate of RTV. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Development and external validation of new ultrasound-based mathematical models for preoperative prediction of high-risk endometrial cancer.

    PubMed

    Van Holsbeke, C; Ameye, L; Testa, A C; Mascilini, F; Lindqvist, P; Fischerova, D; Frühauf, F; Fransis, S; de Jonge, E; Timmerman, D; Epstein, E

    2014-05-01

    To develop and validate strategies, using new ultrasound-based mathematical models, for the prediction of high-risk endometrial cancer and compare them with strategies using previously developed models or the use of preoperative grading only. Women with endometrial cancer were prospectively examined using two-dimensional (2D) and three-dimensional (3D) gray-scale and color Doppler ultrasound imaging. More than 25 ultrasound, demographic and histological variables were analyzed. Two logistic regression models were developed: one 'objective' model using mainly objective variables; and one 'subjective' model including subjective variables (i.e. subjective impression of myometrial and cervical invasion, preoperative grade and demographic variables). The following strategies were validated: a one-step strategy using only preoperative grading and two-step strategies using preoperative grading as the first step and one of the new models, subjective assessment or previously developed models as a second step. One-hundred and twenty-five patients were included in the development set and 211 were included in the validation set. The 'objective' model retained preoperative grade and minimal tumor-free myometrium as variables. The 'subjective' model retained preoperative grade and subjective assessment of myometrial invasion. On external validation, the performance of the new models was similar to that on the development set. Sensitivity for the two-step strategy with the 'objective' model was 78% (95% CI, 69-84%) at a cut-off of 0.50, 82% (95% CI, 74-88%) for the strategy with the 'subjective' model and 83% (95% CI, 75-88%) for that with subjective assessment. Specificity was 68% (95% CI, 58-77%), 72% (95% CI, 62-80%) and 71% (95% CI, 61-79%) respectively. The two-step strategies detected up to twice as many high-risk cases as preoperative grading only. The new models had a significantly higher sensitivity than did previously developed models, at the same specificity. 
    Two-step strategies with 'new' ultrasound-based models predict high-risk endometrial cancers with good accuracy, and do so better than previously developed models. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.

  4. Predicting the admission into medical school of African American college students who have participated in summer academic enrichment programs.

    PubMed

    Hesser, A; Cregler, L L; Lewis, L

    1998-02-01

    To identify cognitive and noncognitive variables as predictors of the admission into medical school of African American college students who have participated in summer academic enrichment programs (SAEPs). The study sample comprised 309 African American college students who participated in SAEPs at the Medical College of Georgia School of Medicine from 1980 to 1989 and whose educational and occupational statuses were determined by follow-up tracking. A three-step logistic regression was used to analyze the data (with alpha = .05); the criterion variable was admission to medical school. The 17 predictor variables studied were of two types, cognitive and noncognitive. The cognitive variables were (1) Scholastic Aptitude Test mathematics (SAT-M) score, (2) SAT verbal score, (3) college grade-point average (GPA), (4) college science GPA, (5) SAEP GPA, and (6) SAEP basic science GPA (BSGPA). The noncognitive variables were (1) gender, (2) highest college level at the time of the last SAEP application, (3) type of college attended (historically African American or predominantly white), (4) number of SAEPs attended, (5) career aspiration (physician or another health science option), (6) parents who were professionals, (7) parents who were health care role models, (8) evidence of leadership, (9) evidence of community service, (10) evidence of special motivation, and (11) strength of the letter of recommendation in the SAEP application. For each student the rating scores for the last four noncognitive variables were determined by averaging the ratings of two judges who reviewed relevant information in each student's file. In step 1, which explained 20% of the admission decision variance, SAT-M score, SAEP BSGPA, and college GPA were the three significant cognitive predictors identified.
In step 2, which explained 31% of the variance, the three cognitive predictors identified in step 1 were joined by three noncognitive predictors: career aspiration, type of college, and number of SAEPs attended. In step 3, which explained 29% of the variance, two cognitive variables (SAT-M score and SAEP BSGPA) and two noncognitive variables (career aspiration and strength of recommendation letter) were identified. The results support the concept of using both cognitive and noncognitive variables when selecting African American students for pre-medical school SAEPs.

  5. Predator Persistence through Variability of Resource Productivity in Tritrophic Systems.

    PubMed

    Soudijn, Floor H; de Roos, André M

    2017-12-01

    The trophic structure of species communities depends on the energy transfer between trophic levels. Primary productivity varies strongly through time, challenging the persistence of species at higher trophic levels. Yet resource variability has mostly been studied in systems with only one or two trophic levels. We test the effect of variability in resource productivity in a tritrophic model system including a resource, a size-structured consumer, and a size-specific predator. The model complies with fundamental principles of mass conservation and the body-size dependence of individual-level energetics and predator-prey interactions. Surprisingly, we find that resource variability may promote predator persistence. The positive effect of variability on the predator arises through periods with starvation mortality of juvenile prey, which reduces the intraspecific competition in the prey population. With increasing variability in productivity and starvation mortality in the juvenile prey, the prey availability increases in the size range preferred by the predator. The positive effect of prey mortality on the trophic transfer efficiency depends on the biologically realistic consideration of body size-dependent and food-dependent functions for growth and reproduction in our model. Our findings show that variability may promote the trophic transfer efficiency, indicating that environmental variability may sustain species at higher trophic levels in natural ecosystems.

  6. Vitrification of zona-free rabbit expanded or hatching blastocysts: a possible model for human blastocysts.

    PubMed

    Cervera, R P; Garcia-Ximénez, F

    2003-10-01

    The purpose of this study was to test the effectiveness of one two-step (A) and two one-step (B1 and B2) vitrification procedures on denuded expanded or hatching rabbit blastocysts held in standard sealed plastic straws, as a possible model for human blastocysts. The effect of blastocyst size was also studied on the basis of three size categories (I: diameter <200 μm; II: diameter 200-299 μm; III: diameter ≥300 μm). Rabbit expanded or hatching blastocysts were vitrified at day 4 or 5. Before vitrification, the zona pellucida was removed using acidic phosphate-buffered saline. For the two-step procedure, blastocysts were pre-equilibrated before vitrification in a solution containing 10% dimethyl sulphoxide (DMSO) and 10% ethylene glycol (EG) for 1 min. Different final vitrification solutions were compared: 20% DMSO and 20% EG with (A and B1) or without (B2) 0.5 mol/l sucrose. Of 198 vitrified blastocysts, 181 (91%) survived, regardless of the vitrification procedure applied. Vitrification procedure A showed significantly higher re-expansion (88%), attachment (86%) and trophectoderm outgrowth (80%) rates than the two one-step vitrification procedures, B1 and B2 (46 and 21%, 20 and 33%, and 18 and 23%, respectively). After warming, blastocysts of greater size (II and III) showed significantly higher attachment (54 and 64%) and trophectoderm outgrowth (44 and 58%) rates than smaller blastocysts (I, attachment: 29%; trophectoderm outgrowth: 25%). These results demonstrate that denuded expanded or hatching rabbit blastocysts of greater size can be satisfactorily vitrified using a two-step procedure. The similarity to vitrification solutions used in humans could make it feasible to test such a procedure on human denuded blastocysts of different sizes.

  7. Walk Ratio (Step Length/Cadence) as a Summary Index of Neuromotor Control of Gait: Application to Multiple Sclerosis

    ERIC Educational Resources Information Center

    Rota, Viviana; Perucca, Laura; Simone, Anna; Tesio, Luigi

    2011-01-01

    In healthy adults, the step length/cadence ratio [walk ratio (WR), in mm/(steps/min) and normalized for height] is known to be constant at around 6.5 mm/(step/min). It is a speed-independent index of overall neuromotor gait control, inasmuch as it reflects energy expenditure, balance, between-step variability, and attentional demand. The speed…
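
    The walk ratio described above is a simple quotient, which makes its speed-independence easy to illustrate. A short sketch with hypothetical gait values (not data from the study):

```python
def walk_ratio(step_length_mm, cadence_steps_per_min):
    """Walk ratio in mm/(steps/min): step length divided by cadence."""
    return step_length_mm / cadence_steps_per_min

# Hypothetical values for one walker at two self-selected speeds: if the WR
# is invariant, step length scales with cadence while speed changes freely.
slow = walk_ratio(585.0, 90.0)    # 0.585 m x 90 steps/min ~ 52.7 m/min
fast = walk_ratio(780.0, 120.0)   # 0.780 m x 120 steps/min ~ 93.6 m/min
```

    Both configurations give the same walk ratio of 6.5 mm/(step/min) even though the walking speed nearly doubles, which is the sense in which the index is speed-independent.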

  8. Clones of cells switch from reduction to enhancement of size variability in Arabidopsis sepals

    PubMed Central

    Tsugawa, Satoru; Hervieux, Nathan; Kierzkowski, Daniel; Routier-Kierzkowska, Anne-Lise; Sapala, Aleksandra; Hamant, Olivier; Smith, Richard S.; Boudaoud, Arezki

    2017-01-01

    Organs form with remarkably consistent sizes and shapes during development, whereas a high variability in growth is observed at the cell level. Given this contrast, it is unclear how such consistency in organ scale can emerge from cellular behavior. Here, we examine an intermediate scale, the growth of clones of cells in Arabidopsis sepals. Each clone consists of the progeny of a single progenitor cell. At early stages, we find that clones derived from a small progenitor cell grow faster than those derived from a large progenitor cell. This results in a reduction in clone size variability, a phenomenon we refer to as size uniformization. By contrast, at later stages of clone growth, clones change their growth pattern to enhance size variability, when clones derived from larger progenitor cells grow faster than those derived from smaller progenitor cells. Finally, we find that, at early stages, fast growing clones exhibit greater cell growth heterogeneity. Thus, cellular variability in growth might contribute to a decrease in the variability of clones throughout the sepal. PMID:29183944

  9. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large-scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
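
    The core bookkeeping behind such a scheme — a per-element CFL limit and power-of-two local refinement levels — can be sketched briefly. This illustrates only the level assignment, not the Newmark-LTS update itself; the Courant factor and element sizes are hypothetical:

```python
import math

def cfl_dt(h, wave_speed, courant=0.5):
    """Largest stable explicit time-step for an element of size h under the
    CFL condition (courant is an illustrative safety factor)."""
    return courant * h / wave_speed

def lts_levels(element_sizes, wave_speed):
    """Assign each element a level p so it can be advanced with dt_coarse / 2**p,
    the power-of-two refinement used by multilevel local-time-stepping schemes.
    A sketch of the bookkeeping only."""
    dts = [cfl_dt(h, wave_speed) for h in element_sizes]
    dt_coarse = max(dts)
    return [math.ceil(math.log2(dt_coarse / dt)) for dt in dts]

# One locally refined element (h = 0.1) forces a 10x smaller stable step;
# with LTS only that element takes the small steps (level 4: dt_coarse / 16).
levels = lts_levels([1.0, 1.0, 0.1], wave_speed=1.0)
```

    Without LTS, the single small element would drag the whole mesh down to its time-step; with the level assignment, the coarse elements keep the large step and only the refined element subcycles.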

  10. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

    A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization-step sizes, a feature not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero-bit-planes, which can be extracted from the JPEG 2000 codestream by parsing only the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results reveal the effectiveness of image identification based on the new method.

  11. A stochastic locomotor control model for the nurse shark, Ginglymostoma cirratum.

    PubMed

    Gerald, K B; Matis, J H; Kleerekoper, H

    1978-06-12

    The locomotor behavior of the nurse shark (Ginglymostoma cirratum) is characterized by 17 variables (frequency and ratios of left, right, and total turns; their radians; straight paths (steps); distance travelled; and velocity). Within each of these variables there is an internal time dependency, the structure of which was elaborated together with an improved statistical model predicting their behavior within 90% confidence limits. The model allows for the sensitive detection of subtle locomotor responses to sensory stimulation, as values of the variables may exceed the established confidence limits within minutes after onset of the stimulus. The locomotor activity is well described by an autoregressive time-series model and can be predicted by only seven variables. Six of these form two independently operating clusters. The first consists of the number of right turns, the distance travelled and the mean velocity; the second of the mean size of right turns, of left turns, and of all turns. The same clustering is obtained independently by a cluster analysis of cross-sections of the seven time series. It is apparent that, among a total of 17 locomotor variables, seven behave as individually independent agents, presumably controlled by seven separate and independent centers. The output of each center can only be predicted by its own behavior. In spite of the independence of the individual variables, their internal structure is similar in important aspects, which may result from control by a common command center. The shark locomotor model differs in important aspects from the one previously constructed for the goldfish. The interdependence of the locomotor variables in both species may be related to the control mechanisms postulated by von Holst for the coordination of rhythmic fin movements in fishes. A locomotor control model for the nurse shark is proposed.

  12. Active control of impulsive noise with symmetric α-stable distribution based on an improved step-size normalized adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yali; Zhang, Qizhi; Yin, Yixin

    2015-05-01

    In this paper, active control of impulsive noise with a symmetric α-stable (SαS) distribution is studied. A general step-size-normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on an analysis of existing algorithms, and a Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm requires neither parameter selection and threshold estimation nor cost-function selection and complex gradient computation. Computer simulations were carried out, suggesting that the proposed algorithm is effective at attenuating SαS impulsive noise, and the algorithm was then implemented in an experimental ANC system. Experimental results show that the proposed scheme performs well for SαS impulsive noise attenuation.
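
    The step-size-normalization idea can be illustrated with a one-tap LMS filter whose update is scaled by a Gaussian of the input sample, so rare impulsive samples barely move the weight. This is a minimal sketch of the normalization principle, not the paper's full filtered-x controller; the signals and constants are hypothetical:

```python
import math

def gaussian_step_lms(x, d, mu=0.1, sigma=2.0):
    """One-tap LMS whose step size is scaled by a Gaussian of the input sample,
    so impulsive samples contribute almost nothing to the weight update."""
    w = 0.0
    for xn, dn in zip(x, d):
        e = dn - w * xn                                   # a-priori error
        g = math.exp(-xn * xn / (2.0 * sigma * sigma))    # step-size normalization
        w += mu * g * e * xn
    return w

# Reference signal: unit samples with one large impulse; unknown system is 0.5*x.
x = [1.0, -1.0] * 100 + [50.0] + [1.0] * 50
d = [0.5 * v for v in x]
w = gaussian_step_lms(x, d)
```

    For the impulse of amplitude 50 the Gaussian factor is effectively zero, so the adaptation is untouched by it and the weight still converges to the true coefficient 0.5.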

  13. Characterizing 3D grain size distributions from 2D sections in mylonites using a modified version of the Saltykov method

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco; Llana-Fúnez, Sergio

    2016-04-01

    The understanding of creep behaviour in rocks requires knowledge of the 3D grain size distributions (GSD) that result from dynamic recrystallization during deformation. The methods that estimate the 3D grain size distribution directly (serial sectioning, synchrotron- or X-ray-based tomography) are expensive, time-consuming and, in most cases, challenging at best. In practice, grain size distributions are therefore mostly derived from 2D sections. Although there are a number of methods in the literature to derive the actual 3D grain size distribution from 2D sections, the most popular for highly deformed rocks is the so-called Saltykov method. It has, however, two major drawbacks: it assumes no interaction between grains, which does not hold for recrystallized mylonites, and it uses histograms to describe distributions, which limits the quantification of the GSD. The first aim of this contribution is to test whether the interaction between grains in mylonites, i.e. random grain packing, significantly affects the GSDs estimated by the Saltykov method. We test this using a random resampling technique on a large data set (n = 12298). The full data set is built from several parallel thin sections that cut a completely dynamically recrystallized quartz aggregate in a rock sample from a Variscan shear zone in NW Spain. The results prove that the Saltykov method is reliable as long as the number of grains is large (n > 1000). Assuming that a lognormal distribution is an optimal approximation for the GSD in a completely dynamically recrystallized rock, we introduce an additional step into the Saltykov method that allows a continuous probability distribution function of the 3D grain size population to be estimated. The additional step takes the midpoints of the classes obtained by the Saltykov method and fits a lognormal distribution within a trust region using a non-linear least squares algorithm. The new protocol is named the two-step method.
    The conclusion of this work is that both the Saltykov and the two-step methods are accurate and simple enough to be useful in practice for rocks, alloys or ceramics with near-equant grains and expected lognormal distributions. The Saltykov method is particularly suitable for estimating the volumes of particular grain fractions, while the two-step method is suited to quantifying the full GSD (mean and standard deviation in log grain size). The two-step method is implemented in a free, open-source and easy-to-use script (see http://marcoalopez.github.io/GrainSizeTools/).
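
    The second step — turning Saltykov class midpoints into a continuous lognormal GSD — can be sketched as follows. The paper fits the lognormal by trust-region non-linear least squares (see the linked GrainSizeTools script); the weighted log-moment estimate below is a simpler, hypothetical stand-in, and the class data are invented:

```python
import math

def fit_lognormal_from_classes(midpoints, freqs):
    """Approximate a lognormal grain size distribution from the class midpoints
    and frequencies that a Saltykov-style unfolding produces. A weighted
    log-moment estimate standing in for the paper's least-squares fit."""
    total = sum(freqs)
    logs = [math.log(m) for m in midpoints]
    mu = sum(f * l for f, l in zip(freqs, logs)) / total
    var = sum(f * (l - mu) ** 2 for f, l in zip(freqs, logs)) / total
    return mu, math.sqrt(var)

# Hypothetical Saltykov output: class midpoints (um) and their frequencies.
mu, sigma = fit_lognormal_from_classes([10, 20, 40, 80], [5, 30, 45, 20])
geometric_mean = math.exp(mu)   # geometric mean grain size implied by the fit
```

    The returned (mu, sigma) fully specify the continuous lognormal, which is exactly the benefit the two-step method claims over a histogram-only description.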

  14. Avoiding Stair-Step Artifacts in Image Registration for GOES-R Navigation and Registration Assessment

    NASA Technical Reports Server (NTRS)

    Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John

    2016-01-01

    In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability of the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact that limits the accuracy of subpixel image-offset estimation using image correlation. When the two images to be registered have the same pixel size, subpixel image registration preferentially selects offset values at which the image pixel boundaries are nearly aligned. Because of the shape of the curve plotting input displacement against estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground-truth maps. To create the ground-truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite and then scaled to an appropriate pixel size. Minimizing processing time motivates choosing map pixels of the same size as the GOES-R pixels; at this pixel size the shift estimate is computed efficiently, but the stair-step artifact is present. If the map pixel is very small, the stair-step artifact is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale of the truth maps used for registering GOES-R ABI images.
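
    The artifact is easiest to see in one dimension: integer-lag correlation rounds a fractional shift to the nearest pixel, and only a subpixel refinement recovers the fraction. The parabolic fit below is one common refinement, not necessarily the method used in the paper, and the Gaussian feature is synthetic:

```python
import math

def gaussian(t, shift=0.0):
    """Synthetic image feature: a unit Gaussian centered at `shift`."""
    return math.exp(-0.5 * (t - shift) ** 2)

def estimate_shift(ref, moved, max_lag=5):
    """Integer-lag cross-correlation plus a parabolic fit around the peak.
    The integer peak alone shows the stair-step effect (it rounds the true
    shift); the parabola vertex supplies the subpixel refinement."""
    n = len(ref)
    def corr(lag):
        return sum(ref[i] * moved[i + lag] for i in range(max_lag, n - max_lag))
    lags = range(-max_lag + 1, max_lag)
    peak = max(lags, key=corr)
    cm, c0, cp = corr(peak - 1), corr(peak), corr(peak + 1)
    subpixel = peak + 0.5 * (cm - cp) / (cm - 2.0 * c0 + cp)  # parabola vertex
    return peak, subpixel

ref = [gaussian(t) for t in range(-10, 11)]
moved = [gaussian(t, shift=0.3) for t in range(-10, 11)]
peak, est = estimate_shift(ref, moved)
```

    The integer peak lands at 0 (the stair-step value), while the refined estimate lands close to the true 0.3-pixel shift; the small residual bias of the parabolic fit is itself one face of the stair-step behavior.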

  15. Affected States Soft Independent Modeling by Class Analogy from the Relation Between Independent Variables, Number of Independent Variables and Sample Size

    PubMed Central

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz

    2013-01-01

    Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to determine whether the method is affected by the number of independent variables, the relationship between variables and the sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. Simulations were carried out to determine whether the method is influenced by the number of independent variables, the relationship between variables and the sample size. Conditions in which the sample sizes in both groups are equal, with 30, 100 and 1000 samples; in which the number of variables is 2, 3, 5, 10, 50 and 100; and in which the relationship between variables is very high, medium or very low were considered. Results: The average classification accuracy of the simulations, which were carried out 1000 times for each condition of the trial plan, is given in tables. Conclusion: Diagnostic accuracy increases as the number of independent variables increases. SIMCA is a method suited to conditions in which the relationships between variables are strong, the independent variables are many, and the data contain outlier values. PMID:25207065

  16. Affected States soft independent modeling by class analogy from the relation between independent variables, number of independent variables and sample size.

    PubMed

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, Irem Ersöz

    2013-03-01

    The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to determine whether the method is affected by the number of independent variables, the relationship between variables and the sample size. Simulation study. The SIMCA model is performed in two stages. Simulations were carried out to determine whether the method is influenced by the number of independent variables, the relationship between variables and the sample size. Conditions in which the sample sizes in both groups are equal, with 30, 100 and 1000 samples; in which the number of variables is 2, 3, 5, 10, 50 and 100; and in which the relationship between variables is very high, medium or very low were considered. The average classification accuracy of the simulations, which were carried out 1000 times for each condition of the trial plan, is given in tables. Diagnostic accuracy increases as the number of independent variables increases. SIMCA is a method suited to conditions in which the relationships between variables are strong, the independent variables are many, and the data contain outlier values.

  17. Method and apparatus for sizing and separating warp yarns using acoustical energy

    DOEpatents

    Sheen, S.H.; Chien, H.T.; Raptis, A.C.; Kupperman, D.S.

    1998-05-19

    A slashing process is disclosed for preparing warp yarns for weaving operations including the steps of sizing and/or desizing the yarns in an acoustic resonance box and separating the yarns with a leasing apparatus comprised of a set of acoustically agitated lease rods. The sizing step includes immersing the yarns in a size solution contained in an acoustic resonance box. Acoustic transducers are positioned against the exterior of the box for generating an acoustic pressure field within the size solution. Ultrasonic waves that result from the acoustic pressure field continuously agitate the size solution to effect greater mixing and more uniform application and penetration of the size onto the yarns. The sized yarns are then separated by passing the warp yarns over and under lease rods. Electroacoustic transducers generate acoustic waves along the longitudinal axis of the lease rods, creating a shearing motion on the surface of the rods for splitting the yarns. 2 figs.

  18. Effects of the voltage and time of anodization on modulation of the pore dimensions of AAO films for nanomaterials synthesis

    NASA Astrophysics Data System (ADS)

    Chahrour, Khaled M.; Ahmed, Naser M.; Hashim, M. R.; Elfadill, Nezar G.; Maryam, W.; Ahmad, M. A.; Bououdina, M.

    2015-12-01

    Highly ordered, hexagonally shaped nanoporous anodic aluminum oxide (AAO) films were successfully fabricated by two-step anodization of a 1 μm-thick Al layer pre-deposited onto a Si substrate. The growth mechanism of the porous AAO film was investigated via the anodization current-time behavior at different anodizing voltages, and the microstructural evolution during the two-step anodization was visualized by cross-sectional and top-view FESEM imaging. Optimum values of process variables such as the annealing time of the as-deposited Al thin film and the pore-widening time of the porous AAO film were determined experimentally to obtain AAO films with a uniformly distributed and vertically aligned porous microstructure. Pores with diameters ranging from 50 nm to 110 nm and thicknesses between 250 nm and 1400 nm were obtained by controlling the two main anodization parameters: the anodizing voltage and the time of the second-step anodization. X-ray diffraction analysis reveals an amorphous-to-crystalline phase transformation after annealing at temperatures above 800 °C. AFM images show optimum ordering of the porous AAO film anodized under the low-voltage condition. AAO films may be exploited as templates with a desired size distribution for the fabrication of CuO nanorod arrays. Such nanostructured materials exhibit unique properties and hold high potential for nanotechnology devices.

  19. An adaptive scale factor based MPPT algorithm for changing solar irradiation levels in outer space

    NASA Astrophysics Data System (ADS)

    Kwan, Trevor Hocksun; Wu, Xiaofeng

    2017-03-01

    Maximum power point tracking (MPPT) techniques are popularly used to maximize the output of solar panels by continuously tracking the maximum power point (MPP) of their P-V curves, which depends both on the panel temperature and on the input insolation. Various MPPT algorithms have been studied in the literature, including perturb and observe (P&O), hill climbing, incremental conductance, fuzzy logic control and neural networks. This paper presents an algorithm that improves MPP tracking performance by adaptively scaling the DC-DC converter duty cycle. The principle of the proposed algorithm is to detect oscillation by checking the sign (i.e. direction) of the duty-cycle perturbation between the current and previous time steps. If the signs differ, an oscillation is clearly present, and the duty-cycle perturbation is subsequently scaled down by a constant factor. By repeating this process, the steady-state oscillations become negligibly small, which allows a smooth steady-state MPP response. To verify the proposed MPPT algorithm, a simulation involving irradiance levels typically encountered in outer space was conducted. Simulation and experimental results show that the proposed algorithm is fast and stable in comparison not only with conventional fixed-step counterparts but also with previous variable step-size algorithms.
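
    The oscillation-detection rule described above (reverse direction, then shrink the perturbation) can be sketched with a perturb-and-observe loop on a hypothetical static P-V curve. The curve, gains and iteration count are illustrative, not the paper's model:

```python
def pv_power(duty):
    """Hypothetical static P-V curve with the maximum power point at duty = 0.4."""
    return 100.0 - 500.0 * (duty - 0.4) ** 2

def adaptive_po_mppt(p_of_d, duty=0.1, step=0.05, shrink=0.5, iters=60):
    """Perturb-and-observe MPPT that halves the perturbation whenever its
    direction flips (an oscillation), sketching the adaptive-scale-factor idea."""
    direction = 1.0
    p_prev = p_of_d(duty)
    for _ in range(iters):
        duty += direction * step
        p = p_of_d(duty)
        if p < p_prev:           # power dropped: wrong direction...
            direction = -direction
            step *= shrink       # ...so reverse and damp the oscillation
        p_prev = p
    return duty

duty = adaptive_po_mppt(pv_power)
```

    Once the duty cycle starts oscillating around the MPP at 0.4, each direction flip halves the step, so the steady-state ripple decays geometrically instead of persisting as it would with a fixed step.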

  20. Perceptual-motor regulation in locomotor pointing while approaching a curb.

    PubMed

    Andel, Steven van; Cole, Michael H; Pepping, Gert-Jan

    2018-02-01

    Locomotor pointing is a task that has been the focus of research in the context of sport (e.g. long jumping and cricket) as well as normal walking. Collectively, these studies have produced a broad understanding of locomotor pointing, but generalizability has been limited to laboratory-type tasks and/or tasks with high spatial demands. The current study aimed to generalize previous findings on locomotor pointing to the common daily task of approaching and stepping onto a curb. Sixteen people completed 33 repetitions of a task that required them to walk up to and step onto a curb. Information about their foot placement was collected using a combination of measures derived from a pressure-sensitive walkway and video data. Variables related to perceptual-motor regulation were analyzed at the inter-trial, intra-step and inter-step levels. As in previous studies, analysis of the foot placements showed that variability in foot placement decreased as the participants drew closer to the curb. Regulation seemed to be initiated earlier in this study than in previous studies, as shown by decreasing variability in foot placement as early as eight steps before reaching the curb. Furthermore, when walking up to the curb, most people regulated their gait so as to achieve minimal variability in the foot placement on top of the curb, rather than in a placement in front of the curb. Combined, these results showed a strong perceptual-motor coupling in the task of approaching and stepping up onto a curb, rendering this task a suitable test of perceptual-motor regulation in walking. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Molecular Dynamics of Flexible Polar Cations in a Variable Confined Space: Toward Exceptional Two-Step Nonlinear Optical Switches.

    PubMed

    Xu, Wei-Jian; He, Chun-Ting; Ji, Cheng-Min; Chen, Shao-Li; Huang, Rui-Kang; Lin, Rui-Biao; Xue, Wei; Luo, Jun-Hua; Zhang, Wei-Xiong; Chen, Xiao-Ming

    2016-07-01

    The changeable molecular dynamics of flexible polar cations in the variable confined space between inorganic chains brings about a new type of two-step nonlinear optical (NLO) switch with genuine "off-on-off" second harmonic generation (SHG) conversion between one NLO-active state and two NLO-inactive states. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Multilevel resistive information storage and retrieval

    DOEpatents

    Lohn, Andrew; Mickel, Patrick R.

    2016-08-09

    The present invention relates to resistive random-access memory (RRAM or ReRAM) systems, as well as methods of employing multiple state variables to form degenerate states in such memory systems. The methods herein allow for precise write and read steps to form multiple state variables, and these steps can be performed electrically. Such an approach allows for multilevel, high density memory systems with enhanced information storage capacity and simplified information retrieval.

  3. Optimization of Surface Roughness and Wall Thickness in Dieless Incremental Forming Of Aluminum Sheet Using Taguchi

    NASA Astrophysics Data System (ADS)

    Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir

    2018-03-01

    Incremental sheet forming is a versatile sheet metal forming process in which a sheet metal is formed into its final shape by a series of localized deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability, and forming speed. This project focuses on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate, and step size. In addition, the effects of wall angle, feed rate, and step size on the surface roughness and thickness uniformity of aluminium sheet were investigated. From the results, it was observed that surface roughness and thickness uniformity varied inversely due to the formation of surface waviness. Increasing the feed rate and decreasing the step size produced a lower surface roughness, while a more uniform thickness reduction was obtained by reducing the wall angle and step size. Using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of aluminium sheet were determined. The findings of this project help to reduce the time needed to optimise surface roughness and thickness uniformity in incremental sheet forming.
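    Taguchi analyses of this kind typically rank parameter levels by a signal-to-noise (S/N) ratio. The standard "smaller-is-better" statistic, appropriate for a response like surface roughness, is a generic textbook formula rather than code from the study:

```python
import math

def sn_smaller_is_better(ys):
    """Taguchi 'smaller-is-better' signal-to-noise ratio in dB:
    SN = -10 * log10(mean(y^2)). A higher SN indicates a lower and more
    consistent response (e.g. surface roughness) across repetitions."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))
```

The parameter combination maximizing the mean S/N over an orthogonal array of trials is taken as the optimum.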

  4. Single cardiac ventricular myosins are autonomous motors

    PubMed Central

    Wang, Yihua; Yuan, Chen-Ching; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta

    2018-01-01

    Myosin transduces ATP free energy into mechanical work in muscle. Cardiac muscle has dynamically wide-ranging power demands on the motor as the muscle changes modes in a heartbeat from relaxation, via auxotonic shortening, to isometric contraction. The cardiac power output modulation mechanism is explored in vitro by assessing single cardiac myosin step-size selection versus load. Transgenic mice express human ventricular essential light chain (ELC) in wild-type (WT), or hypertrophic cardiomyopathy-linked mutant forms, A57G or E143K, in a background of mouse α-cardiac myosin heavy chain. Ensemble motility and single myosin mechanical characteristics are consistent with an A57G that impairs ELC N-terminus actin binding and an E143K that impairs lever-arm stability, while both species down-shift average step-size with increasing load. Cardiac myosin in vivo down-shifts velocity/force ratio with increasing load by changed unitary step-size selections. Here, the loaded in vitro single myosin assay indicates quantitative complementarity with the in vivo mechanism. Both have two embedded regulatory transitions, one inhibiting ADP release and a second novel mechanism inhibiting actin detachment via strain on the actin-bound ELC N-terminus. Competing regulators filter unitary step-size selection to control force-velocity modulation without myosin integration into muscle. Cardiac myosin is muscle in a molecule. PMID:29669825

  5. Effect of reaction-step-size noise on the switching dynamics of stochastic populations

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Heller-Algazi, Metar; Assaf, Michael

    2016-05-01

    In genetic circuits, when the messenger RNA lifetime is short compared to the cell cycle, proteins are produced in geometrically distributed bursts, which greatly affects the cellular switching dynamics between different metastable phenotypic states. Motivated by this scenario, we study a general problem of switching or escape in stochastic populations, where influx of particles occurs in groups or bursts, sampled from an arbitrary distribution. The fact that the step size of the influx reaction is a priori unknown and, in general, may fluctuate in time with a given correlation time and statistics, introduces an additional nondemographic reaction-step-size noise into the system. Employing the probability-generating function technique in conjunction with Hamiltonian formulation, we are able to map the problem in the leading order onto solving a stationary Hamilton-Jacobi equation. We show that compared to the "usual case" of single-step influx, bursty influx exponentially decreases the population's mean escape time from its long-lived metastable state. In particular, close to bifurcation we find a simple analytical expression for the mean escape time which solely depends on the mean and variance of the burst-size distribution. Our results are demonstrated on several realistic distributions and compare well with numerical Monte Carlo simulations.

  6. Improved neural network based scene-adaptive nonuniformity correction method for infrared focal plane arrays.

    PubMed

    Lai, Rui; Yang, Yin-tang; Zhou, Duan; Li, Yue-jin

    2008-08-20

    An improved scene-adaptive nonuniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPAs) is proposed. This method simultaneously estimates the infrared detectors' parameters and eliminates the fixed pattern noise (FPN) caused by nonuniformity using a neural network (NN) approach. In the learning process of neuron parameter estimation, the traditional LMS algorithm is replaced with a newly presented variable step size (VSS) normalized least-mean-square (NLMS) adaptive filtering algorithm, which yields faster convergence, smaller misadjustment, and lower computational cost. In addition, a new NN structure is designed to estimate the desired target value, which considerably improves the calibration precision. The proposed NUC method achieves high correction performance, as validated by experimental results quantitatively tested with a simulated testing sequence and a real infrared image sequence.
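    A VSS-NLMS update of the kind the abstract refers to follows the standard NLMS recursion with a step size that adapts to the error. The sketch below uses a simple smoothed-error-power rule for the step; that rule, and all names and defaults, are illustrative assumptions, not the paper's exact VSS algorithm.

```python
import numpy as np

def vss_nlms(x, d, n_taps=4, mu_min=0.05, mu_max=1.0, alpha=0.97, eps=1e-8):
    """Identify an FIR system from input x and desired output d using
    NLMS with a variable step size (VSS): the step stays large while the
    smoothed error power is large (fast convergence) and shrinks as the
    error dies out (small misadjustment)."""
    w = np.zeros(n_taps)
    p = 1.0                                    # smoothed error power
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # tap vector [x[n], x[n-1], ...]
        e = d[n] - w @ u                       # a priori error
        p = alpha * p + (1.0 - alpha) * e * e  # recursive error-power estimate
        mu = min(mu_max, max(mu_min, p / (p + 1.0)))  # hedged VSS rule
        w += mu * e * u / (u @ u + eps)        # normalized LMS update
    return w
```

On a noiseless system-identification problem the estimated taps converge to the true impulse response.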

  7. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data.

    PubMed

    Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun

    2018-01-01

    Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios (GMPR), a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
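    The geometric-mean-of-pairwise-ratios idea can be sketched as follows. This is a simplified reading of the method (median of log count ratios over the taxa shared by each sample pair, then averaged over pairs); edge cases such as sample pairs with no shared taxa are not handled, and the exact estimator may differ from the published one.

```python
import numpy as np

def gmpr_size_factors(counts):
    """counts: (samples x taxa) count matrix. For each pair of samples
    (i, j), take the median log ratio over taxa with nonzero counts in
    both samples (robust to zeros), then the size factor of sample i is
    the geometric mean of those pairwise medians over all j != i."""
    n = counts.shape[0]
    log_factors = np.zeros(n)
    for i in range(n):
        pair_medians = []
        for j in range(n):
            if i == j:
                continue
            shared = (counts[i] > 0) & (counts[j] > 0)   # taxa in both samples
            pair_medians.append(
                np.median(np.log(counts[i, shared] / counts[j, shared])))
        log_factors[i] = np.mean(pair_medians)
    return np.exp(log_factors)
```

A sample with uniformly doubled counts gets a size factor of 2, and zeros in either sample are simply excluded from that pair's ratios.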

  8. Vergence driven accommodation with simulated disparity in myopia and emmetropia.

    PubMed

    Maiello, Guido; Kerber, Kristen L; Thorn, Frank; Bex, Peter J; Vera-Diaz, Fuensanta A

    2018-01-01

    The formation of focused and corresponding foveal images requires a close synergy between the accommodation and vergence systems. This linkage is usually decoupled in virtual reality systems and may be dysfunctional in people who are at risk of developing myopia. We study how refractive error affects vergence-accommodation interactions in stereoscopic displays. Vergence and accommodative responses were measured in 21 young healthy adults (n = 9 myopes, 22-31 years) while subjects viewed naturalistic stimuli on a 3D display. In Step 1, vergence was driven behind the monitor using a blurred, non-accommodative, uncrossed disparity target. In Step 2, vergence and accommodation were driven back to the monitor plane using naturalistic images that contained structured depth and focus information from size, blur and/or disparity. In Step 1, both refractive groups converged towards the stereoscopic target depth plane, but the vergence-driven accommodative change was smaller in emmetropes than in myopes (F(1,19) = 5.13, p = 0.036). In Step 2, there was little effect of peripheral depth cues on accommodation or vergence in either refractive group. However, vergence responses were significantly slower (F(1,19) = 4.55, p = 0.046) and accommodation variability was higher (F(1,19) = 12.9, p = 0.0019) in myopes. Vergence and accommodation responses are disrupted in virtual reality displays in both refractive groups. Accommodation responses are less stable in myopes, perhaps due to a lower sensitivity to dioptric blur. Such inaccuracies of accommodation may cause long-term blur on the retina, which has been associated with a failure of emmetropization. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Intermediate surface structure between step bunching and step flow in SrRuO3 thin film growth

    NASA Astrophysics Data System (ADS)

    Bertino, Giulia; Gura, Anna; Dawber, Matthew

    We performed a systematic study of SrRuO3 (SRO) thin films grown on TiO2-terminated SrTiO3 substrates using off-axis magnetron sputtering. We investigated step bunching formation and the evolution of the SRO film morphology by varying the step size of the substrate, the growth temperature and the film thickness. The thin films were characterized using atomic force microscopy and X-ray diffraction. We identified single and multiple step bunching and step flow growth regimes as a function of the growth parameters. We also clearly observe a stronger influence of the substrate step size on the evolution of the SRO film surface relative to the other growth parameters. Remarkably, we observe the formation of a smooth, regular and uniform "fish skin" structure at the transition between one regime and another. We believe that the fish skin structure results from the merging of 2D flat islands predicted by previous models. The direct observation of this transition structure allows us to better understand how and when step bunching develops in the growth of SrRuO3 thin films.

  10. Advanced Energy Storage Management in Distribution Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Ceylan, Oguzhan; Xiao, Bailu

    2016-01-01

    With increasing penetration of distributed generation (DG) in distribution networks (DN), the secure and optimal operation of the DN has become an important concern. In this paper, an iterative mixed integer quadratic constrained quadratic programming model is developed to optimize the operation of a three-phase unbalanced distribution system with high penetration of photovoltaic (PV) panels, DG and energy storage (ES). The proposed model minimizes not only the operating cost, including fuel cost and purchasing cost, but also voltage deviations and power loss. The optimization model is based on the linearized sensitivity coefficients between state variables (e.g., node voltages) and control variables (e.g., real and reactive power injections of DG and ES). To avoid slow convergence when close to the optimum, a golden search method is introduced to control the step size and accelerate the convergence. The proposed algorithm is demonstrated on modified IEEE 13-node test feeders with multiple PV panels, DG and ES. Numerical simulation results validate the proposed algorithm. Various scenarios of system configuration are studied and some critical findings are presented.
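    "Golden search" here refers to golden-section search, a derivative-free bracketing method for one-dimensional minimization that is commonly used to choose a step size near the optimum. A generic version (not the paper's implementation) looks like:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Golden-section search for the minimizer of a unimodal f on [a, b].
    Each iteration shrinks the bracket by the inverse golden ratio
    (~0.618) while reusing one interior function evaluation, giving a
    derivative-free way to pick a near-optimal step size."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0          # 1/phi ~ 0.618
    c = b - invphi * (b - a)                       # left interior point
    d = a + invphi * (b - a)                       # right interior point
    while abs(b - a) > tol:
        if f(c) < f(d):                            # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                      # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)
```

Applied to the line-search subproblem, f would be the objective evaluated along the current search direction as a function of the step length.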

  11. Primary productivity (PP) in the North Pacific Subtropical Gyre: Understanding drivers of variability via 14C-tracer incubations and PP diagnosed via the diurnal cycle of particulate carbon.

    NASA Astrophysics Data System (ADS)

    White, A. E.; Letelier, R. M.

    2016-02-01

    The rate of primary production (PP) in the ocean is a fundamental step in the ocean's food web and biological carbon pump. For more than 50 years oceanographers have relied primarily on estimates of PP based on in vitro measurements of 14CO2 uptake rates. Yet, it is difficult to reconcile PP rates measured in vitro with in situ rates. Here we present diurnal cycles of optically derived particulate organic carbon (POC) and particle size distributions measured over a series of cruises in the North Pacific relative to traditional 14C-based PP measurements. We have calculated net PP from the daytime increase in optically derived POC, and the sum of respiration, grazing and sinking from the nighttime POC decrease. Comparisons of optically derived net PP to parallel 12-hr 14C incubations are highly significant. The variability in productivity measurements over daily to seasonal to annual time scales is discussed relative to predominant chemical, physical and climatic forcing.

  12. Microphysical Timescales in Clouds and their Application in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xiping; Tao, Wei-Kuo; Simpson, Joanne

    2007-01-01

    Independent prognostic variables in cloud-resolving modeling are chosen on the basis of an analysis of microphysical timescales in clouds versus the time step for numerical integration. Two of them are the moist entropy and the total mixing ratio of airborne water with no contributions from precipitating particles. As a result, temperature can be diagnosed easily from those prognostic variables, and cloud microphysics can be separated (or modularized) from moist thermodynamics. Numerical comparison experiments show that those prognostic variables work well even when a large time step (e.g., 10 s) is used for numerical integration.

  13. Relationships among providing maternal, child, and adolescent health services; implementing various financial strategy responses; and performance of local health departments.

    PubMed

    Issel, L Michele; Olorunsaiye, Comfort; Snebold, Laura; Handler, Arden

    2015-04-01

    We explored the relationships between local health department (LHD) structure, capacity, and macro-context variables and performance of essential public health services (EPHS). In 2012, we assessed a stratified, random sample of 195 LHDs that provided data via an online survey regarding performance of EPHS, the services provided or contracted out, the financial strategies used in response to budgetary pressures, and the extent of collaborations. We performed weighted analyses that included analysis of variance, pairwise correlations by jurisdiction population size, and linear regressions. On average, LHDs provided approximately 13 (36%) of 35 possible services either directly or by contract. Rather than cut services or externally consolidating, LHDs took steps to generate more revenue and maximize capacity. Higher LHD performance of EPHS was significantly associated with delivering more services, initiating more financial strategies, and engaging in collaboration, after adjusting for the effects of the Affordable Care Act and jurisdiction size. During changing economic and health care environments, we found that strong structural capacity enhanced local health department EPHS performance for maternal, child, and adolescent health.

  14. Preliminary evaluation of cryogenic two-phase flow imaging using electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Xie, Huangjun; Yu, Liu; Zhou, Rui; Qiu, Limin; Zhang, Xiaobin

    2017-09-01

    The potential application of 2-D eight-electrode electrical capacitance tomography (ECT) to inversion imaging of liquid nitrogen-vaporous nitrogen (LN2-VN2) flow in a tube is theoretically evaluated. The phase distribution of the computational domain is obtained using the simultaneous iterative reconstruction technique with a variable iterative step size. The detailed mathematical derivations for the calculations are presented. The calculated phase distribution for the two detached LN2 column case shows results comparable with the water-air case, despite the much lower dielectric permittivity of LN2 compared with water. The inversion images of eight different LN2-VN2 flow patterns are presented and quantitatively evaluated by calculating the relative void fraction error and the correlation coefficient. The results demonstrate that the developed reconstruction technique for ECT has the capacity to reconstruct the phase distribution of complex LN2-VN2 flow, while the accuracy of the inversion images is significantly influenced by the size of the discrete phase. The influence of measurement noise on image quality is also considered in the calculations.
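    A minimal Landweber-type simultaneous iterative reconstruction (SIRT) step with a shrinking relaxation parameter illustrates the variable-step idea on a generic linear system Ax = b; the geometric decay schedule and all defaults are assumptions for illustration, not the paper's step-size rule.

```python
import numpy as np

def sirt(A, b, n_iter=200, lam0=1.0, decay=0.99):
    """Landweber/SIRT-style iteration x <- x + lam * A^T (b - A x),
    with a relaxation (step) parameter that starts at a stable value
    (scaled by the spectral norm of A) and decays geometrically --
    a simple stand-in for a variable iterative step size."""
    x = np.zeros(A.shape[1])
    lam = lam0 / np.linalg.norm(A, 2) ** 2     # keep the iteration stable
    for _ in range(n_iter):
        x += lam * (A.T @ (b - A @ x))         # gradient-descent-like update
        lam *= decay                           # shrink the step over time
    return x
```

On a small well-conditioned system the iterate converges to the exact solution; in ECT the same recursion is applied to the linearized sensitivity matrix.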

  15. A first generation dynamic ingress, redistribution and transport model of soil track-in: DIRT.

    PubMed

    Johnson, D L

    2008-12-01

    This work introduces a spatially resolved quantitative model, based on conservation of mass and first order transfer kinetics, for following the transport and redistribution of outdoor soil to, and within, the indoor environment by track-in on footwear. Implementations of the DIRT model examined the influence of room size, rug area and location, shoe size, and mass transfer coefficients for smooth and carpeted floor surfaces using the ratio of mass loading on carpeted to smooth floor surfaces as a performance metric. Results showed that in the limit for large numbers of random steps the dual aspects of deposition to and track-off from the carpets govern this ratio. Using recently obtained experimental measurements, historic transport and distribution parameters, cleaning efficiencies for the different floor surfaces, and indoor dust deposition rates to provide model boundary conditions, DIRT predicts realistic floor surface loadings. The spatio-temporal variability in model predictions agrees with field observations and suggests that floor surface dust loadings are constantly in flux; steady state distributions are hardly, if ever, achieved.

  16. Steps Toward Determination of the Size and Structure of the Broad-Line Region in Active Galactic Nuclei XVI: A 13 Year Study of Spectral Variability in NGC 5548

    NASA Technical Reports Server (NTRS)

    Peterson, B. M.; Berlind, P.; Bertram, R.; Bischoff, K.; Bochkarev, N. G.; Burenkov, A. N.; Calkins, M.; Carrasco, L.; Chavushyan, V. H.

    2002-01-01

    We present the final installment of an intensive 13 year study of variations of the optical continuum and broad H beta emission line in the Seyfert 1 galaxy NGC 5548. The database consists of 1530 optical continuum measurements and 1248 H beta measurements. The H beta variations follow the continuum variations closely, with a typical time delay of about 20 days. However, a year-by-year analysis shows that the magnitude of the emission-line time delay is correlated with the mean continuum flux. We argue that the data are consistent with the simple model prediction relating the size of the broad-line region to the ionizing luminosity, r ∝ L_ion^(1/2). Moreover, the apparently linear nature of the correlation between the H beta response time and the nonstellar optical continuum F_opt arises as a consequence of the changing shape of the continuum as it varies, specifically F_opt ∝ F_UV^0.56.

  17. Control performances of a piezoactuator direct drive valve system at high temperatures with thermal insulation

    NASA Astrophysics Data System (ADS)

    Han, Yung-Min; Han, Chulhee; Kim, Wan Ho; Seong, Ho Yong; Choi, Seung-Bok

    2016-09-01

    This technical note presents the control performance of a piezoactuator direct drive valve (PDDV) operated in a high-temperature environment. After briefly discussing the operating principle and mechanical dimensions of the proposed PDDV, a PDDV of appropriate size is manufactured. As a first step, the temperature effect on valve performance is experimentally investigated by measuring the spool displacement at various temperatures. Subsequently, the PDDV is thermally insulated using aerogel and installed in a large heat chamber in which the pneumatic-hydraulic cylinders and sensors are equipped. A proportional-integral-derivative feedback controller is then designed and implemented to control the spool displacement of the valve system. In this work, the spool displacement is chosen as the control variable since it is directly related to the flow rate of the valve system. Three sinusoidal displacements with frequencies of 1, 10 and 50 Hz are used as reference spool displacements, and tracking control is undertaken at temperatures up to 150 °C. It is shown that the proposed PDDV with thermal insulation can provide favorable control responses without significant tracking errors at high temperatures.

  18. [VARIABILITY AND DETERMINING FACTORS OF THE BODY SIZE STRUCTURE OF THE INFRAPOPULATION OF COSMOCERCA ORNATA (NEMATODA: COSMOCERCIDAE) IN MARSH FROGS].

    PubMed

    Kirillov, A A; Kirillova, N Yu

    2015-01-01

    Variability of the body size in females of Cosmocerca ornata (Dujardin, 1845), a parasite of marsh frogs, is studied. The influence of both biotic (age, sex and phenotype of the host, density of the parasite population) and abiotic (season of the year, water temperature) factors on the formation of the body size structure of the C. ornata hemipopulation (infrapopulation) is demonstrated. The body size structure of the C. ornata hemipopulation is characterized by a low level of individual variability, both within particular subpopulation groups of amphibians (sex, age and phenotype) and within the population of marsh frogs as a whole. The more distinct the differences in biology and ecology of these host subpopulations, the more pronounced the variability in the body size of C. ornata.

  19. Optimal Padding for the Two-Dimensional Fast Fourier Transform

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.

    2011-01-01

    One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is divisible by a power of two. Because of this, padding grids (that are not already sized to a power of two) so that their size is the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids. For a two-dimensional grid, there are certain pad sizes that work better than others. Therefore, the need exists to generalize a strategy for determining optimal pad sizes. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second step is to transpose the resulting matrix. The third step is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that strikes a balance between optimizing the grid pad size with prime factors that are small (which are optimal for one-dimensional operations) and with prime factors that are large (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times and is not fine-tuned for any specific application. It increases the frequency with which processor-requested data is found in the set-associative processor cache. Cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying optimal grid sizes. This is because various computer architectures process commands differently. The test grid was 512x512. Using a 540x540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256x256 grid worked best. A Core2Duo computer preferred either a 1040x1040 (15 percent faster) or a 1008x1008 (30 percent faster) grid. There are many industries that can benefit from this algorithm, including optics, image-processing, signal-processing, and engineering applications.
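    A simple way to act on this padding advice is to search for the next size whose prime factors are all small. The routine below is analogous in spirit to the "next fast length" helpers found in FFT libraries, but it is a generic 7-smooth search, not the article's cache-aware algorithm:

```python
def next_fast_len(n, factors=(2, 3, 5, 7)):
    """Return the smallest size >= n whose prime factorization uses only
    the given small primes -- a common proxy for FFT-friendly grid sizes.
    Linear search is fine here because smooth numbers are dense."""
    m = n
    while True:
        k = m
        for p in factors:          # strip out all allowed prime factors
            while k % p == 0:
                k //= p
        if k == 1:                 # nothing left -> m is smooth
            return m
        m += 1
```

Note that 540 = 2^2 * 3^3 * 5, so the 540x540 pad reported above is itself a smooth size rather than a power of two.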

  20. Micro-computed tomography characterization of tissue engineering scaffolds: effects of pixel size and rotation step.

    PubMed

    Cengiz, Ibrahim Fatih; Oliveira, Joaquim Miguel; Reis, Rui L

    2017-08-01

    Quantitative assessment of the micro-structure of materials is of key importance in many fields, including tissue engineering, biology, and dentistry. Micro-computed tomography (µ-CT) is an intensively used non-destructive technique. However, acquisition parameters such as pixel size and rotation step may have significant effects on the obtained results. In this study, a set of tissue engineering scaffolds, including examples of natural and synthetic polymers and ceramics, was analyzed. We comprehensively compared the quantitative results of µ-CT characterization using 15 acquisition scenarios that differ in the combination of pixel size and rotation step. The results showed that the acquisition parameters can have statistically significant effects on the quantified mean porosity, mean pore size, and mean wall thickness of the scaffolds. The effects are also practically important, since the differences can be as high as 24% in mean porosity on average, and as much as 19.5 h of characterization time and 166 GB of data storage per sample of relatively small volume. This study showed in a quantitative manner the effects of a wide range of acquisition scenarios on the final data, as well as on the characterization time and data storage per sample. Herein, a clear picture of the effects of pixel size and rotation step on the results is provided, which can be notably useful for refining the practice of µ-CT characterization of scaffolds and economizing the related resources.

  1. Dynamics of one-dimensional self-gravitating systems using Hermite-Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Barnes, Eric I.; Ragan, Robert J.

    2014-01-01

    The current paradigm for understanding galaxy formation in the Universe depends on the existence of self-gravitating collisionless dark matter. Modelling such dark matter systems has been a major focus of astrophysicists, with much of that effort directed at computational techniques. Not surprisingly, a comprehensive understanding of the evolution of these self-gravitating systems still eludes us, since it involves the collective non-linear dynamics of many particle systems interacting via long-range forces described by the Vlasov equation. As a step towards developing a clearer picture of collisionless self-gravitating relaxation, we analyse the linearized dynamics of isolated one-dimensional systems near thermal equilibrium by expanding their phase-space distribution functions f(x, v) in terms of Hermite functions in the velocity variable, and Legendre functions involving the position variable. This approach produces a picture of phase-space evolution in terms of expansion coefficients, rather than spatial and velocity variables. We obtain equations of motion for the expansion coefficients for both test-particle distributions and self-gravitating linear perturbations of thermal equilibrium. N-body simulations of perturbed equilibria are performed and found to be in excellent agreement with the expansion coefficient approach over a time duration that depends on the size of the expansion series used.

  2. National Stormwater Calculator: Low Impact Development ...

    EPA Pesticide Factsheets

    The National Stormwater Calculator (NSC) makes it easy to estimate runoff reduction when planning a new development or redevelopment site with low impact development (LID) stormwater controls. The Calculator is currently deployed as a Windows desktop application. It is organized as a wizard-style application that walks the user through the steps necessary to perform runoff calculations on a single urban sub-catchment of 10 acres or less in size. Using an interactive map, the user can select the sub-catchment location, and the Calculator automatically acquires hydrologic data for the site. A new LID cost estimation module has been developed for the Calculator. This project involved programming cost curves into the existing Calculator desktop application. The integration of cost components of LID controls into the Calculator increases functionality and will promote greater use of the Calculator as a stormwater management and evaluation tool. The addition of the cost estimation module allows planners and managers to evaluate LID controls by comparing project cost estimates and predicted LID control performance. Cost estimation is accomplished based on user-identified size (or auto-sizing based on achieving volume control or treatment of a defined design storm), configuration of the LID control infrastructure, and other key project and site-specific variables, including whether the project is being applied as part of new development or redevelopment.

  3. Minimum stiffness criteria for ring frame stiffeners of space launch vehicles

    NASA Astrophysics Data System (ADS)

    Friedrich, Linus; Schröder, Kai-Uwe

    2016-12-01

    Frame stringer-stiffened shell structures show high load carrying capacity in conjunction with low structural mass and are for this reason frequently used as primary structures of aerospace applications. Due to the great number of design variables, deriving suitable stiffening configurations is a demanding task and needs to be realized using efficient analysis methods. The structural design of ring frame stringer-stiffened shells can be subdivided into two steps. One, the design of a shell section between two ring frames. Two, the structural design of the ring frames such that a general instability mode is avoided. For sizing stringer-stiffened shell sections, several methods were recently developed, but existing ring frame sizing methods are mainly based on empirical relations or on smeared models. These methods do not mandatorily lead to reliable designs and in some cases the lightweight design potential of stiffened shell structures can thus not be exploited. In this paper, the explicit physical behaviour of ring frame stiffeners of space launch vehicles at the onset of panel instability is described using mechanical substitute models. Ring frame stiffeners of a stiffened shell structure are sized applying existing methods and the method suggested in this paper. To verify the suggested method and to demonstrate its potential, geometrically non-linear finite element analyses are performed using detailed finite element models.

  4. In situ formation deposited ZnO nanoparticles on silk fabrics under ultrasound irradiation.

    PubMed

    Khanjani, Somayeh; Morsali, Ali; Joo, Sang W

    2013-03-01

    Zinc(II) oxide (ZnO) nanoparticles were deposited on the surface of silk fabrics by sequential dipping in alternating baths of potassium hydroxide and zinc nitrate under ultrasound irradiation. This coating involves in situ generation and deposition of ZnO in a single step. The effects of ultrasound irradiation, concentration, and the number of sequential dipping steps on growth of the ZnO nanoparticles have been studied. Results show that particle size decreases with increasing ultrasound power, while higher concentrations and more dipping steps increase particle size. The physicochemical properties of the nanoparticles were determined by powder X-ray diffraction (XRD), scanning electron microscopy (SEM) and wavelength dispersive X-ray (WDX). Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Scalability and performance of data-parallel pressure-based multigrid methods for viscous flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blosch, E.L.; Shyy, W.

    1996-05-01

    A full-approximation storage multigrid method for solving the steady-state 2-d incompressible Navier-Stokes equations on staggered grids has been implemented in Fortran on the CM-5, using the array aliasing feature in CM-Fortran to avoid declaring fine-grid-sized arrays on all levels while still allowing a variable number of grid levels. Thus, the storage cost scales with the number of unknowns, allowing us to consider significantly larger problems than would otherwise be possible. Timings over a range of problem sizes and numbers of processors, up to 4096 x 4096 on 512 nodes, show that the smoothing procedure, a pressure-correction technique, is scalable and that the restriction and prolongation steps are nearly so. The performance obtained for the multigrid method is 333 Mflops out of the theoretical peak 4 Gflops on a 32-node CM-5. In comparison, a single-grid computation obtained 420 Mflops. The decrease is due to the inefficiency of the smoothing iterations on the coarse grid levels. W cycles cost much more and are much less efficient than V cycles, due to the increased contribution from the coarse grids. The convergence rate characteristics of the pressure-correction multigrid method are investigated in a Re = 5000 lid-driven cavity flow and a Re = 300 symmetric backward-facing step flow, using either a defect-correction scheme or a second-order upwind scheme. A heuristic technique relating the convergence tolerances for the coarse grids to the truncation error of the discretization has been found effective and robust. With second-order upwinding on all grid levels, a 5-level 320 x 80 step flow solution was obtained in 20 V cycles, which corresponds to a smoothing rate of 0.7, and required 25 s on a 32-node CM-5. Overall, the convergence rates obtained in the present work are comparable to the most competitive findings reported in the literature. 62 refs., 13 figs.

  6. Scalability and Performance of Data-Parallel Pressure-Based Multigrid Methods for Viscous Flows

    NASA Astrophysics Data System (ADS)

    Blosch, Edwin L.; Shyy, Wei

    1996-05-01

    A full-approximation storage multigrid method for solving the steady-state 2-d incompressible Navier-Stokes equations on staggered grids has been implemented in Fortran on the CM-5, using the array aliasing feature in CM-Fortran to avoid declaring fine-grid-sized arrays on all levels while still allowing a variable number of grid levels. Thus, the storage cost scales with the number of unknowns, allowing us to consider significantly larger problems than would otherwise be possible. Timings over a range of problem sizes and numbers of processors, up to 4096 × 4096 on 512 nodes, show that the smoothing procedure, a pressure-correction technique, is scalable and that the restriction and prolongation steps are nearly so. The performance obtained for the multigrid method is 333 Mflops out of the theoretical peak 4 Gflops on a 32-node CM-5. In comparison, a single-grid computation obtained 420 Mflops. The decrease is due to the inefficiency of the smoothing iterations on the coarse grid levels. W cycles cost much more and are much less efficient than V cycles, due to the increased contribution from the coarse grids. The convergence rate characteristics of the pressure-correction multigrid method are investigated in a Re = 5000 lid-driven cavity flow and a Re = 300 symmetric backward-facing step flow, using either a defect-correction scheme or a second-order upwind scheme. A heuristic technique relating the convergence tolerances for the coarse grids to the truncation error of the discretization has been found effective and robust. With second-order upwinding on all grid levels, a 5-level 320 × 80 step flow solution was obtained in 20 V cycles, which corresponds to a smoothing rate of 0.7, and required 25 s on a 32-node CM-5. Overall, the convergence rates obtained in the present work are comparable to the most competitive findings reported in the literature.
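    The abstract's observation that W cycles cost much more than V cycles follows from simple work counting. A minimal sketch (assuming the standard 2-D coarsening where each level has 1/4 the unknowns; the level-visit counts, not the paper's code):

```python
# Relative smoothing work of V and W cycles on a 2-D grid hierarchy:
# each coarser level holds 1/4 as many unknowns, a V cycle visits each
# level once, and a W cycle visits level l 2**l times.

def cycle_work(levels, visits):
    """Total smoothing work in units of one fine-grid sweep."""
    return sum(visits(l) * 0.25 ** l for l in range(levels))

v_work = cycle_work(5, lambda l: 1)       # bounded by 4/3 fine-grid sweeps
w_work = cycle_work(5, lambda l: 2 ** l)  # approaches 2 fine-grid sweeps
```

    The extra W-cycle work is concentrated on coarse grids, which is exactly where a data-parallel machine like the CM-5 runs least efficiently, compounding the penalty the abstract reports.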

  7. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
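    The core of the abstract's method is weighted binary matrix sampling: each row of a binary matrix is a sub-model, and each variable's inclusion probability is re-estimated from the best sub-models so the variable space shrinks. A minimal sketch of that idea (function names and the top-fraction rule are illustrative assumptions; the sub-model scoring, e.g. cross-validated calibration error, is left abstract):

```python
import random

def wbms(weights, n_models, rng=random.Random(0)):
    """Weighted binary matrix sampling: row = sub-model, column = variable.
    Variable j enters a sub-model with probability weights[j]."""
    return [[1 if rng.random() < w else 0 for w in weights]
            for _ in range(n_models)]

def update_weights(matrix, scores, top_frac=0.1):
    """Re-estimate each variable's weight as its inclusion frequency among
    the best-scoring sub-models (lower score is better, by assumption)."""
    k = max(1, int(len(scores) * top_frac))
    best = sorted(range(len(scores)), key=scores.__getitem__)[:k]
    p = len(matrix[0])
    return [sum(matrix[i][j] for i in best) / k for j in range(p)]
```

    Iterating sample-score-update drives each weight toward 0 or 1, which is how the new variable space both shrinks and (by construction) outperforms the previous one.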

  8. Flow resistance dynamics in step‐pool stream channels: 1. Large woody debris and controls on total resistance

    USGS Publications Warehouse

    Wilcox, Andrew C.; Wohl, Ellen E.

    2006-01-01

    Flow resistance dynamics in step‐pool channels were investigated through physical modeling using a laboratory flume. Variables contributing to flow resistance in step‐pool channels were manipulated in order to measure the effects of various large woody debris (LWD) configurations, steps, grains, discharge, and slope on total flow resistance. This entailed nearly 400 flume runs, organized into a series of factorial experiments. Factorial analyses of variance indicated significant two‐way and three‐way interaction effects between steps, grains, and LWD, illustrating the complexity of flow resistance in these channels. Interactions between steps and LWD resulted in substantially greater flow resistance for steps with LWD than for steps lacking LWD. LWD position contributed to these interactions, whereby LWD pieces located near the lip of steps, analogous to step‐forming debris in natural channels, increased the effective height of steps and created substantially higher flow resistance than pieces located farther upstream on step treads. Step geometry and LWD density and orientation also had highly significant effects on flow resistance. Flow resistance dynamics and the resistance effect of bed roughness configurations were strongly discharge‐dependent; discharge had both highly significant main effects on resistance and highly significant interactions with all other variables.

  9. An Experimental Study of Small-Scale Variability of Raindrop Size Distribution

    NASA Technical Reports Server (NTRS)

    Tokay, Ali; Bashor, Paul G.

    2010-01-01

    An experimental study of small-scale variability of raindrop size distributions (DSDs) has been carried out at Wallops Island, Virginia. Three Joss-Waldvogel disdrometers were operated at distances of 0.65, 1.05, and 1.70 km in a nearly straight line. The main purpose of the study was to examine the variability of DSDs and their integral parameters of liquid water content, rainfall, and reflectivity within a 2-km array: a typical size of a Cartesian radar pixel. The composite DSD of rain events showed very good agreement among the disdrometers except where there were noticeable differences in midsize and large drops in a few events. For consideration of partial beam filling where the radar pixel was not completely covered by rain, a single disdrometer reported just over 10% more rainy minutes than when all three disdrometers reported rainfall. Similarly, two out of three disdrometers reported 5% more rainy minutes than when all three were reporting rainfall. These percentages were based on a 1-min average, and were less for longer averaging periods. Considering only the minutes when all three disdrometers were reporting rainfall, just over one quarter of the observations showed an increase in the difference in rainfall with distance. This finding was based on a 15-min average, and the fraction was even less for shorter averaging periods. The probability and cumulative distributions of a gamma-fitted DSD and integral rain parameters between the three disdrometers had very good agreement and no major variability. This was mainly due to the high percentage of light stratiform rain and to the number of storms that traveled along the track of the disdrometers. At a fixed time step, however, both DSDs and integral rain parameters showed substantial variability. The standard deviation (SD) of rain rate was near 3 mm/h, while the SD of reflectivity exceeded 3 dBZ at the longest separation distance.
These standard deviations were based on a 6-min average and were higher for shorter averaging periods. The correlations decreased with increasing separation distance. For rain rate, the correlations were higher than in previous gauge-based studies. This was attributed to differences in data processing and to differences in rainfall characteristics in different climate regions. It was also considered that gauge sampling errors could be a factor. In this regard, gauge measurements were simulated employing the existing disdrometer dataset. While a difference was noticed in the cumulative distribution of rain occurrence between the simulated gauge and disdrometer observations, the correlations in simulated gauge measurements did not differ from the disdrometer measurements.

  10. Steps in the open space planning process

    Treesearch

    Stephanie B. Kelly; Melissa M. Ryan

    1995-01-01

    This paper presents the steps involved in developing an open space plan. The steps are generic in that the methods may be applied to communities of various sizes. The intent is to provide a framework to develop an open space plan that meets Massachusetts requirements for funding of open space acquisition.

  11. A GIS modeling method applied to predicting forest songbird habitat

    USGS Publications Warehouse

    Dettmers, Randy; Bart, Jonathan

    1999-01-01

    We have developed an approach for using "presence" data to construct habitat models. Presence data are those that indicate locations where the target organism is observed to occur, but that cannot be used to define locations where the organism does not occur. Surveys of highly mobile vertebrates often yield these kinds of data. Models developed through our approach yield predictions of the amount and the spatial distribution of good-quality habitat for the target species. This approach was developed primarily for use in a GIS context; thus, the models are spatially explicit and have the potential to be applied over large areas. Our method consists of two primary steps. In the first step, we identify an optimal range of values for each habitat variable to be used as a predictor in the model. To find these ranges, we employ the concept of maximizing the difference between cumulative distribution functions of (1) the values of a habitat variable at the observed presence locations of the target organism, and (2) the values of that habitat variable for all locations across a study area. In the second step, multivariate models of good habitat are constructed by combining these ranges of values, using the Boolean operators "and" and "or." We use an approach similar to forward stepwise regression to select the best overall model. We demonstrate the use of this method by developing species-specific habitat models for nine forest-breeding songbirds (e.g., Cerulean Warbler, Scarlet Tanager, Wood Thrush) studied in southern Ohio. These models are based on species' microhabitat preferences for moisture and vegetation characteristics that can be predicted primarily through the use of abiotic variables. We use slope, land surface morphology, land surface curvature, water flow accumulation downhill, and an integrated moisture index, in conjunction with a land-cover classification that identifies forest/nonforest, to develop these models.
The performance of these models was evaluated with an independent data set. Our tests showed that the models performed better than random at identifying where the birds occurred and provided useful information for predicting the amount and spatial distribution of good habitat for the birds we studied. In addition, we generally found positive correlations between the amount of habitat, as predicted by the models, and the number of territories within a given area. This added component provides the possibility, ultimately, of being able to estimate population sizes. Our models represent useful tools for resource managers who are interested in assessing the impacts of alternative management plans that could alter or remove habitat for these birds.
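    The first step above, finding the range of a habitat variable that maximizes the gap between the presence and study-area distributions, can be sketched by brute force over candidate cut points (an illustrative simplification of the CDF-difference idea, not the authors' code; the second step would then combine such per-variable ranges with Boolean "and"/"or"):

```python
def optimal_range(presence, background):
    """Find the interval [lo, hi] of one habitat variable that maximizes
    the fraction of presence sites inside it minus the fraction of all
    study-area sites inside it (a CDF-difference criterion)."""
    cuts = sorted(set(presence) | set(background))
    frac = lambda xs, lo, hi: sum(lo <= x <= hi for x in xs) / len(xs)
    return max(((lo, hi) for i, lo in enumerate(cuts) for hi in cuts[i:]),
               key=lambda r: frac(presence, *r) - frac(background, *r))
```

    For example, if presences cluster at values 4-6 while the study area spans 1-9, the selected range is exactly that cluster.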

  12. A simple, compact, and rigid piezoelectric step motor with large step size.

    PubMed

    Wang, Qi; Lu, Qingyou

    2009-08-01

    We present a novel piezoelectric stepper motor featuring high compactness, rigidity, simplicity, and operability in any direction. Although tested at room temperature, it is expected to work at low temperatures as well, owing to its loose operating conditions and large step size. The motor is implemented with a piezoelectric scanner tube that is axially cut almost into two halves and clamps a hollow shaft inside at both ends via the spring parts of the shaft. Two driving voltages that deform the two halves of the piezotube in one direction one at a time, then recover simultaneously, will move the shaft in the opposite direction, and vice versa.

  13. A simple, compact, and rigid piezoelectric step motor with large step size

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Lu, Qingyou

    2009-08-01

    We present a novel piezoelectric stepper motor featuring high compactness, rigidity, simplicity, and operability in any direction. Although tested at room temperature, it is expected to work at low temperatures as well, owing to its loose operating conditions and large step size. The motor is implemented with a piezoelectric scanner tube that is axially cut almost into two halves and clamps a hollow shaft inside at both ends via the spring parts of the shaft. Two driving voltages that deform the two halves of the piezotube in one direction one at a time, then recover simultaneously, will move the shaft in the opposite direction, and vice versa.

  14. Variability in group size and the evolution of collective action.

    PubMed

    Peña, Jorge; Nöldeke, Georg

    2016-01-21

    Models of the evolution of collective action typically assume that interactions occur in groups of identical size. In contrast, social interactions between animals occur in groups of widely dispersed size. This paper models collective action problems as two-strategy multiplayer games and studies the effect of variability in group size on the evolution of cooperative behavior under the replicator dynamics. The analysis identifies elementary conditions on the payoff structure of the game implying that the evolution of cooperative behavior is promoted or inhibited when the group size experienced by a focal player is more or less variable. Similar but more stringent conditions are applicable when the confounding effect of size-biased sampling, which causes the group-size distribution experienced by a focal player to differ from the statistical distribution of group sizes, is taken into account. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Variable selection with stepwise and best subset approaches

    PubMed Central

    2016-01-01

    While purposeful selection is performed partly by software and partly by hand, the stepwise and best subset approaches are automatically performed by software. Two R functions, stepAIC() and bestglm(), are well designed for stepwise and best subset regression, respectively. The stepAIC() function begins with a full or null model, and methods for stepwise regression can be specified in the direction argument with character values "forward", "backward" and "both". The bestglm() function begins with a data frame containing explanatory variables and response variables; the response variable should be in the last column. A variety of goodness-of-fit criteria can be specified in the IC argument. The Bayesian information criterion (BIC) usually results in a more parsimonious model than the Akaike information criterion. PMID:27162786
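    The BIC-versus-AIC remark comes down to the penalty terms: AIC charges 2 per parameter while BIC charges ln(n), so for n > e² ≈ 7.4 BIC penalizes extra parameters harder. A minimal numeric sketch (the log-likelihoods are hypothetical fits, not data from the article):

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2 * loglik

# Hypothetical fits on n = 100 observations: a 3-parameter model with
# ln L = -50 versus a 6-parameter model with ln L = -46.
small_aic, big_aic = aic(-50, 3), aic(-46, 6)
small_bic, big_bic = bic(-50, 3, 100), bic(-46, 6, 100)
```

    Here AIC prefers the larger model (104 vs. 106) while BIC prefers the smaller one (≈113.8 vs. ≈119.6), illustrating why BIC tends to select the more parsimonious model.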

  16. Crop status evaluations and yield predictions

    NASA Technical Reports Server (NTRS)

    Haun, J. R.

    1975-01-01

    A model was developed for predicting the day 50 percent of the wheat crop is planted in North Dakota. This model incorporates location as an independent variable. The Julian date when 50 percent of the crop was planted for the nine divisions of North Dakota for seven years was regressed on the 49 variables through the step-down multiple regression procedure. This procedure begins with all of the independent variables and sequentially removes variables that are below a predetermined level of significance after each step. The prediction equation was tested on daily data. The accuracy of the model is considered satisfactory for finding the historic dates on which to initiate yield prediction model. Growth prediction models were also developed for spring wheat.

  17. Representation of solution for fully nonlocal diffusion equations with deviation time variable

    NASA Astrophysics Data System (ADS)

    Drin, I. I.; Drin, S. S.; Drin, Ya. M.

    2018-01-01

    We prove the solvability of the Cauchy problem for a nonlocal heat equation which is of fractional order both in space and time. The representation formula for classical solutions for the time- and space-fractional partial differential operator D_t^α + a²(−Δ)^{γ/2} (0 ≤ α ≤ 1, γ ∈ (0, 2]) with a deviating time variable is given in terms of the Fox H-function, using the step-by-step method.

  18. Optimal variable-grid finite-difference modeling for porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-12-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with big grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of FD schemes were derived based on the plane wave theory, then the FD coefficients were obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
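    The sizing logic the abstract describes, small grid-spacing and time-step for low-velocity regions and large ones elsewhere, can be sketched by keeping the points per minimum wavelength fixed and bounding the time step with a CFL-type condition (an illustrative 1-D form with assumed parameter names, not the paper's optimized staggered-grid scheme):

```python
def local_grid(v, f_max, ppw=8, cfl=0.5):
    """Pick a grid spacing per region from its velocity so the number of
    points per minimum wavelength (ppw) stays fixed, then a single stable
    time step from a CFL-type bound on the finest spacing."""
    dx = {name: vel / (ppw * f_max) for name, vel in v.items()}
    dt = cfl * min(dx.values()) / max(v.values())
    return dx, dt

# A slow shallow layer gets a grid 3x finer than a fast deep layer.
dx, dt = local_grid({"shallow": 1500.0, "deep": 4500.0}, f_max=30.0)
```

    The variable-grid FD methods in the paper go further by letting the time step also vary between regions, which is where the efficiency gain over a uniform fine grid comes from.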

  19. Optimization of ecosystem model parameters with different temporal variabilities using tower flux data and an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    He, L.; Chen, J. M.; Liu, J.; Mo, G.; Zhen, T.; Chen, B.; Wang, R.; Arain, M.

    2013-12-01

    Terrestrial ecosystem models have been widely used to simulate carbon, water and energy fluxes and climate-ecosystem interactions. In these models, some vegetation and soil parameters are determined based on limited studies from the literature without consideration of their seasonal variations. Data assimilation (DA) provides an effective way to optimize these parameters at different time scales. In this study, an ensemble Kalman filter (EnKF) is developed and applied to optimize two key parameters of an ecosystem model, namely the Boreal Ecosystem Productivity Simulator (BEPS): (1) the maximum photosynthetic carboxylation rate (Vcmax) at 25 °C, and (2) the soil water stress factor (fw) for stomatal conductance formulation. These parameters are optimized through assimilating observations of gross primary productivity (GPP) and latent heat (LE) fluxes measured in a 74 year-old pine forest, which is part of the Turkey Point Flux Station's age-sequence sites. Vcmax is related to leaf nitrogen concentration and varies slowly over the season and from year to year. In contrast, fw varies rapidly in response to soil moisture dynamics in the root-zone. Earlier studies suggested that DA of vegetation parameters at daily time steps leads to Vcmax values that are unrealistic. To overcome the problem, we developed a three-step scheme to optimize Vcmax and fw. First, the EnKF is applied daily to obtain precursor estimates of Vcmax and fw. Then Vcmax is optimized at different time scales, assuming fw is unchanged from the first step. The best temporal period or window size is then determined by analyzing the magnitude of the minimized cost-function, and the coefficient of determination (R2) and root-mean-square deviation (RMSE) of GPP and LE between simulation and observation. Finally, the daily fw value is optimized for rain-free days corresponding to the Vcmax curve from the best window size. The optimized fw is then used to model its relationship with soil moisture.
We found that the optimized fw is best correlated linearly to soil water content at 5 to 10 cm depth. We also found that both the temporal scale or window size and the a priori uncertainty of Vcmax (given as its standard deviation) are important in determining the seasonal trajectory of Vcmax. During the leaf expansion stage, an appropriate window size leads to a reasonable estimate of Vcmax. In the summer, the fluctuation of optimized Vcmax is mainly caused by the uncertainties in Vcmax, not the window size. Our study suggests that a smooth Vcmax curve optimized from an optimal time window size is close to reality even though the RMSE of GPP at this window is not the minimum. It also suggests that for the accurate optimization of Vcmax, it is necessary to set appropriate levels of uncertainty of Vcmax in the spring and summer because the rate of leaf nitrogen concentration change differs over the season. Parameter optimizations for more sites and multiple years are in progress.
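    The EnKF analysis step at the heart of the scheme can be reduced to a scalar sketch: for a directly observed parameter, the Kalman gain is the ratio of the ensemble variance to the total (ensemble plus observation) variance, and each member is nudged toward a perturbed observation. This is a generic perturbed-observation EnKF update for illustration, far simpler than the BEPS setup in the abstract:

```python
import random
import statistics as st

def enkf_update(ensemble, y_obs, obs_var, rng=random.Random(1)):
    """One EnKF analysis step for a scalar parameter observed directly
    (observation operator = identity), with perturbed observations."""
    var_x = st.variance(ensemble)          # sample variance of the ensemble
    gain = var_x / (var_x + obs_var)       # scalar Kalman gain
    return [x + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]
```

    With a near-perfect observation the gain approaches 1 and the ensemble collapses onto the observed value; with a very noisy observation the gain is small and the prior ensemble barely moves, which is why the priori uncertainty assigned to Vcmax matters so much in the study.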

  20. AN ATTEMPT TO FIND AN A PRIORI MEASURE OF STEP SIZE. COMPARATIVE STUDIES OF PRINCIPLES FOR PROGRAMMING MATHEMATICS IN AUTOMATED INSTRUCTION, TECHNICAL REPORT NO. 13.

    ERIC Educational Resources Information Center

    ROSEN, ELLEN F.; STOLUROW, LAWRENCE M.

    IN ORDER TO FIND A GOOD PREDICTOR OF EMPIRICAL DIFFICULTY, AN OPERATIONAL DEFINITION OF STEP SIZE, TEN PROGRAMER-JUDGES RATED CHANGE IN COMPLEXITY IN TWO VERSIONS OF A MATHEMATICS PROGRAM, AND THESE RATINGS WERE THEN COMPARED WITH MEASURES OF EMPIRICAL DIFFICULTY OBTAINED FROM STUDENT RESPONSE DATA. THE TWO VERSIONS, A 54 FRAME BOOKLET AND A 35…

  1. Predict the fatigue life of crack based on extended finite element method and SVR

    NASA Astrophysics Data System (ADS)

    Song, Weizhen; Jiang, Zhansi; Jiang, Hui

    2018-05-01

    The extended finite element method (XFEM) and support vector regression (SVR) are used to predict the fatigue life of a plate crack. First, the XFEM is employed to calculate the stress intensity factors (SIFs) for given crack sizes. A prediction model is then built from the functional relationship of the SIFs with fatigue life or crack length. Finally, the prediction model is used to predict the SIFs at different crack sizes or different numbers of cycles. Because the accuracy of the forward Euler method is ensured only by a small step size, a new prediction method is presented to resolve the issue. Numerical examples demonstrate that the proposed method allows a larger step size while retaining high accuracy.
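    The step-size sensitivity of forward Euler that motivates the paper is easy to reproduce on the standard Paris crack-growth law, da/dN = C(ΔK)^m with ΔK = YΔσ√(πa). A sketch with purely illustrative material constants (this shows only the Euler issue, not the authors' SVR remedy):

```python
import math

def euler_crack_growth(a0, cycles, dN, C=1e-10, m=3.0, dsig=100.0, Y=1.0):
    """Forward-Euler integration of the Paris law da/dN = C*(dK)**m,
    with dK = Y*dsig*sqrt(pi*a). Constants are illustrative only."""
    a, n = a0, 0
    while n < cycles:
        dK = Y * dsig * math.sqrt(math.pi * a)
        a += C * dK ** m * dN          # one Euler step over dN cycles
        n += dN
    return a

coarse = euler_crack_growth(0.001, 100_000, dN=50_000)  # 2 Euler steps
fine = euler_crack_growth(0.001, 100_000, dN=100)       # 1000 Euler steps
```

    Because the growth rate increases with crack length, coarse Euler steps lag badly behind the fine-step solution; a surrogate model of the SIF-life relationship, as proposed here, is one way to take large steps without that error.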

  2. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
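    The abstract's point, that the usable step size limits how fast an MMSE equalizer converges, is the classic trade-off in LMS-family adaptation. A minimal single-channel sketch using the normalized-LMS rule, where the effective step size mu/||x||² adapts to the input power (a generic illustration, not the paper's 6×6 MIMO TDE/FDE):

```python
def nlms_equalize(received, desired, taps=3, mu=0.5, eps=1e-9):
    """Normalized-LMS linear equalizer: the effective step size
    mu / ||x||**2 shrinks when the input is strong, keeping adaptation
    stable while converging quickly on weak input."""
    w = [0.0] * taps
    errs = []
    for n in range(taps - 1, len(received)):
        x = received[n - taps + 1:n + 1][::-1]   # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = desired[n] - y                       # training error
        norm = sum(xi * xi for xi in x) + eps
        w = [wi + (mu / norm) * e * xi for wi, xi in zip(w, x)]
        errs.append(e)
    return w, errs

pattern = [1.0, -1.0, 0.5, 1.0, -0.5]
received = pattern * 10
desired = [2.0 * r for r in received]   # unknown "channel": a pure gain of 2
w, errs = nlms_equalize(received, desired)
```

    Raising mu (up to the stability limit of 2 for NLMS) shortens convergence, which parallels the paper's finding that the FDE's smaller maximum stable step size yields a smaller convergence-time reduction than the TDE's.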

  3. Finite Memory Walk and Its Application to Small-World Network

    NASA Astrophysics Data System (ADS)

    Oshima, Hiraku; Odagaki, Takashi

    2012-07-01

    In order to investigate the effects of cycles on the dynamical process on both regular lattices and complex networks, we introduce a finite memory walk (FMW) as an extension of the simple random walk (SRW), in which a walker is prohibited from moving to sites visited during the m steps just before the current position. This walk interpolates between the simple random walk (SRW), which has no memory (m = 0), and the self-avoiding walk (SAW), which has an infinite memory (m = ∞). We investigate the FMW on regular lattices and clarify the fundamental characteristics of the walk. We find that (1) the mean-square displacement (MSD) of the FMW shows a crossover from the SAW at a short time step to the SRW at a long time step, and the crossover time is approximately equivalent to the number of steps remembered, and that the MSD can be rescaled in terms of the time step and the size of memory; (2) the mean first-return time (MFRT) of the FMW changes significantly at the number of remembered steps that corresponds to the size of the smallest cycle in the regular lattice, where "smallest" indicates that the size of the cycle is the smallest in the network; (3) the relaxation time of the first-return time distribution (FRTD) decreases as the number of cycles increases. We also investigate the FMW on the Watts-Strogatz networks that can generate small-world networks, and show that the clustering coefficient of the Watts-Strogatz network is strongly related to the MFRT of the FMW that can remember two steps.
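    The walk itself is simple to state: at each step, exclude the last m visited sites from the candidate moves. A minimal sketch on a periodic square lattice (the lattice size and trapping rule are illustrative assumptions; for finite m the walk can trap itself only when m is large enough to wall it in):

```python
import random

def fmw_steps(m, n_steps, size=20, rng=random.Random(0)):
    """Finite memory walk on a size x size square lattice with periodic
    boundaries: the walker never moves onto any of its last m sites.
    m=0 reproduces the SRW; large m approaches the SAW."""
    pos = (0, 0)
    recent = [pos]                      # current site plus last m sites
    path = [pos]
    for _ in range(n_steps):
        moves = [((pos[0] + dx) % size, (pos[1] + dy) % size)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        allowed = [p for p in moves if p not in recent]
        if not allowed:                 # trapped: walk terminates early
            break
        pos = rng.choice(allowed)
        path.append(pos)
        recent = (recent + [pos])[-(m + 1):]
    return path
```

    With m = 1 the walk is exactly the non-backtracking walk (path[i+2] can never equal path[i]), which is the smallest memory at which the abstract's SAW-to-SRW crossover begins to appear.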

  4. Corpus Callosum Size, Reaction Time Speed and Variability in Mild Cognitive Disorders and in a Normative Sample

    ERIC Educational Resources Information Center

    Anstey, Kaarin J.; Mack, Holly A.; Christensen, Helen; Li, Shu-Chen; Reglade-Meslin, Chantal; Maller, Jerome; Kumar, Rajeev; Dear, Keith; Easteal, Simon; Sachdev, Perminder

    2007-01-01

    Intra-individual variability in reaction time increases with age and with neurological disorders, but the neural correlates of this increased variability remain uncertain. We hypothesized that both faster mean reaction time (RT) and less intra-individual RT variability would be associated with larger corpus callosum (CC) size in older adults, and…

  5. Toward a functional definition of a "rare disease" for regulatory authorities and funding agencies.

    PubMed

    Clarke, Joe T R; Coyle, Doug; Evans, Gerald; Martin, Janet; Winquist, Eric

    2014-12-01

    The designation of a disease as "rare" is associated with some substantial benefits for companies involved in new drug development, including expedited review by regulatory authorities and relaxed criteria for reimbursement. How "rare disease" is defined therefore has major financial implications, both for pharmaceutical companies and for insurers or public drug reimbursement programs. All existing definitions are based, somewhat arbitrarily, on disease incidence or prevalence. What is proposed here is a functional definition of rare based on an assessment of the feasibility of measuring the efficacy of a new treatment in conventional randomized controlled trials, to inform regulatory authorities and funding agencies charged with assessing new therapies being considered for public funding. It is a five-step process, requiring significant negotiation among patient advocacy groups, pharmaceutical companies, physicians, and public drug reimbursement programs, designed to establish the feasibility of carrying out a randomized controlled trial with sufficient statistical power to show a clinically significant treatment effect. The steps are as follows: 1) identification of a specific disease, including appropriate genetic definition; 2) identification of clinically relevant outcomes to evaluate efficacy; 3) establishment of the inherent variability of measurements of clinically relevant outcomes; 4) calculation of the sample size required to assess the efficacy of a new treatment with acceptable statistical power; and 5) estimation of the difficulty of recruiting an adequate sample size given the estimated prevalence or incidence of the disorder in the population and the inclusion criteria to be used. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
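    Step 4 of the proposed process is a standard power calculation. A minimal sketch for a two-arm trial comparing means under the normal approximation (illustrative of the calculation only; real trials also adjust for dropout, multiplicity, and non-normal outcomes):

```python
import math
import statistics

def n_per_arm(delta, sd, alpha=0.05, power=0.8):
    """Per-arm sample size for a two-arm trial comparing means:
    n = 2 * (z_{1-alpha/2} + z_{power})**2 * (sd / delta)**2."""
    z = statistics.NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd / delta) ** 2
    return math.ceil(n)

# A medium effect (delta = 0.5 sd) needs roughly 63 patients per arm;
# halving the detectable effect quadruples the requirement.
medium = n_per_arm(0.5, 1.0)
small = n_per_arm(0.25, 1.0)
```

    This is exactly how steps 3-5 interact: larger outcome variability (step 3) inflates sd/delta, the required n (step 4) grows quadratically, and recruitment feasibility (step 5) collapses for a truly rare disease.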

  6. [Meta analysis of variables related to attention deficit hyperactivity disorder in school-age children].

    PubMed

    Park, Wan Ju; Seo, Ji Yeong; Kim, Mi Ye

    2011-04-01

    The purpose of this study was to use meta-analysis to examine recent domestic articles related to attention deficit hyperactivity disorder (ADHD) in school-age children. After reviewing 213 articles published between 1990 and 2009 and indexed in RISS, KISS, and DBpia, the researchers identified 24 studies with 440 research variables that had appropriate data for methodological study. The SPSS 17.0 program was used. The outcome variables were divided into five types: inattention, hyperactive-impulsive, intrinsic, extrinsic, and academic ability variables. The effect size of the overall core symptoms was 0.47, a moderate level by Cohen's criteria, and the effect size of the overall negative variables related to ADHD was 0.27, a small level. The most dominant variable related to ADHD was hyperactive-impulsive (0.70). Academic ability (0.45), inattention (0.37), and intrinsic variables (0.29) had a small effect, whereas extrinsic variables (0.13) had little effect in descriptive ADHD studies. The results reveal that ADHD core symptoms have a moderate effect size and peripheral negative variables related to ADHD have a small effect size. To improve the reliability of the meta-analysis results by minimizing publication bias, more intervention studies using appropriate study designs should be done.

  7. Impact of some field factors on inhalation exposure levels to bitumen emissions during road paving operations.

    PubMed

    Deygout, François; Auburtin, Guy

    2015-03-01

    Variability in occupational exposure levels to bitumen emissions has been observed during road paving operations. This is due to recurrent field factors impacting the level of exposure experienced by workers during paving. The present study was undertaken in order to quantify the impact of such factors. Pre-identified variables commonly encountered in the field were monitored and recorded during paving surveys, which were conducted randomly to cover typical applications performed by road crews. Multivariate variance analysis and regressions were then used on the computerized field data. The statistical investigations were limited by the relatively small size of the study (36 data points). Nevertheless, the particular use of the step-wise regression tool enabled the quantification of the impact of several predictors despite the existing collinearity between variables. The two bitumen organic fractions (particulates and volatiles) are associated with different field factors. The process conditions (machinery used and delivery temperature) have a significant impact on the production of airborne particulates and explain up to 44% of the variability. This confirms the outcomes described by previous studies. The influence of the production factors is limited, though, and should be complemented by studying worker-related factors such as work style and the mix of tasks. The residual volatile compounds, being part of the bituminous binder and released during paving operations, control the volatile emissions; 73% of the encountered field variability is explained by the composition of the bitumen batch. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
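    The step-wise regression used here can be sketched as greedy forward selection; this toy numpy version (with synthetic data standing in for the 36 field surveys) adds whichever predictor most improves R² until the gain falls below a threshold:

```python
import numpy as np

def r_squared(X, y):
    """R-squared of an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1 - ss_res / ss_tot

def forward_stepwise(X, y, min_gain=0.01):
    """Greedy forward selection: repeatedly add the predictor that
    most improves R-squared, stopping when no candidate adds at
    least `min_gain`."""
    selected, best_r2 = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        r2, j = max((r_squared(X[:, selected + [j]], y), j)
                    for j in remaining)
        if r2 - best_r2 < min_gain:
            break
        selected.append(j)
        remaining.remove(j)
        best_r2 = r2
    return selected, best_r2
```

    Forward selection tolerates moderate collinearity better than fitting all predictors at once, which is consistent with the abstract's remark that step-wise regression allowed quantification despite collinear variables.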

  8. Temperature and size variabilities of the Western Pacific Warm Pool

    NASA Technical Reports Server (NTRS)

    Yan, Xiao-Hai; Ho, Chung-Ru; Zheng, Quanan; Klemas, Vic

    1992-01-01

    Variabilities in sea-surface temperature and size of the Western Pacific Warm Pool were tracked with 10 years of satellite multichannel sea-surface temperature observations from 1982 to 1991. The results show that both annual mean sea-surface temperature and the size of the warm pool increased from 1983 to 1987 and fluctuated after 1987. Possible causes of these variations include solar irradiance variabilities, El Nino-Southern Oscillation events, volcanic activities, and global warming.

  9. Canonical correlation analysis of infant's size at birth and maternal factors: a study in rural northwest Bangladesh.

    PubMed

    Kabir, Alamgir; Merrill, Rebecca D; Shamim, Abu Ahmed; Klemn, Rolf D W; Labrique, Alain B; Christian, Parul; West, Keith P; Nasser, Mohammed

    2014-01-01

    This analysis was conducted to explore the association between 5 birth size measurements (weight, length and head, chest and mid-upper arm [MUAC] circumferences) as dependent variables and 10 maternal factors as independent variables using canonical correlation analysis (CCA). CCA simultaneously considers sets of dependent and independent variables and thus generates a substantially reduced type I error. Data were from women delivering a singleton live birth (n = 14,506) while participating in a double-masked, cluster-randomized, placebo-controlled maternal vitamin A or β-carotene supplementation trial in rural Bangladesh. The first canonical correlation was 0.42 (P<0.001), demonstrating a moderate positive correlation mainly between the 5 birth size measurements and 5 maternal factors (preterm delivery, early pregnancy MUAC, infant sex, age and parity). A significant interaction between infant sex and preterm delivery on birth size was also revealed from the score plot. Thirteen percent of birth size variability was explained by the composite score of the maternal factors (Redundancy, RY/X = 0.131). Given an ability to accommodate numerous relationships and reduce the complexities of multiple comparisons, CCA identified the 5 maternal variables able to predict birth size in this rural Bangladesh setting. CCA may offer an efficient, practical and inclusive approach to assessing the association between two sets of variables, addressing the innate complexity of interactions.
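    As a sketch of the underlying computation (not the authors' implementation), the first canonical correlation is the largest singular value of the whitened cross-covariance between the two variable sets:

```python
import numpy as np

def first_canonical_correlation(X, Y):
    """First canonical correlation between variable sets X and Y,
    from the SVD of the whitened cross-covariance matrix."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = len(X)
    Sxx = Xc.T @ Xc / (n - 1)
    Syy = Yc.T @ Yc / (n - 1)
    Sxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(S):
        # Symmetric inverse matrix square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return float(np.linalg.svd(M, compute_uv=False)[0])
```

    A value near 1 indicates that some linear combination of one set is almost perfectly predicted by a linear combination of the other; the 0.42 reported above is a moderate association.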

  10. Item usage in a multidimensional computerized adaptive test (MCAT) measuring health-related quality of life.

    PubMed

    Paap, Muirne C S; Kroeze, Karel A; Terwee, Caroline B; van der Palen, Job; Veldkamp, Bernard P

    2017-11-01

    Examining item usage is an important step in evaluating the performance of a computerized adaptive test (CAT). We study item usage for a newly developed multidimensional CAT which draws items from three PROMIS domains, as well as a disease-specific one. The multidimensional item bank used in the current study contained 194 items from four domains: the PROMIS domains fatigue, physical function, and ability to participate in social roles and activities, and a disease-specific domain (the COPD-SIB). The item bank was calibrated using the multidimensional graded response model and data of 795 patients with chronic obstructive pulmonary disease. To evaluate the item usage rates of all individual items in our item bank, CAT simulations were performed on responses generated based on a multivariate uniform distribution. The outcome variables included active bank size and item overuse (usage rate larger than the expected item usage rate). For average θ-values, the overall active bank size was 9-10%; this number quickly increased as θ-values became more extreme. For θ-values of -2 and +2, the overall active bank size equaled 39-40%. There was 78% overlap between overused items and active bank size for average θ-values. For more extreme θ-values, the overused items made up a much smaller part of the active bank size: here the overlap was only 35%. Our results strengthen the claim that relatively short item banks may suffice when using polytomous items (and no content constraints/exposure control mechanisms), especially when using MCAT.

  11. MSFC Stream Model Preliminary Results: Modeling Recent Leonid and Perseid Encounters

    NASA Technical Reports Server (NTRS)

    Cooke, William J.; Moser, Danielle E.

    2004-01-01

    The cometary meteoroid ejection model of Jones and Brown (1996b) was used to simulate ejection from comets 55P/Tempel-Tuttle during the last 12 revolutions, and the last 9 apparitions of 109P/Swift-Tuttle. Using cometary ephemerides generated by the Jet Propulsion Laboratory's (JPL) HORIZONS Solar System Data and Ephemeris Computation Service, two independent ejection schemes were simulated. In the first case, ejection was simulated in 1 hour time steps along the comet's orbit while it was within 2.5 AU of the Sun. In the second case, ejection was simulated to occur at the hour the comet reached perihelion. A 4th order variable step-size Runge-Kutta integrator was then used to integrate meteoroid position and velocity forward in time, accounting for the effects of radiation pressure, Poynting-Robertson drag, and the gravitational forces of the planets, which were computed using JPL's DE406 planetary ephemerides. An impact parameter was computed for each particle approaching the Earth to create a flux profile, and the results compared to observations of the 1998 and 1999 Leonid showers, and the 1993 and 2004 Perseids.
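    The integrator described above is a library-grade variable step-size Runge-Kutta scheme; as a minimal illustration of the step-size control principle (not the MSFC code), an embedded Euler/Heun pair on a scalar toy problem:

```python
import math

def integrate_adaptive(f, t, y, t_end, tol=1e-6):
    """Integrate y' = f(t, y) from t to t_end with an embedded
    Euler (order 1) / Heun (order 2) pair. The gap between the two
    estimates approximates the local error and drives the step size,
    the same principle used by variable step-size Runge-Kutta codes."""
    h = (t_end - t) / 100.0
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                  # Euler estimate
        y_high = y + 0.5 * h * (k1 + k2)    # Heun estimate
        err = abs(y_high - y_low)           # local error proxy
        if err <= tol or h <= 1e-12:
            t, y = t + h, y_high            # accept the step
        # Grow or shrink h; 0.9 is a conventional safety factor
        h *= min(5.0, 0.9 * math.sqrt(tol / max(err, 1e-16)))
    return y
```

    Integrating y' = -y from y(0) = 1 to t = 1 returns a value close to exp(-1), with the step size adapting automatically along the way.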

  12. MSFC Stream Model Preliminary Results: Modeling Recent Leonid and Perseid Encounters

    NASA Astrophysics Data System (ADS)

    Moser, Danielle E.; Cooke, William J.

    2004-12-01

    The cometary meteoroid ejection model of Jones and Brown [ Physics, Chemistry, and Dynamics of Interplanetary Dust, ASP Conference Series 104 (1996b) 137] was used to simulate ejection from comets 55P/Tempel-Tuttle during the last 12 revolutions, and the last 9 apparitions of 109P/Swift-Tuttle. Using cometary ephemerides generated by the Jet Propulsion Laboratory’s (JPL) HORIZONS Solar System Data and Ephemeris Computation Service, two independent ejection schemes were simulated. In the first case, ejection was simulated in 1 h time steps along the comet’s orbit while it was within 2.5 AU of the Sun. In the second case, ejection was simulated to occur at the hour the comet reached perihelion. A 4th order variable step-size Runge-Kutta integrator was then used to integrate meteoroid position and velocity forward in time, accounting for the effects of radiation pressure, Poynting-Robertson drag, and the gravitational forces of the planets, which were computed using JPL’s DE406 planetary ephemerides. An impact parameter (IP) was computed for each particle approaching the Earth to create a flux profile, and the results compared to observations of the 1998 and 1999 Leonid showers, and the 1993 and 2004 Perseids.

  13. Gravity waves generated by a tropical cyclone during the STEP tropical field program - A case study

    NASA Technical Reports Server (NTRS)

    Pfister, L.; Chan, K. R.; Bui, T. P.; Bowen, S.; Legg, M.; Gary, B.; Kelly, K.; Proffitt, M.; Starr, W.

    1993-01-01

    Overflights of a tropical cyclone during the Australian winter monsoon field experiment of the Stratosphere-Troposphere Exchange Project (STEP) show the presence of two mesoscale phenomena: a vertically propagating gravity wave with a horizontal wavelength of about 110 km and a feature with a horizontal scale comparable to that of the cyclone's entire cloud shield. The larger feature is fairly steady, though its physical interpretation is ambiguous. The 110-km gravity wave is transient, having maximum amplitude early in the flight and decreasing in amplitude thereafter. Its scale is comparable to that of 100- to 150-km-diameter cells of low satellite brightness temperatures within the overall cyclone cloud shield; these cells have lifetimes of 4.5 to 6 hrs and correspond to regions of enhanced convection, higher cloud altitude, and upwardly displaced potential temperature surfaces. The temporal and spatial distribution of meteorological variables associated with the 110-km gravity wave can be simulated by a slowly moving transient forcing at the anvil top having an amplitude of 400-600 m, a lifetime of 4.5-6 hrs, and a size comparable to the cells of low brightness temperature.

  14. Single Droplet Combustion of Decane in Microgravity: Experiments and Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Dietrich, D. L.; Struk, P. M.; Ikegam, M.; Xu, G.

    2004-01-01

    This paper presents experimental data on single droplet combustion of decane in microgravity and compares the results to a numerical model. The primary independent experimental variables are the ambient pressure, ambient oxygen mole fraction, droplet size (over a relatively small range) and ignition energy. The droplet history (D² history) is non-linear, with the burning rate constant increasing throughout the test. The average burning rate constant, consistent with classical theory, increased with increasing ambient oxygen mole fraction and was nearly independent of pressure, initial droplet size and ignition energy. The flame typically increased in size initially, and then decreased in size, in response to the shrinking droplet. The flame standoff increased linearly for the majority of the droplet lifetime. The flame surrounding the droplet extinguished at a finite droplet size at lower ambient pressures and an oxygen mole fraction of 0.15. The extinction droplet size increased with decreasing pressure. The model is transient and assumes spherical symmetry, constant thermo-physical properties (specific heat, thermal conductivity and species Lewis number) and single step chemistry. The model includes gas-phase radiative loss and a spherically symmetric, transient liquid phase. The model accurately predicts the droplet and flame histories of the experiments. Good agreement requires that the ignition in the experiment be reasonably approximated in the model and that the model accurately predict the pre-ignition vaporization of the droplet. The model does not accurately predict the dependence of extinction droplet diameter on pressure, a result of the simplified chemistry in the model. The transient flame behavior suggests the potential importance of fuel vapor accumulation. 
The model results, however, show that the fractional mass consumption rate of fuel in the flame relative to fuel vaporized is close to 1.0 for all but the lowest ambient oxygen mole fractions.
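    The classical theory referred to above is the quasi-steady d²-law, under which the squared droplet diameter decreases linearly in time:

```latex
D^2(t) = D_0^2 - K t, \qquad
K = \frac{8 \lambda_g}{\rho_l \, c_{p,g}} \ln\!\left(1 + B\right)
```

    where K is the burning rate constant, \lambda_g and c_{p,g} are the gas-phase thermal conductivity and specific heat, \rho_l is the liquid density, and B is the Spalding transfer number. The non-linear D² history reported here is a departure from this idealization.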

  15. Investigating Reliabilities of Intraindividual Variability Indicators

    ERIC Educational Resources Information Center

    Wang, Lijuan; Grimm, Kevin J.

    2012-01-01

    Reliabilities of the two most widely used intraindividual variability indicators, "ISD²" and "ISD", are derived analytically. Both are functions of the sizes of the first and second moments of true intraindividual variability, the size of the measurement error variance, and the number of assessments within a burst. For comparison,…

  16. Variability of total step activity in children with cerebral palsy: influence of definition of a day on participant retention within the study.

    PubMed

    Wilson, Nichola C; Mudge, Suzie; Stott, N Susan

    2016-08-20

    Activity monitoring is important to establish accurate daily physical activity levels in children with cerebral palsy (CP). However, few studies address issues around inclusion or exclusion of step count data; in particular, how a valid day should be defined and what impact different lengths of monitoring have on retention of participant data within a study. This study assessed how different 'valid day' definitions influenced inclusion of participant data in final analyses and the subsequent variability of the data. Sixty-nine children with CP were fitted with a StepWatch™ Activity Monitor and instructed to wear it for a week. Data analysis used two broad definitions of a day, based on either the number of steps in a 24 h monitoring period or the number of hours of recorded activity in a 24 h monitoring period. Eight children either did not use the monitor or used it for only 1 day. The remaining 61 children provided 2 valid days of monitoring defined as >100 recorded steps per 24 h period, and 55 (90 %) completed 2 valid days of monitoring with ≥10 h recorded activity per 24 h period. Performance variability in daily step count across 2 days of monitoring was lower when a valid day was defined as ≥10 h recorded activity per 24 h period (ICC = 0.765) and higher when a valid day was defined as >100 recorded steps per 24 h period (ICC = 0.62). Only 46 participants (75 %) completed 5 days of monitoring with >100 recorded steps per 24 h period and only 23 (38 %) achieved 5 days of monitoring with ≥10 h recorded activity per 24 h period. Datasets of participants who functioned at GMFCS level II were differentially excluded when the criterion for inclusion in the final analysis was 5 valid days of ≥10 h recorded activity per 24 h period, leaving only 8 of 32 such datasets retained in the study. 
We conclude that changes in definition of a valid day have significant impacts on both inclusion of participant data in final analysis and measured variability of total step count.
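    The two families of 'valid day' definitions amount to simple threshold filters over the daily monitor records; a minimal sketch with hypothetical data:

```python
def valid_days(daily, min_steps=None, min_hours=None):
    """Filter daily monitor records under the two 'valid day'
    definitions from the study: more than `min_steps` recorded steps,
    or at least `min_hours` hours of recorded activity, per 24 h.

    daily -- list of (steps, hours_recorded) tuples, one per day."""
    kept = []
    for steps, hours in daily:
        if min_steps is not None and steps <= min_steps:
            continue   # fails the '>100 recorded steps' style criterion
        if min_hours is not None and hours < min_hours:
            continue   # fails the '>=10 h recorded activity' criterion
        kept.append((steps, hours))
    return kept
```

    On a hypothetical week of records, the hours-based definition retains fewer days than the steps-based one, mirroring the participant-retention effect reported above.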

  17. Balance confidence is related to features of balance and gait in individuals with chronic stroke

    PubMed Central

    Schinkel-Ivy, Alison; Wong, Jennifer S.; Mansfield, Avril

    2016-01-01

    Reduced balance confidence is associated with impairments in features of balance and gait in individuals with sub-acute stroke. However, an understanding of these relationships in individuals at the chronic stage of stroke recovery is lacking. This study aimed to quantify relationships between balance confidence and specific features of balance and gait in individuals with chronic stroke. Participants completed a balance confidence questionnaire and clinical balance assessment (quiet standing, walking, and reactive stepping) at 6 months post-discharge from inpatient stroke rehabilitation. Regression analyses were performed using balance confidence as a predictor variable and quiet standing, walking, and reactive stepping outcome measures as the dependent variables. Walking velocity was positively correlated with balance confidence, while medio-lateral centre of pressure excursion (quiet standing) and double support time, step width variability, and step time variability (walking) were negatively correlated with balance confidence. This study provides insight into the relationships between balance confidence and balance and gait measures in individuals with chronic stroke, suggesting that individuals with low balance confidence exhibited impaired control of quiet standing as well as walking characteristics associated with cautious gait strategies. Future work should identify the direction of these relationships to inform community-based stroke rehabilitation programs for individuals with chronic stroke, and determine the potential utility of incorporating interventions to improve balance confidence into these programs. PMID:27955809

  18. Physical pretreatment – woody biomass size reduction – for forest biorefinery

    Treesearch

    J.Y. Zhu

    2011-01-01

    Physical pretreatment of woody biomass or wood size reduction is a prerequisite step for further chemical or biochemical processing in forest biorefinery. However, wood size reduction is very energy intensive which differentiates woody biomass from herbaceous biomass for biorefinery. This chapter discusses several critical issues related to wood size reduction: (1)...

  19. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within the tree-structured methods, the one-standard-error rule was more likely than the other tree-selection rules to choose the correct model with a strong relationship and equally important explanatory variables; the same held 1) with weaker relationships and equally important explanatory variables, and 2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
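    The 'one standard error' tree-selection rule mentioned above can be stated compactly: among candidate subtrees, keep the simplest one whose cross-validated risk is within one standard error of the minimum. A sketch with hypothetical candidate trees:

```python
def one_standard_error_rule(candidates):
    """Select the simplest tree whose cross-validated risk is within
    one standard error of the minimum risk.

    candidates -- list of (n_leaves, cv_risk, cv_risk_se) tuples."""
    best = min(candidates, key=lambda c: c[1])   # minimum-risk tree
    threshold = best[1] + best[2]                # risk + one SE
    eligible = [c for c in candidates if c[1] <= threshold]
    return min(eligible, key=lambda c: c[0])     # fewest leaves wins
```

    With candidates of 2, 5, and 9 leaves and risks 0.30, 0.22, and 0.20 (SEs 0.02, 0.02, 0.03), the rule prefers the 5-leaf tree: its risk 0.22 is within 0.20 + 0.03 of the minimum, and it is simpler than the 9-leaf tree.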

  20. Study on characteristics of printed circuit board liberation and its crushed products.

    PubMed

    Quan, Cui; Li, Aimin; Gao, Ningbo

    2012-11-01

    Recycling printed circuit board waste (PCBW) is a pressing issue in environmental protection and resource recycling. Mechanical and thermo-chemical methods are the two traditional recycling processes for PCBW. In the present research, a two-step crushing process combining a coarse-crushing step and a fine-pulverizing step was adopted, and the crushed products were then classified into seven size fractions with a standard sieve. The liberation situation and particle shape in the different size fractions were observed. Properties of the different size fractions, such as heating value and thermogravimetric, proximate, ultimate and chemical analyses, were determined. The Rosin-Rammler model was applied to analyze the particle size distribution of the crushed material. The results indicated that complete liberation of metals from the PCBW was achieved at sizes less than 0.59 mm, but nonmetal particles in the smaller-than-0.15 mm fraction are liable to aggregate. Copper was the most prominent metal in PCBW and was mainly enriched in the 0.42-0.25 mm size fraction. The Rosin-Rammler equation adequately fit the particle size distribution data of crushed PCBW with a correlation coefficient of 0.9810. The results of the heating value and proximate analyses revealed that the PCBW had a low heating value and high ash content. The combustion and pyrolysis processes of PCBW differed, with an obvious Cu oxidation peak in the combustion runs.
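    A Rosin-Rammler fit like the one reported above is usually performed on the double-log linearization of the cumulative distribution; a numpy sketch with synthetic data (not the paper's measurements):

```python
import numpy as np

def fit_rosin_rammler(d, passing):
    """Fit the Rosin-Rammler size distribution
        P(d) = 1 - exp(-(d / d_star) ** n)
    by linear regression on the double-log transform
        ln(ln(1 / (1 - P))) = n * ln(d) - n * ln(d_star).
    Returns (n, d_star, r), where r is the correlation coefficient
    of the linearized fit."""
    x = np.log(d)
    y = np.log(np.log(1.0 / (1.0 - passing)))
    n, intercept = np.polyfit(x, y, 1)
    d_star = np.exp(-intercept / n)
    r = float(np.corrcoef(x, y)[0, 1])
    return float(n), float(d_star), r
```

    The correlation coefficient of the linearized fit is directly comparable to the 0.9810 quoted in the abstract.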

  1. Workshop II On Unsteady Separated Flow Proceedings

    DTIC Science & Technology

    1988-07-28

    …static stall angle of 12°… achieved by injecting diluted food coloring at the apex through a 1.5 mm diameter tube placed… The response of the wing… differences with uniform step size in q, and trailing… three-point differences with uniform step size in… was used… The nonlinearity of the… flow properties for slender 3D wings are addressed… "Kutta condition."… The present paper emphasizes recent progress in the… study

  2. The GRAM-3 model

    NASA Technical Reports Server (NTRS)

    Justus, C. G.

    1987-01-01

    The Global Reference Atmosphere Model (GRAM) is under continuous development and improvement. GRAM data were compared with Middle Atmosphere Program (MAP) predictions and with shuttle data. An important note: users should employ only step sizes in altitude that give vertical density gradients consistent with shuttle-derived density data. Using too small a vertical step size (finer than 1 km) will result in what appear to be unreasonably high values of density shear but are in reality noise in the model.

  3. A meta-analysis of research on science teacher education practices associated with inquiry strategy

    NASA Astrophysics Data System (ADS)

    Sweitzer, Gary L.; Anderson, Ronald D.

    A meta-analysis was conducted of studies of teacher education having as measured outcomes one or more variables associated with inquiry teaching. Inquiry addresses those teacher behaviors that facilitate student acquisition of concepts and processes through strategies such as problem solving, uses of evidence, logical and analytical reasoning, clarification of values, and decision making. Studies which contained sufficient data for the calculation of an effect size were coded for 114 variables. These variables were divided into the following six major categories: study information and design characteristics, teacher and teacher trainee characteristics, student characteristics, treatment description, outcome description, and effect size calculation. A total of 68 studies resulting in 177 effect size calculations were coded. Mean effect sizes broken across selected variables were calculated.

  4. Projected changes in distributions of Australian tropical savanna birds under climate change using three dispersal scenarios

    PubMed Central

    Reside, April E; VanDerWal, Jeremy; Kutt, Alex S

    2012-01-01

    Identifying the species most vulnerable to extinction as a result of climate change is a necessary first step in mitigating biodiversity decline. Species distribution modeling (SDM) is a commonly used tool to assess potential climate change impacts on distributions of species. We use SDMs to predict geographic ranges for 243 birds of Australian tropical savannas, and to project changes in species richness and ranges under a future climate scenario between 1990 and 2080. Realistic predictions require recognition of the variability in species capacity to track climatically suitable environments. Here we assess the effect of dispersal on model results by using three approaches: full dispersal, no dispersal and a partial-dispersal scenario permitting species to track climate change at a rate of 30 km per decade. As expected, the projected distributions and richness patterns are highly sensitive to the dispersal scenario. Projected future range sizes decreased for 66% of species if full dispersal was assumed, but for 89% of species when no dispersal was assumed. However, realistic future predictions should not assume a single dispersal scenario for all species and as such, we assigned each species to the most appropriate dispersal category based on individual mobility and habitat specificity; this permitted the best estimates of where species will be in the future. Under this “realistic” dispersal scenario, projected range sizes decreased for 67% of species; however, migratory and tropical-endemic birds are predicted to benefit from climate change, with increasing distributional area. Richness hotspots of tropical savanna birds are expected to move, increasing in southern savannas and southward along the east coast of Australia, but decreasing in the arid zone. 
Understanding the complexity of effects of climate change on species’ range sizes by incorporating dispersal capacities is a crucial step toward developing adaptation policies for the conservation of vulnerable species. PMID:22837819

  5. Solution of elliptic partial differential equations by fast Poisson solvers using a local relaxation factor. 2: Two-step method

    NASA Technical Reports Server (NTRS)

    Chang, S. C.

    1986-01-01

    A two-step semidirect procedure is developed to accelerate the one-step procedure described in NASA TP-2529. For a set of constant-coefficient model problems, the acceleration factor increases from 1 to 2 as the one-step procedure's convergence rate decreases from +∞ to 0. It is also shown numerically that the two-step procedure can substantially accelerate the convergence of the numerical solution of many partial differential equations (PDEs) with variable coefficients.

  6. Between-monitor differences in step counts are related to body size: implications for objective physical activity measurement.

    PubMed

    Pomeroy, Jeremy; Brage, Søren; Curtis, Jeffrey M; Swan, Pamela D; Knowler, William C; Franks, Paul W

    2011-04-27

    The quantification of the relationships between walking and health requires that walking is measured accurately. We correlated different measures of step accumulation with body size, overall physical activity level, and glucose regulation. Participants were 25 male and 25 female American Indians without diabetes (age 20-34 years) in Phoenix, Arizona, USA. We assessed steps/day during 7 days of free living, simultaneously with three different monitors (Accusplit-AX120, MTI-ActiGraph, and Dynastream-AMP). We assessed total physical activity during free living with doubly labeled water combined with resting metabolic rate measured by expired-gas indirect calorimetry. Glucose tolerance was determined during an oral glucose tolerance test. Based on observed counts in the laboratory, the AMP was the most accurate device, followed by the MTI and the AX120, respectively. The estimated energy cost of 1000 steps per day was lower for the AX120 than for the MTI or AMP. The correlation between AX120-assessed steps/day and waist circumference was significantly higher than the correlation between AMP steps and waist circumference. The differences in steps per day between the AX120 and both the AMP and the MTI were significantly related to waist circumference. Between-monitor differences in step counts influence the observed relationship between walking and obesity-related traits.

  7. Efficient Integration of Coupled Electrical-Chemical Systems in Multiscale Neuronal Simulations

    PubMed Central

    Brocke, Ekaterina; Bhalla, Upinder S.; Djurfeldt, Mikael; Hellgren Kotaleski, Jeanette; Hanke, Michael

    2016-01-01

    Multiscale modeling and simulation in neuroscience is gaining scientific attention due to its growing importance and unexplored capabilities. For instance, it can help to acquire better understanding of biological phenomena that have important features at multiple scales of time and space. This includes synaptic plasticity, memory formation and modulation, and homeostasis. There are several ways to organize multiscale simulations depending on the scientific problem and the system to be modeled. One of the possibilities is to simulate different components of a multiscale system simultaneously and exchange data when required. The latter may become a challenging task for several reasons. First, the components of a multiscale system usually span different spatial and temporal scales, such that rigorous analysis of possible coupling solutions is required. Then, the components can be defined by different mathematical formalisms. For certain classes of problems a number of coupling mechanisms have been proposed and successfully used. However, a strict mathematical theory is missing in many cases. Recent work in the field has so far not investigated artifacts that may arise during the coupled integration of different approximation methods. Moreover, in neuroscience, the coupling of widely used numerical fixed step size solvers may lead to unexpected inefficiency. In this paper we address the question of possible numerical artifacts that can arise during the integration of a coupled system. We develop an efficient strategy to couple the components comprising a multiscale test problem in neuroscience. We introduce an efficient coupling method based on the second-order backward differentiation formula (BDF2) numerical approximation. The method uses an adaptive step size integration with an error estimation proposed by Skelboe (2000). The method shows a significant advantage over conventional fixed step size solvers used in neuroscience for similar problems. 
We explore different coupling strategies that define the organization of computations between system components. We study the importance of an appropriate approximation of exchanged variables during the simulation. The analysis shows a substantial impact of these aspects on the solution accuracy in the application to our multiscale neuroscientific test problem. We believe that the ideas presented in the paper may essentially contribute to the development of a robust and efficient framework for multiscale brain modeling and simulations in neuroscience. PMID:27672364
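    A variable step-size BDF2 update recomputes its coefficients from the ratio of consecutive steps. A minimal sketch on the scalar linear test problem (not the authors' coupled multiscale scheme), bootstrapped with one backward-Euler step:

```python
def bdf2_variable_step(lam, y0, steps):
    """Variable step-size BDF2 applied to the linear test problem
    y' = lam * y. `steps` lists the step sizes h_0, h_1, ...; the
    BDF2 coefficients are recomputed from the ratio of consecutive
    steps. The first point is bootstrapped with backward Euler."""
    y_prev = y0
    y_curr = y0 / (1.0 - steps[0] * lam)     # backward Euler start
    h_prev = steps[0]
    for h in steps[1:]:
        w = h / h_prev                       # step-size ratio
        a = (1 + w) ** 2 / (1 + 2 * w)
        b = w ** 2 / (1 + 2 * w)
        c = (1 + w) / (1 + 2 * w)
        # Implicit update y_new = a*y_curr - b*y_prev + c*h*lam*y_new,
        # solvable in closed form for this linear problem
        y_new = (a * y_curr - b * y_prev) / (1 - c * h * lam)
        y_prev, y_curr, h_prev = y_curr, y_new, h
    return y_curr
```

    For a constant step (w = 1) the coefficients reduce to the familiar fixed-step BDF2 values 4/3, 1/3, and 2/3; in a full solver, an error estimate (e.g., Skelboe-style) would choose each next step size.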

  8. Efficient Integration of Coupled Electrical-Chemical Systems in Multiscale Neuronal Simulations.

    PubMed

    Brocke, Ekaterina; Bhalla, Upinder S; Djurfeldt, Mikael; Hellgren Kotaleski, Jeanette; Hanke, Michael

    2016-01-01

    Multiscale modeling and simulation in neuroscience is gaining scientific attention due to its growing importance and unexplored capabilities. For instance, it can help to acquire better understanding of biological phenomena that have important features at multiple scales of time and space. This includes synaptic plasticity, memory formation and modulation, and homeostasis. There are several ways to organize multiscale simulations depending on the scientific problem and the system to be modeled. One of the possibilities is to simulate different components of a multiscale system simultaneously and exchange data when required. The latter may become a challenging task for several reasons. First, the components of a multiscale system usually span different spatial and temporal scales, such that rigorous analysis of possible coupling solutions is required. Then, the components can be defined by different mathematical formalisms. For certain classes of problems a number of coupling mechanisms have been proposed and successfully used. However, a strict mathematical theory is missing in many cases. Recent work in the field has so far not investigated artifacts that may arise during the coupled integration of different approximation methods. Moreover, in neuroscience, the coupling of widely used numerical fixed step size solvers may lead to unexpected inefficiency. In this paper we address the question of possible numerical artifacts that can arise during the integration of a coupled system. We develop an efficient strategy to couple the components comprising a multiscale test problem in neuroscience. We introduce an efficient coupling method based on the second-order backward differentiation formula (BDF2) numerical approximation. The method uses an adaptive step size integration with an error estimation proposed by Skelboe (2000). The method shows a significant advantage over conventional fixed step size solvers used in neuroscience for similar problems. 
We explore different coupling strategies that define the organization of computations between system components. We study the importance of an appropriate approximation of exchanged variables during the simulation. The analysis shows a substantial impact of these aspects on the solution accuracy in the application to our multiscale neuroscientific test problem. We believe that the ideas presented in this paper may contribute substantially to the development of a robust and efficient framework for multiscale brain modeling and simulations in neuroscience.
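The accept/reject-and-rescale idea behind adaptive step-size integration can be sketched as follows. This is a toy forward-Euler step-doubling scheme, not the BDF2 method with Skelboe's error estimate used in the paper; the function names, tolerance, and test equation are all illustrative.

```python
import math

def adaptive_euler(f, y0, t0, t1, tol=1e-6, h0=0.1):
    """Integrate dy/dt = f(t, y) with step-doubling error control.

    Illustrative only: compare one full Euler step against two half
    steps, accept when the difference is below tol, and rescale the
    step size from the error estimate."""
    t, y, h = t0, y0, h0
    while t < t1:
        h = min(h, t1 - t)
        y_full = y + h * f(t, y)                       # one full step
        y_half = y + 0.5 * h * f(t, y)                 # two half steps
        y_two = y_half + 0.5 * h * f(t + 0.5 * h, y_half)
        err = abs(y_two - y_full)                      # local error estimate
        if err <= tol:
            t, y = t + h, y_two                        # accept the step
        # rescale the step from the estimate (first-order method)
        h *= min(2.0, max(0.1, 0.9 * math.sqrt(tol / max(err, 1e-16))))
    return y

# decaying exponential y' = -y; the exact solution at t = 1 is exp(-1)
approx = adaptive_euler(lambda t, y: -y, 1.0, 0.0, 1.0, tol=1e-6)
```

The same accept/reject loop carries over to implicit multistep methods such as BDF2; only the local error estimator and the rescaling exponent change.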

  9. Measuring Spatial Accessibility of Health Care Providers – Introduction of a Variable Distance Decay Function within the Floating Catchment Area (FCA) Method

    PubMed Central

    Groneberg, David A.

    2016-01-01

We integrated recent improvements within the floating catchment area (FCA) method family into an integrated 'iFCA' method. Within this method we focused on the distance decay function and its parameters. So far only distance decay functions with constant parameters have been applied. Therefore, we developed a variable distance decay function to be used within the FCA method. We were able to replace the impedance coefficient β by readily available distribution parameters (i.e., median and standard deviation (SD)) within a logistic-based distance decay function. Hence, the function is shaped individually for every single population location by the median and SD of all population-to-provider distances within a global catchment size. Theoretical application of the variable distance decay function showed conceptually sound results. Furthermore, the existence of effective variable catchment sizes defined by the asymptotic approach to zero of the distance decay function was revealed, satisfying the need for variable catchment sizes. The application of the iFCA method within an urban case study in Berlin (Germany) confirmed the theoretical fit of the suggested method. In summary, we introduced, for the first time, a variable distance decay function within an integrated FCA method. This function accounts for individual travel behaviors determined by the distribution of providers. Additionally, the function inherits effective variable catchment sizes and therefore obviates the need for determining variable catchment sizes separately. PMID:27391649
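A logistic decay weight shaped by the median and SD of distances can be sketched as follows. The abstract does not give the exact functional form used by iFCA, so this parameterization (center at the median, slope set by the SD) and the distance values are assumptions for illustration.

```python
import math

def logistic_decay(d, median, sd):
    """Logistic distance decay weight in (0, 1).

    Hypothetical parameterization: weight 0.5 at the median distance,
    approaching 0 asymptotically, which yields an effective catchment
    without a hard cutoff. The iFCA paper's exact form may differ."""
    return 1.0 / (1.0 + math.exp((d - median) / sd))

# weights for one population location (travel times in minutes, assumed data)
median, sd = 15.0, 5.0
w_near = logistic_decay(5.0, median, sd)    # close provider -> high weight
w_mid = logistic_decay(15.0, median, sd)    # at the median -> weight 0.5
w_far = logistic_decay(40.0, median, sd)    # far beyond the median -> near 0
```

Because the weight decays toward zero, providers beyond some distance contribute negligibly, which is the "effective variable catchment size" the abstract describes.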

  10. Short-term Time Step Convergence in a Climate Model

    DOE PAGES

    Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...

    2015-02-11

A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4, in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
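The observed convergence order in such a test follows from the assumption that the error behaves as err ~ C·dt^p, so two (dt, err) pairs determine p. A minimal sketch with synthetic error values:

```python
import math

def convergence_order(dt1, err1, dt2, err2):
    """Observed order p assuming err ~ C * dt**p (self-convergence test)."""
    return math.log(err1 / err2) / math.log(dt1 / dt2)

# synthetic example: a first-order scheme halves its error when dt halves
p_first = convergence_order(1800.0, 0.50, 900.0, 0.25)
# a scheme converging at order ~0.4, as reported for the full model
p_slow = convergence_order(1800.0, 0.50, 900.0, 0.50 * 0.5**0.4)
```

In practice err would be the RMS temperature difference against a reference run with a very small time step; the error magnitudes above are made up for illustration.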

  11. Evaluating within-population variability in behavior and demography for the adaptive potential of a dispersal-limited species to climate change

    USGS Publications Warehouse

    Muñoz, David J.; Miller Hesed, Kyle; Grant, Evan H. Campbell; Miller, David A.W.

    2016-01-01

Multiple pathways exist for species to respond to changing climates. However, responses of dispersal-limited species will be more strongly tied to their ability to adapt within existing populations, as rates of environmental change will likely exceed movement rates. Here, we assess adaptive capacity in Plethodon cinereus, a dispersal-limited woodland salamander. We quantify plasticity in behavior and variation in demography to observed variation in environmental variables over a 5-year period. We found strong evidence that temperature and rainfall influence P. cinereus surface presence, indicating changes in climate are likely to affect seasonal activity patterns. We also found that warmer summer temperatures reduced individual growth rates into the autumn, which is likely to have negative demographic consequences. Reduced growth rates may delay reproductive maturity and lead to reductions in size-specific fecundity, potentially reducing population-level persistence. To better understand within-population variability in responses, we examined differences between two common color morphs. Previous evidence suggests that the color polymorphism may be linked to physiological differences in heat and moisture tolerance. We found only moderate support for morph-specific differences in the relationship between individual growth and temperature. Measuring environmental sensitivity to climatic variability is the first step in predicting species' responses to climate change. Our results suggest phenological shifts and changes in growth rates are likely responses under scenarios where further warming occurs, and we discuss possible adaptive strategies for resulting selective pressures.

  12. Echocardiographic Assessment of Left Atrial Size and Function in Warmblood Horses: Reference Intervals, Allometric Scaling, and Agreement of Different Echocardiographic Variables.

    PubMed

    Huesler, I M; Mitchell, K J; Schwarzwald, C C

    2016-07-01

Echocardiographic assessment of left atrial (LA) size and function in horses is not standardized. The aim of this retrospective study was to establish reference intervals for echocardiographic indices of LA size and function in Warmblood horses and to provide proof of concept for allometric scaling of variables and for the clinical use of area-based indices. Thirty-one healthy Warmblood horses and 91 Warmblood horses with a primary diagnosis of mitral regurgitation (MR) or aortic regurgitation (AR) were included. Echocardiographic indices of LA size and function were measured and scaled to body weight (BWT). Reference intervals were calculated, the influence of BWT, age, and valvular regurgitation on LA size and function was investigated, and agreement between different measurements of LA size was assessed. Allometric scaling of variables of LA size allowed for correction of differences in BWT. Indices of LA size documented LA enlargement with moderate and severe MR and AR, whereas most indices of LA mechanical function were not significantly altered by valvular regurgitation. Different indices of LA size were in fair to good agreement but still led to discordant conclusions with regard to assessment of LA enlargement in individual horses. Allometric scaling of echocardiographic variables of LA size is advised to correct for differences in BWT among Warmblood horses. Assessment of LA dimensions should be based on an integrative approach combining subjective evaluation and assessment of multiple measurements, including area-based variables. The clinical relevance of indices of LA mechanical function remains unclear when used in horses with mitral or aortic regurgitation. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
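Allometric scaling divides a measurement by body weight raised to a dimensionally motivated exponent. The sketch below uses the conventional exponents (1/3 for linear dimensions, 2/3 for areas, 1 for volumes); the study fits its own exponents, which may differ, and the diameters and weights here are invented for illustration.

```python
def allometric_scale(value, bwt_kg, exponent):
    """Scale an echocardiographic measurement to body weight.

    Conventional exponents: 1/3 for linear dimensions, 2/3 for
    areas, 1 for volumes (assumed here, not taken from the study)."""
    return value / bwt_kg ** exponent

# hypothetical LA diameters (cm) from two horses of different weight:
# after scaling, the two values become directly comparable
small_horse = allometric_scale(12.0, 500.0, 1 / 3)
large_horse = allometric_scale(13.0, 650.0, 1 / 3)
```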

  13. Uncertainty and Variability

    EPA Pesticide Factsheets

EPA ExpoBox is a toolbox for exposure assessors. Its purpose is to provide a compendium of exposure assessment and risk characterization tools that present comprehensive step-by-step guidance and links to relevant exposure assessment databases.

  14. Relating Linear and Volumetric Variables Through Body Scanning to Improve Human Interfaces in Space

    NASA Technical Reports Server (NTRS)

    Margerum, Sarah E.; Ferrer, Mike A.; Young, Karen S.; Rajulu, Sudhakar

    2010-01-01

Designing space suits and vehicles for the diverse human population presents unique challenges for the methods of traditional anthropometry. Space suits are bulky, allow the operator to shift position within the suit, and inhibit the ability to identify body landmarks. Limited suit sizing options also cause variability in fit and performance between similarly sized individuals. Space vehicles are restrictive in volume, both in fit and in the ability to collect data. NASA's Anthropometric and Biomechanics Facility (ABF) has utilized 3D scanning to shift from traditional linear anthropometry to explore and examine volumetric capabilities to provide anthropometric solutions for design. Overall, the key goals are to improve the human-system performance and develop new processes to aid in the design and evaluation of space systems. Four case studies are presented that illustrate the shift from purely linear analyses to an augmented volumetric toolset to predict and analyze the human within the space suit and vehicle. The first case study involves the calculation of maximal head volume to estimate total free volume in the helmet for proper air exchange. Traditional linear measurements resulted in an inaccurate representation of the head shape, yet limited data exist for the determination of a large head volume. Steps were first taken to identify and classify a maximum head volume, and the resulting comparisons to the estimate are presented in this paper. This study illustrates the gap between linear components of anthropometry and the need for overall volume metrics in order to provide solutions. A second case study examines the overlay of the space suit scans and components onto scanned individuals to quantify fit and clearance to aid in sizing the suit to the individual. Restrictions in space suit size availability present unique challenges to optimally fit the individual within a limited sizing range while maintaining performance.
Quantification of the clearance and fit between similarly sized individuals is critical in providing a greater understanding of the human body's function within the suit. The third case study presented in this paper explores the development of a conformal seat pan using scanning techniques, and details the challenges of volumetric analyses that were overcome in order to develop a universal seat pan that can be utilized across the entire user population. The final case study explores expanding volumetric capabilities through generation of boundary manikins. Boundary manikins are representative individuals from the population of interest that represent the extremes of the population spectrum. The ABF developed a technique to take three-dimensional scans of individuals and manipulate the scans to reflect the boundary manikins' anthropometry. In essence, this process generates a representative three-dimensional scan of an individual from anthropometry, using another individual's scanned image. The results from this process can be used in design process modeling and initial suit sizing work as a three dimensional, realistic example of individuals from the population, maintaining the variability between and correlation to the relevant dimensions of interest.

  15. Concurrent generation of multivariate mixed data with variables of dissimilar types.

    PubMed

    Amatya, Anup; Demirtas, Hakan

    2016-01-01

Data sets originating from a wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily count, binary/ordinal, and continuous attributes. The present paper builds on previous work on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal, and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism which allows under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
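The general approach can be sketched with a copula-style construction: draw correlated normals, then transform each column to its target margin. This sketch substitutes an ordinary Poisson for the paper's generalized Poisson and applies the target correlation directly to the underlying normals, omitting the calibration of intermediate correlations that the actual algorithm performs; all parameter values are illustrative.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

# Correlation of the *underlying* normals (illustrative values; the paper
# calibrates intermediate correlations so the output matches a target
# matrix -- that calibration step is omitted here).
corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
n = 10_000
z = rng.multivariate_normal(np.zeros(3), corr, size=n)

continuous = 2.0 + 1.5 * z[:, 0]        # marginally normal component
binary = (z[:, 1] > 0).astype(int)      # binary via thresholding at 0

def poisson_quantile(u, lam):
    """Smallest k with P(X <= k) >= u for X ~ Poisson(lam)."""
    k, p = 0, np.exp(-lam)
    cdf = p
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

# count component: normal CDF -> uniform -> Poisson(3) quantile
# (ordinary Poisson stands in for the paper's generalized Poisson)
u = 0.5 * (1.0 + np.vectorize(erf)(z[:, 2] / np.sqrt(2.0)))
counts = np.array([poisson_quantile(ui, 3.0) for ui in u])
```

The thresholding and quantile steps attenuate the correlations relative to the underlying normals, which is exactly why the paper needs the intermediate-correlation calibration.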

  16. Continuation Power Flow with Variable-Step Variable-Order Nonlinear Predictor

    NASA Astrophysics Data System (ADS)

    Kojima, Takayuki; Mori, Hiroyuki

This paper proposes a new continuation power flow calculation method for drawing a P-V curve in power systems. The continuation power flow calculation successively evaluates power flow solutions by changing a specified value of the power flow calculation. In recent years, power system operators have become quite concerned with voltage instability due to the appearance of deregulated and competitive power markets. The continuation power flow calculation plays an important role in understanding the load characteristics in the sense of static voltage instability. In this paper, a new continuation power flow with a variable-step variable-order (VSVO) nonlinear predictor is proposed. The proposed method evaluates optimal predicted points conforming to the features of P-V curves. The proposed method is successfully applied to the IEEE 118-bus and IEEE 300-bus systems.

  17. Framework for Creating a Smart Growth Economic Development Strategy

    EPA Pesticide Factsheets

    This step-by-step guide can help small and mid-sized cities, particularly those that have limited population growth, areas of disinvestment, and/or a struggling economy, build a place-based economic development strategy.

  18. Mind wandering at the fingertips: automatic parsing of subjective states based on response time variability

    PubMed Central

    Bastian, Mikaël; Sackur, Jérôme

    2013-01-01

    Research from the last decade has successfully used two kinds of thought reports in order to assess whether the mind is wandering: random thought-probes and spontaneous reports. However, none of these two methods allows any assessment of the subjective state of the participant between two reports. In this paper, we present a step by step elaboration and testing of a continuous index, based on response time variability within Sustained Attention to Response Tasks (N = 106, for a total of 10 conditions). We first show that increased response time variability predicts mind wandering. We then compute a continuous index of response time variability throughout full experiments and show that the temporal position of a probe relative to the nearest local peak of the continuous index is predictive of mind wandering. This suggests that our index carries information about the subjective state of the subject even when he or she is not probed, and opens the way for on-line tracking of mind wandering. Finally we proceed a step further and infer the internal attentional states on the basis of the variability of response times. To this end we use the Hidden Markov Model framework, which allows us to estimate the durations of on-task and off-task episodes. PMID:24046753
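The core of such a continuous index is a rolling measure of response time variability. The sketch below uses a plain centered rolling standard deviation over synthetic data; the paper's index construction (normalization, smoothing, probe alignment) is more elaborate, and the window size and RT values here are invented.

```python
import numpy as np

def rt_variability_index(rts, window=9):
    """Centered rolling standard deviation of response times.

    A minimal sketch of a continuous variability index; endpoints
    without a full window are left as NaN."""
    rts = np.asarray(rts, dtype=float)
    half = window // 2
    out = np.full(rts.shape, np.nan)
    for i in range(half, len(rts) - half):
        out[i] = rts[i - half:i + half + 1].std()
    return out

# stable responding, then a burst of variable RTs (hypothetical data, ms)
rts = [400] * 20 + [250, 700, 300, 650, 280, 720, 320, 680, 290, 710] + [400] * 20
index = rt_variability_index(rts)
peak = int(np.nanargmax(index))  # the local peak should sit in the variable burst
```

A probe falling near such a local peak would, per the abstract's finding, be more likely to catch a mind-wandering report than one falling in the flat stretches.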

  19. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
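The design-effect logic can be illustrated with the parallel-CRT case, whose design effect 1 + (m-1)·ICC is standard; inverting it gives the cluster size required for a fixed number of clusters. The stepped wedge design effect presented in the paper is more involved and is not reproduced here, and the sample sizes below are invented.

```python
def crt_design_effect(m, icc):
    """Design effect (variance inflation) for a parallel cluster
    randomized trial with cluster size m."""
    return 1.0 + (m - 1.0) * icc

def cluster_size_for_k_clusters(n_individual, k, icc):
    """Cluster size m so that k clusters of size m match the power of
    n_individual individually randomized subjects.

    Solves k*m = n_individual * (1 + (m-1)*icc) for m. Parallel-CRT
    case only; the SW-CRT formulas in the paper differ."""
    denom = k - n_individual * icc
    if denom <= 0:
        raise ValueError("k clusters can never achieve the required power")
    return n_individual * (1.0 - icc) / denom

n_ind = 300  # required sample size under individual randomization (assumed)
m = cluster_size_for_k_clusters(n_ind, k=20, icc=0.05)
de = crt_design_effect(m, 0.05)
```

Note how the denominator shows the qualitative point from the abstract: as the ICC grows, a fixed number of clusters may be unable to reach the required power at any cluster size, which is where the SW-CRT becomes the more efficient alternative.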

  20. Different methods to analyze stepped wedge trial designs revealed different aspects of intervention effects.

    PubMed

    Twisk, J W R; Hoogendijk, E O; Zwijsen, S A; de Boer, M R

    2016-04-01

    Within epidemiology, a stepped wedge trial design (i.e., a one-way crossover trial in which several arms start the intervention at different time points) is increasingly popular as an alternative to a classical cluster randomized controlled trial. Despite this increasing popularity, there is a huge variation in the methods used to analyze data from a stepped wedge trial design. Four linear mixed models were used to analyze data from a stepped wedge trial design on two example data sets. The four methods were chosen because they have been (frequently) used in practice. Method 1 compares all the intervention measurements with the control measurements. Method 2 treats the intervention variable as a time-independent categorical variable comparing the different arms with each other. In method 3, the intervention variable is a time-dependent categorical variable comparing groups with different number of intervention measurements, whereas in method 4, the changes in the outcome variable between subsequent measurements are analyzed. Regarding the results in the first example data set, methods 1 and 3 showed a strong positive intervention effect, which disappeared after adjusting for time. Method 2 showed an inverse intervention effect, whereas method 4 did not show a significant effect at all. In the second example data set, the results were the opposite. Both methods 2 and 4 showed significant intervention effects, whereas the other two methods did not. For method 4, the intervention effect attenuated after adjustment for time. Different methods to analyze data from a stepped wedge trial design reveal different aspects of a possible intervention effect. The choice of a method partly depends on the type of the intervention and the possible time-dependent effect of the intervention. Furthermore, it is advised to combine the results of the different methods to obtain an interpretable overall result. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Stepping reaction time and gait adaptability are significantly impaired in people with Parkinson's disease: Implications for fall risk.

    PubMed

    Caetano, Maria Joana D; Lord, Stephen R; Allen, Natalie E; Brodie, Matthew A; Song, Jooeun; Paul, Serene S; Canning, Colleen G; Menant, Jasmine C

    2018-02-01

    Decline in the ability to take effective steps and to adapt gait, particularly under challenging conditions, may be important reasons why people with Parkinson's disease (PD) have an increased risk of falling. This study aimed to determine the extent of stepping and gait adaptability impairments in PD individuals as well as their associations with PD symptoms, cognitive function and previous falls. Thirty-three older people with PD and 33 controls were assessed in choice stepping reaction time, Stroop stepping and gait adaptability tests; measurements identified as fall risk factors in older adults. People with PD had similar mean choice stepping reaction times to healthy controls, but had significantly greater intra-individual variability. In the Stroop stepping test, the PD participants were more likely to make an error (48 vs 18%), took 715 ms longer to react (2312 vs 1517 ms) and had significantly greater response variability (536 vs 329 ms) than the healthy controls. People with PD also had more difficulties adapting their gait in response to targets (poorer stepping accuracy) and obstacles (increased number of steps) appearing at short notice on a walkway. Within the PD group, higher disease severity, reduced cognition and previous falls were associated with poorer stepping and gait adaptability performances. People with PD have reduced ability to adapt gait to unexpected targets and obstacles and exhibit poorer stepping responses, particularly in a test condition involving conflict resolution. Such impaired stepping responses in Parkinson's disease are associated with disease severity, cognitive impairment and falls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Effects of two-step homogenization on precipitation behavior of Al{sub 3}Zr dispersoids and recrystallization resistance in 7150 aluminum alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Zhanying; Key Laboratory for Anisotropy and Texture of Materials, Northeastern University, Shenyang 110819, China,; Zhao, Gang

    2015-04-15

The effect of two-step homogenization treatments on the precipitation behavior of Al{sub 3}Zr dispersoids was investigated by transmission electron microscopy (TEM) in 7150 alloys. Two-step treatments with the first step in the temperature range of 300–400 °C followed by the second step at 470 °C were applied during homogenization. Compared with the conventional one-step homogenization, both a finer particle size and a higher number density of Al{sub 3}Zr dispersoids were obtained with two-step homogenization treatments. The most effective dispersoid distribution was attained using the first step held at 300 °C. In addition, the two-step homogenization minimized the precipitate free zones and greatly increased the number density of dispersoids near dendrite grain boundaries. The effect of two-step homogenization on recrystallization resistance of 7150 alloys with different Zr contents was quantitatively analyzed using the electron backscattered diffraction (EBSD) technique. It was found that the improved dispersoid distribution through the two-step treatment can effectively inhibit the recrystallization process during the post-deformation annealing for 7150 alloys containing 0.04–0.09 wt.% Zr, resulting in a remarkable reduction of the volume fraction and grain size of recrystallized grains. - Highlights: • Effect of two-step homogenization on Al{sub 3}Zr dispersoids was investigated by TEM. • Finer and higher number of dispersoids obtained with two-step homogenization. • Minimized the precipitate free zones and improved the dispersoid distribution. • Recrystallization resistance with varying Zr content was quantified by EBSD. • Effectively inhibited recrystallization through two-step treatments in 7150 alloy.

  3. Culture, Organizational Learning and Selected Employee Background Variables in Small-Size Business Enterprises

    ERIC Educational Resources Information Center

    Graham, Carroll M.; Nafukho, Fredrick Muyia

    2007-01-01

Purpose: The purpose of this study is to determine the relationship between four independent variables (educational level, longevity, type of enterprise, and gender) and the dependent variable culture, as a dimension that explains organizational learning readiness in seven small-size business enterprises. Design/methodology/approach: An exploratory…

  4. An Effect Size for Regression Predictors in Meta-Analysis

    ERIC Educational Resources Information Center

    Aloe, Ariel M.; Becker, Betsy Jane

    2012-01-01

    A new effect size representing the predictive power of an independent variable from a multiple regression model is presented. The index, denoted as r[subscript sp], is the semipartial correlation of the predictor with the outcome of interest. This effect size can be computed when multiple predictor variables are included in the regression model…
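By definition, the semipartial correlation correlates the outcome with the part of the focal predictor not explained by the other predictors. The sketch below computes it from raw data for clarity; in the meta-analytic setting of the paper it is instead derived from reported regression statistics. The simulated data and coefficients are invented.

```python
import numpy as np

def semipartial_r(y, x_focal, x_other):
    """Semipartial correlation of a focal predictor with the outcome:
    corr(y, residual of x_focal regressed on the other predictors)."""
    X = np.column_stack([np.ones(len(y)), x_other])
    beta, *_ = np.linalg.lstsq(X, x_focal, rcond=None)
    resid = x_focal - X @ beta          # focal predictor, partialled
    return np.corrcoef(y, resid)[0, 1]

rng = np.random.default_rng(1)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)      # second predictor, correlated with x1
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)
r_sp = semipartial_r(y, x2, x1)         # unique contribution of x2
```

Unlike the zero-order correlation of x2 with y, r_sp reflects only the variance x2 explains beyond x1, which is what makes it suitable as a regression-predictor effect size.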

  5. Reynolds number scaling to predict droplet size distribution in dispersed and undispersed subsurface oil releases.

    PubMed

    Li, Pu; Weng, Linlu; Niu, Haibo; Robinson, Brian; King, Thomas; Conmy, Robyn; Lee, Kenneth; Liu, Lei

    2016-12-15

    This study was aimed at testing the applicability of modified Weber number scaling with Alaska North Slope (ANS) crude oil, and developing a Reynolds number scaling approach for oil droplet size prediction for high viscosity oils. Dispersant to oil ratio and empirical coefficients were also quantified. Finally, a two-step Rosin-Rammler scheme was introduced for the determination of droplet size distribution. This new approach appeared more advantageous in avoiding the inconsistency in interfacial tension measurements, and consequently delivered concise droplet size prediction. Calculated and observed data correlated well based on Reynolds number scaling. The relation indicated that chemical dispersant played an important role in reducing the droplet size of ANS under different seasonal conditions. The proposed Reynolds number scaling and two-step Rosin-Rammler approaches provide a concise, reliable way to predict droplet size distribution, supporting decision making in chemical dispersant application during an offshore oil spill. Copyright © 2016 Elsevier Ltd. All rights reserved.
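The Rosin-Rammler distribution used for droplet sizes has the standard two-parameter form F(d) = 1 - exp(-(d/d_char)^n). The sketch below evaluates that form with invented parameters; the paper's two-step scheme fits the parameters in two stages, a detail omitted here.

```python
import math

def rosin_rammler_cdf(d, d_char, spread):
    """Cumulative volume fraction of droplets below diameter d.

    Standard Rosin-Rammler form F(d) = 1 - exp(-(d/d_char)**spread).
    At d = d_char the cumulative fraction is always 1 - 1/e."""
    return 1.0 - math.exp(-(d / d_char) ** spread)

# illustrative parameters (microns); not values from the study
d_char, spread = 100.0, 1.8
f_small = rosin_rammler_cdf(50.0, d_char, spread)
f_char = rosin_rammler_cdf(d_char, d_char, spread)
f_large = rosin_rammler_cdf(300.0, d_char, spread)
```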

  6. Control of Alginate Core Size in Alginate-Poly (Lactic-Co-Glycolic) Acid Microparticles

    NASA Astrophysics Data System (ADS)

    Lio, Daniel; Yeo, David; Xu, Chenjie

    2016-01-01

Core-shell alginate-poly (lactic-co-glycolic) acid (PLGA) microparticles are potential candidates to improve hydrophilic drug loading while facilitating controlled release. This report studies the influence of the alginate core size on the drug release profile and overall size of alginate-PLGA microparticles. Microparticles are synthesized through double-emulsion fabrication via concurrent ionotropic gelation and solvent extraction. The size of the alginate core ranges from approximately 10, to 50, to 100 μm when the emulsification method in the first step is homogenization, vortexing, or magnetic stirring, respectively. The second-step emulsification for all three conditions is performed with magnetic stirring. Interestingly, although the alginate cores have different sizes, the alginate-PLGA microparticle diameter does not change. However, drug release profiles are dramatically different for microparticles comprising different-sized alginate cores. Specifically, taking calcein as a model drug, microparticles containing the smallest alginate core (10 μm) show the slowest release over a period of 26 days, with burst release less than 1%.

  7. Classification Order of Surface-Confined Intermixing at Epitaxial Interface

    NASA Astrophysics Data System (ADS)

    Michailov, M.

The self-organization phenomena at epitaxial interfaces hold special attention in contemporary materials science. Being relevant to the fundamental physical problem of competing long-range and short-range atomic interactions in systems with reduced dimensionality, these phenomena have attracted considerable academic interest. They are also of great technological importance for their ability to bring about the spontaneous formation of regular nanoscale surface patterns and superlattices with exotic properties. The basic phenomenon involved in this process is surface diffusion. That is the motivation behind the present study, which deals with important details of the diffusion scenarios that control the fine atomic structure of the epitaxial interface. Containing surface imperfections (terraces, steps, kinks, and vacancies), the interface offers a variety of barriers to surface diffusion. Therefore, adatoms and clusters need a certain critical energy to overcome the corresponding diffusion barriers. In the most general case the critical energies can be attained by variation of the system temperature. Hence, their values define temperature limits of system energy gaps associated with different diffusion scenarios. This systematization implies a classification order of surface alloying: blocked, incomplete, and complete. On that background, two diffusion problems related to the atomic-scale surface morphology are discussed. The first problem deals with diffusion of atomic clusters on an atomically smooth interface. On flat domains, far from terraces and steps, we analyzed the impact of size, shape, and cluster/substrate lattice misfit on the diffusion behavior of atomic clusters (islands). We found that the lattice constant of small clusters depends on the number N of building atoms at 1 < N ≤ 10. In heteroepitaxy, this effect of variable lattice constant originates from the enhanced charge transfer and the strong influence of the surface potential on cluster atomic arrangement.
At constant temperature, the variation of the lattice constant leads to variable misfit, which affects island migration. The cluster/substrate commensurability influences the oscillatory behavior of the diffusion coefficient caused by variation in the cluster shape. We discuss the results in a physical model that implies cluster diffusion with size-dependent cluster/substrate misfit. The second problem is devoted to diffusion phenomena in the vicinity of atomic terraces on stepped or vicinal surfaces. Here, we develop a computational model that refines important details of the diffusion behavior of adatoms, accounting for the energy barriers at specific atomic sites (smooth domains, terraces, and steps) located on the crystal surface. The dynamic competition between the energy gained by mixing and the substrate strain energy results in a diffusion scenario where adatoms form alloyed islands and alloyed stripes in the vicinity of terrace edges. Being in agreement with recent experimental findings, the observed effect of stripe and island alloy formation opens up a way for regular surface patterns to be configured at different atomic levels on the crystal surface. The complete surface alloying of the entire interface layer is also briefly discussed, with critical analysis and classification of experimental findings and simulation data.

  8. Controlling CH3NH3PbI(3-x)Cl(x) Film Morphology with Two-Step Annealing Method for Efficient Hybrid Perovskite Solar Cells.

    PubMed

    Liu, Dong; Wu, Lili; Li, Chunxiu; Ren, Shengqiang; Zhang, Jingquan; Li, Wei; Feng, Lianghuan

    2015-08-05

Methylammonium lead halide perovskite solar cells have become very attractive because they can be prepared with low-cost solution-processable technology and their power conversion efficiencies have increased from 3.9% to 20% in recent years. However, the high performance of perovskite photovoltaic devices depends on a complicated process to prepare compact perovskite films with large grain size. Herein, a new method is developed to achieve excellent CH3NH3PbI3-xClx films with fine morphology and crystallization based on a one-step deposition and a two-step annealing process. The method includes spin-coating deposition of the perovskite films from a precursor solution of PbI2, PbCl2, and CH3NH3I at a molar ratio of 1:1:4 in dimethylformamide (DMF), followed by two-step annealing (TSA). The first annealing is achieved by a solvent-induced process in DMF to promote migration and interdiffusion of the solvent-assisted precursor ions and molecules and realize large grain growth. The second annealing is conducted by a thermal-induced process to further improve the morphology and crystallization of the films. Compact perovskite films are successfully prepared with grain size up to 1.1 μm according to SEM observation. The PL decay lifetime and the optical energy gap for the film with two-step annealing are 460 ns and 1.575 eV, respectively, while they are 307 and 327 ns and 1.577 and 1.582 eV for the films annealed by one-step thermal and one-step solvent processes. On the basis of the TSA process, the photovoltaic devices exhibit a best efficiency of 14% under AM 1.5G irradiation (100 mW·cm(-2)).

  9. A General Approach to Defining Latent Growth Components

    ERIC Educational Resources Information Center

    Mayer, Axel; Steyer, Rolf; Mueller, Horst

    2012-01-01

We present a 3-step approach to defining latent growth components. In the first step, a measurement model with at least 2 indicators for each time point is formulated to identify measurement error variances and obtain latent variables that are purged of measurement error. In the second step, we use contrast matrices to define the latent growth…
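The contrast-matrix device mentioned in the record can be sketched numerically. The matrix and latent scores below are invented for illustration only: each row of the contrast matrix picks out one growth component (a baseline level, then successive latent changes) from the error-free latent scores.

```python
# Invented contrast matrix C and error-free latent scores eta at three
# time points; growth components are obtained as C @ eta.
C = [
    [1, 0, 0],    # component 1: baseline level at time 1
    [-1, 1, 0],   # component 2: latent change, time 1 -> time 2
    [0, -1, 1],   # component 3: latent change, time 2 -> time 3
]
eta = [10.0, 12.5, 16.0]   # hypothetical latent true scores

components = [sum(c * e for c, e in zip(row, eta)) for row in C]
# components -> [10.0, 2.5, 3.5]: level, then the two successive changes
```

Other contrast matrices (e.g., polynomial contrasts) define different growth components from the same latent scores.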

  10. Phenotype-Driven Therapeutics in Severe Asthma.

    PubMed

    Opina, Maria Theresa D; Moore, Wendy C

    2017-02-01

Inhaled corticosteroids are the mainstay of asthma treatment using a step-up approach with incremental dosing and additional controller medications in order to achieve symptom control and prevent exacerbations. While most patients respond well to this treatment approach, some patients remain refractory despite high doses of inhaled corticosteroids and a long-acting β-agonist. The problem lies in the heterogeneity of severe asthma, which is further supported by the emergence of severe asthma phenotypes. This heterogeneity contributes to the variability in treatment response. Randomized controlled trials of add-on therapies in poorly controlled asthma have challenged the idea of a "one size fits all" approach by targeting specific phenotypes in their subject selection. This review discusses severe asthma phenotypes from unbiased clustering approaches and the most recent scientific evidence on novel treatments to provide a guide in personalizing severe asthma treatment.

  11. Monte Carlo modeling of single-molecule cytoplasmic dynein.

    PubMed

    Singh, Manoranjan P; Mallik, Roop; Gross, Steven P; Yu, Clare C

    2005-08-23

    Molecular motors are responsible for active transport and organization in the cell, underlying an enormous number of crucial biological processes. Dynein is more complicated in its structure and function than other motors. Recent experiments have found that, unlike other motors, dynein can take different size steps along microtubules depending on load and ATP concentration. We use Monte Carlo simulations to model the molecular motor function of cytoplasmic dynein at the single-molecule level. The theory relates dynein's enzymatic properties to its mechanical force production. Our simulations reproduce the main features of recent single-molecule experiments that found a discrete distribution of dynein step sizes, depending on load and ATP concentration. The model reproduces the large steps found experimentally under high ATP and no load by assuming that the ATP binding affinities at the secondary sites decrease as the number of ATP bound to these sites increases. Additionally, to capture the essential features of the step-size distribution at very low ATP concentration and no load, the ATP hydrolysis of the primary site must be dramatically reduced when none of the secondary sites have ATP bound to them. We make testable predictions that should guide future experiments related to dynein function.
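The ATP-dependent step-size idea in the record above can be sketched with a toy Monte Carlo model. Everything here is hypothetical: the step-size mapping, the dissociation constants, and the negative-cooperativity rule are illustrative stand-ins, not the authors' fitted model. The sketch only shows how occupancy of secondary ATP sites can shift a discrete step-size distribution.

```python
import random

# Hypothetical mapping from the number of ATP-occupied secondary sites
# to step size (nm), chosen so that high ATP favours large steps, as in
# the experiments the model reproduces. Not the authors' fitted values.
STEP_NM = {0: 8.0, 1: 16.0, 2: 24.0, 3: 32.0}

def n_secondary_bound(rng, atp_conc, k_d=0.5):
    """Draw how many of three secondary sites bind ATP. The effective
    dissociation constant grows with each site filled (an assumed
    negative-cooperativity rule mirroring the abstract's assumption)."""
    bound = 0
    for i in range(3):
        p_occ = atp_conc / (atp_conc + k_d * (i + 1))
        if rng.random() < p_occ:
            bound += 1
    return bound

def simulate_steps(atp_conc, n=10_000, seed=1):
    rng = random.Random(seed)
    return [STEP_NM[n_secondary_bound(rng, atp_conc)] for _ in range(n)]

high_atp = simulate_steps(atp_conc=10.0)   # saturating ATP
low_atp = simulate_steps(atp_conc=0.05)    # very low ATP
```

At saturating ATP most draws occupy all three secondary sites and the mean step approaches 32 nm; at very low ATP the distribution collapses toward the smallest step.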

  12. Outward Bound to the Galaxies--One Step at a Time

    ERIC Educational Resources Information Center

    Ward, R. Bruce; Miller-Friedmann, Jaimie; Sienkiewicz, Frank; Antonucci, Paul

    2012-01-01

    Less than a century ago, astronomers began to unlock the cosmic distances within and beyond the Milky Way. Understanding the size and scale of the universe is a continuing, step-by-step process that began with the remarkably accurate measurement of the distance to the Moon made by early Greeks. In part, the authors have ITEAMS (Innovative…

  13. Gait variability in community dwelling adults with Alzheimer disease.

    PubMed

    Webster, Kate E; Merory, John R; Wittwer, Joanne E

    2006-01-01

    Studies have shown that measures of gait variability are associated with falling in older adults. However, few studies have measured gait variability in people with Alzheimer disease, despite the high incidence of falls in Alzheimer disease. The purpose of this study was to compare gait variability of community-dwelling older adults with Alzheimer disease and control subjects at various walking speeds. Ten subjects with mild-moderate Alzheimer disease and ten matched control subjects underwent gait analysis using an electronic walkway. Participants were required to walk at self-selected slow, preferred, and fast speeds. Stride length and step width variability were determined using the coefficient of variation. Results showed that stride length variability was significantly greater in the Alzheimer disease group compared with the control group at all speeds. In both groups, increases in walking speed were significantly correlated with decreases in stride length variability. Step width variability was significantly reduced in the Alzheimer disease group compared with the control group at slow speed only. In conclusion, there is an increase in stride length variability in Alzheimer disease at all walking speeds that may contribute to the increased incidence of falls in Alzheimer disease.
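The variability measure used in the study above, the coefficient of variation, is straightforward to compute; the stride values below are made up for illustration.

```python
from statistics import mean, stdev

def coefficient_of_variation(values):
    """CV (%) = 100 * SD / mean, the normalized variability measure
    used for stride length and step width in this kind of gait study."""
    return 100.0 * stdev(values) / mean(values)

# Illustrative (made-up) stride lengths in cm from one walking trial
strides = [132.0, 128.5, 130.2, 127.8, 131.1, 129.4]
stride_cv = coefficient_of_variation(strides)  # ~1.2%
```

Because CV normalizes by the mean, it allows variability to be compared across subjects walking with different stride lengths or speeds.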

  14. Effects of spatial structure of population size on the population dynamics of barnacles across their elevational range.

    PubMed

    Fukaya, Keiichi; Okuda, Takehiro; Nakaoka, Masahiro; Noda, Takashi

    2014-11-01

Explanations for why population dynamics vary across the range of a species reflect two contrasting hypotheses: (i) temporal variability of populations is larger in the centre of the range compared to the margins because overcompensatory density dependence destabilizes population dynamics and (ii) population variability is larger near the margins, where populations are more susceptible to environmental fluctuations. In both of these hypotheses, positions within the range are assumed to affect population variability. In contrast, the fact that population variability is often related to mean population size implies that the spatial structure of the population size within the range of a species may also be a useful predictor of the spatial variation in temporal variability of population size over the range of the species. To explore how population temporal variability varies spatially and the underlying processes responsible for the spatial variation, we focused on the intertidal barnacle Chthamalus dalli and examined differences in its population dynamics along the tidal levels it inhabits. Changes in coverage of barnacle populations were monitored for 10.5 years at 25 plots spanning the elevational range of this species. Data were analysed by fitting a population dynamics model to estimate the effects of density-dependent and density-independent processes on population growth. We also examined the temporal mean-variance relationship of population size with parameters estimated from the population dynamics model. We found that the relative variability of populations tended to increase from the centre of the elevational range towards the margins because of an increase in the magnitude of stochastic fluctuations of growth rates. Thus, our results supported hypothesis (ii).
We also found that spatial variations in temporal population variability were well characterized by Taylor's power law, the relative population variability being inversely related to the mean population size. Results suggest that understanding the population dynamics of a species over its range may be facilitated by taking the spatial structure of population size into account as well as by considering changes in population processes as a function of position within the range of the species. © 2014 The Authors. Journal of Animal Ecology © 2014 British Ecological Society.
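Taylor's power law, invoked in the record above, states that temporal variance scales as a power of mean population size, var = a · mean^b, so log(var) is linear in log(mean). A minimal synthetic check (the count levels and the multiplicative-noise model are invented for illustration; multiplicative noise implies an exponent near 2):

```python
import math
import random

rng = random.Random(42)
means, variances = [], []
for true_mean in (5, 20, 80, 320):
    # Multiplicative (lognormal) fluctuations around each mean level
    counts = [true_mean * math.exp(rng.gauss(0, 0.3)) for _ in range(50)]
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    means.append(m)
    variances.append(v)

# Least-squares slope of log(var) on log(mean) estimates the exponent b
xs = [math.log(m) for m in means]
ys = [math.log(v) for v in variances]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
```

Empirical populations typically show exponents between 1 (Poisson-like fluctuations) and 2 (purely multiplicative fluctuations), which is why the fitted slope is a useful summary of how variability scales with abundance.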

  15. 30 min of treadmill walking at self-selected speed does not increase gait variability in independent elderly.

    PubMed

    Da Rocha, Emmanuel S; Kunzler, Marcos R; Bobbert, Maarten F; Duysens, Jacques; Carpes, Felipe P

    2018-06-01

Walking is one of the preferred exercises among the elderly, but could prolonged walking increase gait variability, a risk factor for falls in the elderly? Here we determine whether 30 min of treadmill walking increases the coefficient of variation of gait in the elderly. Because gait responses to exercise depend on fitness level, we included 15 sedentary and 15 active elderly. Sedentary participants preferred a lower gait speed and made smaller steps than the active ones. Step length coefficient of variation decreased ~16.9% by the end of the exercise in both groups. Stride length coefficient of variation decreased ~9% after 10 min of walking, and sedentary elderly showed a slightly larger step width coefficient of variation (~2%) at 10 min than active elderly. Active elderly showed a higher walk ratio (step length/cadence) than sedentary elderly at all time points, and walk ratio did not change over time in either group. In conclusion, treadmill gait kinematics differ between sedentary and active elderly, but changes over time are similar in the two groups. As a practical implication, 30 min of walking might be a good exercise strategy for the elderly, independently of fitness level, because it did not increase variability in step and stride kinematics, which is considered a fall risk in this population.

  16. Reduced high-frequency motor neuron firing, EMG fractionation, and gait variability in awake walking ALS mice

    PubMed Central

    Hadzipasic, Muhamed; Ni, Weiming; Nagy, Maria; Steenrod, Natalie; McGinley, Matthew J.; Kaushal, Adi; Thomas, Eleanor; McCormick, David A.

    2016-01-01

    Amyotrophic lateral sclerosis (ALS) is a lethal neurodegenerative disease prominently featuring motor neuron (MN) loss and paralysis. A recent study using whole-cell patch clamp recording of MNs in acute spinal cord slices from symptomatic adult ALS mice showed that the fastest firing MNs are preferentially lost. To measure the in vivo effects of such loss, awake symptomatic-stage ALS mice performing self-initiated walking on a wheel were studied. Both single-unit extracellular recordings within spinal cord MN pools for lower leg flexor and extensor muscles and the electromyograms (EMGs) of the corresponding muscles were recorded. In the ALS mice, we observed absent or truncated high-frequency firing of MNs at the appropriate time in the step cycle and step-to-step variability of the EMG, as well as flexor-extensor coactivation. In turn, kinematic analysis of walking showed step-to-step variability of gait. At the MN level, the higher frequencies absent from recordings from mutant mice corresponded with the upper range of frequencies observed for fast-firing MNs in earlier slice measurements. These results suggest that, in SOD1-linked ALS mice, symptoms are a product of abnormal MN firing due at least in part to loss of neurons that fire at high frequency, associated with altered EMG patterns and hindlimb kinematics during gait. PMID:27821773

  17. Correlates of Injury-forced Work Reduction for Massage Therapists and Bodywork Practitioners.

    PubMed

    Blau, Gary; Monos, Christopher; Boyer, Ed; Davis, Kathleen; Flanagan, Richard; Lopez, Andrea; Tatum, Donna S

    2013-01-01

Injury-forced work reduction (IFWR) has been acknowledged as an all-too-common occurrence for massage therapists and bodywork practitioners (M & Bs). However, little prior research has specifically investigated demographic, work attitude, and perceptual correlates of IFWR among M & Bs. The purpose was to test two hypotheses, H1 and H2. H1 is that the accumulated cost variables set (e.g., accumulated costs, continuing education costs) will account for a significant amount of IFWR variance beyond the control/demographic (e.g., social desirability response bias, gender, years in practice, highest education level) and work attitude/perception variables (e.g., job satisfaction, affective occupation commitment, occupation identification, limited occupation alternatives) sets. H2 is that the two exhaustion variables (i.e., physical exhaustion, work exhaustion) set will account for significant IFWR variance beyond the control/demographic, work attitude/perception, and accumulated cost variables sets. An online survey sample of 2,079 complete-data M & Bs was collected. Stepwise regression analysis was used to test the study hypotheses. The research design first controlled for the control/demographic (Step 1) and work attitude/perception variables sets (Step 2), before testing the successive incremental impact of two variable sets, accumulated costs (Step 3) and exhaustion variables (Step 4), for explaining IFWR. Results supported both study hypotheses: the accumulated cost variables set (H1) and the exhaustion variables set (H2) each significantly explained IFWR after the control/demographic and work attitude/perception variables sets. The most important correlate for explaining IFWR was higher physical exhaustion, but work exhaustion was also significant. It is not just physical "wear and tear", but also "mental fatigue", that can lead to IFWR for M & Bs. Being female, having more years in practice, and having higher continuing education costs were also significant correlates of IFWR.
Lower overall levels of work exhaustion, physical exhaustion, and IFWR were found in the present sample. However, since both types of exhaustion significantly and positively impact IFWR, taking sufficient time between massages and, if possible, varying one's massage technique to replenish one's physical and mental energy seem important. Failure to take required continuing education units, due to high costs, also increases risk for IFWR. Study limitations and future research issues are discussed.
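The four-step design described above is a hierarchical (blockwise) regression: each block of predictors is entered in turn and the increment in R² is examined. A self-contained sketch with invented data, reduced to two blocks (the variable names, effect sizes, and sample are hypothetical, not the study's):

```python
import random

def ols_r2(X, y):
    """R^2 of a least-squares fit y ~ X (intercept included), via the
    normal equations solved by Gaussian elimination. X: list of rows."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                      # forward elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    yhat = [sum(bj * xj for bj, xj in zip(b, r)) for r in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Invented data: block 1 = years in practice, block 2 adds physical exhaustion
rng = random.Random(0)
years = [rng.uniform(1, 20) for _ in range(60)]
exhaustion = [rng.uniform(1, 7) for _ in range(60)]
ifwr = [0.1 * a + 0.8 * b + rng.gauss(0, 0.5) for a, b in zip(years, exhaustion)]

r2_block1 = ols_r2([[a] for a in years], ifwr)
r2_block2 = ols_r2([[a, b] for a, b in zip(years, exhaustion)], ifwr)
delta_r2 = r2_block2 - r2_block1        # incremental variance explained
```

Because the block-1 model is nested inside the block-2 model, R² can only increase; what the hierarchical design tests is whether that increment is statistically significant, which in practice would be done with an F test on delta_r2.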

  18. Smart Hydrogel Particles: Biomarker Harvesting: One-step affinity purification, size exclusion, and protection against degradation

    PubMed Central

    Luchini, Alessandra; Geho, David H.; Bishop, Barney; Tran, Duy; Xia, Cassandra; Dufour, Robert; Jones, Clint; Espina, Virginia; Patanarut, Alexis; Zhu, Weidong; Ross, Mark; Tessitore, Alessandra; Petricoin, Emanuel; Liotta, Lance A.

    2010-01-01

Disease-associated blood biomarkers exist in exceedingly low concentrations within complex mixtures of high-abundance proteins such as albumin. We have introduced an affinity bait molecule into N-isopropylacrylamide to produce a particle that will perform three independent functions within minutes, in one step, in solution: (a) molecular size sieving, (b) affinity capture of all solution-phase target molecules, and (c) complete protection of harvested proteins from enzymatic degradation. The captured analytes can be readily electroeluted for analysis. PMID:18076201

  19. Establishing intensively cultured hybrid poplar plantations for fuel and fiber.

    Treesearch

    Edward Hansen; Lincoln Moore; Daniel Netzer; Michael Ostry; Howard Phipps; Jaroslav Zavitkovski

    1983-01-01

    This paper describes a step-by-step procedure for establishing commercial size intensively cultured plantations of hybrid poplar and summarizes the state-of-knowledge as developed during 10 years of field research at Rhinelander, Wisconsin.

  20. Simulation analyses of space use: Home range estimates, variability, and sample size

    USGS Publications Warehouse

    Bekoff, Marc; Mech, L. David

    1984-01-01

Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should obtain between 100 and 200 locations in order to estimate home range area reliably. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient to allow for a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also have to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data still are needed.
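A simulation in the spirit of the record above (not the authors' code): draw animal locations uniformly from a known "true" home range and watch the mean area estimate rise asymptotically with sample size. A simple bounding box stands in here for the home-range estimator.

```python
import random

def bbox_area(points):
    """Bounding-box area of a set of (x, y) locations; a deliberately
    simple stand-in for a home range estimator."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def mean_area_estimate(n_locations, n_sims=200, seed=7):
    """Mean estimate over repeated simulations, with locations drawn
    uniformly from a 1 x 1 'true' home range (true area = 1.0)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        pts = [(rng.random(), rng.random()) for _ in range(n_locations)]
        total += bbox_area(pts)
    return total / n_sims

# The estimate rises asymptotically toward the true area as sample size grows
small, medium, large = (mean_area_estimate(n) for n in (10, 50, 200))
```

Any estimator built from observed locations (minimum convex polygon, kernel density) shows the same qualitative behaviour: the estimate is biased low at small sample sizes and converges as locations accumulate, which is the basis for the 100-200 location guideline.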

  1. Assistive devices alter gait patterns in Parkinson disease: advantages of the four-wheeled walker.

    PubMed

    Kegelmeyer, Deb A; Parthasarathy, Sowmya; Kostyk, Sandra K; White, Susan E; Kloos, Anne D

    2013-05-01

Gait abnormalities are a hallmark of Parkinson's disease (PD) and contribute to fall risk. Therapy and exercise are often encouraged to increase mobility and decrease falls. As disease symptoms progress, assistive devices are often prescribed. There are no guidelines for choosing appropriate ambulatory devices. This unique study systematically examined the impact of a broad range of assistive devices on gait measures during walking in both a straight path and around obstacles in individuals with PD. Quantitative gait measures, including velocity, stride length, percent swing and double support time, and coefficients of variation were assessed in 27 individuals with PD with or without one of six different devices including canes, standard and wheeled walkers (two, four or U-Step). Data were collected using the GAITRite and on a figure-of-eight course. All devices, with the exception of four-wheeled and U-Step walkers, significantly decreased gait velocity. The four-wheeled walker resulted in less variability in gait measures and had less impact on spontaneous unassisted gait patterns. The U-Step walker exhibited the highest variability across all parameters followed by the two-wheeled and standard walkers. Higher variability has been correlated with increased falls. Though subjects performed better on a figure-of-eight course using either the four-wheeled or the U-Step walker, the four-wheeled walker resulted in the most consistent improvement in overall gait variables. Laser light use on a U-Step walker did not improve gait measures or safety in figure-of-eight compared to other devices. Of the devices tested, the four-wheeled walker offered the most consistent advantages for improving mobility and safety. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Study on experimental characterization of carbon fiber reinforced polymer panel using digital image correlation: A sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Kashfuddoja, Mohammad; Prasath, R. G. R.; Ramji, M.

    2014-11-01

In this work, the experimental characterization of polymer-matrix and polymer based carbon fiber reinforced composite laminate by employing a whole field non-contact digital image correlation (DIC) technique is presented. The properties are evaluated based on full field data obtained from DIC measurements by performing a series of tests as per ASTM standards. The evaluated properties are compared with the results obtained from conventional testing and analytical models and they are found to closely match. Further, the sensitivity of DIC parameters on material properties is investigated and their optimum value is identified. It is found that the subset size has more influence on material properties as compared to step size, and their predicted optimum values for both matrix and composite material are found consistent with each other. The aspect ratio of the region of interest (ROI) chosen for correlation should be the same as the camera resolution aspect ratio for better correlation. Also, an open cutout panel made of the same composite laminate is taken into consideration to demonstrate the sensitivity of DIC parameters on predicting the complex strain field surrounding the hole. It is observed that the strain field surrounding the hole is much more sensitive to step size than to subset size. A lower step size produced a highly pixelated strain field, capturing local strain at the expense of computational time along with a randomly scattered noisy pattern, whereas a higher step size mitigates the noisy pattern at the expense of losing details present in the data and even alters the natural trend of the strain field, leading to erroneous maximum strain locations. The subset size variation mainly presents a smoothing effect, eliminating noise from the strain field while maintaining the details in the data without altering their natural trend.
However, the increase in subset size significantly reduces the strain data at hole edge due to discontinuity in correlation. Also, the DIC results are compared with FEA prediction to ascertain the suitable value of DIC parameters towards better accuracy.

  3. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

A particle motion considering thermophoretic force is simulated by using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem with thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step interval becomes very small when simulating the motion of a large particle. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, which is the number of molecules colliding with the particle in one collision event. A large time step interval is made possible by the collision weight factor: it is about a million times longer than the conventional DSMC time step interval when the particle size is 1 μm, so the computation time is reduced by about a factor of a million. We simulate the motion of a graphite particle under thermophoretic force with DSMC-Neutrals (Particle-PLUS neutral module), commercial software adopting the DSMC method, using the above collision weight factor. The particle is a sphere 1 μm in size. Particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results. Note that Gallis' analytical result for the continuum limit is the same as Waldmann's result.
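The collision-weight-factor idea can be illustrated with toy numbers (all values invented, and this is not the commercial DSMC-Neutrals code): lumping W molecular collisions into a single event multiplies both the per-event momentum transfer and the time step by W, leaving the average force on the particle, and hence its drift, unchanged.

```python
def drift_after_events(n_events, dt, dp, mass, weight=1):
    """Particle velocity gained and elapsed time after n_events
    collision events, each transferring weight * dp momentum and
    advancing time by weight * dt."""
    elapsed = n_events * dt * weight
    velocity = n_events * dp * weight / mass
    return velocity, elapsed

# One million unweighted events vs. a single event with weight 1e6
v_fine, t_fine = drift_after_events(1_000_000, dt=1e-9, dp=1e-18, mass=1e-12)
v_coarse, t_coarse = drift_after_events(1, dt=1e-9, dp=1e-18, mass=1e-12,
                                        weight=1_000_000)
```

Both schemes deliver the same velocity over the same elapsed time, but the weighted scheme does so in a single event, which is the source of the roughly million-fold reduction in computation time claimed above.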

  4. Spatial parameters of walking gait and footedness.

    PubMed

    Zverev, Y P

    2006-01-01

The present study was undertaken to assess whether footedness has effects on selected spatial and angular parameters of able-bodied gait by evaluating footprints of young adults. A total of 112 males and 93 females were selected from among students and staff members of the University of Malawi using a simple random sampling method. Footedness of subjects was assessed by the Waterloo Footedness Questionnaire Revised. Gait at natural speed was recorded using the footprint method. The following spatial parameters of gait were derived from the inked footprint sequences of subjects: step and stride lengths, gait angle and base of gait. The anthropometric measurements taken were weight, height, leg and foot length, foot breadth, shoulder width, and hip and waist circumferences. The prevalence of right-, left- and mixed-footedness in the whole sample of young Malawian adults was 81%, 8.3% and 10.7%, respectively. One-way analysis of variance did not reveal a statistically significant difference between footedness categories in the mean values of anthropometric measurements (p > 0.05 for all variables). Gender differences in step and stride length values were not statistically significant. Correction of these variables for stature did not change the trend. Males had significantly broader steps than females. Normalized values of base of gait showed a similar gender difference. The group means of step length and normalized step length of the right and left feet were similar for both males and females. There was a significant side difference in the gait angle in both gender groups of volunteers, with higher mean values on the left side compared to the right (t = 2.64, p < 0.05 for males, and t = 2.78, p < 0.05 for females).
One-way analysis of variance did not demonstrate significant difference between footedness categories in the mean values of step length, gait angle, bilateral differences in step length and gait angle, stride length, gait base and normalized gait variables of male and female volunteers (p > 0.05 for all variables). The present study demonstrated that footedness does not affect spatial and angular parameters of walking gait.

  5. Differential Effects of Monovalent Cations and Anions on Key Nanoparticle Attributes

    EPA Science Inventory

    Understanding the key particle attributes such as particle size, size distribution and surface charge of both the nano- and micron-sized particles is the first step in drug formulation as such attributes are known to directly influence several characteristics of drugs including d...

  6. Neighbouring populations, opposite dynamics: influence of body size and environmental variation on the demography of stream-resident brown trout (Salmo trutta).

    PubMed

    Fernández-Chacón, Albert; Genovart, Meritxell; Álvarez, David; Cano, José M; Ojanguren, Alfredo F; Rodriguez-Muñoz, Rolando; Nicieza, Alfredo G

    2015-06-01

In organisms such as fish, where body size is considered an important state variable for the study of their population dynamics, size-specific growth and survival rates can be influenced by local variation in both biotic and abiotic factors, but few studies have evaluated the complex relationships between environmental variability and size-dependent processes. We analysed a 6-year capture-recapture dataset of brown trout (Salmo trutta) collected at 3 neighbouring but heterogeneous mountain streams in northern Spain with the aim of investigating the factors shaping the dynamics of local populations. The influence of body size and water temperature on survival and individual growth was assessed under a multi-state modelling framework, an extension of classical capture-recapture models that considers the state (i.e. body size) of the individual in each capture occasion and allows us to obtain state-specific demographic rates and link them to continuous environmental variables. Individual survival and growth patterns varied over space and time, and evidence of size-dependent survival was found in all but the smallest stream. At this stream, the probability of reaching larger sizes was lower compared to the other wider and deeper streams. Water temperature variables performed better in the modelling of the highest-altitude population, explaining over 99% of the variability in maturation transitions and survival of large fish. The relationships between body size, temperature and fitness components found in this study highlight the utility of multi-state approaches to investigate small-scale demographic processes in heterogeneous environments, and to provide reliable ecological knowledge for management purposes.

  7. The Attributes of a Variable-Diameter Rotor System Applied to Civil Tiltrotor Aircraft

    NASA Technical Reports Server (NTRS)

    Brender, Scott; Mark, Hans; Aguilera, Frank

    1996-01-01

    The attributes of a variable diameter rotor concept applied to civil tiltrotor aircraft are investigated using the V/STOL aircraft sizing and performance computer program (VASCOMP). To begin, civil tiltrotor viability issues that motivate advanced rotor designs are discussed. Current work on the variable diameter rotor and a theoretical basis for the advantages of the rotor system are presented. The size and performance of variable diameter and conventional tiltrotor designs for the same baseline mission are then calculated using a modified NASA Ames version of VASCOMP. The aircraft are compared based on gross weight, fuel required, engine size, and autorotative performance for various hover disk loading values. Conclusions about the viability of the resulting designs are presented and a program for further variable diameter rotor research is recommended.

  8. Trends in Solidification Grain Size and Morphology for Additive Manufacturing of Ti-6Al-4V

    NASA Astrophysics Data System (ADS)

    Gockel, Joy; Sheridan, Luke; Narra, Sneha P.; Klingbeil, Nathan W.; Beuth, Jack

    2017-12-01

    Metal additive manufacturing (AM) is used for both prototyping and production of final parts. Therefore, there is a need to predict and control the microstructural size and morphology. Process mapping is an approach that represents AM process outcomes in terms of input variables. In this work, analytical, numerical, and experimental approaches are combined to provide a holistic view of trends in the solidification grain structure of Ti-6Al-4V across a wide range of AM process input variables. The thermal gradient is shown to vary significantly through the depth of the melt pool, which precludes development of fully equiaxed microstructure throughout the depth of the deposit within any practical range of AM process variables. A strategy for grain size control is demonstrated based on the relationship between melt pool size and grain size across multiple deposit geometries, and additional factors affecting grain size are discussed.

  9. Size, Loading Efficiency, and Cytotoxicity of Albumin-Loaded Chitosan Nanoparticles: An Artificial Neural Networks Study.

    PubMed

    Baharifar, Hadi; Amani, Amir

    2017-01-01

When designing nanoparticles for drug delivery, many variables such as size, loading efficiency, and cytotoxicity should be considered. Usually, smaller particles are preferred in drug delivery because of longer blood circulation time and their ability to escape from the immune system, whereas smaller nanoparticles often show increased toxicity. Determination of the parameters which affect particle size and of factors such as loading efficiency and cytotoxicity could be very helpful in designing drug delivery systems. In this work, albumin (as a protein drug model)-loaded chitosan nanoparticles were prepared by the polyelectrolyte complexation method. Simultaneously, the effects of 4 independent variables including chitosan and albumin concentrations, pH, and reaction time were determined on 3 dependent variables (i.e., size, loading efficiency, and cytotoxicity) by artificial neural networks. Results showed that the concentrations of the initial materials are the most important factors affecting the dependent variables. A drop in the concentrations decreases the size directly, but simultaneously decreases loading efficiency and increases cytotoxicity. Therefore, an optimization of the independent variables is required to obtain the most useful preparation. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  10. Control of locomotor stability in stabilizing and destabilizing environments.

    PubMed

    Wu, Mengnan/Mary; Brown, Geoffrey; Gordon, Keith E

    2017-06-01

To develop effective interventions targeting locomotor stability, it is crucial to understand how people control and modify gait in response to changes in stabilization requirements. Our purpose was to examine how individuals with and without incomplete spinal cord injury (iSCI) control lateral stability in haptic walking environments that increase or decrease stabilization demands. We hypothesized that people would adapt to walking in a predictable, stabilizing viscous force field and an unpredictable destabilizing force field by increasing and decreasing feedforward control of lateral stability, respectively. Adaptations in feedforward control were measured using after-effects when fields were removed. Both groups significantly (p<0.05) decreased step width in the stabilizing field. When the stabilizing field was removed, narrower steps persisted in both groups and subjects with iSCI significantly increased movement variability (p<0.05). The after-effect of walking in the stabilizing field was a suppression of ongoing general stabilization mechanisms. In the destabilizing field, subjects with iSCI took faster steps and increased lateral margins of stability (p<0.05). Step frequency increases persisted when the destabilizing field was removed (p<0.05), suggesting that subjects with iSCI made feedforward adaptations to increase control of lateral stability. In contrast, in the destabilizing field, non-impaired subjects increased movement variability (p<0.05) and did not change step width, step frequency, or lateral margin of stability (p>0.05). When the destabilizing field was removed, increases in movement variability persisted (p<0.05), suggesting that non-impaired subjects made feedforward decreases in resistance to perturbations. Published by Elsevier B.V.
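The lateral margin of stability reported above is commonly computed from Hof's extrapolated centre of mass, XcoM = x + v/ω₀ with ω₀ = √(g/l): the margin is the distance from XcoM to the lateral base-of-support boundary. A minimal sketch, assuming Hof's definition and using invented mid-stance values:

```python
import math

def lateral_margin_of_stability(com_pos, com_vel, bos_edge, leg_length, g=9.81):
    """Hof's margin of stability: distance from the extrapolated centre
    of mass XcoM = x + v / omega0 (omega0 = sqrt(g / l)) to the lateral
    base-of-support boundary. Positive values indicate lateral stability."""
    omega0 = math.sqrt(g / leg_length)
    xcom = com_pos + com_vel / omega0
    return bos_edge - xcom

# Invented mid-stance values (m, m/s): CoM 8 cm medial of the lateral
# boundary of support, drifting laterally at 0.10 m/s
mos = lateral_margin_of_stability(com_pos=0.02, com_vel=0.10,
                                  bos_edge=0.10, leg_length=0.9)  # ~0.05 m
```

Including CoM velocity is what makes this measure dynamic: a CoM that is inside the base of support but moving outward quickly can still yield a small or negative margin.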

  11. Mind your step: metabolic energy cost while walking an enforced gait pattern.

    PubMed

    Wezenberg, D; de Haan, A; van Bennekom, C A M; Houdijk, H

    2011-04-01

    The energy cost of walking can be attributed to energy related to the walking movement and energy related to balance control. In order to differentiate between the two components, we investigated the energy cost of walking an enforced step pattern, thereby perturbing balance while the walking movement is preserved. Nine healthy subjects walked three times at comfortable walking speed on an instrumented treadmill. The first trial consisted of unconstrained walking. In the next two trials, subjects walked while following a step pattern projected on the treadmill. The projected steps were either composed of the averaged step characteristics (periodic trial) or were an exact copy, including the variability, of the steps taken while walking unconstrained (variable trial). Metabolic energy cost was assessed, and center of pressure profiles were analyzed to determine task performance and to gain insight into the balance control strategies applied. Results showed that the metabolic energy cost was significantly higher in both the periodic and variable trials (8% and 13%, respectively) compared to unconstrained walking. The variation in center of pressure trajectories during single limb support was higher when a gait pattern was enforced, indicating a more active ankle strategy. The increased metabolic energy cost could originate from increased preparatory muscle activation to ensure proper foot placement and from a more active ankle strategy to control lateral balance. These results indicate that the metabolic energy cost of walking can be influenced significantly by control strategies that do not necessarily alter global gait characteristics. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Process Variability and Capability in Candy Production and Packaging

    ERIC Educational Resources Information Center

    Lembke, Ronald S.

    2016-01-01

    In this short, in-class activity, students use fun size packages of M&Ms to study process variability, including a real-world application of C_pk. How process variability and legal requirements force the company to put "Not Labeled for Individual Retail Sale" on each fun size package is discussed, as is the economics of…
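    The capability index C_pk used in the activity above has a standard textbook definition: the distance from the process mean to the nearer specification limit, in units of three standard deviations. A minimal sketch; the fill weights and specification limits below are invented for illustration, not taken from the activity:

```python
def cpk(mean, std, lsl, usl):
    """Process capability index C_pk: distance from the process mean to
    the nearer specification limit, in units of 3 standard deviations."""
    return min(usl - mean, mean - lsl) / (3.0 * std)

# Hypothetical fill weights for a fun-size package: specification
# limits 19-23 g, observed mean 21.4 g, standard deviation 0.5 g.
print(round(cpk(21.4, 0.5, 19.0, 23.0), 3))   # → 1.067
```

    A common rule of thumb is to require C_pk ≥ 1.33 (the nearer limit at least four standard deviations from the mean); values near 1 mean a non-trivial fraction of packages fall outside the labeled weight.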

  13. Variability in human body size

    NASA Technical Reports Server (NTRS)

    Annis, J. F.

    1978-01-01

    The range of variability found among homogeneous groups is described and illustrated. Those trends that show significantly marked differences between sexes and among a number of racial/ethnic groups are also presented. Causes of human-body size variability discussed include genetic endowment, aging, nutrition, protective garments, and occupation. The information is presented to aid design engineers of space flight hardware and equipment.

  14. The Effect of Camera Angle and Image Size on Source Credibility and Interpersonal Attraction.

    ERIC Educational Resources Information Center

    McCain, Thomas A.; Wakshlag, Jacob J.

    The purpose of this study was to examine the effects of two nonverbal visual variables (camera angle and image size) on variables developed in a nonmediated context (source credibility and interpersonal attraction). Camera angle and image size were manipulated in eight video taped television newscasts which were subsequently presented to eight…

  15. A time to search: finding the meaning of variable activation energy.

    PubMed

    Vyazovkin, Sergey

    2016-07-28

    This review deals with the phenomenon of variable activation energy frequently observed when studying the kinetics in the liquid or solid phase. This phenomenon commonly manifests itself through nonlinear Arrhenius plots or dependencies of the activation energy on conversion computed by isoconversional methods. Variable activation energy signifies a multi-step process and has a meaning of a collective parameter linked to the activation energies of individual steps. It is demonstrated that by using appropriate models of the processes, the link can be established in algebraic form. This allows one to analyze experimentally observed dependencies of the activation energy in a quantitative fashion and, as a result, to obtain activation energies of individual steps, to evaluate and predict other important parameters of the process, and generally to gain deeper kinetic and mechanistic insights. This review provides multiple examples of such analysis as applied to the processes of crosslinking polymerization, crystallization and melting of polymers, gelation, and solid-solid morphological and glass transitions. The use of appropriate computational techniques is discussed as well.
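    The nonlinear Arrhenius plots described in this review are typically quantified through the local slope, E = -R · d(ln k)/d(1/T). A short sketch with invented rate constants (not data from the review) showing how a curved plot yields a temperature-dependent apparent activation energy:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def local_activation_energy(T1, k1, T2, k2):
    """Apparent activation energy from the local slope of an Arrhenius
    plot between two neighboring temperatures: E = -R * d(ln k)/d(1/T)."""
    return -R * (math.log(k2) - math.log(k1)) / (1.0 / T2 - 1.0 / T1)

# Invented rate constants producing a curved (non-Arrhenius) plot:
data = [(300.0, 1.0e-4), (320.0, 8.0e-4), (340.0, 3.0e-3)]
for (T1, k1), (T2, k2) in zip(data, data[1:]):
    E = local_activation_energy(T1, k1, T2, k2)
    print(f"~{0.5 * (T1 + T2):.0f} K: E = {E / 1000:.0f} kJ/mol")
```

    For these made-up numbers the apparent activation energy falls from roughly 83 to 60 kJ/mol as temperature rises, exactly the signature of a multi-step process that the review interprets mechanistically.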

  16. Individual Colorimetric Observer Model

    PubMed Central

    Asano, Yuta; Fairchild, Mark D.; Blondé, Laurent

    2016-01-01

    This study proposes a vision model for individual colorimetric observers. The proposed model can be beneficial in many color-critical applications such as color grading and soft proofing to assess ranges of color matches instead of a single average match. We extended the CIE 2006 physiological observer by adding eight additional physiological parameters to model individual color-normal observers. These eight parameters control lens pigment density, macular pigment density, optical densities of L-, M-, and S-cone photopigments, and λmax shifts of L-, M-, and S-cone photopigments. By identifying the variability of each physiological parameter, the model can simulate color matching functions among color-normal populations using Monte Carlo simulation. The variabilities of the eight parameters were identified through two steps. In the first step, extensive reviews of past studies were performed for each of the eight physiological parameters. In the second step, the obtained variabilities were scaled to fit a color matching dataset. The model was validated using three different datasets: traditional color matching, applied color matching, and Rayleigh matches. PMID:26862905

  17. Trainer variability during step training after spinal cord injury: Implications for robotic gait-training device design.

    PubMed

    Galvez, Jose A; Budovitch, Amy; Harkema, Susan J; Reinkensmeyer, David J

    2011-01-01

    Robotic devices are being developed to automate repetitive aspects of walking retraining after neurological injuries, in part because they might improve the consistency and quality of training. However, it is unclear how inconsistent manual training actually is or whether stepping quality depends strongly on the trainers' manual skill. The objective of this study was to quantify trainer variability of manual skill during step training using body-weight support on a treadmill and assess factors of trainer skill. We attached a sensorized orthosis to one leg of each patient with spinal cord injury and measured the shank kinematics and forces exerted by different trainers during six training sessions. An expert trainer rated the trainers' skill level based on videotape recordings. Between-trainer force variability was substantial, about two times greater than within-trainer variability. Trainer skill rating correlated strongly with two gait features: better knee extension during stance and fewer episodes of toe dragging. Better knee extension correlated directly with larger knee horizontal assistance force, but better toe clearance did not correlate with larger ankle push-up force; rather, it correlated with better knee and hip extension. These results are useful to inform robotic gait-training design.

  18. A multilayer concentric filter device to diminish clogging for separation of particles and microalgae based on size.

    PubMed

    Chen, Chih-Chung; Chen, Yu-An; Liu, Yi-Ju; Yao, Da-Jeng

    2014-04-21

    Microalgae species have great economic importance; they are a source of medicines, health foods, animal feeds, industrial pigments, cosmetic additives and biodiesel. Specific microalgae species collected from the environment must be isolated for examination and further application, but their varied size and culture conditions make their isolation using conventional methods, such as filtration, streaking plate and flow cytometric sorting, labour-intensive and costly. A separation device based on size is one of the most rapid, simple and inexpensive methods to separate microalgae, but this approach encounters the major disadvantages of clogging and multiple filtration steps when the size of microalgae varies over a wide range. In this work, we propose a multilayer concentric filter device with varied pore sizes, driven by centrifugal force. The device, which includes multiple filter layers, was employed to separate a heterogeneous population of microparticles into several subpopulations by filtration in one step. A cross-flow to attenuate prospective clogging was generated by altering the rate of rotation instantly through the relative motion between the fluid and the filter, according to the structural design of the device. Mixed microparticles of varied size were tested to demonstrate that clogging was significantly suppressed while separation remained highly efficient. Microalgae in a heterogeneous population collected from an environmental soil sample were separated and enriched into four subpopulations according to size in a one-step filtration process. A microalgae sample contaminated with bacteria and insect eggs was also tested to prove the decontamination capability of the device.

  19. Body size as a latent variable in a structural equation model: thermal acclimation and energetics of the leaf-eared mouse.

    PubMed

    Nespolo, Roberto F; Arim, Matías; Bozinovic, Francisco

    2003-07-01

    Body size is one of the most important determinants of energy metabolism in mammals. However, the usual physiological variables measured to characterize energy metabolism and heat dissipation in endotherms are strongly affected by thermal acclimation, and are also correlated among themselves. In addition to choosing the appropriate measurement of body size, these problems create additional complications when analyzing the relationships among physiological variables such as basal metabolism, non-shivering thermogenesis, thermoregulatory maximum metabolic rate and minimum thermal conductance, body size dependence, and the effect of thermal acclimation on them. We measured these variables in Phyllotis darwini, a murid rodent from central Chile, under conditions of warm and cold acclimation. In addition to standard statistical analyses to determine the effect of thermal acclimation on each variable and the body-mass-controlled correlation among them, we performed a Structural Equation Modeling analysis to evaluate the effects of three different measurements of body size (body mass, m(b); body length, L(b) and foot length, L(f)) on energy metabolism and thermal conductance. We found that thermal acclimation changed the correlation among physiological variables. Only cold-acclimated animals supported our a priori path models, and m(b) appeared to be the best descriptor of body size (compared with L(b) and L(f)) when dealing with energy metabolism and thermal conductance. However, while m(b) appeared to be the strongest determinant of energy metabolism, there was an important and significant contribution of L(b) (but not L(f)) to thermal conductance. This study demonstrates how additional information can be drawn from physiological ecology and general organismal studies by applying Structural Equation Modeling when multiple variables are measured in the same individuals.

  20. Eye-size variability in deep-sea lanternfishes (Myctophidae): an ecological and phylogenetic study.

    PubMed

    de Busserolles, Fanny; Fitzpatrick, John L; Paxton, John R; Marshall, N Justin; Collin, Shaun P

    2013-01-01

    One of the most common visual adaptations seen in the mesopelagic zone (200-1000 m), where the amount of light diminishes exponentially with depth and where bioluminescent organisms predominate, is the enlargement of the eye and pupil area. However, it remains unclear how eye size is influenced by depth, other environmental conditions and phylogeny. In this study, we determine the factors influencing variability in eye size and assess whether this variability is explained by ecological differences in habitat and lifestyle within a family of mesopelagic fishes characterized by broad intra- and interspecific variance in depth range and luminous patterns. We focus our study on the lanternfish family (Myctophidae) and hypothesise that lanternfishes with a deeper distribution and/or a reduction of bioluminescent emissions have smaller eyes and that ecological factors rather than phylogenetic relationships will drive the evolution of the visual system. Eye diameter and standard length were measured in 237 individuals from 61 species of lanternfishes representing all the recognised tribes within the family in addition to compiling an ecological dataset including depth distribution during night and day and the location and sexual dimorphism of luminous organs. Hypotheses were tested by investigating the relationship between the relative size of the eye (corrected for body size) and variations in depth and/or patterns of luminous-organs using phylogenetic comparative analyses. Results show a great variability in relative eye size within the Myctophidae at all taxonomic levels (from subfamily to genus), suggesting that this character may have evolved several times. However, variability in eye size within the family could not be explained by any of our ecological variables (bioluminescence and depth patterns), and appears to be driven solely by phylogenetic relationships.

  2. Dietary behaviors and portion sizes of Black women who enrolled in SisterTalk and variation by demographic characteristics

    PubMed Central

    Gans, Kim M.; Risica, Patricia Markham; Kirtania, Usree; Jennings, Alishia; Strolla, Leslie O.; Steiner-Asiedu, Matilda; Hardy, Norma; Lasater, Thomas M.

    2009-01-01

    Objective To describe the dietary behaviors of Black women who enrolled in the SisterTalk weight control study. Design Baseline data collected via telephone survey and in-person screening. Setting Boston, MA and surrounding areas. Participants A total of 461 Black women completed the baseline. Variables Measured Measured height and weight; self reported demographics, risk factors, and dietary variables including fat-related eating behaviors, food portion size, fruit, vegetable, and beverage intake. Analysis Descriptive analyses for demographic, risk factors and dietary variables; ANOVA models with Food Habits Questionnaire (FHQ) scores as the dependent variable and demographic categories as the independent variables; ANOVA models with individual FHQ item scores as the dependent variable, and ethnic identification as the independent variable. Results The data indicate a low prevalence of many fat lowering behaviors. More than 60% reported eating less than five servings of fruits and vegetables per day. Self-reported portion sizes were large for most foods. Older age, being born outside the US, living without children and being retired were significantly associated with a higher prevalence of fat-lowering behaviors. The frequency of specific fat-lowering behaviors and portion size also differed by ethnic identification. Conclusions and Implications The findings support the need for culturally appropriate interventions to improve the dietary intake of Black Americans. Further studies should examine the dietary habits, food preparation methods and portion sizes of diverse groups of Black women and how such habits may differ by demographics. PMID:19161918

  3. Spatial variability in plankton biomass and hydrographic variables along an axial transect in Chesapeake Bay

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Roman, M.; Kimmel, D.; McGilliard, C.; Boicourt, W.

    2006-05-01

    High-resolution, axial sampling surveys were conducted in Chesapeake Bay during April, July, and October from 1996 to 2000 using a towed sampling device equipped with sensors for depth, temperature, conductivity, oxygen, fluorescence, and an optical plankton counter (OPC). The results suggest that the axial distribution and variability of hydrographic and biological parameters in Chesapeake Bay were primarily influenced by the source and magnitude of freshwater input. Bay-wide spatial trends in the water column-averaged values of salinity were linear functions of distance from the main source of freshwater, the Susquehanna River, at the head of the bay. However, spatial trends in the water column-averaged values of temperature, dissolved oxygen, chlorophyll-a and zooplankton biomass were nonlinear along the axis of the bay. Autocorrelation analysis and the residuals of linear and quadratic regressions between each variable and latitude were used to quantify the patch sizes for each axial transect. The patch sizes of each variable depended on whether the data were detrended, and the detrending techniques applied. However, the patch size of each variable was generally larger using the original data compared to the detrended data. The patch sizes of salinity were larger than those for dissolved oxygen, chlorophyll-a and zooplankton biomass, suggesting that more localized processes influence the production and consumption of plankton. This high-resolution quantification of the zooplankton spatial variability and patch size can be used for more realistic assessments of the zooplankton forage base for larval fish species.
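    As a rough illustration of the patch-size analysis described above (detrending followed by autocorrelation), the sketch below applies one common heuristic, the first zero-crossing lag of the autocorrelation function, to a synthetic transect. The 20 km patch wavelength, linear trend, and 1 km sampling are invented, not values from the study:

```python
import numpy as np

def patch_size(values, spacing_km, detrend=True):
    """Patch size along a transect, estimated as the first lag at which
    the (optionally linearly detrended) autocorrelation turns negative."""
    x = np.asarray(values, dtype=float)
    if detrend:
        pos = np.arange(x.size)
        x = x - np.polyval(np.polyfit(pos, x, 1), pos)  # remove linear trend
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]  # lags 0..N-1
    acf = acf / acf[0]
    first_negative = int(np.argmax(acf < 0))
    return first_negative * spacing_km

# Synthetic transect: ~20 km periodic "patches" superimposed on a
# linear, salinity-like axial trend, sampled every 1 km.
dist = np.arange(0.0, 200.0, 1.0)
signal = 0.05 * dist + np.sin(2 * np.pi * dist / 20.0)
print(patch_size(signal, spacing_km=1.0))   # about a quarter wavelength
```

    As the abstract notes, whether the data are detrended changes the answer: without `detrend=True`, the large-scale trend dominates the autocorrelation and inflates the apparent patch size.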

  4. Study of mesoporous CdS-quantum-dot-sensitized TiO2 films by using X-ray photoelectron spectroscopy and AFM

    PubMed Central

    Wojcieszak, Robert; Raj, Gijo

    2014-01-01

    CdS quantum dots were grown on mesoporous TiO2 films by successive ionic layer adsorption and reaction processes in order to obtain CdS particles of various sizes. AFM analysis shows that the growth of the CdS particles is a two-step process. The first step is the formation of new crystallites at each deposition cycle. In the next step, the pre-deposited crystallites grow to form larger aggregates. Special attention is paid to the estimation of the CdS particle size by X-ray photoelectron spectroscopy (XPS). Among the classical methods of characterization, the XPS model is described in detail. To validate the XPS model, the results are compared to those obtained from AFM analysis and to the evolution of the band gap energy of the CdS nanoparticles as obtained by UV–vis spectroscopy. The results showed that the XPS technique is a powerful tool for estimating CdS particle size. In conjunction with these results, a very good correlation has been found between the number of deposition cycles and the particle size. PMID:24605274

  5. Impact of encoding depth on awareness of perceptual effects in recognition memory.

    PubMed

    Gardiner, J M; Gregg, V H; Mashru, R; Thaman, M

    2001-04-01

    Pictorial stimuli are more likely to be recognized if they are the same size, rather than a different size, at study and at test. This size congruency effect was replicated in two experiments in which the encoding variables were respectively undivided versus divided attention and level of processing. In terms of performance, these variables influenced recognition and did not influence size congruency effects. But in terms of awareness, measured by remember and know responses, these variables did influence size congruency effects. With undivided attention and with a deep level of processing, size congruency effects occurred only in remembering. With divided attention and with a shallow level of processing, size congruency effects occurred only in knowing. The results show that effects that occur in remembering may also occur independently in knowing. They support theories in which remembering and knowing reflect different memory processes or systems. They do not support the theory that remembering and knowing reflect differences in trace strength.

  6. Analytical Derivation of Power Laws in Firm Size Variables from Gibrat's Law and Quasi-inversion Symmetry: A Geomorphological Approach

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atushi; Fujimoto, Shouji; Mizuno, Takayuki; Watanabe, Tsutomu

    2014-03-01

    We start from Gibrat's law and quasi-inversion symmetry for three firm size variables (i.e., tangible fixed assets K, number of employees L, and sales Y) and derive a partial differential equation to be satisfied by the joint probability density function of K and L. We then transform K and L, which are correlated, into two independent variables by applying surface openness used in geomorphology and provide an analytical solution to the partial differential equation. Using worldwide data on the firm size variables for companies, we confirm that the estimates on the power-law exponents of K, L, and Y satisfy a relationship implied by the theory.
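    Gibrat's law (growth rates independent of current size) plus a lower reflecting barrier is the textbook route to power-law firm-size distributions, and it is easy to check numerically. The sketch below is a generic simulation, not the authors' derivation; the drift and volatility are chosen so the theoretical tail exponent 2m/σ² equals 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gibrat's law: multiplicative i.i.d. growth shocks, independent of size.
# A reflecting lower bound (a minimum viable firm size) turns the
# resulting lognormal into a power-law tail with exponent 2*m/sigma**2
# for log-growth shocks N(-m, sigma**2).
n_firms, n_steps, floor = 20000, 1000, 1.0
m, sigma = 0.05, np.sqrt(0.1)      # tail exponent 2*m/sigma**2 = 1
size = np.ones(n_firms)
for _ in range(n_steps):
    size *= rng.lognormal(mean=-m, sigma=sigma, size=n_firms)
    size = np.maximum(size, floor)  # reflecting barrier

# Hill estimator of the tail exponent from the top 5% of firms
tail = np.sort(size)[-n_firms // 20:]
alpha = 1.0 / np.mean(np.log(tail / tail[0]))
print(f"estimated tail exponent: {alpha:.2f}")   # near 1 (Zipf-like)
```

    The same machinery applies to any of the three firm-size variables (K, L, Y); the paper's contribution is deriving the joint behavior of the correlated variables analytically rather than by simulation.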

  7. Modeling solute clustering in the diffusion layer around a growing crystal.

    PubMed

    Shiau, Lie-Ding; Lu, Yung-Fang

    2009-03-07

    The mechanism of crystal growth from solution is often thought to consist of a mass transfer diffusion step followed by a surface reaction step. Solute molecules might form clusters in the diffusion step before incorporating into the crystal lattice. A model is proposed in this work to simulate the evolution of the cluster size distribution due to the simultaneous aggregation and breakage of solute molecules in the diffusion layer around a growing crystal in the stirred solution. The crystallization of KAl(SO4)2·12H2O from aqueous solution is studied to illustrate the effect of supersaturation and diffusion layer thickness on the number-average degree of clustering and the size distribution of solute clusters in the diffusion layer.

  8. Monte-Carlo simulation of a stochastic differential equation

    NASA Astrophysics Data System (ADS)

    Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG

    2017-12-01

    For solving higher dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered to be more effective than other algorithms, such as the finite element method or the finite difference method. The inhomogeneity of the diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, higher-order methods have been put forward to allow MC codes to take larger step sizes. The main focus of this work is to look for operators that can produce converging results for large step sizes. As a first step, our comparative analysis is applied to a general stochastic problem. Subsequently, our formulation is applied to the problem of pitch angle scattering resulting from Coulomb collisions of charged particles in toroidal devices.
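    The baseline against which such higher-order operators are compared is the first-order Euler–Maruyama scheme, whose weak (mean) error shrinks roughly in proportion to the step size. A self-contained sketch on an Ornstein–Uhlenbeck test problem, chosen here because its exact mean is known; it is not the collision problem from the paper:

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, T, n_steps, n_paths, rng):
    """Vectorized Euler-Maruyama integrator for dX = a(X) dt + b(X) dW."""
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + drift(x) * dt + diffusion(x) * dw
    return x

# Ornstein-Uhlenbeck test problem dX = -X dt + 0.5 dW with X(0) = 1:
# the exact mean at T = 1 is exp(-1).  Shrinking the step size by 10x
# shrinks the weak (mean) error of this first-order scheme accordingly.
rng = np.random.default_rng(42)
errors = {}
for n in (10, 100):
    x = euler_maruyama(1.0, lambda x: -x, lambda x: 0.5, T=1.0,
                       n_steps=n, n_paths=200_000, rng=rng)
    errors[n] = abs(x.mean() - np.exp(-1.0))
    print(n, errors[n])
```

    Higher-order weak schemes aim to make the error at the coarse step size comparable to what Euler–Maruyama achieves only at the fine one, which is exactly the "large step size" goal stated in the abstract.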

  9. Impact of Stepping Stones Triple P on Parents with a Child Diagnosed with Autism Spectrum Disorder: Implications for School Psychologists

    ERIC Educational Resources Information Center

    VanVoorhis, Richard W.; Miller, Kenneth L.; Miller, Susan M.; Stull, Judith C.

    2015-01-01

    The Stepping Stones Positive Parenting Program (Stepping Stones Triple P; SSTP) was designed for caregivers of children with disabilities to improve select parental variables such as parenting styles, parental satisfaction, and parental competency, and to reduce parental stress and child problem behaviors. This study focused on SSTP training for…

  10. Associations between the Objectively Measured Office Environment and Workplace Step Count and Sitting Time: Cross-Sectional Analyses from the Active Buildings Study.

    PubMed

    Fisher, Abi; Ucci, Marcella; Smith, Lee; Sawyer, Alexia; Spinney, Richard; Konstantatou, Marina; Marmot, Alexi

    2018-06-01

    Office-based workers spend a large proportion of the day sitting and tend to have low overall activity levels. Despite some evidence that features of the external physical environment are associated with physical activity, little is known about the influence of the spatial layout of the internal environment on movement, and the majority of data use self-report. This study investigated associations between objectively-measured sitting time and activity levels and the spatial layout of office floors in a sample of UK office-based workers. Participants wore activPAL accelerometers for at least three consecutive workdays. Primary outcomes were steps and proportion of sitting time per working hour. Primary exposures were office spatial layout, which was objectively-measured by deriving key spatial variables: 'distance from each workstation to key office destinations', 'distance from participant's workstation to all other workstations', 'visibility of co-workers', and workstation 'closeness'. 131 participants from 10 organisations were included. Fifty-four per cent were female, 81% were white, and the majority had a managerial or professional role (72%) in their organisation. The average proportion of the working hour spent sitting was 0.7 (SD 0.15); participants took on average 444 (SD 210) steps per working hour. Models adjusted for confounders revealed significant negative associations between step count and distance from each workstation to all other office destinations (e.g., B = -4.66, 95% CI: -8.12, -1.12, p < 0.01) and nearest office destinations (e.g., B = -6.45, 95% CI: -11.88, -0.41, p < 0.05) and visibility of workstations when standing (B = -2.35, 95% CI: -3.53, -1.18, p < 0.001). The magnitude of these associations was small. There were no associations between spatial variables and sitting time per work hour. 
Contrary to our hypothesis, the further participants were from office destinations the less they walked, suggesting that changing the relative distance between workstations and other destinations on the same floor may not be the most fruitful target for promoting walking and reducing sitting in the workplace. However, reported effect sizes were very small and based on cross-sectional analyses. The approaches developed in this study could be applied to other office buildings to establish whether a specific office typology may yield more promising results.

  11. Unstable vicinal crystal growth from cellular automata

    NASA Astrophysics Data System (ADS)

    Krasteva, A.; Popova, H.; KrzyŻewski, F.; Załuska-Kotur, M.; Tonchev, V.

    2016-03-01

    In order to study the unstable step motion on vicinal crystal surfaces we devise vicinal Cellular Automata. Each cell in the colony has a value equal to its height in the vicinal surface; initially the steps are regularly distributed. Another array keeps the adatoms, initially distributed randomly over the surface. The growth rule specifies that each adatom at a right nearest neighbor position to a (multi-)step attaches to it. The update of the whole colony is performed at once, and then time is incremented. This execution of the growth rule is followed by compensation of the consumed particles and by diffusional update(s) of the adatom population. Two principal sources of instability are employed: biased diffusion and an infinite inverse Ehrlich-Schwoebel barrier (iiSE). Since these factors are not opposed by step-step repulsion, the formation of multi-steps is observed, but in general the step bunches preserve a finite width. We monitor the developing surface patterns and quantify the observations by scaling laws, with a focus on the eventual transition from a diffusion-limited to a kinetics-limited phenomenon. The time-scaling exponent of the bunch size N is 1/2 for the case of biased diffusion and 1/3 for the case of iiSE. Additional distinction is possible based on the time-scaling exponents of the sizes of multi-steps Nmulti; these are 0.36–0.4 (for biased diffusion) and 1/4 (iiSE).
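    The time-scaling exponents quoted above are the kind of quantity extracted from a log-log fit of bunch size against time. A minimal sketch on synthetic power-law data (the prefactors and time range are arbitrary, not simulation output from the paper):

```python
import numpy as np

def scaling_exponent(t, n):
    """Least-squares slope of log N versus log t, i.e. beta in N ~ t**beta."""
    return np.polyfit(np.log(t), np.log(n), 1)[0]

# Synthetic bunch-size data mimicking the two reported regimes:
t = np.logspace(1, 4, 30)
n_biased = 2.0 * t ** 0.5          # biased diffusion: beta = 1/2
n_iise = 1.5 * t ** (1.0 / 3.0)    # infinite inverse ES barrier: beta = 1/3
print(round(scaling_exponent(t, n_biased), 3),
      round(scaling_exponent(t, n_iise), 3))   # → 0.5 0.333
```

    On real simulation data the early transient is usually discarded before fitting, since the scaling law only holds asymptotically.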

  12. Unified gas-kinetic scheme with multigrid convergence for rarefied flow study

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2017-09-01

The unified gas kinetic scheme (UGKS) is based on direct modeling of gas dynamics on the mesh size and time step scales. With the modeling of particle transport and collision in a time-dependent flux function in a finite volume framework, the UGKS can connect the flow physics smoothly from the kinetic particle transport to the hydrodynamic wave propagation. In comparison with the direct simulation Monte Carlo (DSMC) method, the current equation-based UGKS can implement implicit techniques in the updates of macroscopic conservative variables and microscopic distribution functions. The implicit UGKS significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near-continuum regimes. In order to further improve the computational efficiency, for the first time, a geometric multigrid technique is introduced into the implicit UGKS, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. More specifically, a full approximation nonlinear system is employed in the prediction step for fast evaluation of the equilibrium state, and a correction linear equation is solved in the evolution step for the update of the gas distribution function. As a result, the convergence speed has been greatly improved in all flow regimes, from the rarefied to the continuum one. The multigrid implicit UGKS (MIUGKS) is used in the non-equilibrium flow study, which includes microflows, such as lid-driven cavity flow and the flow passing through a finite-length flat plate, and high-speed flows, such as supersonic flow over a square cylinder. The MIUGKS shows a 5-9 times efficiency increase over the previous implicit scheme. For low-speed microflow, the efficiency of the MIUGKS is several orders of magnitude higher than that of the DSMC. Even for hypersonic flow at Mach number 5 and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the DSMC method for obtaining a convergent steady-state solution.
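    The multigrid correction idea described in this record (smooth on the fine grid, restrict the residual, solve a cheap correction equation on a coarse grid, prolong the correction back, smooth again) can be sketched on a toy problem. The following is an illustrative two-grid cycle for a 1-D Poisson equation, not the UGKS itself; the model problem, function names, and parameter choices are assumptions made purely for the demonstration.

    ```python
    # Illustrative sketch only: a two-grid correction cycle for -u'' = f on
    # [0, 1] with zero boundary values, showing the generic geometric-multigrid
    # pattern (pre-smooth, restrict residual, coarse correction, prolong,
    # post-smooth) that the abstract applies to the implicit UGKS update.

    def jacobi(u, f, h, sweeps):
        """Damped Jacobi smoothing for -u'' = f (boundary values held at 0)."""
        n = len(u)
        for _ in range(sweeps):
            new = u[:]
            for i in range(1, n - 1):
                new[i] = u[i] + 0.8 * (0.5 * (u[i-1] + u[i+1] + h*h*f[i]) - u[i])
            u = new
        return u

    def residual(u, f, h):
        """r = f - A u for the standard second-order difference operator."""
        n = len(u)
        r = [0.0] * n
        for i in range(1, n - 1):
            r[i] = f[i] - (-u[i-1] + 2.0*u[i] - u[i+1]) / (h*h)
        return r

    def restrict(r):
        """Full-weighting restriction onto a grid with half the intervals."""
        m = len(r) // 2
        return [0.0] + [0.25*r[2*i-1] + 0.5*r[2*i] + 0.25*r[2*i+1]
                        for i in range(1, m)] + [0.0]

    def prolong(e, n_fine):
        """Linear interpolation of a coarse correction back to the fine grid."""
        out = [0.0] * n_fine
        for i in range(1, len(e) - 1):
            out[2*i] = e[i]
        for i in range(1, n_fine - 1, 2):
            out[i] = 0.5 * (out[i-1] + out[i+1])
        return out

    def two_grid_cycle(u, f, h):
        """One V-cycle on two levels: the coarse solve is a cheap Jacobi run."""
        u = jacobi(u, f, h, sweeps=3)                       # pre-smooth
        r_c = restrict(residual(u, f, h))                   # restrict residual
        e_c = jacobi([0.0] * len(r_c), r_c, 2*h, sweeps=50) # coarse correction
        u = [ui + ei for ui, ei in zip(u, prolong(e_c, len(u)))]
        return jacobi(u, f, h, sweeps=3)                    # post-smooth
    ```

    The coarse-grid equation is solved only approximately here; in practice the cycle recurses to still coarser grids, which is what makes the convergence rate essentially independent of the mesh size.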

  13. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  14. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
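    The splitting idea in this record (treat the stiff dynamics implicitly so stability no longer restricts the step size, while keeping the slow dynamics explicit and cheap) can be illustrated with the simplest IMEX additive Runge-Kutta method, first-order IMEX Euler, on a toy split ODE. This is a minimal sketch under assumed names; it is not the ARS/ARK schemes or the atmospheric model from the abstract.

    ```python
    # Illustrative sketch only: first-order IMEX Euler for the split ODE
    #     y' = f_E(y) + f_I(y),    f_I(y) = -lam * y  (stiff, implicit),
    # with the non-stiff term f_E advanced explicitly. A fully explicit
    # method would need dt < 2/lam for stability; the IMEX step does not.

    def imex_euler_step(y, dt, lam, f_explicit):
        """One IMEX Euler step: y_new = y + dt*f_E(y) - dt*lam*y_new.

        The implicit stage equation is linear in y_new here, so it is
        solved in closed form rather than with a Newton iteration.
        """
        return (y + dt * f_explicit(y)) / (1.0 + dt * lam)

    def integrate(y0, dt, t_end, lam, f_explicit):
        """Advance from t = 0 to t_end with a fixed step size dt."""
        y, t = y0, 0.0
        while t < t_end - 1e-12:
            y = imex_euler_step(y, dt, lam, f_explicit)
            t += dt
        return y
    ```

    For example, with `lam = 1000` and a constant explicit forcing `f_E(y) = 1`, the step size `dt = 0.1` is 50 times larger than the explicit stability limit of `2/lam = 0.002`, yet the IMEX iteration converges smoothly to the steady state `y = 1/lam`.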

  15. Is the size of the useful field of view affected by postural demands associated with standing and stepping?

    PubMed

    Reed-Jones, James G; Reed-Jones, Rebecca J; Hollands, Mark A

    2014-04-30

    The useful field of view (UFOV) is the visual area from which information is obtained at a brief glance. While studies have examined the effects of increased cognitive load on the visual field, no one has specifically looked at the effects of postural control or locomotor activity on the UFOV. The current study aimed to examine the effects of postural demand and locomotor activity on UFOV performance in healthy young adults. Eleven participants were tested on three modified UFOV tasks (central processing, peripheral processing, and divided-attention) while seated, standing, and stepping in place. Across all postural conditions, participants showed no difference in their central or peripheral processing. However, in the divided-attention task (reporting the letter in central vision and target location in peripheral vision amongst distracter items) a main effect of posture condition on peripheral target accuracy was found for targets at 57° of eccentricity (p=.037). The mean accuracy reduced from 80.5% (standing) to 74% (seated) to 56.3% (stepping). These findings show that postural demands do affect UFOV divided-attention performance. In particular, the size of the useful field of view significantly decreases when stepping. This finding has important implications for how the results of a UFOV test are used to evaluate the general size of the UFOV during varying activities, as the traditional seated test procedure may overestimate the size of the UFOV during locomotor activities. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  16. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  17. Common factor analysis versus principal component analysis: choice for symptom cluster research.

    PubMed

    Kim, Hee-Ju

    2008-03-01

    The purpose of this paper is to examine differences between two factor analytical methods and their relevance for symptom cluster research: common factor analysis (CFA) versus principal component analysis (PCA). Literature was critically reviewed to elucidate the differences between CFA and PCA. A secondary analysis (N = 84) was utilized to show the actual result differences from the two methods. CFA analyzes only the reliable common variance of data, while PCA analyzes all the variance of data. An underlying hypothetical process or construct is involved in CFA but not in PCA. PCA tends to increase factor loadings especially in a study with a small number of variables and/or low estimated communality. Thus, PCA is not appropriate for examining the structure of data. If the study purpose is to explain correlations among variables and to examine the structure of the data (this is usual for most cases in symptom cluster research), CFA provides a more accurate result. If the purpose of a study is to summarize data with a smaller number of variables, PCA is the choice. PCA can also be used as an initial step in CFA because it provides information regarding the maximum number and nature of factors. In using factor analysis for symptom cluster research, several issues need to be considered, including subjectivity of solution, sample size, symptom selection, and level of measure.
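    The core computational difference the record describes, PCA analyzing all the variance (1s on the diagonal of the correlation matrix) versus common factor analysis analyzing only shared variance (communality estimates on the diagonal), can be sketched numerically. The following toy comparison uses principal-axis factoring with a common initial communality estimate (each variable's largest absolute correlation); the matrix and helper names are made up for the demonstration and are not from the paper's secondary analysis.

    ```python
    # Illustrative sketch only: first-component loadings from PCA versus a
    # principal-axis common-factor extraction on a small correlation matrix.
    # Replacing the unit diagonal with communality estimates removes unique
    # variance from the analysis, so the factor loadings come out smaller,
    # matching the abstract's point that PCA tends to inflate loadings.

    def first_loadings(R, iters=200):
        """Loadings on the first component/factor of symmetric matrix R,
        via power iteration: loading_i = sqrt(eigenvalue) * eigvec_i."""
        n = len(R)
        v = [1.0] * n
        for _ in range(iters):
            w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        eig = sum(v[i] * sum(R[i][j] * v[j] for j in range(n)) for i in range(n))
        return [eig ** 0.5 * x for x in v]

    def with_communalities(R):
        """Copy of R whose diagonal holds initial communality estimates
        (each row's largest absolute off-diagonal correlation)."""
        n = len(R)
        out = [row[:] for row in R]
        for i in range(n):
            out[i][i] = max(abs(R[i][j]) for j in range(n) if j != i)
        return out

    R = [[1.0, 0.6, 0.5],
         [0.6, 1.0, 0.4],
         [0.5, 0.4, 1.0]]

    pca_loadings = first_loadings(R)                      # all variance
    paf_loadings = first_loadings(with_communalities(R))  # common variance only
    ```

    With only three variables, every PCA loading exceeds its factor-analytic counterpart, which is exactly the small-variable-set inflation the record warns about for symptom cluster research.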

  18. "What Is a Step?" Differences in How a Step Is Detected among Three Popular Activity Monitors That Have Impacted Physical Activity Research.

    PubMed

    John, Dinesh; Morton, Alvin; Arguello, Diego; Lyden, Kate; Bassett, David

    2018-04-15

    (1) Background: This study compared manually-counted treadmill walking steps from the hip-worn DigiwalkerSW200 and OmronHJ720ITC, and hip and wrist-worn ActiGraph GT3X+ and GT9X; determined brand-specific acceleration amplitude (g) and/or frequency (Hz) step-detection thresholds; and quantified key features of the acceleration signal during walking. (2) Methods: Twenty participants (Age: 26.7 ± 4.9 years) performed treadmill walking between 0.89-to-1.79 m/s (2-4 mph) while wearing a hip-worn DigiwalkerSW200, OmronHJ720ITC, GT3X+ and GT9X, and a wrist-worn GT3X+ and GT9X. A DigiwalkerSW200 and OmronHJ720ITC underwent shaker testing to determine device-specific frequency and amplitude step-detection thresholds. Simulated signal testing was used to determine thresholds for the ActiGraph step algorithm. Steps during human testing were compared using bias and confidence intervals. (3) Results: The OmronHJ720ITC was most accurate during treadmill walking. Hip and wrist-worn ActiGraph outputs were significantly different from the criterion. The DigiwalkerSW200 records steps for movements with a total acceleration of ≥1.21 g. The OmronHJ720ITC detects a step when movement has an acceleration ≥0.10 g with a dominant frequency of ≥1 Hz. The step-threshold for the ActiLife algorithm is variable based on signal frequency. Acceleration signals at the hip and wrist have distinctive patterns during treadmill walking. (4) Conclusions: Three common research-grade physical activity monitors employ different step-detection strategies, which causes variability in step output.
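    The two kinds of step-detection thresholds this study characterizes, an amplitude threshold on the acceleration excursion and a frequency gate on the movement's cadence, can be combined in a toy detector. The numeric thresholds below follow the Omron values reported in the abstract (>= 0.10 g amplitude with a dominant frequency >= 1 Hz), but the detection logic itself is a deliberately simplified sketch, not the vendor's actual algorithm.

    ```python
    # Illustrative sketch only: count a step on each rising crossing of an
    # amplitude threshold, but reject the whole window when the signal's
    # dominant frequency is below walking cadence. The mean-crossing
    # frequency estimate and the hysteresis-free peak logic are assumptions.

    def dominant_frequency(signal, fs):
        """Crude dominant-frequency estimate (Hz) from mean-crossings."""
        mean = sum(signal) / len(signal)
        crossings = sum(
            1 for a, b in zip(signal, signal[1:]) if (a - mean) * (b - mean) < 0
        )
        duration = len(signal) / fs
        return crossings / (2.0 * duration)  # two mean-crossings per cycle

    def count_steps(signal, fs, amp_threshold=0.10, freq_threshold=1.0):
        """Count threshold crossings, gated on cadence >= freq_threshold."""
        if dominant_frequency(signal, fs) < freq_threshold:
            return 0  # too slow to be walking: reject the window
        steps = 0
        above = False
        for a in signal:
            if not above and a >= amp_threshold:
                steps += 1  # rising edge through the amplitude threshold
                above = True
            elif above and a < amp_threshold:
                above = False
        return steps
    ```

    On a synthetic 2 Hz, 0.3 g sinusoid sampled at 50 Hz this counts one step per cycle, while the same waveform at 0.05 g (below the amplitude threshold) or at 0.4 Hz (below the cadence gate) yields zero, mirroring how different brand-specific thresholds produce different step outputs from identical movement.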

  19. “What Is a Step?” Differences in How a Step Is Detected among Three Popular Activity Monitors That Have Impacted Physical Activity Research

    PubMed Central

    John, Dinesh; Arguello, Diego; Lyden, Kate; Bassett, David

    2018-01-01

    (1) Background: This study compared manually-counted treadmill walking steps from the hip-worn DigiwalkerSW200 and OmronHJ720ITC, and hip and wrist-worn ActiGraph GT3X+ and GT9X; determined brand-specific acceleration amplitude (g) and/or frequency (Hz) step-detection thresholds; and quantified key features of the acceleration signal during walking. (2) Methods: Twenty participants (Age: 26.7 ± 4.9 years) performed treadmill walking between 0.89-to-1.79 m/s (2–4 mph) while wearing a hip-worn DigiwalkerSW200, OmronHJ720ITC, GT3X+ and GT9X, and a wrist-worn GT3X+ and GT9X. A DigiwalkerSW200 and OmronHJ720ITC underwent shaker testing to determine device-specific frequency and amplitude step-detection thresholds. Simulated signal testing was used to determine thresholds for the ActiGraph step algorithm. Steps during human testing were compared using bias and confidence intervals. (3) Results: The OmronHJ720ITC was most accurate during treadmill walking. Hip and wrist-worn ActiGraph outputs were significantly different from the criterion. The DigiwalkerSW200 records steps for movements with a total acceleration of ≥1.21 g. The OmronHJ720ITC detects a step when movement has an acceleration ≥0.10 g with a dominant frequency of ≥1 Hz. The step-threshold for the ActiLife algorithm is variable based on signal frequency. Acceleration signals at the hip and wrist have distinctive patterns during treadmill walking. (4) Conclusions: Three common research-grade physical activity monitors employ different step-detection strategies, which causes variability in step output. PMID:29662048

  20. 16 CFR 642.3 - Prescreen opt-out notice.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...
